Conversation

@Mangodadada
Contributor

PR types

others

PR changes

models

Description

Add flash_attention support to the chatglm_v2 model.
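
For context, a minimal sketch of how the new switch would typically be turned on from user code. This is not part of the PR's diff; the ChatGLMv2Config / ChatGLMv2ForCausalLM names, the "THUDM/chatglm2-6b" weights name, and the use_flash_attention field are assumed from PaddleNLP conventions.

from paddlenlp.transformers import ChatGLMv2Config, ChatGLMv2ForCausalLM

# Assumed flow: load the config, flip the flash-attention flag, then build the model.
config = ChatGLMv2Config.from_pretrained("THUDM/chatglm2-6b")
config.use_flash_attention = True  # the flag this PR wires into chatglm_v2/modeling.py

model = ChatGLMv2ForCausalLM.from_pretrained("THUDM/chatglm2-6b", config=config)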


paddle-bot bot commented Oct 22, 2024

Thanks for your contribution!


CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


huxinye does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.


codecov bot commented Oct 22, 2024

Codecov Report

Attention: Patch coverage is 45.00000% with 11 lines in your changes missing coverage. Please review.

Project coverage is 52.92%. Comparing base (76a118b) to head (a12cadc).
Report is 272 commits behind head on develop.

Files with missing lines                        Patch %    Lines
paddlenlp/transformers/chatglm_v2/modeling.py   45.00%     11 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9296      +/-   ##
===========================================
- Coverage    53.11%   52.92%   -0.19%     
===========================================
  Files          665      660       -5     
  Lines       109041   106857    -2184     
===========================================
- Hits         57918    56555    -1363     
+ Misses       51123    50302     -821     


self.hidden_size_per_attention_head,
]
)

Contributor

This reshape logic should not be added here; it breaks the existing non-FA2 path. Besides, the sequence_parallel support further down reshapes again.
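
Roughly the structure being asked for, as an illustrative sketch only: it is not the actual modeling.py code, and the function name and the [seq_len, batch, num_heads, head_dim] layout are assumptions.

import paddle

def split_heads(query_layer, key_layer, value_layer, use_flash_attention):
    # Sketch of the requested layering: the default (non-FA2) path keeps the
    # original [seq_len, batch, num_heads, head_dim] layout untouched, and any
    # FA-specific transpose lives only inside the flash-attention branch.
    if use_flash_attention:
        # flash-attention kernels expect [batch, seq_len, num_heads, head_dim]
        query_layer, key_layer, value_layer = (
            paddle.transpose(x, [1, 0, 2, 3]) for x in (query_layer, key_layer, value_layer)
        )
    return query_layer, key_layer, value_layer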

Contributor Author

Modified as requested.

)
version_check = False
if self.config.use_flash_attention and version_check:
    attention_mask = attention_mask
Contributor

The qkv reshape can go under if config.use_flash_attention, and it also needs to take sequence parallel into account.
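
Again for context, a hedged sketch of the suggested shape of the change; the helper name, config fields, and tensor shapes are assumptions, not the merged code.

import paddle

def maybe_reshape_qkv(qkv, config, num_heads, head_dim):
    # Sketch only: the reshape is guarded by use_flash_attention, and the
    # sequence-parallel case gets its own branch instead of being reshaped twice.
    if not config.use_flash_attention:
        return qkv  # non-FA2 path: original logic stays as it was
    if getattr(config, "sequence_parallel", False):
        # tokens are split across ranks, so infer the local sequence length
        return paddle.reshape(qkv, [-1, num_heads, head_dim])
    # dense case: 0 keeps the corresponding original dim in paddle.reshape
    return paddle.reshape(qkv, [0, 0, num_heads, head_dim])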

Contributor Author

Already modified as requested.

Contributor

@lugimzzz lugimzzz left a comment

LGTM

@wawltor wawltor merged commit 2993974 into PaddlePaddle:develop Oct 28, 2024
6 of 12 checks passed