
Enable context parallelism in SFT #190

Open · slimfrkha wants to merge 2 commits into main

Conversation

@slimfrkha (Author)

Solves #189.

On the fixed branch, the lm loss curves for cp=2 and cp=1 are very similar, unlike on the main branch.

[Screenshots: lm loss curves for cp=1 and cp=2 on the fixed and main branches, Dec 24, 2024]
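
For context on why the cp=2 curve can match cp=1: under context parallelism each rank computes the loss only over its own sequence shard, so the masked loss sum and the token count must be reduced across the context-parallel group before averaging. A minimal sketch of that reduction, loosely modeled on the loss function in Megatron-LM's pretrain_gpt (names and exact shapes here are illustrative, not the exact upstream code):

```python
import torch
import torch.distributed as dist
from megatron.core import mpu  # Megatron-LM parallel-state helpers


def loss_func(loss_mask: torch.Tensor, output_tensor: torch.Tensor) -> torch.Tensor:
    """Average LM loss over all tokens of the full (unsharded) sequence.

    Each CP rank sees only a sequence shard, so both the masked loss sum
    and the token count are all-reduced over the CP group before dividing;
    cp=1 and cp=2 then yield the same scalar loss.
    """
    losses = output_tensor.float().view(-1)
    loss_mask = loss_mask.view(-1).float()
    local = torch.cat([
        (losses * loss_mask).sum().view(1),  # masked loss sum on this shard
        loss_mask.sum().view(1),             # token count on this shard
    ])
    dist.all_reduce(local, group=mpu.get_context_parallel_group())
    return local[0] / local[1]
```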

@CLAassistant commented Dec 24, 2024

CLA assistant check: all committers have signed the CLA.

@haolin-nju (Collaborator) left a comment

Good first contribution! Code looks good to me in general. However, I can't find lm_loss on fixed_cp1. Could you please attach it to the description?

BTW, could you please provide MT-Bench results for supervised fine-tuning of the Llama2-7B base model with CP enabled and disabled? We believe the MT-Bench result will help verify the robustness of the PR. (You can find related instructions in https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/README.md. For data preparation and preprocessing, refer to https://code.alibaba-inc.com/torchx/rlhf/blob/master/docs/en/tutorial/data.md. Ideally, the MT-Bench results should align with those in https://github.com/alibaba/ChatLearn/blob/main/docs/en/tutorial/tutorial_llama2.md#evaluation.) If you have any other questions, please feel free to contact us. We are always glad to help ;)

@@ -81,6 +87,10 @@ def model_provider(pre_process=True, post_process=True):

def get_batch(data_iterator):
    """Generate a batch"""

    if (not mpu.is_pipeline_first_stage()) and (not mpu.is_pipeline_last_stage()):
@haolin-nju (Collaborator):

this line of code can be simplified
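
A likely candidate (a sketch; the condition collapses via De Morgan's law, assuming the standard Megatron-style mpu helpers):

```python
# Before: two negations joined by `and`.
if (not mpu.is_pipeline_first_stage()) and (not mpu.is_pipeline_last_stage()):
    ...

# After: equivalent single negation via De Morgan's law.
if not (mpu.is_pipeline_first_stage() or mpu.is_pipeline_last_stage()):
    ...
```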

@slimfrkha (Author) commented Dec 25, 2024

> However, I can't find lm_loss on fixed_cp1. Could you please attach it to the description?
>
> BTW, could you please provide MT-Bench results [...]?

About fixed_cp1: its curve sits exactly behind buggy_cp1's in the plot, since the fix only changes behavior for cp > 1.

[Screenshot: lm loss curves, fixed_cp1 overlapping buggy_cp1, Dec 25, 2024]

BTW, the code is roughly copy-pasted from pretrain_gpt in Megatron-LM. Nothing fancy about it.
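
For readers who don't have Megatron-LM at hand, the relevant upstream idea is roughly the following (a from-memory sketch of the get_batch_on_this_cp_rank-style slicing, not the exact upstream code): each tensor in the batch is split along the sequence dimension into 2 * cp_size chunks, and CP rank r keeps chunks r and 2 * cp_size - 1 - r, which balances the causal-attention workload across ranks.

```python
import torch


def get_batch_on_this_cp_rank(batch: dict, cp_size: int, cp_rank: int) -> dict:
    """Keep only this context-parallel rank's sequence shard of each tensor.

    With causal attention, later tokens attend to more keys, so the sequence
    is split into 2 * cp_size chunks and rank r keeps chunks r and
    (2 * cp_size - 1 - r) to balance compute across ranks.
    """
    if cp_size == 1:
        return batch
    for key, val in batch.items():
        if val is None:
            continue
        # The attention mask is (b, 1, s, s); everything else is (b, s, ...).
        seq_dim = 2 if key == "attention_mask" else 1
        # Split the sequence dimension into 2 * cp_size equal chunks.
        val = val.view(
            *val.shape[:seq_dim],
            2 * cp_size,
            val.shape[seq_dim] // (2 * cp_size),
            *val.shape[seq_dim + 1:],
        )
        # Pick this rank's head chunk and its mirrored tail chunk.
        index = torch.tensor([cp_rank, 2 * cp_size - 1 - cp_rank], device=val.device)
        val = val.index_select(seq_dim, index)
        # Flatten the two chunks back into a single (shorter) sequence dim.
        batch[key] = val.view(*val.shape[:seq_dim], -1, *val.shape[seq_dim + 2:])
    return batch


if __name__ == "__main__":
    tokens = torch.arange(16).view(2, 8)  # (b=2, s=8)
    shard = get_batch_on_this_cp_rank({"tokens": tokens}, cp_size=2, cp_rank=0)
    print(shard["tokens"])  # rank 0 keeps chunks 0 and 3: columns 0,1,6,7
```

In the SFT get_batch, this slicing would run after tokens/labels/loss_mask/position_ids are built, so every CP rank trains on its own sequence shard.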

@haolin-nju (Collaborator) commented Dec 26, 2024

> About fixed_cp1: its curve sits exactly behind buggy_cp1's in the plot. [...]
>
> BTW, the code is roughly copy-pasted from pretrain_gpt in Megatron-LM. Nothing fancy about it.

On one hand, we have to review the relevant license and make sure everything is in order when code is copied from another open-source repo. On the other hand, evaluating MT-Bench is necessary (or at least one of the necessary TODOs) to guarantee performance and reproducibility in ChatLearn. It will therefore take some time for us to go through all of these processes before merging this PR. We would appreciate it if you could provide the MT-Bench result on this PR, since that lets us double-check it in our regression tests. Again, thanks a lot for the contribution~!
