[train] support target_parameters #5340


Merged: 6 commits into modelscope:main on Aug 18, 2025

Conversation

Jintao-Huang (Collaborator)

No description provided.

gemini-code-assist bot (Contributor) left a comment:


Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented support for a new target_parameters argument in the LoRA training configuration. This enhancement allows users to specify which model parameters should be targeted for LoRA adaptation directly by their names, offering an alternative to the existing target_modules and target_regex options. This change improves flexibility in fine-tuning large language models.

Highlights

  • LoRA Configuration: Introduced target_parameters to LoraConfig, enabling LoRA application based on parameter names (a usage sketch follows this list).
  • Documentation: Updated Chinese and English documentation files to describe the new target_parameters argument and its usage, including the peft version requirement.
  • Dependency Management: Bumped the peft dependency version in requirements/framework.txt to allow versions up to <0.18, which is necessary for the new target_parameters feature.
  • Codebase Integration: Integrated the target_parameters argument across relevant swift modules, including argument parsing and adapter preparation logic for both standard and Megatron training setups.
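
For context, the snippet below is a minimal sketch (not code from this PR) of the peft>=0.17.0 API that the new argument maps onto; the parameter names are hypothetical placeholders and must match real parameter names in the model being tuned.

```python
# Minimal sketch of peft's target_parameters (requires peft>=0.17.0).
# The parameter names below are illustrative placeholders, e.g. fused MoE expert
# weights stored as plain nn.Parameter tensors rather than nn.Linear modules.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    # Select nn.Parameter objects by name suffix instead of whole modules,
    # which target_modules / target_regex cannot reach.
    target_parameters=["mlp.experts.gate_up_proj", "mlp.experts.down_proj"],
)
# The config is then applied as usual, e.g. model = get_peft_model(model, lora_config).
```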

gemini-code-assist bot (Contributor) left a comment:


Code Review

This pull request introduces support for the target_parameters argument in LoRA training, which allows for more precise control over which parameters are adapted. The changes are well-implemented across the documentation, requirements, and both the standard and Megatron training paths. The peft dependency has been correctly updated to support this new feature. I have one minor suggestion to improve type hint consistency.

@@ -108,6 +108,7 @@ class TunerArguments:
# tuners
target_modules: List[str] = field(default_factory=lambda: ['all-linear'])
target_regex: Optional[str] = None
target_parameters: Optional[list[str]] = None
gemini-code-assist bot commented (severity: medium):

For consistency with other type hints in this file, such as target_modules and modules_to_save, it's better to use List[str] from the typing module instead of the built-in list[str]. This will ensure uniformity across the codebase.

Suggested change
target_parameters: Optional[list[str]] = None
target_parameters: Optional[List[str]] = None

Jintao-Huang (Collaborator, Author):

/gemini review

gemini-code-assist bot (Contributor) left a comment:


Code Review

This pull request adds support for the target_parameters argument in LoRA training, allowing users to specify LoRA targets by parameter names. This is achieved by updating the peft dependency, adding the new argument to TunerArguments, and passing it to the PEFT library during training setup. The documentation has also been updated accordingly. Additionally, this PR includes a fix for determining the tensor parallelism size in Megatron, which improves support for MoE models.

The changes are well-implemented. I have a couple of suggestions to improve the clarity of the documentation for the new target_parameters argument by explicitly stating its precedence over target_modules and target_regex.

@@ -214,6 +214,7 @@
- 🔥target_modules: Specifies the LoRA modules; defaults to `['all-linear']`. You can also set it to module-name suffixes, e.g. `--target_modules q_proj k_proj v_proj`. This argument is not limited to LoRA and can be used with other tuners.
- Note: 'all-linear' behaves differently for LLMs and multimodal LLMs. For an LLM, it automatically finds every linear layer except lm_head and attaches the tuner; for a multimodal LLM, it attaches the tuner only to the LLM component by default, and this behavior can be controlled via `freeze_llm`, `freeze_vit`, and `freeze_aligner`.
- 🔥target_regex: Specifies a regex expression for the LoRA modules; defaults to `None`. If this value is provided, the target_modules argument is ignored. This argument is not limited to LoRA and can be used with other tuners.
- target_parameters: A list of parameter names to be replaced with LoRA. This argument behaves similarly to `target_modules`, but parameter names should be passed instead. This feature requires "peft>=0.17.0".
gemini-code-assist bot commented (severity: medium):

To clarify parameter precedence, it's better to state that target_parameters is mutually exclusive with and has higher priority than target_modules and target_regex.

Suggested change
- target_parameters: A list of parameter names to be replaced with LoRA. This argument behaves similarly to `target_modules`, but parameter names should be passed instead. This feature requires "peft>=0.17.0".
- target_parameters: A list of parameter names to be replaced with LoRA. If provided, `target_modules` and `target_regex` will be ignored. This feature requires "peft>=0.17.0".

@@ -218,6 +218,7 @@ Other important parameters:
- 🔥 target_modules: Specifies the LoRA modules. The default is `['all-linear']`, but you can also pass layer-name suffixes, e.g. `--target_modules q_proj k_proj v_proj`. This argument is not restricted to LoRA and can be used with other tuners as well.
- Note: The behavior of the special value `'all-linear'` differs between plain LLMs and multimodal LLMs. For a standard LLM, it automatically locates every linear layer except `lm_head` and attaches a tuner. For a multimodal LLM, it attaches the tuner only to the LLM component by default. This default can be changed with the `freeze_llm`, `freeze_vit`, and `freeze_aligner` options.
- 🔥target_regex: Specifies a regex expression for LoRA modules, with a default of `None`. If this value is provided, the target_modules parameter becomes ineffective. This parameter is not limited to LoRA and can be used for other tuners.
- target_parameters: List of parameter names to be replaced with LoRA. This argument behaves similarly to target_modules, but you should pass parameter names instead. This feature requires "peft>=0.17.0".
gemini-code-assist bot commented (severity: medium):

To more clearly explain the parameter precedence, I suggest modifying the description here. target_parameters is mutually exclusive with and has higher priority than target_modules and target_regex.

Suggested change
- target_parameters: List of parameter names to be replaced with LoRA. This argument behaves similarly to target_modules, but you should pass parameter names instead. This feature requires "peft>=0.17.0".
- target_parameters: List of parameter names to be replaced with LoRA. If this is provided, `target_modules` and `target_regex` will be ignored. This feature requires "peft>=0.17.0".
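
To make the suggested precedence concrete, here is a hypothetical sketch (not the actual swift adapter-preparation code) of how arguments like these could be forwarded to peft, with target_parameters taking priority over target_modules and target_regex as the review proposes:

```python
# Hypothetical sketch only; the field names mirror the TunerArguments diff above,
# but the forwarding logic is an assumption, not swift's real implementation.
from dataclasses import dataclass, field
from typing import List, Optional

from peft import LoraConfig


@dataclass
class TunerArguments:
    target_modules: List[str] = field(default_factory=lambda: ['all-linear'])
    target_regex: Optional[str] = None
    target_parameters: Optional[List[str]] = None  # requires peft>=0.17.0


def build_lora_config(args: TunerArguments) -> LoraConfig:
    if args.target_parameters is not None:
        # Highest priority: target individual nn.Parameter objects by name.
        return LoraConfig(target_parameters=args.target_parameters)
    if args.target_regex is not None:
        # peft accepts a regex string for target_modules.
        return LoraConfig(target_modules=args.target_regex)
    # The special 'all-linear' default would be resolved to concrete linear-layer
    # names upstream (see the documentation notes above) before reaching peft.
    return LoraConfig(target_modules=args.target_modules)
```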

Jintao-Huang merged commit 643b609 into modelscope:main on Aug 18, 2025 (1 of 2 checks passed).
Jintao-Huang added a commit that referenced this pull request Aug 18, 2025
Labels: None yet
Projects: None yet
2 participants