For multi-task training in PyTorch, each data source has its own dataloader. If each dataloader uses a large `num_workers`, the total number of worker processes (number of tasks × `num_workers`) can stress the CPU.

## Summary by CodeRabbit

- **Performance Optimization**
  - Adjusted the default maximum worker configuration from 8 to 4 CPUs
  - Reduced the potential parallel-processing resources for the environment
- **Documentation**
  - Updated documentation to reflect the change in the default value of `NUM_WORKERS` from 8 to 4

---------

Signed-off-by: Chun Cai <[email protected]>
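The sketch below, which is not part of the commit, illustrates how worker processes multiply when each task gets its own dataloader; `NUM_TASKS`, the synthetic datasets, and the batch size are assumed for illustration, and the cap of 4 mirrors the new default:

```python
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

NUM_TASKS = 8  # hypothetical number of data sources / tasks

# Cap per-loader workers (the default changed from 8 to 4) so the total
# process count stays close to the available CPUs.
NUM_WORKERS = min(4, max(1, (os.cpu_count() or 1) // NUM_TASKS))

# One DataLoader per task: with T tasks and W workers each, iterating all
# loaders spawns T * W worker processes in total.
loaders = [
    DataLoader(
        TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,))),
        batch_size=32,
        num_workers=NUM_WORKERS,  # workers are spawned when iteration starts
    )
    for _ in range(NUM_TASKS)
]

print(f"potential worker processes: {NUM_TASKS * NUM_WORKERS}")
```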