Single GPU vs multiple GPUs stack (parallel) #626

Open
fdm-git opened this issue Mar 22, 2024 · 0 comments
Comments

fdm-git commented Mar 22, 2024

Hi there and first of all thanks for this great tool!

I was wondering if you could provide any feedback on running a single GPU (e.g. an RTX 4090 24GB) versus a stack of four GPUs (e.g. 4x RTX 4060 Ti 16GB).

In the end, the combined Tensor and CUDA core count of a 4x 4060 Ti stack roughly matches that of a single 4090, and the upside is that the 4x 4060 Ti 16GB stack offers a total of 64GB of VRAM instead of 24GB.

What I can't tell is whether the memory bandwidth of the 4x 4060 Ti stack will be a bottleneck compared to a single 4090.

Any feedback will be appreciated, thanks!
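For a back-of-the-envelope comparison, the public spec-sheet numbers can be put side by side. This is a rough sketch only: the figures are taken from NVIDIA's published specs (and may vary by board partner), and "aggregate" bandwidth is a theoretical sum, not a guaranteed speedup.

```python
# Rough spec-sheet comparison (figures assumed from NVIDIA's public specs;
# real-world throughput depends on the workload, how the model is split,
# and PCIe/interconnect overhead).
specs = {
    "RTX 4090":         {"cuda_cores": 16384, "vram_gb": 24, "bandwidth_gb_s": 1008},
    "RTX 4060 Ti 16GB": {"cuda_cores": 4352,  "vram_gb": 16, "bandwidth_gb_s": 288},
}

count = 4
stack = {k: v * count for k, v in specs["RTX 4060 Ti 16GB"].items()}
single = specs["RTX 4090"]

print(f"4x 4060 Ti: {stack['cuda_cores']} cores, {stack['vram_gb']} GB VRAM, "
      f"{stack['bandwidth_gb_s']} GB/s aggregate bandwidth")
print(f"1x 4090:    {single['cuda_cores']} cores, {single['vram_gb']} GB VRAM, "
      f"{single['bandwidth_gb_s']} GB/s bandwidth")
```

The catch is that the aggregate bandwidth is only realized when the model is sharded so each GPU streams its own weights; any single shard is still limited to one card's 288 GB/s, so bandwidth-bound work that lands on one GPU can still run slower than on a 4090.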

@fdm-git fdm-git changed the title Single GPU vs multiple GPU stack (parallel) Single GPU vs multiple GPUs stack (parallel) Mar 22, 2024