Hi there and first of all thanks for this great tool!
I was wondering if you could provide any feedback on using a single RTX 4090 (24 GB) vs. 4x RTX 4060 Ti (16 GB each).
In the end, the combined tensor and CUDA core count of a 4x 4060 Ti stack roughly matches that of a single 4090, with the added benefit that the 4x 4060 Ti 16 GB stack gives 64 GB of VRAM in total instead of 24 GB.
I can't tell whether the memory bandwidth of the 4x 4060 Ti stack would become a bottleneck compared to a single 4090.
Any feedback would be appreciated, thanks!
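For a rough sense of the trade-off, here is a back-of-the-envelope sketch comparing the two setups. The CUDA core, VRAM, and bandwidth figures are assumptions taken from NVIDIA's public spec sheets, and the script only aggregates them; it does not model PCIe/interconnect overhead or how a given framework actually splits the work across cards.

```python
# Back-of-the-envelope comparison: 1x RTX 4090 vs. 4x RTX 4060 Ti 16GB.
# Numbers below are assumed from public spec sheets (CUDA cores, VRAM, memory bandwidth).
# In tensor-parallel setups each card still streams its own weight shard from its
# own local memory, so per-GPU bandwidth matters as much as the aggregate figure.

specs = {
    "RTX 4090":         {"cuda_cores": 16384, "vram_gb": 24, "bandwidth_gbps": 1008},
    "RTX 4060 Ti 16GB": {"cuda_cores": 4352,  "vram_gb": 16, "bandwidth_gbps": 288},
}

def stack(name: str, count: int) -> dict:
    """Aggregate spec-sheet numbers for `count` identical GPUs."""
    s = specs[name]
    return {
        "cuda_cores_total":         s["cuda_cores"] * count,
        "vram_gb_total":            s["vram_gb"] * count,
        "bandwidth_gbps_aggregate": s["bandwidth_gbps"] * count,
        "bandwidth_gbps_per_gpu":   s["bandwidth_gbps"],
    }

print("1x 4090    :", stack("RTX 4090", 1))
print("4x 4060 Ti :", stack("RTX 4060 Ti 16GB", 4))
# Core count and aggregate bandwidth land in the same ballpark, but each 4060 Ti
# only has ~288 GB/s locally (vs. ~1008 GB/s on the 4090) and the shards have to
# synchronize over PCIe, so how much of the aggregate is usable depends on the workload split.
```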
fdm-git changed the title from "Single GPU vs multiple GPU stack (parallel)" to "Single GPU vs multiple GPUs stack (parallel)" on Mar 22, 2024.