Comments

Submitted by byrn on Mon, 2008-06-16 17:33

OK, a short summary:
nVidia have crammed nearly twice the processing power (depending on workload, performance is somewhere between 125% and 185%) into a single, monolithic chip. Doubled memory bandwidth also helps.

However, these chips are huge. They're still on the 65nm process (for comparison, most new CPUs use 45nm, and ATi are making chips on 55nm).

Huge chips = fewer per wafer (a 30cm diameter sliver of silicon). In the semiconductor industry, cost per wafer is fairly static, regardless of what you get out of it. So while Intel gets around 2500 Atom processors out of a wafer, nVidia gets a maximum of just over 100 G200s.
This translates directly into chip cost.
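As a back-of-envelope check on those numbers, here's a rough dies-per-wafer calculation. The die areas (roughly 576 mm² for G200, roughly 25 mm² for Atom) and the flat wafer cost are my own ballpark assumptions rather than figures from the reviews, so it lands in the same neighbourhood as the counts above rather than exactly on them:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross dies per wafer: wafer area / die area, minus a
    correction for partial dies lost around the edge. Ignores defects
    and yield entirely."""
    radius = wafer_diameter_mm / 2.0
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Die sizes and wafer cost are ballpark assumptions, not figures from the
# reviews: GT200 is reported at roughly 576 mm^2, Atom at roughly 25 mm^2,
# and the wafer cost is a made-up flat $5000.
WAFER_COST = 5000.0

for name, area in [("G200", 576.0), ("Atom", 25.0)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies per wafer, ~${WAFER_COST / n:.0f} of silicon per die")
```

Even with generous assumptions, the silicon cost per die differs by well over an order of magnitude, which is the point.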
Also, they suck a lot of power: around 220W each. Anandtech's test rig runs a 1kW PSU, and it was not capable of running two 280s in SLI...

Performance-wise, the 260 is around as fast as a pair of 8800GTs, and the 280 is around as fast as a pair of 9800GX2s (with higher variance: sometimes much faster, sometimes much slower).
Unfortunately, they're more expensive than the alternatives listed above.
This is all with the release drivers; performance will no doubt increase a little with time. The new cards also do relatively better where higher resolutions and AA come into play.

There are also some enhancements to improve CUDA performance (nVidia's general-purpose computing on GPUs platform), but how useful that is depends on software support. It is cool, though.
Basically, unless you want to future-proof: if you already have an SLI-capable mobo, I'd go with the alternatives. If you only have a single PCI-E x16 slot, they might be worth a look. On a new build, avoiding the premium for an SLI mobo might close the cost gap enough to make them interesting.

I do not guarantee that I will follow my own advice ;)
I thought it was fascinating that this thing has 1.4 billion transistors and is the largest chip ever fabbed by TSMC.

As to whether it's worth going for, my reading was similar to yours: pricey for what it is, and probably too much to justify the performance gain. The 9800GX2 is fairly affordable these days, so it doesn't seem to offer much beyond an improved cooling solution and latest-and-greatest bragging rights.
I suspect Nvidia will spin a 55nm or 45nm version as soon as they can to cut costs, but I doubt that will be released till they think they've milked this one sufficiently.

Still, it will be something to consider on the next upgrade, which might well be in these parts' time frame.
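On the shrink speculation above, a minimal sketch of the idealised die-area arithmetic, reusing the rough ~576 mm² figure assumed earlier (real shrinks rarely scale this cleanly, so treat it as an upper bound on the gain):

```python
# Ideal area scaling on a process shrink, assuming the same rough
# ~576 mm^2 die size used above and a perfect linear shrink.
DIE_65NM_MM2 = 576.0  # ballpark assumption, not an official figure

for node_nm in (55, 45):
    shrunk = DIE_65NM_MM2 * (node_nm / 65.0) ** 2
    print(f"{node_nm}nm: ~{shrunk:.0f} mm^2, roughly "
          f"{DIE_65NM_MM2 / shrunk:.1f}x the dies per wafer")
```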