XFX Abandons GeForce GTX 400 Series
XFX is getting cozier with AMD by the day, much to NVIDIA's displeasure. Amidst the launch of the GeForce GTX 400 series, XFX did what would have been unimaginable a few months ago: sit out NVIDIA's high-end GPU launch. That's right, XFX has decided against making and selling GeForce GTX 480 and GeForce GTX 470 graphics cards, saying that it favours high-end GPUs from AMD instead. This comes even as XFX appeared to have its own product art ready. Apart from making new non-reference design SKUs for pretty much every Radeon HD 5000 series GPU, the company is working on even more premium graphics cards aimed squarely at NVIDIA's high-end GPUs.
The rift between XFX and NVIDIA became quite apparent when XFX outright bashed NVIDIA's high-end lineup in a recent press communication about a new high-end Radeon-based graphics card it is designing. "XFX have always developed the most powerful, versatile Gaming weapons in the world - and have just stepped up to the gaming plate and launched something spectacular that may well literally blow the current NVIDIA offerings clean away," adding "GTX480 and GTX470 are upon us, but perhaps the time has come to Ferm up who really has the big Guns." The move may disappoint some potential buyers of the GTX 400 series, as XFX's popular Double Lifetime Warranty scheme will be missed. XFX, however, maintains that it may choose to work on lower-end Fermi derivatives.
Preview of Nvidia's GeForce GTX 480 and 470 graphics processors:
Conclusions
These two new GeForces draw more power, generate more heat and noise, and have higher price tags than the closest competing Radeons, but they're not substantially faster at running current games. For many, that will be the brutal bottom line on the GeForce GTX 470 and 480. Given the complexity and the rich feature sets of modern graphics processors, that hardly seems fair, but the GF100 is facing formidable competition that made it to market first and is clearly more efficient in pretty much every way that matters. The GF100's major contribution to real-time graphics, beyond the DirectX 11 features that its competitor also possesses, is an increased geometry processing facility that has little value for today's games and questionable value for tomorrow's. As a graphics geek, I can't help but admire this aspect of the GF100, but I think it will be difficult for gamers to appreciate for quite some time—perhaps throughout the useful life of these graphics cards.
Then again, given the supply problems and inflated prices that we've seen in the graphics card market over the past six months, we're just glad to see Nvidia back at the table. Even if the value propositions on the GeForce GTX 470 and 480 aren't spectacular, they're a darn sight better than zero competition for AMD.
Also, I suspect some folks will still find these graphics cards attractive for a host of pretty decent reasons. If consumer-level GPU computing takes off, the GF100 may be the GPU to have. We haven't formally tested its compute prowess against the latest Radeons, but the GTX 480's exceptional performance in 3DMark's cloth and particle simulations is a positive indicator. Nvidia is also constantly pushing on initiatives that can give its GPUs an exclusive advantage over any competitor, whether it be games with advanced PhysX effects or 3D Vision—or just plain old driver solidity and instant compatibility with the newest games, an area where I still think Nvidia has an edge on AMD.
Then again, AMD seems to be making inroads with game developers, which is what happens when you're first to market with a whole family of DX11 GPUs.
For what it's worth, the GF100 may not be a disappointment in all markets. With its geometry processing throughput, it should make a fantastic Quadro workstation graphics card. GF100-based Tesla cards could still succeed in the realm of dedicated GPU computing, too. The Fermi architecture really is ahead of any of its competitors there for a number of reasons that can't be ignored, and the question now is whether Nvidia can build a considerable business around it. The firm seemed to be expecting huge progress in this regard when it revealed the first details of this architecture to us.
We're curious to see how good a graphics chip this generation of Nvidia's technology could make when it's stripped of all the extra fat needed to serve other markets: the extensive double-precision support, ECC, fairly large caches, and perhaps two or three of its raster units. You don't need any of those things to play games—or even to transcode video on a GPU. A leaner, meaner mid-range variant of the Fermi architecture might make a much more attractive graphics card, especially if Nvidia can get some of the apparent chip-level issues worked out and reach some higher clock speeds.