Programming

Can 50-core Xeon Phi's x86 Architecture Best Nvidia's Massive GPUs?

Submitted by Anonymous Coward
An anonymous reader writes "Nvidia's massively parallel GPUs are being harnessed by an increasing number of supercomputer makers to boost their performance, but at the cost of using a proprietary instruction set that was not designed for general-purpose computing. Now that Intel is releasing its own x86-based massively parallel processor--the Xeon Phi--the supercomputer community will have a choice to make: use Intel's x86 parallel processing tools to create their supercomputer applications, or rewrite their applications to make use of Nvidia's GPUs and proprietary instructions. The verdict won't be in on which is best for several years, but I'm hoping to stimulate the programming community to start debating the pros and cons now, so that by the time Intel starts shipping its 50-core Xeon Phi this fall we will have enough data points to make an informed decision. What's your take on Intel's versus Nvidia's approach to supercomputing?"
Link to Original Source
This discussion was created for logged-in users only, but has now been archived. No new comments can be posted.

