An anonymous reader writes: Nvidia's massively parallel GPUs are being harnessed by an increasing number of supercomputer makers to boost performance, but at the cost of adopting a proprietary instruction set that was not designed for general-purpose computing. Now that Intel is releasing its own x86-based massively parallel processor — the Xeon Phi — the supercomputer community will have a choice to make: use Intel's x86 parallel-processing tools to build their supercomputer applications, or rewrite those applications to run on Nvidia's GPUs and their proprietary instruction set. The verdict on which is best won't be in for several years, but I'm hoping to get the programming community debating the pros and cons now, so that by the time Intel starts shipping its 50-core Xeon Phi this fall we'll have enough data points to make an informed decision. What's your take on Intel's versus Nvidia's approach to supercomputing?