disCERNing Data Analysis 82
technodummy writes: "Wired is reporting on how CERN is driving the Linux-based, EU-funded DataGRID project. And no, they say, it's nothing like Seti@Home. The project's site describes the objective: 'The objective is to enable next generation scientific exploration which requires intensive computation and analysis of shared large-scale databases, from hundreds of TeraBytes to PetaBytes, across widely distributed scientific communities.'" If you're interested in this, check out the Fermilab work with LinuxNetworkX, as well as the all-powerful Google search on the Fermi Collider Linux project. As jamie points out, "Colliders produce *amazing* amounts of data in *amazingly* short time periods... on the order of 'here's a gigabyte, you have 10 milliseconds to pull whatever's valuable out of it before the next gigabyte arrives.'"
Storage to the rescue (Score:2, Insightful)
...or just write it all to disk as it comes in and analyze it later. That's how most other science gets done. Since when is scientific analysis "real-time"?
In general, the scientific process doesn't require conclusions during an experiment. I think CERN should cite a different reason for this project; there are plenty of valid ones.
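Whether "write it all and analyze later" actually scales can be estimated from the rate quoted in the story. The sketch below reuses the 1 GB / 10 ms figure; the filter fraction is purely illustrative, not a real trigger rate:

```python
# Raw rate from the story: 1 GB per 10 ms.
raw_rate_bps = 1e9 / 10e-3  # bytes per second

day_s = 24 * 3600
raw_per_day_pb = raw_rate_bps * day_s / 1e15
print(f"Store everything: {raw_per_day_pb:.2f} PB/day")  # 8.64 PB/day

# With an online filter keeping, say, 0.1% of the stream
# (an assumed fraction for illustration only):
keep_fraction = 1e-3
kept_per_day_tb = raw_rate_bps * day_s * keep_fraction / 1e12
print(f"After filtering: {kept_per_day_tb:.2f} TB/day")  # 8.64 TB/day
```

The point of the comparison: storing the raw stream costs petabytes per day, while filtering in real time brings it down to something a distributed archive can plausibly hold, which suggests why the real-time requirement isn't optional.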
Grid computing? (Score:3, Insightful)
Yes, if you can't invent an idea, rename an existing one and maybe you'll get some credit. What the hell, it's worked before [slashdot.org].
Oh well. More power to them. It looks like a great opportunity for the world to learn that Linux is a powerful tool [extremelinux.org].
Re:Storage to the rescue (Score:2, Insightful)