Red Hat Acquires Inktank, Ceph Maintainers
An anonymous reader writes "Red Hat announced their pending acquisition of Inktank this morning. Sage Weil and a team of researchers at the University of California, Santa Cruz first published the architecture in 2007. Sage joined DreamHost after college and continued development on Ceph until DreamHost spun off Inktank, a company focused solely on Ceph. In Sage's blog post on the acquisition, he says 'In particular, joining forces with the Red Hat team will improve our ability to address problems at all layers of the storage stack, including in the kernel.' Sage goes on to announce that Inktank's proprietary management tools for Ceph will now be open sourced, citing Red Hat's pure open source development and business models.
Ceph has seen wide adoption in OpenStack customer deployments, alongside Red Hat's existing Gluster system." Ceph looks pretty cool if you're doing serious storage: CERN has a 3-petabyte "prototype" cluster in use now (only tangentially related, but still interesting, is how CERN does storage in general).
Re:How do you back up Ceph? (Score:5, Informative)
(Inktank community guy here)
There are a number of different options for backup/disaster recovery solutions with Ceph, depending on what piece(s) of the platform you are using. For instance, the object gateways (think S3) from multiple clusters can be plugged together for multi-site replication. The CephFS and block device portions both have snapshotting built in that can be replicated offsite.
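The snapshot-based approach for block devices can be sketched roughly like this, using the standard `rbd snap create` / `export-diff` / `import-diff` commands. The pool and image names (`rbd/vm-disk`) and the remote cluster alias (`backup`) are hypothetical placeholders; this version only prints the commands (a dry run), since it assumes no live cluster is available.

```shell
#!/bin/sh
# Dry-run sketch of an incremental offsite backup for one RBD image.
# Replace `echo` in run() with nothing to execute against a real cluster.
POOL=rbd            # hypothetical pool name
IMAGE=vm-disk       # hypothetical image name
PREV=snap-prev      # the snapshot taken by the previous backup run
CURR=snap-curr      # the snapshot this run creates

run() { echo "$@"; }

# 1. Take a point-in-time snapshot of the image.
run rbd snap create "$POOL/$IMAGE@$CURR"

# 2. Export only the blocks changed since the previous snapshot.
run rbd export-diff --from-snap "$PREV" "$POOL/$IMAGE@$CURR" backup.diff

# 3. Replay that diff onto the copy held by the remote "backup" cluster.
run rbd --cluster backup import-diff backup.diff "$POOL/$IMAGE"
```

Because `export-diff` ships only the delta between two snapshots, repeated runs stay cheap even for large images; a first full `rbd export` seeds the remote copy.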
In the medium-term we're looking at having a way to replicate your entire cluster over the wire at the RADOS level (underlying object store). Longer-term we'd love to be able to offer WAN-scale replication for a single cluster and the ability to snapshot a cluster (or portions/pools therein) easily.
I hope that helps. If you have more questions hit me up on #ceph at OFTC.net IRC.