HDFS on Ceph (RBD)

Hi Warren,

Following our brief chat after the Ceph Ops session at the Vancouver
summit today, I added a few more notes to the etherpad
(https://etherpad.openstack.org/p/YVR-ops-ceph).

I wonder whether you'd considered setting up CRUSH layouts so that you
can have multiple Cinder AZs or volume types, each mapping to a
distinct subset of OSDs in your cluster. You'd put those subsets in
pools with rep=1 (i.e., no Ceph-level replication). Your Hadoop users
would then follow a provisioning pattern of attaching volumes from
each CRUSH ruleset and building HDFS over them with a topology that
ensures no single underlying OSD failure can break HDFS, assuming
regular HDFS replication is used on top. A pool per HDFS node is
probably the obvious/naive starting point; clearly that implies a
certain scale to begin with, but it would probably work for you...? A
rough sketch of the Ceph/Cinder side is below.
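
To make that concrete, here's a minimal sketch of one such zone,
assuming a Luminous-era cluster. All the names (the hdfs-a
root/rule/pool, the osd-node-* hosts, the PG counts) are hypothetical
placeholders, and note that moving hosts under a new CRUSH root will
remap any existing pools that reference them:

  # Carve out a CRUSH subtree holding the OSD hosts for one HDFS zone
  # (hypothetical names throughout).
  ceph osd crush add-bucket hdfs-a root
  ceph osd crush move osd-node-1 root=hdfs-a
  ceph osd crush move osd-node-2 root=hdfs-a

  # A replicated rule that only selects OSDs under that root,
  # spreading across hosts.
  ceph osd crush rule create-replicated hdfs-a-rule hdfs-a host

  # An unreplicated (rep=1) pool on that rule, served out over RBD.
  # Newer releases may require --yes-i-really-mean-it for size 1.
  ceph osd pool create hdfs-a 128 128 replicated hdfs-a-rule
  ceph osd pool set hdfs-a size 1
  ceph osd pool set hdfs-a min_size 1
  ceph osd pool application enable hdfs-a rbd

  # Expose the pool to users as a Cinder volume type; this assumes a
  # matching [hdfs-a] RBD backend stanza in cinder.conf pointing at
  # rbd_pool = hdfs-a with volume_backend_name = hdfs-a.
  cinder type-create hdfs-a
  cinder type-key hdfs-a set volume_backend_name=hdfs-a

Repeat per zone (hdfs-b, hdfs-c, ...). On the HDFS side you'd then map
each zone to a "rack" via a topology script
(net.topology.script.file.name in core-site.xml), so that HDFS's own
replication (dfs.replication, default 3) never puts every copy of a
block on volumes backed by the same rep=1 pool.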

-- 
Cheers,
~Blairo