Re: HDFS on Ceph (RBD)

We've contemplated doing something like that, but we also realized that
it would mean manual work in Ceph every time we lose a drive or server,
and a pretty bad experience for the customer when we have to do
maintenance.

We also kicked around the idea of leveraging the notion of a Hadoop rack
to define one set of instances that are Cinder volume backed, with the
rest on ephemeral drives (not Ceph-backed ephemeral). Using 100% ephemeral
isn't out of the question either, but we have seen a few cases where all
the instances in a region were terminated in quick succession.

Our customer has also tried grabbing the Sahara code (Hadoop Swift) and
running it on their own to interface with RGW-backed Swift, but ran into
an issue where the Sahara code stats each item within a container
sequentially. I think there are efforts to multithread this.
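
As a rough illustration of that multithreading fix, the per-object HEAD
requests are independent and can be fanned out across a thread pool. A
sketch assuming python-swiftclient, with a placeholder RGW endpoint,
credentials, and container name:

    # Parallelize the per-object stat (HEAD) calls that are currently
    # issued sequentially. Endpoint and credentials are placeholders.
    from concurrent.futures import ThreadPoolExecutor
    import threading
    from swiftclient.client import Connection

    AUTH = dict(authurl="http://rgw.example.com/auth/v1.0",
                user="hadoop:swift", key="secret")
    _local = threading.local()

    def _conn():
        # swiftclient Connections aren't thread-safe; keep one per worker.
        if not hasattr(_local, "conn"):
            _local.conn = Connection(**AUTH)
        return _local.conn

    def stat_object(name):
        # One independent HEAD request per object.
        return name, _conn().head_object("data", name)

    _, objects = Connection(**AUTH).get_container("data", full_listing=True)
    with ThreadPoolExecutor(max_workers=16) as pool:
        stats = dict(pool.map(stat_object, (o["name"] for o in objects)))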

-- 
Warren Wang





On 5/20/15, 7:27 PM, "Blair Bethwaite" <blair.bethwaite@xxxxxxxxx> wrote:

>Hi Warren,
>
>Following our brief chat after the Ceph Ops session at the Vancouver
>summit today, I added a few more notes to the etherpad
>(https://etherpad.openstack.org/p/YVR-ops-ceph).
>
>I wonder whether you'd considered setting up crush layouts so you can
>have multiple cinder AZs or volume-types that map to a subset of OSDs
>in your cluster. You'd have them in pools with rep=1 (i.e., no
>replication). Then have your Hadoop users follow a provisioning
>pattern that involves attaching volumes from each crush ruleset and
>building HDFS over them in a manner/topology so as to avoid breaking
>HDFS for any single underlying OSD failure, assuming regular HDFS
>replication is used on top. Maybe a pool per HDFS node is the
>obvious/naive starting point, clearly that implies a certain scale to
>begin with, but probably works for you...?
>
>-- 
>Cheers,
>~Blairo
>
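
To make Blair's suggestion concrete, here is a rough sketch of the
per-node, size=1 pool layout, driving the stock ceph CLI from Python. It
assumes per-node CRUSH roots (each a disjoint subset of OSDs) already
exist; the rule/pool names and PG counts are placeholders:

    # One CRUSH rule and one size=1 (unreplicated) pool per HDFS node,
    # each rooted at a disjoint subset of OSDs. HDFS replication on top
    # provides the redundancy. Names and PG counts are placeholders.
    import subprocess

    def ceph(*args):
        subprocess.check_call(("ceph",) + args)

    for i in range(1, 4):
        rule = "hdfs-rule%d" % i
        root = "hdfs-node%d" % i   # assumed pre-existing CRUSH bucket
        pool = "hdfs-pool%d" % i
        ceph("osd", "crush", "rule", "create-simple", rule, root, "osd")
        ceph("osd", "pool", "create", pool, "64", "64", "replicated", rule)
        ceph("osd", "pool", "set", pool, "size", "1")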
