On 11/13/2010 11:50 PM, Pete Zaitcev wrote:
> See, my agenda was to ask next if Jeff's CloudFS had enough
> consistency to back a block device emulator and thus displace
> Sheepdog, which I see as too ad-hoc. Now if Jeff himself prefers to
> package RBD (RADOS), then this may be an admission that CloudFS is
> not up to it.

On the first point, I believe the answer is yes: GlusterFS can support block devices via loopback, and I know of several sites using it that way. It could probably be improved somewhat, as BTW could the loopback driver, which somebody broke a while back by making it single-threaded per device.

On the second point, I wouldn't say that CloudFS is "not up to it" so much as that CloudFS is not designed for it. It can handle that need *adequately*, and if a user/provider wanted to limit the number of different storage technologies in play (a reasonable goal IMO), then that would be the way to go. However, CloudFS is all about the "multi" - multi-tenant, multi-site, etc. - with all of the coherency issues that implies in a world of complex namespaces and non-block-aligned access. Sheepdog and RBD, which don't have to worry about any of that, can and should take advantage of the simpler requirements to perform better in that particular situation. I don't even know that GlusterFS would "lose" in the performance comparison I suggested to Steven a while back, but whether it would or not isn't particularly relevant to CloudFS's goals.

As with the comparison to Ceph, the real issue is balancing a user's (or provider's) need for operational simplicity against their need for optimality in each component. If they want to keep things simple, I'd propose using just GlusterFS/CloudFS with block devices via loopback. If they want to differentiate themselves with higher performance for virtual block devices, at a cost in complexity (including complexity for "secondary" functions such as backups), then they could augment that with RBD or Sheepdog.
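For the archives, a minimal sketch of the "block devices via loopback" approach mentioned above. This assumes a GlusterFS volume already mounted at /mnt/gluster (a hypothetical path), and root privileges for losetup; it is just an illustration of the pattern, not a tuned setup:

```shell
# Assumption: a GlusterFS volume is mounted at /mnt/gluster (hypothetical path).
# Most of this requires root (losetup, mkfs on a loop device).

# Create a sparse 10 GiB backing file on the GlusterFS mount.
truncate -s 10G /mnt/gluster/vm-disk-0.img

# Attach it as a loop block device; --find --show prints the first
# free /dev/loopN that was used.
LOOPDEV=$(losetup --find --show /mnt/gluster/vm-disk-0.img)

# From here the guest/host sees an ordinary block device: put a
# filesystem on it, or hand it to a VM as a virtio disk.
mkfs.ext4 "$LOOPDEV"

# Detach when finished.
losetup -d "$LOOPDEV"
```

The replication/coherency work all happens underneath, at the GlusterFS file level, which is exactly why a purpose-built block store like RBD or Sheepdog can skip that overhead.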
_______________________________________________
cloud mailing list
cloud@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/cloud