On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu <rongze@xxxxxxxxxxxxxxx> wrote:
> Hi folks,
>
> Recently, I have been using Puppet to deploy Ceph and integrate it with
> OpenStack. We put compute and storage together in the same cluster, so
> nova-compute and the OSDs run on each server. We will create a local pool
> for each server, and each pool will use only that server's disks. The
> local pools will be used by Nova for the root disk and ephemeral disk.

Hmm, this is constraining Ceph quite a lot; I hope you've thought about what
this means in terms of data availability and even utilization of your
storage. :)

> In order to use the local pools, I need to add rules for them to ensure
> that each local pool uses only local disks. The only way to add a rule in
> Ceph is:
>
> ceph osd getcrushmap -o crush-map
> crushtool -d crush-map -o crush-map.txt
> crushtool -c crush-map.txt -o new-crush-map
> ceph osd setcrushmap -i new-crush-map
>
> If multiple servers set the crush map simultaneously (the Puppet agents
> will do that), there is a possibility of consistency problems. So a
> command for adding a rule directly would be very convenient, such as:
>
> ceph osd crush add rule -i new-rule-file
>
> Could I add this command to Ceph?

We love contributions to Ceph, and this is an obvious hole in our atomic
CLI-based CRUSH manipulation; a fix would be very welcome. Please be aware
that the way these commands are processed internally was significantly
overhauled between Cuttlefish and the upcoming Dumpling, which you'll need
to deal with if you want your change to cross that boundary. I also
recommend looking carefully at how we handle individual pool changes and
whole-map injection, to make sure the interface you use and the places you
do data extraction make sense. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
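
[Editor's note: the whole-map round-trip discussed above can be sketched as the shell sequence below. It is a non-atomic read-modify-write: two Puppet agents running it concurrently will each inject their own edited map, and the last writer silently discards the other's rule, which is the race Rongze describes. The filenames are placeholders, and the guard is only so the sketch is safe to run on a machine without a cluster.]

```shell
# Sketch of the non-atomic CRUSH round-trip (filenames are illustrative).
# Requires a reachable Ceph cluster; on a machine without the ceph CLI the
# sketch just reports "skipped" instead of running the commands.
if command -v ceph >/dev/null 2>&1; then
  ceph osd getcrushmap -o crush.map          # 1. dump the compiled (binary) map
  crushtool -d crush.map -o crush.map.txt    # 2. decompile to editable text
  # ... edit crush.map.txt here to add a rule restricted to this host ...
  crushtool -c crush.map.txt -o crush.new    # 3. recompile the edited map
  ceph osd setcrushmap -i crush.new          # 4. inject: last writer wins
  result=updated
else
  result=skipped                             # no cluster available here
fi
echo "$result"
```

A single monitor-side command such as the proposed `ceph osd crush add rule` would close the window between steps 1 and 4, since the monitors already serialize individual map updates.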