Re: add crush rule in one command

On Fri, Jul 26, 2013 at 1:22 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
On Thu, Jul 25, 2013 at 7:41 PM, Rongze Zhu <rongze@xxxxxxxxxxxxxxx> wrote:
> Hi folks,
>
> Recently, I have been using puppet to deploy Ceph and integrate Ceph with OpenStack. We
> put compute and storage together in the same cluster, so nova-compute and
> OSDs run on each server. We will create a local pool for each server,
> and each pool will only use that server's disks. The local pools will be used by
> Nova for root disks and ephemeral disks.

Hmm, this is constraining Ceph quite a lot; I hope you've thought
about what this means in terms of data availability and even
utilization of your storage. :)

We will also create a global pool for Cinder; the IOPS of the global pool will be better than a local pool's.
The benefit of a local pool is reducing the network traffic between servers and improving the management of storage. We use the same Ceph cluster for Nova, Cinder, and Glance, and create different pools (and different rules) for each of them. Maybe it needs more testing :)
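To make this concrete, here is a minimal sketch of what one per-server rule could look like in the decompiled crush map text; the bucket name server-1 and the ruleset id are placeholders for illustration only:

# hypothetical per-host rule in crush-map.txt: everything for the
# local pool is chosen from the bucket of a single server
rule local-server-1 {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take server-1
        step chooseleaf firstn 0 type osd
        step emit
}

With a rule like this, every replica of the local pool lives on that one server, which is exactly the availability trade-off Greg points out above.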
 

> In order to use the local pools, I need to add some rules to ensure that the
> local pools use only local disks. Currently the only way to add a rule in
> Ceph is to edit the whole crush map:
>
> ceph osd getcrushmap -o crush-map
> crushtool -d crush-map -o crush-map.txt
> (edit crush-map.txt to add the new rule)
> crushtool -c crush-map.txt -o new-crush-map
> ceph osd setcrushmap -i new-crush-map
>
> If multiple servers set the crush map simultaneously (the puppet agents will
> do that), there is the possibility of consistency problems. So a command for
> adding a single rule would be very convenient, such as:
>
> ceph osd crush add rule -i new-rule-file
>
> Could I add the command into Ceph?
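For illustration, a rough sketch of how such a command might slot into the per-host puppet steps; "ceph osd crush add rule" is only the proposal above, and the pool name, rule file, and ruleset id (as well as the crush_ruleset pool setting used to attach it) are placeholders:

# sketch only: the add-rule command does not exist yet, and the rule-file
# format would be whatever the proposal settles on
HOST=$(hostname -s)

# inject just this server's rule instead of rewriting the whole crush map
ceph osd crush add rule -i local-rule-$HOST.txt

# create the local pool and point it at the new rule
ceph osd pool create local-$HOST 128 128
ceph osd pool set local-$HOST crush_ruleset 3

Each server would then only touch its own rule and pool, so the concurrent whole-map get/set race goes away.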

We love contributions to Ceph, and this is an obvious hole in our
atomic CLI-based CRUSH manipulation which a fix would be welcome for.
Please be aware that there was a significant overhaul to the way these
commands are processed internally between Cuttlefish and
Dumpling-to-be that you'll need to deal with if you want to cross that
boundary. I also recommend looking carefully at how we do the
individual pool changes and how we handle whole-map injection, to make
sure the interface you use and the places you do data extraction make
sense. :)

Thank you for your quick reply; it is very useful to me :)
 
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com



--

Rongze Zhu - 朱荣泽

Email:      zrzhit@xxxxxxxxx
Blog:        http://way4ever.com
Weibo:     http://weibo.com/metaxen
Github:     https://github.com/zhurongze
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
