Re: CRUSH straw2 can not handle big weight differences


We kind of turned the crushmap inside out a little bit.

Instead of the traditional "for 1 PG, select OSDs from 3 separate data centers", we did "force selection from only one datacenter (out of 3) and leave only enough options to make sure precisely 1 SSD and 2 HDDs are selected".

We then organized these "virtual datacenters" in the hierarchy so that each of them in fact contains 3 options that lead to 3 physically separate servers in different locations.

Every physical datacenter has both SSDs and HDDs. The idea is that if one datacenter is lost, 2/3 of the SSDs still remain (and can be mapped to by marking the missing ones "out"), so performance is maintained.
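To make that concrete, below is a minimal sketch of what one such "virtual datacenter" bucket and its rule could look like in a decompiled CRUSH map. All bucket, host and rule names and the weights are made up for illustration; the real map would have one such bucket per combination of sites.

# Hypothetical "virtual datacenter": a bucket that contains only
# one NVMe host and two HDD hosts, each in a different physical site.
datacenter vdc_a {
        id -20                          # made-up bucket id
        alg straw2
        hash 0                          # rjenkins1
        item nvme-host-site1 weight 1.000
        item hdd-host-site2 weight 8.000
        item hdd-host-site3 weight 8.000
}

# Rule sketch: descend into exactly one virtual datacenter, then take
# one leaf (OSD) from each of the three hosts inside it.
rule hybrid_one_vdc {
        id 1
        type replicated
        min_size 3
        max_size 3
        step take default
        step choose firstn 1 type datacenter
        step chooseleaf firstn 0 type host
        step emit
}

Because each virtual datacenter offers exactly one NVMe host and two HDD hosts, selecting all three hosts inside it yields precisely 1 SSD/NVMe copy and 2 HDD copies.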





On 2018-01-29 13:35, Niklas wrote:
Yes.
It is a hybrid solution where a placement group is always located on one NVMe drive and two HDD drives. The advantages are great read performance and cost savings; the disadvantage is low write performance. Still, the write performance is good thanks to RocksDB on Intel Optane disks in the HDD servers.

The real-world setup looks more like what I described in a previous question (2018-01-23) here on the ceph-users list, "Ruleset for optimized Ceph hybrid storage". Nobody answered, so I am guessing it is not possible to create the rule I want. Now I am trying to solve it with virtual datacenters in the CRUSH map, which works but is maybe not the most optimal solution.
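For comparison, the commonly used device-class based hybrid rule (available since Luminous) is sketched below. It puts the primary copy on NVMe and the remaining copies on HDD, but on its own it does not control which datacenter each copy ends up in, which is what the virtual-datacenter layout above works around. The class and bucket names are assumptions.

# Common class-based hybrid rule sketch: primary on NVMe, rest on HDD.
# Assumes OSDs have been assigned the "nvme" and "hdd" device classes.
rule hybrid_nvme_primary {
        id 2
        type replicated
        min_size 3
        max_size 3
        step take default class nvme
        step chooseleaf firstn 1 type host
        step emit
        step take default class hdd
        step chooseleaf firstn -1 type host
        step emit
}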


On 2018-01-29 13:21, Wido den Hollander wrote:


On 01/29/2018 01:14 PM, Niklas wrote:
...


Is it your intention to put all copies of an object in only one DC?

What is your exact idea behind this rule? What's the purpose?

Wido

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



