Re: Migrating (slowly) from spinning rust to ssd


 



You can't have a server with both SSDs and HDDs in this setup because you can't write a crush rule that is able to pick n distinct servers when also specifying different device classes.
A crush rule for this setup looks like:

step take default class ssd
step chooseleaf firstn 1 type host
step emit
step take default class hdd
step chooseleaf firstn -1 type host
step emit

(No primary affinity needed)
It can pick the same server twice because the rule has to start over with a new "step take" to switch device classes, so the SSD host isn't excluded from the HDD selection.
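
In case it helps: as far as I know, a two-class rule like this can't be created with "ceph osd crush rule create-replicated" (that takes at most one device class), so it has to be edited into the CRUSH map by hand. Roughly (rule and pool names are just placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# add the steps above to crushmap.txt, wrapped as "rule hybrid { id ... type replicated ... }"
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
ceph osd pool set <pool> crush_rule hybrid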

That said, it does work very well for read-heavy workloads. But such a setup suffers more than usual from server outages: if one SSD host fails, a lot of reads shift to the HDD hosts and can overload them...


Paul

2018-06-01 16:34 GMT+02:00 Jonathan Proulx <jon@xxxxxxxxxxxxx>:
Hi All,

I'm looking at starting to move my deployed ceph cluster to SSD.

As a first step, my thought is to get a large enough SSD
expansion that I can set the crush map to ensure 1 copy of every
(important) PG is on SSD and use primary affinity to ensure that copy
is the primary.

I know this won't help with writes, but most of my pain is reads, since
workloads are generally not cache friendly. Write workloads, while
larger, are fairly asynchronous, so WAL and DB on SSD along with some
write-back caching on the libvirt side (most of my load is VMs) make
writes *seem* fast enough for now.
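
(By write-back caching on the libvirt side I mean the per-disk cache attribute in the domain XML, roughly:

<driver name='qemu' type='raw' cache='writeback'/>

on each RBD-backed disk.)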

I have a few questions before writing a check that size.

Is this completely insane?

Are there any hidden surprises I may not have considered?

Will I really need to mess with the crush map to get this to happen?  I
expect so, but if primary affinity settings along with my current "rack"
level leaves are good enough to be sure each of the 3 replicas is in a
different rack and at least one of those is on an SSD OSD, I'd rather
not touch crush (bonus points if anyone has a worked example).
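
For context, my current rule is (I believe) just the stock replicated rule with rack as the failure domain, something like:

rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}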

Thanks,
-Jon




--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
