Migrating (slowly) from spinning rust to SSD

Hi All,

I'm looking at starting to move my deployed Ceph cluster to SSD.

As a first step, my thought is to get a large enough SSD expansion
that I can set the CRUSH map to ensure one copy of every (important)
PG is on SSD, and use primary affinity to ensure that copy is the
primary.
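
For concreteness, the primary affinity part of the plan looks roughly
like this (OSD ids are made up for illustration, and my understanding
is that older releases also need mon_osd_allow_primary_affinity
enabled before the setting takes effect):

    # Demote the HDD OSDs so CRUSH avoids picking them as primary,
    # and leave the SSD OSDs at full affinity.
    ceph osd primary-affinity osd.0 0.0    # HDD
    ceph osd primary-affinity osd.1 0.0    # HDD
    ceph osd primary-affinity osd.12 1.0   # SSD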

I know this won't help with writes, but most of my pain is reads,
since my workloads are generally not cache friendly. The write
workloads, while larger, are fairly asynchronous, so WAL and DB on SSD
along with some writeback caching on the libvirt side (most of my load
is VMs) make writes *seem* fast enough for now.
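
For the WAL/DB part I'm assuming the usual ceph-volume layout,
something like the following when (re)building an OSD (device paths
are made up for illustration):

    # HDD carries the data; SSD/NVMe partitions carry the DB and WAL.
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2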

I have a few questions before writing a check of that size.

Is this completely insane?

Are there any hidden surprises I may not have considered?

Will I really need to mess with the CRUSH map to make this happen?  I
expect so, but if primary affinity settings along with the current
"rack" level leaves are good enough to be sure each of the 3 replicas
is in a different rack and at least one of them is on an SSD OSD, I'd
rather not touch CRUSH (bonus points if anyone has a worked example).
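
In case it helps, the closest thing to a worked example I've come
across is a hybrid rule along these lines, using device classes
(Luminous or later). I haven't tested it, and as I understand it the
two "take" steps don't coordinate with each other, so an SSD copy and
an HDD copy could still land in the same rack:

    rule ssd-primary-hybrid {
        id 5
        type replicated
        min_size 1
        max_size 10
        # first replica comes from an SSD, chosen at rack level
        step take default class ssd
        step chooseleaf firstn 1 type rack
        step emit
        # remaining replicas come from HDDs in distinct racks
        step take default class hdd
        step chooseleaf firstn -1 type rack
        step emit
    }

A pool would then be pointed at it with something like "ceph osd pool
set <pool> crush_rule ssd-primary-hybrid". Since CRUSH lists the SSD
OSD first, that copy should end up primary even without the affinity
tweaks, if I understand the ordering correctly.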

Thanks,
-Jon



