CacheCade to cache pool - worth it?

Hi,

I have a small, 3-node Firefly cluster. Each node hosts 6 OSDs, one 3 TB spinner each. Each host has 2 SSDs used for the journals, plus 4 SSDs configured as a 2 x RAID1 CacheCade array. The cluster hosts KVM-based virtual machines, about 180 at the moment. I'm thinking about migrating from the CacheCade arrays to Ceph's cache tiering, but I don't have any relevant experience with it. I'm considering two options:

1) keep the hardware setup as it is and convert the 12 SSDs into the cache pool;
2) move those SSDs into the KVM hosts and fill the remaining empty slots there, for 20 SSDs in total, and use them as the cache pool - this way I'd have 3 x 4 free slots for spinners in the cluster.

I'm not sure how I should assess the requirements. Right now CacheCade is doing well most of the time, but I don't like the idea of a local cache, and it isn't expandable. During peak times the CacheCade solution also seems to be inadequate.
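
For reference, this is roughly how I understand a writeback cache tier would be wired up on Firefly (the pool names "rbd" and "ssd-cache" are just placeholders, and the sizing values are examples I would still have to tune, not recommendations):

    # assumes an SSD-only pool "ssd-cache" already exists on its own CRUSH rule
    ceph osd tier add rbd ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay rbd ssd-cache

    # hit set tracking is required for the tiering agent
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache hit_set_count 1
    ceph osd pool set ssd-cache hit_set_period 3600

    # example sizing/flush thresholds, to be adjusted to the actual SSD capacity
    ceph osd pool set ssd-cache target_max_bytes 1000000000000
    ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
    ceph osd pool set ssd-cache cache_target_full_ratio 0.8

From what I've read, the tiering agent flushes dirty objects based on cache_target_dirty_ratio and evicts clean ones as the pool approaches cache_target_full_ratio, so sizing target_max_bytes against the usable SSD capacity seems to be the part I most need to figure out.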

Your experiences or any suggestions based on my description would be very welcome.

Best regards,
Mate
