Re: How do I mix drive sizes in a CEPH cluster?

My opinion: go for the two-pool option, and try to use SSDs for journals. In our tests, HDDs and VMs don't really work well together (too many small I/Os), but obviously it depends on what the VMs are running.
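
For reference, this is roughly how journals land on a separate SSD with ceph-disk (the provisioning tool current as of Jewel); the device names here are purely examples:

    # Data on the HDD (/dev/sdb); ceph-disk carves a journal
    # partition out of the SSD (/dev/sda) automatically.
    ceph-disk prepare /dev/sdb /dev/sda
    ceph-disk activate /dev/sdb1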

Another option would be to put an SSD cache tier in front of the HDDs. That would really help.
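
Roughly, wiring up a writeback cache tier looks like this; "hdd-pool" and "ssd-cache" are placeholder pool names:

    # Attach the SSD pool as a writeback cache in front of the HDD pool.
    ceph osd tier add hdd-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay hdd-pool ssd-cache
    # The cache pool needs a hit set and a size target to flush/evict against.
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache target_max_bytes 1099511627776   # 1 TiB; adjust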

But even with that, I would hesitate to put both slow and fast HDDs in the same pool. Since the slow HDDs are much bigger, you'll have to assign them a much higher weight, meaning most PGs will end up there and you won't really benefit from the fast HDDs.
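
To put numbers on that with the hardware from the original mail: an 8T drive gets a CRUSH weight of about 7.28 versus about 0.55 for a 600G drive, and the ten big drives hold 80T of the ~105T raw total, so roughly three quarters of the PGs would land on the slow spindles. Splitting them into two pools means two CRUSH roots and two rules; here is a rough fragment of a decompiled CRUSH map, with invented bucket names (the fast side mirrors the slow one):

    # ceph osd getcrushmap -o map.bin && crushtool -d map.bin -o map.txt
    # (edit map.txt, recompile with crushtool -c, load with
    #  ceph osd setcrushmap -i)

    root slow {
        id -10
        alg straw
        hash 0                          # rjenkins1
        item node1-slow weight 14.56    # 2x 8T at ~7.28 each
        item node2-slow weight 14.56
    }

    rule slow_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take slow
        step chooseleaf firstn 0 type host
        step emit
    }

A pool is then bound to the rule with "ceph osd pool set <pool> crush_ruleset 1" (the setting was later renamed crush_rule).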

Another option would be to use the 15k HDDs as a cache tier for the slow ones... but then you'd lose a lot of space (whereas you could get better results with some SSDs as the cache tier).

Cheers!
Xavi.


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Adam Carheden
Sent: Thursday, March 30, 2017 20:37
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: How do I mix drive sizes in a CEPH cluster?

When mixing hard drives of different sizes, what are the advantages and disadvantages of one big pool vs multiple pools with matching drives within each pool?

-= Long Story =-
Using a mix of new and existing hardware, I'm going to end up with 10x 8T HDDs and 42x 600G@15krpm HDDs. I can distribute the 8T drives evenly among 5 nodes and the 600G drives among 7 nodes. All drives will have journals on SSD, and the Ceph network is a 2x10G LAG. Usage will be RBD for VMs.

Is the following correct?

-= 1 big pool =-
* Should work fine, but performance is in question
* Smaller I/O could be inconsistent under load. Normally small writes all go to the SSD journals, but under load that saturates the SSDs, small writes may be slower when the data happens to sit on the slower 8T drives.
* Larger I/O should get the average performance of all drives, assuming images are created with appropriate striping (see the sketch after this list)
* Rebuilds will be bottlenecked by the 8T drives
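
As a sketch of what "appropriate striping" might look like at image-creation time (pool/image names and the striping numbers are invented):

    # Format-2 image striped 16 ways with a 64K stripe unit so that
    # large sequential I/O fans out across more OSDs at once.
    rbd create vms/disk1 --size 102400 \
        --image-format 2 \
        --stripe-unit 65536 --stripe-count 16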

-= 2 pools with matching disks =-
* Should work fine
* Smaller I/O should be the same for both pools due to SSD journals
* Larger I/O will be faster for the pool with 600G@15krpm drives, due both to drive speed and drive count
* Larger I/O will be slower for the pool with 8T drives, for the same reasons
* Rebuilds will be significantly faster on the 600G/42-drive pool

Is either configuration a bad idea, or is it just a matter of my space/speed needs?

It should be possible to have 3 pools:
1) 8T only (slow pool)
2) 600G only (fast pool)
3) all OSDs (medium speed pool)
...but the rebuild would impact performance on the "fast" 600G pool if an 8T drive failed, since the medium-speed pool would be rebuilding across all drives, correct?
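
For what it's worth, that layout would be three CRUSH rules (one taking the slow root, one the fast root, one a root containing all hosts) with a pool bound to each; a sketch with invented names, assuming rules with ids 0, 1 and 2 already exist in the CRUSH map:

    ceph osd pool create slow-pool 512 512
    ceph osd pool set slow-pool crush_ruleset 1    # 8T-only rule
    ceph osd pool create fast-pool 1024 1024
    ceph osd pool set fast-pool crush_ruleset 2    # 600G-only rule
    ceph osd pool create medium-pool 1024 1024     # keeps the all-OSD rule 0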

Thanks
--
Adam Carheden
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com