Re: Optimal OSD count for SSDs / NVMe disks

Hi Robert,

On 04.02.2016 at 00:45, Robert LeBlanc wrote:
> Once we put in our cache tier, the I/O on the spindles was so low that we
> just moved the journals off the SSDs onto the spindles and left the
> SSD space for cache. There has been testing showing that better
> performance can be achieved by putting more OSDs on an NVMe disk, but
> you also have to balance that against OSDs not being evenly distributed,
> so some OSDs will use more space than others.

Hm, maybe it was due to our very small cache size (only 540 GB in
total, limited to max-bytes 220 GB because of size=2, plus the uneven
distribution you mentioned), but we found that while the cache pool was
flushing to the storage pool, client IO took a severe hit.
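For reference, these are roughly the knobs I would look at to make the
flushing less bursty. This is only a sketch; the pool name "cache-ssd"
and the values are placeholders for our setup, not tested recommendations:

    # cap the cache pool well below its raw capacity
    ceph osd pool set cache-ssd target_max_bytes 220000000000
    # start flushing dirty objects earlier, so it happens gradually
    ceph osd pool set cache-ssd cache_target_dirty_ratio 0.4
    # start evicting clean objects before the pool gets close to full
    ceph osd pool set cache-ssd cache_target_full_ratio 0.8

The idea would be that a lower cache_target_dirty_ratio spreads the flush
load out over time instead of letting dirty data pile up until the tiering
agent has to flush aggressively and compete with client IO.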

> I probably wouldn't go beyond four 100 GB partitions, but it really
> depends on the number of PGs and your data distribution. Also, even
> with all the data in the cache, there is still a performance penalty
> for having the caching tier vs. a native SSD pool. So if you are not
> using the tiering, move to a straight SSD pool.
Yes, I also have the feeling that less than 100 GB per OSD doesn't make
sense. I am leaning toward 3 OSDs of about 120 GB each, plus a bit for the
journals, as the first "draft" implementation.

Greetings
-Sascha-
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
