Re: Building ceph clusters with 8TB SSD drives?

We are using 4TB Kingston DC500M drives for 6+2 EC RBD data pools, with 2 OSDs per disk. They deliver great IOPS, but they are TLC, so probably not comparable with QLC drives. I think QLC drives are OK for mostly cold/static data, because their performance drops sharply when they run full: https://www.howtogeek.com/428869/ssds-are-getting-denser-and-slower-thanks-to-qlc-flash/
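
For reference, a minimal sketch of how such a layout can be set up (profile, pool, and image names and the device paths below are just examples; adjust k/m, PG counts and the failure domain to your own crush map):

  # 6+2 erasure-code profile and an EC data pool for RBD
  ceph osd erasure-code-profile set ec-6-2 k=6 m=2 crush-failure-domain=host
  ceph osd pool create rbd-data 1024 erasure ec-6-2
  ceph osd pool set rbd-data allow_ec_overwrites true
  # replicated pool for RBD metadata; images put their data on the EC pool
  ceph osd pool create rbd-meta 128 replicated
  rbd create testimage --size 100G --pool rbd-meta --data-pool rbd-data
  # deploy 2 OSDs per physical SSD
  ceph-volume lvm batch --osds-per-device 2 /dev/sdb /dev/sdc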

For certain applications, the low bandwidth when full might be acceptable; they still deliver significantly more IOPS than HDDs. For example, on our RBD pools the average bandwidth per drive is below 50MB/s, so a drop to at most 80MB/s is acceptable under normal operations. Just don't expect great rebuild times once bandwidth becomes the limit.
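
To put numbers on that: refilling a single 8TB OSD at 80MB/s takes about 8,000,000MB / 80MB/s = 100,000s, i.e. roughly 28 hours for that one drive alone, before any EC or scheduling overhead.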

In the long run, I am considering large flash drives for our CephFS data pool. We are approaching 1000 OSDs in that pool, and the data is mostly cold.

A further benefit is that there are no moving parts; the higher shock resistance can be a big plus.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Matt Larson <larsonmattr@xxxxxxxxx>
Sent: 07 May 2021 22:10:40
To: ceph-users
Subject:  Building ceph clusters with 8TB SSD drives?

Is anyone trying Ceph clusters containing larger (4-8TB) SSD drives?

8TB SSDs are described here (
https://www.anandtech.com/show/16136/qlc-8tb-ssd-review-samsung-870-qvo-sabrent-rocket-q
) and use QLC NAND flash memory to reach that capacity at a lower cost.
Currently, the 8TB Samsung 870 SSD is $800/ea at some online retail stores.

SATA form-factor SSDs can reach read/write rates of 560/520 MB/s, which, while
not as fast as NVMe drives, is still a multiple of what 7200 RPM drives deliver.
SSDs now appear to have much lower failure rates than HDDs in 2021 (
https://www.techspot.com/news/89590-backblaze-latest-storage-reliability-figures-add-ssd-boot.html
).

Are there any major caveats to considering working with larger SSDs for
data pools?

Thanks,
  Matt

--
Matt Larson, PhD
Madison, WI  53705 U.S.A.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx