Re: Tuning CephFS on NVME for HPC / IO500

Hi,

On 2022-12-01 8:26, Manuel Holtgrewe wrote:

The Ceph cluster nodes have 10x enterprise NVMEs each (all branded as "Dell enterprise disks"). The 8 older nodes (from last year) have "Dell Ent NVMe v2 AGN RI U.2 15.36TB", which are Samsung disks; the 2 newer nodes (just delivered) have "Dell Ent NVMe CM6 RI 15.36TB", which are Kioxia disks.

Does the "RI" stand for read-intensive?

I think you need mixed-use flash storage for a Ceph cluster, as it generates many small random writes.
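The usual way to tell read-intensive from mixed-use parts is the rated endurance: drive writes per day (DWPD) over the warranty period, derived from the total bytes written (TBW) rating. As a rough rule of thumb, read-intensive drives are rated around 1 DWPD and mixed-use around 3 DWPD. A minimal sketch of the arithmetic, with illustrative numbers rather than the actual specs of the Dell/Samsung or Kioxia models above (check the datasheets):

```python
# Estimate drive-writes-per-day (DWPD) from a drive's rated endurance
# (TBW) and its capacity. The TBW and warranty figures below are
# illustrative assumptions, not vendor specs for the drives in this thread.

def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """DWPD = total rated writes / (capacity * warranty days)."""
    return tbw_tb / (capacity_tb * warranty_years * 365)

# A 15.36 TB read-intensive drive at ~1 DWPD works out to ~28,000 TBW
# over 5 years; a mixed-use drive at 3 DWPD would need about 3x that.
print(round(dwpd(28032, 15.36), 2))   # 1.0
print(round(dwpd(84096, 15.36), 2))   # 3.0
```

If the cluster's actual write volume stays well under the RI rating, the cheaper drives may still hold up; the endurance math is what tells you.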

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
