Re: Tuning CephFS on NVME for HPC / IO500


 



If it's this one:
http://www.acmemicro.com/Product/17848/Kioxia-KCD6XLUL15T3---15-36TB-SSD-NVMe-2-5-inch-15mm-CD6-R-Series-SIE-PCIe-4-0-5500-MB-sec-Read-BiCS-FLASH-TLC-1-DWPD

it's listed as 1 DWPD with a 5-year warranty, so it should be OK.
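
Back-of-the-envelope, 1 DWPD on a 15.36 TB drive over the 5-year warranty works out to roughly 28 PB of rated writes. A quick sketch (drive size and warranty term taken from the listing above):

# Rough endurance estimate for a 15.36 TB drive rated at 1 DWPD
# over a 5-year warranty (figures from the listing above).
capacity_tb = 15.36
dwpd = 1.0
warranty_years = 5

total_writes_tb = capacity_tb * dwpd * 365 * warranty_years
print(f"~{total_writes_tb:.0f} TB (~{total_writes_tb / 1000:.0f} PB) of rated writes")
# -> ~28032 TB (~28 PB)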

Thanks,
Kevin

________________________________________
From: Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>
Sent: Wednesday, November 30, 2022 11:58 PM
To: ceph-users
Subject:  Re: Tuning CephFS on NVME for HPC / IO500

Hi,

On 2022-12-01 8:26, Manuel Holtgrewe wrote:

> The Ceph cluster nodes have 10x enterprise NVMEs each (all branded as
> "Dell enterprise disks"), 8 older nodes (last year) have "Dell Ent NVMe
> v2 AGN RI U.2 15.36TB" which are Samsung disks, 2 newer nodes (just
> delivered) have "Dell Ent NVMe CM6 RI 15.36TB" which are Kioxia disks.

Does the "RI" stand for read-intensive?

I think you need mixed-use flash storage for a Ceph cluster, as it
generates a lot of random write traffic.
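
As a rough sanity check you can translate an assumed client write load into per-drive DWPD and compare it against the drive rating; all workload figures below are illustrative assumptions, not measurements from your cluster:

# Sketch: translate an assumed client write load into per-OSD DWPD.
# All workload figures are illustrative assumptions, not measurements.
client_writes_tb_per_day = 50   # assumed aggregate client writes
replication_factor = 3          # assumed 3x replicated pool
write_amplification = 2.0       # assumed BlueStore WAL/compaction overhead
num_osds = 100                  # 10 nodes x 10 NVMe drives
capacity_tb = 15.36

device_writes_per_osd = (client_writes_tb_per_day * replication_factor
                         * write_amplification) / num_osds
print(f"~{device_writes_per_osd:.1f} TB/day per OSD "
      f"= {device_writes_per_osd / capacity_tb:.2f} DWPD")
# -> ~3.0 TB/day per OSD = 0.20 DWPD with these assumptions

Whether read-intensive (typically ~1 DWPD) or mixed-use (typically ~3 DWPD) drives are sufficient then depends on the actual write load you measure.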

Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de/

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
