Re: device class for nvme disk is ssd

That page has a mix of accurate and outdated info.

What would we use instead? SATA / SAS drives that are progressively withering in the market and offer less performance for the same money? Why pay extra for an HBA just to use legacy media?

You can use NVMe for WAL+DB, at the cost of some extra complexity. You’ll get faster metadata operations and lower latency on some small writes, but in the end the bulk drives are the bottleneck.
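If you want to try that hybrid layout anyway, here’s a minimal ceph-volume sketch; /dev/sdb and /dev/nvme0n1p1 are placeholder device paths, substitute your own:

    # bulk drive carries the data, NVMe partition carries the BlueStore DB
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

With no separate --block.wal, the WAL lands on the DB device, which is usually what you want.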

All-NVMe chassis are the only way to go; legacy interfaces are a false economy.

Ceph doesn’t seem to automatically assign the nvme device class, but one can easily change the device class on an OSD:
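Something along these lines, assuming osd.0 is one of your NVMe OSDs (substitute your own OSD IDs and rule name):

    # drop the auto-detected class, then set the one you want
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class nvme osd.0

    # optionally, a CRUSH rule that targets only the new class
    ceph osd crush rule create-replicated nvme-only default host nvme

Pools assigned that rule will then land only on OSDs carrying the nvme class.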

> On Jun 28, 2023, at 8:29 AM, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
> 
> 
>> 
>> Hi,
>> is it a problem that the device class for all my disks is SSD even though
>> all of these disks are NVMe disks? If it is just a classification for Ceph,
>> so I can have pools on SSDs and NVMes separated, I don't care. But maybe
>> Ceph handles NVMe disks differently internally?
>> 
> 
> I am not entirely sure if this is still the case with newer versions of Ceph, but what I understood from previous discussions is that it does not really pay off to use NVMe as an OSD; they are mostly used for speeding up writes to slower OSDs (HDDs).
> 
> https://yourcmc.ru/wiki/Ceph_performance
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



