Re: NVMe's

>> With today’s networking, _maybe_ a super-dense NVMe box needs 100 Gb/s, whereas a less-dense one is probably fine with 25 Gb/s. And of course PCIe lanes.
>> 
>> https://cephalocon2019.sched.com/event/M7uJ/affordable-nvme-performance-on-ceph-ceph-on-nvme-true-unbiased-story-to-fast-ceph-wido-den-hollander-42on-piotr-dalek-ovh
> 
> I was able to reach 35 Gb/s of network traffic on each server (5 servers,
> with 6 NVMe drives per server, one OSD per NVMe) during a read benchmark
> from CephFS, and I wouldn't treat that as a super-dense box. So 25 Gb/s
> may be a bit too tight.

Thanks for the data point; without real-world reports it’s all just theoretical.  The above presentation makes the point that latency is more important than throughput, but that is very much a function of the use case.  For databases on RBD volumes there’s a lot of truth to it, especially for writes.  For, say, object service or for things like OpenStack Glance, it may often be the other way around.
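
For anyone sizing NICs against drive counts, a rough back-of-envelope helps.  The sketch below (Python; the per-drive throughput and the Ceph efficiency factor are assumptions on my part, not measurements from this thread) shows why 25 Gb/s gets tight at 6 NVMe OSDs per node:

# Rough NIC-sizing estimate for an NVMe OSD node.
# Every input is an illustrative assumption, not a measurement from this thread.

def required_nic_gbps(osds_per_node, per_osd_read_gbs, efficiency=0.7):
    """Client-facing read bandwidth a node could push, in Gb/s.

    per_osd_read_gbs -- assumed sustained large-block read per OSD, in GB/s
    efficiency       -- assumed fraction of raw drive throughput Ceph delivers
    """
    aggregate_gbs = osds_per_node * per_osd_read_gbs * efficiency   # GB/s
    return aggregate_gbs * 8                                        # GB/s -> Gb/s

# Example: 6 NVMe OSDs, each assumed to deliver ~1 GB/s of Ceph-level reads
print(required_nic_gbps(6, 1.0))    # ~33.6 Gb/s, in line with the 35 Gb/s reported above

Writes additionally multiply traffic on the cluster/replication network by the replication factor, so the headroom shrinks further there.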

> The workload doesn’t demand NVMe performance, so SSD seems to be the most cost-effective way to handle this.

To be pedantic, NVMe devices *are* SSDs, but you most likely mean SATA SSDs.

The thing is, with recent drive, chassis, and CPU models, it is often possible to provision an NVMe server at a cost comparable to that of a conventional SATA SSD server, in which case, why not?
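
As a purely illustrative sanity check (every price and capacity below is a placeholder, not a quote; substitute real numbers for your hardware), the comparison mostly comes down to cost per usable TB, with the NVMe box saving the HBA/expander:

# Hypothetical cost-per-TB comparison; all figures are placeholders.
def cost_per_tb(drive_price, drive_tb, drives, chassis_price):
    """Node cost divided by raw capacity, in currency units per TB."""
    return (drives * drive_price + chassis_price) / (drives * drive_tb)

sata = cost_per_tb(drive_price=380, drive_tb=3.84, drives=12, chassis_price=5500)  # chassis incl. HBA
nvme = cost_per_tb(drive_price=420, drive_tb=3.84, drives=12, chassis_price=5200)  # PCIe direct, no HBA
print(f"SATA SSD: {sata:.0f}/TB   NVMe: {nvme:.0f}/TB")   # roughly comparable with these numbers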

— aad



