Re: NVMe's

On 9/23/20 2:21 PM, Alexander E. Patrakov wrote:
On Wed, Sep 23, 2020 at 8:12 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

With today’s networking, _maybe_ a super-dense NVMe box needs 100Gb/s, while a less-dense one is probably fine with 25Gb/s. And of course PCIe lanes.

https://cephalocon2019.sched.com/event/M7uJ/affordable-nvme-performance-on-ceph-ceph-on-nvme-true-unbiased-story-to-fast-ceph-wido-den-hollander-42on-piotr-dalek-ovh
I was able to reach 35 Gb/s of network traffic on each server (5 servers,
with 6 NVMe drives per server, one OSD per NVMe drive) during a read
benchmark from CephFS, and I wouldn't call that a super-dense box. So
25Gb/s may be a bit too tight.


FWIW, in the nodes I mentioned earlier, we have 8 P4510 NVMe drives and 4x25GbE ports.  We can do about 60-70Gb/s (sometimes even closer to 80Gb/s) per server depending on the exact workload and network/client setup.  I suspect it should be possible to saturate 100GbE in ideal setups, and maybe even partially utilize 200GbE (especially for reads) in the future.  Sadly we don't have any in-house to test right now.
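
A back-of-envelope check of why such nodes end up network-bound can be sketched as follows. The per-drive throughput figure below is an assumption (a datasheet-class sequential-read ballpark for a P4510), not a number from this thread, and real per-OSD throughput under Ceph will be considerably lower:

```python
# Rough headroom check: aggregate raw NVMe read bandwidth vs. NIC capacity.

def gbit(gbytes):
    """Convert GB/s to Gb/s."""
    return gbytes * 8

drives_per_node = 8
per_drive_read_gbs = 3.0      # GB/s, assumed datasheet-class sequential read
nic_capacity_gbit = 4 * 25    # 4x25GbE ports

raw_nvme_gbit = gbit(drives_per_node * per_drive_read_gbs)
print(f"Raw NVMe read ceiling: {raw_nvme_gbit:.0f} Gb/s")   # 192 Gb/s
print(f"NIC ceiling:           {nic_capacity_gbit} Gb/s")   # 100 Gb/s
print(f"Network-bound: {raw_nvme_gbit > nic_capacity_gbit}")
```

Even with generous derating for OSD overhead, the drives can outrun 4x25GbE, which is consistent with the observation that 100GbE could be saturated in ideal setups.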


Mark




--
Alexander E. Patrakov
CV: http://pc.cd/PLz7
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



