One thing that's worked for me to get more out of NVMes with Ceph is to create multiple partitions on the NVMe with an OSD on each partition. That way you get more OSD processes and CPU per NVMe device. I've heard of people using up to 4 partitions like this.
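As a rough sketch of how that looks in practice (assuming a release with ceph-volume; device names and partition sizes here are placeholders, and the commands need root on the OSD host):

```shell
# Let ceph-volume carve one NVMe into 4 equal LVs, one OSD each:
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

# Or partition manually and create an OSD per partition:
sgdisk -n 0:0:+25% /dev/nvme0n1          # repeat for each partition
ceph-volume lvm create --data /dev/nvme0n1p1
```

The `--osds-per-device` route is less fiddly since ceph-volume handles the LVM layout for you.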
On Sun, Feb 24, 2019, 10:25 AM Vitaliy Filippov <vitalif@xxxxxxxxxx> wrote:
> We can get 513558 IOPS in 4K read per NVMe by fio but only 45146 IOPS
> per OSD by rados.
Don't expect Ceph to fully utilize NVMes, it's software and it's slow :)
Some colleagues tell me that SPDK works out of the box but almost doesn't
increase performance, because the userland-kernel interaction isn't the
bottleneck currently; it's the Ceph code itself. I also tried it once, but
I couldn't make it work. When I have some spare NVMes I'll make another
attempt.
So... try it and share your results here :) we're all interested.
--
With best regards,
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com