Re: Configuration about using nvme SSD

I have tried dividing an NVMe disk into four partitions, but rados bench
showed no significant performance improvement:
NVMe with partitions: 1 node, 3 NVMe, 12 OSDs: 166066 IOPS, 4K read
NVMe without partitions: 1 node, 3 NVMe, 3 OSDs: 163336 IOPS, 4K read
My Ceph version is 12.2.4.
What's wrong with my test?
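
For reference, a 4K rados bench run of the kind described above could look
roughly like this (pool name, duration and thread count are placeholders,
not the values used in the test above):

    # write 4K objects and keep them for the read phase
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    # 4K random reads against the objects written above
    rados bench -p testpool 60 rand -t 16
    # remove the benchmark objects afterwards
    rados -p testpool cleanup

Note that a single rados bench client can itself become the bottleneck before
the OSDs do, so running several clients in parallel and summing the results
may give a more representative number.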

On Mon, Feb 25, 2019 at 7:02 PM Wido den Hollander <wido@xxxxxxxx> wrote:
>
>
>
> On 2/24/19 4:34 PM, David Turner wrote:
> > One thing that's worked for me to get more out of nvmes with Ceph is to
> > create multiple partitions on the nvme with an osd on each partition.
> > That way you get more osd processes and CPU per nvme device. I've heard
> > of people using up to 4 partitions like this.
> >
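
As an illustration of the multiple-partitions approach, one possible way to
do it with ceph-volume is sketched below (the device name and partition
sizes are placeholders, not taken from the thread):

    # WARNING: mklabel destroys the existing partition table on the device
    parted -s /dev/nvme0n1 mklabel gpt
    # split one NVMe into four equally sized partitions
    parted -s /dev/nvme0n1 mkpart osd1 0% 25%
    parted -s /dev/nvme0n1 mkpart osd2 25% 50%
    parted -s /dev/nvme0n1 mkpart osd3 50% 75%
    parted -s /dev/nvme0n1 mkpart osd4 75% 100%
    # create one BlueStore OSD per partition
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3 /dev/nvme0n1p4; do
        ceph-volume lvm create --bluestore --data "$part"
    done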
>
> Increasing the number of placement groups also helps. You should also
> increase osd_op_num_threads_per_shard to something like 4.
>
> This will increase CPU usage, but you should also be able to get more
> out of the NVMe devices.
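
To make that concrete, the two knobs mentioned above could be applied roughly
as follows (pool name and PG count are placeholders, not recommendations for
any specific cluster):

    # raise the placement group count of the benchmark pool
    ceph osd pool set testpool pg_num 256
    ceph osd pool set testpool pgp_num 256

    # in ceph.conf on the OSD nodes, then restart the OSDs
    [osd]
    osd_op_num_threads_per_shard = 4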
>
> In addition, make sure you pin the CPU C-States to 1 and disable
> powersaving for the CPU.
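
On Intel-based hosts (an assumption about the hardware here), limiting
C-states and disabling powersaving is commonly done with kernel parameters
and the performance governor, for example:

    # add to the kernel command line (e.g. GRUB_CMDLINE_LINUX) and reboot
    intel_idle.max_cstate=1 processor.max_cstate=1
    # switch the CPU frequency governor to performance
    cpupower frequency-set -g performance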
>
> Wido
>
> > On Sun, Feb 24, 2019, 10:25 AM Vitaliy Filippov <vitalif@xxxxxxxxxx> wrote:
> >
> >     > We can get 513558 IOPS in 4K read per NVMe by fio but only 45146 IOPS
> >     > per OSD by rados.
> >
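
For comparison, a raw-device fio baseline of that kind is typically measured
with something like the following (the parameters and device name are
illustrative, not the ones behind the 513558 IOPS figure quoted above):

    fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
        --time_based --group_reporting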
> >     Don't expect Ceph to fully utilize NVMes; it's software and it's
> >     slow :)
> >     Some colleagues say that SPDK works out of the box but barely
> >     increases performance, because the userland-kernel interaction isn't
> >     the current bottleneck; the Ceph code itself is. I also tried it once,
> >     but I couldn't make it work. When I have some spare NVMes I'll make
> >     another attempt.
> >
> >     So... try it and share your results here :) we're all interested.
> >
> >     --
> >     With best regards,
> >        Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



