Re: Configuration about using nvme SSD

I saw Intel had a demo of a Luminous cluster running on top-of-the-line hardware; they used 2 OSD partitions per NVMe, which gave the best performance. I was curious why they split them like that and asked the demo person how they arrived at that number, but I never got a really good answer beyond "it provides better performance". So I guess this must be why.
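For reference, this is roughly what that kind of split looks like with ceph-volume (just a sketch: it assumes a release whose ceph-volume supports lvm batch with --osds-per-device, and /dev/nvme0n1 is only an example device name):

# carve one NVMe into two LVM-backed BlueStore OSDs
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

# or, with pre-created partitions, one OSD per partition
ceph-volume lvm create --data /dev/nvme0n1p1
ceph-volume lvm create --data /dev/nvme0n1p2

Either way you end up with multiple OSD daemons sharing one drive, which is where the extra parallelism (and CPU usage) discussed below comes from.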



On Mon, Feb 25, 2019 at 8:30 PM <vitalif@xxxxxxxxxx> wrote:
I create 2-4 RBD images sized 10GB or more with --thick-provision, then
run

fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128
-rw=randwrite -pool=rpool -runtime=60 -rbdname=testimg

for each of them at the same time.
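Spelled out, the whole run looks something like this (a sketch only; the pool and image names are just examples matching the command above):

# thick-provisioned test images
rbd create --size 10G --thick-provision rpool/testimg1
rbd create --size 10G --thick-provision rpool/testimg2

# one fio instance per image, all in parallel
fio -ioengine=rbd -direct=1 -invalidate=1 -name=test1 -bs=4k -iodepth=128 -rw=randwrite -pool=rpool -runtime=60 -rbdname=testimg1 &
fio -ioengine=rbd -direct=1 -invalidate=1 -name=test2 -bs=4k -iodepth=128 -rw=randwrite -pool=rpool -runtime=60 -rbdname=testimg2 &
wait

The total 4 KB randwrite IOPS is the sum of the per-process fio results.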

> How do you test what total 4 KB random write IOPS (RBD) you have?
>
> -----Original Message-----
> From: Vitaliy Filippov [mailto:vitalif@xxxxxxxxxx]
> Sent: 24 February 2019 17:39
> To: David Turner
> Cc: ceph-users; 韦皓诚
> Subject: Re: Configuration about using nvme SSD
>
> I've tried 4x OSD on fast SAS SSDs in a test setup with only 2 such
> drives in the cluster - it increased CPU consumption a lot, but total
> 4 KB random write IOPS (RBD) only went from ~11000 to ~22000. So it was
> a 2x increase, but at a huge cost.
>
>> One thing that's worked for me to get more out of NVMes with Ceph is
>> to create multiple partitions on the NVMe with an OSD on each
>> partition. That way you get more OSD processes and CPU per NVMe
>> device. I've heard of people using up to 4 partitions like this.
>
> --
> With best regards,
>    Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
