Re: [ceph][nautilus] performance with db/wal on nvme

On Wed, 20 May 2020 at 12:14, Ignazio Cassano <ignaziocassano@xxxxxxxxx> wrote:

> Hello Janne, so do you think we must move from 10Gb/s to 40 or 100Gb/s
> to make the most of nvme?
>

I think there are several factors to weigh when you need to maximize
performance, from putting the BIOS into performance mode to having as
fast a network as possible (40GbE is implemented as four 10GbE lanes,
so 25GbE is said to have lower latency than 40GbE, but don't quote me
on that) and so on. Also, in my experience, nvmes can absorb far more
parallel writes than ssds, so a single test might be light enough for
both drive types to handle easily, whereas with hundreds of writers at
the same time the nvmes would be favored, since they would not slow
down as quickly as the ssds would.
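
If you want to see that effect directly, a quick fio sketch at low and
high concurrency could look like the below (the device path is a
placeholder, and raw-device write tests destroy the data on the target):

  # one writer, shallow queue: easy for both ssd and nvme
  fio --name=single --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based --group_reporting

  # many parallel writers: this is where nvme tends to pull ahead
  fio --name=parallel --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=32 --numjobs=16 \
      --runtime=60 --time_based --group_reporting

Run the same pair against the ssd and compare how much latency and IOPS
degrade between the two runs on each device.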
Perhaps the nvmes will be far better every time rocksdb does level
compactions, so they may not show up as better write numbers in a quick
test, but rather as fewer stalls when/if the DB needs reorganizing.
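
To check whether compaction is actually costing you anything, the OSD
admin socket exposes rocksdb counters; a sketch (osd.0 is a placeholder
id, run it on the OSD host; use plain "perf dump" if your version does
not take a section argument):

  ceph daemon osd.0 perf dump rocksdb

Compare the compact* counters and the submit latencies before and after
a heavy write run; growth there during writes points at compaction
pressure.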

On the other hand, if the writes are always huge, evil, and long-lasting,
sooner or later you will be capped by how much data the backend disks can
take, regardless of the performance of the nvme/ssd WAL/DBs.
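As a back-of-the-envelope illustration (the numbers are invented):
twelve HDDs that each sustain ~150 MB/s, behind 3x replication, give
you at most 12 * 150 / 3 = 600 MB/s of sustained client writes for the
whole cluster, no matter how fast the WAL/DB devices in front are.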

Do both local and remote benchmarks (which you did), look at iostat for
both the nvme and ssd devices while the test runs, and, perhaps most
important of all, know when an acceptable result has been achieved.
If you have a target to reach and you reach it, then it is probably
best to move on to other parts of the design.
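
For the iostat part, something like this while the benchmark runs
(a sketch; the device names are placeholders):

  iostat -xmt 2 /dev/nvme0n1 /dev/sdb /dev/sdc

and watch the await and %util columns to see which device saturates
first.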


>>> So we created a vm on the pool with db/wal on ssd and a vm on the pool
>>> with db/wal on nvme.
>>> Fio performance is almost the same on both.
>>> What do you think about it?
>>> I expected better performance on the pool with db/wal on PCI Express
>>> nvme.
>>>
>>
>> Perhaps most of the time is lost talking over the network, making the
>> small difference between ssd and nvme performance invisible.
>>
>
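
One way to test that theory from inside the vm (a sketch; the test file
path and hostname are placeholders) is to compare queue-depth-1 sync
write latency against the raw network round trip to an OSD host:

  # qd1 sync 4k writes: per-IO latency is mostly network + replication
  fio --name=lat --filename=/mnt/fiotest --size=1G --rw=randwrite --bs=4k \
      --ioengine=libaio --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
      --runtime=30 --time_based

  # raw round trip for comparison
  ping -c 100 osd-host-1

If the fio write latency is only a few times the ping round trip, the
network and replication hops dominate, and the ssd-vs-nvme difference
will stay invisible.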
-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


