Re: EC Pool Disk Performance Toshiba vs Seagate

Hello,

Thanks for your reply! While the performance within Ceph is different, the two disks are exactly the same in rated performance, type, etc., just from two different manufacturers. Obviously I'd expect such a big difference between 5400 and 7200 RPM, or between SMR and CMR for example, but these are identical in this area. Both seem to perform the same under standard workloads; it's only with the default Ceph setup that they are miles apart.

Changing disks is one option, but I first wanted to see if there were some things I could at least try to level the performance across the field.

Ashley

On Thu, Dec 13, 2018 at 5:21 PM Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:


On 13/12/2018 09:53, Ashley Merrick wrote:
I have a Mimic BlueStore EC RBD pool running at 8+2; this is currently spread across 4 nodes.

3 nodes are running Toshiba disks while one node is running Seagate disks (same size, spindle speed, enterprise class, etc.). I have noticed a huge difference in iowait and disk latency between the two sets of disks, which can also be seen from ceph osd perf during read and write operations.
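
For anyone wanting to reproduce, something along these lines shows the gap (a minimal sketch; the OSD IDs and latency values will obviously differ per cluster):

    # refresh the per-OSD commit/apply latency counters every 2 seconds
    watch -n 2 'ceph osd perf'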

Speaking to my host (server provider), they benchmarked the two disks before approving them for use in this type of server, and they actually saw higher performance from the Toshiba disks during their tests.

They did however state that their tests were at higher/larger block sizes. I imagine that with Ceph using EC at 8+2 the block sizes/requests are quite small?
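
(Rough math, assuming the default 4 MB RBD object size: each object is split into k=8 data chunks of 4 MB / 8 = 512 KB each, plus 2 coding chunks, so even a full-object write reaches each disk as a 512 KB request, and smaller client writes land as correspondingly smaller per-disk I/Os.)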

Is there anything I can do? Would changing the RBD object size & stripe unit to bigger than the default help? Will this make the data sent to the disks arrive in larger chunks at once, compared to lots of smaller blocks?
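
Something like this is what I had in mind, as a sketch only; the pool and image names are placeholders, and an EC-backed image still needs a replicated pool to hold its metadata:

    # create an image with 8 MB objects instead of the default 4 MB,
    # with the data chunks going to the EC pool
    rbd create rbd_meta/testimage --size 100G --object-size 8M --data-pool ecpool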

If anyone else has any advice, I'm open to trying it.

P.S. I have already disabled the disk cache on all disks, as it was causing high write latency across all of them.
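
(For reference, this was done per disk along these lines; /dev/sdX is a placeholder, and a RAID controller may need its own tool instead:)

    # disable the volatile on-disk write cache, then confirm the setting
    hdparm -W 0 /dev/sdX
    hdparm -W /dev/sdX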

Thanks


Since you say there is a huge difference between the disk types under your current workload, I would focus on that; the logical thing to do is to replace them. You could run further benchmarks of fsync write speed at lower block sizes, but I think your current observation is conclusive enough.
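
A minimal sketch of such a benchmark, assuming you can spare one disk of each type (it writes directly to the device, so only run it on a disk with no data on it):

    # 4K synchronous writes at queue depth 1, roughly the worst-case pattern
    # an OSD generates; WARNING: destructive to /dev/sdX
    fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based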

Other, less recommended options: use a lower EC profile such as k=4 m=2, or get a controller with a write-back cache. For sequential I/O, increase your read_ahead_kb, use the librbd client cache, and adjust your client OS cache parameters. Also, if you have a controlled application like a backup app where you can specify the block size, increase it to above 1 MB. But again, I would recommend you focus on changing the disks.
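
For example, along these lines (device name and cache size are placeholders to adapt):

    # larger readahead for sequential reads on a kernel RBD device
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

    # librbd cache, set in the [client] section of ceph.conf
    [client]
    rbd cache = true
    rbd cache size = 67108864    # 64 MB instead of the 32 MB default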

/Maged

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
