On 13/12/2018 09:53, Ashley Merrick wrote:
Since you say there is a huge difference between the disk types under your current workload, I would focus on that; the logical thing to do is to replace them. You could run further benchmarks measuring fsync write speed at lower block sizes, but I think your current observation is conclusive enough.
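If you do want to benchmark further, a fio run along these lines would measure fsync write speed at a small block size (the test file path, size and runtime here are only illustrative placeholders):

    fio --name=fsync-4k --filename=/path/to/testfile --size=1G \
        --rw=write --bs=4k --fsync=1 --ioengine=sync \
        --runtime=60 --time_based

With --fsync=1, fio issues an fsync after every write, which is what exposes slow-sync drives.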
Other, less recommended options: use a lighter EC profile such as k=4, m=2, or get a controller with a write-back cache. For sequential I/O, increase your read_ahead_kb, use the librbd client cache, and adjust your client OS cache parameters (rough examples below). Also, if you have a controlled application such as a backup app where you can specify the block size, increase it to above 1 MB.
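For reference, those tunings would look roughly like this; the profile name, device name and values are only illustrative, so adjust them to your setup:

    # create a k=4, m=2 erasure-code profile (profile name is arbitrary)
    ceph osd erasure-code-profile set ec42 k=4 m=2

    # increase read-ahead on a mapped RBD device (value in KB)
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb

    # enable the librbd client cache in ceph.conf on the client
    [client]
    rbd cache = true
    rbd cache size = 67108864   # 64 MB, illustrative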
But again, I would recommend you focus on changing the disks.

/Maged