Tune the OSDs or add more OSDs (if the problem is really in the disks).
Can you post iostat output for the disks that are loaded (iostat -mx 1 /dev/sdX, a few lines…)? What drives are they, and what controller?
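To expand on that a bit, this is roughly what I mean (device name /dev/sdb is just an example; substitute the OSD disk you suspect is saturated — iostat is part of the sysstat package):

```shell
# Extended per-device statistics in MB/s, refreshed every second, for one disk.
# Replace /dev/sdb with the OSD data disk that shows high iowait.
iostat -mx 1 /dev/sdb

# Columns worth watching in the output:
#   r/s, w/s  - read/write IOPS (a 7.2k SAS spinner tops out around 100-150 total)
#   await     - average I/O wait in ms; consistently high values mean the disk itself
#               is the bottleneck rather than the network or CPU
#   %util     - fraction of time the device was busy; sustained ~100% means saturated
```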
Jan
The idea is to cache RBD at the host level. It could also be possible to cache at the OSD level. We have high iowait and we need to lower it a bit, since we are getting the max from our SAS disks, 100-110 IOPS per disk (3TB OSDs). Any advice? Flashcache?

On Thursday, July 2, 2015, Jan Schermer <jan@xxxxxxxxxxx> wrote:

I think I posted my experience here ~1 month ago.
My advice for EnhanceIO: don’t use it.
But you didn’t exactly say what you want to cache - do you want to cache the OSD filestore disks? RBD devices on hosts? RBD devices inside guests?
Jan
> On 02 Jul 2015, at 11:29, Emmanuel Florac <eflorac@xxxxxxxxxxxxxx> wrote:
>
> Le Wed, 1 Jul 2015 17:13:03 -0300
> German Anders <ganders@xxxxxxxxxxxx> écrivait:
>
>> Hi cephers,
>>
>> Is anyone out there who has implemented EnhanceIO in a production
>> environment? Any recommendations? Any perf output to share showing the
>> difference between using it and not?
>
> I've tried EnhanceIO back when it wasn't too stale, but never put it in
> production. I've set up bcache on trial, it has its problems (load is
> stuck at 1.0 because of the bcache_writeback kernel thread, and I
> suspect a crash was due to it) but works pretty well overall.
>
> --
> ------------------------------------------------------------------------
> Emmanuel Florac | Direction technique
> | Intellique
> | <eflorac@xxxxxxxxxxxxxx>
> | +33 1 78 94 84 02
> ------------------------------------------------------------------------
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
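For anyone wanting to reproduce the bcache trial Emmanuel describes, the setup looks roughly like this (a sketch only — device names are placeholders, and make-bcache comes from the bcache-tools package):

```shell
# Format the backing (slow) device and the cache (SSD) device.
# /dev/sdb and /dev/nvme0n1 are example names - use your own devices.
make-bcache -B /dev/sdb          # -B: backing device (the spinning disk)
make-bcache -C /dev/nvme0n1      # -C: cache device (the SSD)

# Attach the cache set to the backing device. The cache-set UUID can be
# found under /sys/fs/bcache/ after registering the cache device.
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Switch from the default writethrough mode to writeback caching
# (writeback is what keeps the bcache_writeback kernel thread busy,
# which is the load-average quirk mentioned above).
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

After this, /dev/bcache0 is used in place of the raw backing device (e.g. as the OSD filestore disk).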
--
German Anders
Storage System Engineer Leader
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail ganders@xxxxxxxxxxxx