You could be suffering from a known but unfixed issue [1] where spindle
contention from scrub and deep-scrub causes periodic stalls in RBD. You
can try disabling scrub and deep-scrub with:
# ceph osd set noscrub
# ceph osd set nodeep-scrub
If your problem stops, Issue #6278 is likely the cause. To re-enable
scrub and deep-scrub:
# ceph osd unset noscrub
# ceph osd unset nodeep-scrub
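If disabling scrub confirms the diagnosis, an alternative to leaving it
off permanently is to throttle how aggressively it runs. A rough sketch
of ceph.conf settings along those lines (option names and defaults are
from my memory of the Dumpling-era docs, so verify them against your
release before relying on this):

```
[osd]
    ; allow at most one scrub per OSD at a time (the default)
    osd max scrubs = 1
    ; skip scheduled scrubs while the host load average is high
    osd scrub load threshold = 0.5
    ; deep-scrub each PG at most once per interval (seconds);
    ; a week here keeps the two spindles from being hit back-to-back
    osd deep scrub interval = 604800
```

These only soften the impact; on two spindles a deep-scrub will still
compete with client I/O while it runs.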
Because you appear to have only two OSDs, you may also be saturating
your disks even without scrub or deep-scrub running.
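To check whether the disks themselves are the bottleneck, a quick
sketch using iostat from the sysstat package (the exact column layout
varies across sysstat versions, so the "%util is the last column"
assumption below may need adjusting):

```shell
# Watch extended device stats every 5 seconds; sustained %util near
# 100% on the OSD data disks means the spindles are saturated even
# with scrubbing disabled.
iostat -x 5

# Or flag busy devices from a single report; header lines fall out
# naturally because a non-numeric %util field coerces to 0 in awk.
iostat -x 1 1 | awk '$NF+0 > 90 { print $1, "is", $NF "% busy" }'
```

If %util stays high while scrub is off, adding OSDs (or moving
journals to separate disks) will help more than scrub tuning.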
[1] http://tracker.ceph.com/issues/6278
Cheers,
Mike Dawson
On 9/16/2013 12:30 PM, Timofey wrote:
I use ceph for an HA cluster.
Sometimes ceph rbd pauses (stops I/O operations). Sometimes this happens when one of the OSDs responds slowly to requests. Sometimes it is my own doing (xfs_freeze -f on one of the OSD drives).
I have 2 storage servers with one OSD on each. These pauses can last a few minutes.
1. Is there any setting to quickly fail over the primary OSD when the current one is behaving badly (slow or unresponsive)?
2. Can I use a ceph rbd device in a software RAID array together with a local drive, so the local drive is used instead of ceph if the ceph cluster fails?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com