I never had these issues with Luminous, not once; since Nautilus this has been a constant headache.
My issue is that I have OSDs over 85% full whilst others sit at 63%, and that every time I rebalance or add new disks, Ceph moves PGs onto near-full OSDs and almost causes pool failures.
My utilization STDDEV is 21.31 ... it's a joke.
It's simply not acceptable to deal with nearfull OSDs whilst others are half empty.
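For reference, this is roughly how I pull the spread out of `ceph osd df`. It's a minimal sketch, assuming the JSON layout of `ceph osd df --format json` on recent releases (a "nodes" list with a per-OSD "utilization" percentage); field names may differ on other versions.

```python
#!/usr/bin/env python3
# Sketch: summarise per-OSD utilization spread from `ceph osd df`.
# Assumes `ceph osd df --format json` returns a "nodes" list where each
# entry carries an "id" and a "utilization" percentage.
import json
import statistics
import subprocess

def osd_utilizations():
    out = subprocess.run(
        ["ceph", "osd", "df", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    nodes = json.loads(out).get("nodes", [])
    # Skip OSDs reporting 0% (out/down), keep the rest keyed by OSD id.
    return {n["id"]: n["utilization"] for n in nodes if n.get("utilization", 0) > 0}

if __name__ == "__main__":
    util = osd_utilizations()
    vals = list(util.values())
    print(f"OSDs: {len(vals)}")
    print(f"min/max utilization: {min(vals):.1f}% / {max(vals):.1f}%")
    print(f"stddev: {statistics.pstdev(vals):.2f}")
    # The ten fullest OSDs, i.e. the ones flirting with the nearfull ratio.
    for osd_id, u in sorted(util.items(), key=lambda kv: -kv[1])[:10]:
        print(f"osd.{osd_id}: {u:.1f}%")
```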