Hi Ilya,
Thanks for the speedy reply - unfortunately increasing the quota doesn't
help; the process stays stuck forever. Or do you mean that with kernel
4.7 this would work after upping the quota?
Cheers,
Thomas
On 25/08/16 09:16, Ilya Dryomov wrote:
On Wed, Aug 24, 2016 at 11:13 PM, Thomas <thomas@xxxxxxxxxxxxx> wrote:
Hi guys,
quick question regarding ceph -> rbd -> quotas per pool. I'd like to set
a max_bytes quota on a pool so that I can limit the amount of data a ceph
client can use, like so:
ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 100 * 5))
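That works out to 500 MiB. For context, the rest of the setup is roughly the
following sketch (the image name, size and filesystem type are only examples,
not necessarily the exact ones used here):

ceph osd pool get-quota pool1       # confirm the quota took effect
rbd create pool1/img0 --size 1024   # 1 GiB image, deliberately larger than the quota
rbd map pool1/img0                  # shows up as e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/rbd0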
This is all working: if data gets written to the mapped rbd device
(e.g. /dev/rbd0 mounted at /mnt/rbd0) and the pool reaches its quota, the
cluster health goes to WARN with a notification that the pool is full, and
the write stops. However, once the write operation on the client stops, it
also hangs from then on, i.e. the process can't be killed and just waits
for the device to respond, e.g.:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 9226 0.0 0.0 5836 744 ? D 12:50 0:00 \_ dd
if=/dev/zero of=/mnt/rbd3/file count=500 bs=1M
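Just to illustrate what "stuck" means here: the process sits in uninterruptible
sleep (state D), so even kill -9 has no effect. Two ways to see that (the PID is
taken from the ps output above):

awk '{print $3}' /proc/9226/stat        # prints "D" (uninterruptible sleep)
dmesg | grep "blocked for more than"    # kernel hung-task warnings eventually show up here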
Increasing the quota of the pool lets the cluster health recover, but the
write stays stuck forever on the client (the machine needs to be rebooted to
get rid of the process).
Ideas?
Hi Thomas,
You need kernel 4.7 for that to work - it will properly re-submit the
write after the quota is increased.
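For example (the values are illustrative, and this assumes the pool name from
your example), on a >= 4.7 kernel either of the following should let the
blocked write continue:

ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 1024 ))  # raise the quota to 1 GiB
ceph osd pool set-quota pool1 max_bytes 0                          # or remove the quota entirely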
Thanks,
Ilya