Ceph RBD and pool quotas

Hi guys,

quick question regarding ceph -> rbd -> per-pool quotas. I'd like to set a max_bytes quota on a pool so that I can limit the amount of data a ceph client can store, like so:

ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 100 * 5 ))   # 500 MiB
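
To confirm the quota took effect, I check it with the standard get-quota command:

ceph osd pool get-quota pool1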

This all works as expected: when data gets written to the mapped rbd device (e.g. /dev/rbd0 --> /mnt/rbd0) and the pool reaches its quota (full), the cluster health goes to WARN, the pool is reported as full, and the write stops. However, once the write stops, the client process hangs: it can't be killed and sits waiting for the device to respond, e.g.

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      9226  0.0  0.0   5836   744 ?        D    12:50   0:00      \_ dd if=/dev/zero of=/mnt/rbd3/file count=500 bs=1M
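
For reference, the rbd device is set up roughly like this (image name, size, and the /dev/rbd0 mapping are just examples):

rbd create pool1/rbd3 --size 1024
rbd map pool1/rbd3
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/rbd3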

Increasing the pool's quota recovers the cluster health, but the write stays stuck forever on the client (the machine has to be rebooted to get rid of the process).
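
What I do to recover health is simply raise the quota, or remove it entirely by setting max_bytes to 0, something like:

ceph osd pool set-quota pool1 max_bytes $(( 1024 * 1024 * 1024 ))
ceph osd pool set-quota pool1 max_bytes 0

The WARN clears either way, but the dd above stays in D state regardless.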

Ideas?

Cheers,
Thomas

