rbd snap ls: how much locking is involved?

Hi,

some of our applications (e.g., backy) call 'rbd snap ls' quite often. On a
heavily loaded cluster I regularly see blocked requests that correspond to
snap_list operations. Log file example:

2016-01-20 11:38:14.389325 osd.13 172.22.4.44:6803/13012 40529 : cluster [WRN]
1 slow requests, 1 included below; oldest blocked for > 15.098679 secs
2016-01-20 11:38:14.389336 osd.13 172.22.4.44:6803/13012 40530 : cluster [WRN]
slow request 15.098679 seconds old, received at 2016-01-20 11:37:59.276665:
osd_op(client.256532559.0:2041
rbd_data.c390a692ae8944a.000000000000057b@snapdir [list-snaps] 266.95976dde
ack+read+known_if_redirected e807541) currently no flag points reached
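
For reference, a rough sketch of how such slow requests can be matched to
list-snaps operations via the OSD admin socket. This is only a sketch: it
assumes it runs on the OSD host with the 'ceph' CLI in PATH, and that the
JSON fields "ops", "description" and "duration" appear in the
dump_historic_ops output of your release; the OSD id 13 is just taken from
the log above.

#!/usr/bin/env python
# Scan an OSD's recently completed ops via the admin socket and print
# those whose description mentions a list-snaps operation.
import json
import subprocess
import sys

def historic_ops(osd_id):
    """Parsed JSON from 'ceph daemon osd.<id> dump_historic_ops'."""
    out = subprocess.check_output(
        ['ceph', 'daemon', 'osd.%d' % osd_id, 'dump_historic_ops'])
    return json.loads(out.decode('utf-8'))

def list_snaps_ops(osd_id):
    """Yield (duration, description) for ops containing list-snaps."""
    for op in historic_ops(osd_id).get('ops', []):
        desc = op.get('description', '')
        if 'list-snaps' in desc:
            yield op.get('duration'), desc

if __name__ == '__main__':
    osd = int(sys.argv[1]) if len(sys.argv) > 1 else 13
    for duration, desc in list_snaps_ops(osd):
        print('%s s  %s' % (duration, desc))

Note that dump_historic_ops only keeps the most recent ops, so this has to
be run shortly after a slow request warning shows up.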

Does anyone know whether 'rbd snap ls' takes locks? At which level are these
locks taken (volume, pool, global)? Would it be best to reduce the usage of
'rbd snap ls' on a heavily loaded cluster?
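
In case reducing the per-call overhead is part of the answer: the same
listing can be done through the librbd Python bindings with one long-lived
cluster connection, so connection setup and authentication are paid only
once instead of on every 'rbd snap ls' invocation. A minimal sketch, with
pool and image names as placeholders:

#!/usr/bin/env python
# List snapshots of one RBD image via the librbd Python bindings
# instead of shelling out to 'rbd snap ls'.
import rados
import rbd

POOL = 'rbd'          # placeholder pool name
IMAGE = 'test-image'  # placeholder image name

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        image = rbd.Image(ioctx, IMAGE, read_only=True)
        try:
            # list_snaps() yields dicts with 'id', 'name' and 'size'.
            for snap in image.list_snaps():
                print('%d\t%s\t%d' % (snap['id'], snap['name'], snap['size']))
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

Whether this changes anything on the OSD side is exactly what I am unsure
about; it only avoids repeated client startup cost.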

TIA

Christian

-- 
Dipl-Inf. Christian Kauhaus <>< · kc@xxxxxxxxxxxxxxx · +49 345 219401-0
Flying Circus Internet Operations GmbH · http://flyingcircus.io
Forsterstraße 29 · 06112 Halle (Saale) · Deutschland
HR Stendal 21169 · Geschäftsführer: Christian Theune, Christian Zagrodnick