Re: 'rbd map' asynchronous behavior

On 05/15/2012 04:49 AM, Andrey Korolyov wrote:
Hi,

There is a strange bug when I try to map a large number of block
devices inside a pool, like the following:

for vol in $(rbd ls); do rbd map $vol; [some-microsleep]; [some
operation or nothing, I have stubbed a guestfs mount here];
[some-microsleep]; rbd unmap /dev/rbd/rbd/$vol; [some-microsleep]; done

udev or rbd seems to fall behind somehow and the mapping fails. There
is no real-world harm, and the case is easy to avoid, but on a busy
cluster the required delay increases, and I was able to reproduce the
same thing on a two-OSD config in a recovering state. With a 0.1
second sleep on a healthy cluster everything works; with 0.05 seconds
it may fail with the following trace (at least for me, since I am
testing on relatively old and crappy hardware, so others may hit it
at smaller intervals):

udev is asynchronous by nature. The rbd tool itself doesn't wait for
/dev to be populated because you may not be using the default udev rule
(or not using udev at all). Our test framework polls for the device to
make sure 'rbd map' and udev have completed:

https://github.com/ceph/teuthology/blob/d6b9bd8b63c8c6c1181ece1f6941829d8d1d5152/teuthology/task/rbd.py#L190
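The polling approach boils down to waiting for the device node to appear after 'rbd map' returns. A minimal sketch of that idea in Python (not the actual teuthology code; the function name, timeout, and poll interval here are illustrative assumptions):

```python
import os
import time

def wait_for_device(path, timeout=10.0, interval=0.1):
    """Poll until `path` exists (i.e. udev has created the node/symlink)
    or the timeout expires. Returns True if the path appeared, False on
    timeout. Timeout and interval values are illustrative defaults."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    # One final check in case the node appeared right at the deadline.
    return os.path.exists(path)

# Hypothetical usage after mapping (pool/volume names are examples):
#   subprocess.check_call(["rbd", "map", "rbd/myvol"])
#   if not wait_for_device("/dev/rbd/rbd/myvol"):
#       raise RuntimeError("udev did not create the device in time")
```

Polling with a deadline like this is more robust than a fixed sleep, because the time udev takes varies with cluster load, as you observed.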

Josh

