Re: [CEPH-DEVEL] [ceph-users] occasional failure to unmap rbd

On Mon, Nov 23, 2015 at 11:03 PM, Markus Kienast <mark@xxxxxxxxxxxxx> wrote:
> I am having the same issue here.

Which kernel are you running?  Could you attach your dmesg?
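Something like the following would capture what I'm after (the file name
is just an example):

    uname -a              # kernel version and architecture
    dmesg > dmesg.txt     # kernel ring buffer - attach dmesg.txt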

>
> root@paris3:/etc/neutron# rbd unmap /dev/rbd0
> rbd: failed to remove rbd device: (16) Device or resource busy
> rbd: remove failed: (16) Device or resource busy
>
> root@paris3:/etc/neutron# rbd info -p volumes
> volume-f3ab6892-f35e-4b98-8832-efbaaa2f4ca2
> 2015-11-23 22:42:06.842697 7f2d57e49700  0 -- :/2760503703 >>
> 10.90.90.4:6789/0 pipe(0x1773250 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x17734e0).fault
> rbd image 'volume-f3ab6892-f35e-4b98-8832-efbaaa2f4ca2':
> size 500 GB in 128000 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.1b6d9e2aaa998b
> format: 2
> features: layering
> root@paris3:/etc/neutron# rados -p volumes listwatchers
> rbd_header.1b6d9e2aaa998b
> 2015-11-23 22:42:58.546723 7fec94fec700  0 -- :/2519796249 >>
> 10.90.90.4:6789/0 pipe(0x9cf260 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x9cf4f0).fault

Did you root cause these faults?
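Those look like failed attempts to talk to the mon at 10.90.90.4:6789, so
the first thing I'd check (just a guess, not a diagnosis) is whether that
mon was down or unreachable at the time:

    ceph -s           # overall cluster status, including mon quorum
    ceph mon stat     # which mons are currently in quorum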

> watcher=10.90.90.3:0/3293327848 client.8471177 cookie=1
>
> root@paris3:/etc/neutron# ps ax | grep rbd
>  7814 ?        S      0:00 [jbd2/rbd0-8]

Was there an ext filesystem involved?  How was it umounted - do you
have a "umount <mountpoint>" process stuck in D state?
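A quick way to check is something like the following (the awk just filters
on the STAT column; the sysrq trigger needs root and sysrq must be enabled):

    ps axo pid,stat,cmd | awk '$2 ~ /^D/'    # tasks in uninterruptible sleep
    echo w > /proc/sysrq-trigger             # dump blocked tasks + stacks to the kernel log

If a umount shows up there, its stack trace from the sysrq-w dump in dmesg
would be very useful.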

> 11003 ?        S      0:00 [jbd2/rbd1-8]
> 14042 ?        S      0:00 [jbd2/rbd2p1-8]
> 24228 ?        S      0:00 [jbd2/rbd3-8]
>
> root@paris3:/etc/neutron# ceph --version
> ceph version 0.80.11 (8424145d49264624a3b0a204aedb127835161070)
>
> root@paris3:/etc/neutron# ls /sys/block/rbd0/holders/
> returns nothing
>
> root@paris3:/etc/neutron# fuser -amv /dev/rbd0
>                      USER        PID ACCESS COMMAND
> /dev/rbd0:

What's the output of "cat /sys/bus/rbd/devices/0/client_id"?

What's the output of "sudo cat /sys/kernel/debug/ceph/*/osdc"?

Thanks,

                Ilya


