Re: mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

On Wed, Dec 21, 2016 at 9:42 PM, Stéphane Klein
<contact@xxxxxxxxxxxxxxxxxxx> wrote:
>
>
> 2016-12-21 19:51 GMT+01:00 Ilya Dryomov <idryomov@xxxxxxxxx>:
>>
>> On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein
>> <contact@xxxxxxxxxxxxxxxxxxx> wrote:
>> >>
>> > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov <idryomov@xxxxxxxxx>:
>> >>
>> >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
>> >> <contact@xxxxxxxxxxxxxxxxxxx> wrote:
>> >> > I have configured:
>> >> >
>> >> > ```
>> >> > ceph osd crush tunables firefly
>> >> > ```
>> >>
>> >> If it gets to rm, then it's probably not tunables.  Are you running
>> >> these commands by hand?
>> >
>> >
>> > Yes, I have executed this command on my mon-1 host
>> >
>> >>
>> >>
>> >> Anything in dmesg?
>> >
>> >
>> >
>> > This:
>> >
>> > ```
>> > [  614.278589] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
>> > [  910.797793] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
>> > [ 1126.251705] SELinux: initialized (dev tmpfs, type tmpfs), uses transition SIDs
>> > [ 1214.030659] Key type dns_resolver registered
>> > [ 1214.042308] Key type ceph registered
>> > [ 1214.043852] libceph: loaded (mon/osd proto 15/24)
>> > [ 1214.045944] rbd: loaded (major 252)
>> > [ 1214.053449] libceph: client4200 fsid 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
>> > [ 1214.056406] libceph: mon0 172.28.128.2:6789 session established
>> > [ 1214.066596]  rbd0: unknown partition table
>> > [ 1214.066875] rbd: rbd0: added with size 0x19000000
>> > [ 1219.120342] EXT4-fs (rbd0): mounted filesystem with ordered data mode. Opts: (null)
>> > [ 1219.120754] SELinux: initialized (dev rbd0, type ext4), uses xattr
>> > ```
>> >
>> > If I reboot my client host and remount this disk, then I can delete the
>> > folder with "rm -rf" successfully.
>>
>> I'm not following - you are running 7 VMs: 3 mon VMs, 3 osd VMs and
>> a "ceph-client" VM.  If ceph-client is Debian the test case works
>
>
> Yes
>
>>
>> if it's something RHEL-based it doesn't, correct?
>
>
> Yes
>
>>
>>
>> Are they all on the same bare metal host?  Which of these are you
>> rebooting?
>
>
> It is this host in the Vagrantfile:
> https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/Vagrantfile#L61
>
>
>>
>>
>> Try dumping /sys/kernel/debug/ceph/<fsid.clientid>/osdc on the
>> ceph-client VM when rm hangs.
>
>
>
> The osdc file is empty:
>
> # cat /sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/osdc

So no in-flight requests - nothing for rm to block on.

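When the hang is reproducible, it may help to capture the full client-side
state in one go - a sketch, assuming debugfs is mounted at /sys/kernel/debug
(the default on CentOS 7) and that sysrq is enabled:

```
# dump every kernel-client debug file for all open ceph/rbd sessions
for f in /sys/kernel/debug/ceph/*/*; do echo "== $f"; cat "$f"; done

# log the stacks of all blocked (uninterruptible) tasks, then read them
# back from the kernel log to see where the hung rm is stuck
echo w > /proc/sysrq-trigger
dmesg | tail -n 100
```
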
> # cat /sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/client_options
> name=admin,secret=<hidden>
> # cat /sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/monc
> have osdmap 19
> # cat /sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/monmap
> epoch 1
>     mon0    172.28.128.2:6789
>     mon1    172.28.128.3:6789
>     mon2    172.28.128.4:6789
> # cat /sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/osdmap
> epoch 19
> flags
> pool 0 pg_num 64 (63) read_tier -1 write_tier -1
> osd0    172.28.128.6:6800    100%    (exists, up)    100%
> osd1    172.28.128.5:6800    100%    (exists, up)    100%
> osd2    172.28.128.7:6800    100%    (exists, up)    100%
>
> Version information:
>
> # cat /etc/centos-release
> CentOS Linux release 7.2.1511 (Core)
> # cat /etc/centos-release-upstream
> Derived from Red Hat Enterprise Linux 7.2 (Source)
> # uname --all
> Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> # ceph --version
> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)

Not sure what's going on here.  Using the firefly version of the rbd CLI
tool isn't recommended, of course, but it doesn't seem to be _the_ problem.
Can you try some other distro with an equally old ceph - Ubuntu Trusty,
perhaps?
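
For reference, the test case from the subject line boils down to roughly the
following on the client VM (the image name, pool and mount point below are
assumptions reconstructed from the subject, and the Python-2.7.13 tarball is
assumed to be on the client already):

```
# map the image and mount it (names are placeholders)
rbd map rbd/image2
mkfs.ext4 /dev/rbd0              # first use only
mkdir -p /mnt/image2
mount /dev/rbd0 /mnt/image2

# unpack a large source tree, then try to remove it
tar xzf Python-2.7.13.tgz -C /mnt/image2
rm -rf /mnt/image2/Python-2.7.13
```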

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



