Hello list,
this bug is filed as: https://tracker.ceph.com/issues/23891
From my point of view this is a bug; probably others are experiencing
this problem as well and can provide additional details.
I would like to map an RBD image using rbd-nbd. Without the
foreground flag (-d) it is not possible to map the device.
The commands listed below were executed on a mon server (the same
thing happens on other servers).
##################################################################################
### Not working scenario: running rbd-nbd as a background process
Shell 1:
# rmmod nbd
rmmod: ERROR: Module nbd is not currently loaded
# rbd-nbd map --nbds_max 16 RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-17efad10-6807-433f-879a-c59b511f895c
-> the command blocks, apparently forever
Strace of the failing command:
# strace -frvT -s128 -o /tmp/fail.strace rbd-nbd map --nbds_max 16 RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-17efad10-6807-433f-879a-c59b511f895c
(see https://tracker.ceph.com/issues/23891)
Shell 2:
# rbd-nbd list-mapped
-> no output
# lsmod |grep nbd
-> no output
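One diagnostic idea (untested on my side): since lsmod shows the nbd
module is not loaded at this point, one could pre-load it manually and
retry the background map. nbds_max is the nbd module's own parameter
for the number of devices; whether pre-loading avoids the hang is just
my assumption:

# modprobe nbd nbds_max=16
# lsmod | grep nbd        # the module should now show up
# rbd-nbd map --nbds_max 16 RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-17efad10-6807-433f-879a-c59b511f895c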
##################################################################################
### Working scenario: running rbd-nbd in the foreground
Shell 1:
# rbd-nbd map -d --nbds_max 16 RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-17efad10-6807-433f-879a-c59b511f895c
2018-04-26 23:22:19.681903 7fde4b442e80 0 ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable), process (unknown), pid 326958
2018-04-26 23:22:19.681919 7fde4b442e80 0 pidfile_write: ignore empty --pid-file
/dev/nbd0
-> blocks, because it is running as a foreground process
Shell 2:
# rbd-nbd list-mapped
pid     pool                                                  image                                      snap  device
326958  RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c  RBD-17efad10-6807-433f-879a-c59b511f895c  -     /dev/nbd0
# blockdev --getsize64 /dev/nbd0
21474836480
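-> i.e. 20 GiB (21474836480 / 1024^3), so the device reports a size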
# blockdev --setra 1024 /dev/nbd0
# blockdev --getra /dev/nbd0
1024
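-> blockdev --setra counts 512-byte sectors, so 1024 corresponds to
512 KiB of readahead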
# rbd-nbd unmap /dev/nbd0
Shell 1:
-> terminates on unmap:
2018-04-26 23:24:55.660478 7fdded7ba700 0 rbd-nbd: disconnect request received
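As a workaround until background mapping works, the foreground process
can be kept alive without occupying a shell. A sketch of what I have
in mind (untested; the log path is arbitrary):

# nohup rbd-nbd map -d --nbds_max 16 RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-17efad10-6807-433f-879a-c59b511f895c > /var/log/rbd-nbd-nbd0.log 2>&1 &

The device path that rbd-nbd prints then ends up in the log file
instead of the terminal.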
###################################################################################
### Our System:
- Luminous/12.2.5
- Ubuntu 16.04
- 5 OSD nodes (24 * 8 TB HDD OSDs, 48 * 1 TB SSD OSDs, Bluestore,
  6 GB cache per OSD)
- Per OSD node: 192 GB RAM, 56 HT CPUs
- 3 mons (64 GB RAM, 200 GB SSD, 4 visible CPUs)
- 2 * 10 GBit SFP+, bonded, xmit_hash_policy layer3+4