reproducible rbd-nbd crashes

Hello cephers,

rbd-nbd crashes in a reproducible way here.

I created the following bug report: https://tracker.ceph.com/issues/40822

Do you also experience this problem?
Do you have suggestions for in-depth debug data collection?
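
Unless someone has a better suggestion, my plan for the next reproduction attempt is to run with client-side debug levels along these lines (a sketch using the standard librbd/librados debug settings; the exact levels and log path are just my choice):

    [client]
    log file = /var/log/ceph/ceph-client.$name.log
    debug ms = 1
    debug rados = 20
    debug rbd = 20
    debug objectcacher = 20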

I invoke the following command on a freshly mapped RBD, and rbd-nbd crashes:

# find . -type f -name "*.sql" -exec ionice -c3 nice -n 20 gzip -v {} \;
gzip: ./deprecated_data/data_archive.done/entry_search_201232.sql.gz already exists; do you wish to overwrite (y or n)? y
./deprecated_data/data_archive.done/entry_search_201232.sql:         84.1% -- replaced with ./deprecated_data/data_archive.done/entry_search_201232.sql.gz
./deprecated_data/data_archive.done/entry_search_201233.sql:
gzip: ./deprecated_data/data_archive.done/entry_search_201233.sql: Input/output error
gzip: ./deprecated_data/data_archive.done/entry_search_201234.sql: Input/output error
gzip: ./deprecated_data/data_archive.done/entry_search_201235.sql: Input/output error
gzip: ./deprecated_data/data_archive.done/entry_search_201236.sql: Input/output error
....

dmesg output:

[579763.020890] block nbd0: Connection timed out
[579763.020926] block nbd0: shutting down sockets
[579763.020943] print_req_error: I/O error, dev nbd0, sector 3221296950
[579763.020946] block nbd0: Receive data failed (result -32)
[579763.020952] print_req_error: I/O error, dev nbd0, sector 4523172248
[579763.021001] XFS (nbd0): metadata I/O error: block 0xc0011736 ("xlog_iodone") error 5 numblks 512
[579763.021031] XFS (nbd0): xfs_do_force_shutdown(0x2) called from line 1261 of file /build/linux-hwe-xJVMkx/linux-hwe-4.15.0/fs/xfs/xfs_log.c.  Return address = 0x00000000918af758
[579763.021046] print_req_error: I/O error, dev nbd0, sector 4523172248
[579763.021161] XFS (nbd0): Log I/O Error Detected.  Shutting down filesystem
[579763.021176] XFS (nbd0): Please umount the filesystem and rectify the problem(s)
[579763.176834] print_req_error: I/O error, dev nbd0, sector 3221296969
[579763.176856] print_req_error: I/O error, dev nbd0, sector 2195113096
[579763.176869] XFS (nbd0): metadata I/O error: block 0xc0011749 ("xlog_iodone") error 5 numblks 512
[579763.176884] XFS (nbd0): xfs_do_force_shutdown(0x2) called from line 1261 of file /build/linux-hwe-xJVMkx/linux-hwe-4.15.0/fs/xfs/xfs_log.c.  Return address = 0x00000000918af758
[579763.252836] print_req_error: I/O error, dev nbd0, sector 2195113352
[579763.252859] print_req_error: I/O error, dev nbd0, sector 2195113608
[579763.252869] print_req_error: I/O error, dev nbd0, sector 2195113864
[579763.356841] print_req_error: I/O error, dev nbd0, sector 2195114120
[579763.356885] print_req_error: I/O error, dev nbd0, sector 2195114376
[579763.358040] XFS (nbd0): writeback error on sector 2195119688
[579763.916813] block nbd0: Connection timed out
[579768.140839] block nbd0: Connection timed out
[579768.140859] print_req_error: 21 callbacks suppressed
[579768.140860] print_req_error: I/O error, dev nbd0, sector 2195112840
[579768.141101] XFS (nbd0): writeback error on sector 2195115592
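
As far as I understand it, the "Connection timed out" messages come from the kernel nbd driver: when rbd-nbd does not answer a request within the nbd request timeout, the kernel shuts down the socket and fails all outstanding I/O with EIO, which in turn forces the XFS shutdown seen above. To rule out the timeout itself, I intend to remap the image with a larger value (a sketch; "rbd_hdd" is a placeholder for the real metadata pool name, and 120 seconds is an arbitrary test value):

# rbd-nbd unmap /dev/nbd0
# rbd-nbd map --timeout 120 rbd_hdd/archive-001-srv_ec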

Corresponding excerpt from /var/log/ceph/ceph-client.archiv.log:

2019-07-18 14:52:55.387815 7fffcf7fe700  1 -- 10.23.27.200:0/3920476044 --> 10.23.27.151:6806/2322641 -- osd_op(unknown.0.0:1853 34.132 34:4cb446f4:::rbd_header.6c73776b8b4567:head [watch unwatch cookie 140736414969824] snapc 0=[] ondisk+write+known_if_redirected e256219) v8 -- 0x7fffc803a340 con 0
2019-07-18 14:52:55.388656 7fffe913b700  1 -- 10.23.27.200:0/3920476044 <== osd.17 10.23.27.151:6806/2322641 90 ==== watch-notify(notify (1) cookie 140736414969824 notify 1100452225614816 ret 0) v3 ==== 68+0+0 (1852866777 0 0) 0x7fffe187b4c0 con 0x7fffc00054d0
2019-07-18 14:52:55.388738 7fffe913b700  1 -- 10.23.27.200:0/3920476044 <== osd.17 10.23.27.151:6806/2322641 91 ==== osd_op_reply(1852 rbd_header.6c73776b8b4567 [notify cookie 140736550101040] v0'0 uv2102967 ondisk = 0) v8 ==== 169+0+8 (3077247585 0 3199212159) 0x7fffe0002ef0 con 0x7fffc00054d0
2019-07-18 14:52:55.388815 7fffcffff700  5 librbd::Watcher: 0x7fffc0001010 notifications_blocked: blocked=1
2019-07-18 14:52:55.388904 7fffcffff700  1 -- 10.23.27.200:0/3920476044 --> 10.23.27.151:6806/2322641 -- osd_op(unknown.0.0:1854 34.132 34:4cb446f4:::rbd_header.6c73776b8b4567:head [notify-ack cookie 0] snapc 0=[] ondisk+read+known_if_redirected e256219) v8 -- 0x7fffc00600a0 con 0
2019-07-18 14:52:55.389594 7fffe913b700  1 -- 10.23.27.200:0/3920476044 <== osd.17 10.23.27.151:6806/2322641 92 ==== osd_op_reply(1853 rbd_header.6c73776b8b4567 [watch unwatch cookie 140736414969824] v256219'2102968 uv2102967 ondisk = 0) v8 ==== 169+0+0 (242862078 0 0) 0x7fffe0002ef0 con 0x7fffc00054d0
2019-07-18 14:52:55.389838 7fffcd7fa700 10 librbd::image::CloseRequest: 0x555555946390 handle_unregister_image_watcher: r=0
2019-07-18 14:52:55.389849 7fffcd7fa700 10 librbd::image::CloseRequest: 0x555555946390 send_flush_readahead
2019-07-18 14:52:55.389848 7fffe913b700  1 -- 10.23.27.200:0/3920476044 <== osd.17 10.23.27.151:6806/2322641 93 ==== watch-notify(notify_complete (2) cookie 140736550101040 notify 1100452225614816 ret 0) v3 ==== 42+0+8 (2900410459 0 0) 0x7fffe09888e0 con 0x7fffc00054d0
2019-07-18 14:52:55.389895 7fffe913b700  1 -- 10.23.27.200:0/3920476044 <== osd.17 10.23.27.151:6806/2322641 94 ==== osd_op_reply(1854 rbd_header.6c73776b8b4567 [notify-ack cookie 0] v0'0 uv2102967 ondisk = 0) v8 ==== 169+0+0 (3363304947 0 0) 0x7fffe0002ef0 con 0x7fffc00054d0
2019-07-18 14:52:55.389944 7fffcffff700 20 librbd::watcher::Notifier: 0x7fffc00010c0 handle_notify: r=0
2019-07-18 14:52:55.389955 7fffcffff700 20 librbd::watcher::Notifier: 0x7fffc00010c0 handle_notify: pending=0
2019-07-18 14:52:55.389974 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 handle_flush_readahead: r=0
2019-07-18 14:52:55.389978 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 send_shut_down_cache
2019-07-18 14:52:55.390119 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 handle_shut_down_cache: r=0
2019-07-18 14:52:55.390125 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 send_flush_op_work_queue
2019-07-18 14:52:55.390130 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 handle_flush_op_work_queue: r=0
2019-07-18 14:52:55.390135 7fffcf7fe700 10 librbd::image::CloseRequest: 0x555555946390 handle_flush_image_watcher: r=0
2019-07-18 14:52:55.390168 7fffcf7fe700 10 librbd::ImageState: 0x555555947170 0x555555947170 handle_close: r=0
2019-07-18 14:52:55.391016 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6806/2322641 conn(0x7fffc00054d0 :-1 s=STATE_OPEN pgs=823187 cs=1 l=1).mark_down
2019-07-18 14:52:55.391054 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6804/3417032 conn(0x7fffc8108a70 :-1 s=STATE_OPEN pgs=517757 cs=1 l=1).mark_down
2019-07-18 14:52:55.391070 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6814/2366227 conn(0x7fffc807dcb0 :-1 s=STATE_OPEN pgs=529918 cs=1 l=1).mark_down
2019-07-18 14:52:55.391098 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6804/2366065 conn(0x7fffc80fb770 :-1 s=STATE_OPEN pgs=912307 cs=1 l=1).mark_down
2019-07-18 14:52:55.391120 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6810/2322644 conn(0x7fffc81e2130 :-1 s=STATE_OPEN pgs=612830 cs=1 l=1).mark_down
2019-07-18 14:52:55.391129 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6818/2366231 conn(0x7fffc81fa980 :-1 s=STATE_OPEN pgs=791458 cs=1 l=1).mark_down
2019-07-18 14:52:55.391145 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6812/2366206 conn(0x555555954480 :-1 s=STATE_OPEN pgs=528620 cs=1 l=1).mark_down
2019-07-18 14:52:55.391153 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6818/2322735 conn(0x7fffc80d42f0 :-1 s=STATE_OPEN pgs=696439 cs=1 l=1).mark_down
2019-07-18 14:52:55.391161 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6802/2322629 conn(0x7fffc805b6a0 :-1 s=STATE_OPEN pgs=623470 cs=1 l=1).mark_down
2019-07-18 14:52:55.391477 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.391566 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6802/2366074 conn(0x7fffc8068670 :-1 s=STATE_OPEN pgs=757332 cs=1 l=1).mark_down
2019-07-18 14:52:55.392068 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392086 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392093 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6814/2322637 conn(0x7fffc8077360 :-1 s=STATE_OPEN pgs=680587 cs=1 l=1).mark_down
2019-07-18 14:52:55.392114 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6816/2322731 conn(0x7fffc8061f00 :-1 s=STATE_OPEN pgs=605415 cs=1 l=1).mark_down
2019-07-18 14:52:55.392129 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6820/51139 conn(0x7fffc811d5e0 :-1 s=STATE_OPEN pgs=343039 cs=1 l=1).mark_down
2019-07-18 14:52:55.392138 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6800/3416932 conn(0x7fffc820fa10 :-1 s=STATE_OPEN pgs=781718 cs=1 l=1).mark_down
2019-07-18 14:52:55.392147 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6806/3417034 conn(0x7fffc80533c0 :-1 s=STATE_OPEN pgs=536148 cs=1 l=1).mark_down
2019-07-18 14:52:55.392165 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6808/3417140 conn(0x7fffc8134810 :-1 s=STATE_OPEN pgs=761314 cs=1 l=1).mark_down
2019-07-18 14:52:55.392181 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6816/3416957 conn(0x7fffc8146610 :-1 s=STATE_OPEN pgs=837393 cs=1 l=1).mark_down
2019-07-18 14:52:55.392214 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392228 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392256 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392286 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392436 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392448 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392452 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.392194 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6810/2366205 conn(0x7fffc810f360 :-1 s=STATE_OPEN pgs=587338 cs=1 l=1).mark_down
2019-07-18 14:52:55.392882 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.150:6806/2366079 conn(0x7fffc8005320 :-1 s=STATE_OPEN pgs=588682 cs=1 l=1).mark_down
2019-07-18 14:52:55.393119 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.152:6822/3417160 conn(0x7fffc813b3d0 :-1 s=STATE_OPEN pgs=667392 cs=1 l=1).mark_down
2019-07-18 14:52:55.393136 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.151:6812/2322646 conn(0x7fffc8148070 :-1 s=STATE_OPEN pgs=671116 cs=1 l=1).mark_down
2019-07-18 14:52:55.393150 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6810/3215426 conn(0x7fffc800f2c0 :-1 s=STATE_OPEN pgs=432372 cs=1 l=1).mark_down
2019-07-18 14:52:55.393165 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6822/3215746 conn(0x7fffc8166d60 :-1 s=STATE_OPEN pgs=363273 cs=1 l=1).mark_down
2019-07-18 14:52:55.393179 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6814/3215437 conn(0x7fffc8045cd0 :-1 s=STATE_OPEN pgs=360091 cs=1 l=1).mark_down
2019-07-18 14:52:55.393187 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6806/3215303 conn(0x7fffc80f4e90 :-1 s=STATE_OPEN pgs=301563 cs=1 l=1).mark_down
2019-07-18 14:52:55.393195 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6802/3215397 conn(0x7fffc806eec0 :-1 s=STATE_OPEN pgs=288488 cs=1 l=1).mark_down
2019-07-18 14:52:55.393644 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.393717 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.393732 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.393736 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.393741 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.393208 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6804/3215221 conn(0x7fffc81259b0 :-1 s=STATE_OPEN pgs=321005 cs=1 l=1).mark_down
2019-07-18 14:52:55.393758 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6808/3215396 conn(0x7fffc812d9a0 :-1 s=STATE_OPEN pgs=361415 cs=1 l=1).mark_down
2019-07-18 14:52:55.393767 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.161:6818/3215660 conn(0x7fffc817ca00 :-1 s=STATE_OPEN pgs=357502 cs=1 l=1).mark_down
2019-07-18 14:52:55.393933 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.153:6801/2546 conn(0x7fffd000ca40 :-1 s=STATE_OPEN pgs=2798975 cs=1 l=1).mark_down
2019-07-18 14:52:55.393965 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 >> 10.23.27.154:6789/0 conn(0x7fffa801a900 :-1 s=STATE_OPEN pgs=7851695 cs=1 l=1).mark_down
2019-07-18 14:52:55.394004 7fffe993c700  1 -- 10.23.27.200:0/3920476044 reap_dead start
2019-07-18 14:52:55.396891 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 shutdown_connections
2019-07-18 14:52:55.397293 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 shutdown_connections
2019-07-18 14:52:55.397384 7ffff7fe3000  1 -- 10.23.27.200:0/3920476044 wait complete.


The problem has now appeared on every one of three reproduction attempts.

A simple sequential read/write of a file larger than the rbd cache completes without a problem:

# dd if=/dev/zero of=la bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 27.9281 s, 38.4 MB/s

# dd if=la of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 93.6031 s, 11.5 MB/s
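
To get closer to the gzip access pattern than plain sequential dd, I may also try a mixed random read/write load with direct I/O, taking the page cache out of the picture (a sketch; the fio parameters are only a guess at a comparable workload):

# fio --name=mixed --filename=./fio-testfile --size=1G --bs=64k \
      --rw=randrw --direct=1 --ioengine=libaio --iodepth=4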


Environment:

- Kernel: 4.15.0-45-generic
- Ceph Client: 12.2.11
- XFS Mount options: rw,relatime,attr2,discard,largeio,inode64,allocsize=4096k,logbufs=8,logbsize=256k,noquota,_netdev
- Volume:
   rbd image 'archive-001-srv_ec':
    size 3TiB in 786432 objects
    order 22 (4MiB objects)
    data_pool: rbd_hdd_ec
    block_name_prefix: rbd_data.34.6c73776b8b4567
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
    flags:
    create_timestamp: Thu Jul 11 17:59:00 2019
- Client configuration:
    [client]
    rbd cache = true
    rbd cache size = 536870912
    rbd cache max dirty = 268435456
    rbd cache target dirty = 134217728
    rbd cache max dirty age = 15
    rbd readahead max bytes = 4194304
- Pool created with:
   ceph osd erasure-code-profile set archive_profile \
     k=2 \
     m=2 \
     crush-device-class=hdd \
     crush-failure-domain=host
 
    ceph osd pool create rbd_hdd_ec 64 64 erasure archive_profile
    ceph osd pool set rbd_hdd_ec  allow_ec_overwrites true
    ceph osd pool application enable rbd_hdd_ec rbd
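
Regarding the pool setup above: since an EC data pool with overwrites is the less common configuration, I also double-check the pool flags and image features before each test (standard commands; "rbd_hdd" is again a placeholder for the metadata pool):

# ceph osd pool ls detail | grep rbd_hdd_ec
# rbd info rbd_hdd/archive-001-srv_ec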

What can I do?
I have never experienced anything like this with krbd, or with rbd-nbd 12.2.5 on my Xen hypervisor (https://github.com/vico-research-and-consulting/RBDSR).

Regards
Marc





