Hiya. I'm playing with a small Ceph setup built from the Quick Start documentation
and am hitting a crash when running rbd bench-write. The initial trace is provided
below; let me know if you need any other information. FWIW, rados bench
works just fine.
Any idea what is causing this? Is it a parsing issue in the rbd command?
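In case it helps narrow things down, I can rerun the bench with different
parameters, e.g. a single IO thread or a smaller total write size. Option names
below are taken from this build's rbd help output, so treat the exact spelling
as a guess on my part:

rbd bench-write foo --io-threads 1               # does it still crash without concurrency?
rbd bench-write foo --io-size 4096 --io-total 16777216   # much smaller run, default-sized IOs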
Thanks
--Glenn
root@ceph-client:~# uname -a
Linux ceph-client 4.1.6-rh1-xenU #1 SMP Fri Sep 4 02:50:30 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux
root@ceph-client:~# rbd --version
ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
root@ceph-client:~# rbd showmapped
id pool image snap device
0 rbd foo - /dev/rbd0
root@ceph-client:/mnt/ceph-block-device# rbd info foo
rbd image 'foo':
size 4096 MB in 1024 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.1073.238e1f29
format: 1
root@ceph-client:/mnt/ceph-block-device# man rbd
root@ceph-client:/mnt/ceph-block-device# rbd bench-write foo
bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq
SEC OPS OPS/SEC BYTES/SEC
*** Error in `rbd': free(): invalid pointer: 0x56a727a8 ***
*** Caught signal (Aborted) **
in thread f26feb40
ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
1: (()+0x24a20) [0x5664aa20]
2: [0xf77abe50]
3: [0xf77abe80]
4: (gsignal()+0x47) [0xf6a60607]
5: (abort()+0x143) [0xf6a63a33]
6: (()+0x68e53) [0xf6a9ae53]
7: (()+0x7333a) [0xf6aa533a]
8: (()+0x73fad) [0xf6aa5fad]
9: (operator delete(void*)+0x1f) [0xf6c4682f]
10: (librbd::C_AioWrite::~C_AioWrite()+0x26) [0xf76f90b6]
11: (Context::complete(int)+0x1f) [0xf76ca52f]
12: (librbd::rados_req_cb(void*, void*)+0x48) [0xf76d7f58]
13: (librados::C_AioSafe::finish(int)+0x2b) [0xf6eb719b]
14: (Context::complete(int)+0x17) [0xf6e8f6f7]
15: (Finisher::finisher_thread_entry()+0x1a8) [0xf6f5b3a8]
16: (Finisher::FinisherThread::entry()+0x1e) [0xf7746f7e]
17: (Thread::entry_wrapper()+0x4f) [0xf6f82ebf]
18: (Thread::_entry_func(void*)+0x1b) [0xf6f82efb]
19: (()+0x6f70) [0xf6d1ff70]
20: (clone()+0x5e) [0xf6b1dbee]
2015-09-04 04:30:46.755568 f26feb40 -1 *** Caught signal (Aborted) **
in thread f26feb40
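If a symbol-resolved backtrace would be more useful than the one above, I can
rerun the same command under gdb and grab backtraces for all threads once it
aborts; standard gdb usage, nothing Ceph-specific:

gdb --args rbd bench-write foo
(gdb) run
(gdb) thread apply all bt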