slow requests after rocksdb "delete WAL files" or table_file_deletion events

hi, cephers
recently I have been testing ceph 12.2.12 with BlueStore using COSBench.
Both the SATA OSDs and the SSD OSDs show slow requests. Many slow
requests occur, and most of the slow-request warnings appear right after
RocksDB "Try to delete WAL files" or "table_file_deletion" entries in
the OSD log.

Does this mean RocksDB is the bottleneck? If so, how can I improve it?
If not, how can I find out where the bottleneck actually is?
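
So far I have only been watching blocked ops and the BlueStore/RocksDB
perf counters through the admin socket, roughly like this (osd.12 is
just an example id; jq is only there for readability):

ceph daemon osd.12 dump_ops_in_flight       # ops currently blocked, with per-step event timestamps
ceph daemon osd.12 dump_historic_ops        # recent slowest completed ops
ceph daemon osd.12 perf dump | jq '.bluestore'   # kv_flush_lat / kv_commit_lat etc.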

Thanks.

ceph 12.2.12
42 nodes, each node: 10 * 8TB SATA HDD, 2 * 900GB SATA SSD.
Each SSD carries 5 * 10GB LVs as WAL and 5 * 60GB LVs as DB for the SATA
OSDs, plus one 50GB SSD LV as an OSD to hold the radosgw index/gc/lc pools.
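
For reference, each SATA OSD was created roughly like this with
ceph-volume (sketch only; the device and VG/LV names below are made up,
the sizes match the layout above):

ceph-volume lvm create --bluestore --data /dev/sdc \
    --block.wal ceph-ssd0/wal-0 --block.db ceph-ssd0/db-0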


ops/iostat logs:
https://gist.github.com/hnuzhoulin/abb732b5df9dd0200247dfee56850293
detailed OSD logs:
https://drive.google.com/file/d/1moa8Ilgqj-300nVMUVnR9Cc4194Q2ABy/view?usp=sharing

2019-09-25 20:26:16.755001 7f159edc2700  4 rocksdb: (Original Log Time 2019/09/25-20:26:16.754716) EVENT_LOG_v1 {"time_micros": 1569414376754705, "job": 1017, "event": "flush_finished", "lsm_state": [2, 4, 46, 316, 0, 0, 0], "immutable_memtables": 0}
2019-09-25 20:26:16.755006 7f159edc2700  4 rocksdb: (Original Log Time 2019/09/25-20:26:16.754928) [/build/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 1 max bytes base 268435456 files[2 4 46 316 0 0 0] max score 0.99

2019-09-25 20:26:16.755133 7f159edc2700  4 rocksdb: [/build/ceph-12.2.12/src/rocksdb/db/db_impl_files.cc:388] [JOB 1017] Try to delete WAL files size 235707203, prev total WAL file size 237105615, number of live WAL files 2.
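
(To line the RocksDB events up against the slow requests I just grep
both event types out of the OSD log; the osd id is an example and the
log path is the default one:)

grep -E 'Try to delete WAL files|table_file_deletion|slow request' /var/log/ceph/ceph-osd.12.log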

2019-09-25 20:27:31.576827 7f15bd5ff700  0 log_channel(cluster) log [WRN] : 6 slow requests, 5 included below; oldest blocked for > 30.589488 secs
2019-09-25 20:27:31.576839 7f15bd5ff700  0 log_channel(cluster) log [WRN] : slow request 30.589488 seconds old, received at 2019-09-25 20:27:00.987184: osd_op(client.127567.0:87590000 31.eb5 31:ad7d21e3:::.dir.9612b61b-b07e-4b93-835e-4596b5b1b39b.127567.11.12:head [call rgw.guard_bucket_resharding,call rgw.bucket_prepare_op] snapc 0=[] ondisk+write+known_if_redirected e11236) currently waiting for rw locks
2019-09-25 20:27:31.576849 7f15bd5ff700  0 log_channel(cluster) log [WRN] : slow request 30.212751 seconds old, received at 2019-09-25 20:27:01.363921: osd_op(client.99543.0:87452593 31.eb5 31:ad7d21e3:::.dir.9612b61b-b07e-4b93-835e-4596b5b1b39b.127567.11.12:head [call rgw.guard_bucket_resharding,call rgw.bucket_prepare_op] snapc 0=[] ondisk+write+known_if_redirected e11236) currently waiting for rw locks
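
Both blocked ops call rgw.guard_bucket_resharding and wait for rw locks
on the same bucket index object, so one thing I still need to rule out
is dynamic resharding holding that bucket's index while writes queue up,
e.g. with:

radosgw-admin reshard list          # resharding jobs queued or running
radosgw-admin bucket limit check    # buckets near/over the per-shard object limit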
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


