Re: Re: scrub error with keyvalue backend

I like the keyvalue backend very much because its performance is good.
My request is simple: please keep it going. I have now hit another bug, which was fixed in 0.85:


2014-10-11 08:42:01.165836 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:01.286205 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:01.286209 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:01.286243 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:01.286256 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:01.670037 7f8e6dcf7700 0 log [WRN] : 86 slow requests, 6 included below; oldest blocked for > 76.874851 secs
2014-10-11 08:42:01.670046 7f8e6dcf7700 0 log [WRN] : slow request 76.867403 seconds old, received at 2014-10-11 08:40:44.802551: osd_op(mds.0.1:76899860 10001846283.00000000 [create 0~0,setxattr parent (235)] 0.3c8de802 RETRY=1 ondisk+retry+write e158) v4 currently reached pg
2014-10-11 08:42:01.670049 7f8e6dcf7700 0 log [WRN] : slow request 76.844191 seconds old, received at 2014-10-11 08:40:44.825763: osd_op(client.5190.0:25456631 10001846ea0.00000000 [write 0~5174] 0.bf079148 RETRY=1 snapc 1=[] ondisk+retry+write e158) v4 currently reached pg
2014-10-11 08:42:01.670052 7f8e6dcf7700 0 log [WRN] : slow request 76.867380 seconds old, received at 2014-10-11 08:40:44.802574: osd_op(mds.0.1:76899903 100018462ae.00000000 [create 0~0,setxattr parent (235)] 0.847d393 RETRY=1 ondisk+retry+write e158) v4 currently reached pg
2014-10-11 08:42:01.670055 7f8e6dcf7700 0 log [WRN] : slow request 76.844154 seconds old, received at 2014-10-11 08:40:44.825800: osd_op(client.5190.0:25456733 10001846f06.00000000 [write 0~5386] 0.9e43e402 RETRY=1 snapc 1=[] ondisk+retry+write e158) v4 currently reached pg
2014-10-11 08:42:01.670058 7f8e6dcf7700 0 log [WRN] : slow request 76.867329 seconds old, received at 2014-10-11 08:40:44.802625: osd_op(mds.0.1:76899939 100018462d2.00000000 [create 0~0,setxattr parent (231)] 0.554a3444 RETRY=1 ondisk+retry+write e158) v4 currently reached pg
2014-10-11 08:42:02.099305 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.099308 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:02.099405 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.099407 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:02.415290 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.415293 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:02.415331 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.415333 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:02.599635 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.599639 7f8e3abb2700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
2014-10-11 08:42:02.599806 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e64ca2700' had timed out after 60
2014-10-11 08:42:02.599809 7f8e39bb0700 1 heartbeat_map is_healthy 'KeyValueStore::op_tp thread 0x7f8e644a1700' had timed out after 60
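
(A possible stop-gap while the 0.85 fix is not yet applied — a rough sketch, assuming the keyvaluestore_op_thread_timeout / keyvaluestore_op_thread_suicide_timeout options are available in this release; the values are only examples:

 # ceph tell osd.* injectargs '--keyvaluestore-op-thread-timeout 180 --keyvaluestore-op-thread-suicide-timeout 600'

This only raises the heartbeat thresholds for the KeyValueStore op thread pool; it does not address the underlying bug.)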


From: ceph-users
Sent: 2014-10-10 16:09
To: ceph-users; ceph-users
Subject: Re: scrub error with keyvalue backend
Is there anybody who can help?

 
From: ceph-users
Sent: 2014-10-10 13:34
To: ceph-users
Subject: scrub error with keyvalue backend
Dear ceph, 

 # ceph -s
cluster e1f18421-5d20-4c3e-83be-a74b77468d61
health HEALTH_ERR 4 pgs inconsistent; 4 scrub errors
monmap e2: 3 mons at {storage-1-213=10.1.0.213:6789/0,storage-1-214=10.1.0.214:6789/0,storage-1-215=10.1.0.215:6789/0}, election epoch 16, quorum 0,1,2 storage-1-213,storage-1-214,storage-1-215
mdsmap e7: 1/1/1 up {0=storage-1-213=up:active}, 2 up:standby
osdmap e135: 18 osds: 18 up, 18 in
pgmap v84135: 1164 pgs, 3 pools, 801 GB data, 15264 kobjects
1853 GB used, 34919 GB / 36772 GB avail
1159 active+clean
4 active+clean+inconsistent
1 active+clean+scrubbing
client io 17400 kB/s wr, 611 op/s

[root@storage-1-213:~] [Fri Oct 10 - 13:30:19]
999 => # ceph -v
ceph version 0.80.6 (f93610a4421cb670b08e974c6550ee715ac528ae)
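
(For reference, a minimal sketch of how the inconsistent PGs could be located and a repair attempted; <pgid> is a placeholder for one of the PGs reported by health detail:

 # ceph health detail | grep inconsistent
 # ceph pg repair <pgid>

Whether repair is safe here depends on what the deep scrub actually found, so the scrub errors logged for those PGs would be worth checking first.)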

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
