Ceph 0.54 OSD slow requests

Hi.

I have 2 servers; each server has:

1 RAID card: -------------- RAID 6: 54 TB, divided into 4 OSDs (formatted as ext4)
             -------------- RAID 0: 248 GB, journals for the 4 OSDs.
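
For reference, the layout on each node can be checked like this (paths and device names come from the config below; nothing beyond that is assumed):

    cat /proc/partitions                 # the four OSD data partitions on the RAID 6 volume
    df -h /srv/ceph/osd0 /srv/ceph/ssd   # an OSD data mount and the shared journal volume
    ls -l /srv/ceph/ssd/                 # one journal file per OSD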

My config file:


[global]
    auth supported = cephx
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
    keyring = /etc/ceph/keyring.admin
[mds]
    keyring = /etc/ceph/keyring.$name
    debug mds = 1
[mds.0]
    host = Ceph-98
[mds.1]
    host = Ceph-99

[osd]
    osd data = /srv/ceph/osd$id
    osd journal size = 106496
    osd class dir = /usr/lib/rados-classes
    keyring = /etc/ceph/keyring.$name
    osd mkfs type = ext4
    filestore xattr use omap = true
    filestore fiemap = false
    osd heartbeat interval = 12
    osd heartbeat grace = 35
    osd min down reports = 4
    osd mon report interval min = 45
    osd mon report interval max = 150
    osd op complaint time = 80

    filestore min sync interval = 1
    filestore max sync interval = 30
    osd scrub min interval = 120
    debug osd = 20
    debug ms = 1
    debug filestore = 20
[osd.0]
    host = Ceph-98
    devs = /dev/sdb1
    osd journal = /srv/ceph/ssd/journal-0
    cluster addr = 172.30.48.98
    public addr = 172.30.48.98
[osd.1]
    host = Ceph-98
    devs = /dev/sdb2
    osd journal = /srv/ceph/ssd/journal-1
    cluster addr = 172.30.48.98
    public addr = 172.30.48.98
[osd.2]
    host = Ceph-98
    devs = /dev/sdb3
    osd journal = /srv/ceph/ssd/journal-2
    cluster addr = 172.30.48.98
    public addr = 172.30.48.98

.....
[osd.7]
    host = Ceph-99
    devs = /dev/sda4
    osd journal = /srv/ceph/ssd/journal-7
    cluster addr = 172.30.48.99
    public addr = 172.30.48.99
[mon]
    mon data = /srv/ceph/mon$id
    mon osd down out interval = 3000
[mon.0]
    host = Ceph-98
    mon addr = 172.30.48.98:6789
[mon.1]
    host = Ceph-99
    mon addr = 172.30.48.99:6789
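
One note on this config: with only two monitors, quorum requires both of them, so losing either mon blocks the whole cluster. Quorum membership can be checked at any time with:

    ceph quorum_status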

I am running CentOS 6.4 with kernel 3.8.5-1.el6.elrepo.x86_64 and Ceph 0.56.4.

I frequently see slow-request warnings like these:

2013-04-16 14:02:48.871577 osd.6 [WRN] 9 slow requests, 1 included below; 
oldest blocked for > 161.356701 secs
2013-04-16 14:02:48.871581 osd.6 [WRN] slow request 160.410153 seconds old, 
received at 2013-04-16 14:00:08.461397: osd_op(mds.0.31:353 200.0000211b 
[write 3203393~2607] 1.90aa1669) v4 currently waiting for subops from [3]
2013-04-16 14:02:49.871761 osd.6 [WRN] 9 slow requests, 2 included below; 
oldest blocked for > 162.356878 secs
2013-04-16 14:02:49.871766 osd.6 [WRN] slow request 160.798691 seconds old, 
received at 2013-04-16 14:00:09.073036: osd_op(mds.0.31:354 200.0000211b 
[write 3206000~3231] 1.90aa1669) v4 currently waiting for subops from [3]
2013-04-16 14:02:49.871780 osd.6 [WRN] slow request 160.083633 seconds old, 
received at 2013-04-16 14:00:09.788094: osd_op(mds.0.31:356 200.0000211b 
[write 3209231~3852] 1.90aa1669) v4 currently waiting for subops from [3]
2013-04-16 14:02:52.872229 osd.6 [WRN] 9 slow requests, 1 included below; 
oldest blocked for > 165.357349 secs
2013-04-16 14:02:52.872233 osd.6 [WRN] slow request 160.357224 seconds old, 
received at 2013-04-16 14:00:12.514974: osd_op(mds.0.31:357 200.0000211b 
[write 3213083~4503] 1.90aa1669) v4 currently waiting for subops from [3]
2013-04-16 14:02:53.872484 osd.6 [WRN] 9 slow requests, 1 included below; 
oldest blocked for > 166.357601 secs
2013-04-16 14:02:53.872489 osd.6 [WRN] slow request 160.099407 seconds old, 
received at 2013-04-16 14:00:13.773043: osd_op(mds.0.31:359 200.0000211b 
[write 3217586~4500] 1.90aa1669) v4 currently waiting for subops from [3]
2013-04-16 14:02:57.873113 osd.6 [WRN] 9 slow requests, 1 included below; 
oldest blocked for > 170.358236 secs
2013-04-16 14:02:57.873117 osd.6 [WRN] slow request 160.357995 seconds old, 
received at 2013-04-16 14:00:17.515090: osd_op(mds.0.31:361 200.0000211b 
[write 3222086~4486] 1.90aa1669) v4 currently waiting for subops from [3]
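
Every slow request above is "waiting for subops from [3]", so osd.3 looks like the common bottleneck. A minimal way to inspect it from Ceph-98 (the admin socket path assumes the default location; dump_ops_in_flight may not exist on every 0.56 build):

    ceph osd tree       # confirm osd.3 is up and in
    iostat -x 2         # is the RAID 6 volume shared by osd.0-3 saturated?
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_ops_in_flight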


and the output of ceph -w:

2013-04-16 13:45:12.159336 mon.0 [INF] pgmap v280241: 640 pgs: 638 
active+clean, 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep; 14550 
GB data, 14575 GB used, 89841 GB / 107 TB avail; 27B/s wr, 0op/s
2013-04-16 13:45:19.492099 mon.0 [INF] pgmap v280242: 640 pgs: 638 
active+clean, 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep; 14550 
GB data, 14575 GB used, 89841 GB / 107 TB avail


dmesg:

--------
Call Trace:
 [<ffffffff815d8719>] schedule+0x29/0x70
 [<ffffffff815d89fe>] schedule_preempt_disabled+0xe/0x10
 [<ffffffff815d7126>] __mutex_lock_slowpath+0xf6/0x170
 [<ffffffff815d700b>] mutex_lock+0x2b/0x50
 [<ffffffff81257859>] ima_rdwr_violation_check+0x79/0x1b0
 [<ffffffff812579b1>] ima_file_check+0x21/0x60
 [<ffffffff8119ee3d>] do_last+0x45d/0x7c0
 [<ffffffff811a19e3>] path_openat+0xb3/0x480
 [<ffffffff8113322b>] ? __alloc_pages_nodemask+0x2fb/0x320
 [<ffffffff811a1ee9>] do_filp_open+0x49/0xa0
 [<ffffffff811ae33d>] ? __alloc_fd+0xdd/0x150
 [<ffffffff8118f9b8>] do_sys_open+0x108/0x1f0
 [<ffffffff8118fae1>] sys_open+0x21/0x30
 [<ffffffff815e1e59>] system_call_fastpath+0x16/0x1b
libceph: osd3 172.30.48.98:6810 socket error on write
libceph: osd3 172.30.48.98:6810 socket error on write
libceph: osd6 down
libceph: osd6 up
libceph: osd3 172.30.48.98:6810 socket error on write
libceph: mon0 172.30.48.98:6789 socket closed (con state OPEN)
libceph: mon0 172.30.48.98:6789 session lost, hunting for new mon
libceph: mon1 172.30.48.98:6789 session established
libceph: mon1 172.30.48.98:6789 socket closed (con state OPEN)
libceph: mon1 172.30.48.98:6789 session lost, hunting for new mon
libceph: mon1 172.30.48.98:6789 socket closed (con state CONNECTING)
libceph: mds0 172.30.48.98:6813 socket closed (con state OPEN)
libceph: mds0 172.30.48.98:6813 socket closed (con state CONNECTING)
libceph: mds0 172.30.48.98:6813 socket closed (con state CONNECTING)
libceph: mds0 172.30.48.98:6813 socket closed (con state CONNECTING)
libceph: mds0 172.30.48.98:6813 socket closed (con state CONNECTING)
libceph: mds0 172.30.48.98:6813 socket closed (con state CONNECTING)
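
On the client with the hung mount, the kernel client's in-flight requests can be inspected through debugfs (assuming debugfs is available; the ceph entries are there on a 3.8 kernel):

    mount -t debugfs none /sys/kernel/debug   # if not already mounted
    cat /sys/kernel/debug/ceph/*/osdc         # requests stuck against osd3?
    cat /sys/kernel/debug/ceph/*/mdsc         # requests stuck against mds0?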


Please help me!






