see warning logs in ceph.log

> Your PGs are not active+clean, so no I/O is possible.
>
> Are your OSDs running?
>
> $ sudo ceph -s
>
> That should give you more information about what to do.
>
> Wido


Thanks. This is the output; I can see the OSD daemons are running.
Can you help more? Thanks.

root@ceph2:~# ceph -s
    health HEALTH_WARN 192 pgs stale; 192 pgs stuck stale; 1/4 in osds are down
    monmap e1: 1 mons at {ceph2=172.17.6.176:6789/0}, election epoch 1, quorum 0 ceph2
    osdmap e60: 12 osds: 3 up, 4 in
     pgmap v2079: 192 pgs: 192 stale+active+clean; 1221 MB data, 66453 MB used, 228 GB / 308 GB avail
    mdsmap e1: 0/0/1 up
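
(Editor's note, not from the original thread: the osdmap reports 12 OSDs with only 3 up and 4 in, and all 192 PGs are stale. As a hedged next step, assuming the standard Ceph CLI, the following commands should show which OSD ids the monitor considers down and which PGs are stuck stale:

root@ceph2:~# ceph osd tree
root@ceph2:~# ceph health detail
root@ceph2:~# ceph pg dump_stuck stale
)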

root@ceph2:~#
root@ceph2:~# netstat -ntlp|grep ceph-osd
tcp        0      0 0.0.0.0:6800            0.0.0.0:*               LISTEN      1969/ceph-osd
tcp        0      0 0.0.0.0:6801            0.0.0.0:*               LISTEN      1969/ceph-osd
tcp        0      0 0.0.0.0:6802            0.0.0.0:*               LISTEN      1969/ceph-osd
tcp        0      0 0.0.0.0:6803            0.0.0.0:*               LISTEN      2185/ceph-osd
tcp        0      0 0.0.0.0:6804            0.0.0.0:*               LISTEN      2185/ceph-osd
tcp        0      0 0.0.0.0:6805            0.0.0.0:*               LISTEN      2185/ceph-osd
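
(Editor's note, not from the original thread: the netstat listing shows only two ceph-osd processes on this host, PIDs 1969 and 2185, so the OSDs the monitor reports as down are probably not running at all. A minimal sketch of what to try next, assuming a sysvinit-managed Ceph of that era and the default log path; osd.<id> is a placeholder for whatever OSD id `ceph osd tree` reports as down:

root@ceph2:~# ceph osd tree | grep down                      # find the down OSD ids
root@ceph2:~# /etc/init.d/ceph start osd.<id>                # try to start the missing daemon
root@ceph2:~# tail -n 50 /var/log/ceph/ceph-osd.<id>.log     # if it will not start, check its log for the reason
)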

