Run "ceph health detail" and it should give you more information. (My guess: an osd or mon has a nearly full disk.)

Cheers
Mike

On 8 July 2013 21:16, Jordi Llonch <llonchj@xxxxxxxxx> wrote:
> Hello,
>
> I am testing ceph using ubuntu raring with ceph version 0.61.4
> (1669132fcfc27d0c0b5e5bb93ade59d147e23404) on 3 virtualbox nodes.
>
> What is this HEALTH_WARN indicating?
>
> # ceph -s
>    health HEALTH_WARN
>    monmap e3: 3 mons at {node1=192.168.56.191:6789/0,node2=192.168.56.192:6789/0,node3=192.168.56.193:6789/0}, election epoch 52, quorum 0,1,2 node1,node2,node3
>    osdmap e84: 3 osds: 3 up, 3 in
>    pgmap v3209: 192 pgs: 192 active+clean; 460 MB data, 1112 MB used, 135 GB / 136 GB avail
>    mdsmap e37: 1/1/1 up {0=node3=up:active}, 1 up:standby
>
> Thanks,
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Mike Bryant | Systems Administrator | Ocado Technology
mike.bryant@xxxxxxxxx | 01707 382148 | www.ocadotechnology.com

--
Notice: This email is confidential and may contain copyright material of Ocado Limited (the "Company"). Opinions and views expressed in this message may not necessarily reflect the opinions and views of the Company. If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. Company reg. no. 3875000. Ocado Limited, Titan Court, 3 Bishops Square, Hatfield Business Park, Hatfield, Herts AL10 9NE
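One way to follow that suggestion, as a sketch: `ceph health detail` lists the specific warnings behind HEALTH_WARN, and checking local disk usage on each node can confirm the "full disk" guess. The hostnames (node1-node3) are taken from the `ceph -s` output above; the loop assumes SSH access to each node, which may not match your setup.

```shell
# Ask the cluster for the specific checks behind HEALTH_WARN
ceph health detail

# Cluster-wide usage summary (raw space and per-pool usage)
ceph df

# Check each node's local filesystems for a nearly full partition,
# since a full mon/osd disk is a common cause of HEALTH_WARN.
# Hostnames are from the ceph -s output; adjust for your environment.
for host in node1 node2 node3; do
    echo "== $host =="
    ssh "$host" df -h
done
```

If `ceph health detail` reports something like "mon.node1 low disk space" or a nearfull/full OSD, freeing or adding space on the affected node should clear the warning.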