[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

Gregory Farnum <greg at ...> writes:

> 
> What's the output of "ceph osd map"?
> 
> Your CRUSH map probably isn't trying to segregate properly, with 2
> hosts and 4 OSDs each.
> Software Engineer #42  <at>  http://inktank.com | http://ceph.com
> 
Is this what you are looking for?

ceph osd map rbd ceph
osdmap e104 pool 'rbd' (2) object 'ceph' -> pg 2.3482c180 (2.0) -> up ([3,5], p3) acting ([3,5,0], p3)

We're bringing on a 3rd host tomorrow with 4 more OSDs. Would that correct the issue?
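For context on Greg's point: a fresh two-host cluster commonly sits in active+degraded because the pool's replica size (3 by default) exceeds the number of hosts the default CRUSH rule can choose replicas from, so one replica has nowhere valid to go. A few diagnostic commands to confirm this, sketched here assuming a running Ceph cluster (the comments describe typical defaults, not guaranteed output):

```shell
# Check the pool's replication size. With the default size of 3 and
# only 2 hosts, CRUSH cannot place every replica on a distinct host.
ceph osd pool get rbd size

# Show the CRUSH hierarchy: how many host buckets and OSDs exist.
ceph osd tree

# Dump the CRUSH rules. The default rule typically contains
# "chooseleaf firstn 0 type host", i.e. one replica per host.
ceph osd crush rule dump
```

If the rule does segregate by host and size is 3, then adding the third host should let CRUSH map the third replica, and the PGs should go active+clean once backfill completes.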


