[ceph-users] HEALTH_WARN active+degraded on fresh install CENTOS 6.5

Gregory Farnum <greg at ...> writes:

> ...and one more time, because apparently my brain's out to lunch today:
> 
> ceph osd tree
> 
> *sigh*
> 

haha, we all have those days.

[root at monitor01 ceph]# ceph osd tree
# id    weight  type name       up/down reweight
-1      14.48   root default
-2      7.24            host ceph01
0       2.72                    osd.0   up      1
1       0.9                     osd.1   up      1
2       0.9                     osd.2   up      1
3       2.72                    osd.3   up      1
-3      7.24            host ceph02
4       2.72                    osd.4   up      1
5       0.9                     osd.5   up      1
6       0.9                     osd.6   up      1
7       2.72                    osd.7   up      1
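(As a side note on the weights in the tree above: `ceph-deploy`/`ceph-disk` assign each OSD a CRUSH weight equal to its size in TiB by default, so 2.72 lines up with a 3 TB drive and 0.90 with a 1 TB drive. If a weight ever does need changing by hand, `ceph osd crush reweight` is the command; the value here is purely illustrative:)

```shell
# Change osd.1's CRUSH weight (unitless, but conventionally size in TiB).
# The target value 0.90 below is illustrative, not a recommendation.
ceph osd crush reweight osd.1 0.90
```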

I notice that the weights are all over the place. Here's what I was planning 
once I got things going:

Six 1 TB SSD OSDs (across 3 hosts) as a writeback cache pool, and six 3 TB 
SATAs behind them in a second pool for data that isn't accessed as often. 
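A rough sketch of the firefly-era commands for that tiering plan. The pool names, PG counts, ruleset number, and byte target are all placeholders, and the SSD/SATA split assumes you've already edited the CRUSH map so one ruleset selects only the SSD OSDs:

```shell
# Backing pool on the 3 TB SATA OSDs (PG counts are placeholders).
ceph osd pool create sata-pool 512 512

# Hot pool intended for the 1 TB SSD OSDs.
ceph osd pool create ssd-cache 128 128

# Point the SSD pool at a CRUSH ruleset that selects only SSD OSDs.
# Ruleset 1 is a placeholder; it has to exist in the CRUSH map first.
ceph osd pool set ssd-cache crush_ruleset 1

# Stack the SSD pool in front of the SATA pool as a writeback cache.
ceph osd tier add sata-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay sata-pool ssd-cache

# Writeback mode needs a hit set and a flush/evict target to function.
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache target_max_bytes 5000000000000
```

After `set-overlay`, clients talk to `sata-pool` as usual and the cache tier is transparent; reads and writes land on the SSDs and get flushed to SATA per the targets above.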





