Documentation Discrepancy - Manual Configuration

Hi,

Testing Ceph 0.79.

1 Monitor
2 OSDs


Following the instructions here (http://ceph.com/docs/master/install/manual-deployment/), I arrive at this state:

------
    cluster e3e1a87b-d282-41b5-b4ad-fb3f969e164f
     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {a=10.208.39.100:6789/0}, election epoch 1, quorum 0 a
     osdmap e16: 2 osds: 2 up, 2 in
      pgmap v21: 192 pgs, 3 pools, 0 bytes data, 0 objects
            2004 MB used, 59381 MB / 65536 MB avail
                 192 active+degraded
------
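
For anyone reproducing this, the usual health/PG queries show which PGs are behind the warning (standard ceph CLI, no special flags):

------
# Expand the HEALTH_WARN summary into per-PG detail
ceph health detail

# List the PGs currently stuck in an unclean state
ceph pg dump_stuck unclean
------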

'ceph osd dump' shows this:
------
...
pool 0 'data' replicated size 3 min_size 2 ...
pool 1 'metadata' replicated size 3 min_size 2 ...
pool 2 'rbd' replicated size 3 min_size 2 ...
...
------
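
For reference, one way to avoid this from the start on a two-OSD test cluster is the usual ceph.conf pool defaults. The option names ('osd pool default size' / 'osd pool default min_size') are standard; the values below are just what fits two OSDs, and they only affect pools created after the setting is in place:

------
[global]
    # Example values for a two-OSD test cluster only:
    # keep two copies of each object instead of the default three,
    # and allow I/O as long as one copy is available.
    osd pool default size = 2
    osd pool default min_size = 1
------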



It's not until I add a third OSD, or reduce the 'size' of the pools, that the 192 PGs become unstuck and migrate towards active+clean.
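
For the "reduce the size of the pools" route, this is roughly what I ran against the three default pools (standard 'ceph osd pool set' syntax; the min_size lines are optional and just let I/O continue with a single replica):

------
# Drop the replication factor of the default pools to match two OSDs
ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2

# Optionally allow I/O while only one replica is available
ceph osd pool set data min_size 1
ceph osd pool set metadata min_size 1
ceph osd pool set rbd min_size 1
------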



I think the documentation should indicate that at least three OSDs are needed to satisfy the default pool size of 3, and that a smaller test cluster will otherwise sit in HEALTH_WARN rather than reaching an optimal active+clean state.



--
Thanks,
Dyweni
--