Re: Understanding Ceph

On 01/23/2013 10:19 AM, Patrick McGarry wrote:

> http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/

> On Wed, Jan 23, 2013 at 10:13 AM, Sam Lang <sam.lang@xxxxxxxxxxx> wrote:

>> http://ceph.com/docs/master/rbd/rbd-openstack/

These are both great, I'm sure, but Patrick's page says "I chose to
follow the 5 minute quickstart guide" and the rbd-openstack page says
"Important ... you must have a running Ceph cluster."

My problem is I can't find a "5 minute quickstart guide" for RHEL 6, and
I didn't get a "running ceph cluster" by trying to follow the existing
(Ubuntu) guide and adjusting for CentOS 6.3.

So I'm stuck at a point way before those guides become relevant: once I
had one OSD/MDS/MON box up, I got "HEALTH_WARN 384 pgs degraded; 384 pgs
stuck unclean; recovery 21/42 degraded (50.000%)" (384 appears to be the
number of placement groups created by default).

What does that mean? That I only have one OSD? Or is it genuinely unhealthy?
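My best guess is that the warning comes from the default pool replication
size of 2: with only one OSD there is nowhere to place the second replica,
so every placement group stays degraded. If that guess is right, a
single-node test override in ceph.conf would look something like this
(sketch only; the section and option names below are what I'd expect, not
something I've verified on this cluster):

```ini
; ceph.conf sketch for a one-OSD test cluster
; Assumption: HEALTH_WARN is caused by the default replication size of 2
[global]
    osd pool default size = 1      ; keep a single replica per object
    osd pool default min size = 1  ; allow I/O with only one replica up
```

Existing pools created with the old default would presumably still need
their size changed after the fact, so this alone may not clear the warning.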

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

