Re: Understanding Ceph

On Thu, 24 Jan 2013, Dimitri Maziuk wrote:
> On 01/24/2013 03:07 PM, Dan Mick wrote:
> ...
> > Yeah; it's probably mostly just that one-OSD configurations are so
> > uncommon that we never special-cased that small user set.  Also, you can
> > run with a cluster in that state forever (well, until that one OSD dies
> > at least); I do that regularly with the default vstart.sh local test
> > cluster.
> 
> Well, this goes back to the quick start guide: to me, a more natural way
> to start is with one host and then add another. That's what I was trying
> to do; however, the quick start page ends with
> 
> "When your cluster echoes back HEALTH_OK, you may begin using Ceph."
> 
> and that doesn't happen with one host: you get "384 pgs stuck unclean"
> instead of "HEALTH_OK". To me that means I may *not* begin using ceph.
> 
> I did run "ceph osd pool set ... size 1" on each of the 3 default pools,
> verified that it took with "ceph osd dump | grep 'rep size'", and gave
> it a good half hour to settle. I still got "384 pgs stuck unclean" from
> "ceph health".
> 
> So I redid it with 2 OSDs and got the expected HEALTH_OK right from
> the start.
> 
> John,
> 
> a) a note saying "if you have only one OSD you won't get HEALTH_OK until
> you add another one; you can start using the cluster" may be a useful
> addition to the quick start,
> 
> b) more importantly, if there are any plans to write more quickstart
> pages, I'd love to see the "add another OSD (MDS, MON) to an existing
> cluster in 5 minutes".
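
For reference, the size change described above would look roughly like this,
assuming the three default pools are named data, metadata, and rbd:

  # drop the replica count to 1 on each default pool
  ceph osd pool set data size 1
  ceph osd pool set metadata size 1
  ceph osd pool set rbd size 1

  # confirm the change took effect
  ceph osd dump | grep 'rep size'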

There may be a related issue at work here: the default crush rules now 
replicate across hosts instead of across osds, so single-host configs may 
have similar problems (depending on whether you used mkcephfs to create 
the cluster or not).
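
A single-host cluster can be told to replicate across OSDs instead of hosts
by editing the crush map. A minimal sketch, assuming the stock crushtool
workflow (the file paths here are just placeholders):

  # export and decompile the current crush map
  ceph osd getcrushmap -o /tmp/crushmap
  crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

  # in /tmp/crushmap.txt, change each rule's
  #   step chooseleaf firstn 0 type host
  # to
  #   step chooseleaf firstn 0 type osd

  # recompile and inject the edited map
  crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new
  ceph osd setcrushmap -i /tmp/crushmap.new

Setting "osd crush chooseleaf type = 0" in ceph.conf before creating the
cluster should have a similar effect, if I remember the option name right.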

sage
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

