Re: 1-Node cluster with no replication

On 05/06/2013 12:14 PM, John Wilkins wrote:
Guido,

My apologies. I seem to have omitted the PG troubleshooting section from
the index. It has been addressed. See
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/

Ceph OSDs peer with and check on each other, so running a cluster with
only one OSD is not recommended. It's perfectly fine to bootstrap a
cluster that way, but an operational cluster should have at least two
OSDs running. See
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#peering and
http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/ to
learn how OSDs interact with each other and monitors.
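
For anyone following along, here is a rough sketch of read-only commands
for watching that interaction on a running cluster (output omitted, and
the exact states you see will of course depend on the cluster):

    # overall health, with per-PG detail for anything not active+clean
    $ ceph health detail

    # how many OSDs exist and how many are up/in, plus the CRUSH tree
    $ ceph osd stat
    $ ceph osd tree

    # PGs stuck in a particular state (inactive, unclean or stale)
    $ ceph pg dump_stuck unclean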

Regards,


John

Semi-related: it would be nice (though not strictly critical) if we could distinguish the health warnings that actually prevent operation in a single-OSD configuration from those that are merely advisory. It would make OSD performance testing a bit cleaner.

Mark




On Mon, May 6, 2013 at 8:04 AM, Guido Winkelmann
<guido-ceph@xxxxxxxxxxxxxxxxx> wrote:

    On Monday, 6 May 2013, 16:59:12, Wido den Hollander wrote:
     > On 05/06/2013 04:51 PM, Guido Winkelmann wrote:
     > > On Monday, 6 May 2013, 16:41:43, Wido den Hollander wrote:
     > >> On 05/06/2013 04:15 PM, Guido Winkelmann wrote:
     > >>> On Monday, 6 May 2013, 16:05:31, Wido den Hollander wrote:
     > >>>> On 05/06/2013 04:00 PM, Guido Winkelmann wrote:
     > >>>>> Hi,
     > >>>>>
     > >>>>> How do I run a 1-node cluster with no replication?
     > >>>>>
     > >>>>> I'm trying to run a small 1-node cluster on my local workstation
     > >>>>> and another on my notebook for experimentation/development
     > >>>>> purposes, but since I only have one OSD, I'm always getting
     > >>>>> HEALTH_WARN as the cluster status from ceph -s. Can I somehow
     > >>>>> tell ceph to just not bother with replication for this cluster?
     > >>>>
     > >>>> Have you set min_size to 1 for all the pools?
     > >>>
     > >>> You mean in the crushmap?
     > >>
     > >> No, it's a pool setting.
     > >>
     > >> See: http://ceph.com/docs/master/rados/operations/pools/#set-pool-values
     > >
     > > Hm, I set that to 1 now, and nothing changed:
     > Have you also set "size" to 1? Meaning no replication.
     >
     > Both size and min_size should be set to 1.
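
    (For reference, a rough sketch of the corresponding commands; the pool
    names below are just the stock data, metadata and rbd pools and may
    differ on your cluster:)

        # list the pools, then set size and min_size to 1 on each of them
        $ ceph osd lspools
        $ ceph osd pool set data size 1
        $ ceph osd pool set data min_size 1
        # ...repeat for the other pools (metadata, rbd)...

        # verify the settings took effect
        $ ceph osd pool get data size
        $ ceph osd pool get data min_size

    New pools can pick up the same defaults by putting
    "osd pool default size = 1" and "osd pool default min size = 1" in the
    [global] section of ceph.conf.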

    I set size to 1 now, too. ceph -s no longer reports degraded pgs, but I
    still get a HEALTH_WARN:

    $ ceph -s
        health HEALTH_WARN 384 pgs stuck unclean
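
    (A rough sketch of how one might dig into the remaining "stuck unclean"
    warning; the placement group id 0.1 below is only an example:)

        # which PGs are unclean, and what state are they actually in?
        $ ceph health detail
        $ ceph pg dump_stuck unclean

        # query a single PG for its full peering/recovery state
        $ ceph pg 0.1 query

    On a one-host cluster it is also worth checking the CRUSH map: the
    default rule tries to spread copies across hosts ("step chooseleaf
    firstn 0 type host"), which a single host cannot satisfy once size is
    larger than 1. The map can be inspected and edited with:

        $ ceph osd getcrushmap -o crushmap.bin
        $ crushtool -d crushmap.bin -o crushmap.txt
        # edit crushmap.txt (e.g. chooseleaf type host -> type osd), then:
        $ crushtool -c crushmap.txt -o crushmap.new
        $ ceph osd setcrushmap -i crushmap.new

    Fresh test clusters can avoid this from the start by putting
    "osd crush chooseleaf type = 0" in ceph.conf before the cluster is
    created.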





--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



