Re: Question about configuration

Got it. Thanks!

regards,
Yasu

From: Gregory Farnum <greg@xxxxxxxxxxx>
Subject: Re: Question about configuration
Date: Thu, 10 Jan 2013 16:55:00 -0800
Message-ID: <CAPYLRzhrPaRpPwJ=m5G51-KpY_jev2fn_gXaXEC6OdcAEDXfcA@xxxxxxxxxxxxxx>

> On Thu, Jan 10, 2013 at 4:51 PM, Yasuhiro Ohara <yasu@xxxxxxxxxxxx> wrote:
>>
>> Hi, Greg,
>>
>> When I went through the Ceph documentation, I could only find the
>> description of /etc/init.d, so that is still the easiest for me.
>> Is there documentation on the other (upstart?) system, or do I need to
>> learn that system? Or would just letting me know how to install the
>> resource file (for Ceph in upstart) work for me?
> 
> There isn't really any documentation right now, and if you started off
> with sysvinit it's probably easiest to continue that way. It will work
> with that system too; it's just that if you run "sudo service ceph -a
> start" then it's going to go and turn on all the daemons listed in its
> local ceph.conf.
> -Greg
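
(A minimal sketch of the sysvinit usage described above, assuming the
/etc/init.d ceph script of that era and a hypothetical daemon name osd.0:
"-a" acts on every daemon listed in the local ceph.conf, while naming a
daemon restricts the action to that one daemon on the local host.)

    # start every daemon listed in the local ceph.conf
    sudo service ceph -a start

    # start only a single daemon on this host, e.g. osd.0
    sudo service ceph start osd.0
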
> 
>>
>> Thanks.
>>
>> regards,
>> Yasu
>>
>> From: Gregory Farnum <greg@xxxxxxxxxxx>
>> Subject: Re: Question about configuration
>> Date: Thu, 10 Jan 2013 16:43:59 -0800
>> Message-ID: <CAPYLRzhZr81ko3QgC1oGXs_12-C6z-EVmuf3k47Ej3j8P_T0AA@xxxxxxxxxxxxxx>
>>
>>> On Thu, Jan 10, 2013 at 4:39 PM, Yasuhiro Ohara <yasu@xxxxxxxxxxxx> wrote:
>>>>
>>>> Hi,
>>>>
>>>> What will happen when constructing a cluster of 10 hosts,
>>>> but the hosts are gradually removed from the cluster
>>>> one by one (at each step waiting for the Ceph status to become healthy),
>>>> until it eventually reaches, say, 3 hosts?
>>>>
>>>> In other words, is there any problem with having a 10-OSD configuration
>>>> in ceph.conf, but actually only 3 are up (the other 7 are down and out)?
>>>
>>> If you're not using the /etc/init.d ceph script to start up everything
>>> with the -a option, this will work just fine.
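
(A hedged sketch of that scenario, with hypothetical host names: ceph.conf
can keep all ten [osd.N] sections even while only osd.0 through osd.2 are
running; the unused entries only come into play when the init script is
invoked with -a.)

    [osd.0]
        host = node0
    [osd.1]
        host = node1
    [osd.2]
        host = node2
    ; osd.3 through osd.9 stay listed the same way even while down and out
    [osd.9]
        host = node9
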
>>>
>>>>
>>>> I assume that if the replication size is 3, we can turn off
>>>> 2 OSDs at a time, and Ceph can recover itself to a healthy state.
>>>> Is that the case?
>>>
>>> Yeah, that should work fine. You might consider just marking OSDs
>>> "out" two at a time and not actually killing them until the cluster
>>> has become quiescent again, though — that way they can participate as
>>> a source for recovery.
>>> -Greg
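
(A minimal sketch of that sequence, assuming osd.8 and osd.9 are the two
being retired and the sysvinit script is in use; the OSD ids are
hypothetical.)

    # mark two OSDs out so their data is re-replicated elsewhere
    ceph osd out 8
    ceph osd out 9

    # repeat (or watch "ceph -w") until the cluster reports HEALTH_OK
    ceph health

    # only then stop the daemons on their host
    sudo service ceph stop osd.8
    sudo service ceph stop osd.9
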