Re: Problems with OSDs (cuttlefish)

Hi John,

Thanks for your answer!

But maybe I haven't installed Cuttlefish correctly on my hosts.

sudo initctl list | grep ceph
-> none

No ceph-all found anywhere.

Steps I followed to install Cuttlefish:

sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
sudo su -c 'rpm -Uvh http://ceph.com/rpm-cuttlefish/el6/x86_64/ceph-release-1-0.el6.noarch.rpm'
sudo yum install ceph
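
If it helps, these are the checks I plan to run to confirm what actually got
installed (I assume a correct Cuttlefish install should report version 0.61.x):

ceph --version                 # installed Ceph version
rpm -qa | grep ceph            # which Ceph packages are present
rpm -ql ceph | grep -i init    # which init/upstart scripts the package ships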


Thanks a lot and best regards,
Álvaro.




-----Original Message-----
From: John Wilkins [mailto:john.wilkins@xxxxxxxxxxx]
Sent: Thursday, June 06, 2013 2:48
To: Alvaro Izquierdo Jimeno
CC: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Problems with OSDs (cuttlefish)

You can also start/stop an individual daemon this way:

sudo stop ceph-osd id=0
sudo start ceph-osd id=0
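
The same pattern should work for the other daemon types, using the hostname as
the id (I haven't re-verified the exact upstart job names on RHEL yet):

sudo stop ceph-mon id=IP1
sudo start ceph-mon id=IP1
sudo stop ceph-mds id=IP1
sudo start ceph-mds id=IP1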



On Wed, Jun 5, 2013 at 4:33 PM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
> Ok. It's more like this:
>
> sudo initctl list | grep ceph
>
> This lists all your ceph scripts and their state.
>
> To start the cluster:
>
> sudo start ceph-all
>
> To stop the cluster:
>
> sudo stop ceph-all
>
> You can also do the same with all OSDs, MDSs, etc. I'll write it up 
> and check it in.
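>
> For example, something like this should work (assuming the aggregate upstart
> jobs are named the same way here as on Ubuntu):
>
> sudo start ceph-osd-all
> sudo stop ceph-osd-all
> sudo start ceph-mon-all
> sudo stop ceph-mon-all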
>
>
> On Wed, Jun 5, 2013 at 3:16 PM, John Wilkins <john.wilkins@xxxxxxxxxxx> wrote:
>> Alvaro,
>>
>> I ran into this too. Clusters deployed with ceph-deploy now use upstart.
>>
>>> start ceph
>>> stop ceph
>>
>> Should work. I'm testing and will update the docs shortly.
>>
>> On Wed, Jun 5, 2013 at 7:41 AM, Alvaro Izquierdo Jimeno 
>> <aizquierdo@xxxxxxxx> wrote:
>>> Hi all,
>>>
>>>
>>>
>>> I already installed Ceph Bobtail on CentOS machines and it runs perfectly.
>>>
>>>
>>>
>>> But now I have to install Ceph Cuttlefish on Red Hat 6.4. I have
>>> two machines (for the moment). We can assume the hostnames are IP1 and
>>> IP2 ;). I want (just to test) two monitors (one per host) and two OSDs (one per host).
>>>
>>> On both machines, I have an XFS logical volume:
>>>
>>> Disk /dev/mapper/lvceph:  X GB, Y bytes
>>>
>>> The logical volume is formatted with XFS (sudo mkfs.xfs -f -i
>>> size=2048 /dev/mapper/lvceph) and mounted.
>>>
>>> In /etc/fstab I have:
>>>
>>> /dev/mapper/lvceph   /ceph   xfs   defaults,inode64,noatime   0 2
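>>>
>>> To double-check, the mount can be verified on both hosts with something like:
>>>
>>> mount | grep /ceph
>>> df -h /ceph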
>>>
>>>
>>>
>>> After using ceph-deploy to install (ceph-deploy install --stable
>>> cuttlefish IP1 IP2), create (ceph-deploy new IP1 IP2) and add two monitors
>>> (ceph-deploy --overwrite-conf mon create IP1 and ceph-deploy
>>> --overwrite-conf mon create IP2), I want to add the two OSDs:
>>>
>>>
>>>
>>> ceph-deploy osd prepare IP1:/ceph
>>> ceph-deploy osd activate IP1:/ceph
>>> ceph-deploy osd prepare IP2:/ceph
>>> ceph-deploy osd activate IP2:/ceph
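>>>
>>> To see whether the activate step really did anything, I am also looking at the
>>> data directory and the OSD log (the paths below are just my assumption of the
>>> defaults):
>>>
>>> ls /ceph                                      # expecting files such as fsid, magic, whoami after activate
>>> sudo tail -n 50 /var/log/ceph/ceph-osd.0.log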
>>>
>>>
>>>
>>>
>>>
>>> But no OSD is up (nor in):
>>>
>>> #sudo ceph -d osd stat
>>> e3: 2 osds: 0 up, 0 in
>>>
>>> #sudo ceph osd tree
>>> # id    weight  type name       up/down reweight
>>> -1      0       root default
>>> 0       0       osd.0   down    0
>>> 1       0       osd.1   down    0
>>>
>>>
>>>
>>> I tried to start both OSDs:
>>>
>>> #sudo /etc/init.d/ceph -a start osd.0
>>> /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines , /var/lib/ceph defines )
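>>>
>>> From what I understand, the sysvinit script only finds daemons listed in
>>> ceph.conf with per-daemon sections, something like the following (this is
>>> only my guess, it is not what ceph-deploy generated for me):
>>>
>>> [osd.0]
>>>     host = IP1
>>>
>>> [osd.1]
>>>     host = IP2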
>>>
>>>
>>>
>>> I suppose I have something wrong in the ceph-deploy osd prepare or
>>> activate steps, but can anybody help me find it?
>>>
>>> Is it necessary to add anything more to /etc/ceph/ceph.conf?
>>>
>>> Now it looks like:
>>>
>>> [global]
>>> filestore_xattr_use_omap = true
>>> mon_host = the_ip_of_IP1, the_ip_of_IP2
>>> osd_journal_size = 1024
>>> mon_initial_members = IP1,IP2
>>> auth_supported = cephx
>>> fsid = 43501eb5-e8cf-4f89-a4e2-3c93ab1d9cc5
>>>
>>>
>>>
>>>
>>>
>>> Thanks in advance and best regards,
>>>
>>> Álvaro
>>>



--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




