Re: CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 0.61.7

Thanks for the response.

Here's the version:

root@ubuntuceph700athf1:/etc/ceph# aptitude versions ceph-deploy
Package ceph-deploy:                        
i   1.0-1                                                                        stable                                                   500 



I noticed that you made a ceph-deploy release the day after my evaluation, so yes, I'll definitely try this again in the near future.
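
For reference, here's roughly how I plan to pull in the newer release on Ubuntu (a minimal sketch; it assumes ceph-deploy came from an apt repository that now carries the newer build):

apt-get update                                  # refresh package lists
apt-get install --only-upgrade ceph-deploy      # upgrade just ceph-deploy
aptitude versions ceph-deploy                   # confirm the new version is installed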

Regards,
-ben


________________________________________
From: Alfredo Deza [alfredo.deza@xxxxxxxxxxx]
Sent: Friday, August 16, 2013 7:31 AM
To: Aquino, BenX O
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 0.61.7

On Fri, Aug 9, 2013 at 12:05 PM, Aquino, BenX O <benx.o.aquino@xxxxxxxxx> wrote:
> CEPH-DEPLOY EVALUATION ON CEPH VERSION 0.61.7
>
> ADMINNODE:
>
> root@ubuntuceph900athf1:~# ceph -v
>
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> root@ubuntuceph900athf1:~#
>
>
>
> SERVERNODE:
>
> root@ubuntuceph700athf1:/etc/ceph# ceph -v
>
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
>
> root@ubuntuceph700athf1:/etc/ceph#
>
>
>
> ===============================================================:
>
> Trial-1 of using ceph-deploy results:
> (http://ceph.com/docs/next/start/quick-ceph-deploy/)
>
>
>
> My trial-1 scenario used ceph-deploy to replace 2 OSDs (osd.2 and
> osd.11) of a Ceph node.
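>
> For reference, the replacement sequence was along these lines (a sketch
> following the prepare/activate flow in the quick-start doc; the disk and
> journal paths are illustrative for my setup):
>
> # prepare the replacement OSD disks, with journals on separate paths
> ceph-deploy osd prepare ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.2.journal
> ceph-deploy osd prepare ubuntuceph700athf1:sdf1:/var/lib/ceph/journal/osd.11.journal
>
> # activate them once prepared
> ceph-deploy osd activate ubuntuceph700athf1:sde1
> ceph-deploy osd activate ubuntuceph700athf1:sdf1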
>
>
>
> Observation:
>
> ceph-deploy created symbolic links: the ceph-0 --> ceph-2 dir and the
> ceph-1 --> ceph-11 dir.
>
> I did not run into any errors or issues in this trial.
>
>
>
> One concern:
>
> ceph-deploy did not update the Linux fstab with the mount point of the OSD data.
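>
> (If the mounts need to persist across reboots in the meantime, an entry
> along these lines can be added to fstab by hand; the device and mount
> point below are illustrative, matching the xfs rw,noatime mount that
> ceph-disk used:)
>
> # /etc/fstab -- mount the osd.2 data partition at boot
> /dev/sde1  /var/lib/ceph/osd/ceph-2  xfs  rw,noatime  0  0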
>
>
>
> =======================================================================================
>
>
>
> Trial 2: (http://ceph.com/docs/next/start/quick-ceph-deploy/)
>
>
>
> I noticed my node did not have any contents in
> /var/lib/ceph/bootstrap-{osd}|{mds} .
>
> Result: FAILURE TO MOVE FORWARD BEYOND THIS STEP
>
>
>
> Tip from http://ceph.com/docs/next/start/quick-ceph-deploy/
>
> If you don’t have these keyrings, you may not have created a monitor
> successfully,
>
> or you may have a problem with your network connection.
>
> Ensure that you complete this step such that you have the foregoing keyrings
> before proceeding further.
>
>
>
> Tip from http://ceph.com/docs/next/start/quick-ceph-deploy/:
>
> You may repeat this procedure. If it fails, check to see if the
> /var/lib/ceph/bootstrap-{osd}|{mds} directories on the server node have
> keyrings.
>
> If they do not have keyrings, try adding the monitor again; then, return to
> this step.
>
>
>
> My WORKAROUND1:
>
> COPIED CONTENTS OF /var/lib/ceph/bootstrap-{osd}|{mds} FROM ANOTHER NODE
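>
> (Roughly like this, run from the admin node; the hostnames are mine and
> the exact keyring filenames may differ on other setups:)
>
> # copy bootstrap keyrings from a healthy node to the broken one
> scp ubuntuceph900athf1:/var/lib/ceph/bootstrap-osd/ceph.keyring ubuntuceph700athf1:/var/lib/ceph/bootstrap-osd/
> scp ubuntuceph900athf1:/var/lib/ceph/bootstrap-mds/ceph.keyring ubuntuceph700athf1:/var/lib/ceph/bootstrap-mds/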
>
>
>
> My WORKAROUND2:
>
> USED THE "CREATE A NEW CLUSTER" PROCEDURE with CEPH-DEPLOY to create the
> missing keyrings.
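>
> (That is, something along these lines from the quick-start doc, which
> regenerates the monitor and bootstrap keyrings; the hostname is mine:)
>
> ceph-deploy new ubuntuceph700athf1
> ceph-deploy mon create ubuntuceph700athf1
> ceph-deploy gatherkeys ubuntuceph700athf1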
>
>
>
> =============================:
>
> TRIAL-3: Attempt to build a new 1-node cluster using ceph-deploy:
>
>
>
> RESULT: FAILED TO GO BEYOND THE ERROR LOGS BELOW:
>
>
>
> root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.0.journal
>
> ceph-disk-prepare -- /dev/sde1 /var/lib/ceph/journal/osd.0.journal returned 1
>
> meta-data=/dev/sde1              isize=2048   agcount=4, agsize=30524098 blks
>
>          =                       sectsz=512   attr=2, projid32bit=0
>
> data     =                       bsize=4096   blocks=122096390, imaxpct=25
>
>          =                       sunit=0      swidth=0 blks
>
> naming   =version 2              bsize=4096   ascii-ci=0
>
> log      =internal log           bsize=4096   blocks=59617, version=2
>
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
>
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
>
> umount: /var/lib/ceph/tmp/mnt.iMsc1G: device is busy.
>
>         (In some cases useful info about processes that use
>
>          the device is found by lsof(8) or fuser(1))
>
> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--', '/var/lib/ceph/tmp/mnt.iMsc1G']' returned non-zero exit status 1
>
>
>
> ceph-deploy: Failed to create 1 OSDs
>
>
>
> root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare ubuntuceph700athf1:sde1
>
> ceph-disk-prepare -- /dev/sde1 returned 1
>
> meta-data=/dev/sde1              isize=2048   agcount=4, agsize=30524098 blks
>
>          =                       sectsz=512   attr=2, projid32bit=0
>
> data     =                       bsize=4096   blocks=122096390, imaxpct=25
>
>          =                       sunit=0      swidth=0 blks
>
> naming   =version 2              bsize=4096   ascii-ci=0
>
> log      =internal log           bsize=4096   blocks=59617, version=2
>
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
>
> umount: /var/lib/ceph/tmp/mnt.0JxBp1: device is busy.
>
>         (In some cases useful info about processes that use
>
>          the device is found by lsof(8) or fuser(1))
>
> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--', '/var/lib/ceph/tmp/mnt.0JxBp1']' returned non-zero exit status 1
>
>
>
> ceph-deploy: Failed to create 1 OSDs
>
> root@ubuntuceph900athf1:~/my-cluster# ceph-deploy osd prepare ubuntuceph700athf1:sde1
>
> ceph-disk-prepare -- /dev/sde1 returned 1
>
>
>
> ceph-disk: Error: Device is mounted: /dev/sde1
>
>
>
> ceph-deploy: Failed to create 1 OSDs
>
>
>
>
>
> Attempted on the local node:
>
> root@ubuntuceph700athf1:/etc/ceph# ceph-deploy osd prepare ubuntuceph700athf1:sde1:/var/lib/ceph/journal/osd.0.journal
>
> ceph-disk-prepare -- /dev/sde1 /var/lib/ceph/journal/osd.0.journal returned 1
>
> ceph-disk: Error: Device is mounted: /dev/sde1
>
>
>
> /dev/sde1 on /var/lib/ceph/tmp/mnt.GzZLAr type xfs (rw,noatime)
>
>
>
> RESULT:
>
> ceph-deploy complains that the OSD drive is mounted; the drive was not
> mounted prior to running the command. ceph-deploy mounted it itself, then
> complained that it is mounted.
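>
> (What I would try next to recover, as a sketch: find and clear whatever
> is holding the temporary mount, then wipe the disk so prepare can start
> clean. The lsof/umount steps are generic; disk zap is ceph-deploy's
> documented way to wipe a disk, and note it destroys the entire partition
> table on the device:)
>
> # see which process is holding the temporary mount
> lsof /var/lib/ceph/tmp/mnt.GzZLAr
>
> # unmount it, then zap the disk before re-running osd prepare
> umount /var/lib/ceph/tmp/mnt.GzZLAr
> ceph-deploy disk zap ubuntuceph700athf1:sde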
>
>
>
>
>
> root@ubuntuceph700athf1:/etc/ceph# ceph-deploy osd prepare ubuntuceph700athf1:sdf1:/var/lib/ceph/journal/osd.1.journal
>
> ceph-disk-prepare -- /dev/sdf1 /var/lib/ceph/journal/osd.1.journal returned 1
>
> meta-data=/dev/sdf1              isize=2048   agcount=4, agsize=30524098 blks
>
>          =                       sectsz=512   attr=2, projid32bit=0
>
> data     =                       bsize=4096   blocks=122096390, imaxpct=25
>
>          =                       sunit=0      swidth=0 blks
>
> naming   =version 2              bsize=4096   ascii-ci=0
>
> log      =internal log           bsize=4096   blocks=59617, version=2
>
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
>
>
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
>
> umount: /var/lib/ceph/tmp/mnt.BxJFZa: device is busy.
>
>         (In some cases useful info about processes that use
>
>          the device is found by lsof(8) or fuser(1))
>
> ceph-disk: Unmounting filesystem failed: Command '['/bin/umount', '--', '/var/lib/ceph/tmp/mnt.BxJFZa']' returned non-zero exit status 1
>
>
>
> ceph-deploy: Failed to create 1 OSDs
>
> root@ubuntuceph700athf1:/etc/ceph#
>
>
>
> RESULT:
>
> ceph-deploy complains that the OSD drive is mounted; the drive was not
> mounted prior to running the command. ceph-deploy mounted it itself, then
> complained that it is mounted.
>
>

What version of ceph-deploy are you using? Make sure you update as
we've made two releases in the past 10 days that fix a lot of issues.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




