Re: RHEL 7.1 ceph-disk failures creating OSD with ver 0.94.2

It doesn't appear to be related to using WWNs for the drive IDs: the verbose output shows ceph-disk resolving the WWN to the /dev/sdX name. I re-ran with --verbose using the sdX names for both the data drive and the journal and hit the same failures. I'm attempting to create the OSDs manually now; a rough sketch of what I have in mind follows the verbose output below.

[root@ceph0 ceph]# ceph-disk -v prepare --cluster ceph --cluster-uuid b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs --zap-disk /dev/sdc /dev/sdb1
DEBUG:ceph-disk:Zapping partition table on /dev/sdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --zap-all -- /dev/sdc
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! One or more CRCs don't match. You should repair the disk!

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sdc
Creating new GPT entries.
The operation has completed successfully.
INFO:ceph-disk:calling partx on zapped device /dev/sdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /usr/sbin/partx -d /dev/sdc
partx: specified range <1:0> does not make sense
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
DEBUG:ceph-disk:Journal is file /dev/sdb1
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdc
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:6d05612e-5cc0-422c-9228-4e53ee0f27ac --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc
The operation has completed successfully.
INFO:ceph-disk:calling partx on created device /dev/sdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdc
partx: /dev/sdc: error adding partition 1
INFO:ceph-disk:Running command: /usr/bin/udevadm settle
DEBUG:ceph-disk:Creating xfs fs on /dev/sdc1
INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdc1
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=244188597 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=976754385, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=476930, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.DQ8nOj with options noatime,inode64
INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.DQ8nOj
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.DQ8nOj
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.DQ8nOj/journal -> /dev/sdb1
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.DQ8nOj
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.DQ8nOj
INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdc
The operation has completed successfully.
INFO:ceph-disk:calling partx on prepared device /dev/sdc
INFO:ceph-disk:re-reading known partitions will display errors
INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdc
partx: /dev/sdc: error adding partition 1
[root@ceph0 ceph]#
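
For reference, this is roughly what I mean by creating the OSDs manually - just a sketch of the usual manual OSD bring-up, not something verified on this box yet. The OSD id comes from 'ceph osd create', and /dev/sdc1, /dev/sdb1, and host ceph0 are only the example names from the run above:

UUID=$(uuidgen)
OSD_ID=$(ceph osd create ${UUID})        # allocate an OSD id in the cluster map
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}
mkfs -t xfs -f -i size=2048 /dev/sdc1    # same mkfs options ceph-disk used above
mount -o noatime,inode64 /dev/sdc1 /var/lib/ceph/osd/ceph-${OSD_ID}
ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-${OSD_ID}/journal   # external journal, symlinked the way ceph-disk does
ceph-osd -i ${OSD_ID} --mkfs --mkkey --osd-uuid ${UUID}    # populate the data dir and keyring
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph osd crush add osd.${OSD_ID} 1.0 host=ceph0   # weight is a placeholder; assumes the ceph0 host bucket already exists
service ceph start osd.${OSD_ID}                  # assuming the sysvinit ceph script manages OSDs on RHEL 7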

> -----Original Message-----
> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
> Sent: Saturday, June 27, 2015 1:08 AM
> To: Bruce McFarland; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  RHEL 7.1 ceph-disk failures creating OSD
> 
> Hi Bruce,
> 
> I think the problem comes from using /dev/disk/by-id/wwn-0x500003959bd02f56
> instead of /dev/sdw for the data disk, because ceph-disk has a device name
> parsing logic that works with /dev/XXX. Could you run the ceph-disk prepare
> command again with --verbose to confirm? If that's the case there should be
> an error instead of what appears to be something that only does part of the
> work.
> 
> Cheers
> 
> On 26/06/2015 18:56, Bruce McFarland wrote:
> > Loic,
> > Thank you very much for the partprobe workaround. I rebuilt the cluster
> > using 94.2.
> >
> > I've created partitions on the journal SSDs with parted and then used
> > ceph-disk prepare as below. I'm not seeing all of the disks with the tmp
> > mounts when I check 'mount', and I also don't see any of the mount points
> > at /var/lib/ceph/osd. I see the following output from prepare. When I
> > attempt to 'activate' it errors out saying the devices don't exist.
> >
> > ceph-disk prepare --cluster ceph --cluster-uuid
> > b2c2e866-ab61-4f80-b116-20fa2ea2ca94 --fs-type xfs --zap-disk
> > /dev/disk/by-id/wwn-0x500003959bd02f56
> > /dev/disk/by-id/wwn-0x500080d91010024b-part1
> > Caution: invalid backup GPT header, but valid main header;
> > regenerating backup header from main header.
> >
> > ****************************************************************************
> > Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
> > verification and recovery are STRONGLY recommended.
> > ****************************************************************************
> > GPT data structures destroyed! You may now partition the disk using fdisk or
> > other utilities.
> > Creating new GPT entries.
> > The operation has completed successfully.
> > partx: specified range <1:0> does not make sense
> > WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
> > device as the osd data
> > WARNING:ceph-disk:Journal /dev/disk/by-id/wwn-0x500080d91010024b-part1 was
> > not prepared with ceph-disk. Symlinking directly.
> > The operation has completed successfully.
> > partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding partition 1
> > meta-data=/dev/sdw1              isize=2048   agcount=4, agsize=244188597 blks
> >          =                       sectsz=512   attr=2, projid32bit=1
> >          =                       crc=0        finobt=0
> > data     =                       bsize=4096   blocks=976754385, imaxpct=5
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> > log      =internal log           bsize=4096   blocks=476930, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > The operation has completed successfully.
> > partx: /dev/disk/by-id/wwn-0x500003959bd02f56: error adding partition 1
> >
> >
> > [root@ceph0 ceph]# ceph -v
> > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> > [root@ceph0 ceph]# rpm -qa | grep ceph
> > ceph-radosgw-0.94.2-0.el7.x86_64
> > libcephfs1-0.94.2-0.el7.x86_64
> > ceph-common-0.94.2-0.el7.x86_64
> > python-cephfs-0.94.2-0.el7.x86_64
> > ceph-0.94.2-0.el7.x86_64
> > [root@ceph0 ceph]#
> >
> >
> >
> >> -----Original Message-----
> >> From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
> >> Sent: Friday, June 26, 2015 3:29 PM
> >> To: Bruce McFarland; ceph-users@xxxxxxxxxxxxxx
> >> Subject: Re:  RHEL 7.1 ceph-disk failures creating OSD
> >>
> >> Hi,
> >>
> >> Prior to firefly v0.80.8 ceph-disk zap did not call partprobe and that
> >> was causing the kind of problems you're experiencing. It was fixed by
> >> https://github.com/ceph/ceph/commit/e70a81464b906b9a304c29f474e6726762b63a7c
> >> and is described in more detail at http://tracker.ceph.com/issues/9665.
> >> Rebooting the machine ensures the partition table is up to date, and
> >> that's what you probably want to do after that kind of failure. You can
> >> however avoid the failure by running:
> >>
> >>  * ceph-disk zap
> >>  * partprobe
> >>  * ceph-disk prepare
> >>
> >> Cheers
> >>
> >> P.S. The "partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error adding
> >> partition 1" can be ignored; it does not actually matter. A message was
> >> added later to avoid confusion with a real error.
> >> .
> >> On 26/06/2015 17:09, Bruce McFarland wrote:
> >>> I have moved the storage nodes to RHEL 7.1 and used the basic server
> >>> install. I installed ceph-deploy and used the ceph.repo/epel.repo for
> >>> installation of ceph 80.7. I have tried ceph-disk both with "zap" on the
> >>> same command line as "prepare" and with zap run on a separate command
> >>> line immediately before the ceph-disk prepare. I consistently run into
> >>> the partition errors and am unable to create OSDs on RHEL 7.1.
> >>>
> >>>
> >>>
> >>> ceph-disk prepare --cluster ceph --cluster-uuid
> >>> 373a09f7-2070-4d20-8504-c8653fb6db80 --fs-type xfs --zap-disk
> >>> /dev/disk/by-id/wwn-0x500003959ba80a4e
> >>> /dev/disk/by-id/wwn-0x500080d9101001d6-part1
> >>>
> >>> Caution: invalid backup GPT header, but valid main header; regenerating
> >>> backup header from main header.
> >>>
> >>> ****************************************************************************
> >>> Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
> >>> verification and recovery are STRONGLY recommended.
> >>> ****************************************************************************
> >>>
> >>> GPT data structures destroyed! You may now partition the disk using fdisk or
> >>> other utilities.
> >>>
> >>> The operation has completed successfully.
> >>>
> >>> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same
> >>> device as the osd data
> >>>
> >>> The operation has completed successfully.
> >>>
> >>> meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=244188597 blks
> >>>          =                       sectsz=512   attr=2, projid32bit=1
> >>>          =                       crc=0        finobt=0
> >>> data     =                       bsize=4096   blocks=976754385, imaxpct=5
> >>>          =                       sunit=0      swidth=0 blks
> >>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> >>> log      =internal log           bsize=4096   blocks=476930, version=2
> >>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> >>> realtime =none                   extsz=4096   blocks=0, rtextents=0
> >>>
> >>> The operation has completed successfully.
> >>>
> >>> partx: /dev/disk/by-id/wwn-0x500003959ba80a4e: error adding partition 1
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> ceph-users mailing list
> >>> ceph-users@xxxxxxxxxxxxxx
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >>
> >> --
> >> Loïc Dachary, Artisan Logiciel Libre
> >
> 
> --
> Loïc Dachary, Artisan Logiciel Libre

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



