Re: Should ceph-volume lvm prepare not be backwards compatible with ceph-disk?

On Sat, Jun 2, 2018 at 12:31 PM, Oliver Freyermuth
<freyermuth@xxxxxxxxxxxxxxxxxx> wrote:
> Am 02.06.2018 um 11:44 schrieb Marc Roos:
>>
>>
>> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume
>> does
>
> I believe that's expected when you use "prepare".
> For ceph-volume, "prepare" already bootstraps the OSD and fetches a fresh OSD id,
> for which it needs the keyring.
> For ceph-disk, this was not part of "prepare", but you only needed a key for "activate" later, I think.

This is exactly what `create` does.

Do note that the split into prepare+activate exists only to accommodate
the use case of not wanting multiple OSDs to come up at the same time.

There are some ceph-volume internals that need to know the OSD id
before activating, which is why the bootstrap step (and hence the
bootstrap-osd keyring) is used here.

It would be best to just use `create` really, and not bother with these steps.
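For reference, the difference can be sketched at the command line. This is a sketch, not a full walkthrough: the device path is just an example, and both `prepare` and `create` assume the bootstrap-osd keyring is present on the OSD host (which is exactly what the failing run quoted below is missing). `<osd-id>` and `<osd-fsid>` are placeholders for the values printed by `prepare`.

```shell
# If the host doesn't have the bootstrap-osd keyring yet, one way to
# fetch it from the monitors is:
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

# Two-step form: prepare allocates an OSD id (needs the keyring),
# activate starts the OSD later, e.g. one at a time:
ceph-volume lvm prepare --bluestore --data /dev/sdf
ceph-volume lvm activate <osd-id> <osd-fsid>

# One-step form, simpler unless you specifically need the delayed start:
ceph-volume lvm create --bluestore --data /dev/sdf
```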

>
> Since we always use "create" here via ceph-deploy, I'm not an expert on the subtle command differences, though -
> but ceph-deploy is doing a good job at making you survive without learning them ;-).

ceph-deploy just bypasses the prepare+activate and calls 'create' on
ceph-volume directly.
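With ceph-deploy 2.x that single step looks roughly like this (hostname and device are placeholders):

```shell
# ceph-deploy runs `ceph-volume lvm create` on the target host,
# pushing the bootstrap-osd keyring there as part of its workflow:
ceph-deploy osd create --data /dev/sdf osd-host1
```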

>
> Cheers,
>         Oliver
>
>>
>>
>>
>> [@~]# ceph-disk prepare --bluestore --zap-disk /dev/sdf
>>
>> ***************************************************************
>> Found invalid GPT and valid MBR; converting MBR to GPT format.
>> ***************************************************************
>>
>> GPT data structures destroyed! You may now partition the disk using
>> fdisk or other utilities.
>> Creating new GPT entries.
>> The operation has completed successfully.
>> The operation has completed successfully.
>> The operation has completed successfully.
>> The operation has completed successfully.
>> meta-data=/dev/sdf1              isize=2048   agcount=4, agsize=6400 blks
>>          =                       sectsz=4096  attr=2, projid32bit=1
>>          =                       crc=1        finobt=0, sparse=0
>> data     =                       bsize=4096   blocks=25600, imaxpct=25
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>> log      =internal log           bsize=4096   blocks=1608, version=2
>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> Warning: The kernel is still using the old partition table.
>> The new table will be used at the next reboot.
>> The operation has completed successfully.
>>
>> [@~]# ceph-disk  zap /dev/sdf
>> /dev/sdf1: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
>> 100+0 records in
>> 100+0 records out
>> 104857600 bytes (105 MB) copied, 0.946816 s, 111 MB/s
>> 110+0 records in
>> 110+0 records out
>> 115343360 bytes (115 MB) copied, 0.876412 s, 132 MB/s
>> Caution: invalid backup GPT header, but valid main header; regenerating
>> backup header from main header.
>>
>> Warning! Main and backup partition tables differ! Use the 'c' and 'e'
>> options on the recovery & transformation menu to examine the two tables.
>>
>> Warning! One or more CRCs don't match. You should repair the disk!
>>
>> ************************************************************************
>> ****
>> Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but
>> disk
>> verification and recovery are STRONGLY recommended.
>> ************************************************************************
>> ****
>> GPT data structures destroyed! You may now partition the disk using
>> fdisk or other utilities.
>> Creating new GPT entries.
>> The operation has completed successfully.
>>
>>
>>
>> [@ ~]# fdisk -l /dev/sdf
>> WARNING: fdisk GPT support is currently new, and therefore in an
>> experimental phase. Use at your own discretion.
>>
>> Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 4096 bytes
>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>> Disk label type: gpt
>> Disk identifier: 7DB3B9B6-CD8E-41B5-85BA-3ABB566BAF8E
>>
>>
>> #         Start          End    Size  Type            Name
>>
>>
>> [@ ~]# ceph-volume lvm prepare --bluestore --data /dev/sdf
>> Running command: /bin/ceph-authtool --gen-print-key
>> Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
>> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
>> 8a2440c2-55a3-4b09-8906-965c25e36066
>>  stderr: 2018-06-02 17:00:47.309487 7f5a083c1700 -1 auth: unable to find
>> a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file
>> or directory
>>  stderr: 2018-06-02 17:00:47.309502 7f5a083c1700 -1 monclient: ERROR:
>> missing keyring, cannot use cephx for authentication
>>  stderr: 2018-06-02 17:00:47.309505 7f5a083c1700  0 librados:
>> client.bootstrap-osd initialization error (2) No such file or directory
>>  stderr: [errno 2] error connecting to the cluster
>> -->  RuntimeError: Unable to create a new OSD id
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


