Re: Migration from filestore to bluestore

Hi, I have done it like this: ... your mileage may vary depending on your creation parameters ...

# cat bs.sh
#!/bin/bash
# Convert one Filestore OSD to BlueStore in place, keeping its OSD id.
# Usage: bs.sh <osd-id>
ID=$1

echo "wait for cluster ok"
while ! ceph health | grep HEALTH_OK ; do echo -n "."; sleep 10 ; done

# Take the OSD out and wait until the cluster has rebalanced back to HEALTH_OK.
echo "ceph osd out $ID"
ceph osd out $ID
sleep 10
while ! ceph health | grep HEALTH_OK ; do sleep 10 ; done

echo "systemctl stop ceph-osd@$ID.service"
systemctl stop ceph-osd@$ID.service
sleep 60

# Derive the whole-disk device from the mounted data partition.
# Note: cut -f1 -d"p" only works for NVMe-style names (/dev/nvme0n1p1 -> /dev/nvme0n1);
# the trailing space in the grep pattern keeps ceph-1 from also matching ceph-10, ceph-11, ...
DEVICE=`mount | grep "/var/lib/ceph/osd/ceph-$ID " | cut -f1 -d"p"`

umount /var/lib/ceph/osd/ceph-$ID
echo "ceph-disk zap $DEVICE"
ceph-disk zap $DEVICE

# Mark the OSD destroyed (its id stays reusable) and re-create it as BlueStore under the same id.
ceph osd destroy $ID --yes-i-really-mean-it
echo "ceph-disk prepare --bluestore $DEVICE --osd-id $ID"
ceph-disk prepare --bluestore $DEVICE --osd-id $ID
sleep 10

ceph osd metadata $ID
ceph -s

echo "wait for cluster ok"
while ! ceph health | grep HEALTH_OK ; do echo -n "."; sleep 10 ; done
ceph -s
echo "proceed with next"




Gerhard W. Recher

net4sec UG (haftungsbeschränkt)
Leitenweg 6
86929 Penzing

+49 171 4802507
On 20.11.2017 at 14:34, Iban Cabrillo wrote:
>
> Hi Wido,
>   The disk was empty, and I checked that there were no remapped PGs
> before running ceph-disk prepare. Should I re-run ceph-disk again?
>
> Regards, i
>
>
> On Mon, 20 Nov 2017 at 14:12, Wido den Hollander <wido@xxxxxxxx
> <mailto:wido@xxxxxxxx>> wrote:
>
>
>     > On 20 November 2017 at 14:02, Iban Cabrillo
>     <cabrillo@xxxxxxxxxxxxxx <mailto:cabrillo@xxxxxxxxxxxxxx>> wrote:
>     >
>     >
>     > Hi cephers,
>     >   I was trying to migrate from Filestore to BlueStore following the
>     > instructions, but after the ceph-disk prepare the new OSD has not
>     > joined the cluster again:
>     >
>     >    [root@cephadm ~]# ceph osd tree
>     > ID CLASS WEIGHT   TYPE NAME                STATUS    REWEIGHT PRI-AFF
>     > -1       58.21509 root default
>     > -7       58.21509     datacenter 10GbpsNet
>     > -2       29.12000         host cephosd01
>     >  1   hdd  3.64000             osd.1               up  1.00000 1.00000
>     >  3   hdd  3.64000             osd.3               up  1.00000 1.00000
>     >  5   hdd  3.64000             osd.5               up  1.00000 1.00000
>     >  7   hdd  3.64000             osd.7               up  1.00000 1.00000
>     >  9   hdd  3.64000             osd.9               up  1.00000 1.00000
>     > 11   hdd  3.64000             osd.11              up  1.00000 1.00000
>     > 13   hdd  3.64000             osd.13              up  1.00000 1.00000
>     > 15   hdd  3.64000             osd.15              up  1.00000 1.00000
>     > -3       29.09509         host cephosd02
>     >  0   hdd  3.63689             osd.0        destroyed        0 1.00000
>     >  2   hdd  3.63689             osd.2               up  1.00000 1.00000
>     >  4   hdd  3.63689             osd.4               up  1.00000 1.00000
>     >  6   hdd  3.63689             osd.6               up  1.00000 1.00000
>     >  8   hdd  3.63689             osd.8               up  1.00000 1.00000
>     > 10   hdd  3.63689             osd.10              up  1.00000 1.00000
>     > 12   hdd  3.63689             osd.12              up  1.00000 1.00000
>     > 14   hdd  3.63689             osd.14              up  1.00000 1.00000
>     > -8              0     datacenter 1GbpsNet
>     >
>     >
>     > The state is still 'destroyed'.
>     >
>     > The operation has completed successfully.
>     > [root@cephosd02 ~]# ceph-disk prepare --bluestore /dev/sda --osd-id 0
>     > The operation has completed successfully.
>
>     Did you wipe the disk yet? Make sure it's completely empty before
>     you re-create the OSD.
>
>     Wido
>
>     > The operation has completed successfully.
>     > The operation has completed successfully.
>     > meta-data=/dev/sda1              isize=2048   agcount=4, agsize=6400 blks
>     >          =                       sectsz=512   attr=2, projid32bit=1
>     >          =                       crc=1        finobt=0, sparse=0
>     > data     =                       bsize=4096   blocks=25600, imaxpct=25
>     >          =                       sunit=0      swidth=0 blks
>     > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
>     > log      =internal log           bsize=4096   blocks=864, version=2
>     >          =                       sectsz=512   sunit=0 blks, lazy-count=1
>     > realtime =none                   extsz=4096   blocks=0, rtextents=0
>     >
>     > The metadata was on an SSD disk.
>     >
>     > In the logs I only see this:
>     >
>     > 2017-11-20 14:00:48.536252 7fc2d149dd00 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
>     > 2017-11-20 14:01:08.788158 7f4a9165fd00  0 set uid:gid to 167:167 (ceph:ceph)
>     > 2017-11-20 14:01:08.788179 7f4a9165fd00  0 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process (unknown), pid 115029
>     >
>     > Any advice?
>     >
>     > Regards, I
>     >
>     > --
>     >
>     ############################################################################
>     > Iban Cabrillo Bartolome
>     > Instituto de Fisica de Cantabria (IFCA)
>     > Santander, Spain
>     > Tel: +34942200969
>     > PGP PUBLIC KEY:
>     > http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>     >
>     ############################################################################
>     > Bertrand Russell: "The problem with the world is that the stupid are sure of everything and the intelligent are full of doubt."
>
>
>
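
Regarding Wido's point above about the disk being completely empty: a minimal sketch of wiping a data disk before re-creating the OSD, assuming the OSD is already stopped and unmounted and /dev/sdX is a placeholder for the device:

ceph-disk zap /dev/sdX                                     # wipe the partition table and ceph partitions
wipefs --all /dev/sdX                                      # remove any leftover filesystem signatures
dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct   # zero the first 100 MiB for good measure

After that, ceph-disk prepare --bluestore should be starting from a clean device.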


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
