Re: Migration osds to Bluestore on Ubuntu 14.04 Trusty

Hi Klimenko,

I did a migration from Filestore to BlueStore on CentOS 7 with Ceph version 12.2.5.
As it is a production environment, I removed and recreated the OSDs on one server at a time, online.
Although I migrated on CentOS, I created the OSDs manually, so you can give the same steps a try.

Apart from one RAID 1 disk for the system, each server has one SSD (sdg) for WAL/DB and five SATA disks (sdb-sdf)
for storage. Here are my steps for replacing osd.16 and osd.17:

1. set osds out and remove them from the cluster
ceph osd out 16 && ceph osd out 17
systemctl stop ceph-osd@16 && systemctl stop ceph-osd@17
ceph osd crush remove osd.16 && ceph osd crush remove osd.17
ceph auth del osd.16 && ceph auth del osd.17
ceph osd rm osd.16 && ceph osd rm osd.17
(Deleting the old auth keys here keeps the "ceph auth add" in step 10 from failing.)
P.S. If you are only replacing a few disks, you don't have to wait until recovery is done, as the cluster will heal itself.
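If you do want to wait for recovery to finish before touching the next server, the check can be scripted; a minimal sketch (the function name is mine), assuming the `ceph` CLI is configured with admin credentials:

```shell
# Poll cluster health until recovery finishes and the cluster is clean.
wait_for_health_ok() {
    while [ "$(ceph health)" != "HEALTH_OK" ]; do
        echo "cluster still recovering, waiting..."
        sleep 30
    done
}
```

Call `wait_for_health_ok` between servers.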

2. clear the partition information on the disks
sgdisk --zap-all /dev/sdb  
sgdisk --zap-all /dev/sdc 

3. make meta partitions
sgdisk --new=1:0:+1GB --change-name=1:osd_data_16 --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdb  
sgdisk --largest-new=2 --change-name=2:bluestore_block_16 --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdb
sgdisk --new=1:0:+1GB --change-name=1:osd_data_17 --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdc
sgdisk --largest-new=2 --change-name=2:bluestore_block_17 --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdc  

4. format partitions
mkfs -t xfs -f -i size=2048 -- /dev/sdb1
mkfs -t xfs -f -i size=2048 -- /dev/sdc1  

5. make wal/db partitions
sgdisk --new=1:0:+1GB --change-name=1:bluestore_block_db_16 --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=2:0:+8GB --change-name=2:bluestore_block_wal_16 --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=3:0:+1GB --change-name=3:bluestore_block_db_17 --partition-guid=3:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=4:0:+8GB --change-name=4:bluestore_block_wal_17 --partition-guid=4:$(uuidgen) --mbrtogpt -- /dev/sdg  

6. create osd
ceph osd create
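`ceph osd create` prints the id it allocates (the lowest free one), so after removing osd.16 and osd.17 you should get those numbers back. When scripting this it is worth checking; a small sketch (the helper is mine, not part of the original steps):

```shell
# Allocate a new OSD id and fail loudly if it is not the one we expect;
# ids are handed out lowest-free-first, so after removing osd.16 we
# should get 16 back.
allocate_osd_id() {
    local expected="$1" got
    got=$(ceph osd create)
    if [ "${got}" != "${expected}" ]; then
        echo "expected osd id ${expected}, got ${got}" >&2
        return 1
    fi
    echo "${got}"
}
```

Usage: `allocate_osd_id 16` (run once per OSD being rebuilt).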

7. prepare the osd (the mount points must exist first)
mkdir -p /var/lib/ceph/osd/ceph-16 /var/lib/ceph/osd/ceph-17
mount /dev/sdb1 /var/lib/ceph/osd/ceph-16
mount /dev/sdc1 /var/lib/ceph/osd/ceph-17
echo "bluestore" > /var/lib/ceph/osd/ceph-16/type
echo "bluestore" > /var/lib/ceph/osd/ceph-17/type

8. edit ceph.conf

[osd.16]
host = ceph-osd1
osd data = /var/lib/ceph/osd/ceph-16
bluestore block path = /dev/disk/by-partlabel/bluestore_block_16
bluestore block db path = /dev/disk/by-partlabel/bluestore_block_db_16
bluestore block wal path = /dev/disk/by-partlabel/bluestore_block_wal_16

[osd.17]
host = ceph-osd1
osd data = /var/lib/ceph/osd/ceph-17
bluestore block path = /dev/disk/by-partlabel/bluestore_block_17
bluestore block db path = /dev/disk/by-partlabel/bluestore_block_db_17
bluestore block wal path = /dev/disk/by-partlabel/bluestore_block_wal_17

9. make keys

ceph-osd -i 16 --mkkey --mkfs
ceph-osd -i 17 --mkkey --mkfs  

10. authorize

ceph auth add osd.16 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' -i /var/lib/ceph/osd/ceph-16/keyring
ceph auth add osd.17 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' -i /var/lib/ceph/osd/ceph-17/keyring  

11. edit crushmap

ceph osd crush add 16 1 host=ceph-osd1
ceph osd crush add 17 1 host=ceph-osd1
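The second argument to `ceph osd crush add` is the CRUSH weight; `1` works, but the usual convention is the disk capacity in TiB. A sketch for computing it from the device size (the helper is mine, not part of the original steps):

```shell
# Print a CRUSH weight for a block device: its size in TiB, two decimals.
crush_weight() {
    local bytes
    bytes=$(blockdev --getsize64 "$1")
    awk -v b="${bytes}" 'BEGIN { printf "%.2f\n", b / (1024 ^ 4) }'
}
```

Usage: `ceph osd crush add 16 "$(crush_weight /dev/sdb)" host=ceph-osd1`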

12. start OSDs
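The post lists no commands for this step; a minimal sketch, assuming the same systemd units used in step 1:

```shell
# Start (and enable, so they come back after a reboot) the rebuilt OSDs.
start_osds() {
    local id
    for id in "$@"; do
        systemctl enable "ceph-osd@${id}"
        systemctl start "ceph-osd@${id}"
    done
}
```

Usage: `start_osds 16 17`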

Last but not least, you may need to edit udev rules and chown the ceph OSD directories so that the
ceph user has permission to write to the disks and directories.
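A minimal sketch of those ownership fixes; the partlabel glob matches the partitions created in steps 3 and 5, and the udev rule in the trailing comment (its path and match keys) is my assumption rather than something from the original post:

```shell
# Give the ceph user ownership of the OSD directories and the block/db/wal
# partitions it has to write to.
fix_osd_permissions() {
    local id
    for id in "$@"; do
        chown -R ceph:ceph "/var/lib/ceph/osd/ceph-${id}"
    done
    # the by-partlabel symlinks resolve to the underlying partitions
    chown ceph:ceph /dev/disk/by-partlabel/bluestore_block_*
}
# To keep device ownership across reboots, a udev rule along these lines
# (e.g. in /etc/udev/rules.d/99-ceph-bluestore.rules) can help:
# ENV{ID_PART_ENTRY_NAME}=="bluestore_block*", OWNER="ceph", GROUP="ceph"
```

Usage: `fix_osd_permissions 16 17`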

I hope this helps. Thanks.

Klimenko, Roman <RKlimenko@xxxxxxxxx> wrote on Friday, November 16, 2018 at 8:35 AM:

Hi everyone!

As I noticed, ceph-volume lacks Ubuntu Trusty compatibility: https://tracker.ceph.com/issues/23496

So I can't follow these instructions: http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/

Do I have any other option to migrate my Filestore OSDs (Luminous 12.2.9) to Bluestore?

P.S. This is a test environment, so I can try anything.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
