Re: OSD Bluestore Migration Issues


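For context, the relevant excerpt of ceph osd tree output, with osd.0 already marked destroyed: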
-2        21.81000             host node24
  0   hdd   7.26999                 osd.0             destroyed        0 1.00000
  8   hdd   7.26999                 osd.8                    up  1.00000 1.00000
 16   hdd   7.26999                 osd.16                   up  1.00000 1.00000

Should I run these before retrying without the osd-id specified?
# ceph osd crush remove osd.$ID
# ceph auth del osd.$ID
# ceph osd rm osd.$ID
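If I understand the Luminous tooling correctly, ceph osd purge should collapse those three steps into one, though I have not verified that on 12.2.2:

# ceph osd purge $ID --yes-i-really-mean-it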

And will it then fill in the missing osd.0?
I will set the norebalance flag first to prevent data reshuffling when the OSD is removed from the CRUSH map.
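That is, roughly:

# ceph osd set norebalance
# ... remove and re-create the OSD ...
# ceph osd unset norebalance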

Thanks,

Reed

On Jan 9, 2018, at 2:05 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:

On Tue, Jan 9, 2018 at 2:19 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
Hi ceph-users,

Hoping that this is something small that I am overlooking, but I could use the group mind's help.

Ceph 12.2.2, Ubuntu 16.04 environment.
OSD (0) is an 8TB spinner (/dev/sda) and I am moving from a filestore journal to a block.db and WAL device on an NVMe partition (/dev/nvme0n1p5).

I have an OSD that I am trying to convert to bluestore and running into some
trouble.

I started with the steps here, up to the ceph-volume create statement, which doesn't work:
http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/
Worth mentioning: I also flushed the journal on the NVMe partition before nuking the OSD.

$ sudo ceph-osd -i 0 --flush-journal


So I first started with this command:

$ sudo ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p5 --osd-id 0


Pastebin to the ceph-volume log: https://pastebin.com/epkM3aP6

However, the OSD doesn't start.
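For what it's worth, I checked from the systemd side, assuming the standard ceph-osd@ unit naming:

$ sudo systemctl status ceph-osd@0
$ sudo journalctl -u ceph-osd@0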

I was just able to replicate this by using an ID that doesn't exist in the cluster. On a cluster with just one OSD (with an ID of 0) I created an OSD with --osd-id 3, and had the exact same results.


Pastebin to ceph-osd log: https://pastebin.com/9qEsAJzA

I tried restarting the process by deleting the LVM structures and zapping the disk using ceph-volume.
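The zap step was along these lines, assuming ceph-volume lvm zap takes both the data disk and the db partition:

$ sudo ceph-volume lvm zap /dev/sda
$ sudo ceph-volume lvm zap /dev/nvme0n1p5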
This time using prepare and activate instead of create.

$ sudo ceph-volume lvm prepare --bluestore --data /dev/sda --block.db /dev/nvme0n1p5 --osd-id 0

$ sudo ceph-volume lvm activate --bluestore 0 227e1721-cd2e-4d7e-bb48-bc2bb715a038
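To sanity-check what ceph-volume recorded for the OSD, assuming this build has the list subcommand:

$ sudo ceph-volume lvm list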


I also enabled the ceph-volume systemd unit per
http://docs.ceph.com/docs/master/install/manual-deployment/

$ sudo systemctl enable ceph-volume@lvm-0-227e1721-cd2e-4d7e-bb48-bc2bb715a038


Same results.

Any help is greatly appreciated.

Could you try without passing --osd-id?
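Something like:

$ sudo ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p5

so that the ID gets allocated by the cluster instead.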

Thanks,

Reed

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

