Re: OSD Bluestore Migration Issues

After removing the --osd-id flag, everything came up normally.

 -2        21.82448             host node24
  0   hdd   7.28450                 osd.0                 up  1.00000 1.00000
  8   hdd   7.26999                 osd.8                 up  1.00000 1.00000
 16   hdd   7.26999                 osd.16                up  1.00000 1.00000

Given how vanilla this ceph-volume command is, is this something ceph-deploy-able?

I’m seeing ceph-deploy 1.5.39 as the latest stable release.

ceph-deploy --username root disk zap $NODE:$HDD
ceph-deploy --username root osd create $NODE:$HDD:$SSD

In that example, $HDD is the main OSD device and $SSD is the NVMe partition I want to use for block.db (and block.wal). Or is the syntax different from the filestore days?
I am also assuming that no --bluestore flag is necessary, since from what I read bluestore is now the default and filestore is what requires explicit intervention.
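
In case the colon syntax has been retired, here is a sketch of what I would guess the flag-style equivalent looks like; the --data/--block-db/--block-wal flags are an assumption on my part from skimming newer ceph-deploy docs, not something I have run:

$ ceph-deploy --username root disk zap $NODE $HDD
$ ceph-deploy --username root osd create --data $HDD --block-db $SSD $NODE

with --block-wal added the same way if the WAL should live on its own partition.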

Thanks,

Reed

On Jan 9, 2018, at 2:10 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:

-2        21.81000             host node24
  0   hdd   7.26999                 osd.0             destroyed        0 1.00000
  8   hdd   7.26999                 osd.8                    up  1.00000 1.00000
 16   hdd   7.26999                 osd.16                   up  1.00000 1.00000

Should I run these prior to re-running the create without --osd-id specified?
# ceph osd crush remove osd.$ID
# ceph auth del osd.$ID
# ceph osd rm osd.$ID

And then it should fill in the missing osd.0.
I will set the norebalance flag first to prevent a data reshuffle when the OSD is removed from the CRUSH map; the full sequence I have in mind is sketched below.
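
Putting it together (the norebalance set/unset is my addition; the removal steps and the create command are from earlier in this thread, with the unset done once the new OSD is back up):

$ ceph osd set norebalance
$ ceph osd crush remove osd.0
$ ceph auth del osd.0
$ ceph osd rm osd.0
$ sudo ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p5
$ ceph osd unset norebalance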

Thanks,

Reed

On Jan 9, 2018, at 2:05 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:

On Tue, Jan 9, 2018 at 2:19 PM, Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
Hi ceph-users,

Hoping this is something small that I am overlooking, but I could use the group mind's help.

Ceph 12.2.2, Ubuntu 16.04 environment.
OSD 0 is an 8TB spinner (/dev/sda), and I am moving from a filestore journal to a block.db and WAL device on an NVMe partition (/dev/nvme0n1p5).

I have an OSD that I am trying to convert to bluestore and am running into some trouble.

I started with the steps here, up to the ceph-volume create statement, which doesn't work:
http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/
Worth mentioning: I also flushed the journal on the NVMe partition before nuking the OSD.

$ sudo ceph-osd -i 0 --flush-journal
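
(For anyone following along: the daemon has to be stopped before the journal can be flushed. Assuming the standard ceph-osd@<id> systemd unit naming, that would be something like:

$ sudo systemctl stop ceph-osd@0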


So I first started with this command:

$ sudo ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p5 --osd-id 0


Pastebin to the ceph-volume log: https://pastebin.com/epkM3aP6

However, the OSD doesn't start.

I was just able to replicate this by using an ID that doesn't exist in the cluster. On a cluster with just one OSD (with an ID of 0), I created an OSD with --osd-id 3 and had the exact same results.
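
Before passing --osd-id, it is worth checking whether that ID is actually present in the osdmap. A quick sanity check (not a fix, just a way to see the OSD's current state, using osd.0 from this thread as the example):

$ ceph osd tree | grep osd.0
$ ceph osd dump | grep osd.0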


Pastebin to ceph-osd log: https://pastebin.com/9qEsAJzA

I tried restarting the process by deleting the LVM structures and zapping the disk using ceph-volume, this time using prepare and activate instead of create.

$ sudo ceph-volume lvm prepare --bluestore --data /dev/sda --block.db /dev/nvme0n1p5 --osd-id 0

$ sudo ceph-volume lvm activate --bluestore 0 227e1721-cd2e-4d7e-bb48-bc2bb715a038
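
The UUID passed to activate is the OSD fsid; if it is not at hand, it can be read back from the prepared volume, assuming a ceph-volume recent enough to have the list subcommand:

$ sudo ceph-volume lvm list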


I also enabled the ceph-volume systemd unit per
http://docs.ceph.com/docs/master/install/manual-deployment/

$ sudo systemctl enable ceph-volume@lvm-0-227e1721-cd2e-4d7e-bb48-bc2bb715a038


Same results.
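
For diagnosing, the OSD daemon can be checked directly; assuming the standard unit naming, something like:

$ sudo systemctl status ceph-osd@0
$ sudo journalctl -u ceph-osd@0 -n 50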

Any help is greatly appreciated.

Could you try without passing --osd-id?

Thanks,

Reed

