Re: ceph-bluestore-tool bluefs-bdev-expand on mimic

Hi Valentin,

pre-Nautilus releases don't support expanding the OSD main device; only the DB and/or WAL devices can be expanded.

A corresponding error message was recently added, see: https://tracker.ceph.com/issues/39144
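To make the distinction concrete, here is a minimal sketch of a guard one might put in an expansion script. It assumes the "description" field from ceph-bluestore-tool show-label distinguishes the device roles; "main" appears in the output below, while "bluefs db" / "bluefs wal" are the values I'd expect for DB/WAL devices (an assumption, not confirmed in this thread):

```shell
# Hypothetical guard: on pre-Nautilus, only DB/WAL bluefs devices can be
# expanded, so refuse to proceed when the label says "main".
# "main" is taken from the show-label output in this thread; the
# "bluefs db"/"bluefs wal" values are assumed.
description="main"   # would come from parsing show-label output

case "$description" in
    "bluefs db"|"bluefs wal")
        verdict="expandable on pre-Nautilus" ;;
    "main")
        verdict="main device: expansion requires Nautilus or later" ;;
esac
echo "$verdict"
```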


Thanks,

Igor


On 7/3/2019 10:22 AM, Valentin Bajrami wrote:
Hello everyone,

Recently I was trying to expand an OSD disk using bluefs-bdev-expand.
Since this OSD lives on a virtual machine managed by oVirt, I first resized
the virtual disk of the virtual machine. The result from "lsblk" was:

vdb                252:16   0  200G  0 disk
└─ceph--bc94ec07--2ac3--4965--8750--bb9e42ec670f-osd--block--aa7de90e--0442--4cd9--9927--a17dd666ea74
                   253:2    0  100G  0 lvm

As you can see, the block device /dev/vdb is 200G but the logical volume
is still 100G. I then ran the following:

lvextend -L+100G
/dev/ceph--bc94ec07--2ac3--4965--8750--bb9e42ec670f/osd--block--aa7de90e--0442--4cd9--9927--a17dd666ea74

After using lvextend I then ran:

# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-7/
inferring bluefs devices from bluestore path
{
     "/var/lib/ceph/osd/ceph-7//block": {
         "osd_uuid": "aa7de90e-0442-4cd9-9927-a17dd666ea74",
         "size": 107372085248,
         "btime": "2019-07-02 13:56:58.589154",
         "description": "main",
         "bluefs": "1",
         "ceph_fsid": "6effd8df-d109-4ef3-9cfa-c68f9756a54b",
         "kv_backend": "rocksdb",
         "magic": "ceph osd volume v026",
         "mkfs_done": "yes",
         "osd_key": "AQCJRhtdZZgTEBAA7G7fzTyj0d2r4RRa/uxaZQ==",
         "ready": "ready",
         "whoami": "7"
     }
}
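Note that the label's "size" field still reports 107372085248 bytes (roughly 100 GiB), so BlueStore has not picked up the extra space from the resized LV. The mismatch is easy to see with a little arithmetic; a minimal sketch, using the numbers from the output above (the 200G figure is the LV size lsblk would report after lvextend):

```shell
# Compare the size recorded in the BlueStore label with the resized LV.
# Both values are in bytes; label_size is the "size" field above.
label_size=107372085248                  # ~100 GiB, from show-label
lv_size=$((200 * 1024 * 1024 * 1024))    # 200 GiB LV after lvextend

if [ "$lv_size" -gt "$label_size" ]; then
    echo "label smaller than device: expansion would be needed"
else
    echo "label already matches device"
fi
```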

So the command: ceph-bluestore-tool bluefs-bdev-expand --path
/var/lib/ceph/osd/ceph-7 results in an error I currently cannot
reproduce, but the bottom line is that it doesn't expand the device.

Is bluefs-bdev-expand supported on mimic? Is there a clean way to
expand an OSD? Right now I'm running the following from ceph-deploy:

# ceph-deploy disk zap vm1-osd1 /dev/vdb
# ceph-deploy osd create vm1-osd1 --data /dev/vdb

The above deletes everything and recreates it, which is really not ideal.

Any suggestion?


Thanks in advance.

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



