Hello,
I'm wondering if it's possible to grow the volumes backing my OSDs (e.g. in a cloud/VM environment) and then use pvresize/lvextend so the extra space becomes usable by my pool.
I am testing with the following environment:
* Running on cloud provider (Google Cloud)
* 3 nodes, 1 OSD each
* 1 storage pool with "size" of 3 (data replicated on all nodes)
* Initial disk size of 100 GB on each node, initialized as BlueStore OSDs (deployed roughly as sketched after this list)
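For context, each OSD lives on an LVM logical volume created by ceph-volume. The deployment was along these lines (a rough sketch; /dev/sdb is a placeholder for whatever the data disk is called on each node):

# Each OSD was created on its own 100 GB disk, roughly like this
# (/dev/sdb stands in for the actual data disk on each node):
$ sudo ceph-volume lvm create --bluestore --data /dev/sdb

# which leaves the OSD on an LVM LV (VG "ceph-<uuid>", LV "osd-block-<uuid>"):
$ sudo ceph-volume lvm list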
I grew all three volumes (100 GB -> 150 GB) being used as OSDs in the Google console, then ran pvresize/lvextend on all devices and rebooted the nodes one by one. In the end, the cluster does recognize the additional raw space, but it shows up as used rather than available.
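Concretely, on each node I did roughly the following after growing the disk in the Google console (the device and VG/LV names below are placeholders for the UUID-based names ceph-volume generates; the real ones show up in pvs/lvs):

# Make sure the kernel sees the new 150 GB size (a reboot does this too):
$ echo 1 | sudo tee /sys/class/block/sdb/device/rescan

# Grow the PV to fill the larger disk:
$ sudo pvresize /dev/sdb

# Grow the OSD's LV into the new free space (placeholder VG/LV names):
$ sudo lvextend -l +100%FREE /dev/ceph-<vg-uuid>/osd-block-<osd-uuid>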
Before resize (there's ~1 GB of data in my pool):
$ ceph -s
  cluster:
    id:     553ca7bd-925a-4dc5-a928-563b520842de
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02, ceph03
    mds: cephfs-1/1/1 up {0=ceph01=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   2 pools, 200 pgs
    objects: 281 objects, 1024 MB
    usage:   6316 MB used, 293 GB / 299 GB avail
    pgs:     200 active+clean
After resize:
$ ceph -s
  cluster:
    id:     553ca7bd-925a-4dc5-a928-563b520842de
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph01(active), standbys: ceph02, ceph03
    mds: cephfs-1/1/1 up {0=ceph02=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   2 pools, 200 pgs
    objects: 283 objects, 1024 MB
    usage:   156 GB used, 293 GB / 449 GB avail
    pgs:     200 active+clean
So, after "growing" each OSD by 50 GB (with the amount of stored data unchanged), the new 50 GB per OSD shows up as used space, and the pools' MAX AVAIL stays the same.
$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    449G     293G      156G         34.70
POOLS:
    NAME                ID     USED      %USED     MAX AVAIL     OBJECTS
    cephfs_data         2      1024M     1.09      92610M        261
    cephfs_metadata     3      681k      0         92610M        22
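If it helps with diagnosis, I can also share per-OSD and LVM-level views; the commands I'd use for that are:

# Per-OSD SIZE / USE / AVAIL as the cluster sees it:
$ ceph osd df

# On each node, confirm the OSD's LV really did grow to 150 GB:
$ sudo lvs -o lv_name,vg_name,lv_size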
I've searched around on the Internet and looked through the documentation to see whether (and how) growing BlueStore OSD volumes is possible, and haven't come up with anything. I'd greatly appreciate any help if anyone has experience with this. Thanks.