Jason Dillaman wrote:
> gwcli doesn't allow you to shrink images (it silently ignores you).
> Use 'rbd resize' and restart the GWs to pick up the new size.
This is exactly what does not work in my case: the larger size is
stored in /etc/target/saveconfig.json, which is why the RBD is resized
back up on rbd-target-gw restart.
From /var/log/tcmu-runner.log:
2018-06-29 17:36:32.715 5386 [ERROR] tcmu_rbd_check_image_size:800
rbd/libvirt.tower-prime-e-3tb: Mismatched sizes. RBD image size
3000596692992. Requested new size 3298534883328.
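As a quick sanity check (my own arithmetic, not from the log), the
"requested" size is exactly 3 TiB while the real image is about 2.73 TiB:

```python
# Compare the two sizes reported by tcmu-runner above.
TiB = 2 ** 40
requested = 3298534883328  # "Requested new size" from the log
actual = 3000596692992     # "RBD image size" from the log

print(requested / TiB)           # exactly 3.0 -> the rounded-up 3 TiB
print(round(actual / TiB, 2))    # ~2.73 -> the intended ~2.7T image
```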
From /etc/target/saveconfig.json:
"attributes": {
    "dev_config": "rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30",
    "dev_size": 3298534883328,
    ...
"config": "rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30",
"name": "libvirt.tower-prime-e-3tb",
"size": 3298534883328,
Why tcmu-runner can't take the size from the RBD image itself, I don't know.
> it back to gwcli/disks), I discover that its size is rounded up to 3
> TiB, i.e. 3072 GiB or 786432*4M Ceph objects. As we know, GPT is
> [...] Also, when I restart rbd-target-gw.service, it gets resized
> back up to 3.0T as shown by 'targetcli ls /' (there, it is still 3.0T).
Well, I see this size in /etc/target/saveconfig.json, and I see how
the RBD is extended in /var/log/tcmu-runner.log. And I remember that I
once lazily added a 2.7T RBD, specifying its size as 3T in gwcli. Now
I am trying to fix that without deleting/recreating the RBD...
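For what it's worth, here is a rough Python sketch of the JSON surgery
I have in mind. fix_sizes() is my own hypothetical helper, not part of
ceph-iscsi or targetcli; only the key names ("dev_size", "size"), the
image name, and the sizes come from the excerpts above. I would stop
the gateway services and back up saveconfig.json before trying
anything like this.

```python
# Hypothetical sketch: rewrite stale "dev_size"/"size" fields in a
# saveconfig.json-style structure so they match the real RBD size.
# fix_sizes() is my own helper, not a ceph-iscsi/targetcli API.

def fix_sizes(config, image_name, real_size):
    """Recursively update any dev_size/size field in a dict whose
    sibling string values mention image_name. Returns count changed."""
    changed = 0

    def walk(node):
        nonlocal changed
        if isinstance(node, dict):
            # Does this dict reference the image (e.g. via dev_config/name)?
            refers = any(isinstance(v, str) and image_name in v
                         for v in node.values())
            for key in ("dev_size", "size"):
                if refers and isinstance(node.get(key), int) \
                        and node[key] != real_size:
                    node[key] = real_size
                    changed += 1
            for v in node.values():
                walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)

    walk(config)
    return changed

# Demo with the values from the saveconfig.json excerpt above:
cfg = {
    "attributes": {
        "dev_config": "rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30",
        "dev_size": 3298534883328,
    },
    "config": "rbd/libvirt/tower-prime-e-3tb;osd_op_timeout=30",
    "name": "libvirt.tower-prime-e-3tb",
    "size": 3298534883328,
}
n = fix_sizes(cfg, "tower-prime-e-3tb", 3000596692992)
print(n, cfg["size"], cfg["attributes"]["dev_size"])
```

In practice one would json.load() /etc/target/saveconfig.json, apply
this, json.dump() it back, then run 'rbd resize --allow-shrink' and
restart the gateways, assuming they actually honour the edited file --
which is exactly the part I'm unsure about.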
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com