On a Thursday in 2023, Peter Krempa wrote:
> Allow users to easily resize 'raw' images on block devices to the full
> capacity of the block device. Obviously this won't work on file-backed
> storage (filling the remaining capacity is most likely wrong) or for
> formats with metadata due to the overhead.
>
> Signed-off-by: Peter Krempa <pkrempa@xxxxxxxxxx>
> ---
>  docs/manpages/virsh.rst          |  6 +++++-
>  include/libvirt/libvirt-domain.h |  1 +
>  src/libvirt-domain.c             |  5 +++++
>  tools/virsh-domain.c             | 10 +++++++++-
>  4 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
> index fa9d356e15..327553f6d7 100644
> --- a/tools/virsh-domain.c
> +++ b/tools/virsh-domain.c
> @@ -2949,9 +2949,12 @@ static const vshCmdOptDef opts_blockresize[] = {
>      },
>      {.name = "size",
>       .type = VSH_OT_INT,
> -     .flags = VSH_OFLAG_REQ,
>       .help = N_("New size of the block device, as scaled integer (default KiB)")
>      },
> +    {.name = "capacity",
> +     .type = VSH_OT_BOOL,
> +     .help = N_("resize to capacity of source (block device)")
> +    },
>      {.name = NULL}
>  };
>
> @@ -2963,6 +2966,11 @@ cmdBlockresize(vshControl *ctl, const vshCmd *cmd)
>      unsigned long long size = 0;
>      unsigned int flags = 0;
>
> +    VSH_ALTERNATIVE_OPTIONS("size", "capacity");
> +
> +    if (vshCommandOptBool(cmd, "capacity"))
> +        flags |= VIR_DOMAIN_BLOCK_RESIZE_CAPACITY;
> +
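For reference, with the series applied the full-capacity resize would
presumably be invoked roughly like this (the domain name and disk path
below are made up):

    # grow the raw image to the full size of the underlying block device
    virsh blockresize demo-guest /dev/sdb --capacity

    # the existing explicit-size form keeps working
    virsh blockresize demo-guest /dev/sdb --size 20GiB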
Below, there's some silly code that runs (or gets optimized away) even
if size wasn't specified:

    /* Prefer the older interface of KiB.  */
    if (size % 1024 == 0)
        size /= 1024;

Avoiding it for the --capacity flag would probably require separate
bool variables. That would negate the need for the new macro.

Jano
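A rough sketch of that separate-bool approach, assuming
vshCommandOptScaledInt keeps reporting via its return value whether
--size was actually given (0 when absent, positive when present); the
capacity/have_size names are just placeholders:

    bool capacity = vshCommandOptBool(cmd, "capacity");
    bool have_size = false;
    int rc;

    /* rc > 0: --size was given and parsed; rc == 0: it was omitted */
    if ((rc = vshCommandOptScaledInt(ctl, cmd, "size", &size, 1024,
                                     ULLONG_MAX)) < 0)
        return false;
    have_size = rc > 0;

    if (capacity && have_size) {
        vshError(ctl, "%s", _("--size and --capacity are mutually exclusive"));
        return false;
    }

    if (capacity)
        flags |= VIR_DOMAIN_BLOCK_RESIZE_CAPACITY;

    if (have_size) {
        /* Prefer the older interface of KiB.  */
        if (size % 1024 == 0)
            size /= 1024;
        else
            flags |= VIR_DOMAIN_BLOCK_RESIZE_BYTES;
    }

The KiB preference then only runs when a size was actually supplied, and
the explicit error check takes the place of VSH_ALTERNATIVE_OPTIONS.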
>      if (vshCommandOptStringReq(ctl, cmd, "path", (const char **) &path) < 0)
>          return false;
> --
> 2.43.0