Re: [REPOST PATCH v2 3/9] qemu: Adjust maxparams logic for qemuDomainGetBlockIoTune

On 11/25/2016 10:28 AM, Erik Skultety wrote:
> On Mon, Nov 21, 2016 at 06:35:48PM -0500, John Ferlan wrote:
>> Rather than using negative logic and setting the maxparams to a lesser
>> value based on which capabilities exist, alter the logic to modify the
>> maxparams based on a base value plus the found capabilities. Reduces the
>> chance that some backported feature produces an incorrect value.
>>
>> Signed-off-by: John Ferlan <jferlan@xxxxxxxxxx>
>> ---
>>  src/qemu/qemu_driver.c | 23 ++++++++++++++---------
>>  1 file changed, 14 insertions(+), 9 deletions(-)
>>
>> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
>> index d039255..87d219f 100644
>> --- a/src/qemu/qemu_driver.c
>> +++ b/src/qemu/qemu_driver.c
>> @@ -112,9 +112,12 @@ VIR_LOG_INIT("qemu.qemu_driver");
>>  
>>  #define QEMU_NB_MEM_PARAM  3
>>  
>> -#define QEMU_NB_BLOCK_IO_TUNE_PARAM  6
>> -#define QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX  13
>> -#define QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX_LENGTH 19
>> +#define QEMU_NB_BLOCK_IO_TUNE_BASE_PARAMS 6
>> +#define QEMU_NB_BLOCK_IO_TUNE_MAX_PARAMS 7
>> +#define QEMU_NB_BLOCK_IO_TUNE_LENGTH_PARAMS 6
>> +#define QEMU_NB_BLOCK_IO_TUNE_ALL_PARAMS (QEMU_NB_BLOCK_IO_TUNE_BASE_PARAMS + \
>> +                                          QEMU_NB_BLOCK_IO_TUNE_MAX_PARAMS + \
>> +                                          QEMU_NB_BLOCK_IO_TUNE_LENGTH_PARAMS)
>>  
>>  #define QEMU_NB_NUMA_PARAM 2
>>  
>> @@ -17719,7 +17722,7 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
>>      virDomainBlockIoTuneInfo reply;
>>      char *device = NULL;
>>      int ret = -1;
>> -    int maxparams = QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX_LENGTH;

NB: "current" persistent def max - moved just to make it clearer what
the max was vis-a-vis persistent vs. live.

>> +    int maxparams;
>>  
>>      virCheckFlags(VIR_DOMAIN_AFFECT_LIVE |
>>                    VIR_DOMAIN_AFFECT_CONFIG |
>> @@ -17753,11 +17756,13 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
>>              goto endjob;
>>          }
>>  
>> -        if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX))
>> -            maxparams = QEMU_NB_BLOCK_IO_TUNE_PARAM;
>> -        else if (!virQEMUCapsGet(priv->qemuCaps,
>> -                                 QEMU_CAPS_DRIVE_IOTUNE_MAX_LENGTH))
>> -            maxparams = QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX;
>> +        maxparams = QEMU_NB_BLOCK_IO_TUNE_BASE_PARAMS;
>> +        if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX))
>> +            maxparams += QEMU_NB_BLOCK_IO_TUNE_MAX_PARAMS;
>> +        if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX_LENGTH))
>> +            maxparams += QEMU_NB_BLOCK_IO_TUNE_LENGTH_PARAMS;
>> +    } else {
>> +        maxparams = QEMU_NB_BLOCK_IO_TUNE_ALL_PARAMS;
>>      }
>>  
> 
> I'm curious about the else branch. Can you elaborate more on why you
> assume qemu supports all the features when you retrieve a
> persistentdef vs livedef?

History, mainly. How does one choose otherwise? When it's the persistent
def, you can get/return all 'valid' values, whereas when it's live you
can only get/return those that the domain can actually retrieve.
Typically the order of fetching is weighted towards the version in which
each value was added. That's why I stuck group_name between the "max"
and "max_length" params - so it has a chance of being fetched instead of
one of the max_length values, since I know it was added to qemu in
between the two.

> 
> From what I understand, this is one of the APIs that you have to call twice,
> since the RPC layer does not allocate a structure of an appropriate size
> automatically. So with the first run, you say, please allocate a structure of
> size maxparams equal _ALL_PARAMS (without knowing that qemu actually supports 
> all these features) and rerun.

True - it's one of those APIs where the first call passes *nparams == 0
to indicate that you just want a count, so that you can allocate an
appropriately sized buffer for the second call.
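
For reference, a minimal caller-side sketch of that two-call pattern
against the public API (the fetch_block_iotune() wrapper is purely
illustrative and error handling is trimmed):

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Minimal sketch: call once with *nparams == 0 to learn how many typed
 * parameters the driver can return, then allocate and call again to
 * fetch the actual values. */
static int
fetch_block_iotune(virDomainPtr dom, const char *disk, unsigned int flags)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int i;

    /* First call: params == NULL, *nparams == 0 asks for a count only. */
    if (virDomainGetBlockIoTune(dom, disk, NULL, &nparams, flags) < 0)
        return -1;

    if (nparams == 0)
        return 0;

    if (!(params = calloc(nparams, sizeof(*params))))
        return -1;

    /* Second call: fills at most nparams entries; on return nparams may
     * be smaller than what was allocated (e.g. fewer live capabilities). */
    if (virDomainGetBlockIoTune(dom, disk, params, &nparams, flags) < 0) {
        free(params);
        return -1;
    }

    for (i = 0; i < nparams; i++)
        printf("%s\n", params[i].field);

    free(params);
    return 0;
}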

> Then during the second run, you run BLOCK_IOTUNE_ASSIGN on all of them, what am
> I missing?

Not quite sure what you're asking - if you dig into the macro you'll
note a comparison "if (*nparams < maxparams && " which essentially
ensures that we'll only ever fill in as many as we should. On the
"second" pass through the code we'll hit the else of the
"if (*nparams == 0) {", which can further 'refine' maxparams to be
whatever was passed in *nparams (e.g. whatever was allocated by the
caller). As with any other chunk of code, there certainly is an
assumption that the allocation was done correctly.
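
Roughly, the shape being described looks like this (a hedged sketch
only - the demo_param struct and fill_params() below are hypothetical
stand-ins, not the actual libvirt code):

#include <stdio.h>

/* Hypothetical, simplified stand-in for a typed parameter. */
struct demo_param {
    char field[32];
    unsigned long long value;
};

/* Sketch of the flow: on the first pass (*nparams == 0) only the count
 * is reported; on the second pass maxparams is clamped to whatever the
 * caller allocated, and each assignment is guarded by
 * "*nparams < maxparams". */
static int
fill_params(struct demo_param *params, int *nparams, int maxparams,
            const char *const *names, const unsigned long long *values,
            int navail)
{
    int i;

    if (*nparams == 0) {
        *nparams = maxparams;   /* first pass: report the count only */
        return 0;
    }

    if (*nparams < maxparams)
        maxparams = *nparams;   /* never exceed the caller's allocation */

    *nparams = 0;
    for (i = 0; i < navail; i++) {
        if (*nparams < maxparams) {   /* same guard as the assign macro */
            snprintf(params[*nparams].field,
                     sizeof(params[*nparams].field), "%s", names[i]);
            params[*nparams].value = values[i];
            (*nparams)++;
        }
    }
    return 0;
}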

FWIW: Let's say on your first pass the domain isn't running and you
return ALL, but on the second pass the domain is running and maxparams
gets set to MAX. That means we'll fetch up to 'MAX' values from the
running domain, with the other entries being left at whatever the
caller initialized them to. We'll also return MAX (a lesser value) in
nparams from the call, indicating to the caller that fewer values were
fetched than the size of the buffer. It's up to the caller to recognize
that if they so desire; they could also just print out whatever size
they passed in.
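
To put numbers on that scenario, using the counts implied by the macros
above (ALL == 19, base + "max" == 13) - purely illustrative:

#include <stdio.h>

int main(void)
{
    int allocated = 19;   /* buffer sized from the first (persistent) pass */
    int nparams = 13;     /* what the second (live) pass actually returned */

    if (nparams < allocated)
        printf("driver filled %d of %d entries; the remaining %d stay "
               "as the caller initialized them\n",
               nparams, allocated, allocated - nparams);
    return 0;
}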


John

> 
> Erik
> 
>>      if (*nparams == 0) {
>> -- 
>> 2.7.4
>>
>> --
>> libvir-list mailing list
>> libvir-list@xxxxxxxxxx
>> https://www.redhat.com/mailman/listinfo/libvir-list

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


