Re: [kvm-unit-tests PATCH RFC] s390x: factor out running of tests into common code

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On 12.09.2017 18:24, Paolo Bonzini wrote:
> On 12/09/2017 18:17, David Hildenbrand wrote:
>> On 12.09.2017 17:46, Paolo Bonzini wrote:
>>> On 12/09/2017 17:15, David Hildenbrand wrote:
>>>> On 12.09.2017 17:01, Paolo Bonzini wrote:
>>>>> On 12/09/2017 16:16, David Hildenbrand wrote:
>>>>>> I want to use this in another file, so instead of replicating, factoring
>>>>>> it out feels like the right thing to do.
>>>>>>
>>>>>> Let's directly provide it for all architectures, they just have
>>>>>> to implement get_time_ms() in asm/time.h to use it.
>>>>>>
>>>>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>>>>> ---
>>>>>>
>>>>>> The next step would be allowing to add custom parameters to a test.
>>>>>> (defining them similar to the test cases).
>>>>>>
>>>>>> Any other requests/nice to have? Does this make sense?
>>>>>
>>>>> It does, but we have at least this, vmx.c (v1 and v2) and vmexit.c, with
>>>>> not exactly overlapping usecases and command-line syntax.  Would it make
>>>>
>>>> Yes, the ultimate goal would be to combine all into one. I only had a
>>>> look at some ARM tests yet. Will have a look at these.
>>>>
>>>>> sense to have an API that is not home-grown, e.g. a subset of GTest?
>>>>> The command-line arguments could be globs (like GTest's "-p").
>>>>
>>>> I'll have a look. Any experts around?
>>>
>>> Take a look at https://developer.gnome.org/glib/stable/glib-Testing.html.
>>>
>>> But... one obstacle is probably the lack of malloc in x86, so this would
>>> be a prerequisite.  Andrew first proposed it last year, and there was
>>> some discussion on the design and how to abstract giving more memory to
>>> malloc.  You can find the result here:
>>> https://patchwork.kernel.org/patch/9409843/ -- it's a longish thread,
>>> but it should be fun to implement. :)
>>>
>>> The idea is to have four allocator APIs:
>>>
>>> - "phys" (phys_alloc) provides contiguous physical addresses and is used
>>> in early allocations or for tests that do not enable the MMU.  It need
>>> not be able to free anything.  It also provides an arch-dependent
>>> interface to register the available memory.
>>>
>>> - "page" (alloc_page) is initialized after the MMU is started.  It picks
>>> whatever memory wasn't allocated by phys, and slices it in pages.  Like
>>> "phys", it provides contiguous physical addresses.
>>>
>>> - "virt" (alloc_vpages) is also initialized after the MMU is started,
>>> and it allocates virtual address space.  It also provides an
>>> arch-dependent interface to install PTEs.  It provides contiguous
>>> virtual addresses.
>>>
>>> - "malloc" is the API we all love :) and it uses a pluggable allocator
>>> underneath, called "morecore" in the thread, with the prototype
>>>
>>>     void *(*morecore)(size_t size, size_t align_min);
>>>
>>> In the beginning "morecore" simply uses "phys", later it is switched
>>> (when the MMU is started) to use both "page" and "virt".  That is it
>>> takes not necessarily consecutive physical memory at page
>>> granularity from "page", it assigning consecutive virtual address (from
>>> "virt") to it, and if needed it also slices pages into smaller allocations.
>>>
>>> Of course each could be arbitrarily complicated, but it need not be.
>>>
>>> "page" could be a buddy system, but really in practice it need not even
>>> support reusing freed pages, except for 1-page allocations.  This makes
>>> it enough to use a simple free list of pages, plus a separately-tracked
>>> huge block of consecutive free physical addresses at the top.
>>>
>>> Likewise, "malloc" could be Doug Lea's malloc, but really in practice it
>>> will just be a small veneer over morecore, with a dummy free.  Like the
>>> testing thing, the important part is avoiding under-engineering.
>>>
>>> Quoting from the thread, you can choose separately at each layer whether
>>> to implement freeing or not.  "phys", "virt" and "morecore" do not need
>>> it.  See in particular the Nov. 3, 2016, 5:34 p.m. message for a plan to
>>> transform Drew's v2 series
>>> (https://www.spinics.net/lists/kvm/msg139685.html) into the design
>>> envisioned above.
>>
>> Holy cow, that sounds like a lot of work, especially as malloc is also
>> not expected to work on s390x - guess because of which architecture I
>> started to factor out ;)
>>
>> phys_alloc_init() yields only arm and powerpc.
>>
>> ... so while having GTest would be the optimal solution, I wonder if it
>> is worth the trouble right now. (sure, malloc for x86 and s390x would
>> also be nice, but already getting that implemented sounds like it could
>> take quite a while).
>>
>> Hmm.... probably have too look into the details of the malloc discussion...
> 
> It's really less work than it sounds.  I'm not wed to GTest, but I think
> the ability to define test cases programmatically is useful (vmexit.c

I agree, as I said I would also like to use that. But at least at the
first sight this really looks like a lot of work.

> for example has a case where the testcase list is actually in QEMU!) and
> it pretty much requires malloc.
> 
> Paolo
> 

gtestutils.c also uses g_timer() ... this will be fun :)

-- 

Thanks,

David



[Index of Archives]     [KVM ARM]     [KVM ia64]     [KVM ppc]     [Virtualization Tools]     [Spice Development]     [Libvirt]     [Libvirt Users]     [Linux USB Devel]     [Linux Audio Users]     [Yosemite Questions]     [Linux Kernel]     [Linux SCSI]     [XFree86]

  Powered by Linux