Re: [PATCH v5 1/9] pvs: add driver skeleton

On 05/01/2012 07:03 AM, Dmitry Guryanov wrote:
> On 05/01/2012 03:27 AM, Eric Blake wrote:
>> On 04/20/2012 10:01 AM, Dmitry Guryanov wrote:
>>> Add driver, which can report node info only.
>> Since this is the first commit in the series, can you please add more
>> information about pvs?  This content from your 0/9 message would be
>> useful here:
>>
>>>> Parallels Virtuozzo Server is a cloud-ready virtualization
>>>> solution that allows users to simultaneously run multiple virtual
>>>> machines and containers on the same physical server.
>>>>
>>>> The current name of this product is Parallels Server Bare Metal, and
>>>> more information about it can be found here:
>>>> http://www.parallels.com/products/server/baremetal/sp/.
>>>>
>>>> This driver will work with PVS version 6.0; a beta version is
>>>> scheduled for Q2 2012.
>> I'm still thinking this series might be worth including in 0.9.12, but
>> I'm worried about getting good reviews (as you can see, I'm only on 1/9
>> and ran out of time today).
> 
> Thanks for your review, Eric!
> 
> Should I correct the issues right now and resend the patches, or is
> it better to wait until the rest of the series has been reviewed?

I'll churn through some more reviews in the next couple hours, if you'd
like to wait a bit more.

>> This will not work when cross-compiling.  For that matter, 'uname -i' is
>> not portable.  I'm not sure what better solution there is for deciding
>> whether to default things to true or false based on whether the $host
>> will be 64-bit, but it's certainly not right to be probing uname of
>> $build.
> There is a variable $build_cpu; we can check it instead of
> running the uname command.

Not $build_cpu (the machine doing the build), but $host (the machine
where the build will run).  And indeed, I see $host is
x86_64-unknown-linux-gnu for me, so the first part of the triple will
hopefully be good enough to tell 32-bit vs. 64-bit.
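
Something along these lines in configure.ac might be enough (untested
sketch; with_pvs and with_pvs_default are placeholder names, not
necessarily what the patch actually calls the option):

  dnl Key the default off the host triple instead of running uname
  dnl on the build machine.
  case $host in
    x86_64-*) with_pvs_default=yes ;;
    *)        with_pvs_default=no ;;
  esac

  if test "x$with_pvs" = "xcheck"; then
    with_pvs=$with_pvs_default
  fi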

Is PVS really 64-bit only?


>>> +int
>>> +pvsRegister(void)
>>> +{
>>> +    if (virRegisterDriver(&pvsDriver) < 0)
>>> +        return -1;
>> Should we be checking whether the PRLCTL binary even exists before
>> registering this driver?
>>
> I check for the prlctl binary's existence in the pvsOpen function, but
> I can move the check to pvsRegister.

I guess the difference is that on systems without pvs support, checking
in open means pvs:// will be a recognized URI that always fails, while
checking in register makes pvs:// a rejected URI up front.  I don't
know which way is more in line with the rest of libvirt, but I
personally like knowing as soon as possible that I've got issues
(getting a connection and then waiting for open to fail is not as soon
as finding out up front that I can't even get a connection).
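
For example (rough, untested sketch; PRLCTL stands for whatever macro
the patch uses for the prlctl path, virFindFileInPath is the existing
helper, and the includes assume the usual internal headers):

  #include "util.h"      /* virFindFileInPath */
  #include "memory.h"    /* VIR_FREE */

  int
  pvsRegister(void)
  {
      char *prlctl;

      /* Bail out quietly when prlctl isn't installed, so pvs:// URIs
       * are rejected up front rather than failing at open time. */
      prlctl = virFindFileInPath(PRLCTL);
      if (prlctl == NULL)
          return 0;
      VIR_FREE(prlctl);

      if (virRegisterDriver(&pvsDriver) < 0)
          return -1;

      return 0;
  }

Returning 0 rather than -1 when the binary is simply missing avoids
turning an absent prlctl into an overall initialization failure;
whether that's preferable to failing loudly is a judgment call.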

-- 
Eric Blake   eblake@xxxxxxxxxx    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
