Re: [KVM] About releasing vcpu when closing vcpu fd

Hi Gu Zheng,

I'd prefer to wait for the moment. There is no point in doing duplicate
work in parallel.

Thanks
Anshul Makkar


On Wed, Jul 2, 2014 at 11:43 AM, Igor Mammedov <imammedo@xxxxxxxxxx> wrote:
> On Mon, 30 Jun 2014 16:41:07 +0200
> Anshul Makkar <anshul.makkar@xxxxxxxxxxxxxxxx> wrote:
>
>> Hi,
>>
>> Currently, as per the specs for cpu_hot(un)plug, the ACPI GPE block uses IO
>> ports 0xafe0-0xafe3, where each bit corresponds to one CPU.
>>
>> Currently, the EJ0 method in acpi-dsdt-cpu-hotplug.dsl doesn't do anything:
>>
>>     Method(CPEJ, 2, NotSerialized) {
>>         // _EJ0 method - eject callback
>>         Sleep(200)
>>     }
>>
>> I want to implement a notification mechanism for CPU hot-unplug just
>> like we have for memory hot-unplug, where we write to a particular IO
>> port and this read/write is caught in memory-hotplug.c.
>>
>> So I just want a suggestion as to whether I should expand the IO port
>> range from 0xafe0-0xafe3 to 0xafe0-0xafe4 (an addition of 1 byte), where
>> the last byte is used for notification of the EJ0 event.
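
(Purely for illustration, roughly what catching that extra byte could look
like on the QEMU side, in the style of the memory-hotplug handler. This is
only a sketch under assumptions: CPU_EJ_NOTIFY_OFFSET and the logging are
made up, not the actual interface.)

    /* Sketch only: a memory-hotplug-style IO handler where a write to a
     * hypothetical extra byte at 0xafe4 tells QEMU the guest ran _EJ0.
     * Offsets and names below are assumptions, not the real interface. */
    #include <inttypes.h>
    #include "exec/memory.h"
    #include "qemu/log.h"

    #define CPU_HOTPLUG_IO_BASE  0xafe0
    #define CPU_EJ_NOTIFY_OFFSET 4            /* hypothetical byte at 0xafe4 */

    static void cpu_hotplug_write(void *opaque, hwaddr addr,
                                  uint64_t data, unsigned size)
    {
        if (addr == CPU_EJ_NOTIFY_OFFSET) {
            /* guest wrote the id of the CPU it ejected; QEMU could now
             * finish the unplug, e.g. park the corresponding vCPU */
            qemu_log("guest acked ejection of cpu %" PRIu64 "\n", data);
            return;
        }
        /* other offsets: the existing per-CPU presence bitmap handling */
    }

    static const MemoryRegionOps cpu_hotplug_ops = {
        .write = cpu_hotplug_write,
        .endianness = DEVICE_LITTLE_ENDIAN,
        .valid.min_access_size = 1,
        .valid.max_access_size = 1,
    };

The region would then be registered over 5 bytes instead of the current 4,
e.g. via memory_region_init_io().
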
> I have it on my TODO list to rewrite the CPU hotplug IO interface to be
> similar to the memory hotplug one. So you can try it; it will
> allow us to drop the CPU bitmask and make the interface scalable to more
> than 256 CPUs.
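
(Again just a sketch of what a memory-hotplug-like, selector-based layout
could mean: no per-CPU bitmap, so it scales past 256 CPUs. All names and
offsets here are illustrative assumptions, not Igor's actual design.)

    /* Sketch only: select a CPU index first, then operate on it, instead of
     * keeping one status bit per CPU.  Offsets/fields are assumptions. */
    #include "exec/memory.h"

    enum {
        CPU_SEL_OFFSET    = 0x0,   /* write: index of the CPU to operate on   */
        CPU_STATUS_OFFSET = 0x4,   /* read: enabled/insert/remove event flags */
        CPU_EJECT_OFFSET  = 0x8,   /* write: guest acks _EJ0 for selected CPU */
    };

    typedef struct CPUHotplugState {
        uint32_t selector;              /* currently selected CPU index */
        /* per-CPU bookkeeping would hang off this */
    } CPUHotplugState;

    static void cpu_hotplug2_write(void *opaque, hwaddr addr,
                                   uint64_t data, unsigned size)
    {
        CPUHotplugState *s = opaque;

        switch (addr) {
        case CPU_SEL_OFFSET:
            s->selector = data;         /* no bitmap, just remember the index */
            break;
        case CPU_EJECT_OFFSET:
            /* finish unplug of the selected CPU, e.g. park its vCPU */
            break;
        }
    }

The guest-visible window stays a handful of bytes no matter how many CPUs
there are, which is what makes this layout scale.
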
>
>>
>> Or if you have any other suggestion, please share.
>>
>> Thanks
>> Anshul Makkar
>>
>> On Fri, Jun 6, 2014 at 3:41 PM, Anshul Makkar
>> <anshul.makkar@xxxxxxxxxxxxxxxx> wrote:
>> > Oh yes, sorry for the ambiguity.  I meant the proposal to "park" unplugged vcpus.
>> >
>> > Thanks for suggesting the practical approach.
>> >
>> > Anshul Makkar
>> >
>> > On Fri, Jun 6, 2014 at 3:36 PM, Gleb Natapov <gleb@xxxxxxxxxxxxx> wrote:
>> >> On Fri, Jun 06, 2014 at 03:02:59PM +0200, Anshul Makkar wrote:
>> >>> IIRC, Igor was of the opinion that the patch for vcpu deletion would be
>> >>> incomplete until it is handled properly in KVM, i.e. until vcpus are
>> >>> destroyed completely:
>> >>> http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347
>> >>>
>> >>> So is the above proposal, where vcpus can just be disabled and
>> >>> reused in QEMU, an acceptable solution?
>> >>>
>> >> If by "above proposal" you mean the proposal in the email you linked,
>> >> then no, since it tries to destroy the vcpu but does so incorrectly. If you
>> >> mean the proposal to "park" an unplugged vcpu, so that the guest will not be
>> >> able to use it, then yes, that is a pragmatic path forward.
>> >>
>> >>
>> >>> Thanks
>> >>> Anshul Makkar
>> >>>
>> >>> On Thu, May 29, 2014 at 10:12 AM, Gleb Natapov <gleb@xxxxxxxxxx> wrote:
>> >>> > On Thu, May 29, 2014 at 01:40:08PM +0800, Gu Zheng wrote:
>> >>> >> >> There was a patch (from Chen Fan, last August) about releasing the vcpu when
>> >>> >> >> closing the vcpu fd <http://www.spinics.net/lists/kvm/msg95701.html>, but
>> >>> >> >> your comment said "Attempts were made to make it possible to destroy
>> >>> >> >> individual vcpus separately from destroying the VM before, but they were
>> >>> >> >> unsuccessful thus far."
>> >>> >> >> So what is the pain point here? If we want to achieve the goal, what should we do?
>> >>> >> >> Looking forward to your further comments. :)
>> >>> >> >>
>> >>> >> > The CPU array is accessed locklessly in a lot of places, so it would have to be RCUified.
>> >>> >> > There was an attempt to do so 2 years or so ago, but it didn't go anywhere. Adding locks is
>> >>> >> > too big a price to pay for the ability to free a little bit of memory by destroying a vcpu.
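
(For context, a very rough idea of what "RCUifying" the vcpu array could
look like; this is not actual KVM code, and the helper names are made up.)

    #include <linux/kvm_host.h>
    #include <linux/rcupdate.h>

    /* Sketch only: if kvm->vcpus entries were RCU-managed pointers (they
     * are not today), lockless readers could stay lockless: */
    static struct kvm_vcpu *lookup_vcpu_sketch(struct kvm *kvm, int idx)
    {
        struct kvm_vcpu *vcpu;

        rcu_read_lock();
        vcpu = rcu_dereference(kvm->vcpus[idx]);
        /* a real user would take a reference before rcu_read_unlock() */
        rcu_read_unlock();
        return vcpu;
    }

    /* ...and the writer side when destroying a single vcpu: */
    static void destroy_vcpu_sketch(struct kvm *kvm, int idx)
    {
        struct kvm_vcpu *vcpu = kvm->vcpus[idx];

        rcu_assign_pointer(kvm->vcpus[idx], NULL);
        synchronize_rcu();               /* wait out all lockless readers */
        kvm_arch_vcpu_destroy(vcpu);     /* only now is freeing safe */
    }
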
>> >>> >>
>> >>> >> Yes, it's a pain point. But if we want to implement "vcpu hot-remove", this must be
>> >>> >> fixed sooner or later.
>> >>> > Why?  "vcpu hot-remove" already works (or at least worked in the past
>> >>> > for some value of "work").  There is no need to destroy the vcpu completely;
>> >>> > just park it and tell the guest not to use it via an ACPI hot-unplug event.
>> >>> >
>> >>> >> And is anyone working on KVM "vcpu hot-remove" now?
>> >>> >>
>> >>> >> > An alternative may be to make sure that a stopped vcpu takes as little memory as possible.
>> >>> >>
>> >>> >> Yeah. But if we add a new vcpu with the old id of one that we stopped before, it will fail.
>> >>> >>
>> >>> > No need to create the vcpu again; just unpark it and notify the guest via an ACPI hot-plug
>> >>> > event that the vcpu can be used now.
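
(A rough sketch of how "parking" could be kept on the QEMU side: keep the
KVM vcpu fd open and reuse it on replug instead of destroying it. Names
like ParkedVcpu/park_vcpu are invented for illustration, not an existing API.)

    #include <glib.h>
    #include "qemu/queue.h"

    /* Sketch only: unplug parks the fd instead of closing it; replug with
     * the same cpu index reuses it instead of KVM_CREATE_VCPU. */
    typedef struct ParkedVcpu {
        int kvm_fd;                       /* kept open so KVM never frees it */
        int cpu_index;
        QLIST_ENTRY(ParkedVcpu) node;
    } ParkedVcpu;

    static QLIST_HEAD(, ParkedVcpu) parked_vcpus =
        QLIST_HEAD_INITIALIZER(parked_vcpus);

    static void park_vcpu(int kvm_fd, int cpu_index)
    {
        ParkedVcpu *p = g_new0(ParkedVcpu, 1);

        p->kvm_fd = kvm_fd;
        p->cpu_index = cpu_index;
        QLIST_INSERT_HEAD(&parked_vcpus, p, node);
    }

    static int unpark_vcpu(int cpu_index)
    {
        ParkedVcpu *p;

        QLIST_FOREACH(p, &parked_vcpus, node) {
            if (p->cpu_index == cpu_index) {
                int fd = p->kvm_fd;
                QLIST_REMOVE(p, node);
                g_free(p);
                return fd;
            }
        }
        return -1;    /* nothing parked: fall back to KVM_CREATE_VCPU */
    }
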
>> >>> >
>> >>> > --
>> >>> >                         Gleb.
>> >>
>> >> --
>> >>                         Gleb.
>
>
> --
> Regards,
>   Igor