Re: [PATCH] util: Remove empty resource partition created by libvirt

Michal Privoznik <mprivozn@xxxxxxxxxx> writes:

> On 12.08.2015 07:39, Nikunj A Dadhania wrote:
>> 
>> Hi Daniel,
>> 
>> "Daniel P. Berrange" <berrange@xxxxxxxxxx> writes:
>>> On Tue, Aug 11, 2015 at 04:57:15PM +0530, Nikunj A Dadhania wrote:
>>>> The default resource partition is created in the domain start path if it
>>>> does not already exist. Even when libvirtd is stopped after shutting down
>>>> all domains, the resource partition still exists.
>>>>
>>>> The patch adds code to remove the default resource partition in the
>>>> cgroup removal path of the domain. If the default resource partition is
>>>> found to have no child cgroups, it is removed.
>>>>
>>>> The patch does not remove user-provided resource partitions.
>>>>
>>>> Signed-off-by: Nikunj A Dadhania <nikunj@xxxxxxxxxxxxxxxxxx>
>>>
>>> I don't think we want to be doing this. On non-systemd hosts this will
>>> delete the hierarchy that the sysadmin manually pre-created for
>>> their VMs.  On a systemd host it will also end up deleting slices that
>>> were created by systemd.
>> 
>> AFAIU, there are three cases here:
>> 
>> 1) User-created resource partition, for example /production/foo
>>    As this is created by the user, we should not touch it, and my patch
>>    does not remove it.
>>
>> 2) systemd-created /machine.slice
>>    If not libvirt, should systemd clean this up when the libvirtd
>>    service is stopped?
>
> No, machined should clean that up based on a signal sent by libvirt
> when it is started up again. I guess the scenario is as follows:
> 1) libvirt is starting a container
> 2) as part of the process, a remote procedure is called (via dbus) on
> machined to precreate the machine.slice
> 3) the container is started
> 4) libvirtd.service is stopped
> 5) container is stopped

In my case I was thinking of:

3) the container is started
4) the container is stopped
5) libvirtd.service is stopped

Who would remove machine.slice in this case? At that point there are no
containers running at all.

> Now you have a dangling machine.slice. But this is in fact correct,
> because libvirt needs to clean up its runtime metadata too. Therefore
> the process should go on like this:
>
> 6) libvirtd.service is started again
> 7) libvirt notices that the container has stopped
> 8) as part of cleanup process it instructs machined to remove the
> machine.slice

Are you suggesting that the current libvirtd, together with machined,
already performs the above three steps? Or would we need to enable that?
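
(Separately, for my own understanding, here is a rough sketch of what step
8's "instructs machined" could look like, using the systemd sd-bus client
API directly. This is only an assumption on my side: the machine name below
is made up, and I am not claiming this is the code path libvirtd actually
uses; machined's TerminateMachine D-Bus call is simply one way to ask it to
tear a machine's unit down.)

/* sketch: ask machined to terminate a registered machine via D-Bus.
 * build: gcc terminate.c $(pkg-config --cflags --libs libsystemd) */
#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char **argv)
{
    sd_bus *bus = NULL;
    sd_bus_error error = SD_BUS_ERROR_NULL;
    /* hypothetical machine name, as it would be registered with machined */
    const char *name = argc > 1 ? argv[1] : "lxc-mycontainer";
    int r;

    r = sd_bus_open_system(&bus);
    if (r < 0) {
        fprintf(stderr, "cannot connect to system bus: %s\n", strerror(-r));
        return 1;
    }

    /* org.freedesktop.machine1.Manager.TerminateMachine(in s name) */
    r = sd_bus_call_method(bus,
                           "org.freedesktop.machine1",
                           "/org/freedesktop/machine1",
                           "org.freedesktop.machine1.Manager",
                           "TerminateMachine",
                           &error, NULL, "s", name);
    if (r < 0)
        fprintf(stderr, "TerminateMachine failed: %s\n",
                error.message ? error.message : strerror(-r));

    sd_bus_error_free(&error);
    sd_bus_unref(bus);
    return r < 0 ? 1 : 0;
}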

>> 
>>    Currently, my patch does remove this when it is found empty
>>    
>> 3) libvirt-created /machine
>>    As this was created by libvirt itself, should we delete it in the
>>    libvirt daemon?
>
> This one can make sense.
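
For case 3, what the patch tries to do is roughly the following. This is a
sketch only, not the actual patch code: the path and helper names here are
illustrative, and the real code goes through libvirt's cgroup helpers rather
than a raw opendir()/rmdir().

/* sketch: remove the default "/machine" partition directory for one cgroup
 * controller mount point, but only when no child cgroups are left in it. */
#include <dirent.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static bool cgroup_has_children(const char *path)
{
    DIR *dir = opendir(path);
    struct dirent *ent;
    bool found = false;

    if (!dir)
        return false;

    while ((ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;
        if (ent->d_type == DT_DIR) {    /* a subdirectory is a child cgroup */
            found = true;
            break;
        }
    }

    closedir(dir);
    return found;
}

static void remove_default_partition(const char *path)
{
    if (cgroup_has_children(path))
        return;                         /* still in use, leave it alone */

    /* an empty cgroup directory is removed with rmdir(); the kernel refuses
     * if any tasks or child groups remain, so this cannot delete anything
     * that is still being used */
    if (rmdir(path) < 0 && errno != ENOENT)
        fprintf(stderr, "failed to remove %s: %s\n", path, strerror(errno));
}

int main(void)
{
    /* hypothetical mount point for the cpu controller; adjust per host */
    remove_default_partition("/sys/fs/cgroup/cpu,cpuacct/machine");
    return 0;
}

The point of the check is that this only ever touches the default partition
created by libvirt, never user-provided partitions such as /production/foo.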

Regards,
Nikunj

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


