Re: [PATCH v1 03/11] fs/proc/vmcore: disallow vmcore modifications after the vmcore was opened

On 03.12.24 11:42, Baoquan He wrote:
On 11/29/24 at 11:38am, David Hildenbrand wrote:
On 25.11.24 15:41, Baoquan He wrote:
On 11/22/24 at 10:30am, David Hildenbrand wrote:
On 22.11.24 10:16, Baoquan He wrote:
On 10/25/24 at 05:11pm, David Hildenbrand wrote:
......snip...
@@ -1482,6 +1470,10 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
    		return -EINVAL;
    	}
+	/* We'll recheck under lock later. */
+	if (data_race(vmcore_opened))
+		return -EBUSY;


Hi,

As I commented to patch 7, if vmcore is opened and closed after
checking, do we need to give up any chance to add device dumping
as below?

fd = open(/proc/vmcore);
...do checking;
close(fd);

quit any device dump adding;

run makedumpfile on s390;
     ->fd = open(/proc/vmcore);
       -> try to dump;
     ->close(fd);

The only reasonable case where this could happen (with virtio_mem) would be
when you hotplug a virtio-mem device into a VM that is currently in the
kdump kernel. However, in this case, the device would not provide any memory
we want to dump:

(1) The memory was not available to the 1st (crashed) kernel, because
      the device got hotplugged later.
(2) Hotplugged virtio-mem devices show up with "no plugged memory",
      meaning there wouldn't be even something to dump.

Drivers will be loaded (as part of the kernel or as part of the initrd)
before any kdump action is happening. Similarly, just imagine your NIC
driver not being loaded when you start dumping to a network share ...

This should similarly apply to vmcoredd providers.

There is another concern I had at some point about changing the effective
/proc/vmcore size after someone has already opened it and might assume the size
stays unmodified (IOW, the file was completely static before vmcoredd
showed up).

So unless there is a real use case that requires tracking whether the file
is no longer open, in order to support modifying the vmcore afterwards, we
should keep it simple.

I am not aware of any such cases, and my experiments with virtio_mem showed
that the driver gets loaded extremely early from the initrd, compared to when
we actually start messing with /proc/vmcore from user space.

It's OK, David, I don't have a strong opinion about the current
implementation. I raised this concern because

1) I saw the original vmcoredd code only warns during registration if
vmcore_opened is true;

2) patch 1 says vmcore_mutex is introduced to protect vmcore
modifications from concurrent opening. If we are confident the old
vmcoredd_mutex can guarantee that, fine; I could be wrong here.

I don't see how. We're protecting the list, but not vmcoredd_update_size(), which modifies sizes, offsets, and so on.


Anyway, it's just a tiny concern, I believe it won't cause issue at
present. So it's up to you.

I can keep allowing stuff to be added after the vmcore was opened and closed again (but not while it is open). It doesn't make much sense IMHO, but it seems easy to add.

Thanks!

--
Cheers,

David / dhildenb
