On 29.03.2022 16:08, Christoph Hellwig wrote:
> On Mon, Mar 28, 2022 at 02:18:16PM +0300, Kirill Tkhai wrote:
>> This patchset adds a new driver allowing to attach QCOW2 files
>> as block devices. Its idea is to implement in kernel only that
>> features, which affect runtime IO performance (IO requests
>> processing functionality).
>
> From a quick looks it seems to be like this should be a block driver
> just like the loop driver and not use device mapper. Why would
> you use device mapper to basically reimplement a fancy loop driver
> to start with?

This is a driver for containers and virtual machines, and among their
basic features are migration and backups. Several drivers for that
already exist in device-mapper, for example dm-era and dm-snap. Instead
of implementing such functionality in the QCOW2 driver once again, the
sane behavior is to reuse the drivers that are already there. The
modular approach is also better for maintenance and for eliminating
errors, simply because there is less code.

1) A device-mapper based driver does not require the migration and
   backup devices to be stacked for the whole device lifetime:

   a) Normal operation, almost 100% of the time: there is only
      /dev/mapper/qcow2_dev.

   b) Migration: /dev/mapper/qcow2_dev is reloaded with a migration
      target, which points to the new qcow2_dev.real:

      /dev/mapper/qcow2_dev      [migration driver]
      /dev/mapper/qcow2_dev.real [dm-qcow2 driver]

      After migration is completed, we reload /dev/mapper/qcow2_dev
      back to the dm-qcow2 driver (the reload sequence is sketched
      below). So there are no excess dm layers during normal operation.

2) If the driver is not device-mapper based, the stack has to be built
   for the whole device lifetime, since it is impossible to reload a
   bare block driver with a dm-based driver on demand:

      /dev/mapper/qcow2_dev [migration driver]
      /dev/qcow2_dev.real   [bare qcow2 driver]

   So we would carry an excess dm layer for the whole device lifetime.

Our performance tests show that a single dm layer may cause up to a
10% performance decrease on NVMe, so the point is to avoid that
penalty. General reasoning also says that an excess layer is the
wrong way to go.

Another reason is our previous experience implementing file-backed
block drivers. We had the ploop driver before. The ploop format is
much simpler than the QCOW2 format, yet that driver is about 17K lines
of code, while the dm-qcow2 driver took about 6K lines. Device mapper
saves you from writing a lot of code: the only thing you need is to
implement proper .ctr and .dtr functions (also sketched below), while
the rest of the configuration actions are done by a simple
device-mapper reload.

>> The maintenance operations are
>> synchronously processed in userspace, while device is suspended.
>>
>> Userspace is allowed to do only that operations, which never
>> modifies virtual disk's data. It is only allowed to modify
>> QCOW2 file metadata providing that disk's data. The examples
>> of allowed operations is snapshot creation and resize.
>
> And this sounds like a pretty fragile design. It basically requires
> both userspace and the kernel driver to access metadata on disk, which
> sounds rather dangerous.

I don't think so. Device-mapper already allows replacing one driver
with another. Nobody blames dm-linear for the fact that it may be
reloaded to point at the wrong partition, though it may. Nobody blames
loop for the fact that someone in userspace may corrupt its backing
blocks, so that the filesystem on top of the device becomes broken.
The point is that the kernel and userspace never access the file at
the same time.
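To make the reload flow in 1.b) concrete, below is roughly what the
table swap looks like when driven from userspace through libdevmapper
(the same thing "dmsetup reload/suspend/resume" does). This is only a
sketch under assumptions: the "migration" target name, the device
names and the size are illustrative placeholders, not the real
driver's interface. Build with -ldevmapper.

#include <libdevmapper.h>
#include <stdint.h>

/* Run one device-mapper task; returns nonzero on success. */
static int dm_do(int task_type, const char *name, uint64_t sectors,
		 const char *target, const char *params)
{
	struct dm_task *dmt;
	int ok = 0;

	dmt = dm_task_create(task_type);
	if (!dmt)
		return 0;
	if (!dm_task_set_name(dmt, name))
		goto out;
	if (target && !dm_task_add_target(dmt, 0, sectors, target, params))
		goto out;
	ok = dm_task_run(dmt);
out:
	dm_task_destroy(dmt);
	return ok;
}

int main(void)
{
	uint64_t sectors = 20971520;	/* 10G device, for example */

	/* Load the inactive table: stack the (hypothetical) migration
	 * target on top of the real dm-qcow2 device ... */
	if (!dm_do(DM_DEVICE_RELOAD, "qcow2_dev", sectors,
		   "migration", "/dev/mapper/qcow2_dev.real"))
		return 1;
	/* ... then quiesce in-flight IO and swap the tables. While the
	 * device is suspended, userspace may touch the QCOW2 file;
	 * once it is resumed, only the kernel accesses it. */
	if (!dm_do(DM_DEVICE_SUSPEND, "qcow2_dev", 0, NULL, NULL))
		return 1;
	if (!dm_do(DM_DEVICE_RESUME, "qcow2_dev", 0, NULL, NULL))
		return 1;
	return 0;
}

Note that the suspend/resume step is exactly where the kernel/userspace
alternation mentioned above is enforced.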
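And to illustrate the point about .ctr/.dtr: here is a stripped-down
sketch of all the glue a device-mapper target has to provide. This is
not the dm-qcow2 code; the target name and context structure are made
up, and .map() is a trivial passthrough. Suspend, resume, reload and
teardown all come from the device-mapper core for free.

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/bio.h>
#include <linux/device-mapper.h>

struct sketch_ctx {
	struct dm_dev *dev;
};

/* .ctr: parse the table line and take a reference on the backing dev */
static int sketch_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	struct sketch_ctx *ctx;
	int ret;

	if (argc != 1) {
		ti->error = "Invalid argument count";
		return -EINVAL;
	}

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	ret = dm_get_device(ti, argv[0], dm_table_get_mode(ti->table),
			    &ctx->dev);
	if (ret) {
		ti->error = "Device lookup failed";
		kfree(ctx);
		return ret;
	}

	ti->private = ctx;
	return 0;
}

/* .dtr: drop the reference and free the context */
static void sketch_dtr(struct dm_target *ti)
{
	struct sketch_ctx *ctx = ti->private;

	dm_put_device(ti, ctx->dev);
	kfree(ctx);
}

/* .map: redirect each bio to the underlying device */
static int sketch_map(struct dm_target *ti, struct bio *bio)
{
	struct sketch_ctx *ctx = ti->private;

	bio_set_dev(bio, ctx->dev->bdev);
	return DM_MAPIO_REMAPPED;
}

static struct target_type sketch_target = {
	.name    = "qcow2-sketch",
	.version = {0, 0, 1},
	.module  = THIS_MODULE,
	.ctr     = sketch_ctr,
	.dtr     = sketch_dtr,
	.map     = sketch_map,
};

static int __init sketch_init(void)
{
	return dm_register_target(&sketch_target);
}

static void __exit sketch_exit(void)
{
	dm_unregister_target(&sketch_target);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");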
If maintenance actions can be done in userspace, they should be, since
this reduces the kernel code.

>> This example shows the way of device-mapper infrastructure
>> allows to implement drivers following the idea of
>> kernel/userspace components demarcation. Thus, the driver
>> uses advantages of device-mapper instead of implementing
>> its own suspend/resume engine.
>
> What do you need more than a queue freeze?

Theoretically, I could use one, provided that it flushes all pending
requests. But this would increase the driver code significantly, since
it would no longer be possible to use the reload mechanism, and there
are other problems, such as the performance issues I wrote about
above. So this approach looks significantly worse.

Kirill

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel