Re: [PATCH v4 21/25] ibnbd: server: functionality for IO submission to file or block dev

On 9/26/19 7:04 AM, Jinpu Wang wrote:
On Wed, Sep 18, 2019 at 11:46 PM Bart Van Assche <bvanassche@xxxxxxx> wrote:
On 6/20/19 8:03 AM, Jack Wang wrote:
Isn't the use of O_DSYNC something that should be configurable?
I know SCST allows O_DSYNC to be configured, but in our production we
only use O_DSYNC. We can certainly add an option to make it
configurable, but we don't have a need for that yet.

Shouldn't upstream code be general purpose instead of only satisfying the need of a single user?

+struct ibnbd_dev *ibnbd_dev_open(const char *path, fmode_t flags,
+                              enum ibnbd_io_mode mode, struct bio_set *bs,
+                              ibnbd_dev_io_fn io_cb)
+{
+     struct ibnbd_dev *dev;
+     int ret;
+
+     dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+     if (!dev)
+             return ERR_PTR(-ENOMEM);
+
+     if (mode == IBNBD_BLOCKIO) {
+             dev->blk_open_flags = flags;
+             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
+             if (ret)
+                     goto err;
+     } else if (mode == IBNBD_FILEIO) {
+             dev->blk_open_flags = FMODE_READ;
+             ret = ibnbd_dev_blk_open(dev, path, dev->blk_open_flags);
+             if (ret)
+                     goto err;
+
+             ret = ibnbd_dev_vfs_open(dev, path, flags);
+             if (ret)
+                     goto blk_put;

This looks really weird. Why call ibnbd_dev_blk_open() first in file
I/O mode? Why set dev->blk_open_flags to FMODE_READ in file I/O mode?

The reason is that we want to be able to export a symlink to the block device.
And in file I/O mode, we only allow exporting block devices.

This sounds weird to me ...

Please use the in-kernel asynchronous I/O API instead of kernel_read()
and kernel_write() and remove the fileio_wq workqueue. Examples of how
to use call_read_iter() and call_write_iter() are available in the loop
driver and also in drivers/target/target_core_file.c.

What are the benefits of using call_read_iter()/call_write_iter()? Do they
offer better performance?

The benefits of using in-kernel asynchronous I/O that I know of are:
* Better performance due to fewer context switches. With the posted code,
as many kernel threads will be active as the queue depth, so more context
switches are triggered than necessary.
* Removal of the file I/O workqueue and hence a reduction of the number
of kernel threads.

Thanks,

Bart.


