On Wed, Jul 13, 2022 at 08:59:16PM -0600, Jens Axboe wrote:
> On 7/13/22 8:54 PM, Jens Axboe wrote:
> > On 7/13/22 6:19 PM, Ming Lei wrote:
> >> On Wed, Jul 13, 2022 at 02:25:25PM -0600, Jens Axboe wrote:
> >>> On 7/13/22 8:07 AM, Ming Lei wrote:
> >>>> Hello Guys,
> >>>>
> >>>> ublk driver is one kernel driver for implementing generic userspace block
> >>>> device/driver, which delivers io request from ublk block device(/dev/ublkbN) into
> >>>> ublk server[1] which is the userspace part of ublk for communicating
> >>>> with ublk driver and handling specific io logic by its target module.
> >>>
> >>> Ming, is this ready to get merged in an experimental state?
> >>
> >> Hi Jens,
> >>
> >> Yeah, I think so.
> >>
> >> IO path can survive in xfstests(-g auto), and control path works
> >> well in ublksrv builtin hotplug & 'kill -9' daemon test.
> >>
> >> The UAPI data size should be good, but definition may change per
> >> future requirement change, so I think it is ready to go as
> >> experimental.
> >
> > OK let's give it a go then. I tried it out and it seems to work for me,
> > even if the shutdown-while-busy is something I'd to look into a bit
> > more.
> >
> > BTW, did notice a typo on the github page:
> >
> > 2) dependency
> > - liburing with IORING_SETUP_SQE128 support
> >
> > - linux kernel 5.9(IORING_SETUP_SQE128 support)
> >
> > that should be 5.19, typo.
>
> I tried this:
>
> axboe@m1pro-kvm ~/g/ubdsrv (master)> sudo ./ublk add -t loop /dev/nvme0n1
> axboe@m1pro-kvm ~/g/ubdsrv (master) [255]>

That looks like an issue in ubdsrv; '-f /dev/nvme0n1' is needed.
> and got this dump:
>
> [ 34.041647] WARNING: CPU: 3 PID: 60 at block/blk-mq.c:3880 blk_mq_release+0xa4/0xf0
> [ 34.043858] Modules linked in:
> [ 34.044911] CPU: 3 PID: 60 Comm: kworker/3:1 Not tainted 5.19.0-rc6-00320-g5c37a506da31 #1608
> [ 34.047689] Hardware name: linux,dummy-virt (DT)
> [ 34.049207] Workqueue: events blkg_free_workfn
> [ 34.050731] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [ 34.053026] pc : blk_mq_release+0xa4/0xf0
> [ 34.054360] lr : blk_mq_release+0x44/0xf0
> [ 34.055694] sp : ffff80000b16bcb0
> [ 34.056804] x29: ffff80000b16bcb0 x28: 0000000000000000 x27: 0000000000000000
> [ 34.059135] x26: 0000000000000000 x25: ffff00001fe9bb05 x24: 0000000000000000
> [ 34.061454] x23: ffff000005062eb8 x22: ffff000004608998 x21: 0000000000000000
> [ 34.063775] x20: ffff000004608a50 x19: ffff000004608950 x18: ffff80000b7b3c88
> [ 34.066085] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
> [ 34.068410] x14: 0000000000000002 x13: 0000000000013638 x12: 0000000000000000
> [ 34.070715] x11: ffff80000945b7e8 x10: 0000000000006f2e x9 : 00000000ffffffff
> [ 34.073037] x8 : ffff800008fb5000 x7 : ffff80000860cf28 x6 : 0000000000000000
> [ 34.075334] x5 : 0000000000000000 x4 : 0000000000000028 x3 : ffff80000b16bc14
> [ 34.077650] x2 : ffff0000086d66a8 x1 : ffff0000086d66a8 x0 : ffff0000086d6400
> [ 34.079966] Call trace:
> [ 34.080789]  blk_mq_release+0xa4/0xf0
> [ 34.081811]  blk_release_queue+0x58/0xa0
> [ 34.082758]  kobject_put+0x84/0xe0
> [ 34.083590]  blk_put_queue+0x10/0x18
> [ 34.084468]  blkg_free_workfn+0x58/0x84
> [ 34.085511]  process_one_work+0x2ac/0x438
> [ 34.086449]  worker_thread+0x1cc/0x264
> [ 34.087322]  kthread+0xd0/0xe0
> [ 34.088053]  ret_from_fork+0x10/0x20

I guess there is some validation missing on the driver side too; will look into it.

Thanks,
Ming