If this is the wrong mailing list for my question, I apologise; please point me to the right place.
I have written a passthrough block device driver using the 'make_request' interface. The driver simply passes any bio that comes to it down to an underlying LVM device.
However, the read performance of my passthrough driver is around 65 MB/s (measured with dd at a block size of 4096), while its write performance is around 140 MB/s.
The write performance more or less matches LVM's, but LVM's read performance is around 365 MB/s, so reads through my driver are roughly 5-6x slower.
I am posting the snippets of code which I think are relevant here:
static int passthrough_make_request(struct request_queue *queue, struct bio *bio)
{
	passthrough_device_t *passdev = queue->queuedata;

	/* Redirect the bio to the backing device and resubmit it. */
	bio->bi_bdev = passdev->bdev_backing;
	generic_make_request(bio);

	return 0;
}
For initializing the queue I am using the following:
blk_queue_make_request(passdev->queue, passthrough_make_request);
passdev->queue->queuedata = passdev;
passdev->queue->unplug_fn = NULL;

/* Inherit the backing device's queue limits. */
bdev_backing = passdev->bdev_backing;
blk_queue_stack_limits(passdev->queue, bdev_get_queue(bdev_backing));

/* Forward bvec merging decisions if the backing queue restricts them. */
if (bdev_get_queue(bdev_backing)->merge_bvec_fn)
	blk_queue_merge_bvec(passdev->queue, passthrough_merge_bvec);
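
The merge_bvec callback just forwards the query to the backing queue's merge_bvec_fn; roughly (paraphrasing my actual code, which follows the usual stacking pattern, and since the mapping is 1:1 the sector needs no adjusting):

static int passthrough_merge_bvec(struct request_queue *queue,
				  struct bvec_merge_data *bvm,
				  struct bio_vec *biovec)
{
	passthrough_device_t *passdev = queue->queuedata;
	struct request_queue *backing_queue =
		bdev_get_queue(passdev->bdev_backing);

	/* Ask the backing device how many bytes it will accept here. */
	bvm->bi_bdev = passdev->bdev_backing;
	if (backing_queue->merge_bvec_fn)
		return backing_queue->merge_bvec_fn(backing_queue, bvm, biovec);

	/* Backing device imposes no restriction; accept the whole bvec. */
	return biovec->bv_len;
}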
I browsed through the dm code in the kernel to see whether there is some flag or setting I am missing that could cause this huge read penalty, but I have not found anything.
If you have any ideas about what I might be doing wrong, please tell me.
Thanks in advance.
Regards,
Neha