pdev0 = pass-through device on top of LVM
root@voffice-base:/home/neha/sbd# time dd if=/dev/pdev0 of=/dev/null bs=4096 count=1024 iflag=direct
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 4.09488 s, 1.0 MB/s
real 0m4.100s
user 0m0.028s
sys 0m0.000s
root@voffice-base:/home/neha/sbd# time dd if=/dev/shm/image of=/dev/pdev0 bs=4096 count=1024 oflag=direct
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 0.0852398 s, 49.2 MB/s
real 0m0.090s
user 0m0.004s
sys 0m0.012s
Thanks,
Neha
On Thu, Apr 11, 2013 at 11:53 AM, Rajat Sharma <fs.rajat@xxxxxxxxx> wrote:
So you mean a direct I/O read of your pass-through device is slower than a
direct I/O read of LVM?
On Thu, Apr 11, 2013 at 8:39 PM, neha naik <nehanaik27@xxxxxxxxx> wrote:
> Hi,
> I am calling the merge function of the block device driver below me (since
> mine is only a pass-through). Does this not work?
> When I looked at what read requests were coming in, I saw that when I issue
> dd with count=1 it retrieves 4 pages, so I tried the 'direct' flag. But even
> with direct I/O my read performance is far lower than my write performance.
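>
> The delegation is roughly the following (a sketch against the 2.6-era
> bvec_merge_data-based merge_bvec_fn signature; the names follow the
> snippets quoted below, and the exact code may differ):
>
> static int sbd_merge_bvec_fn(struct request_queue *q,
>                              struct bvec_merge_data *bvm,
>                              struct bio_vec *biovec)
> {
>         passthrough_device_t *passdev = q->queuedata;
>         struct request_queue *bq = bdev_get_queue(passdev->bdev_backing);
>
>         /* sketch: redirect the query to the backing (LVM) device,
>          * then delegate to its merge callback */
>         bvm->bi_bdev = passdev->bdev_backing;
>         if (!bq->merge_bvec_fn)
>                 return biovec->bv_len; /* no lower callback: accept the page */
>         return bq->merge_bvec_fn(bq, bvm, biovec);
> }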
>
> Regards,
> Neha
>
>
> On Wed, Apr 10, 2013 at 11:15 PM, Rajat Sharma <fs.rajat@xxxxxxxxx> wrote:
>>
>> Hi,
>>
>> On Thu, Apr 11, 2013 at 2:23 AM, neha naik <nehanaik27@xxxxxxxxx> wrote:
>> > Hi All,
>> > Nobody has replied to my query here, so I am wondering if there is a
>> > forum for block device drivers where I can post it.
>> > Please tell me if there is any such forum.
>> >
>> > Thanks,
>> > Neha
>> >
>> > ---------- Forwarded message ----------
>> > From: neha naik <nehanaik27@xxxxxxxxx>
>> > Date: Tue, Apr 9, 2013 at 10:18 AM
>> > Subject: Passthrough device driver performance is low on reads compared to writes
>> > To: kernelnewbies@xxxxxxxxxxxxxxxxx
>> >
>> >
>> > Hi All,
>> > I have written a passthrough block device driver using the
>> > 'make_request' call. This block device driver simply passes any
>> > request that comes to it down to LVM.
>> >
>> > However, the read performance of my passthrough driver is around
>> > 65 MB/s (measured with dd) and the write performance is around
>> > 140 MB/s, for a dd block size of 4096.
>> > The write performance more or less matches LVM's, but LVM's read
>> > performance is around 365 MB/s.
>> >
>> > I am posting the snippets of code that I think are relevant here:
>> >
>> > static int passthrough_make_request(struct request_queue *queue,
>> >                                     struct bio *bio)
>> > {
>> >         passthrough_device_t *passdev = queue->queuedata;
>> >
>> >         /* redirect the bio to the backing device and resubmit it */
>> >         bio->bi_bdev = passdev->bdev_backing;
>> >         generic_make_request(bio);
>> >         return 0;
>> > }
>> >
>> > For initializing the queue I am using the following:
>> >
>> > blk_queue_make_request(passdev->queue, passthrough_make_request);
>> > passdev->queue->queuedata = sbd;
>> > passdev->queue->unplug_fn = NULL;
>> > bdev_backing = passdev->bdev_backing;
>> > blk_queue_stack_limits(passdev->queue, bdev_get_queue(bdev_backing));
>> >
>> > /* register a merge callback only if the backing queue has one */
>> > if (bdev_get_queue(bdev_backing)->merge_bvec_fn) {
>> >         blk_queue_merge_bvec(sbd->queue, sbd_merge_bvec_fn);
>> > }
>> >
>>
>> What is the implementation of sbd_merge_bvec_fn? Please debug through it
>> to check whether requests are actually merging; maybe that is the cause
>> of the lower performance.
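>>
>> For a quick check, a temporary logging wrapper registered in place of
>> sbd_merge_bvec_fn would show whether the callback is consulted at all
>> and what it returns (a sketch, assuming the 2.6-era signature):
>>
>> static int sbd_merge_bvec_dbg(struct request_queue *q,
>>                               struct bvec_merge_data *bvm,
>>                               struct bio_vec *biovec)
>> {
>>         /* sketch: temporary wrapper around sbd_merge_bvec_fn */
>>         int ret = sbd_merge_bvec_fn(q, bvm, biovec);
>>
>>         /* log each merge query and the number of bytes allowed */
>>         printk(KERN_INFO "merge_bvec: sector %llu size %u -> %d\n",
>>                (unsigned long long)bvm->bi_sector, bvm->bi_size, ret);
>>         return ret;
>> }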
>>
>> > Now, I browsed through the dm code in the kernel to see if there is
>> > some flag or something I am not using that is causing this huge
>> > performance penalty, but I have not found anything.
>> >
>> > If you have any ideas about what I am possibly doing wrong, please
>> > tell me.
>> >
>> > Thanks in advance.
>> >
>> > Regards,
>> > Neha
>> >
>>
>> -Rajat
>>
>> >
>
>