Re: multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS

Hi,

I completely agree with what you said regarding the Ceph client. That is
exactly my understanding of it.
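
To make sure we are talking about the same thing, here is a minimal
librados sketch of what I understand a client to be: it connects to the
cluster and hands an object write to the OSDs, which then take care of
replication and consistency. This is only an illustration on my side;
the pool name "rbd", the object name and the config path are
placeholders I picked:

/* Minimal RADOS client: connect and write one object.
 * Assumes a reachable cluster, /etc/ceph/ceph.conf and the
 * client.admin keyring.  Build with: cc demo_rados.c -lrados
 */
#include <rados/librados.h>
#include <string.h>

int main(void)
{
        rados_t cluster;
        rados_ioctx_t io;
        const char buf[] = "hello rados";

        if (rados_create(&cluster, "admin") < 0)
                return 1;
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0) {
                rados_shutdown(cluster);
                return 1;
        }

        /* librados maps the object to a placement group and sends
         * the op to the primary OSD; replication happens on the
         * OSD side, not in the client. */
        if (rados_ioctx_create(cluster, "rbd", &io) == 0) {
                rados_write_full(io, "demo-object", buf, strlen(buf));
                rados_ioctx_destroy(io);
        }

        rados_shutdown(cluster);
        return 0;
}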

And regarding blk-mq, I meant it for a block device: a multi-queue
implementation of a block device.
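
Something along these lines is what I have in mind, i.e. the plain
blk-mq registration boilerplate: a tag set with several hardware queues
and a queue_rq callback. This is only a rough sketch (the demo_* names
are made up, the actual dispatch to a backend is left out, and a real
driver would still need to allocate a gendisk and call add_disk()):

#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

struct demo_dev {
        struct blk_mq_tag_set tag_set;
        struct request_queue *queue;
};

/* Called once per request on some hardware queue. */
static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
                                  const struct blk_mq_queue_data *bd)
{
        struct request *rq = bd->rq;

        blk_mq_start_request(rq);
        /* ... hand the request off to the backend here ... */
        blk_mq_end_request(rq, BLK_STS_OK);
        return BLK_STS_OK;
}

static const struct blk_mq_ops demo_mq_ops = {
        .queue_rq = demo_queue_rq,
};

static int demo_setup(struct demo_dev *dev)
{
        int ret;

        dev->tag_set.ops = &demo_mq_ops;
        /* One hardware queue per CPU: the "multi" in multi-queue. */
        dev->tag_set.nr_hw_queues = num_online_cpus();
        dev->tag_set.queue_depth = 128;
        dev->tag_set.numa_node = NUMA_NO_NODE;
        dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;

        ret = blk_mq_alloc_tag_set(&dev->tag_set);
        if (ret)
                return ret;

        dev->queue = blk_mq_init_queue(&dev->tag_set);
        if (IS_ERR(dev->queue)) {
                blk_mq_free_tag_set(&dev->tag_set);
                return PTR_ERR(dev->queue);
        }
        return 0;
}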



On Wednesday, July 15, 2020, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> On Wed, Jul 15, 2020 at 12:47 AM Bobby <italienisch1987@xxxxxxxxx> wrote:
>>
>>
>>
>> Hi Ilya,
>>
>> Thanks for the reply. It's basically both, i.e. I currently have a
>> specific project and I am also looking to make ceph-fuse faster.
>>
>> But for now, let me ask the project-based question specifically. In
>> the project I have to write a blk-mq kernel driver for the Ceph
>> client machine. The Ceph client machine will transfer the data to an
>> HBA or, let's say, any embedded device.
>
> What is a "Ceph client machine"?
>
> A Ceph client (or more specifically a RADOS client) speaks the RADOS
> protocol and transfers data to OSD daemons.  It can't transfer data
> directly to a physical device because something has to take care of
> replication, consistency, self-healing, etc.  This is the job of the
> OSD.
>
>>
>> My hope is that there is an alternative: not implementing a blk-mq
>> kernel driver and instead doing the work in userspace. I am trying to
>> avoid writing a blk-mq kernel driver and yet achieve a multi-queue
>> implementation through userspace. Is that possible?
>>
>> Also, AFAIK Ceph's block storage implementation uses a client module,
>> and this client module has two implementations: librbd (userspace)
>> and krbd (kernel module). I have not gone deep into these client
>> modules, but can librbd help me with this?
>
> I guess I don't understand the goal of your project.  A multi-queue
> implementation of what exactly?  A Ceph block device, a Ceph filesystem
> or something else entirely?  It would help if you were more specific
> because "a multi-queue driver for Ceph" is really vague.
>
> Thanks,
>
>                 Ilya
>
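
P.S. On the userspace alternative I asked about above: what I meant by
"multi-queue through userspace" is roughly several threads each driving
librbd's AIO interface. A minimal fragment of what I am picturing,
assuming an already opened rados_ioctx_t for the pool and an image
called "demo-image" (both placeholders of mine):

#include <rbd/librbd.h>

/* Sketch only: error handling trimmed; "io" is an open
 * rados_ioctx_t for the pool that holds the image. */
static int demo_rbd_write(rados_ioctx_t io, const char *buf, size_t len)
{
        rbd_image_t image;
        rbd_completion_t comp;
        int ret;

        ret = rbd_open(io, "demo-image", &image, NULL);
        if (ret < 0)
                return ret;

        /* Each thread can queue its own AIO like this; completions
         * fire independently, which is the userspace analogue of
         * multiple submission queues. */
        rbd_aio_create_completion(NULL, NULL, &comp);
        rbd_aio_write(image, 0, len, buf, comp);
        rbd_aio_wait_for_complete(comp);
        ret = rbd_aio_get_return_value(comp);
        rbd_aio_release(comp);

        rbd_close(image);
        return ret;
}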
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
