Re: map RBD into CephFS?

On Wed, Feb 26, 2014 at 10:37 AM, David Champion <dgc@xxxxxxxxxxxx> wrote:
> Thanks, Greg, for the response.
>
> * On 26 Feb 2014, Gregory Farnum wrote:
>> >
>> > 1. Place the 8m files in a disk image.  Mount the disk image (read-only)
>> > to provide access to the 8m files, and allow copying the disk image to
>> > accelerate read of the entire dataset.
>> >
>> > 2. Put the 8m files in an RBD, and mount that instead.  I guess if it's
>> > RO I can map it to multiple heads -- true?
>>
>> Should be fine.
>
> Good to know, even if this isn't really an option. :)
>
>
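
(For the archive: mapping the same image read-only from several
clients would look roughly like this; the pool, image, and mount
point are made-up names, and this assumes an rbd new enough to
support --read-only:

    # on each client that needs the data
    rbd map datapool/dataset --read-only
    mount -o ro /dev/rbd/datapool/dataset /mnt/dataset

Since no client writes to the image, there is no cache-coherency
problem with mapping it in several places at once.)
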
>> > Questions:
>> >
>> > q1. CephFS has a tunable for max file size, currently set to 1TB.  If
>> > I want to change this, what needs to be done or redone?  Do I have to
>> > rebuild, or can I just change the param, restart services, and be off?
>>
>> What version are you running? Whether that's fixed at FS creation
>> time or settable via the CLI ("ceph mds set max_file_size
>> <size_in_bytes>") depends on the version, but I don't remember the
>> cutoff off-hand.
>
> 0.72.1.  Would it be safe to try this and see?  Or could that break
> something?  I assume that I'll know it worked if I can create a 1.6T
> file. :)

If it accepts the command, it worked properly. ;)
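
For reference, a quick end-to-end check; the new limit and the path
here are just examples:

    # raise the limit to 2 TB (the value is in bytes)
    ceph mds set max_file_size 2199023255552
    # a sparse file bigger than the old 1 TB cap should now succeed
    truncate -s 1.6T /mnt/cephfs/bigfile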

>
>> > q2. Sounds fine, except then the only access to the RBD raw blocks is
>> > via the block dev in /dev.  I expose the CephFS mount to users, but not
>> > /dev.  Is there a way to map the RBD as a pseudo-file within the CephFS
>> > mount?  If not, then perhaps I'm looking at a bind/loopback mount of
>> > /dev/rbd/rbd into the user-visible namespace?
>>
>> No, definitely no mapping of RBD into CephFS. It's a completely
>> different data format.
>
> OK, thanks.  I thought there might be a logical layer that lets you
> expose the raw blocks as a named byte sequence.  (In theory it should
> work!)

No, we'd need a coordination layer between the RBD clients and the
CephFS metadata system. That would be bad.
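
The bind mount you mention should work for exposing the raw device,
though, with the caveat that it is local to the client that sets it up
(other CephFS clients just see the placeholder file). Roughly, with
illustrative paths:

    # the bind target must already exist as a regular file in CephFS
    touch /mnt/cephfs/exports/dataset.img
    # overlay the mapped RBD device onto the user-visible path
    mount --bind /dev/rbd/datapool/dataset /mnt/cephfs/exports/dataset.img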
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



