Re: Filestore OSD on CephFS?

Burkhard:
Thank you, this is literally what I was looking for.  A VM with RBD images attached was my first choice (and what we do for a test and integration lab today), but I am trying to give as much space as possible to the underlying cluster without having to frequently add/remove OSDs and rebalance the “sub-cluster”.  I didn’t think about a loopback-mapped file on CephFS, but at that point, to your point, I might as well just use RBD.  :-)

To be clear, I know the question comes across as ludicrous.  It *seems* like this will work okay for the light workload I have in mind; I just didn’t want to risk impacting the underlying cluster too much, or hit any other caveats that someone else may have run into before.  I doubt many people have tried CephFS as a Filestore OSD backend since, in general, it seems like a pretty silly idea.

Thanks,

--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 101 | Reston, VA 20190
c: 228-547-8045 f: 571-266-3106
www.knightpoint.com 
DHS EAGLE II Prime Contractor: FC1 SDVOSB Track
GSA Schedule 70 SDVOSB: GS-35F-0646S
GSA MOBIS Schedule: GS-10F-0404Y
ISO 9001 / ISO 20000 / ISO 27001 / CMMI Level 3

Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, copy, use, disclosure, or distribution is STRICTLY prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.

On Jan 16, 2019, at 8:27 AM, Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi,


just some comments:


CephFS has some overhead for accessing files (a capabilities round trip to the MDS on first access, cap cache management, a limited number of concurrent caps depending on MDS cache size, ...), so using a CephFS filesystem as storage for a Filestore OSD will add extra overhead.  I would use a loopback file, since it reduces the CephFS overhead (one file, one cap), but it may also introduce other restrictions, e.g. the fixed size of the file.
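
For illustration, a rough sketch of the loopback variant; the mount point, file name, and size below are made up, and this assumes the extra OSD runs on a host with CephFS mounted:

  # hypothetical paths and size; CephFS assumed mounted at /mnt/cephfs
  truncate -s 200G /mnt/cephfs/filestore-osd.img       # sparse, but fixed-size, backing file
  losetup --find --show /mnt/cephfs/filestore-osd.img  # prints the loop device, e.g. /dev/loop0
  mkfs.xfs /dev/loop0                                  # Filestore is typically deployed on XFS
  # then prepare /dev/loop0 as a Filestore OSD the same way you would a disk,
  # e.g. ceph-volume lvm create --filestore --data /dev/loop0 --journal <device>
  # (untested here; some releases may refuse loop devices)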

If you can use a ceph cluster as 'backend storage', you can also use an RBD image. This should remove most of the restrictions you have already mentioned (except the fixed size, again). You can also use multiple images to have multiple OSDs ;-)
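
A rough sketch of that variant; the pool and image names are hypothetical, and the client needs permission to create and map images on the backing cluster:

  rbd create backing-pool/filestore-osd-0 --size 200G
  rbd map backing-pool/filestore-osd-0     # prints the block device, e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  # repeat with filestore-osd-1, filestore-osd-2, ... for additional OSDs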


Regards,

Burkhard


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
