Re: RBD and Ceph FS for private cloud

Hi,

If performance is critical, you'd want CephFS kernel clients to access
your CephFS volumes/subvolumes. On the other hand, if you can't trust
the clients in your cloud, then it's recommended to set up a gateway
(an NFS-Ganesha server) in front of CephFS. The NFS-Ganesha server uses
libcephfs (the userspace CephFS client) to access the backend CephFS
subvolumes. The CephFS kernel client has better performance than the
userspace client.
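For illustration, the two client paths look roughly like this (the
CephX user name, filesystem name, and mount points are placeholders,
and the exact mount syntax varies a bit by Ceph release):

```shell
# Kernel client (better performance); mount.ceph comes from ceph-common
mount -t ceph admin@.cephfs=/ /mnt/cephfs
# older syntax with explicit monitor address:
#   mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,fs=cephfs

# Userspace client (the same libcephfs code path NFS-Ganesha uses)
ceph-fuse --id admin --client_fs cephfs /mnt/cephfs
```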

I assume you want to use cephadm to set up the NFS service. If you
plan to deploy the ingress (HAProxy+keepalived) service
(https://docs.ceph.com/en/latest/cephadm/services/nfs/#high-availability-nfs)
in front of it, keep in mind that the NFS-Ganesha server will see
HAProxy's IP address instead of the client's IP address, which may
affect your client authorization model. I am aware of OpenStack
deployments that do not use cephadm to deploy the NFS service.
Instead, they set up their own active/passive NFS-Ganesha cluster
gateways in front of CephFS, using pacemaker+corosync for example.
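As a sketch, the cephadm route can be done in two commands; the
cluster name, hosts, virtual IP, and export paths below are examples
for your own cluster, and flag spellings may differ slightly between
releases:

```shell
# Deploy a two-daemon NFS-Ganesha cluster with HAProxy+keepalived in front
ceph nfs cluster create mynfs "host1,host2" --ingress --virtual-ip 10.0.0.100/24

# Export a CephFS path through the new cluster
ceph nfs export create cephfs --cluster-id mynfs \
    --pseudo-path /shares --fsname cephfs --path /volumes/_nogroup/share1

# Clients mount via the virtual IP, which is why Ganesha only sees
# HAProxy's address:
#   mount -t nfs -o nfsvers=4.1 10.0.0.100:/shares /mnt/share1
```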

-Ramana

On Thu, Nov 3, 2022 at 7:28 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> as always, the answer is "it depends". Our company uses the Ceph
> cluster for all three protocols: we have an OpenStack cluster (RBD),
> use CephFS for work and home directories, and RadosGW for k8s
> backups, and we don't face any performance issues. I'd recommend
> giving CephFS a try; there's no need to add a gateway in between as
> a potential bottleneck, unless your policies require one.
>
> Regards,
> Eugen
>
> Zitat von Mevludin Blazevic <mblazevic@xxxxxxxxxxxxxx>:
>
> > Hi all,
> >
> > I am planning to set up an RBD pool on my Ceph cluster for virtual
> > machines created in my CloudStack environment. In parallel, a CephFS
> > pool should be used as secondary storage for VM snapshots, ISOs,
> > etc. Are there any performance issues when using both RBD and
> > CephFS, or is it better to use a separate NFS server? Moreover,
> > when setting up Ceph NFS using ceph orch, only one host is
> > "registered", from which I can mount the CephFS. Should I use more
> > than one host (e.g. high-availability NFS)? Any suggestions?
> >
> > Best,
> >
> > Mevludin
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
