Multiple CephFS mounts and FSCache

Hello everyone,

I've been trying to use CephFS together with fscache, but I have never been able to get more than one mount working with fscache enabled.
Is this a known, intentional limitation, or a bug?
It would be possible to work around it by mounting the root of the filesystem and using bind mounts, but I have separate volumes that need to be mounted separately.
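For reference, the bind-mount workaround I mean would look roughly like this (paths are just placeholders, and this assumes a single fscache-enabled mount of the filesystem root is allowed):

mount -t ceph -o fsc admin@.filesystem1=/ /mnt/cephfs  # single fscache-enabled mount of the root
mount --bind /mnt/cephfs/path1 /tmp/one                # expose /path1 separately
mount --bind /mnt/cephfs/path2 /tmp/two                # expose /path2 separately

As said, though, this does not fit my setup with separately mounted volumes.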

How to replicate:
mount -t ceph -o fsc admin@.filesystem1=/path1 /tmp/one # Succeeds
mount -t ceph -o fsc admin@.filesystem1=/path2 /tmp/two # Fails, complaining that no MDS is available
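
(The mount helper itself only reports that no MDS is available; whatever the kernel client logs on its side should be visible right after the failing mount, e.g.:)

dmesg | tail  # check the kernel log immediately after the failed fscache-enabled mount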

The same mounts without fscache work just fine:
mount -t ceph admin@.filesystem1=/path1 /tmp/one # Succeeds
mount -t ceph admin@.filesystem1=/path2 /tmp/two # Succeeds

Versions:
- Ceph Quincy 17.2.6
- Linux 6.4.6
- cachefilesd 0.10.10

-- 
Alex D.
RedXen System & Infrastructure Administration
https://redxen.eu/


