FW: CephFS concurrency question

Just realised this never went to the group, sorry folks.
 
Is it worth me trying the FUSE driver? Is that likely to make a difference in this kind of scenario? I'm still not sure whether what I'm trying to do with CephFS is even supposed to work like this. Setting the OpenStack/libvirt parts aside for the moment: should I be able to create a file on one host and read it from another at the same time? Can anyone confirm?
 

From: neville.taylor@xxxxxxxxxxxxx
To: john.spray@xxxxxxxxxx
Subject: RE: CephFS concurrency question
Date: Tue, 21 Apr 2015 16:20:59 +0100

Hi John,
 
I'm using different pools and users for Cinder volumes and CephFS. I've created a CephFS user with rwx on the CephFS pool, exported its key to a secret file, and pass that in the mount options in /etc/fstab as follows:
 
X.X.X.X:6789:/       /var/lib/nova/instances ceph    name=cephfs,secretfile=/etc/ceph/cephfs.secret,noatime         0       2
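For reference, this is roughly how the secret file was produced and how the user's caps can be inspected (the client name "client.cephfs" here is illustrative; substitute whatever name was actually created):

```shell
# Extract just the base64 key for the mount user into the secret file
# referenced by the fstab entry above (client name is an example)
ceph auth print-key client.cephfs > /etc/ceph/cephfs.secret
chmod 600 /etc/ceph/cephfs.secret

# Show the full entry, including the caps currently granted to this user
ceph auth get client.cephfs
```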
 
If I shut down either host, everything works fine, so I don't think it's a permissions issue. Things only seem to go wrong once I access a file on one host after it has already been accessed on the other.
 
Thanks,
 
Neville

 

Date: Tue, 21 Apr 2015 15:31:40 +0100
From: john.spray@xxxxxxxxxx
To: neville.taylor@xxxxxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
Subject: Re: CephFS concurrency question

On 21/04/15 13:43, Neville wrote:
To test this further I tried the following basic tests:
 
On Host 2:
 
root@devops-kvm02:/var/lib/nova/instances# echo hello > test
root@devops-kvm02:/var/lib/nova/instances# cat test
hello
root@devops-kvm02:/var/lib/nova/instances#

Then from Host 1:
 
root@devops-kvm01:/var/lib/nova/instances# cat test
cat: test: Operation not permitted
root@devops-kvm01:/var/lib/nova/instances#

Then back on Host 2:

root@devops-kvm02:/var/lib/nova/instances# cat test
cat: test: Operation not permitted
root@devops-kvm02:/var/lib/nova/instances#

Should this even work? My understanding is that CephFS allows concurrent access, but I'm not sure whether there's some file locking involved that I need to understand.

You might want to check the OSD authentication caps for the client hosts. The results above are consistent with settings that forbid the clients from reading objects from the CephFS data pool (kvm02 can read initially because its newly written data is still in its cache). Perhaps your hosts have keys set up that explicitly limit their access to the RBD pools and don't take account of the CephFS data pool.

John


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com