This is certainly intended behavior. If you checked the layout on the particular file, you would see it hasn't changed. Directory layouts are the default for new files, not a control mechanism for existing files.

It might be confusing, so we can talk about different presentations if there's a better option (though this has been the behavior for as long as we've had this functionality). Auto-migrating data, however, is a significant challenge we haven't sorted through, and it will have distinct controls if we ever manage to make it happen.
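As a rough sketch of what I mean (using the paths from your message below; "test5.bin" is just an arbitrary new file name, and the expected output is described from memory rather than copied from a real run):

# The pre-existing file keeps the layout it was created with, so its
# objects stay in the default (empty) namespace even after the directory
# layout was changed:
getfattr -n ceph.file.layout /mnt/restricted/test2.bin
# expect: pool=cephfs_data, with no pool_namespace field

# A file created after the directory layout change inherits the new
# layout at creation time, so its objects land in the "restricted"
# namespace:
touch /mnt/restricted/test5.bin
getfattr -n ceph.file.layout /mnt/restricted/test5.bin
# expect: pool=cephfs_data pool_namespace=restricted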
-Greg

On Wed, Mar 5, 2025 at 5:06 AM Florian Haas <florian.haas@xxxxxxxxxx> wrote:
> Hello everyone,
>
> I'm seeing some behaviour in CephFS that strikes me as unexpected, and I
> wonder if others have thoughts about it.
>
> Consider this scenario:
>
> * Ceph Reef (18.2.4) deployed with Cephadm running on Ubuntu Jammy,
> CephFS client is running kernel 5.15.0-133-generic.
> * CephFS is mounted to /mnt, using a CephX identity that has rwp
> permissions.
> * CephFS is using a single data pool named cephfs_data.
> * In the CephFS root directory, there is a subdirectory named
> "restricted". That subdirectory currently has a single file,
> "test2.bin", of 8 MiB.
>
> There is no other data in the data pool. I can verify this with the
> following rados command:
>
> rados -n client.cephfs -p cephfs_data ls
> 10000000003.00000001
> 10000000003.00000000
>
> Now I run this, from the root of my mounted CephFS:
>
> cd /mnt
> setfattr -n ceph.dir.layout.pool_namespace -v restricted /mnt/restricted
>
> I realise this is naughty, because there is currently a file named
> /mnt/restricted/test2.bin, and I'm not supposed to set layout attributes
> on a non-empty directory. However, the command does succeed, and I am
> able to read back the ceph.dir.layout xattr:
>
> getfattr -n ceph.dir.layout restricted
> # file: restricted
> ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304
> pool=cephfs_data pool_namespace=restricted"
>
> If at this stage I try to read RADOS objects in the "restricted"
> namespace, I get an empty list:
>
> rados -p cephfs_data -N restricted ls
> (empty output)
>
> Now I move my test file out of the restricted subdirectory and back in:
>
> mv /mnt/restricted/test2.bin /mnt/
> sync
> mv /mnt/test2.bin /mnt/restricted/
> sync
>
> No change:
>
> rados -p cephfs_data -N restricted ls
> (empty output)
>
> Next, I try moving the file out, and *copying* it back in:
>
> mv /mnt/restricted/test2.bin /mnt/
> sync
> cp /mnt/test2.bin /mnt/restricted/
> sync
>
> Now, I do see objects in the "restricted" namespace:
>
> rados -p cephfs_data -N restricted ls
> 100000001fd.00000001
> 100000001fd.00000000
>
> Also, if I create a *new* file in the "restricted" subdirectory, then
> its RADOS objects do end up in the correct namespace:
>
> dd if=/dev/urandom of=/mnt/restricted/test3.bin count=3 bs=4M
>
> rados -p cephfs_data -N restricted ls
> 100000001fe.00000000
> 100000001fe.00000002
> 100000001fe.00000001
> 100000001fd.00000001
> 100000001fd.00000000
>
> By contrast, here's what happens to a non-empty *file*, when I try to
> set ceph.file.layout:
>
> dd if=/dev/urandom of=/mnt/test4.bin count=4 bs=4M
>
> setfattr -n ceph.file.layout.pool_namespace -v restricted test4.bin
> setfattr: test4.bin: Directory not empty
>
> So that fails, somewhat expectedly (although the error message is odd).
>
> In summary, there are two things that confuse me here:
>
> 1. Why does setting ceph.dir.layout.pool_namespace on a non-empty
> directory succeed, when setting ceph.file.layout.pool_namespace on a
> non-empty file fails (and even confusingly with a "Directory not empty"
> message)?
>
> 2. Considering that setting ceph.dir.layout.pool_namespace on a
> non-empty directory does succeed, why does mv'ing a file to a directory
> with a different pool_namespace, and then mv'ing it back, not result in
> its RADOS objects moving to the other namespace?
>
> So I'm curious: is this a bug (or two), or have I been misunderstanding
> what's actually the expected behaviour?
>
> Cheers,
> Florian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx