Hi Florian,

Point 1 is certainly a bug in the wording of the error message (a file/directory mix-up).

Point 2 is known (cf. https://ewal.dev/cephfs-migrating-files-between-pools) and described in the documentation: after setting the new layout, only newly written files go to the new pool (or namespace). The 'mv' command does not move the RADOS objects associated with a file from one pool or namespace to another; only reading the file and rewriting it does.

What design considerations or constraints led to this, I do not know, but what you observed is the expected behaviour.

Hope this helps,

Cheers,
Frédéric.
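To illustrate the "read and rewrite" part, here is a minimal, untested sketch (paths and filenames taken from your example below, temporary-name scheme is arbitrary) that rewrites every regular file in /mnt/restricted so that its RADOS objects land in the new namespace. Each file is copied to a temporary name, which allocates a new inode written under the directory's current layout, and then renamed over the original:

cd /mnt/restricted
for f in *; do
    [ -f "$f" ] || continue         # regular files only
    cp -p -- "$f" "$f.tmp.$$"       # new file -> new inode, data written under the new layout/namespace
    mv -- "$f.tmp.$$" "$f"          # rename over the original, unlinking the old inode
done
sync

Note that each rewritten file gets a new inode, so hard links and open file handles to the old inode are not carried over, and the old objects in the default namespace only disappear once the old inode has actually been purged.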
________________________________
From: Florian Haas <florian.haas@xxxxxxxxxx>
Sent: Wednesday, 5 March 2025 14:07
To: ceph-users
Subject: Unintuitive (buggy?) CephFS behaviour when dealing with pool_namespace layout attribute

Hello everyone,

I'm seeing some behaviour in CephFS that strikes me as unexpected, and I wonder if others have thoughts about it. Consider this scenario:

* Ceph Reef (18.2.4) deployed with Cephadm running on Ubuntu Jammy; the CephFS client is running kernel 5.15.0-133-generic.
* CephFS is mounted to /mnt, using a CephX identity that has rwp permissions.
* CephFS is using a single data pool named cephfs_data.
* In the CephFS root directory, there is a subdirectory named "restricted". That subdirectory currently holds a single 8 MiB file, "test2.bin". There is no other data in the data pool.

I can verify this with the following rados command:

rados -n client.cephfs -p cephfs_data ls
10000000003.00000001
10000000003.00000000

Now I run this, from the root of my mounted CephFS:

cd /mnt
setfattr -n ceph.dir.layout.pool_namespace -v restricted /mnt/restricted

I realise this is naughty, because there is currently a file named /mnt/restricted/test2.bin, and I'm not supposed to set layout attributes on a non-empty directory. However, the command does succeed, and I am able to read back the ceph.dir.layout xattr:

getfattr -n ceph.dir.layout restricted
# file: restricted
ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=cephfs_data pool_namespace=restricted"

If at this stage I try to list RADOS objects in the "restricted" namespace, I get an empty list:

rados -p cephfs_data -N restricted ls
(empty output)

Now I move my test file out of the restricted subdirectory and back in:

mv /mnt/restricted/test2.bin /mnt/
sync
mv /mnt/test2.bin /mnt/restricted/
sync

No change:

rados -p cephfs_data -N restricted ls
(empty output)

Next, I try moving the file out, and *copying* it back in:

mv /mnt/restricted/test2.bin /mnt/
sync
cp /mnt/test2.bin /mnt/restricted/
sync

Now, I do see objects in the "restricted" namespace:

rados -p cephfs_data -N restricted ls
100000001fd.00000001
100000001fd.00000000

Also, if I create a *new* file in the "restricted" subdirectory, then its RADOS objects do end up in the correct namespace:

dd if=/dev/urandom of=/mnt/restricted/test3.bin count=3 bs=4M
rados -p cephfs_data -N restricted ls
100000001fe.00000000
100000001fe.00000002
100000001fe.00000001
100000001fd.00000001
100000001fd.00000000

By contrast, here's what happens to a non-empty *file* when I try to set ceph.file.layout:

dd if=/dev/urandom of=/mnt/test4.bin count=4 bs=4M
setfattr -n ceph.file.layout.pool_namespace -v restricted test4.bin
setfattr: test4.bin: Directory not empty

So that fails, somewhat expectedly (although the error message is odd).

In summary, there are two things that confuse me here:

1. Why does setting ceph.dir.layout.pool_namespace on a non-empty directory succeed, when setting ceph.file.layout.pool_namespace on a non-empty file fails (and, confusingly, with a "Directory not empty" message)?

2. Considering that setting ceph.dir.layout.pool_namespace on a non-empty directory does succeed, why does mv'ing a file into a directory with a different pool_namespace, and then mv'ing it back, not result in its RADOS objects moving to the other namespace?

So I'm curious: is this a bug (or two), or have I been misunderstanding what the expected behaviour actually is?

Cheers,
Florian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx