Sebastian Knust <sknust@xxxxxxxxxxxxxxxxxxxxxxx> writes:

> Hi,
>
> I am running a Ceph Octopus (15.2.13) cluster mainly for CephFS. Moving (with
> mv) a large directory (mail server backup, so a few million small files) within
> the cluster takes multiple days, even though both source and destination share
> the same (default) file layout and - at least on the client I am performing the
> move on - are located within the same mount point.
>
> I also see that the move is done by recursive copying and later deletion, as I
> would only expect between different file systems / mount points.

A reason for that to happen could be the use of quotas in the filesystem.
If you have quotas set in any of the source or destination hierarchies, the
rename(2) syscall will fail with -EXDEV (the "Invalid cross-device link"
error), and I guess that 'mv' will then fall back to the less efficient
recursive copy.

A possible solution would be to temporarily remove the quotas (i.e. set them
to '0') and set them back after the rename.

Cheers,
--
Luís

> Checking with cephfs-shell (16.2.5), the move fails with the "Invalid
> cross-device link [Errno 18]" error. However, stat shows the same device
> ID for source and destination:
>
> CephFS:~/>>> mv /source/foo /dest/foo
> cephfs.OSError: error in rename /source/foo to /dest/foo: Invalid cross-device
> link [Errno 18]
>
> CephFS:~/>>> stat /source/foo
>   Device: 18446744073709551614   Inode: 1099620656366
>
> CephFS:~/>>> stat /dest/
>   Device: 18446744073709551614   Inode: 1099570814227
>
> Full output at https://pastebin.com/9V6FZ6hP
>
> Any ideas why this happens?
>
> The /source was originally created by ceph fs subvolume create ..., however I
> was not using the volume/subvolume features and reorganised the data - the
> directory inode is still the same.
>
> Cheers
> Sebastian

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
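For reference, the quota workaround Luís suggests could look roughly like the
sketch below, run on a client with the filesystem mounted. It assumes the quotas
were set through the standard ceph.quota.* extended attributes; the mount point,
directory names and the restored quota value are placeholders, not taken from
the thread.

    # check whether a quota is set on either tree
    # (a missing attribute means no quota is configured there)
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/source
    getfattr -n ceph.quota.max_files /mnt/cephfs/source
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/dest
    getfattr -n ceph.quota.max_files /mnt/cephfs/dest

    # note the current value, then clear the quota ('0' removes it)
    setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/source

    # with no quota in the way, the rename should stay a metadata operation
    mv /mnt/cephfs/source/foo /mnt/cephfs/dest/foo

    # restore the original quota afterwards (placeholder value: 10 TiB)
    setfattr -n ceph.quota.max_bytes -v 10995116277760 /mnt/cephfs/source

The same pattern applies to ceph.quota.max_files if a file-count quota is set.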