Thanks both of you for the valuable insights. This is mostly as I expected.

I think it would be a pretty interesting feature if the CephFS kernel driver were able to recognize and coordinate mounts of the same CephFS instance and then delegate the move operation accordingly. But honestly, I have no idea whether that is technically feasible at all, nor whether there would be many applications for it.

For now I will be able to reach all directories from within the same mount and thus keep operations to a minimum. But I will certainly keep in mind that there are exceptions, such as with quotas.
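For the record, below is roughly how I plan to wrap the moves in Python, following Magnus' Path.rename suggestion. It is only a minimal, untested sketch: the helper name and the mount path are made up, and it of course does nothing about the quota corner case Frank mentioned. Within one mount the rename is atomic; if it fails with EXDEV (source and target on different mounts), it falls back to a plain copy and unlink via shutil.move.

    import errno
    import shutil
    from pathlib import Path

    def move_on_cephfs(src: Path, dst: Path) -> None:
        """Move src to dst, preferring an atomic rename within a single mount."""
        # Assumes dst does not exist yet; create its parent directory if needed.
        dst.parent.mkdir(parents=True, exist_ok=True)
        try:
            # Within the same mount this is a single rename call.
            src.rename(dst)
        except OSError as err:
            if err.errno != errno.EXDEV:
                raise
            # EXDEV: src and dst live on different mounts, so the kernel
            # refuses the rename and we fall back to copy + unlink.
            shutil.move(str(src), str(dst))

    # Hypothetical usage, with the whole CephFS root mounted at /mnt/cephfs:
    # move_on_cephfs(Path("/mnt/cephfs/FolderA/data"), Path("/mnt/cephfs/FolderB/data"))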
Best,
Mathias

On 5/18/2022 4:39 PM, Frank Schilder wrote:
> Hi,
>
> a move across mount points should default to a copy followed by a remove (cp + rm), because from the OS' point of view these are two different file systems.
>
> However, a move within the same mount point should be atomic. Unfortunately, there is an exception: if the move crosses directories with quotas set. This had been fixed some time ago but was, unfortunately, reverted to the old behaviour. See https://tracker.ceph.com/issues/48203.
>
> If you are affected by this, please make some noise. The change applied is a serious performance regression working around a close-to-irrelevant corner case (it does not even solve it!), and it would be great if the devs would pick up on the suggestion to add a mount option "fast_move" to enable the usual atomic move on demand, with the low-cost sanity checks applied.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Magnus HAGDORN <Magnus.Hagdorn@xxxxxxxx>
> Sent: 18 May 2022 15:45:54
> To: ceph-users@xxxxxxx
> Subject: Re: Moving data between two mounts of the same CephFS
>
> Hi Mathias,
> I have noticed in the past that moving directories within the same mount point can take a very long time using the system mv command. I use a python script to archive old user directories by moving them to a different part of the filesystem which is not exposed to the users. I use the rename method of a python Path object, which is atomic.
>
> In your case I would expect a copy and unlink, because the operation is across different mount points.
>
> Is this operation a one-off or a regular occurrence? If it is a one-off, then I would do it as administrator. If it is a regular occurrence, I would look into re-arranging the filesystem layout to make this possible.
>
> Regards
> magnus
>
>
> On Wed, 2022-05-18 at 13:34 +0000, Kuhring, Mathias wrote:
>> Dear Ceph community,
>>
>> Let's say I want to make different sub-directories of my CephFS separately available on a client system, i.e. without exposing the parent directories (because they contain other sensitive data, for instance).
>>
>> I can simply mount specific folders, as primitively illustrated here:
>>
>> CephFS root:
>> - FolderA
>> - FolderB
>> - FolderC
>>
>> Client mounts:
>> - MountA --> cephfs:/FolderA
>> - MountB --> cephfs:/FolderB
>>
>> Now I'm wondering what actually happens in the background when I move (not copy) data from MountA to MountB.
>> In particular, is CephFS by chance aware of this situation, and does it actually perform an atomic move internally?
>> Or is it more like a copy-and-unlink operation via the client?
>>
>> I appreciate your thoughts.
>>
>> Best wishes,
>> Mathias

--
Mathias Kuhring

Dr. rer. nat.
Bioinformatician
HPC & Core Unit Bioinformatics
Berlin Institute of Health at Charité (BIH)

E-Mail: mathias.kuhring@xxxxxxxxxxxxxx
Mobile: +49 172 3475576

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx