Dear Kent,

On 12/13/23 14:48, Donald Buczek wrote:
> On 12/13/23 13:28, Kent Overstreet wrote:
>> On Wed, Dec 13, 2023 at 08:37:57AM +0100, Donald Buczek wrote:
>>> Probably not for the specific applications I mentioned (backup, mirror,
>>> accounting). These are intended to run continuously, slowly and unnoticed
>>> in the background, so they are memory- and I/O-throttled via cgroups anyway,
>>> and one even sleeps after so-and-so many stat calls to reduce its impact.
>>>
>>> If they could tell a directory from a snapshot, I would probably stop them
>>> from walking into snapshots. And if not, the snapshot id is all that is
>>> needed to tell a clone in a snapshot from a hardlink. So these don't really
>>> need the filehandle.
>>
>> Perhaps we should allocate a bit for differentiating a snapshot from a
>> non-snapshot subvolume?
>
> Are there non-snapshot subvolumes?
>
> From debugfs bcachefs/../btrees, I've got the impression that every
> volume starts with a (single) snapshot.
> [...]
> So is there really a type difference between the objects created by
> `bcachefs subvolume create` and `bcachefs subvolume snapshot`? It appears
> that they both point to a volume which points to a snapshot in the snapshot
> tree.

On second thought: even if my guesses were true, it would make sense to give
userspace the information (two rough sketches of how a backup tool might use
it are appended below).

I'd probably code my backup tool to walk into volume (or singleton snapshot)
directories and copy the data just as it does with conventional directories.
There is no risk of seeing the same file multiple times. Only the hardlink
logic would need to respect these volume borders and not treat entries in
different volumes as hardlink candidates.

Only for subvolumes which potentially duplicate data would we need special
coding to avoid copying the same data to the backup volume over and over.
Although we might already have a similar problem with reflink copies.

Best

Donald

>
> Best
>
> Donald
>
>>> In the thread it was assumed that there are other (unspecified)
>>> applications which need the filehandle and currently use name_to_handle_at().
>>>
>>> I thought it was self-evident that a single syscall to retrieve all
>>> information atomically is better than a set of syscalls. Each additional
>>> syscall has overhead, and you need to be concerned with the data changing
>>> between the calls.
>>
>> All other things being equal, yeah it would be. But things are never
>> equal :)
>>
>> Expanding struct statx is not going to be as easy as hoped, so we need
>> to be a bit careful how we use the remaining space, and since, as Dave
>> pointed out, the filehandle isn't needed for checking uniqueness unless
>> nlink > 1, it's not really a hot path in any application I can think of.
>>
>> (If anyone does know of an application where it might matter, now's the
>> time to bring it up!)
>>
>>> A userspace NFS server, as an example of an application where visible
>>> performance is more relevant, was already mentioned by someone else.
>>
>> I'd love to hear confirmation from someone more intimately familiar with
>> NFS, but AFAIK it shouldn't matter there; the filehandle exists to
>> support resuming IO or other operations to a file (because the server
>> can go away and come back). If all the client did was a stat, there's no
>> need for a filehandle - that's not needed until a file is opened.

--
Donald Buczek
buczek@xxxxxxxxxxxxx
Tel: +49 30 8413 1433
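
Sketch 1: to make the "nlink > 1" point from the quoted discussion concrete,
here is a rough, untested C sketch of how a backup walker might keep the extra
syscall rare: it only asks for a file handle via name_to_handle_at() when
statx() reports more than one link. The helper names are mine, not from any
existing tool.

/*
 * Rough sketch, untested: only fetch a file handle for entries that can
 * actually be hardlink duplicates.  Per the discussion above, the handle
 * is only needed for uniqueness checks when nlink > 1, so the extra
 * name_to_handle_at() call stays off the common path.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Returns 1 if the entry may be a hardlink duplicate, 0 if not, -1 on error. */
static int needs_hardlink_check(int dirfd, const char *name)
{
	struct statx stx;

	if (statx(dirfd, name, AT_SYMLINK_NOFOLLOW,
		  STATX_TYPE | STATX_NLINK, &stx))
		return -1;

	return S_ISREG(stx.stx_mode) && stx.stx_nlink > 1;
}

/* Only called when needs_hardlink_check() returned 1. */
static struct file_handle *get_handle(int dirfd, const char *name, int *mount_id)
{
	struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);

	if (!fh)
		return NULL;
	fh->handle_bytes = MAX_HANDLE_SZ;

	if (name_to_handle_at(dirfd, name, fh, mount_id, 0)) {
		free(fh);
		return NULL;
	}
	/* (mount_id, handle bytes) can then serve as the key in the dedup table,
	 * which also keeps entries from different volumes apart. */
	return fh;
}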
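
Sketch 2: how the bit Kent proposed could be consumed, if it existed.
STATX_ATTR_SNAPSHOT is an invented name for this illustration only; no such
flag is defined today. With it, a walker could prune snapshot roots without
needing a file handle at all.

/*
 * Hypothetical: the statx attribute bit below does not exist; the name
 * STATX_ATTR_SNAPSHOT is invented for this sketch only.  If such a bit
 * were allocated, a walker could skip snapshot roots like this.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>

#define STATX_ATTR_SNAPSHOT 0x80000000ULL	/* invented, not a real kernel flag */

static int is_snapshot_root(int dirfd, const char *name)
{
	struct statx stx;

	if (statx(dirfd, name, AT_SYMLINK_NOFOLLOW, STATX_BASIC_STATS, &stx))
		return 0;

	/* Only trust the bit if the filesystem says it implements it. */
	return (stx.stx_attributes_mask & STATX_ATTR_SNAPSHOT) &&
	       (stx.stx_attributes & STATX_ATTR_SNAPSHOT);
}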