On Thu, Jan 04, 2024 at 07:15:42AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 03, 2024 at 03:48:49PM -0800, Darrick J. Wong wrote:
> > "To support these cases, a pair of ``xfile_obj_load`` and ``xfile_obj_store``
> > functions are provided to read and persist objects into an xfile. Any errors
> > encountered here are treated as an out of memory error."
> 
> Ok.
> 
> > > -DEFINE_XFILE_EVENT(xfile_pwrite);
> > > +DEFINE_XFILE_EVENT(xfile_obj_load);
> > > +DEFINE_XFILE_EVENT(xfile_obj_store);
> > 
> > Want to shorten the names to xfile_load and xfile_store?  That's really
> > what they're doing anyway.
> 
> Fine with me.  Just for the trace points or also for the functions?

Might as well do them both; I don't think anyone really depends on those
exact names.  I don't. :)

> Also - returning ENOMEM for the API misuse cases (too large object,
> too large total size) always seemed weird to me.  Is there a really
> strong case for it or should we go for actually useful errors for those?

The errors returned by the xfile APIs can float out to userspace, so I'd
rather have them all turn into:

$ xfs_io -c 'scrub <fubar>' /
XFS_IOC_SCRUB_METADATA: Cannot allocate memory.

vs.

$ xfs_io -c 'scrub <fubar>' /
XFS_IOC_SCRUB_METADATA: File is too large.

So that users won't think that the root directory is too big or
something.

--D