RE: [PATCH] ceph: Introduce CONFIG_CEPH_LIB_DEBUG and CONFIG_CEPH_FS_DEBUG

Hi Ilya,

On Thu, 2025-01-16 at 00:04 +0100, Ilya Dryomov wrote:
> On Wed, Jan 15, 2025 at 1:41 AM Viacheslav Dubeyko
> <Slava.Dubeyko@xxxxxxx> wrote:
> > 
> > 

<skipped>

> > 
> > -void ceph_msg_data_cursor_init(struct ceph_msg_data_cursor
> > *cursor,
> > -                              struct ceph_msg *msg, size_t length)
> > +int ceph_msg_data_cursor_init(struct ceph_msg_data_cursor *cursor,
> > +                             struct ceph_msg *msg, size_t length)
> >  {
> > +#ifdef CONFIG_CEPH_LIB_DEBUG
> >         BUG_ON(!length);
> >         BUG_ON(length > msg->data_length);
> >         BUG_ON(!msg->num_data_items);
> > +#else
> > +       if (!length)
> > +               return -EINVAL;
> > +
> > +       if (length > msg->data_length)
> > +               return -EINVAL;
> > +
> > +       if (!msg->num_data_items)
> > +               return -EINVAL;
> > +#endif /* CONFIG_CEPH_LIB_DEBUG */
> 
> Hi Slava,
> 
> I don't think this is a good idea.  I'm all for returning errors where
> it makes sense and is possible, and such cases don't actually need to
> be conditioned on a CONFIG option.  Here, this EINVAL error would be
> raised very far away from the cause -- potentially seconds later and
> in a different thread or even a different kernel module.  It would
> still (eventually) hang the client because the messenger wouldn't be
> able to make progress for that connection/session.
> 

First of all, let's split the patch into two parts:
(1) the CONFIG option itself;
(2) a practical application of the CONFIG option.

I believe such a CONFIG option is useful for adding pre-condition and
post-condition checks to methods: the checks would be compiled into
debug builds and excluded from release builds for production.

Potentially, this first application of the CONFIG option is not the
best one. However, while a kernel crash is convenient for investigating
a problem (in a debug build, for example), an end user would like to
see a working kernel, not a crashed one. Returning an error is the more
graceful behavior, from my point of view.

> With this patch in place, in the scenario that you have been chasing
> where CephFS apparently asks to read X bytes but sets up a reply
> message with a data buffer that is smaller than X bytes, the
> messenger would enter a busy loop, endlessly reporting the new error,
> "faulting", reestablishing the session, resending the outstanding
> read request and attempting to fit the reply into the same (short)
> reply message.  I'd argue that an endless loop is worse than an
> easily identifiable BUG_ON in one of the kworker threads.
> 
> There is no good way to process the new error, at least not with the
> current structure of the messenger.  In theory, the read request
> could be failed, but that would require wider changes and a bunch of
> special case code that would be there just to recover from what could
> have been a BUG_ON for an obvious programming error.
> 

Yes, I totally see your point. But I believe that a kernel crash and a
busy loop are both wrong behavior. Ideally, we should report the error
and continue to work without either a crash or a busy loop. Could we
rework the logic to be more user-friendly and to behave more
gracefully? I don't quite follow why we end up in a busy loop even
though we know the request has failed. Generally speaking, a failed
request should be discarded, by common sense. :)

Thanks,
Slava.




