Re: Ceph 16.2.14: osd crash, bdev() _aio_thread got r=-1 ((1) Operation not permitted)

Thank you, Tyler. Unfortunately (or fortunately?) the drive is fine in this
case: there were no errors reported by the kernel at the time, and I
successfully managed to run a bunch of tests on the drive for many hours
before rebooting the host. The drive has worked without any issues for 3
days now.
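
In case it's useful, the read test I ran was roughly along these lines (the device path and the 1 GiB size below are placeholders, not my exact commands):

```shell
#!/bin/sh
# Non-destructive read test of a block device: read 1 GiB and discard it,
# checking only that reads succeed. /dev/sdX and the size are placeholders.
dev=${1:-/dev/sdX}
if dd if="$dev" of=/dev/null bs=1M count=1024 2>/dev/null; then
    echo "read test OK: $dev"
else
    echo "read test FAILED: $dev"
fi
```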

I've already checked the file descriptor limits; the defaults are already
very high, and the usage is rather low.
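
For reference, this is roughly how I checked the usage; it's generic Linux, nothing Ceph-specific (the pgrep pattern below is just an example of finding the OSD's PID):

```shell
#!/bin/sh
# Compare a process's open file descriptor count against its soft limit.
# Pass the OSD's PID, e.g. "$(pgrep -f 'ceph-osd.*--id 56')" (example pattern).
pid=${1:-$$}
used=$(ls "/proc/$pid/fd" | wc -l)
limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
echo "pid=$pid open_fds=$used soft_limit=$limit"
```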

/Z

On Wed, 6 Dec 2023 at 03:24, Tyler Stachecki <stachecki.tyler@xxxxxxxxx>
wrote:

> On Tue, Dec 5, 2023 at 10:13 AM Zakhar Kirpichenko <zakhar@xxxxxxxxx>
> wrote:
> >
> > Any input from anyone?
> >
> > /Z
>
> It's not clear whether these issues are related. I see three
> things in this e-mail chain:
> 1) bdev() _aio_thread with EPERM, as in the subject of this e-mail chain
> 2) bdev() _aio_thread with the I/O error condition (see [1], which is
> a *slightly* different if/else branch than the EPERM case)
> 3) The tracker, which seems related to BlueFS (?):
> https://tracker.ceph.com/issues/53906
>
> Shooting in the dark a bit... but for the EPERM issue you mention, maybe
> try raising the number of file descriptors available to the
> process/container/system? Not sure why else you would get EPERM in
> this context.
>
> On 2) I have definitely seen that happen before and it's always
> matched with an I/O error reported by the kernel in `dmesg` output.
> Sometimes the drive keeps going along fine after a restart of the
> OSD/reboot of the system, sometimes not.
>
> Cheers,
> Tyler
>
> >
> > On Mon, 4 Dec 2023 at 12:52, Zakhar Kirpichenko <zakhar@xxxxxxxxx>
> > wrote:
> >
> > > Hi,
> > >
> > > Just to reiterate, I'm referring to an OSD crash loop because of the
> > > following error:
> > >
> > > "2023-12-03T04:00:36.686+0000 7f08520e2700 -1 bdev(0x55f02a28a400
> > > /var/lib/ceph/osd/ceph-56/block) _aio_thread got r=-1 ((1) Operation not
> > > permitted)". More relevant log entries: https://pastebin.com/gDat6rfk
> > >
> > > The crash log suggested that there could be a hardware issue, but there
> > > was none: I was able to access the block device for testing purposes
> > > without any issues, and the problem went away after I rebooted the host.
> > > This OSD is currently operating without any issues under load.
> > >
> > > Any ideas?
> > >
> > > /Z
> > >
> > > On Sun, 3 Dec 2023 at 16:09, Zakhar Kirpichenko <zakhar@xxxxxxxxx>
> > > wrote:
> > >
> > >> Thanks! The bug I referenced is the reason for the 1st OSD crash, but
> > >> not for the subsequent crashes. The reason for those is described in
> > >> the part you snipped. I'm asking for help with that one.
> > >>
> > >> /Z
> > >>
> > >> On Sun, 3 Dec 2023 at 15:31, Kai Stian Olstad <ceph+list@xxxxxxxxxx>
> > >> wrote:
> > >>
> > >>> On Sun, Dec 03, 2023 at 06:53:08AM +0200, Zakhar Kirpichenko wrote:
> > >>> >One of our 16.2.14 cluster OSDs crashed again because of the dreaded
> > >>> >https://tracker.ceph.com/issues/53906 bug.
> > >>>
> > >>> <snip />
> > >>>
> > >>> >It would be good to understand what has triggered this condition and
> > >>> >how it can be resolved without rebooting the whole host. I would very
> > >>> >much appreciate any suggestions.
> > >>>
> > >>> If you look closely at 53906 you'll see it's a duplicate of
> > >>> https://tracker.ceph.com/issues/53907
> > >>>
> > >>> In there you have the fix and a workaround until the next minor
> > >>> release.
> > >>>
> > >>> --
> > >>> Kai Stian Olstad
> > >>>
> > >>
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>