Re: Ceph 16.2.x excessive logging, how to reduce?

Thanks for your reply, Marc!

I did try playing with various debug settings. The issue is that the mons log
all commands issued by clients, not just by the mgr. For example, an OpenStack
Cinder node asking how much space it can use:

Oct  9 07:59:01 ceph03 bash[4019]: debug 2023-10-09T07:59:01.303+0000
7f489da8f700  0 log_channel(audit) log [DBG] : from='client.?
10.208.1.11:0/3286277243' entity='client.cinder' cmd=[{"prefix":"osd pool
get-quota", "pool": "volumes-ssd", "format":"json"}]: dispatch

It is unclear which of the many mon debug options controls this particular
type of message. I searched for documentation of the mon debug options, to no
avail.
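
In case this helps anyone searching the archives later: my current guess, which
I haven't verified yet, is that these audit entries go through the cluster log
"audit" channel rather than one of the debug_* subsystems, so these are the
knobs I'm planning to look at next (mon.ceph03 below just matches the host in
the log line above, and "info" is only an example level):

# inspect the cluster-log settings on one of the mons
ceph daemon mon.ceph03 config show | grep mon_cluster_log

# raise the threshold for what the mons write to the cluster log file
ceph config set mon mon_cluster_log_file_level info

I'm not sure yet whether this also changes what the containerized mons print to
stderr/journald, so I'm treating it as a starting point only.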

/Z


On Mon, 9 Oct 2023 at 10:03, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

>
> Did you do something like this?
>
> Get the keys with:
> ceph daemon mon.a config show | grep debug_ | grep mgr
>
> ceph tell mon.* injectargs --$monk=0/0
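>
> e.g. roughly something like this (an untested sketch; $monk is just each
> debug key name pulled out of the config dump):
>
> for monk in $(ceph daemon mon.a config show | grep '"debug_' | grep mgr | cut -d'"' -f2); do
>   ceph tell mon.* injectargs --$monk=0/0
> done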
>
> >
> > Any input from anyone, please?
> >
> > This part of Ceph is very poorly documented. Perhaps there's a better
> > place to ask this question? Please let me know.
> >
> > /Z
> >
> > On Sat, 7 Oct 2023 at 22:00, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> >
> > > Hi!
> > >
> > > I am still fighting excessive logging. I've reduced unnecessary logging
> > > from most components except for mon audit: https://pastebin.com/jjWvUEcQ
> > >
> > > How can I stop logging this particular type of message?
> > >
> > > I would appreciate your help and advice.
> > >
> > > /Z
> > >
> > > On Thu, 5 Oct 2023 at 06:47, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> > >
> > >> Thank you for your response, Igor.
> > >>
> > >> Currently debug_rocksdb is set to 4/5:
> > >>
> > >> # ceph config get osd debug_rocksdb
> > >> 4/5
> > >>
> > >> This setting seems to be the default. Is my understanding correct that
> > >> you're suggesting setting it to 3/5 or even 0/5? Would setting it to 0/5
> > >> have any negative effects on the cluster?
> > >>
> > >> /Z
> > >>
> > >> On Wed, 4 Oct 2023 at 21:23, Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:
> > >>
> > >>> Hi Zakhar,
> > >>>
> > >>> to reduce rocksdb logging verbosity you might want to set debug_rocksdb
> > >>> to 3 (or 0).
> > >>>
> > >>> I presume it produces a significant part of the logging traffic.
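> > >>>
> > >>> If I have the syntax right, that would be something along these lines,
> > >>> either persistently via the config database or at runtime:
> > >>>
> > >>> ceph config set osd debug_rocksdb 3/5
> > >>> ceph tell osd.* injectargs --debug_rocksdb=3/5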
> > >>>
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Igor
> > >>>
> > >>> On 04/10/2023 20:51, Zakhar Kirpichenko wrote:
> > >>> > Any input from anyone, please?
> > >>> >
> > >>> > On Tue, 19 Sept 2023 at 09:01, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> > >>> >
> > >>> >> Hi,
> > >>> >>
> > >>> >> Our Ceph 16.2.x cluster managed by cephadm is logging a lot of very
> > >>> >> detailed messages. Ceph logging alone on hosts with monitors and
> > >>> >> several OSDs has already eaten through 50% of the endurance of the
> > >>> >> flash system drives over a couple of years.
> > >>> >>
> > >>> >> Cluster logging settings are at their defaults, and it seems that
> > >>> >> all daemons are writing lots of debug information to the logs, for
> > >>> >> example: https://pastebin.com/ebZq8KZk (it's just a snippet, but
> > >>> >> there's a lot more of the same).
> > >>> >>
> > >>> >> Is there a way to reduce the amount of logging and, for example,
> > >>> >> limit it to warnings or important messages, so that it doesn't
> > >>> >> include every successful authentication attempt, compaction, etc.,
> > >>> >> when the cluster is healthy and operating normally?
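> > >>> >>
> > >>> >> For reference, this is roughly how I've been checking the current
> > >>> >> per-daemon levels so far (osd.0 is just an example daemon):
> > >>> >>
> > >>> >> ceph daemon osd.0 config show | grep '"debug_'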
> > >>> >>
> > >>> >> I would very much appreciate your advice on this.
> > >>> >>
> > >>> >> Best regards,
> > >>> >> Zakhar
> > >>> >>
> > >>> >>
> > >>> >>
> > >>>
> > >>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


