Re: Ceph 16.2.x excessive logging, how to reduce?

Hi!

I am still fighting excessive logging. I've managed to reduce unnecessary
logging from most components, except for mon audit: https://pastebin.com/jjWvUEcQ
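
For anyone searching the archives later, the tuning I've applied so far is
roughly along these lines (an illustrative sketch rather than the exact
list; debug_rocksdb per Igor's suggestion below, debug_ms is just one more
knob we turned down):

# ceph config set osd debug_rocksdb 3/5
# ceph config set global debug_ms 0/0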

How can I stop logging this particular type of message?
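
My current guess, from reading the Pacific docs, is that these audit
entries reach the monitor log file through the cluster log's "audit"
channel, so lowering the cluster log file level might gate them. Untested
on my side, so please treat this as a sketch rather than a confirmed fix:

# ceph config set mon mon_cluster_log_file_level info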

I would appreciate your help and advice.

/Z

On Thu, 5 Oct 2023 at 06:47, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:

> Thank you for your response, Igor.
>
> Currently debug_rocksdb is set to 4/5:
>
> # ceph config get osd debug_rocksdb
> 4/5
>
> This setting seems to be the default. Is my understanding correct that
> you're suggesting setting it to 3/5 or even 0/5? Would setting it to 0/5
> have any negative effects on the cluster?
>
> /Z
>
> On Wed, 4 Oct 2023 at 21:23, Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:
>
>> Hi Zakhar,
>>
>> To reduce rocksdb logging verbosity, you might want to set debug_rocksdb
>> to 3 (or 0).
>>
>> I presume it produces a significant part of the logging traffic.
>>
>>
>> Thanks,
>>
>> Igor
>>
>> On 04/10/2023 20:51, Zakhar Kirpichenko wrote:
>> > Any input from anyone, please?
>> >
>> > On Tue, 19 Sept 2023 at 09:01, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
>> >
>> >> Hi,
>> >>
>> >> Our Ceph 16.2.x cluster, managed by cephadm, is logging a lot of very
>> >> detailed messages. On hosts with monitors and several OSDs, Ceph logs
>> >> alone have already eaten through 50% of the endurance of the flash
>> >> system drives over a couple of years.
>> >>
>> >> Cluster logging settings are at their defaults, and it seems that all
>> >> daemons are writing lots and lots of debug information to the logs,
>> >> for example: https://pastebin.com/ebZq8KZk (it's just a snippet, but
>> >> there's much more of the same).
>> >>
>> >> Is there a way to reduce the amount of logging and, for example, limit
>> >> the logging to warnings or important messages, so that it doesn't
>> >> include every successful authentication attempt, compaction, etc., when
>> >> the cluster is healthy and operating normally?
>> >>
>> >> I would very much appreciate your advice on this.
>> >>
>> >> Best regards,
>> >> Zakhar
>> >>
>> >>
>> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


