Re: Splitting large message written to stdout, explanation?

@Lennart  Earlier, our unit file had the following definitions:

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=sbagent



I'm not sure exactly how systemd was handling this, but my assumption is that systemd redirects STDOUT and STDERR to /dev/log, and the messages are then picked up and written to the respective file based on the identifier. Since I found no way in rsyslog to deal with the occasional very large log messages, I looked at the journald conf.
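
For reference, the knob I found there is LineMax= (documented in journald.conf(5), default 48K). A minimal sketch of raising it via a drop-in, assuming a file such as /etc/systemd/journald.conf.d/line-max.conf (the file name is my choice) and a restart of systemd-journald afterwards:

[Journal]
# Example value only, not a recommendation; pick something larger than the biggest expected line.
LineMax=256K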

I removed those explicit definitions (StandardOutput=, etc.) from the unit file.

Currently, our logs are written to STDOUT and STDERR, systemd writes them to the journal, and rsyslog, observing the journal, redirects them to a specific file (that is my understanding).
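
For completeness, my understanding is that with those directives removed the unit simply falls back to the systemd defaults, i.e. effectively:

StandardOutput=journal
StandardError=inherit

so stdout/stderr go straight to journald, and rsyslog reads the entries back out of the journal (e.g. via its imjournal module) before writing them to the target file.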



> As mentioned you can use the _LINE_BREAK= field to reassemble the
> lines. But seriously, if you are logging megabytes of data in single
> log messages you are doing things wrong. Revisit what you are doing
> there, you are trying to hammer a square log message into a round log
> transport. Bad idea.


@Lennart How? For reference, this is what the split parts of a large log message look like:

May 22 05:22:38 088c16 echo-command[31926]:. Start ...
... 2109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654
May 22 05:22:38 088c16 echo-command[31926]: 32109876543210987654321098765432109876543210987654321098765432109876543210987654321098765... .. End
 


"Start ... End" is supposed to be a single line, but because it reached the 48K upper limit it was broken into two entries. How can I reassemble them?
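
One idea I'm considering is doing the reassembly on the consumer side. Below is a rough, untested sketch, assuming the entries are exported with `journalctl -o json` (one JSON object per line) and that the split parts are marked with _LINE_BREAK=line-max as per systemd.journal-fields(7); grouping by _STREAM_ID (falling back to _PID) is my own assumption:

#!/usr/bin/env python3
# Rough sketch: rejoin journal entries that journald split at the LineMax limit.
# Expects `journalctl -o json` on stdin, one JSON object per line.
import json
import sys

def reassemble(lines):
    pending = {}  # grouping key -> text accumulated from line-max parts

    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue
        entry = json.loads(raw)
        msg = entry.get("MESSAGE", "")
        # MESSAGE can be an array of bytes for non-UTF-8 payloads; skip those here.
        if not isinstance(msg, str):
            continue
        key = entry.get("_STREAM_ID", entry.get("_PID", ""))

        if entry.get("_LINE_BREAK") == "line-max":
            # This part was cut at LineMax; buffer it and wait for the continuation.
            pending[key] = pending.get(key, "") + msg
        else:
            # Final (or only) part: emit whatever was buffered plus this part.
            print(pending.pop(key, "") + msg)

if __name__ == "__main__":
    reassemble(sys.stdin)

Something like `journalctl -t echo-command -o json | python3 reassemble.py` (identifier taken from the sample above, script name hypothetical) would then print "Start ... End" as one line again.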
 

Thanks

On Mon, May 22, 2023 at 2:01 PM Lennart Poettering <lennart@xxxxxxxxxxxxxx> wrote:
On Mo, 22.05.23 09:31, Virendra Negi (virendra.negi@xxxxxxxxxxxxxxxxxxxx) wrote:

> Ok, I think I get a sense of who is doing what, which results in the large
> message getting split. As per my understanding it's the *LINE_MAX* value in
> the journald conf that causes the large message to get split.
>
> The default value is 48K, and the size of the split message comes to
> approximately 48K, which explains it. Now I'm wondering: *is there a way to
> prepend an "IDENTIFIER" to the messages that were split* from the original
> message, so that we can reassemble/merge them at the central *Logstash*
> server?
>
> I'm looking at the relevant section of code,
> https://github.com/systemd/systemd/blob/main/src/journal/journald-stream.c#L498
> and I don't think there exists anything like it. Still, I want to check if
> there is anything possible.

As mentioned you can use the _LINE_BREAK= field to reassemble the
lines. But seriously, if you are logging megabytes of data in single
log messages you are doing things wrong. Revisit what you are doing
there, you are trying to hammer a square log message into a round log
transport. Bad idea.

Lennart

--
Lennart Poettering, Berlin


 
 



