Re: [Gluster-devel] Modifying gluster's logging mechanism


 



Hi Barak,

My replies inline.

On Thu, Nov 21, 2019 at 6:34 PM Barak Sason Rofman <bsasonro@xxxxxxxxxx> wrote:

Hello Gluster community,

My name is Barak and I joined Red Hat's Gluster development team in August.
Shortly after my arrival, I identified a potential problem with gluster's logging mechanism and I'd like to bring the matter up for discussion.

The general concept of the current mechanism is that every worker thread that needs to log a message has to contend for a mutex which guards the log file, write the message, flush the data, and then release the mutex.
I see two design / implementation problems with that mechanism:

  1. The mutex that guards the log file is likely under constant contention.

  2. Each worker thread performs the I/O itself, thus slowing down its "real" work.
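The write path described above can be sketched roughly as follows (a sketch of the scheme, not the actual gluster code; the names are mine):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdarg.h>

/* Sketch of the current scheme: one global mutex guards the log
 * file, and every worker thread writes and flushes while holding it. */
static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
static FILE *log_fp;

static void log_msg(const char *fmt, ...)
{
    va_list ap;

    pthread_mutex_lock(&log_lock);   /* point 1: constant contention */
    va_start(ap, fmt);
    vfprintf(log_fp, fmt, ap);       /* point 2: worker does the I/O */
    va_end(ap);
    fflush(log_fp);                  /* ...and waits for the flush   */
    pthread_mutex_unlock(&log_lock);
}
```

Every logging call serializes all workers behind `log_lock` and charges the file I/O and flush to the worker's own time slice, which is exactly the cost the two points above describe.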


Both of the above points are true, and they can have an impact when there is a lot of logging. While some of us would say we knew the impact, we had not picked this up as a priority item to fix, for the reasons below.

* First of all, when we looked at logging very early in the project's life, our approach was based mostly on kernel logs (/var/log/messages). We decided that, as a file system which is very active with I/O and should run for years without failing, there should be NO log messages when the system is healthy, which should be 99%+ of the time.

* Now, if there are no logs when everything is healthy, and most things are healthy 99% of the time, the focus was naturally not the 'performance' of the logging infra but its correctness. That is where the strict ordering through locks, to preserve the timestamps of logs and keep them organized, came from.


Initial tests, done by removing logging from the regression testing, show an improvement of about 20% in run time. This indicates we're taking a pretty heavy performance hit just because of the logging activity.



That is an interesting observation. For this alone, can we have an option to disable all logging during regression? That would speed things up for normal runs immediately.
 

In addition to these problems, the logging module is due for an upgrade:

  1. There are dozens of APIs in the logger, many of them deprecated - this makes it very hard for new developers to keep evolving the project.

  2. One of the key points for Gluster-X, presented in October at Bangalore, is the switch to structured logging all across gluster.
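To make point 2 concrete: gluster has not fixed a format here, but a common structured-logging convention is flat key=value pairs, which are machine-parseable unlike free-form printf text. A purely illustrative sketch (function name and fields are my own invention):

```c
#include <stdio.h>

/* Illustrative only: emit a log message as flat key=value pairs
 * instead of free-form text, so tools can filter and index it. */
static int log_structured(char *out, size_t len, const char *level,
                          const char *subsys, const char *event,
                          int err)
{
    return snprintf(out, len,
                    "level=%s subsys=%s event=%s errno=%d",
                    level, subsys, event, err);
}
```

For example, `log_structured(buf, sizeof buf, "ERROR", "posix", "open-failed", 2)` would produce `level=ERROR subsys=posix event=open-failed errno=2`, which an EFK/ELK-style pipeline can index by field.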



+1
 

Given these points, I believe we’re in a position that allows us to upgrade the logging mechanism by both switching to structured logging across the project AND replacing the logging system itself, thus “killing two birds with one stone”.

Moreover, if the upgrade is successful, the new logging mechanism might be adopted by other teams in Red Hat, which would lead to uniform logging activity across different products.



This, in my opinion, is a good reason to undertake this activity, mainly because our logging infra should be similar to that of other tools, so that one doesn't face a learning curve to understand gluster's logging.
 

I’d like to propose a logging utility I’ve been working on for the past few weeks.
This project is still a work in progress (much work still needs to be done on it), but I'd like to bring the matter up now, so that if the community wants to advance on that front, we can collaborate and shape the logger to best suit the community's needs.

An overview of the system:

The logger provides several pre-allocated buffers (number and size are user-defined) which threads can 'register' to in order to receive a private buffer. In addition, a single shared buffer is also pre-allocated (size is user-defined). The number of buffers and their size will be modifiable at runtime (not yet implemented).
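A hypothetical sketch of the registration step, assuming names of my own invention (`logger_register` and `log_buf` are not the actual Lockless_Logger API):

```c
#include <pthread.h>
#include <stddef.h>

#define NBUFS  8         /* user-defined number of private buffers */
#define BUF_SZ 4096      /* user-defined buffer size               */

/* All buffers are allocated up front, so registration only hands
 * out a slot - no malloc on the logging path, hence no special
 * out-of-memory handling is ever needed there. */
struct log_buf {
    char data[BUF_SZ];
    unsigned head, tail;
};

static struct log_buf pool[NBUFS];       /* pre-allocated at init */
static int next_buf;
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

struct log_buf *logger_register(void)
{
    struct log_buf *b = NULL;

    pthread_mutex_lock(&pool_lock);
    if (next_buf < NBUFS)
        b = &pool[next_buf++];
    pthread_mutex_unlock(&pool_lock);
    return b;            /* NULL => thread uses the shared buffer */
}
```

Registration takes a lock once per thread lifetime, not per message, so the hot logging path itself stays lock-free.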

Worker threads write messages in one of the 3 ways described next, and an internal logger thread constantly iterates over the existing buffers and drains the data to the log file.

As all buffers are allocated at the initialization stage, no special treatment is needed for "out of memory" cases.

The following writing levels exist: 

  1. Level 1 - Lockless writing: Lockless writing is achieved by assigning each thread a private ring buffer. A worker thread writes to that buffer and the logger thread drains it into the log file.

In case the private ring buffer is full and not yet drained, or in case the worker thread has not registered for a private buffer, we fall back to the following writing method:

  2. Level 2 - Shared buffer writing: The worker thread writes its data into a buffer that is shared across all threads. This is done in a synchronized manner.

In case the shared ring buffer is also full and not yet drained, we fall back to the last writing method:

  3. Level 3 - Direct write: This is the slowest form of writing - the worker thread writes directly to the log file.
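The three-level fallback above could be sketched like this (a sketch under assumed names - `ring_put`, `log_write`, and the ring layout are my own, not the actual project code):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 4
#define MSG_LEN    128

/* Single-producer/single-consumer ring: the owning worker advances
 * head, the logger thread advances tail while draining. */
struct ring {
    char slots[RING_SLOTS][MSG_LEN];
    unsigned head, tail;
};

static __thread struct ring *my_ring;      /* level 1: private ring   */
static struct ring shared_ring;            /* level 2: shared buffer  */
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;
static FILE *log_fp;                       /* level 3: direct write   */

static int ring_put(struct ring *r, const char *msg)
{
    if (r->head - r->tail == RING_SLOTS)
        return -1;                         /* full, not yet drained   */
    strncpy(r->slots[r->head % RING_SLOTS], msg, MSG_LEN - 1);
    r->head++;        /* a real lockless ring would use a release store */
    return 0;
}

static void log_write(const char *msg)
{
    /* Level 1: lockless write into the thread's private ring. */
    if (my_ring && ring_put(my_ring, msg) == 0)
        return;

    /* Level 2: synchronized write into the shared ring. */
    pthread_mutex_lock(&shared_lock);
    int ok = ring_put(&shared_ring, msg);
    pthread_mutex_unlock(&shared_lock);
    if (ok == 0)
        return;

    /* Level 3: slowest path - write directly to the log file. */
    fprintf(log_fp, "%s\n", msg);
    fflush(log_fp);
}
```

In the common case a registered thread never touches a lock or the file; only under sustained bursts (both rings full) does it pay the direct-write cost.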

The idea behind this utility is to reduce the impact of logging on runtime as much as possible. Part of this reduction comes at the cost of having to parse and reorganize the messages in the log files using a dedicated tool (yet to be implemented), as there is no guarantee on the order of logged messages.

This is an agreeable compromise for reducing load on a running system. As logs are mostly used for debugging, and are mostly analyzed in isolation after the event happens, if there is a tool to get things ordered, it should be fine.

I am in agreement with Sankarshan, who responded in another thread about tools like the EFK/ELK stack for centralized logging. By moving to centralized logging, and having a proper structure for logging, the initial goals we had wouldn't be broken, and we would have better performance even when logging in DEBUG mode.
 

The full logger project is hosted on: https://github.com/BarakSason/Lockless_Logger

For project documentation visit: https://baraksason.github.io/Lockless_Logger/


Please make sure we have minimal dependencies for this project, and also that the install steps are easy. One of the reasons Gluster succeeded is that it is very easy to install (and uninstall), without many dependencies. I personally would like to keep that behavior the same.
 
While developing, and for at least a couple of releases after this gets used, we need to keep the current logging structure as an option too, so there would be confidence to try the new thing.

Regards,
Amar

I thank you all for reading through my suggestion and I’m looking forward to your feedback,

--
Barak Sason Rofman

Gluster Storage Development

Red Hat Israel

34 Jerusalem rd. Ra'anana, 43501

bsasonro@redhat.com    T: +972-9-7692304
M: +972-52-4326355

_______________________________________________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968


NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel


Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users

