Re: Logging in a multi-brick daemon

On 16 February 2017 at 07:30, Shyam <srangana@xxxxxxxxxx> wrote:
On 02/15/2017 08:51 PM, Atin Mukherjee wrote:

On Thu, 16 Feb 2017 at 04:09, Jeff Darcy <jdarcy@xxxxxxxxxx> wrote:

    One of the issues that has come up with multiplexing is that all of
    the bricks in a process end up sharing a single log file.  The
    reaction from both of the people who have mentioned this is that we
    should find a way to give each brick its own log even when they're
    in the same process, and make sure gf_log etc. are able to direct
    messages to the correct one.  I can think of ways to do this, but it
    doesn't seem optimal to me.  It will certainly use up a lot of file
    descriptors.  I think it will use more memory.  And then there's the
    issue of whether this would really be better for debugging.  Often
    it's necessary to look at multiple brick logs while trying to
    diagnose a problem, so it's actually kind of handy to have them
    all in one file.  Which would you rather do?

    (a) Weave together entries in multiple logs, either via a script or
    in your head?

    (b) Split or filter entries in a single log, according to which
    brick they're from?

    To me, (b) seems like a much more tractable problem.  I'd say that
    what we need is not multiple logs, but *marking of entries* so that
    everything pertaining to one brick can easily be found.  One way to
    do this would be to modify volgen so that a brick ID (not name
    because that's a path and hence too long) is appended/prepended to
    the name of every translator in the brick.  Grep for that brick ID,
    and voila!  You now have all log messages for that brick and no
    other.  A variant of this would be to leave the names alone and
    modify gf_log so that it adds the brick ID automagically (based on a
    thread-local variable similar to THIS).  Same effect, other than
    making translator names longer, so I'd kind of prefer this
    approach.  Before I start writing the code, does anybody else have
    any opinions, preferences, or alternatives I haven't mentioned yet?
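
For concreteness, here is a rough sketch of the thread-local marking idea. All names and signatures below are hypothetical illustrations, not the actual gf_log/THIS implementation:

/* Minimal sketch of per-brick log tagging via a thread-local variable,
 * in the spirit of THIS.  Hypothetical names, illustrative only. */
#include <stdarg.h>
#include <stdio.h>

static __thread const char *current_brick_id = NULL;

/* Set once a thread starts working on behalf of a given brick. */
static void
set_brick_id (const char *id)
{
        current_brick_id = id;
}

/* Logging wrapper: prepend the brick ID so entries for one brick can
 * later be pulled out of the shared log file. */
static void
brick_log (const char *fmt, ...)
{
        va_list ap;

        if (current_brick_id)
                fprintf (stderr, "[%s] ", current_brick_id);

        va_start (ap, fmt);
        vfprintf (stderr, fmt, ap);
        va_end (ap);
        fputc ('\n', stderr);
}

int
main (void)
{
        set_brick_id ("testvol-client-2");
        brick_log ("connection from %s", "10.0.0.5");
        return 0;
}

The point is simply that the brick ID travels with the thread, so every log call gets tagged without changing any translator names.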



A few questions/thoughts here:

Debugging will involve getting far more (and bigger) files from customers unless we have a script (?) to grep out only the messages pertaining to the volume in question (something like the small filter sketched below). IIUC, this would just be a matter of grepping for the volname and then determining which brick each message pertains to based on the brick ID, correct?

Would brick IDs remain constant across add-brick/remove-brick operations? An easy way would probably be to just use the client xlator number as the brick ID, which would make it easy to map a brick to its client connection.

With several bricks in the same process all writing to one log file, could there be problems with interleaved messages?

Logrotate might also kick in sooner, causing us to lose debugging data if only a limited number of files are kept, since each file would now hold less log data per volume. The logrotate config options would need to be changed to retain more files.

Having all messages for the bricks of the same volume in a single file would definitely be helpful. Still thinking through logging all messages for all bricks in a single file. :)
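
If entries carry a brick-ID tag as described above, pulling one brick's messages out of the shared log is just a match on that tag. A small illustrative filter (equivalent to grepping for the brick ID; the "[brick-id]" prefix format is assumed, not defined anywhere yet):

/* Hypothetical filter: print only log lines carrying a given brick-ID
 * tag, e.g. "[testvol-client-2]". */
#include <stdio.h>
#include <string.h>

int
main (int argc, char *argv[])
{
        char line[8192];
        char tag[256];

        if (argc != 2) {
                fprintf (stderr, "usage: %s <brick-id> < bricks.log\n",
                         argv[0]);
                return 1;
        }

        snprintf (tag, sizeof (tag), "[%s]", argv[1]);

        while (fgets (line, sizeof (line), stdin))
                if (strstr (line, tag))
                        fputs (line, stdout);

        return 0;
}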

 

(b) is better. Considering centralized logging, log file redirection, etc., (a) becomes unnatural and unwieldy.



I like this idea. +1



--
- Atin (atinm)


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
