Re: [PATCH] log: support 256 colors with --graph=256colors

Nguyễn Thái Ngọc Duy  <pclouds@xxxxxxxxx> writes:

> diff --git a/graph.c b/graph.c
> index d4e8519..75375a1 100644
> --- a/graph.c
> +++ b/graph.c
> @@ -78,6 +78,7 @@ static void graph_show_line_prefix(const struct diff_options *diffopt)
>  
>  static const char **column_colors;
>  static unsigned short column_colors_max;
> +static int column_colors_step;
>  
>  void graph_set_column_colors(const char **colors, unsigned short colors_max)
>  {
> @@ -234,10 +235,24 @@ void graph_setup_line_prefix(struct diff_options *diffopt)
>  }
>  
>  
> -struct git_graph *graph_init(struct rev_info *opt)
> +struct git_graph *graph_init_with_options(struct rev_info *opt, const char *arg)
>  {
>  	struct git_graph *graph = xmalloc(sizeof(struct git_graph));
>  
> +	if (arg && !strcmp(arg, "256colors")) {
> +		int i, start = 17, stop = 232;
> +		column_colors_max = stop - start;
> +		column_colors =
> +			xmalloc((column_colors_max + 1) * sizeof(*column_colors));
> +		for (i = start; i < stop; i++) {
> +			struct strbuf sb = STRBUF_INIT;
> +			strbuf_addf(&sb, "\033[38;5;%dm", i);
> +			column_colors[i - start] = strbuf_detach(&sb, NULL);
> +		}
> +		column_colors[column_colors_max] = xstrdup(GIT_COLOR_RESET);
> +		/* ignore the closest 16 colors on either side for the next line */
> +		column_colors_step = 16;
> +	}

So you pre-fill a table of colors with 232-17=215 slots.  Is the
idea that 215 is coprime with column_colors_step, which is set to
16, so that going over the table with wraparound will cover all of
its elements?
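
If that is the intent, a quick standalone check (using the same
numbers as the patch; nothing here is meant as a replacement for
the actual code) does confirm that a fixed step of 16 visits every
one of the 215 slots before wrapping around, since gcd(16, 215) == 1:

        #include <stdio.h>

        int main(void)
        {
                int slots = 232 - 17;   /* 215, as in the patch */
                int step = 16;          /* column_colors_step */
                int seen[215] = { 0 };
                int i, cur = 0, visited = 0;

                for (i = 0; i < slots; i++) {
                        if (!seen[cur])
                                visited++;
                        seen[cur] = 1;
                        cur = (cur + step) % slots;
                }
                printf("visited %d of %d slots\n", visited, slots);
                return 0;
        }

Of course, once a random offset is added on top of the fixed step,
that coverage argument no longer applies as-is, hence the question.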

> @@ -382,6 +397,20 @@ static unsigned short graph_get_current_column_color(const struct git_graph *gra
>   */
>  static void graph_increment_column_color(struct git_graph *graph)
>  {
> +	if (column_colors_step) {
> +		static int random_initialized;
> +		int v;
> +
> +		if (!random_initialized) {
> +			srand((unsigned int)getpid());
> +			random_initialized = 1;
> +		}
> +		v = rand() % (column_colors_max - column_colors_step * 2);
> +		graph->default_column_color += column_colors_step + v;
> +		graph->default_column_color %= column_colors_max;
> +		return;
> +	}
> +
>  	graph->default_column_color = (graph->default_column_color + 1) %
>  		column_colors_max;
>  }

This is too ugly to live as-is for two reasons.

 - Do you really need rand()?  Doesn't this frustrate somebody who
   runs the same "git log" in two terminals in order to view an
   overly tall graph, expecting both commands that were started with
   the same set of arguments to paint the same line in the same
   color?  

 - Even if you needed rand(), you should be able to factor out the
   computation of v so that the function does not need an early
   return that hints at totally different processing for the two
   codepaths.

        static void graph_increment_column_color(struct git_graph *g)
        {
                int next;

                if (! 256-color in use) {
                        next = 1;
                } else {
                        do whatever to compute your v;
                        next = v + column_colors_step;
                }
                g->default_column_color =
                        (g->default_column_color + next) % column_colors_max;
        }

   Or something like that, perhaps?
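
   To make it concrete, here is a rough, compilable sketch along
   those lines, reusing the column_colors_step/column_colors_max
   statics and graph->default_column_color from your patch; the
   deterministic stride below is only a stand-in for whatever ends
   up replacing rand(), not a suggestion for the final value:

        static void graph_increment_column_color(struct git_graph *graph)
        {
                int next = 1;

                if (column_colors_step) {
                        /*
                         * Placeholder for the real computation of v;
                         * anything deterministic would do here.
                         */
                        int v = graph->default_column_color %
                                (column_colors_max - column_colors_step * 2);
                        next = column_colors_step + v;
                }
                graph->default_column_color =
                        (graph->default_column_color + next) % column_colors_max;
        }

   That keeps a single exit and a single place where the color is
   actually advanced, whether or not the 256-color mode is in use.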



