Re: [libgpiod][PATCH] treewide: rework struct gpiod_line_bulk

On Fri, Oct 23, 2020 at 11:28:31AM +0200, Bartosz Golaszewski wrote:
> From: Bartosz Golaszewski <bgolaszewski@xxxxxxxxxxxx>
> 

[snip]

> @@ -74,6 +74,106 @@ struct gpiod_chip {
>  	char label[32];
>  };
>  
> +/*
> + * The structure is defined in a way that allows internal users to allocate
> + * bulk objects that hold a single line on the stack - that way we can reuse
> + * a lot of code between functions that take single lines and those that take
> + * bulks as arguments while not unnecessarily allocating memory dynamically.
> + */
> +struct gpiod_line_bulk {
> +	struct gpiod_chip *owner;
> +	unsigned int num_lines;
> +	unsigned int max_lines;
> +	struct gpiod_line *lines[1];
> +};
> +

owner is only used to check that added lines are on the same chip.
Just compare with lines[0]->chip in gpiod_line_bulk_add_line()
instead?

Also, line_bulk_same_chip() is now redundant as lines can only be added
to the bulk via gpiod_line_bulk_add_line() which performs the check,
so remove it and all uses of it throughout.
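A minimal sketch of that suggestion, using simplified stand-in types (the
real libgpiod struct and field names differ, and the flexible lines[] array
is fixed-size here for brevity): the bulk drops its owner field and the add
function compares the incoming line's chip against lines[0]->chip:

```c
#include <errno.h>
#include <stddef.h>

/* Stand-ins for the real libgpiod internals. */
struct chip { int fd; };
struct line { struct chip *chip; };

#define BULK_MAX 4

struct line_bulk {
	unsigned int num_lines;
	struct line *lines[BULK_MAX];
};

/*
 * Same-chip check done against lines[0]->chip rather than a separate
 * owner field; the first line added defines the owning chip.
 */
int bulk_add_line(struct line_bulk *bulk, struct line *line)
{
	if (bulk->num_lines >= BULK_MAX) {
		errno = EINVAL;
		return -1;
	}

	if (bulk->num_lines && bulk->lines[0]->chip != line->chip) {
		errno = EINVAL;
		return -1;
	}

	bulk->lines[bulk->num_lines++] = line;
	return 0;
}
```

With the check centralized here, a separate line_bulk_same_chip() pass over
the whole bulk indeed has nothing left to catch.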

[snip]
>  
> +struct gpiod_line_bulk_iter {
> +	struct gpiod_line_bulk *bulk;
> +	unsigned int num_lines;
> +	unsigned int index;
> +};
> +
>  static int dir_filter(const struct dirent *dir)
>  {
>  	return !strncmp(dir->d_name, "gpiochip", 8);
> @@ -169,3 +175,30 @@ struct gpiod_line *gpiod_line_iter_next(struct gpiod_line_iter *iter)
>  	return iter->offset < (iter->num_lines)
>  					? iter->lines[iter->offset++] : NULL;
>  }
> +
> +struct gpiod_line_bulk_iter *
> +gpiod_line_bulk_iter_new(struct gpiod_line_bulk *bulk)
> +{
> +	struct gpiod_line_bulk_iter *iter;
> +
> +	iter = malloc(sizeof(*iter));
> +	if (!iter)
> +		return NULL;
> +
> +	iter->bulk = bulk;
> +	iter->index = 0;
> +	iter->num_lines = gpiod_line_bulk_num_lines(bulk);
> +
> +	return iter;
> +}
> +
> +void gpiod_line_bulk_iter_free(struct gpiod_line_bulk_iter *iter)
> +{
> +	free(iter);
> +}
> +
> +struct gpiod_line *gpiod_line_bulk_iter_next(struct gpiod_line_bulk_iter *iter)
> +{
> +	return iter->index < iter->num_lines ?
> +		gpiod_line_bulk_get_line(iter->bulk, iter->index++) : NULL;
> +}

I question the value of the struct gpiod_line_bulk_iter, and also
struct gpiod_line_iter.
They seem worse than the user performing a for-loop over the
bulk indices or chip offsets and getting each line themselves. They
add a malloc overhead, in the case of gpiod_line_iter both a malloc and
a calloc, as well as the corresponding frees.

What benefit do they provide?

Similarly gpiod_line_bulk_foreach_line().
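For comparison, the allocation-free loop the user can already write, sketched
with stand-in types that mirror the bulk accessors from this patch (the real
gpiod_line_bulk_num_lines()/gpiod_line_bulk_get_line() signatures are
assumed to behave as in the quoted diff):

```c
#include <stddef.h>

/* Stand-ins for the real libgpiod types and accessors. */
struct line { unsigned int offset; };

struct line_bulk {
	unsigned int num_lines;
	struct line *lines[8];
};

static unsigned int bulk_num_lines(struct line_bulk *bulk)
{
	return bulk->num_lines;
}

static struct line *bulk_get_line(struct line_bulk *bulk, unsigned int idx)
{
	return idx < bulk->num_lines ? bulk->lines[idx] : NULL;
}

/* The plain for-loop that makes a heap-allocated iterator unnecessary. */
unsigned int sum_offsets(struct line_bulk *bulk)
{
	unsigned int i, sum = 0;

	for (i = 0; i < bulk_num_lines(bulk); i++)
		sum += bulk_get_line(bulk, i)->offset;

	return sum;
}
```

No malloc, no free, and the failure mode of a NULL iterator disappears
entirely.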

And I'm not sure about the utility of the struct gpiod_chip_iter either,
as it bails if opening any of the chips fails.  There are several reasons
that could occur, e.g. permissions or unplugging, so you might want to
leave it to the user to decide what to do.

So I would prefer an iter that provides prospective chip names, or just
a scandir() wrapper that returns an array of names.

Wrt rethinking the libgpiod API for v2, I'd like to either reduce the
API to a minimal function set, for example stripping out the iters, or
at least identify the minimal set that everything else is built on, and
then look at how to rework those to expose the uAPI v2 features.
e.g. given lines in a bulk now have to all be on the same chip, and given
uAPI v2 supports per-line configs even within a bulk, the whole line and
bulk lifecycle, as well as their relationship to the chip and the line
request, can be rethought.

I'd also like to see ALL libgpiod types become opaque.

But that is getting way outside the scope of this patch...

Cheers,
Kent.


