Re: [PATCH v2 1/2] pwm: pwm-gpio: New driver

Hello,

thanks for reviewing this.

On Sun, Jan 17, 2021 at 02:04:34PM +0100, Uwe Kleine-König wrote:
> Maybe only add the line for the binding doc in the second patch?

> I would have called this struct pwm_gpio_ddata. Given that pwm_gpio is
> the common prefix of all variables and functions in this driver,
> pwm_gpio alone is a bit short.

I will change these as suggested.

> Usually you want to check the return value of gpiod_set_value. @Linus +
> Bartosz: What do you think, does it need checking for an error here?

I will wait until they reply.

> > +
> > +	if (!atomic_read(&pwm_gpio->enabled))
> > +		return HRTIMER_NORESTART;
> > +
> > +	if (pwm_gpio->state) {
> > +		ns = pwm_gpio->cur.toff_ns;
> > +		pwm_gpio->state = false;
> > +	} else {
> > +		if (spin_trylock(&pwm_gpio->lock)) {
> > +			pwm_gpio->cur = pwm_gpio->new;
> > +			spin_unlock(&pwm_gpio->lock);
> > +		}
> > +		ns = pwm_gpio->cur.ton_ns;
> > +		pwm_gpio->state = true;
> > +	}
> > +	pwm_gpio->output = pwm_gpio->state ^ pwm_gpio->cur.invert;
> > +
> > +	schedule_work(&pwm_gpio->work);
> > +	hrtimer_forward(handle, hrtimer_get_expires(handle), ns_to_ktime(ns));

> This is hard to understand. I think we need more comments explaining
> (e.g.) the semantics of the members of struct pwm_gpio.

Would it be OK if I added the following comment in the code?

	/*
	 * pwm_gpio_apply() writes the new settings to pwm_gpio->new,
	 * synchronized by the spinlock. At the beginning of every PWM
	 * cycle pwm_gpio->new is copied into pwm_gpio->cur, but only if
	 * the spinlock can be taken immediately (meaning the settings
	 * aren't being applied concurrently to the beginning of the
	 * period). By not waiting for the spinlock, no extra jitter is
	 * introduced into the PWM period.
	 */

> Does it make sense to use the workqueue only if the GPIO is a sleeping
> one and for fast GPIOs call gpiod_set_value directly?

I thought about this but didn't implement it as it seemed overkill to
me. The workqueue is needed anyway to cope with sleeping GPIOs, and it
can deal with fast GPIOs with insignificant degradation compared to a
direct implementation.
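That said, if you prefer the hybrid, I imagine the timer callback would
branch on a flag cached from gpiod_cansleep() at probe time. Just a
sketch; gpio_can_sleep would be a new field, not in the current patch:

	/* Sketch: gpio_can_sleep is a hypothetical bool in struct
	 * pwm_gpio, set at probe with gpiod_cansleep(pwm_gpio->gpio). */
	if (pwm_gpio->gpio_can_sleep)
		/* worker calls gpiod_set_value_cansleep() */
		schedule_work(&pwm_gpio->work);
	else
		gpiod_set_value(pwm_gpio->gpio, pwm_gpio->output);

The extra cost would be one branch per edge for fast GPIOs, in exchange
for skipping the workqueue round-trip.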

> > +static const struct pwm_ops pwm_gpio_ops = {
> > +	.free = pwm_gpio_free,

> A free without request seems wrong. The callback stops the PWM; this is
> wrong, the PWM should continue to toggle.


Would it be OK to remove the pwm_gpio_free callback altogether and move the cancel_work_sync() call to pwm_gpio_remove?

> Error message please if pwmchip_add fails

I will add this.

> > +}
> > +
> > +static int pwm_gpio_remove(struct platform_device *pdev)
> > +{
> > +	int ret;
> > +	struct pwm_gpio *pwm_gpio = platform_get_drvdata(pdev);
> > +
> > +	ret = pwmchip_remove(&pwm_gpio->chip);
> > +	if (ret)
> > +		return ret;

> This exit path is bad. The return value of the remove callback is
> ignored and after pwm_gpio_remove() returns the gpio and *pwm_gpio are
> freed.

> > +
> > +	hrtimer_cancel(&pwm_gpio->timer);
> > +
> > +	return 0;
> > +}

Would it be OK to cancel the timer first and then "return pwmchip_remove(...)"?
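Concretely, I have something like this in mind. Just a sketch, also
folding in the cancel_work_sync() move from the pwm_gpio_free
discussion above:

	static int pwm_gpio_remove(struct platform_device *pdev)
	{
		struct pwm_gpio *pwm_gpio = platform_get_drvdata(pdev);

		/* Stop generating edges before tearing down the chip. */
		hrtimer_cancel(&pwm_gpio->timer);
		cancel_work_sync(&pwm_gpio->work);

		return pwmchip_remove(&pwm_gpio->chip);
	}

That drops the ret variable and keeps the error from pwmchip_remove()
visible to the caller.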

Best regards,
Nicola
