Jeff Garzik wrote:
> > +int ata_host_add_ports_pinfo_ar(struct ata_host *host,
> > +				const struct ata_port_info **pinfo_ar,
> > +				int n_ports)
> > +{
> > +	int rc;
> > +
> > +	rc = ata_host_add_ports(host, pinfo_ar[0]->sht, n_ports);
> > +	if (rc == 0)
> > +		__ata_host_init_pinfo(host, pinfo_ar, n_ports, 1);
> > +	return rc;
> > +}
> Just implement a single version for each case (array and non-array).  The
> LLDD can pass in an indication of whether or not to copy the first
> element into succeeding elements.
Okay.
> > +static void __ata_host_free_irqs(struct ata_host *host, void **markerp)
> > +{
> > +	struct ata_irq *airq, *tmp;
> > +
> > +	list_for_each_entry_safe(airq, tmp, &host->irq_list, node) {
> > +		if (!markerp || airq->marker == *markerp) {
> > +			list_del(&airq->node);
> > +			free_irq(airq->irq, airq->dev_id);
> > +			kfree(airq);
> > +		}
> > +	}
> > +}
> Ugh, I just don't like this at all.  I would much rather have a hook or
> entry point where the LLDD is given the capability to call request_irq()
> itself, in some exotic situations.  Then, helpers provided by libata can
> handle the common cases.  That's much more modular than the above.
What I was trying to do was to make libata keep track of allocated
resources, including IRQ handlers.  While implementing this patch, I
found several bugs where resources were either never freed or freed
twice in the init-failure or detach paths.
With iomap, multiple IRQ handlers and different IRQ modes, resource
management is getting more complex.  Bugs in acquiring resources are
easy to catch, but bugs in releasing them are not.  So, I wanted to push
all the 'free's into libata helpers such that LLDs can call one or a few
libata free helpers and be done with it - or, more importantly, so the
free code becomes boilerplate shared across all or most LLDs.
That's where the above IRQ handler list comes from.  The marker is there
to let each libata core module (pci, legacy, etc...) deallocate only its
own resources, while remaining completely hidden from LLDs.
I'd like to know your opinion on this.
Thanks.
--
tejun