Re: mutual exclusion locks over PCI memory

On Fri, Feb 20, 2009 at 07:10:24AM -0700, Matthew Wilcox wrote:
> On Fri, Feb 20, 2009 at 07:09:51PM +0530, arun c wrote:
> > PCI host machine (PPC cpu) writes commands to
> > the PCI memory space of the (Coldfire CPU)
> > target device. Target device takes the command
> > and executes it.
> > 
> > Target devices SDRAM is exposed over PCI to host.
> > A circular buffer residing on target memory is used
> > for command exchange.
> > 
> > I should not allow host and target to play on the
> > buffer simultaneously in order to avoid corruption.
> > 
> > Does anybody know how to implement a lock
> > suitable for this issue?
> > 
> > or any lock less algorithm exists for communication
> > over PCI?
> 
> I would have thought that a standard head and tail lockless queue would
> be perfect for your application.  Expressed in C:
> 
> struct queue {
> 	unsigned head;
> 	unsigned entries[QUEUE_SIZE];
> 	unsigned tail;
> };
> 
> int queue_size(struct queue __iomem *q)
> {
> 	int size = readl(&q->head) - readl(&q->tail);

First, it would be a lot more efficient if "head" and "tail" were
small indexes into the command array rather than pointers.
Then a single "queue_index = readl(&q)" could fetch both and
eliminate one MMIO read (each costs roughly 2000-3000 CPU cycles).
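For example, something like this (untested sketch; the layout and
names are made up, not from Matthew's code):

```c
#include <stdint.h>

/* Hypothetical register layout: pack the two 16-bit ring indexes
 * into one 32-bit word, so a single readl() across PCI fetches
 * both head and tail at once. */
struct queue_regs {
	uint32_t indexes;	/* bits 15:0 = head, bits 31:16 = tail */
};

static inline uint16_t q_head(uint32_t indexes)
{
	return indexes & 0xffff;
}

static inline uint16_t q_tail(uint32_t indexes)
{
	return indexes >> 16;
}
```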

However, one of those two indexes should be read-only and the other
write-only for the host. The target is the consumer when the host
pushes a new command, and the host is the consumer when the target
completes one.  Completions do not have to be in the same cacheline,
or even the same queue, if there is some other way to associate a
completion with an issued command (Hint: "tags").
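Roughly like this (untested sketch; all the field and struct names
here are invented for illustration):

```c
#include <stdint.h>

#define QUEUE_SIZE 16

/* Hypothetical split-queue layout: the host writes submission
 * entries, the target writes completion records somewhere else
 * entirely, and a "tag" carried in each entry ties a completion
 * back to its command. */
struct sq_entry { uint16_t tag; uint16_t opcode; uint32_t arg; };
struct cq_entry { uint16_t tag; uint16_t status; };

/* Host-side table of outstanding commands, indexed by tag. */
struct cmd_slot { uint32_t arg; int done; };
static struct cmd_slot slots[QUEUE_SIZE];

static void host_submit(struct sq_entry *sqe, uint16_t tag, uint32_t arg)
{
	slots[tag].arg = arg;
	slots[tag].done = 0;
	sqe->tag = tag;
	sqe->opcode = 1;	/* made-up opcode */
	sqe->arg = arg;
}

static void host_complete(const struct cq_entry *cqe)
{
	/* Look up by tag, so completions can arrive in any order. */
	slots[cqe->tag].done = 1;
}
```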

Plenty of drivers do the same thing today, but they DMA the
completion status into host memory and the device reads the command
queue from host memory. See how GigE networking drivers handle TX
and RX descriptors for examples.
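The descriptor idea looks something like this (untested sketch;
assumed layout, not copied from any real driver):

```c
#include <stdint.h>

/* NIC-style descriptor: the device DMA-writes a DONE bit into the
 * descriptor in host memory, so the host polls plain RAM instead
 * of issuing expensive MMIO reads across PCI. */
#define DESC_DONE 0x0001

struct rx_desc {
	uint64_t buf_addr;	/* DMA address of the data buffer */
	uint16_t len;		/* filled in by the device */
	uint16_t status;	/* device ORs in DESC_DONE when finished */
};

static int desc_ready(const struct rx_desc *d)
{
	return d->status & DESC_DONE;	/* cheap read from host RAM */
}
```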

hth,
grant

> 	if (size < 0)
> 		size += QUEUE_SIZE;
> 	return size;
> }
> 
> int queue_empty(struct queue __iomem *q)
> {
> 	return readl(&q->head) == readl(&q->tail);
> }
> 
> int queue_push(struct queue __iomem *q, unsigned item)
> {
> 	unsigned head = readl(&q->head);
> 	unsigned next = (head + 1) % QUEUE_SIZE;
> 	if (next == readl(&q->tail))
> 		return FAIL;
> 	writel(item, &q->entries[head]);	/* writel is (value, addr) */
> 	writel(next, &q->head);
> 	return 0;
> }
> 
> Something like that anyway ... I obviously haven't tested or even
> compiled it.
> 
> -- 
> Matthew Wilcox				Intel Open Source Technology Centre
> "Bill, look, we understand that you're interested in selling us this
> operating system, but compare it to ours.  We can't possibly take such
> a retrograde step."
> --
> To unsubscribe from this list: send the line "unsubscribe linux-pci" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
