Re: Performance bug in c-r4k.c cache handling code

On Tue, Sep 20, 2005 at 10:09:20AM +0100, Dominic Sweetman wrote:

> > > I found a performance bug in c-r4k.c:r4k_dma_cache_inv, where a
> > > Hit_Writeback_Inv instead of Hit_Invalidate is done.
> 
> The MIPS64 spec (which is really all there is to set standards in this
> area) regards Hit_Invalidate as optional.  So it would be nice not to
> use it.  CPUs have no standard "configuration" register you can read
> to establish which cacheops work, so to identify capable CPUs you must
> use a table of CPU attributes indexed by the CPU ID, which encourages
> the crime of building software which can't possibly run on a new CPU...
> 
> So long as the buffer is in fact clean, then in most implementations a
> Hit_Writeback_Invalidate will be just as efficient.

These are R4700 numbers, the only ones I was able to find in a quick search.

  Hit_Invalidate_D		 7 cycles for a cache miss
				 9 cycles for a cache hit
  Hit_Writeback_Invalidate_D	 7 cycles for a cache miss
				12 cycles for a cache hit if the cache line
				   is clean.
				14 cycles for a cache hit if the cache line
				   is dirty (Writeback).
  Hit_Writeback_D		 7 cycles for a cache miss
				10 cycles for a cache hit if the cache line
				   is clean
				14 cycles for a cache hit if the cache line
				   is dirty (Writeback).

> Moreover, CPUs always "post" writes to some extent, so a small
> percentage of dirty lines can be handled without any great overhead.
> So a significant advantage can only occur when the buffer you want to
> invalidate (prior to DMA-in) was fairly recently densely written by
> the CPU; and this is only safe when all that data can be guaranteed to
> now be of no importance to anyone.

Linux has a well-defined ABI that DMA drivers are supposed to use; the
functions of this ABI that perform cache flushes also take a DMA
direction argument, from which the implementation can decide what the
best flush operation for a particular case is.
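
For illustration, here is a minimal sketch of what such a
direction-based choice could look like.  This is not the actual
arch/mips code; dcache_inv_range(), dcache_wback_range() and
dcache_wback_inv_range() are hypothetical helpers standing in for
line-by-line loops over Hit_Invalidate_D, Hit_Writeback_D and
Hit_Writeback_Inv_D respectively.

  #include <linux/dma-mapping.h>

  static void sync_for_dma(unsigned long addr, size_t size,
                           enum dma_data_direction dir)
  {
          switch (dir) {
          case DMA_FROM_DEVICE:
                  /* Device writes, CPU reads afterwards: the CPU's copy
                   * is stale anyway, so a plain invalidate is enough. */
                  dcache_inv_range(addr, addr + size);
                  break;
          case DMA_TO_DEVICE:
                  /* CPU data must reach memory before the device reads
                   * it; the lines may stay valid in the cache. */
                  dcache_wback_range(addr, addr + size);
                  break;
          default:        /* DMA_BIDIRECTIONAL */
                  dcache_wback_inv_range(addr, addr + size);
                  break;
          }
  }

A driver never calls anything like this directly; it just passes the
direction to the mapping functions, e.g. dma_map_single(dev, skb->data,
len, DMA_FROM_DEVICE) for a receive buffer, and the architecture code
picks the cache operation.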

> Randomly and retrospectively discarding writes could generate some
> very interesting bugs, or (indeed) usually hide some very interesting
> bugs.  It's the kind of thing one would like to avoid!
> 
> I suppose where DMA data subsequently gets decorated by the CPU then
> handed on to some other layer, then the buffer is freed...?
> 
> > FYI, for R4k DECstations the need to flush the cache for newly allocated 
> > skbs reduces throughput of FDDI reception by about a half (!), down from 
> > about 90Mbps (that's for the /260)...

Software coherency means many server / client type operations will
approximate the worst case, as none of the data will reside in the
caches.  Routers are going to be somewhat better off - as long as they
don't peek too deeply into the packets, that is.

> How did you measure the high throughput?  Have you got a
> machine with DMA-coherency you can turn on and off?

AFAIK, AMD Alchemy processors have configurable coherency.

  Ralf

