Re: Cache strategy

On Tue, Jun 16, 2009 at 11:38 AM, <jcupitt@xxxxxxxxx> wrote:
> 2009/6/16 Øyvind Kolås <islewind@xxxxxxxxx>:
>> On Tue, Jun 16, 2009 at 11:11 AM, <jcupitt@xxxxxxxxx> wrote:
>>>>> Another thing worth mentioning is that caches on every node don't
>>>>> scale well to concurrent evaluation of the graph, since the evaluators
>>>>> would constantly need to synchronize their usage of the caches,
>>>>> preventing performance from scaling nicely as you use more CPU cores/CPUs.
>>>>
>>>> In most instances, this would only incur synchronization of a few
>>>> tiles where the chunks/work regions overlap. Unless you are stupid
>>>> and compute with chunk-size ~= tile-size, the impact of this should
>>>> be mostly negligible.
>>>
>>> You would still need a lock on the cache, wouldn't you? For example,
>>> if the cache is held as a GHashTable of tiles, even if individual
>>> tiles are disjoint and not shared, you'll still need to lock the hash
>>> table before you can search it. A couple of locks on every tile on
>>> every node will hurt SMP scaling badly.
>>
>> You do not need to make this a global hashtable; the way it is done
>> with GeglBuffers, it would end up being one hashtable per node that
>> has a cache, not one cache for all nodes. If I understood your concern
>> correctly, it was about having a single global lock that everything
>> contends on.
>
> I'm probably misunderstanding. I was worried that there would be a
> hash of tiles on each node, and that this hash would be accessed by
> more than one thread.
>
> If you have that, then even if tiles are disjoint, you'll need a
> lock/unlock pair around each hash table access, won't you? Otherwise,
> how can you add or remove tiles safely?

You do need a lock/unlock pair around tile accesses for GeglBuffers
in general if buffers are to be shared between processes (as mentioned
earlier, caches are normal buffers allocated on demand for the nodes).
One of the GSoC projects this year is investigating GPU-based storage
for GeglBuffers; this will involve extending the semantics of tile
locking to cover not only writing but also reading, since tile data
will need to migrate to and from GPU-based texture storage.
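
As a rough sketch of what I mean (the struct and function names here
are hypothetical, not GEGL's actual API), a per-tile lock could be
extended so that locking for read also gets a chance to migrate tile
data back from GPU texture storage before the caller looks at the
pixels:

#include <glib.h>

typedef enum
{
  TILE_ON_CPU,
  TILE_ON_GPU
} TileLocation;

typedef struct
{
  GMutex       *mutex;
  TileLocation  location;
  guchar       *data;        /* CPU-side pixels, valid when TILE_ON_CPU */
  guint         gpu_texture; /* GPU handle, valid when TILE_ON_GPU */
} Tile;

static void
tile_download_from_gpu (Tile *tile)
{
  /* placeholder: copy the texture contents back into tile->data */
  tile->location = TILE_ON_CPU;
}

/* Lock a tile for reading; if the current copy lives on the GPU,
   migrate it back before the caller touches tile->data. */
static void
tile_lock_read (Tile *tile)
{
  g_mutex_lock (tile->mutex);
  if (tile->location == TILE_ON_GPU)
    tile_download_from_gpu (tile);
}

/* Lock a tile for writing; the CPU copy becomes authoritative, so
   any GPU copy is stale until it is re-uploaded. */
static void
tile_lock_write (Tile *tile)
{
  g_mutex_lock (tile->mutex);
  if (tile->location == TILE_ON_GPU)
    tile_download_from_gpu (tile);
}

static void
tile_unlock (Tile *tile)
{
  g_mutex_unlock (tile->mutex);
}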

I expect locking mutexes for such accesses, perhaps even over batches
of tiles when reading/writing a rectangle of pixels, to hurt
performance only if the parallelization is implemented so that the
threads block each other's accesses. This should be avoidable.
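
One way to keep threads from serializing on a per-node cache (again a
hypothetical sketch, not actual GEGL code) is to take the cache lock
once per rectangle, collect the tiles covering it, and release the
lock before the pixel work starts:

#include <glib.h>

typedef struct
{
  GMutex     *mutex;  /* guards the tiles hash table */
  GHashTable *tiles;  /* tile index -> tile pointer  */
} NodeCache;

/* Collect every cached tile covering the rectangle while holding the
   lock once, then release it; the caller processes the returned tiles
   without touching the hash table again. */
static GPtrArray *
node_cache_get_tiles (NodeCache *cache,
                      gint x0, gint y0, gint x1, gint y1)
{
  GPtrArray *batch = g_ptr_array_new ();
  gint x, y;

  g_mutex_lock (cache->mutex);
  for (y = y0; y <= y1; y++)
    for (x = x0; x <= x1; x++)
      {
        gpointer key  = GINT_TO_POINTER (y * 65536 + x); /* toy tile index */
        gpointer tile = g_hash_table_lookup (cache->tiles, key);

        if (tile)
          g_ptr_array_add (batch, tile);
      }
  g_mutex_unlock (cache->mutex);

  return batch;
}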

GeglBuffer used to have (and still has) code that allows access from
multiple processes; in that case the processes have separate tile
caches, but they still need atomic accesses to ensure that the
buffer/tile revisions are up to date. Personally, I find it more
interesting to extend GEGL in the direction of multiple processes, or
even multiple hosts, processing simultaneously than to take on the
programmer overhead of synchronizing threads.
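
For illustration, a revision check of that kind could look roughly
like this with GLib atomics (the structures here are made up for the
example):

#include <glib.h>

typedef struct
{
  gint revision;  /* bumped atomically on every write to the tile;
                     lives in storage shared between the processes */
} SharedTileHeader;

typedef struct
{
  gint    cached_revision;  /* revision our local copy was made from */
  guchar *local_copy;
} LocalTile;

static void
refetch_tile (LocalTile *local, SharedTileHeader *shared)
{
  /* placeholder: copy the shared pixel data into local->local_copy */
  local->cached_revision = g_atomic_int_get (&shared->revision);
}

/* Return up-to-date pixel data, refetching if another process has
   bumped the shared revision since we last looked. */
static guchar *
local_tile_get_data (LocalTile *local, SharedTileHeader *shared)
{
  if (g_atomic_int_get (&shared->revision) != local->cached_revision)
    refetch_tile (local, shared);

  return local->local_copy;
}

A real implementation would also have to guard against the shared
revision changing while the pixel data is being copied.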
-- 
«The future is already here. It's just not very evenly distributed»
                                                 -- William Gibson
http://pippin.gimp.org/                            http://ffii.org/