Re: Is offloading to GPU a worthwhile feature?

Konstantin Ryabitsev <konstantin@xxxxxxxxxxxxxxxxxxx> writes:
> On 04/08/18 09:59, Jakub Narebski wrote:

>>> This is an entirely idle pondering kind of question, but I wanted to
>>> ask. I recently discovered that some edge providers are starting to
>>> offer systems with GPU cards in them -- primarily for clients that need
>>> to provide streaming video content, I guess. As someone who needs to run
>>> a distributed network of edge nodes for a fairly popular git server, I
>>> wondered if git could at all benefit from utilizing a GPU card for
>>> something like delta calculations or compression offload, or if benefits
>>> would be negligible.
>> 
>> The problem is that you need to transfer the data from the main memory
>> (host memory), geared towards low latency thanks to the cache hierarchy,
>> to the GPU memory (device memory), geared towards bandwidth and parallel
>> access, and back again.  So to make sense, the time for copying the data
>> plus the time to perform the calculations on the GPU (and not all kinds
>> of computations can be sped up on a GPU -- you need a fine-grained,
>> massively data-parallel task) must be less than the time to perform the
>> calculations on the CPU (with multi-threading).
>
> Would something like this be well-suited for tasks like routine fsck,
> repacking and bitmap generation? That's the kind of workloads I was
> imagining it would be most well-suited for.

All of those, I think, would need to use some graph algorithms.  While
there are ready-made graph libraries for GPUs (like nVidia's nvGRAPH),
graphs are irregular structures, not that well suited to the SIMD type of
parallelism that GPUs are best at.

I also wonder whether the amount of memory on the GPU would be enough
(and if not, whether it would be possible to perform the calculations in
batches).
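To make the tradeoff concrete, here is a rough back-of-the-envelope model
(a sketch, not a benchmark of any real card -- all the throughput numbers
below are made-up assumptions) of when offloading pays off, and of how many
passes a batched computation would need:

```python
# Rough model: offloading pays off only when the PCIe transfer time plus
# the GPU compute time beats the time to do the work on the CPU.
# All rates below are illustrative assumptions, not measurements.

def offload_worth_it(data_bytes,
                     cpu_gbps=1.0,     # assumed CPU processing rate, GB/s
                     gpu_gbps=10.0,    # assumed GPU processing rate, GB/s
                     pcie_gbps=12.0):  # assumed PCIe transfer rate, GB/s
    """True if (copy in + GPU compute + copy out) beats CPU compute."""
    gb = data_bytes / 1e9
    t_cpu = gb / cpu_gbps
    t_gpu = 2 * gb / pcie_gbps + gb / gpu_gbps  # copy in, compute, copy out
    return t_gpu < t_cpu

def batches_needed(data_bytes, gpu_mem_bytes):
    """How many passes if the working set exceeds GPU memory."""
    return -(-data_bytes // gpu_mem_bytes)  # ceiling division

# With these made-up rates, a 1 GB working set is worth offloading:
print(offload_worth_it(10**9))                   # True
# ...but not if the GPU is no faster than the CPU -- the copies then
# only add overhead:
print(offload_worth_it(10**9, gpu_gbps=1.0))     # False
# A 40 GB packfile against an 8 GB card would need 5 batches:
print(batches_needed(40 * 10**9, 8 * 10**9))     # 5
```

The point of the sketch is that the copy term is paid twice, so a GPU that
is only marginally faster than the CPU loses; and batching adds a transfer
per pass, making the memory question above matter even more.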

>> Also, you would need to keep the non-GPU and GPGPU code in sync.  Some
>> parts of the code do not change much; and there are also solutions for
>> generating dual code from one source.
>> 
>> Still, it might be a good idea,
>
> I'm still totally the wrong person to be implementing this, but I do
> have access to Packet.net's edge systems which carry powerful GPUs for
> projects that might be needing these for video streaming services. It
> seems a shame to have them sitting idle if I can offload some of the
> RAM- and CPU-hungry tasks like repacking to be running there.

Happily, GPGPU programming (mainly in CUDA C, which limits its use to
nVidia hardware) is one of my areas of interest...

Best regards,
--
Jakub Narębski



