Re: Is offloading to GPU a worthwhile feature?


 



Hello,

Konstantin Ryabitsev <konstantin@xxxxxxxxxxxxxxxxxxx> writes:

> This is an entirely idle pondering kind of question, but I wanted to
> ask. I recently discovered that some edge providers are starting to
> offer systems with GPU cards in them -- primarily for clients that need
> to provide streaming video content, I guess. As someone who needs to run
> a distributed network of edge nodes for a fairly popular git server, I
> wondered if git could at all benefit from utilizing a GPU card for
> something like delta calculations or compression offload, or if benefits
> would be negligible.
>
> I realize this would be silly amounts of work. But, if it's worth it,
> perhaps we can benefit from all the GPU computation libs written for
> cryptocoin mining and use them for something good. :)

The problem is that you need to transfer the data from main memory
(host memory), which is geared towards low latency thanks to the cache
hierarchy, to GPU memory (device memory), which is geared towards
bandwidth and parallel access, and back again.  So, for offloading to
make sense, the time to copy the data plus the time to perform the
calculations on the GPU (and not all kinds of computations can be sped
up on a GPU -- you need a fine-grained, massively data-parallel task)
must be less than the time to perform the calculations on the CPU
(with multi-threading).
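That break-even condition can be sketched with a back-of-envelope
model (the function name and the bandwidth/timing figures below are
illustrative assumptions, not measurements of any real setup):

```python
# Back-of-envelope break-even model for GPU offload.
# All numbers are illustrative; real figures depend on the bus,
# the GPU, and how data-parallel the workload actually is.

def offload_wins(bytes_moved, bus_gbps, gpu_seconds, cpu_seconds):
    """Return True if transfer time plus GPU compute time beats
    the multi-threaded CPU compute time."""
    # The data crosses the bus twice: host -> device and device -> host.
    transfer_seconds = 2 * bytes_moved / (bus_gbps * 1e9)
    return transfer_seconds + gpu_seconds < cpu_seconds

# Example: 1 GiB of pack data over a ~12 GB/s PCIe link.
data = 1 << 30
print(offload_wins(data, 12, gpu_seconds=0.5, cpu_seconds=2.0))  # True
print(offload_wins(data, 12, gpu_seconds=0.5, cpu_seconds=0.6))  # False
```

With these numbers the two transfers alone cost about 0.18 s, so the
GPU only wins when the CPU-side computation is substantially slower --
which is exactly why only massively data-parallel tasks qualify.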

Also, you would need to keep the non-GPU and GPGPU code in sync.  Some
parts of the code do not change much; and there are also solutions for
generating both versions from a single source.

Still, it might be a good idea,
-- 
Jakub Narębski




