Re: os that rather uses the gpu?

On Sat, Jul 17, 2010 at 1:08 PM, Les <hlhowell@xxxxxxxxxxx> wrote:

> But unfortunately, Robert, networks are inherently low bandwidth.  To
> achieve full throughput you still need parallelism in the networks
> themselves.  I think from your description, you are discussing fluid
> models, which are generally decomposed into finite series over limited
> areas for computation with established boundary conditions.  This
> partitioning permits the distributed processing approach, but suffers at
> the boundaries, where the boundary-crossing phenomena are either
> discounted, simulated, or, I guess, passed via some coding algorithm to
> permit recursive reduction for some number of iterations.
>
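
Concretely, here is a toy sketch of that partitioning in Python with
NumPy: a 1-D diffusion grid split into two subdomains, each padded with
a one-cell "halo" that is refreshed from its neighbor before every
step.  All names are illustrative, and a real solver would use MPI
ranks rather than two arrays in one process.

import numpy as np

N = 16                           # total interior cells
u = np.linspace(0.0, 1.0, N)     # initial condition

# Partition into two halves, each with one ghost cell at the shared edge.
left = np.empty(N // 2 + 1)      # [interior ..., ghost]
right = np.empty(N // 2 + 1)     # [ghost, ... interior]
left[:-1] = u[:N // 2]
right[1:] = u[N // 2:]

def step(block):
    # One explicit Jacobi diffusion step on the interior of a block;
    # the first and last entries (physical boundary or ghost cell) are
    # only read, never written.
    new = block.copy()
    new[1:-1] += 0.25 * (block[:-2] - 2.0 * block[1:-1] + block[2:])
    return new

for _ in range(100):
    # Boundary exchange: each block's ghost cell takes the neighbor's
    # edge value.  This is exactly the traffic that crosses the
    # partition boundary in a distributed run.
    left[-1] = right[1]
    right[0] = left[-2]
    left, right = step(left), step(right)

# Stitch the interiors back together.
result = np.concatenate([left[:-1], right[1:]])
print(result)
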
> What your field, and most others related to fluid dynamics, such as
> plasma studies, explosion studies, and so on, needs is full wide-band
> memory access across thousands of processors (millions, perhaps?).
>
> I don't pretend to understand all the implications of the various
> computational requirements of your field, or that of neuroprocessors,
> which is another area where massive parallelism requires deep memory
> access.  My own problem area is limited to data spaces of only a few
> gigabytes, and serial processing is generally capable of dealing with
> it, although not in real time.
>
> There are a number of neat processing ideas that apply to specific
> classes of parallel problems, such as CAPP, SIMD, MIMD arrays, and
> neural networks.  Whether these solutions will match your problem
> likely depends a great deal on your view of the problem.  As in most
> endeavors in life, our vision of the solution is, as you say of
> others here, limited by our view of the problem.
>
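
As a toy contrast between two of those models, in Python for
illustration only: "SIMD-style" work applies one operation across a
whole array in lockstep, while "MIMD-style" work gives each of several
workers its own independent instruction stream.

import numpy as np
from multiprocessing import Pool

data = np.arange(1_000_000, dtype=np.float64)

# SIMD-flavored: the same multiply-add is applied to every element at
# once (NumPy dispatches to vectorized machine code underneath).
simd_result = data * 2.0 + 1.0

def worker(chunk):
    # Each worker could branch and run entirely different code here;
    # that independence is what distinguishes MIMD from SIMD.
    return chunk.sum()

if __name__ == "__main__":
    # MIMD-flavored: four processes, four instruction streams.
    with Pool(4) as pool:
        partials = pool.map(worker, np.array_split(data, 4))
    print(simd_result[:3], sum(partials))
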
> Very few people outside the realm of computation analysis ever deal
> with the choices of algorithms, architecture, real throughput,
> processing limitations, bandwidth limitations, data access and
> distribution problems, and so on.  Fewer still deal with massive
> quantities of data.  Search engines deal with some of these issues;
> companies like Google deal with all kinds of distributed problems
> across data spaces that dwarf the efforts of even most sciences.
> Some algorithms, as you point out, have limitations of the Multiple
> Instruction, Multiple Data (MIMD) sort, which place great demands on
> memory bandwidth and processor speeds, as well as on interprocess
> communication.
>
> But saying that a particular architecture is unfit for an application
> means that you have to understand both the application and the
> architecture.  These are both difficult today, as the architectures
> are changing about every 11 months, or maybe less right now.
> Computation via interferometry, for example, is one of the old (new)
> fields where a known capability is only now becoming practical to
> explore.  Optical computing, 3D shading, broadcast 3D, and other
> immersion technologies add new requirements to the GPGPUs being
> discussed.  Motion coupled with 3D means that the shaders and other
> elements need greater processing power and greater throughput.  Their
> architecture is undergoing severe redesign.  Even microcontrollers
> are expanding their reach via multiple processors (the Parallax
> Propeller chip, for example).
>
> I am a test applications consultant.  My trade forces me to
> continuously update my skills and try to keep up with the multitude
> of new architectures.  I have almost no free time, as researching new
> architectures, designing new algorithms, understanding the application
> of algorithms to new tests, and hardware design requirements eat up
> many hours every day.  Fortunately I am, and always have been, a NERD,
> and proud of it.  I work at it.
>
> Since you have a deep knowledge of your requirements, perhaps you
> should put some time into thinking of a design methodology other than
> the ones I have mentioned or those that you know, in an attempt to
> improve the science.  I am sure many others would be very
> appreciative, and quite likely supportive.
>

Most of my discussion of this issue has been on the usenet forum
comp.arch.  I asked the community there for input on what would be
most useful, and, as a result of lengthy discussion, I started a
Google Groups mailing list and at least a placeholder for a wiki.

So far, most of the discussion has stayed on comp.arch, but I'm hoping
to attract a wider audience than just computer architects or people
with a special interest in computer architecture.

My announcement, as posted to comp.arch, follows:


On Jul 23, 4:35 pm, Robert Myers <rbmyers...@xxxxxxxxx> wrote:
> Thanks for all those who contributed to my RFC on high-bandwidth computing.
>
> I have set up a mailing list/discussion forum on google groups:
>
> http://groups.google.com/group/high-bandwidth-computing
>
> Andy Glew has been kind enough to offer a home for a wiki:
>
> http://hbc.comp-arch.net/
>
> That link currently redirects to a wiki, which so far contains little
> but my blather.  I hope this is a low point for the SNR.
>
> Those who already have a Gmail address can log in to Google Groups
> using that ID and password.
>
> Very early on, one contributor asked the reasonable question: why a
> separate group?
>
> I want to be able to attract contributors who might not naturally
> contribute to comp.arch without wearing out the patience of the
> comp.arch regulars.
>
> I've forwarded the existing thread to the Google Groups list by hand.
> If you want to continue to post here, please cc the high-bandwidth
> computing group.
>
> I'm new to all this.  Suggestions are encouraged.  Thanks to all who
> have contributed so far.  I hope the discussion will continue.
>

Some recent threads on comp.arch have pursued the subject of
non-sequential, high-bandwidth access to memory very aggressively.
Those who are interested in why GPUs aren't necessarily God's answer
to the needs of HPC might want to have a look.
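
To make "non-sequential access" concrete, here is a crude Python/NumPy
sketch contrasting a sequential sweep over an array with a gather
through a random permutation of its indices.  The numbers will vary by
machine, and this is only an illustration, not a rigorous benchmark.

import time
import numpy as np

n = 20_000_000
a = np.ones(n)
perm = np.random.permutation(n)   # a worst-case access pattern

t0 = time.perf_counter()
s_seq = a.sum()                   # sequential: streams through memory,
t1 = time.perf_counter()          # and hardware prefetch helps

s_rand = a[perm].sum()            # gather: one cache-unfriendly random
t2 = time.perf_counter()          # access per element

print(f"sums equal: {s_seq == s_rand}")
print(f"sequential: {t1 - t0:.3f}s   random gather: {t2 - t1:.3f}s")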

Robert.

