Re: os that rather uses the gpu?

On Sat, Jul 17, 2010 at 1:08 PM, Les <hlhowell@xxxxxxxxxxx> wrote:

But unfortunately, Robert, networks are inherently low bandwidth.  To
achieve full throughput you still need parallelism in the networks
themselves.  I think from your description, you are discussing fluid
models, which are generally decomposed into finite subdomains with
established boundary conditions for computation.  This partitioning
permits the distributed-processing approach, but suffers at the
boundaries, where the boundary-crossing phenomena are either
discounted, or simulated, or, I guess, passed via some coding
algorithm to permit recursive reduction for some number of iterations.

What your field, and most others related to fluid dynamics, such as
plasma studies, explosion studies, and so on, needs is full wide-band
memory access across thousands of processors (millions, perhaps?).

<big snip of insightful comments> 
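The partitioning Les describes can be sketched in a few lines.  This is a
deliberately serial toy in plain NumPy (no MPI); the ghost-cell copy below
stands in for the network traffic at subdomain boundaries, and the function
and its parameters are my own invention for illustration:

```python
import numpy as np

def diffuse_partitioned(u, n_parts, steps, alpha=0.25):
    """Advance 1D explicit diffusion by splitting the grid into
    n_parts subdomains, each padded with one ghost cell per side.
    Ghost cells are refilled from neighbours before every step --
    this copy is the 'boundary crossing' traffic described above."""
    parts = np.array_split(u, n_parts)
    for _ in range(steps):
        # 1. Exchange boundary values into ghost cells (a plain copy
        #    here; on a cluster this is the interconnect traffic).
        padded = []
        for i, p in enumerate(parts):
            left = parts[i - 1][-1] if i > 0 else p[0]
            right = parts[i + 1][0] if i < n_parts - 1 else p[-1]
            padded.append(np.concatenate(([left], p, [right])))
        # 2. Each subdomain then updates independently -- this is
        #    the part that parallelises.
        parts = [p[1:-1] + alpha * (p[:-2] - 2 * p[1:-1] + p[2:])
                 for p in padded]
    return np.concatenate(parts)
```

Because the ghost cells are refreshed every step, the partitioned result is
bit-identical to the unpartitioned one; skip or approximate that exchange
and you get exactly the boundary errors the paragraph above warns about.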
 

       I am a test applications consultant.  My trade forces me to
continuously update my skills and try to keep up with the multitude of
new architectures.  I have almost no free time: researching new
architectures, designing new algorithms, understanding the application
of algorithms to new tests, and tracking hardware design requirements
eat up many hours every day.  Fortunately I am, and always have been,
a NERD and proud of it.  I work at it.

Since you have a deep knowledge of your requirements, perhaps you should
put some time into thinking of a design methodology other than those I
have mentioned or those that you know, in an attempt to improve the
science.  I am sure many others would be very appreciative, and quite
likely supportive.

I'm suspicious of the claimed narrowness of your credentials, because you have gotten so many things right. ;-)

Someone who *really* knows what he's talking about in this problem area might be inclined to say, "This guy Myers is an idiot, because what he apparently wants to do was shown not to be possible decades ago."

Nature, not hardware budgets, dictates the range of scales and the volume of data that must be dealt with, and you are correct (if perhaps on the low side) about the scale of resources required for a head-on assault on some really important problems: hurricane prediction, for example, or meaningful climate prediction.  There is no conceivable way such an assault could ever be mounted with any hardware that anyone has ever imagined, so far as I know.
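To put a rough number on that scale of resources, here is a back-of-envelope
estimate for one storm-scale scenario.  Every figure in it (domain size, grid
spacing, variable count, traffic factor, pacing) is an assumption I have
chosen for illustration, not a measurement of any real model:

```python
# Back-of-envelope, with assumed but plausible numbers: a storm-scale
# atmosphere covering 2000 km x 2000 km x 20 km at 100 m grid spacing.
nx = 2_000_000 // 100                  # 20,000 cells east-west
ny = 2_000_000 // 100                  # 20,000 cells north-south
nz = 20_000 // 100                     # 200 vertical levels
cells = nx * ny * nz                   # 8e10 grid cells
state_bytes = cells * 10 * 8           # 10 variables, float64
# One explicit step reads and writes the state a handful of times:
traffic_per_step = 5 * state_bytes
steps_per_second = 10                  # pacing needed to outrun real time
aggregate_bw = traffic_per_step * steps_per_second
print(f"state:     {state_bytes / 1e12:.1f} TB")
print(f"bandwidth: {aggregate_bw / 1e12:.0f} TB/s aggregate")
```

On these assumed numbers the state alone is several terabytes and the
aggregate memory traffic is hundreds of terabytes per second, which is why
the memory system, not the arithmetic, is the wall.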

What *can* be done, and what no one seems interested in doing, is to explore how the effects I have described (computational artifacts resulting from accommodating the problem to the available hardware, as you apparently understand) exhibit themselves at attainable ratios of scales.  Every lab director knows that, important though such knowledge might be, its potential importance is so hard to describe to a layman that significant funding will never be forthcoming.  Even worse, you might carry out an expensive and insightful exploration only to discover nothing that can be applied to "real world" modeling (a null result).
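As a concrete illustration of such an artifact (my own toy construction, not
any particular production code): advect a pulse once around a periodic domain
with the classic first-order upwind scheme and measure how much of its peak
the grid erases.  The exact solution returns the pulse unchanged, so any loss
is purely a computational artifact, and its size depends on the ratio of the
pulse width to the grid spacing:

```python
import numpy as np

def advect_upwind(n, cfl=0.5):
    """Advect a Gaussian pulse once around a periodic unit domain
    (unit speed) with first-order upwind, and return the fraction of
    its peak that survives.  The decay is numerical diffusion -- an
    artifact of the grid, since the exact solution is unchanged."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.exp(-200.0 * (x - 0.5) ** 2)
    peak0 = u.max()
    steps = int(round(n / cfl))        # each step travels cfl * dx
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))
    return u.max() / peak0
```

On a coarse grid markedly less of the peak survives a single transit than on
one eight times finer; refining the grid shrinks the artifact but never
removes it, and the interesting question is how such errors behave at the
scale ratios we can actually afford.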

When you sink a lot of money into a huge new particle accelerator, there is always the possibility that you will discover nothing new that is interesting.  What's different is that huge particle accelerators capture the public imagination in a way that the grubby details of computational physics, no matter how fundamental, never will.  A bigger particle accelerator, if nothing else, allows you to estimate new bounds for as yet undetected phenomena.  The bigger computers we keep building only push the problem I have described more deeply into the mud.

I have talked to computer architects about what is conceivably possible.  What I need to do right now is computation that will let me show people what I'm talking about, rather than asking them to imagine it, however clear it all is to me.

A closely related question (how important are collective, nonlocal, and nonlinear phenomena in neurobiology, and how much global bandwidth do they imply?) may eventually push the computational frontier that I see being ignored.

Robert.
-- 
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines