Re: newer HP G4 and G5 servers and centos :-) [VERY OFF TOPIC]




On Fri, May 25, 2007 at 10:36:21AM +1200, MrKiwi said:
> Walt Reed wrote:
> <snip>
> >Any memory problem I've seen has shown up within the
> >first month. It helps that we burn them in for a month before putting
> >them into production. I run some drive / memory exercising utilities
> >during this time that pound on the server pretty hard. Compiling the
> >Linux kernel over and over again also seems to be a good test :-)
> >
> <snip>
> >Ditto. Have about 30 385's that have been pretty solid, but the G5's are
> >faster. It matters when you are doing jobs that take months to run...
> >:-)
> 
> Walt - If you don't mind/are allowed to, can you tell us 
> what your servers do? The reason I ask is this:
> 
> It seems to me that many users make decisions based on what 
> they read on these lists, in mags, in (*cringe*) Gartner 
> reports, etc., but I think we often miss the fact that many of 
> the 'data points' come from squeaky wheels or completely 
> irrelevant demos and 'studies'.
> 
> I always try to weight current opinions based on the 
> author's experience, zen-ness, etc., *not* volume and clever 
> nouns. I also weight heavily people with large karma, like 
> (on this list) John, Mark, Karanbir, Jim, etc. - you *know* 
> they would have looked at issues like outdated firmware 
> before commenting, so if they give something a bad review 
> you can be fairly sure it is deserved.
> 
> So to get to the point - if you are doing tasks that take 
> months, either you are doing it wrong, or it is a *real* job 
> that requires a *real* OS on *great* hardware to get it done 
> - I suspect the latter, which means your data 
> points/comments are much more relevant than most.
> 
> So ... spill the beans :)

Sure... The big beastie is GIS data. We pre-render the entire US at 15
zoom levels into fairly large "meta" tiles with multiple layers,
hand-tweak the 100 largest cities, then combine layers and split into
smaller tiles. The result ends up a lot like Google Maps, except with
demographic data instead of point data. Every time Navtek releases new
map data with updated roads, etc., we start the process over again.

http://www.dataplace.org
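
The "split into smaller tiles" step is conceptually simple; a toy
version in Python looks roughly like the sketch below (the tile and
meta-tile sizes and the output naming are invented for illustration;
the real pipeline also has to track georeferencing and the layer
compositing mentioned above):

    # Toy sketch only -- sizes and the naming scheme are made up.
    import os
    from PIL import Image

    TILE = 256      # standard web map tile size
    META = 2048     # pretend meta tiles are 2048x2048, i.e. 8x8 web tiles

    def split_meta_tile(path, zoom, meta_x, meta_y):
        os.makedirs("tiles/%d" % zoom, exist_ok=True)
        im = Image.open(path)
        per_side = META // TILE
        for dx in range(per_side):
            for dy in range(per_side):
                box = (dx * TILE, dy * TILE, (dx + 1) * TILE, (dy + 1) * TILE)
                im.crop(box).save("tiles/%d/%d_%d.png"
                                  % (zoom, meta_x * per_side + dx,
                                     meta_y * per_side + dy))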

Besides the pre-rendering, we have a large pile of servers that handle
real-time rendering because there is no realistic way we could
pre-render 1500+ indicators for the entire US at all zoom levels. Plus
we allow people to upload their own datasets.
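
To put rough numbers on "no realistic way": a standard 256px tile
pyramid down to zoom 15 is on the order of a billion tiles per layer
worldwide, so even a US-sized slice of that multiplied by 1500+
indicators blows up fast. Back-of-the-envelope:

    # Assumes a standard 256px "Google-style" pyramid, zooms 0..15.
    tiles_per_layer_world = sum(4 ** z for z in range(16))
    print(tiles_per_layer_world)                 # ~1.43 billion per layer
    # Even if the US footprint were only ~2% of the world's tiles,
    # 1500 indicators would still mean tens of billions of tiles,
    # hence render-on-demand.
    print(int(tiles_per_layer_world * 0.02) * 1500)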

We tried blade servers too, but AT&T threw a hissy fit about the power
/ heat load and would only let us use 40% of our rack space, so we went
back to traditional servers. (Plus I hated the design of the power
distribution on the p-class enclosures. Talk about stupid.)

An interesting thing about this project is that it is built almost
entirely on open source technology, with the exception of the
pre-rendering, which uses ESRI because its map quality is much better
than that of open source tools such as MapServer. We use MapServer for
the real-time rendering.
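
For anyone curious, the real-time side is basically MapServer driven
from Python via mapscript, conceptually something like the sketch below
(the mapfile path and layer name are invented, it is not our production
code, and the exact mapscript calls may differ a bit by version):

    # Hypothetical sketch: render one web tile through MapServer's
    # Python mapscript bindings.
    import mapscript

    def render_tile(mapfile, layer_name, bbox, out_path, size=256):
        m = mapscript.mapObj(mapfile)      # load the mapfile
        m.setSize(size, size)              # one web tile
        m.setExtent(*bbox)                 # (minx, miny, maxx, maxy)
        # enable only the requested indicator layer
        for i in range(m.numlayers):
            layer = m.getLayer(i)
            layer.status = (mapscript.MS_ON if layer.name == layer_name
                            else mapscript.MS_OFF)
        m.draw().save(out_path)

    render_tile("dataplace.map", "median_income",
                (-125.0, 24.0, -66.0, 50.0), "tile.png")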

Besides dataplace, we have a pile of other, more traditional web sites
that we host. By the time you add up all the development, staging,
database, etc. servers, it's a lot of equipment (all HP servers, plus
the EMC and Cisco gear) and a huge amount of data.
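
P.S. On the burn-in bit quoted at the top: the kernel-compile part
really is just a rebuild loop. Something like this (hypothetical path
and count, not our actual scripts; the memory/drive exercisers are
separate tools):

    # Rough sketch of a compile-loop burn-in; assumes a configured
    # kernel source tree at KERNEL_SRC (path is made up).
    import os
    import subprocess

    KERNEL_SRC = "/usr/src/linux"
    JOBS = str(os.cpu_count() or 2)

    for _ in range(100):      # tune the count to taste
        subprocess.run(["make", "clean"], cwd=KERNEL_SRC, check=True)
        subprocess.run(["make", "-j", JOBS], cwd=KERNEL_SRC, check=True)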


