I have a brand-new PowerEdge 2900 with 10 SAS drives configured as two
arrays via the built-in PERC 5 RAID controller:
RAID 1: 2x73GB
RAID 10: 8x300GB
It's got 4GB of RAM, and it's intended to be an NFS filestore.
Here's the strange part: logging in with ssh works great. I get a
prompt, and all seems well. But as soon as I run a simple command like
'top' or 'yum -y install <package>', my xterm/ssh session just locks.
In some cases it draws half of the top screen before hanging; in other
cases it doesn't even get that far. If I kill the xterm window, bring a
new one up, and log right back in, the same thing repeats.
What's interesting to me is that I have all kinds of other 'lesser'
systems running CentOS 4.4, and I have none of these issues with them.
My ~1.1TB RAID 10 array is sliced into 4 partitions, the biggest being
about 950GB. As near as I can figure, I haven't hit any size limits,
but I'm stumped by something that I *think* is either relatively
trivial or an outright hardware incompatibility. One thought is that it
could be related to the Gb ethernet devices (bge).
Commands like 'ifconfig -a' work great, but 'dmesg | grep eth0' locks
up the session.
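For what it's worth, here's the sort of test I'm thinking of trying
next, on the theory that full-size frames are getting eaten somewhere
on the GigE path (small packets like the shell prompt get through, big
ones like a screen of 'top' output don't). The client hostname is a
placeholder, and I'm only guessing that offload settings are involved;
this just prints the commands rather than running them:

```shell
# Placeholder for my workstation on the other end of the ssh session.
CLIENT="client.example.com"

# 1500-byte ethernet MTU minus 28 bytes of IP+ICMP headers = 1472-byte
# payload; with fragmentation prohibited (-M do), this should stall if
# full-size frames are being dropped on the link.
PAYLOAD=$((1500 - 28))
echo "ping -c 3 -M do -s $PAYLOAD $CLIENT"

# TCP segmentation offload has been blamed for similar hangs on some
# Broadcom NICs; ethtool can show the current offload settings and
# disable TSO to test.
echo "ethtool -k eth0"
echo "ethtool -K eth0 tso off"
```

If the big ping stalls while a plain ping works, that would point at an
MTU/frame-size problem rather than anything on the disk side.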
This is pretty frustrating. Googling doesn't turn up any real results,
and I can't find anything relevant in the logs.
One more relevant bit to add: this behavior does not occur at the console.
Peter
--
Peter Serwe <peter at infostreet dot com>
http://www.infostreet.com
"The only true sports are bullfighting, mountain climbing and auto racing." -Ernest Hemingway
"Because everything else requires only one ball." -Unknown
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos