> i tested gigabit eth and 100 mbit ethernet cards on some systems and i
> noticed that the bottleneck was between the CPU and the disk. Simply,
> you can see it yourself by timing a copy or an http get/write with the
> time utility. I remember there are a lot of articles on the net
> regarding system bottlenecks. You can use google..:-)

Last time I said "with the help of a 64-bit 1 Gb network card and CAT5e
cable (I think 350 Mbps)". This is wrong. The maximum the network part
could reach is 1000/8 = 125 MB/s; minus about 1 MB/s of overhead, that
is 124 MB/s, not 350 Mbps. Compared with 266 MB/s on the system bus
(66 MHz) and 48-96 MB/s on the hard disks (7200 rpm, 8.5 ms average
seek time, in RAID-0), one can see that the disk is the bottleneck in a
data-intensive application, such as a stock database.
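To see the disk number for yourself, in the spirit of the "time a copy
yourself" advice above, here is a minimal sketch of a sequential-read
timer. The device name is only an example (reading a device node needs
root), a regular file should be much bigger than RAM or the page cache
will inflate the result, and 'hdparm -t', mentioned below, is the usual
shortcut:

/* readspeed.c - rough sequential read throughput (a sketch only).
   build: gcc -O2 -o readspeed readspeed.c
   usage: ./readspeed /dev/hda        (example device; needs root)
      or: ./readspeed /some/big/file  (file should be >> RAM size) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define CHUNK (1024 * 1024)            /* 1 MB per read() call */
#define LIMIT (256.0 * 1024 * 1024)    /* stop after 256 MB */

static char buf[CHUNK];

int main(int argc, char **argv)
{
    struct timeval t0, t1;
    double bytes = 0, secs;
    ssize_t n;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror(argv[1]);
        return 1;
    }
    gettimeofday(&t0, NULL);
    while (bytes < LIMIT && (n = read(fd, buf, CHUNK)) > 0)
        bytes += n;                    /* count what actually came back */
    gettimeofday(&t1, NULL);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("read %.0f MB in %.2f s -> %.1f MB/s\n",
           bytes / (1024 * 1024), secs,
           bytes / (1024 * 1024) / secs);
    close(fd);
    return 0;
}

Remember to turn DMA on first ('hdparm -d1 /dev/hda'), as Chris notes
below, and compare the result against 'hdparm -t'; they should roughly
agree.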
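And for a ballpark DRAM figure (Chris mentions below that he does not
know a tool that measures it directly), one can time a big memcpy().
This is only a sketch, assuming about 128 MB of free RAM; it ignores
cache and write-allocation effects, so treat the result as an order of
magnitude to set against the 700-1,400 MB/sec PCMark numbers quoted
below:

/* memspeed.c - very rough dram bandwidth guess (a sketch only).
   build: gcc -O2 -o memspeed memspeed.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define BUFSZ (64 * 1024 * 1024)   /* 64 MB buffers, well past cpu cache */
#define LOOPS 8

int main(void)
{
    char *src = malloc(BUFSZ), *dst = malloc(BUFSZ);
    struct timeval t0, t1;
    double secs, mb;
    int i;

    if (!src || !dst) {
        perror("malloc");
        return 1;
    }
    memset(src, 1, BUFSZ);   /* touch every page so they are really mapped */
    memset(dst, 0, BUFSZ);

    gettimeofday(&t0, NULL);
    for (i = 0; i < LOOPS; i++)
        memcpy(dst, src, BUFSZ);
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    mb = 2.0 * LOOPS * BUFSZ / (1024.0 * 1024.0); /* read + write per copy */
    printf("moved %.0f MB in %.2f s -> %.0f MB/s\n", mb, secs, mb / secs);
    return 0;
}

A real memory benchmark such as STREAM does this much more carefully.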
> Also, clustering servers doesn't give your systems more performance.
> It protects you against a single point of failure, i.e., it provides
> fault tolerance.
>
> Hope this helps..
> Ilker G.
>
> Hong Hsu wrote:
>
> > Chris, many thanks for your help AND for pointing out my typo.
> >
> > Actually I am thinking of adding power to my system: either using
> > Linux's SMP with dual Intel Pentium 4 processors on a single system
> > bus, OR clustering two single-processor Linux machines. With the
> > help of a 64-bit 1 Gb network card and CAT5e cable (I think
> > 350 Mbps), the network part doesn't seem to be a bottleneck. But I
> > am not sure which approach has better performance.
> >
> > Thanks again,
> > -Hong
> >
> > "Christopher P Wright" wrote:
> >
>>>I have general questions regarding a typical Linux system's speed
>>>and wonder whether this is the right place to ask them. If it is
>>>not, could someone point out which group I should post to?
>>>
>>probably not the best place, but i know of nowhere else.
>>
>>>With the following typical components on a motherboard:
>>>   512 MB 10K RPM DRAM,
>>>   Intel 850 chipset with 64-bit data bus width and 400 MHz data rate,
>>>   32-bit/64-bit PCI 2.10 bus (33 MHz/66 MHz),
>>>   20 GB hard drive
>>>
>>i think the 10k rpm goes with the harddrive, as dram doesn't rotate =)
>>
>>>Is the Intel 850's 400 MHz data rate fully used or not on a 66 MHz
>>>bus? Since the speeds of these components are given as rates, I am
>>>wondering how one can roughly evaluate the system's speed in MB/sec,
>>>so that one can see a potential bottleneck or a trend of improvement.
>>>On Windows, using the PCMark2002 benchmark software from
>>>MadOnion.com, one can see 20-70 MB/sec on the HD and 700-1,400 MB/sec
>>>on DRAM. Is there a way to measure the actual system bus speed under
>>>Linux, or to benchmark the DRAM and the HD?
>>>
>>the 400MHz is the dram clock. 33MHz is standard pci, with 66MHz on
>>faster (often 64-bit) slots. obviously, the 400MHz would not be used
>>fully from a 66MHz feed. this is typical (memory is faster than pci
>>cards, etc).
>>
>>to test HD performance one can use 'hdparm -t -T': -t benchmarks
>>buffered disk reads, and -T benchmarks cached reads (sort of memory
>>bandwidth maybe?). i'm not sure of a tool to measure dram bandwidth
>>directly, but i'm sure they exist somewhere. be sure to enable dma on
>>the harddrive before you benchmark it, or the speeds will be worse
>>('hdparm -d1 [device]').
>>
>>>In addition, if I would like to add a 1 Gb network interface card for
>>>clustering two identical machines, should I add a 32-bit NIC or a
>>>64-bit NIC in terms of performance, and why?
>>>
>>a 64-bit card would generally be able to transfer data to/from the
>>system twice as fast as a 32-bit card, simply because it transfers
>>twice as many bits per cycle. i think 64-bit pci may also be clocked
>>higher (the 66mhz) than the 32-bit (33mhz), which would put the
>>theoretical peak around 533 MB/s versus 133 MB/s, but i'm foggy on pci
>>specs in that regard; someone else probably knows far more in depth.
>>
>>ttyl
>>chris

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/