Memory will most definitely be your problem. I think you could get away with a fairly low end processor (read: < 1 GHz), but you will need a lot of memory depending on how much you want to do.

I have a router in place that was running out of memory for the connection tracking subsystem. I ended up allocating 1 GB of RAM to just connection tracking. In fact you need 1 GB (or very close to it) to be able to track 65535 connections. You may think this is way overkill, but not really. Keep in mind that connections tend to hang around on average 10 minutes after they are closed, because not all systems out there close them correctly and thus they have to time out (10 minutes). You can get away with less RAM, but you need to watch your dmesg output to make sure that you don't see any issues with your connection tracking table filling up; it acts like a FIFO if memory serves. (There is a rough sketch of the relevant /proc knobs at the end of this message.)

If you are not doing much in the way of *VERY* *ADVANCED* firewalling, just basic source and/or destination IP validation and/or source and/or destination port validation will not need much of a processor. In fact I'd try it with a 500 MHz to 1 GHz system, whatever is the most economical that you can get your hands on. (See the iptables sketch below for the kind of rules I mean.)

Another problem that you may run into will be filling your ARP table. The kernel space ARP table is not very large at all, only something like 64 or maybe up to 255 IP/MAC pairs; I want to say it's closer to 64. Thus you may want to take a look at using the ARP Daemon for Linux to offload the ARP cache to, thus avoiding this issue. Basically how it works (from what I've read) is that you reduce the number of times the kernel queries its own ARP cache to 0, which causes the kernel to ask the user space daemon for the ARP data. The user space daemon does its own ARPing to make sure that it has the information to hand to the kernel. The main advantage of the user space daemon is that it can handle LOTS of ARP entries, well beyond 1024 (I think). (The in-kernel neighbour table limits are sketched below as well.)

Something else you might consider would be some managed switches, so that you could bond your connections out of the router to them, thus ensuring that a cable failure (disconnection) will not take the router down. If you plug everything into the managed switch and set up some VLANs, you can easily do everything that you are wanting to do over the bonded connections with VLANs on top. The VLAN interfaces in Linux look like just another network interface that you can do all the routing that you want off of. (There is a bonding/VLAN sketch below.)

If the client systems you are going to be firewalling for are business systems, I might suggest building two of these routers and setting them up as a VRRP pair to ensure that the router is always up and usable. This is much easier through managed switches too, as you don't have to cable as much to the physical routers. (A minimal keepalived example is at the end of this message.)

In short: get memory and a lower end proc to save the money for a 2nd identical router.
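Since I mentioned watching the conntrack table, here is a rough sketch of the knobs involved. This is from memory and the paths differ between kernels (older kernels call it ip_conntrack, newer netfilter calls it nf_conntrack), so check what your /proc actually exposes:

  # How many connections the kernel will track, and how many it is tracking now
  cat /proc/sys/net/ipv4/ip_conntrack_max
  wc -l /proc/net/ip_conntrack

  # Raise the limit (costs kernel memory, per the numbers above)
  echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max

  # This is where the "table full, dropping packet" complaints show up
  dmesg | grep -i conntrack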
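By "basic source and/or destination validation" I mean rules along these lines. This is only a sketch with made-up interface names and addresses, not a full rule set:

  # Let the internal subnet out through eth1, drop anything spoofed
  iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -j ACCEPT
  iptables -A FORWARD -i eth1 -j DROP

  # From the outside (eth0), allow replies plus one published service, drop the rest
  iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A FORWARD -i eth0 -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT
  iptables -A FORWARD -i eth0 -j DROP

Note that the state match is what pulls in connection tracking, which is where the memory goes.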
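Before going to the user space ARP daemon you can at least see where your kernel's limits sit; the neighbour (ARP) table thresholds are plain sysctls. A sketch, assuming you want to try raising them in-kernel first:

  # Current soft and hard limits on the neighbour cache
  sysctl net.ipv4.neigh.default.gc_thresh1
  sysctl net.ipv4.neigh.default.gc_thresh2
  sysctl net.ipv4.neigh.default.gc_thresh3

  # Raise them if you have more hosts on the wire than the defaults allow
  sysctl -w net.ipv4.neigh.default.gc_thresh1=512
  sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
  sysctl -w net.ipv4.neigh.default.gc_thresh3=4096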
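The bonding + VLAN setup looks roughly like this with the old ifenslave/vconfig tools. Interface names, VLAN IDs and addresses are placeholders, adjust to taste:

  # Bond two physical NICs so a single cable failure doesn't take the link down
  modprobe bonding mode=active-backup miimon=100
  ifconfig bond0 up
  ifenslave bond0 eth1 eth2

  # Stack VLAN interfaces on the bond, one per subnet, and route between them
  vconfig set_name_type DEV_PLUS_VID_NO_PAD
  vconfig add bond0 10
  vconfig add bond0 20
  ifconfig bond0.10 192.168.10.1 netmask 255.255.255.0 up
  ifconfig bond0.20 192.168.20.1 netmask 255.255.255.0 up

The matching VLANs have to be tagged on the switch ports the bond plugs into.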
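For the VRRP pair, keepalived is one common implementation on Linux (vrrpd is another). A minimal sketch of the master's keepalived.conf, again with made-up names and addresses; the second router gets state BACKUP and a lower priority, and the clients use the virtual address as their gateway:

  vrrp_instance LAN_GW {
      state MASTER
      interface bond0.10
      virtual_router_id 51
      priority 150
      advert_int 1
      virtual_ipaddress {
          192.168.10.254/24
      }
  }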
Grant. . . .

Mihai Vlad wrote:
> Hey guys,
>
> I am planning to buy some components for a Linux router that will handle the
> Internet access of 200 computers (includes tc shaping) and some inter
> sub-network routing (at least 100MBps per eth - and there are 3 eth cards).
>
> I was thinking of a:
> Pentium 4 - 3GHz
> 256 or 512MB RAM
> Network Cards.
>
> Now - I wonder what is more important: the processor speed or the amount of
> RAM.
>
> And can you point me to some good Network Cards that you have used and are
> not so expensive. Some Intel, etc. I have no clue about this subject...
>
> Maybe this discussion can be extended to a list of best practices to set up
> a performant Linux Router from the hardware point of view.
>
> Thanks in advance,
> Mihai

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc