On Monday 05 February 2007 11:16, Jarek Poplawski wrote:
> On 01-02-2007 12:30, Andi Kleen wrote:
> > Simon Lodal <simonl@xxxxxxxxxx> writes:
> >> Memory is generally not an issue, but CPU is, and you cannot beat the
> >> CPU efficiency of a plain array lookup (always faster, and constant time).
>
> Probably for some old (or embedded) lean boxes used as
> small network routers, with memory-hungry iptables,
> memory could be an issue.

Sure, but if they are that constrained, they probably do not run HTB in
the first place. We are talking about 4k initially, up to 256k in the
worst case (or 512k if your router is 64-bit, which is unlikely if
"small" is a priority).

> > And the worst-case memory consumption considered by Patrick should
> > be relatively unlikely.
>
> Anyway, an approach that assumes most users do things
> this (reasonable) way doesn't look like good
> programming practice.

The current hash algorithm also assumes certain usage patterns, namely
that you choose classids that generate different hash keys (i.e.
distribute uniformly across the buckets), or scalability will suffer
very quickly. Even at 64 classes you would probably see htb_find() near
the top of a profiling analysis.

But I would say that is just as unlikely as choosing 64 classids that
cause my patch to allocate all 256k.

In these unlikely cases, my patch only wastes passive memory, while the
current HTB wastes CPU to a point where it can severely limit routing
performance.

> I wonder, why not try, at least for a while, to make this
> a compile (menuconfig) option with a comment:
> recommended for a large number of classes. After hash
> optimization and some testing, final decisions could be
> made.

I decided not to do that because it would mean too many ifdefs
(ugly, unmaintainable code).

Regards
Simon
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc