> > What do people think, would it be worth me writing a patch?
>
> I don't think we should make things more complicated than necessary.
>
> My VDR has over 30 timers and I don't have any problems with
> excessive latency.

I just did a (very quick and hacky) comparison - I added two std::maps to
cSchedule:

  std::map<u_int16_t,cEvent*> eventIDMap;
  std::map<time_t,cEvent*> eventStartTimeMap;

By modifying cSchedule::AddEvent and cSchedule::GetEvent as described below,
I managed to cut the most CPU-intensive function's execution time to roughly
an 80th of the original (from 83.79% of CPU time down to 1.12%)!

   %   cumulative    self              self     total
  time    seconds   seconds    calls  s/call   s/call

vdr_orig.profile:
 83.79     51.93     51.93    70436     0.00     0.00  cSchedule::GetEvent(unsigned short, long) const

vdr_mod.profile:
  1.12      7.98      0.11    70436     0.00     0.00  cSchedule::GetEvent(unsigned short, long) const

I know it adds a little complexity to things, but with such an improvement
in lookup speed, maybe it's worth looking at replacing the lists with a
tree?

Chris

---

cEvent *cSchedule::AddEvent(cEvent *Event)
{
  events.Add(Event);
  eventIDMap[Event->EventID()] = Event;
  eventStartTimeMap[Event->StartTime()] = Event;
  return Event;
}

const cEvent *cSchedule::GetEvent(u_int16_t EventID, time_t StartTime) const
{
  std::map<u_int16_t,cEvent*>::const_iterator it = eventIDMap.find(EventID);
  if (it != eventIDMap.end())
     return it->second;

  std::map<time_t,cEvent*>::const_iterator it2 = eventStartTimeMap.find(StartTime);
  if (it2 != eventStartTimeMap.end())
     return it2->second;

  return NULL;
}
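
P.S. The snippet above only covers adding and looking up events. If events
can also be removed from a schedule, the maps would have to be kept in sync
at that point as well. A rough sketch of what that could look like - the
function name DelEvent and the events.Del() call are just placeholders for
whatever the real removal path in cSchedule is:

void cSchedule::DelEvent(cEvent *Event)
{
  if (Event) {
     // Only erase a map slot if it still points at this event, since a
     // later AddEvent() with the same key would have overwritten it.
     std::map<u_int16_t,cEvent*>::iterator it = eventIDMap.find(Event->EventID());
     if (it != eventIDMap.end() && it->second == Event)
        eventIDMap.erase(it);

     std::map<time_t,cEvent*>::iterator it2 = eventStartTimeMap.find(Event->StartTime());
     if (it2 != eventStartTimeMap.end() && it2->second == Event)
        eventStartTimeMap.erase(it2);

     events.Del(Event); // placeholder for however the list actually drops the event
     }
}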