On 2/21/06, Manav Kataria <manav@xxxxxxxxxxxxx> wrote:
> Greetings,
>
> I want to profile the time taken by a particular function in an
> application. Until now I have been using gettimeofday() and computing the
> difference between the timestamps taken at the call of the function and at
> its return.
>
> I read in a forum that even though it reports a resolution of
> microseconds, this is misleading and the true resolution is only of the
> order of milliseconds. Is the microsecond part really unreliable (fake)?
> How else should I profile my code?
>
> Thanks in advance,
> MK

Try using these functions. They were provided by srinivas in an earlier
thread on this mailing list.

Declare the variables and take the starting timestamp just before the code
you want to measure:

    static unsigned int us = 0;           /* elapsed time in microseconds */
    static struct timeval start_tm, cur_tm;

    do_gettimeofday(&start_tm);

Then, after the code being measured, call do_gettimeofday() again and print
the elapsed time:

    do_gettimeofday(&cur_tm);
    us = (cur_tm.tv_sec * 1000000 + cur_tm.tv_usec) -
         (start_tm.tv_sec * 1000000 + start_tm.tv_usec) + 1;
    printk("%10u.%03u ms", us / 1000, us % 1000);

I hope this works out for you; you may also need to make some changes
depending on your exact requirements!

--
Cheers,
Sandeep

A man with one watch knows what time it is; a man with two watches is never
quite sure.

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/
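If the timing is needed in userspace (the original question mentions an
application rather than a kernel module), a minimal sketch along the
following lines could be used instead. It assumes a POSIX system with
clock_gettime(), and profiled_work() is only a placeholder for the function
being measured. CLOCK_MONOTONIC timestamps carry nanosecond granularity and
are not disturbed by wall-clock adjustments, which avoids the concern about
the reliability of gettimeofday()'s microsecond field.

    /* Minimal userspace sketch, assuming POSIX clock_gettime() is
     * available (link with -lrt on older glibc versions).
     * profiled_work() is a placeholder for the function being measured. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    static void profiled_work(void)
    {
        /* ... code to be profiled ... */
    }

    int main(void)
    {
        struct timespec start, end;
        long long ns;

        clock_gettime(CLOCK_MONOTONIC, &start);
        profiled_work();
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* Elapsed time in nanoseconds. */
        ns = (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
             + (end.tv_nsec - start.tv_nsec);

        /* Print as milliseconds with a fractional part. */
        printf("%lld.%06lld ms\n", ns / 1000000, ns % 1000000);
        return 0;
    }

The actual resolution still depends on the clock source the kernel uses;
clock_getres(CLOCK_MONOTONIC, ...) reports what the running system provides.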