avr32 optimization

On the avr32 target.

Where does the speed difference come from between the two code
fragments?

struct foo buf[2];

static inline void rx_int(uint8_t ch){
	LED_On(DEBUG_LED);
	//access to buf[ch]
	LED_Off(DEBUG_LED);
}

__attribute__((__interrupt__)) static void can0_int_rx_handler(void){
	rx_int(0);
}

__attribute__((__interrupt__)) static void can1_int_rx_handler(void){
	rx_int(1);
}


-----

struct foo buf[2];

__attribute__((__interrupt__)) static void can0_int_rx_handler(void){
	LED_On(DEBUG_LED);
	//access to buf[0]
	LED_Off(DEBUG_LED);
}

__attribute__((__interrupt__)) static void can1_int_rx_handler(void){
	LED_On(DEBUG_LED);
	//access to buf[1]
	LED_Off(DEBUG_LED);
}

In my case the second version runs 7% faster.
Shouldn't GCC, in the first case, be able to optimize the buf[ch]
accesses into direct accesses?

just curious,
Max Schneider.
