Hello GCC Community,

I recently ran an experiment testing the impact of different GCC optimization levels on the performance of some code: <https://github.com/leechwort/levenberg-maquardt-example>. I observed that higher optimization levels did not necessarily result in faster execution. Is this expected, or have I made a mistake somewhere? Are there other parameters I should be using to control optimization?

My Makefile was as follows:

CC=gcc
CFLAGS_COMMON=-I.
LDLAGS=-lm
DEPS = levmarq.h
OBJ = main.o levmarq.o

%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS) $(LDLAGS)

main: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LDLAGS)

# No optimization
main_no_opt: CFLAGS += -O0
main_no_opt: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LDLAGS)

# Basic optimization
main_opt1: CFLAGS += -O1
main_opt1: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LDLAGS)

# Moderate optimization
main_opt2: CFLAGS += -O2
main_opt2: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LDLAGS)

# High optimization
main_opt3: CFLAGS += -O3
main_opt3: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LDLAGS)

# Clean rule
clean:
	rm -f *.o main main_no_opt main_opt1 main_opt2 main_opt3

Best regards,
Aran
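
P.S. In case it is relevant, this is roughly how I built and timed each variant. It is only a sketch rather than my exact commands; it assumes a POSIX-compatible shell and that each optimization level is rebuilt from a clean tree, and the binary names simply match the Makefile targets above:

# Build and time each optimization level, cleaning first so the object
# files are actually recompiled with the new flags for every variant.
for level in no_opt opt1 opt2 opt3; do
    make clean
    make "main_${level}"
    echo "Timing main_${level}:"
    time "./main_${level}"
done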