> Yes, I get this warning too. But what I want to understand is why do
> we get the size of c as 8 when it should be 4, because the linker will
> find that there are multiple declarations of c and will keep the
> strong one, which is the declaration with int.

As Andrew said, sizeof(c) is evaluated by the compiler, and the linker
has nothing to do with it.

> Please see slides 22, 23, 24 from here:
> https://www.cs.cmu.edu/afs/cs/academic/class/15213-f10/www/lectures/11-linking.pdf

Those slides conflate the notion of "weak" with the notion of "common".
The example you gave, and the ones on those slides, are really
"tentative definitions", not weak symbols. Tentative definitions are a
relic from K&R C, and are implemented with the same object-file and
linker concepts that are used for Fortran common blocks: a common block
may be defined in multiple files, and the largest declaration wins,
unless there is an actual non-common symbol (as in f2.c in your
example, or a BLOCK DATA subprogram in Fortran), which overrides the
common declarations. (A sketch of this is at the end of this message.)

Weak symbols are something else entirely. They were originally designed
to solve the ANSI/POSIX namespace problem, where, for example, fopen()
needed to be implemented in terms of open(), but ANSI and POSIX rules
did not allow the implementation to introduce "open" into the user's
namespace (i.e., the programmer should be able to define an "open"
without breaking the library's "fopen"). But if the programmer includes
<fcntl.h>, open() is introduced in the system namespace, and can be
called. The solution is to have fopen() call __open(), which is aliased
to a weak symbol named "open". Thus, if the programmer defines his own
"open", that definition takes precedence, but fopen() can still call
__open(). (See the second sketch below.)

After that, weak undefined symbols were invented as well, allowing the
programmer to declare an external symbol that can remain unresolved at
link time without error. (See the third sketch below.)
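Here is a minimal sketch of the tentative-definition case (I'm guessing
at the contents of your f1.c and f2.c; adjust the types to match your
example):

    /* f1.c */
    #include <stdio.h>

    double c;   /* tentative definition: emitted as a "common" symbol */

    int main(void)
    {
        /* Prints 8: the compiler evaluates sizeof from the local
           declaration, regardless of what the linker later does. */
        printf("%zu\n", sizeof(c));
        return 0;
    }

    /* f2.c */
    int c = 1;  /* real (non-common) definition: overrides the common one */

Note that GCC 10 and later default to -fno-common, which turns this
into a hard multiple-definition error at link time; building with
"gcc -fcommon f1.c f2.c" gets you the old behavior, where the linker
silently keeps the strong int definition (and will typically warn that
the size of c changed between objects, which is likely the warning
you're both seeing).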
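Second, a sketch of the fopen()/open() arrangement, written with GCC's
attribute syntax (real C libraries do this through internal macros, and
the real open() is variadic; this only shows the shape of the
technique):

    /* libc side: the real work lives under a reserved name */
    int __open(const char *path, int flags)
    {
        /* ... perform the actual system call ... */
        return -1;  /* stub for the sketch */
    }

    /* "open" is only a weak alias for __open */
    int open(const char *path, int flags)
        __attribute__((weak, alias("__open")));

    /* fopen() would call __open() directly, so it keeps working
       even if the user links a strong definition of open(). */

If the user's program defines its own open(), the linker prefers that
strong definition over the weak alias, but calls to __open() inside the
library are unaffected.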
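And third, a sketch of a weak undefined reference: this program links
without error even if no definition of optional_hook exists anywhere
(the name is made up for the example); the symbol's address is simply
null in that case.

    #include <stdio.h>

    /* weak reference: the linker will not complain if this
       symbol is never defined */
    extern void optional_hook(void) __attribute__((weak));

    int main(void)
    {
        if (optional_hook)          /* null if left unresolved */
            optional_hook();
        else
            printf("optional_hook not linked in\n");
        return 0;
    }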
-cary