Patrick Steinhardt <ps@xxxxxx> writes:

> void *reftable_calloc(size_t nelem, size_t elsize)
> {
> -	size_t sz = st_mult(nelem, elsize);
> -	void *p = reftable_malloc(sz);
> -	memset(p, 0, sz);
> +	void *p;
> +
> +	if (nelem && elsize > SIZE_MAX / nelem)
> +		return NULL;

Now that it is open-coded, it strikes me that the check is a bit
overly conservative.  If we are trying to allocate slightly more
than half of SIZE_MAX by asking for elsize==1 and
nelem==(SIZE_MAX / 2 + 10), we'd say that (elsize * nelem) would
not fit in size_t and fail the allocation.

For the purpose of this caller, it is not a practical issue, as it
is likely that you'd not be able to obtain slightly more than half
of your address space out of a single allocation anyway.  But it
illustrates why open-coding is not necessarily an excellent idea in
the longer term, doesn't it?  When unsigned_mult_overflows() is
updated to avoid such a false positive, how would we remember that
we need to update this copy, too?

> +	p = reftable_malloc(nelem * elsize);
> +	if (!p)
> +		return NULL;
> +
> +	memset(p, 0, nelem * elsize);
>  	return p;
>  }