Jeff Hostetler <git@xxxxxxxxxxxxxxxxx> writes:

> I defined that routine to take a uint64_t because I wanted to
> pass a nanosecond value received from getnanotime() and that's
> what it returns.

Hmph, but the target format does not have different representations
of integer types in different sizes, no?

I personally doubt that we would benefit from having a group of
functions (i.e. format_int{8,16,32,64}_to_json()) that callers have
to choose from, depending on the exact size of the integer they want
to serialize.  The de-serializing side would be the same story.

Even if the variable a potential caller of the formatter wants to
serialize is of a sized type different from uintmax_t, the caller
shouldn't have to add an extra cast.  Am I missing some obvious
merit in having these separate functions for explicit sizes?
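
To illustrate what I mean (a minimal sketch, not the actual API under
discussion; the function name below is made up), a single formatter
that takes uintmax_t already covers the getnanotime() case, because
the uint64_t value converts implicitly and the caller writes no cast:

	/*
	 * Hypothetical single-entry-point formatter; smaller unsigned
	 * types widen to uintmax_t implicitly at the call site.
	 */
	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	static void format_uint_to_json(FILE *out, const char *key,
					uintmax_t value)
	{
		fprintf(out, "\"%s\": %" PRIuMAX, key, value);
	}

	int main(void)
	{
		uint64_t ns = 123456789;	/* e.g. a getnanotime() result */

		/* no cast needed despite the sized type */
		format_uint_to_json(stdout, "elapsed_ns", ns);
		putchar('\n');
		return 0;
	}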