On Thu, Feb 15, 2024 at 11:39 PM Orion Poplawski <orion@xxxxxxxx> wrote:
>
> We're hitting this with h5py on i686:
>
> /builddir/build/BUILD/h5py-3.10.0/serial/h5py/defs.c: In function ‘__pyx_f_4h5py_4defs_H5Dread_chunk’:
> /builddir/build/BUILD/h5py-3.10.0/serial/h5py/defs.c:14922:85: error: passing argument 4 of ‘H5Dread_chunk’ from incompatible pointer type [-Wincompatible-pointer-types]
> 14922 |   __pyx_v_r = H5Dread_chunk(__pyx_v_dset_id, __pyx_v_dxpl_id, __pyx_v_offset, __pyx_v_filters, __pyx_v_buf);
>       |                                                               ^~~~~~~~~~~~~~~
>       |                                                               |
>       |                                                               __pyx_t_5numpy_uint32_t * {aka long unsigned int *}
> In file included from /usr/include/hdf5.h:25,
>                  from /builddir/build/BUILD/h5py-3.10.0/serial/h5py/api_compat.h:27,
>                  from /builddir/build/BUILD/h5py-3.10.0/serial/h5py/defs.c:1246:
> /usr/include/H5Dpublic.h:1003:92: note: expected ‘uint32_t *’ {aka ‘unsigned int *’} but argument is of type ‘__pyx_t_5numpy_uint32_t *’ {aka ‘long unsigned int *’}
>  1003 | H5_DLL herr_t H5Dread_chunk(hid_t dset_id, hid_t dxpl_id, const hsize_t *offset, uint32_t *filters,
>       |                                                                          ~~~~~~~~~~^~~~~~~
> /builddir/build/BUILD/h5py-3.10.0/serial/h5py/defs.c: In function ‘__pyx_f_4h5py_4defs_H5Pget_driver_info’:
> /builddir/build/BUILD/h5py-3.10.0/serial/h5py/defs.c:31935:13: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
> 31935 |   __pyx_v_r = H5Pget_driver_info(__pyx_v_plist_id);
>       |             ^
>
> It seems that numpy is defining a uint32_t type as long unsigned int on
> i686, while glibc(?) is defining it as unsigned int.

Yes, looking at NumPy's header [1], it appears to check `long` first, then
`long long`, then `int`, then `short`, and assigns the first type whose size
matches to the corresponding fixed-width alias. So it picks unsigned long for
npy_uint32 ahead of unsigned int when, as on i686, both are 4 bytes wide.

> Now what puzzles me a little is that on i686 aren't these both 4-byte
> integers and no not incompatible at all?

Yes, they are indeed the same size, as demonstrated on a 32-bit mock:

```
#include <numpy/npy_common.h>
#include <stdio.h>

int main(void)
{
    printf("npy_uint32: %zu\nunsigned int: %zu\n"
           "unsigned long: %zu\nunsigned long long: %zu\n",
           sizeof(npy_uint32), sizeof(unsigned int),
           sizeof(unsigned long), sizeof(unsigned long long));
    return 0;
}
```

which prints:

```
npy_uint32: 4
unsigned int: 4
unsigned long: 4
unsigned long long: 8
```

They are still distinct types as far as the compiler is concerned, though,
so pointers to them are incompatible even when the sizes match; there is a
small standalone illustration of that at the end of this message.

> What should be done here?

I guess that depends on how glibc sets things up, but perhaps it would work
better if NumPy checked from smallest to largest as C defines them
(short -> int -> long -> long long)? A rough sketch of that ordering is also
at the end of this message.

[1] https://github.com/numpy/numpy/blob/308273e94bcf49980be9d5ded2b0ff5b4dd3a897/numpy/_core/include/numpy/npy_common.h#L488
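To make the "same size but still incompatible" point concrete, here is a
minimal illustration (not h5py or Cython code; the helper function name is
made up): even when `unsigned int` and `unsigned long` are both 4 bytes, as
on i686, they remain distinct types in C, so passing an `unsigned long *`
where a `uint32_t *` is expected produces the same diagnostic as the build
log above.

```
#include <stdint.h>

/* Hypothetical stand-in for an API like H5Dread_chunk that takes a
 * uint32_t * argument (unsigned int * with glibc). */
static void takes_uint32_ptr(uint32_t *filters)
{
    (void)filters;
}

int main(void)
{
    /* On i686, npy_uint32 ends up as unsigned long; this variable
     * plays that role here. */
    unsigned long filters = 0;

    /* Same size, different type: GCC diagnoses this with
     * -Wincompatible-pointer-types (an error with recent GCC), just
     * like the h5py failure quoted above. */
    takes_uint32_ptr(&filters);

    return 0;
}
```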
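And a rough sketch of the smallest-to-largest idea. This is purely
illustrative: it uses <limits.h> constants rather than whatever
configure-time machinery npy_common.h actually relies on, and my_uint32 is
a made-up name. The point is that checking `int` before `long` makes the
32-bit alias come out as `unsigned int` on i686, matching glibc's uint32_t.

```
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Pick the first unsigned type, checked from smallest to largest,
 * that is exactly 32 bits wide. */
#if USHRT_MAX == 0xffffffffUL
typedef unsigned short my_uint32;
#elif UINT_MAX == 0xffffffffUL
typedef unsigned int my_uint32;
#elif ULONG_MAX == 0xffffffffUL
typedef unsigned long my_uint32;
#else
#error "no 32-bit unsigned type found"
#endif

int main(void)
{
    my_uint32 v = 0;

    /* With glibc on both i686 and x86_64, my_uint32 resolves to
     * unsigned int, so a uint32_t * can point at it without any
     * -Wincompatible-pointer-types complaint. */
    uint32_t *p = &v;

    printf("sizeof(my_uint32) = %zu, *p = %u\n",
           sizeof(my_uint32), (unsigned)*p);
    return 0;
}
```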