Hi Bart,

On Mon, Jun 08, 2020 at 09:14:03PM -0700, Bart Van Assche wrote:
> diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
> index 42dbf90d4651..edc9c082dc6e 100644
> --- a/drivers/scsi/qla2xxx/qla_def.h
> +++ b/drivers/scsi/qla2xxx/qla_def.h
> @@ -46,7 +46,7 @@ typedef struct {
> 	uint8_t al_pa;
> 	uint8_t area;
> 	uint8_t domain;
> -} le_id_t;
> +} __packed le_id_t;
>
>  #include "qla_bsg.h"
>  #include "qla_dsd.h"
> @@ -1841,8 +1841,8 @@ typedef union {
> 	struct {
> 		uint8_t reserved;
> 		uint8_t standard;
> -	} id;
> -} target_id_t;
> +	} __packed id;
> +} __packed target_id_t;

This is a bit strange. Why is it that only these two definitions need
this treatment? With gcc 6.3.0 on Debian stretch, the compiler did the
right thing for x86_64, ARMv7 and ARMv8: in all cases target_id_t is 2
bytes long. Or does this happen because target_id_t is embedded into
cmd_entry_t?

Here is the test code:

#include <stdio.h>
#include <stdint.h>

typedef union {
	uint16_t extended;
	struct {
		uint8_t reserved;
		uint8_t standard;
	} id;
} target_id_t;

int main(int argc, char *argv[])
{
	printf("sizeof(target_id_t) = %zu\n", sizeof(target_id_t));
	return 0;
}

The only thing that is different from the above code is the use of
uint16_t instead of __le16, though I wouldn't expect that to make a
difference here.

Thanks,
Daniel
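
P.S. In case it helps narrow this down, below is a slightly extended
variant of the test above. It is only a sketch; the "container" struct
and its field names are made up and are not the real cmd_entry_t
layout. It additionally prints the alignment of target_id_t and the
offsets when both types are embedded after an odd number of leading
bytes, which is where __packed would start to matter:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* 3-byte struct as in qla_def.h, without __packed */
typedef struct {
	uint8_t al_pa;
	uint8_t area;
	uint8_t domain;
} le_id_t;

typedef union {
	uint16_t extended;
	struct {
		uint8_t reserved;
		uint8_t standard;
	} id;
} target_id_t;

/*
 * Hypothetical container, not the real cmd_entry_t: three single-byte
 * members followed by the embedded types, so any padding the compiler
 * inserts to satisfy the 2-byte alignment of target_id_t shows up in
 * the offsets below.
 */
struct container {
	uint8_t entry_type;
	uint8_t entry_count;
	uint8_t sys_define;
	target_id_t target;
	le_id_t id;
};

int main(void)
{
	printf("sizeof(target_id_t) = %zu, _Alignof(target_id_t) = %zu\n",
	       sizeof(target_id_t), _Alignof(target_id_t));
	printf("offsetof(struct container, target) = %zu\n",
	       offsetof(struct container, target));
	printf("offsetof(struct container, id) = %zu\n",
	       offsetof(struct container, id));
	printf("sizeof(struct container) = %zu\n", sizeof(struct container));
	return 0;
}

At least with gcc on x86_64 this reports an offset of 4 for "target",
i.e. one byte of padding after the three uint8_t members, whereas
declaring the union __packed drops its alignment requirement to 1 and
the padding goes away. That would only explain the change if the real
cmd_entry_t places target_id_t at an unaligned offset, though.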