possible breakage with the >= 020 bfset et al bitops

I've recently found and fixed a bug in e2fsprogs on m68k-linux.
It implements its bitops with bfset et al, but then uses large
(>= 2G) unsigned bit numbers, even though the bit field
instructions treat a bit field offset in a data register as a
signed 32-bit value.  Such bit numbers are therefore interpreted
as negative, resulting in accesses far below the bitmap being
operated upon.
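
To make the misinterpretation concrete, here is a minimal
userspace sketch (hypothetical, assuming 32-bit int as on m68k)
of what happens to such a bit number:

#include <stdio.h>

int main(void)
{
	unsigned long nr = 0x80000000UL; /* bit 2^31, valid in a > 2 Gbit bitmap */
	int off = (int)nr;		 /* the value a data register hands to bfset */

	/* The bit field offset is read as signed 32-bit, so bit
	 * 2147483648 becomes offset -2147483648, i.e. a byte 256MB
	 * *below* the bitmap base. */
	printf("unsigned nr = %lu -> bfset offset = %d\n", nr, off);
	printf("byte addressed relative to base: %ld\n", (long)off / 8);
	return 0;
}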

A quick glance at the kernel shows that its bitops for >= 020 may
have the same problem.

So,

Q1: Are the kernel's bitops guaranteed never to be applied to
bitmaps larger than 256MB (2Gbit)?  ARAnyM can certainly boot
kernels with much more memory than that, so I suspect the answer
is "no".

Q2: Assuming a "no" answer to Q1, what do we want to do about it?
a) declare it a user (driver, whatever) error and ignore it?
b) add a BUG_ON(nr < 0); at the start of bfset_mem_set_bit et al?
c) manually adjust the base address and bit number to refer to the
   correct byte before performing the bit field instructions?
   (that is what my e2fsprogs fix does; see the sketch below)
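
For illustration, a rough sketch of option c) (not the actual
e2fsprogs or kernel code; the function name is made up): step the
base address to the longword holding the bit first, so the offset
handed to bfset stays in 0..31 and can never be sign-extended into
an access below the bitmap.

static inline void bfset_set_bit_fixed(unsigned long nr,
				       volatile unsigned long *vaddr)
{
	volatile unsigned long *p = vaddr + (nr >> 5);	/* longword with the bit */
	unsigned int off = (nr ^ 31) & 31;	/* bfset counts from the MSB */

	__asm__ __volatile__ ("bfset %1{%0:#1}"
		: : "d" (off), "o" (*p)
		: "memory");
}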

With option c) I believe there is no longer any advantage to
bfset et al over the classic bset et al, so some of the code
in arch/m68k/include/asm/bitops.h could possibly be unified.

Side note: in arch/m68k/include/asm/bitops.h, shouldn't the bit
number parameter 'nr' be 'unsigned int' for consistency with
Documentation/atomic_ops.txt?  Surely nothing expects to be able
to use negative bit numbers...?
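
For illustration only (not the current kernel code, and the name
is made up), an unsigned parameter makes a negative bit number
impossible by construction:

static inline void example_set_bit(unsigned int nr,
				   volatile unsigned long *vaddr)
{
	/* Non-atomic generic fallback; the point is only the
	 * unsigned parameter type. */
	vaddr[nr / (8 * sizeof(unsigned long))] |=
		1UL << (nr % (8 * sizeof(unsigned long)));
}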

/Mikael