On Thu, 8 Oct 2009, Alasdair G Kergon wrote:

> On Tue, Oct 06, 2009 at 07:01:37PM -0400, Mikulas Patocka wrote:
> > Under some special conditions (too big a chunk size or a zero-sized
> > device), the resulting hash_size is calculated as zero.
> >
> > rounddown_pow_of_two(0) is an undefined operation (it expands to a
> > shift by -1). And init_exception_table with a zero argument would
> > fail with -ENOMEM.
> >
> > This patch makes the minimum hash size 64, just like for the pending
> > exception table.
>
> Could we have some more specific information in this patch header?
>
> How does a "too big" chunk size happen? And what is "too big"?

The problem occurs when the chunk size is bigger than the origin device
(this is valid, but useless). Then hash_size is calculated as zero and a
false allocation failure is reported. I think bug 502965 is caused by
this --- Milan tried to create a 10k volume with a 16k chunk size.

> How does a zero-sized device happen and how can this code do anything
> sensible if it encounters one - shouldn't it fail with an error instead?
> Or do we still have a problem with the sequence in which dm ioctls are
> being issued?

I don't know whether a zero-sized device can happen with lvm; I think
not. It could only be created manually with dmsetup. A snapshot of a
zero-sized device may still be considered allowed.

> Alasdair

Mikulas
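
PS: To illustrate the failure mode, here is a minimal standalone sketch
(plain userspace C, not the actual dm-snap.c code; the variable names
mirror dm-snap.c, and the sector-based sizes I use for Milan's
10k-volume/16k-chunk case are my assumption):

#include <stdio.h>

int main(void)
{
	unsigned long origin_dev_size = 20;	/* 10k volume = 20 sectors */
	unsigned long chunk_shift = 5;		/* 16k chunk = 32 sectors */
	unsigned long max_buckets = 1UL << 16;

	/* chunk bigger than the device: the shift yields zero */
	unsigned long hash_size = origin_dev_size >> chunk_shift;
	if (hash_size > max_buckets)
		hash_size = max_buckets;

	/*
	 * In the kernel, rounddown_pow_of_two(hash_size) would now expand
	 * to 1UL << (fls_long(0) - 1), i.e. a shift by -1, which is
	 * undefined.  init_exception_table(..., 0) would then fail with
	 * -ENOMEM, the false allocation failure described above.
	 */
	printf("hash_size without the fix: %lu\n", hash_size);

	/* the fix: apply the same minimum (64) the pending table uses */
	if (hash_size < 64)
		hash_size = 64;
	printf("hash_size with the fix:    %lu\n", hash_size);

	return 0;
}

With the floor in place, rounddown_pow_of_two(64) is well defined and
init_exception_table gets a sane, non-zero size.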