On Mon, 07 May 2007 17:00:24 -0700 Mingming Cao <cmm@xxxxxxxxxx> wrote:

> > +	while (ret >= 0 && ret < max_blocks) {
> > +		block = block + ret;
> > +		max_blocks = max_blocks - ret;
> > +		ret = ext4_ext_get_blocks(handle, inode, block,
> > +					  max_blocks, &map_bh,
> > +					  EXT4_CREATE_UNINITIALIZED_EXT, 0);
> > +		BUG_ON(!ret);
> > +		if (ret > 0 && test_bit(BH_New, &map_bh.b_state)
> > +		    && ((block + ret) > (i_size_read(inode) << blkbits)))
> > +			nblocks = nblocks + ret;
> > +	}
> > +
> > +	if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
> > +		goto retry;
> > +
> >
> > Now the interesting question is: what do we do if we get halfway through
> > this loop and then run out of space?  We could leave the disk all filled
> > up and then return failure to the caller, but that's pretty poor
> > behaviour, IMO.
> >
>
> The current code handles an earlier ENOSPC by retrying three times.
> After that, if we still run out of space, it's probably right to notify
> the caller that there isn't much space left.
>
> We could extend the block reservation window size before the while loop,
> so we would have a lower chance of getting more fragmented.

yes, but my point is that the proposed behaviour is really quite bad.  We
will attempt to allocate the disk space and then we will return failure,
having consumed all the disk space and having partially and uselessly
populated an unknown amount of the file.

Userspace could presumably repair the mess in most situations by
truncating the file back again.  The kernel cannot do that, because there
might be live data in amongst there.

So we'd need to either keep track of which blocks were newly allocated and
then free them all again on the error path (which doesn't work right
across commit+crash+recovery), or we could later use the space-reservation
scheme which delayed allocation will need to introduce.

Or we could decide to live with the above IMO-crappy behaviour.
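
As an aside, here is a minimal userspace sketch (not from the original
thread) of the "truncate the file back again" repair mentioned above,
assuming the preallocation only extends the file past its old EOF.  It
uses posix_fallocate() as a stand-in for the preallocation call; the
file name and sizes are hypothetical.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "testfile";		/* hypothetical file */
	off_t new_size = 1024 * 1024 * 1024;	/* try to preallocate 1GB */
	struct stat st;
	off_t old_size;
	int fd, err;

	fd = open(path, O_RDWR | O_CREAT, 0644);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	/* Remember the old EOF before asking for more space. */
	old_size = st.st_size;

	/* posix_fallocate() returns the error number directly. */
	err = posix_fallocate(fd, 0, new_size);
	if (err == ENOSPC) {
		/*
		 * Blocks allocated beyond the old EOF carry no live data,
		 * so userspace can safely give them back.
		 */
		if (ftruncate(fd, old_size) < 0)
			perror("ftruncate");
		fprintf(stderr, "ENOSPC, rolled back to %lld bytes\n",
			(long long)old_size);
	} else if (err) {
		fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
	}

	close(fd);
	return err ? 1 : 0;
}

The same rollback does not help when the preallocation fills holes inside
an existing file, which is exactly the "live data in amongst there"
problem that prevents the kernel from doing this on the error path.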