Avantika Mathur wrote:
> - large filesystem
>   - We would like to perform more testing on large (>16TB) filesystems;
>     currently hardware limitations are preventing this testing. We have
>     tested 10TB raid disks and 16TB loopback devices. Avantika will
>     look into creating very large sparse devices for testing.

I've been hacking up some ext3@16T testing scripts that use sparse
device-mapper devices built from snapshots... loopback files don't work
for this testing, at least not when hosted on ext[234], because we still
can't reach these large file offsets. (Documentation/device-mapper/zero.txt
in the kernel tree describes these sparse dm devices.)

Testing the whole range as a sparse snapshot can be slow, since
device-mapper has to do all the exception handling etc., and I think it
essentially creates a fragmented block device. I've been playing with
something like this:

# 90% of the real device size is used for a "real" 1:1 mapping.
# The other 10% is sparsely mapped out so the total adds up to totalsize.
# i.e. -
# [large sparse-ish device]
#
# +----------------------~ ~-----------------------------------------+
# |        sparse                                      |     real     |
# +----------------------~ ~-----------------------------------------+
#
# |<------------ SPARSE_SIZE ---------------->|<----- REAL_SIZE ----->|
#
# is mapped on top of:
#
# [real block device]
# +----------------------------+
# | sp |         real          |
# +----------------------------+

...and then marking the sparse range as full (maybe via lazy_bg, or
other methods). You could also put a dm-error target under the "full"
sections so that any IO that strays there will fail. This way you can
direct the real IO to the 1:1 mapping portion of the large dm device and
shouldn't get the snapshot slowdowns.

Anyway, just something I've been playing with...

-eric
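
As a rough sketch of what building such a table might look like: the
script below computes a two-target dm table (a "zero" target for the
sparse region, a "linear" target for the real 1:1 region) and prints it
in the form dmsetup expects. All the sizes and the backing device name
here are made-up placeholders, not anything from my actual scripts:

```shell
#!/bin/sh
# Hypothetical backing device and sizes -- adjust for your hardware.
# All sizes are in 512-byte sectors, which is what dm tables use.
REAL_DEV=/dev/sdb1

TOTAL_SECTORS=$((17 * 1024 * 1024 * 1024 * 2))   # 17 TiB apparent size
REAL_SECTORS=$((100 * 1024 * 1024 * 2))          # 100 GiB of real backing
SPARSE_SECTORS=$((TOTAL_SECTORS - REAL_SECTORS))

# dm table: one line per target, "start length type args".
# The sparse region comes first; the real 1:1 mapping sits at the end.
printf '%s\n' \
    "0 $SPARSE_SECTORS zero" \
    "$SPARSE_SECTORS $REAL_SECTORS linear $REAL_DEV 0"

# To actually create the device (needs root), pipe the table in, e.g.:
#   ./mktable.sh | dmsetup create bigdev
```

A dm-error target could be substituted for (or layered under) the zero
target if you want stray IO into the "full" region to fail loudly rather
than read back zeros silently.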