Re: How to generate a large file allocating space

--On 1 November 2010 15:45:12 -0600 Andreas Dilger <adilger.kernel@xxxxxxxxx> wrote:

> What is it you really want to do in the end?  Shared concurrent writers
> to the same file?  High-bandwidth IO to the underlying disk?

High-bandwidth I/O to the underlying disk is part of it, though with
only one reader/writer per file. We're really using ext4 just for its
extents capability, i.e. allocating space, plus the convenience of
directory lookup to find the set of extents.
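
For concreteness, the allocation step is essentially a single
fallocate(2) call on an ext4 file. A minimal sketch (the path and size
here are made up, and this is not our actual code):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/ext4/chunk0";  /* illustrative path */
    off_t len = (off_t)1 << 30;             /* 1 GiB, illustrative */

    int fd = open(path, O_CREAT | O_WRONLY, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* mode 0: allocate the range and extend i_size; ext4 marks the
     * extents unwritten, so nothing is zeroed on disk and reads of
     * the unwritten range return zeroes. */
    if (fallocate(fd, 0, 0, len) != 0) {
        perror("fallocate");
        return 1;
    }

    close(fd);
    return 0;
}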

It's easier to do this than to write this bit from scratch, and the
files are pretty static in size (i.e. they only grow, and grow
infrequently by large amounts). The files on ext4 correspond to large
chunks of disk that we combine together using a device-mapper-type
thing (but different), and on top of that live arbitrary real
file systems. Because our device-mapper-type thing already
understands which blocks have been written to, we already have a layer
that prevents the data that was on disk before the file's creation
from being exposed. That's why I don't need ext4 to zero the blocks
out. I suppose in that sense it is like the swap file case.
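
Finding the set of extents afterwards is similarly just the FIEMAP
ioctl. Again, a rough sketch rather than what we actually run (fixed
extent buffer, no looping for files with more extents than fit):

#include <fcntl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Room for up to 32 extents in a single ioctl call. */
    enum { MAX_EXTENTS = 32 };
    struct fiemap *fm = calloc(1, sizeof(*fm) +
                               MAX_EXTENTS * sizeof(struct fiemap_extent));
    if (!fm) return 1;

    fm->fm_start = 0;
    fm->fm_length = ~0ULL;              /* map the whole file */
    fm->fm_extent_count = MAX_EXTENTS;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) != 0) {
        perror("FS_IOC_FIEMAP");
        return 1;
    }

    for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
        struct fiemap_extent *e = &fm->fm_extents[i];
        printf("logical %llu physical %llu length %llu%s\n",
               (unsigned long long)e->fe_logical,
               (unsigned long long)e->fe_physical,
               (unsigned long long)e->fe_length,
               (e->fe_flags & FIEMAP_EXTENT_UNWRITTEN) ?
                   " (unwritten)" : "");
    }

    free(fm);
    close(fd);
    return 0;
}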

Oh, and because these files are allocated infrequently, I am not
/that/ concerned about performance (famous last words). The
performance-critical stuff is done via direct writes to the SAN and
doesn't even pass through ext4 (or indeed through any single host).

--
Alex Bligh

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users

