Re: Contiguous file sequences

Eric,

On Wed, Sep 22, 2010 at 9:10 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> Daire Byrne wrote:
>> Hi,
>>
>> I have been trying to figure out how to lay down a file sequence (e.g.
>> images) such that they are guaranteed to always be contiguous on disk
>> (i.e. no block gaps between them).
>
> There's no mechanism to guarantee that.
>
> Why is this the goal, what are you trying to achieve?

I am essentially trying to play back a large frame sequence while
minimising seeks, since seeking can cause sporadic slowdowns on a
SATA-based RAID.

>> Currently if I write a sequence to disk, things like "filestreams"
>> help keep everything in the same AG, and the allocation algorithm
>> seems to prefer to place files next to each other, but without the
>> filesystem knowing the total size of the sequence there are always
>> likely to be gaps in the blocks where existing data has been written.
>
> preallocation of each image before writing would help make it more
> likely that each image is itself contiguous (but again this is not
> -guaranteed-)
>
>> So even if the first file is written
>> completely contiguously to disk there is no way to guarantee that
>> there is contiguous free space after it to write the rest of the
>> images.
>>
>> What I really want is to be able to find and reserve enough space for
>> the entire sequence and then write the files into that big contiguous
>> range. I tried to do this with xfs_io hoping that the allocator would
>> just know what I wanted and do the right thing (ever the optimist...).
>
> :)
>
>> So something like this:
>>
>>   # find and reserve a big chunk to fit all my files in
>>   xfs_io -f -c "resvsp 0 136314880" -c "bmap -v" $DIR/test.0
>>
>>   # now shrink it keeping the start block
>>   xfs_io -f -c "freesp 13631488 0" -c "bmap -v" $DIR/test.0
>>
>>   # now write a bunch of files and hope they continue from test.0 on disk
>>   dd if=/dev/zero of=$DIR/test.0 bs=1M count=13 conv=nocreat,notrunc
>>   for  x in `seq 1 4`; do
>>       dd if=/dev/zero of=$DIR/test.$x bs=1M count=13 conv=notrunc
>>   done
>>
>> But a new allocation is made for the first new file in the sequence
>> elsewhere on disk and I don't know how to get it to use the large
>> chunk of free contiguous space after the "test.0" file instead.
>
> You can't specify a starting block for any given file I'm afraid.
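
For the per-image preallocation you suggest above, a rough sketch
would be something like this (the paths, the 13 MiB frame size and
the /dev/zero input are just stand-ins for the real sequence):

  # every image in the sequence is the same known size
  FRAME_SIZE=13631488   # 13 MiB

  for x in `seq 1 4`; do
      # reserve the full size up front so each file has a chance of
      # getting a single contiguous extent...
      xfs_io -f -c "resvsp 0 $FRAME_SIZE" $DIR/test.$x
      # ...then write the actual data into the reserved range
      dd if=/dev/zero of=$DIR/test.$x bs=1M count=13 conv=notrunc
  done

That makes each file more likely to be a single extent, but as you say
it still can't guarantee the files land one after another.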

Somebody pointed me at this, which looks fairly promising:

  http://oss.sgi.com/archives/xfs/2006-07/msg01005.html

I'm still trying to get my head around how I would actually write a
userspace app/script to use it, but I think it should allow me to do
what I want. It would be good if I could script it through xfs_io.
Ideally I'd have a script that I could point at a directory and that
would do something like this (rough sketch after the list):

  1. count the total space used by the file sequence
  2. find a start block for that much contiguous free space on disk
     (or as much of it as possible)
  3. allocate the files one after another on disk starting from that block
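
In shell terms, steps 1 and 2 look doable with existing tools; it is
step 3 that needs something like the allocator interface in the
message above. A rough sketch (where $SEQDIR and /dev/sdb1 are
placeholders and the final command is purely hypothetical):

  # 1. total bytes needed by the sequence
  TOTAL=`du -bc $SEQDIR/* | tail -1 | awk '{print $1}'`

  # 2. check that a contiguous free extent that big exists at all
  #    (freesp -s prints a histogram of free space by extent size);
  #    locating its actual start block would take more digging in xfs_db
  xfs_db -r -c "freesp -s" /dev/sdb1

  # 3. the missing piece: tell the allocator "place this file starting
  #    at block N"; xfs_io has no such command today, so this line is
  #    purely hypothetical
  # xfs_io -c "allocate_at $START_BLOCK $FRAME_SIZE" $SEQDIR/test.$x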

>> Another option might be to create a single large contiguous file,
>> concatenate all the images into it, and then split it up on disk
>> using offsets, but I don't think such a thing is even possible. I
>> always know the image sequence size beforehand, all images are
>> exactly the same size, and I can control/freeze filesystem access
>> if needed.
>>
>> Anybody got any suggestions? It *seems* like something that should be
>> possible and would be useful.
>
> This would be pretty low-level control of the allocator by userspace.
>
> I'll just go back and ask what problem you're trying to solve?  There
> may be a better (i.e. currently existing) solution.

The "realtime" option is sometimes suggested as a way to do sequence
streaming but I'd really rather avoid that. It seems to me like the
option to allocate a sequence of files end on end in a known chunk of
contiguous space is something that would be useful in the normal
operating mode. SSDs are an option but they ain't cheap for the amount
of storage I require and besides I know that when the sequence is
written contiguously on disk my current setup can reach the required
speeds.
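
Just to spell out the concatenation idea from my original mail: the
first half is easy enough (a sketch, reusing the 13 MiB frame size and
file names from the earlier example); it's splitting the blob back out
into separate files in place, without copying, that I don't know how
to do:

  # one container big enough for the whole sequence (5 frames x 13 MiB)
  xfs_io -f -c "resvsp 0 68157440" $DIR/seq.blob

  # write each image at its fixed offset: frame N lands at N * 13631488
  for x in `seq 0 4`; do
      dd if=$DIR/test.$x of=$DIR/seq.blob bs=1M count=13 \
         seek=`expr $x \* 13` conv=notrunc
  done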

Thanks,

Daire
