NILFS2 partial segment finfo

Hi folks,

would the Linux implementation grok multiple finfos for one file, even with
overlapping blocks? Say a finfo for ino 12345, followed by some finfos for,
say, ino 123400, followed by some more for ino 12345? Or does it implicitly
assume some kind of ordering of the blocks?

In *BSD, and especially in NetBSD, buffer caching and memory mapping are done
in an FS-agnostic way by UVM/UBC. If a piece of file-mapped memory is deemed
no longer needed, it is pushed out as a buffer: basically a memory extent plus
its file/disc information. The pages backing that buffer are thus NOT under
the FS's control and will only be released after the FS is signalled that they
have been written out.

I hope to avoid duplicating the written-out data in memory by putting it
directly into the to-be-written-out segment. This *might* result in
prepending and moving data around if earlier blocks are written out later, but
it can also mean that write pieces from other files are interleaved. The block
number assignment, the corresponding inode update, and the recording in the
DAT I'll cache for sure, since that's hardly any data.

The reason for the mumbo jumbo is to avoid storing data multiple times: it
stays in the buffer cache until the FS signals that it has been written out.
Normally the buffer is just passed to the device, which schedules it, and
we're done; the buffer can/will be automagically released later. Since we want
to create a log, however, and would like to write out synchronously, the data
has to be copied and the buffer released ASAP to avoid possibly long delays.
If I just wrote out all the blocks after a (yet) unfilled/unwritten partial
segment header, that would solve it, but it would also create heaps of disc
write transactions, plus a retrace of the head on physical discs or a
re-flash/relocation on SSDs for the missing partial segment header.
Performance would suffer a lot.

Having the best of both worlds would mean keeping two empty partial segment
spaces around, easily eating a few (possibly scarce) megabytes: one for
construction, one for storage/overflow. You don't want to lose 4 MB for each
mounted FS, or do you think that's not an issue?

In short: what do you folks recommend? I like the two partial segment spaces,
with direct writing as a backup for memory-challenged machines, but what would
the Linux way be, and what would be a sane way?

With regards,
Reinoud Zandijk


