Re: [PATCH 00/14] hfsplus: introduce journal replay functionality

Hi, 

It is quite a coincidence - I have also spent some hours in the last few days rebasing the Netgear-derived patch set that has been bit-rotting on my hard disk. I did a quick diff, and it seems you have drawn some ideas from it, but yours is largely an independent effort. Anyway, here are a few things from where I am. 

- It appears that you don't treat the special files (catalog, etc.) specially? Netgear's does, but I think there is a mistake in that they don't consider those files ever getting fragmented, so they were not journaling them properly. 

- I am still nowhere near figuring out what the issue is with just running du on one funny volume I have. The driver gets confused after a few runs of du, but the disk always comes up clean under fsck. 

- I see I still have one outstanding patch not yet submitted - the one about folder counts on case-sensitive volumes. 

I'll try to spend some time reading your patches and maybe even try them out. Will write again. 

Hin-tak



------------------------------
On Thu, Dec 26, 2013 09:41 GMT Vyacheslav Dubeyko wrote:

>Hi,
>
>This patch implements journal replay functionality in HFS+
>file system driver.
>
>Technical Note TN1150:
>"The purpose of the journal is to ensure that when a group of
> related changes are being made, that either all of those changes
> are actually made, or none of them are made. This is done by
> gathering up all of the changes, and storing them in a separate
> place (in the journal). Once the journal copy of the changes is
> completely written to disk, the changes can actually be written
> to their normal locations on disk. If a failure happens at that
> time, the changes can simply be copied from the journal to their
> normal locations. If a failure happens when the changes are being
> written to the journal, but before they are marked complete, then
> all of those changes are ignored."
>
>"A group of related changes is called a transaction. When all of
> the changes of a transaction have been written to their normal
> locations on disk, that transaction has been committed, and is
> removed from the journal. The journal may contain several
> transactions. Copying changes from all transactions to their
> normal locations on disk is called replaying the journal."
>
>"In order to replay the journal, an implementation just loops
> over the transactions, copying each individual block in the
> transaction from the journal to its proper location on the
> volume. Once those blocks have been flushed to the media
> (not just the driver!), it may update the journal header to
> remove the transactions."
>
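
For reference, here is a rough transcription of the on-disk journal
structures that the replay steps below manipulate, as TN1150 describes
them. The definitions the patch set actually uses live in
fs/hfsplus/hfsplus_raw.h and may differ in naming and endianness
annotations; uint*_t is used here in place of TN1150's u_int*_t types.

#include <stdint.h>

/* Journal info block, read from allocation block vhb.journalInfoBlock. */
struct JournalInfoBlock {
	uint32_t flags;                  /* kJIJournal* flags below */
	uint32_t device_signature[8];
	uint64_t offset;                 /* journal location, in bytes from the volume start */
	uint64_t size;                   /* journal size, in bytes */
	uint32_t reserved[32];
};

enum {
	kJIJournalInFSMask          = 0x00000001,
	kJIJournalOnOtherDeviceMask = 0x00000002,
	kJIJournalNeedsInitMask     = 0x00000004
};

/* Journal header, stored at the start of the journal buffer. */
typedef struct journal_header {
	uint32_t magic;                  /* JOURNAL_HEADER_MAGIC */
	uint32_t endian;                 /* ENDIAN_MAGIC, in the creator's byte order */
	uint64_t start;                  /* offset of the oldest transaction, in bytes from the journal start */
	uint64_t end;                    /* offset just past the newest transaction */
	uint64_t size;                   /* size of the journal buffer, in bytes */
	uint32_t blhdr_size;             /* size of a block list header, in bytes */
	uint32_t checksum;
	uint32_t jhdr_size;              /* size of a journal sector, in bytes */
} journal_header;

#define JOURNAL_HEADER_MAGIC	0x4a4e4c78
#define ENDIAN_MAGIC		0x12345678

/* One journaled block: bsize bytes destined for sector bnum on the volume. */
typedef struct block_info {
	uint64_t bnum;
	uint32_t bsize;
	uint32_t next;
} block_info;

/* A block list; binfo[0] is a dummy entry whose next field is zero for
 * the last block list of a transaction, and the real blocks follow it. */
typedef struct block_list_header {
	uint16_t max_blocks;
	uint16_t num_blocks;             /* binfo entries (assumed here to include the dummy binfo[0]) */
	uint32_t bytes_used;
	uint32_t checksum;
	uint32_t pad;
	block_info binfo[1];             /* variable length in practice */
} block_list_header;
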
>"Here are the steps to replay the journal:
>  1. Read the volume header into variable vhb. The volume may
>     have an HFS wrapper; if so, you will need to use it to
>     determine the location of the volume header.
>  2. Test the kHFSVolumeJournaledBit in the attributes field of
>     the volume header. If it is not set, there is no journal
>     to replay, and you are done.
>  3. Read the journal info block from the allocation block number
>     vhb.journalInfoBlock, into variable jib.
>  4. If kJIJournalNeedsInitMask is set in jib.flags, the journal
>     was never initialized, so there is no journal to replay.
>  5. Verify that kJIJournalInFSMask is set and kJIJournalOnOtherDeviceMask
>     is clear in jib.flags.
>  6. Read the journal header at jib.offset bytes from the start
>     of the volume, and place it in variable jhdr.
>  7. If jhdr.start equals jhdr.end, the journal does not have
>     any transactions, so there is nothing to replay.
>  8. Set the current offset in the journal (typically a local
>     variable) to the start of the journal buffer, jhdr.start.
>  9. While jhdr.start does not equal jhdr.end, perform the
>     following steps:
>       1. Read a block list header of jhdr.blhdr_size bytes from
>          the current offset in the journal into variable blhdr.
>       2. For each block in blhdr.binfo[1] to blhdr.binfo[blhdr.num_blocks],
>          inclusive, copy bsize bytes from the current offset in
>          the journal to sector bnum on the volume (to byte offset
>          bnum*jhdr.jhdr_size). Remember that jhdr_size is the
>          size of a sector, in bytes.
>       3. If blhdr.binfo[0].next is zero, you have completed the
>          last block list of the current transaction; set jhdr.start
>          to the current offset in the journal."
>
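
The enumerated steps above translate fairly directly into code. The
following is a minimal, userspace-style sketch of the replay loop,
reusing the structure definitions transcribed earlier; it is not the
implementation from this patch set. It assumes steps 1-5 have already
been done, so the caller passes in the journal's byte offset on the
volume (jib.offset). Error handling, byte-order conversion, checksum
verification and wraparound of the circular journal buffer are all
omitted, and the binfo[] indexing assumes num_blocks counts the dummy
binfo[0] entry.

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch only: replay the journal of the HFS+ volume open on 'fd',
 * whose journal buffer starts 'journal_offset' bytes into the volume. */
static int replay_journal_sketch(int fd, uint64_t journal_offset)
{
	journal_header jhdr;
	uint64_t off;

	/* Step 6: read the journal header, which sits at the start of
	 * the journal buffer (jib.offset bytes into the volume). */
	pread(fd, &jhdr, sizeof(jhdr), journal_offset);

	/* Step 7: start == end means there are no transactions to replay. */
	if (jhdr.start == jhdr.end)
		return 0;

	/* Step 8: the current offset begins at jhdr.start, which is
	 * relative to the start of the journal buffer. */
	off = journal_offset + jhdr.start;

	/* Step 9: walk the block lists until start catches up with end. */
	while (jhdr.start != jhdr.end) {
		block_list_header *blhdr = malloc(jhdr.blhdr_size);
		uint16_t i;

		/* 9.1: read one block list header of blhdr_size bytes. */
		pread(fd, blhdr, jhdr.blhdr_size, off);
		off += jhdr.blhdr_size;

		/* 9.2: copy each journaled block from the journal to its
		 * home location, at byte offset bnum * jhdr_size. */
		for (i = 1; i < blhdr->num_blocks; i++) {
			uint32_t len = blhdr->binfo[i].bsize;
			uint64_t dst = blhdr->binfo[i].bnum * jhdr.jhdr_size;
			void *buf = malloc(len);

			pread(fd, buf, len, off);
			pwrite(fd, buf, len, dst);
			off += len;
			free(buf);
		}

		/* 9.3: binfo[0].next == 0 marks the last block list of the
		 * current transaction, so move jhdr.start past it. */
		if (blhdr->binfo[0].next == 0)
			jhdr.start = off - journal_offset;

		free(blhdr);
	}

	/* Flush the copied blocks to the media before the updated journal
	 * header is written back to mark the journal empty. */
	fsync(fd);
	return 0;
}

A real implementation would of course also verify the journal header's
magic, endian and checksum fields and cope with the journal buffer
wrapping around, which is part of why the in-kernel version in
fs/hfsplus/journal.c is a good deal longer than this.
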
>With the best regards,
>Vyacheslav Dubeyko.
>---
>
> Documentation/filesystems/hfsplus.txt |    4 +
> fs/hfsplus/Makefile                   |    3 +-
> fs/hfsplus/hfsplus_fs.h               |   31 +-
> fs/hfsplus/hfsplus_raw.h              |   53 +-
> fs/hfsplus/journal.c                  | 1207 +++++++++++++++++++++++++++++++++
> fs/hfsplus/options.c                  |    8 +-
> fs/hfsplus/part_tbl.c                 |    4 +-
> fs/hfsplus/super.c                    |   42 +-
> fs/hfsplus/wrapper.c                  |   28 +-
> 9 files changed, 1356 insertions(+), 24 deletions(-)
>-- 
>1.7.9.5
>
>




