Re: [PATCH 1/1] hfsplus: skip unnecessary volume header sync

On 07/19/2014 02:23 PM, Vyacheslav Dubeyko wrote:

On Jul 18, 2014, at 1:24 PM, Sougata Santra wrote:

On Fri, 2014-07-18 at 12:28 +0400, Vyacheslav Dubeyko wrote:
On Thu, 2014-07-17 at 19:32 +0300, Sougata Santra wrote:
hfsplus_sync_fs always writes volume header information to disk on every
sync. This causes problems for systems trying to monitor disk activity in
order to switch them to a low power state. Also, hfsplus_sync_fs is explicitly
called from mount/unmount, which is unnecessary. During mount/unmount we only
want to set/clear the volume dirty state.


As far as I can judge, the hfsplus driver has the hfsplus_mark_mdb_dirty()
method. This method "marks" the volume header as dirty and defines a
dirty_writeback_interval. hfsplus_mark_mdb_dirty() is called in such
important methods as hfsplus_block_allocate(), hfsplus_block_free(),
hfsplus_system_write_inode(), hfsplus_link(), hfsplus_new_inode() and
hfsplus_delete_inode(). So, it means to me that every call of
hfsplus_sync_fs() is made when the volume header should be written to the
volume. So, if you can detect some inefficiency or frequent calls of
hfsplus_sync_fs(), then maybe hfsplus_mark_mdb_dirty() needs to be
optimized in the direction of a proper dirty_writeback_interval
definition. What do you think?

Thanks a lot for taking the time to look into the patch. I will look into
it. hfsplus_sync_fs() is also called explicitly from mount/unmount (it is
not called from remount; that is a bug and needs to be addressed). This
is not required at all, since it is already called from the VFS. The
only purpose of calling it from mount/unmount is to set/clear the dirty
state and update other info like the driver version, write count, etc.,
whereas hfsplus_sync_fs() clearly does more than updating the volume
header and flushing it to disk.


Finally, I think that a more compact and better solution can be achieved
by modifying the use of hfsplus_mark_mdb_dirty().

We can add two additional fields to struct hfsplus_sb_info:
(1) a flag that records the dirty state of the volume header;
(2) a last write timestamp.

So, every call of hfsplus_mark_mdb_dirty() will set the volume header
dirty flag. hfsplus_sync_fs() will know that the volume header is really
dirty by checking the dirty flag in hfsplus_sb_info, and it will clear
this flag after saving the state of the volume header and the special files
on the volume.

The last write timestamp plus a timeout between two adjacent writes can be
used to decrease the frequency of flushing the volume header to the volume.
Such a technique is used in NILFS2.
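The idea can be sketched in userspace roughly like this (a sketch only;
all struct, field and function names here are hypothetical, not the real
hfsplus ones, and in the driver the state would live in struct
hfsplus_sb_info):

```c
#include <stdbool.h>
#include <time.h>

/* Hypothetical throttling state, modeled on the proposal above:
 * a dirty flag set whenever the volume header is modified, and a
 * timestamp of the last time the header was flushed. */
struct vh_sync_state {
	bool   vhdr_dirty;	/* set on every mark-dirty call */
	time_t last_write;	/* when the header was last flushed */
};

#define VH_FLUSH_INTERVAL 5	/* seconds between two adjacent flushes */

/* Called from the places that modify the volume header
 * (block allocate/free, new/delete inode, and so on). */
static void mark_vhdr_dirty(struct vh_sync_state *s)
{
	s->vhdr_dirty = true;
}

/* Called from sync_fs(); returns true only when the header really
 * must be written out: it is dirty and the timeout between adjacent
 * writes has elapsed, or the caller insists (as unmount would). */
static bool vhdr_needs_flush(struct vh_sync_state *s, time_t now, bool force)
{
	if (!s->vhdr_dirty)
		return false;
	if (!force && now - s->last_write < VH_FLUSH_INTERVAL)
		return false;
	s->vhdr_dirty = false;
	s->last_write = now;
	return true;
}
```

In the real driver both fields would of course need to be accessed under
a lock (vh_mutex, say), since mark-dirty and sync_fs can race.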

Thank you for the pointer, I will look into it. I guess it also has to be
protected against concurrent access.


[snip]

		if (!error)
			error = error2;
	}
@@ -206,34 +256,41 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)

	mutex_lock(&sbi->vh_mutex);
	mutex_lock(&sbi->alloc_mutex);
-	vhdr->free_blocks = cpu_to_be32(sbi->free_blocks);
-	vhdr->next_cnid = cpu_to_be32(sbi->next_cnid);
-	vhdr->folder_count = cpu_to_be32(sbi->folder_count);
-	vhdr->file_count = cpu_to_be32(sbi->file_count);

-	if (test_and_clear_bit(HFSPLUS_SB_WRITEBACKUP, &sbi->flags)) {
-		memcpy(sbi->s_backup_vhdr, sbi->s_vhdr, sizeof(*sbi->s_vhdr));
-		write_backup = 1;
+	free_blocks = cpu_to_be32(sbi->free_blocks);
+	next_cnid = cpu_to_be32(sbi->next_cnid);
+	folder_count = cpu_to_be32(sbi->folder_count);
+	file_count = cpu_to_be32(sbi->file_count);
+
+	/* Check if some attribute of volume header has changed. */
+	if (vhdr->free_blocks != free_blocks ||
+			vhdr->next_cnid != next_cnid ||
+			vhdr->folder_count != folder_count ||
+			vhdr->file_count != file_count) {

I don't think that this check is correct, because the volume header also
contains some flags and forks.

Can you please elaborate? What are the other forks and flags that get
updated in the volume header?


There is a simple use-case. For example, you mount a file system, do nothing
with it and, finally, unmount the file system. You should set
kHFSBootVolumeInconsistentBit in the attributes field of the volume header
during mount, so you should write the new state of the volume header to the
volume. Then, during unmount, you should set kHFSVolumeUnmountedBit in the
attributes field of the volume header and save this state of the volume
header to the volume. These flags are used by fsck and the file system
driver to detect the presence of errors on the volume.
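As a standalone illustration of those two transitions (a sketch only:
htonl() stands in for the kernel's cpu_to_be32(), and the flag values
correspond to kHFSVolumeUnmountedBit = 8 and
kHFSBootVolumeInconsistentBit = 11 in the on-disk format; the mount-side
helper mirrors the unmount-side code quoted below from
hfsplus_put_super()):

```c
#include <stdint.h>
#include <arpa/inet.h>	/* htonl() stands in for cpu_to_be32() */

/* Attribute bits of the HFS+ volume header. */
#define HFSPLUS_VOL_UNMNT	(1 << 8)	/* kHFSVolumeUnmountedBit */
#define HFSPLUS_VOL_INCNSTNT	(1 << 11)	/* kHFSBootVolumeInconsistentBit */

/* The attributes field is stored big-endian on disk, so the flag
 * constants are converted before masking, as the driver does. */
static uint32_t mount_attributes(uint32_t attributes)
{
	/* mount: clear "cleanly unmounted", mark the volume inconsistent */
	attributes &= htonl(~(uint32_t)HFSPLUS_VOL_UNMNT);
	attributes |= htonl(HFSPLUS_VOL_INCNSTNT);
	return attributes;
}

static uint32_t unmount_attributes(uint32_t attributes)
{
	/* unmount: the mirror image, as in hfsplus_put_super() */
	attributes |= htonl(HFSPLUS_VOL_UNMNT);
	attributes &= htonl(~(uint32_t)HFSPLUS_VOL_INCNSTNT);
	return attributes;
}
```

Both transitions modify the volume header, which is why each of them
must be followed by writing the header back to the volume.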

Please look at the patch: it separates the mount/unmount code flow
from the sync-file-system code flow. The volume header is modified by
both, when syncing the file system (because of changes in allocations,
etc.) and during mount and unmount. These are separate contexts and
should not be mixed with each other; please see any other file system.
When the volume dirty/clean state is updated, or other info such as the
version is updated, it should be immediately flushed to disk, and that
is it; but the sync-file-system context is much more than that.

AFAIK the relevant checks in sync_fs determine whether any allocation
state has changed. I agree this could be done in
hfsplus_mark_mdb_dirty(), but I need to check that. The checks that are
done only determine whether the volume header needs to be updated, and
to the best of my knowledge they cover all the relevant fields which can
get modified from the hfsplus_sync_fs() context (not the mount/unmount
context).

Moreover, there is a specially dedicated method for "marking" the
volume header as dirty (I mean the hfsplus_mark_mdb_dirty() method).
So, I don't think that this check is really necessary. And, moreover,
I don't think such a significant modification of hfsplus_sync_fs()
makes sense at all.


[snip]


	return error;
@@ -287,7 +344,7 @@ static void hfsplus_put_super(struct super_block *sb)
		vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_UNMNT);
		vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_INCNSTNT);

-		hfsplus_sync_fs(sb, 1);
+		hfsplus_sync_volume_header(sb);

I doubt that flushing only the volume header is the proper approach here.
Can you guarantee that the special metadata files have been flushed before?

Please see the cover letter. I think that hfsplus_sync_fs is already
called from the VFS.

Anyway, I prefer to leave the code here unchanged. It is really important to
guarantee that we have the file system in a consistent state after unmount.
If the special files were flushed previously, then nothing more will be done
here for the special files.

I don't really understand.


Thanks,
Vyacheslav Dubeyko.


Thanks
    Sougata.
