Re: [PATCH 1/1] hfsplus: skip unnecessary volume header sync

On Fri, 2014-07-18 at 12:28 +0400, Vyacheslav Dubeyko wrote:
> On Thu, 2014-07-17 at 19:32 +0300, Sougata Santra wrote:
> > hfsplus_sync_fs always updates volume header information on disk with every
> > sync. This causes problems for systems trying to monitor disk activity in
> > order to switch them to a low power state. Also, hfsplus_sync_fs is explicitly
> > called from mount/unmount, which is unnecessary. During mount/unmount we only
> > want to set/clear the volume dirty state.
> > 
> 
> As far as I can judge, the hfsplus driver has a hfsplus_mark_mdb_dirty()
> method. This method "marks" the volume header as dirty and defines a
> dirty_writeback_interval. hfsplus_mark_mdb_dirty() is called from such
> important methods as hfsplus_block_allocate(), hfsplus_block_free(),
> hfsplus_system_write_inode(), hfsplus_link(), hfsplus_new_inode() and
> hfsplus_delete_inode(). So, to me this means that every call of
> hfsplus_sync_fs() is made when the volume header should be written to the
> volume. So, if you can detect some inefficiency or frequent calls of
> hfsplus_sync_fs(), then maybe hfsplus_mark_mdb_dirty() needs to be
> optimized towards a proper dirty_writeback_interval definition. What do
> you think?

Thanks a lot for taking the time to look into the patch. I will look into
it. hfsplus_sync_fs() is also called explicitly from mount/unmount (it is
not called from remount, which is a bug and needs to be addressed). That
is not required at all, since it is already called from the VFS. The only
purpose of calling it from mount/unmount is to set/clear the dirty state
and to update other info like the driver version, write count, etc.,
whereas hfsplus_sync_fs() clearly does more than updating the volume
header and flushing it to disk.
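
As far as I can see, hfsplus_mark_mdb_dirty() does not write anything by
itself; it only arms a delayed work item that later ends up calling
hfsplus_sync_fs(sb, 1). Roughly, from my reading of fs/hfsplus/super.c
(a simplified sketch, not verbatim):

void hfsplus_mark_mdb_dirty(struct super_block *sb)
{
	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
	unsigned long delay;

	if (sb->s_flags & MS_RDONLY)
		return;

	spin_lock(&sbi->work_lock);
	if (!sbi->work_queued) {
		/* schedule delayed_sync_fs(), which calls hfsplus_sync_fs(sb, 1) */
		delay = msecs_to_jiffies(dirty_writeback_interval * 10);
		queue_delayed_work(system_long_wq, &sbi->sync_work, delay);
		sbi->work_queued = 1;
	}
	spin_unlock(&sbi->work_lock);
}

So tuning dirty_writeback_interval would change how often the delayed sync
runs, but it would not help with the explicit calls from mount/unmount.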

> 
> > Signed-off-by: Sougata Santra <sougata@xxxxxxxxxx>
> > ---
> >  fs/hfsplus/super.c | 101 +++++++++++++++++++++++++++++++++++++++++------------
> >  1 file changed, 79 insertions(+), 22 deletions(-)
> > 
> > diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
> > index 4cf2024..5cacb06 100644
> > --- a/fs/hfsplus/super.c
> > +++ b/fs/hfsplus/super.c
> > @@ -170,12 +170,61 @@ static void hfsplus_evict_inode(struct inode *inode)
> >  	}
> >  }
> >  
> > +/*
> > + * Helper to sync volume header state to disk. Caller must acquire
> > + * volume header mutex (vh_mutex).
> > + */
> > +static int hfsplus_sync_volume_header_locked(struct super_block *sb)
> > +{
> > +	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> > +	int write_backup = 0;
> > +	int error = 0;
> > +
> > +	if (test_and_clear_bit(HFSPLUS_SB_WRITEBACKUP, &sbi->flags)) {
> > +		memcpy(sbi->s_backup_vhdr, sbi->s_vhdr, sizeof(*sbi->s_vhdr));
> > +		write_backup = 1;
> > +	}
> > +
> > +	error = hfsplus_submit_bio(sb,
> > +			sbi->part_start + HFSPLUS_VOLHEAD_SECTOR,
> > +			sbi->s_vhdr_buf, NULL, WRITE_SYNC);
> 
> Such formatting looks weird to me. Maybe it makes sense to use local
> variables here?
> 
> > +
> > +	if (error || !write_backup)
> > +		goto out;
> > +
> > +	error = hfsplus_submit_bio(sb,
> > +			sbi->part_start + sbi->sect_count - 2,
> > +			sbi->s_backup_vhdr_buf, NULL, WRITE_SYNC);
> 
> Ditto.

Well, it can be done if you think it is neater. I have checked my patch
with checkpatch.pl and the tool did not report any formatting errors. I
don't know if it reports sub-optimal label usage.

> 
> > +out:
> > +	return error;
> > +}
> > +
> > +/* Sync dirty/clean volume header state to disk. */
> > +static int hfsplus_sync_volume_header(struct super_block *sb)
> > +{
> > +	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> > +	int error = 0;
> > +
> > +	hfs_dbg(SUPER, "hfsplus_sync_volume_header\n");
> > +
> > +	mutex_lock(&sbi->vh_mutex);
> > +	error = hfsplus_sync_volume_header_locked(sb);
> 
> A name like hfsplus_sync_volume_header_locked() really confuses me, because
> it is possible to think that we lock the mutex inside the method. So, I
> suppose that hfsplus_sync_volume_header_unlocked() is a much better name
> for my taste.

I think otherwise, and I have documented the usage of
hfsplus_sync_volume_header_locked() in a comment. Again, if the other name
is considered neater, I will rename it.

> 
> > +	mutex_unlock(&sbi->vh_mutex);
> > +
> > +	if (!error && !test_bit(HFSPLUS_SB_NOBARRIER, &sbi->flags))
> > +		blkdev_issue_flush(sb->s_bdev, GFP_KERNEL, NULL);
> > +
> > +	return error;
> > +}
> > +
> >  static int hfsplus_sync_fs(struct super_block *sb, int wait)
> >  {
> >  	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> >  	struct hfsplus_vh *vhdr = sbi->s_vhdr;
> > -	int write_backup = 0;
> > +	int write_header = 0;
> >  	int error, error2;
> > +	u32 free_blocks, next_cnid;
> > +	u32 folder_count, file_count;
> >  
> >  	if (!wait)
> >  		return 0;
> > @@ -196,7 +245,8 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
> >  		error = error2;
> >  	if (sbi->attr_tree) {
> >  		error2 =
> > -		    filemap_write_and_wait(sbi->attr_tree->inode->i_mapping);
> > +			filemap_write_and_wait(
> > +					sbi->attr_tree->inode->i_mapping);
> 
> What is the purpose of this change?

Sorry, this is a formatting change and I should not have made it. However,
the line was not indented with tabs, and indenting it that way exceeded
the 80-character limit.

> 
> >  		if (!error)
> >  			error = error2;
> >  	}
> > @@ -206,34 +256,41 @@ static int hfsplus_sync_fs(struct super_block *sb, int wait)
> >  
> >  	mutex_lock(&sbi->vh_mutex);
> >  	mutex_lock(&sbi->alloc_mutex);
> > -	vhdr->free_blocks = cpu_to_be32(sbi->free_blocks);
> > -	vhdr->next_cnid = cpu_to_be32(sbi->next_cnid);
> > -	vhdr->folder_count = cpu_to_be32(sbi->folder_count);
> > -	vhdr->file_count = cpu_to_be32(sbi->file_count);
> >  
> > -	if (test_and_clear_bit(HFSPLUS_SB_WRITEBACKUP, &sbi->flags)) {
> > -		memcpy(sbi->s_backup_vhdr, sbi->s_vhdr, sizeof(*sbi->s_vhdr));
> > -		write_backup = 1;
> > +	free_blocks = cpu_to_be32(sbi->free_blocks);
> > +	next_cnid = cpu_to_be32(sbi->next_cnid);
> > +	folder_count = cpu_to_be32(sbi->folder_count);
> > +	file_count = cpu_to_be32(sbi->file_count);
> > +
> > +	/* Check if some attribute of volume header has changed. */
> > +	if (vhdr->free_blocks != free_blocks ||
> > +			vhdr->next_cnid != next_cnid ||
> > +			vhdr->folder_count != folder_count ||
> > +			vhdr->file_count != file_count) {
> 
> I don't think that this check is correct, because the volume header also
> contains some flags and forks.

Can you please elaborate? What are the other forks and flags that get
updated in the volume header?
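
For reference, this is roughly how I read the on-disk header (a trimmed
excerpt of struct hfsplus_vh from fs/hfsplus/hfsplus_raw.h, not verbatim):

struct hfsplus_vh {
	__be16 signature;
	__be16 version;
	__be32 attributes;	/* VOL_UNMNT / VOL_INCNSTNT flags live here */
	__be32 last_mount_vers;
	/* ... dates ... */
	__be32 file_count;
	__be32 folder_count;
	/* ... block size and counts ... */
	__be32 free_blocks;
	/* ... clump sizes ... */
	__be32 next_cnid;
	__be32 write_count;
	/* ... encodings bitmap, finder info ... */
	struct hfsplus_fork_raw alloc_file;	/* forks of the special files */
	struct hfsplus_fork_raw ext_file;
	struct hfsplus_fork_raw cat_file;
	struct hfsplus_fork_raw attr_file;
	struct hfsplus_fork_raw start_file;
} __packed;

Which of these, apart from the four counters I compare, do you expect to
change between syncs?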

> Moreover, there is a specially dedicated method for
> "marking" the volume header as dirty (I mean the hfsplus_mark_mdb_dirty()
> method). So, I don't think that this check is really necessary. And,
> moreover, I don't think such a significant modification of
> hfsplus_sync_fs() makes sense at all.


> 
> > +		vhdr->free_blocks = free_blocks;
> > +		vhdr->next_cnid = next_cnid;
> > +		vhdr->folder_count = folder_count;
> > +		vhdr->file_count = file_count;
> > +		write_header = 1;
> >  	}
> > +	/*
> > +	 * Write volume header only when something has changed. Also there
> > +	 * is no need to write backup volume header if nothing has changed
> > +	 * in the volume header itself.
> > +	 */
> > +	if (!write_header)
> > +		goto out;
> >  
> > -	error2 = hfsplus_submit_bio(sb,
> > -				   sbi->part_start + HFSPLUS_VOLHEAD_SECTOR,
> > -				   sbi->s_vhdr_buf, NULL, WRITE_SYNC);
> > +	error2 = hfsplus_sync_volume_header_locked(sb);
> >  	if (!error)
> >  		error = error2;
> > -	if (!write_backup)
> > -		goto out;
> >  
> > -	error2 = hfsplus_submit_bio(sb,
> > -				  sbi->part_start + sbi->sect_count - 2,
> > -				  sbi->s_backup_vhdr_buf, NULL, WRITE_SYNC);
> > -	if (!error)
> > -		error2 = error;
> >  out:
> >  	mutex_unlock(&sbi->alloc_mutex);
> >  	mutex_unlock(&sbi->vh_mutex);
> >  
> > -	if (!test_bit(HFSPLUS_SB_NOBARRIER, &sbi->flags))
> > +	if (write_header && !error &&
> > +			!test_bit(HFSPLUS_SB_NOBARRIER, &sbi->flags))
> >  		blkdev_issue_flush(sb->s_bdev, GFP_KERNEL, NULL);
> 
> blkdev_issue_flush() is already called in
> hfsplus_sync_volume_header_locked().

No, it is called in hfsplus_sync_volume_header().
> 
> Are you really confident that it makes sense to skip calling
> blkdev_issue_flush() here when an error has been detected? Frankly
> speaking, I doubt that it makes sense.

If writing to the page cache returns an error, what is the point of
flushing the disk's write-back cache?

> 
> >  
> >  	return error;
> > @@ -287,7 +344,7 @@ static void hfsplus_put_super(struct super_block *sb)
> >  		vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_UNMNT);
> >  		vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_INCNSTNT);
> >  
> > -		hfsplus_sync_fs(sb, 1);
> > +		hfsplus_sync_volume_header(sb);
> 
> I doubt that flushing only the volume header is the proper approach. Can
> you guarantee that the special metadata files have been flushed before?

Please see the cover letter. I think hfsplus_sync_fs() is already called
from the VFS during unmount.
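
During unmount the VFS goes through sync_filesystem(), which writes back
the inodes and then invokes ->sync_fs() with wait=1, before ->put_super()
is reached. Simplified from my reading of fs/sync.c (not verbatim):

static int __sync_filesystem(struct super_block *sb, int wait)
{
	if (wait)
		sync_inodes_sb(sb);
	else
		writeback_inodes_sb(sb, WB_REASON_SYNC);

	if (sb->s_op->sync_fs)
		sb->s_op->sync_fs(sb, wait);
	return __sync_blockdev(sb->s_bdev, wait);
}

int sync_filesystem(struct super_block *sb)
{
	int ret;

	if (sb->s_flags & MS_RDONLY)
		return 0;

	ret = __sync_filesystem(sb, 0);	/* start writeback */
	if (ret < 0)
		return ret;
	return __sync_filesystem(sb, 1);	/* wait for completion */
}

So, as far as I can see, the special metadata files should already have been
flushed by that call, and hfsplus_put_super() only needs to push the updated
volume header.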

> 
> >  	}
> >  
> >  	hfs_btree_close(sbi->attr_tree);
> > @@ -539,7 +596,7 @@ static int hfsplus_fill_super(struct super_block *sb, void *data, int silent)
> >  		be32_add_cpu(&vhdr->write_count, 1);
> >  		vhdr->attributes &= cpu_to_be32(~HFSPLUS_VOL_UNMNT);
> >  		vhdr->attributes |= cpu_to_be32(HFSPLUS_VOL_INCNSTNT);
> > -		hfsplus_sync_fs(sb, 1);
> > +		hfsplus_sync_volume_header(sb);
> 
> Yes, maybe it makes sense to flush only the volume header here.
> 
> Thanks,
> Vyacheslav Dubeyko.
> 
> 

Thanks a lot,

Best regards,
    Sougata.
