Re: [ext2/ext3] Re-allocation of blocks for an inode

On Fri, Mar 13, 2009 at 12:55 AM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
> On Thu, Mar 12, 2009 at 2:37 PM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
>> On Thu, Mar 12, 2009 at 1:59 PM, Sandeep K Sinha
>> <sandeepksinha@xxxxxxxxx> wrote:
>>> On Thu, Mar 12, 2009 at 9:43 PM, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
>>>> On Thu, Mar 12, 2009 at 11:16 AM, Sandeep K Sinha
>>>> <sandeepksinha@xxxxxxxxx> wrote:
>>>>> Hi all,
>>>>>
>>>>> I am listing two code snippets above for re-allocating new blocks
>>>>> for a given inode.
>>>>>
>>>>> These code snippets are specific to ext2.
>>>>> The problem is that they perform slower than cp and dd.
>>>>>
>>>>> Can anyone help in determining the possible reason(s)?
>>>>
>>>> Excessive use of sync_dirty_buffer();
>>>>
>>>> You're calling it for every single block you copy. Pretty sure it is a
>>>> very aggressive call that forces the data out to the disk immediately,
>>>> thus the benefits of caching and elevators are lost.  And those are
>>>> big benefits.
>>>>
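For reference, a minimal, untested sketch of the deferred-writeback
pattern described above: copy each block with memcpy, mark the
destination buffer dirty instead of calling sync_dirty_buffer() on it,
and flush once at the end. Here sb, dest_ind, src_blkno[], dst_blkno[]
and nr_blocks are assumed stand-ins for whatever the surrounding copy
loop actually uses.

        for (i = 0; i < nr_blocks; i++) {
                /* Read the source block; the destination needs only a
                 * buffer, not a read, since it is fully overwritten */
                src_bhptr = sb_bread(sb, src_blkno[i]);
                dst_bhptr = sb_getblk(sb, dst_blkno[i]);

                lock_buffer(dst_bhptr);
                memcpy(dst_bhptr->b_data, src_bhptr->b_data,
                       sb->s_blocksize);
                set_buffer_uptodate(dst_bhptr);
                unlock_buffer(dst_bhptr);

                /* Mark dirty and move on; the io scheduler can now
                 * batch and reorder the writes instead of each block
                 * being forced out to disk synchronously */
                mark_buffer_dirty(dst_bhptr);

                brelse(src_bhptr);
                brelse(dst_bhptr);
        }

        /* A single flush at the end instead of one per block */
        ext2_sync_inode(dest_ind);
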
>>>
>>> The reason is probably that ext2 has some issues with sync:
>>> http://kerneltrap.org/index.php?q=mailarchive/linux-fsdevel/2007/2/14/316836/thread
>>>
>>> Hence, if we don't do this, we see a difference between the original
>>> and the reallocated file.
>>
> Sandeep,
>
> I dropped ext4.
>

> Separately, if you want to try to get your flip-page approach to work
> without sync_dirty_buffer(), I think you need to re-code it as:
>
>             /* Swap the page/data pointers so the destination buffer
>              * maps the page that already holds the source data */
>
>              lock_buffer(src_bhptr);
>              lock_buffer(dst_bhptr);
>
>              oldpage = dst_bhptr->b_page;
>              olddata = dst_bhptr->b_data;
>              dst_bhptr->b_data = src_bhptr->b_data;
>              dst_bhptr->b_page = src_bhptr->b_page;
>              src_bhptr->b_data = olddata;
>              src_bhptr->b_page = oldpage;
>              flush_dcache_page(src_bhptr->b_page);
>              flush_dcache_page(dst_bhptr->b_page);
>
>              unlock_buffer(src_bhptr);
>              unlock_buffer(dst_bhptr);
>
>              /* The buffer is dirty, so it will automatically get
>               * written to disk by the io scheduler */
>              mark_buffer_dirty(dst_bhptr);
>
>              brelse(src_bhptr);   /* src is not dirty, will be
>                                    * released immediately */
>              brelse(dst_bhptr);   /* dst is dirty, will be released
>                                    * as soon as the data is written to
>                                    * disk and the buffer becomes clean */
>
>       } /* End of block copy loop */
>
>       ext2_sync_inode(dest_ind);
>
> I threw in some comments just for my own sake.
>
> The above is totally untested, but it looks much better "to me" than
> the logic you currently have.  Personally I would get your memcpy
> approach perfected first.  It should be able to run at the same speed
> as cp from userspace.
>
> Then come back and experiment with the above small speed optimization.
> It might run faster, but more importantly it should show a smaller CPU
> load, since you aren't moving the data around one extra time.
>
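To check Greg's prediction about speed and CPU load, the copy can be
timed from userspace. Below is a rough, hypothetical harness; the
FS_IOC_REALLOCATE ioctl number is made up here and should be replaced
by whatever interface the module actually exposes to trigger the
reallocation.

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <sys/resource.h>
        #include <time.h>
        #include <unistd.h>

        /* HYPOTHETICAL: stand-in for the module's real trigger interface */
        #define FS_IOC_REALLOCATE _IO('f', 0x42)

        int main(int argc, char **argv)
        {
                struct timespec t0, t1;
                struct rusage ru;
                int fd;

                if (argc < 2) {
                        fprintf(stderr, "usage: %s <file>\n", argv[0]);
                        return 1;
                }
                fd = open(argv[1], O_RDWR);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }

                clock_gettime(CLOCK_MONOTONIC, &t0);
                if (ioctl(fd, FS_IOC_REALLOCATE, 0) < 0)
                        perror("ioctl");
                clock_gettime(CLOCK_MONOTONIC, &t1);

                /* System time of the caller approximates the in-kernel
                 * work done in process context */
                getrusage(RUSAGE_SELF, &ru);
                printf("wall: %.3f s, cpu: %.3f s\n",
                       (t1.tv_sec - t0.tv_sec) +
                       (t1.tv_nsec - t0.tv_nsec) / 1e9,
                       ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6 +
                       ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);

                close(fd);
                return 0;
        }

Note that writeback deferred to pdflush is not charged to the caller's
rusage, so this only approximates the comparison.
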

Will look at this once we have the memcpy algorithm working cleanly.

Thanks.

> Greg
> --
> Greg Freemyer
> Head of EDD Tape Extraction and Processing team
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> First 99 Days Litigation White Paper -
> http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
>



-- 
Regards,
Sandeep.

“To learn is to change. Education is a process that changes the learner.”

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ


