Re: Copying Data Blocks

On Wed, Jan 7, 2009 at 12:44 PM, Manish Katiyar <mkatiyar@xxxxxxxxx> wrote:
> On Wed, Jan 7, 2009 at 12:17 PM, Sandeep K Sinha
> <sandeepksinha@xxxxxxxxx> wrote:
>> OK, let me rephrase what Rohit is actually trying to ask.
>>
>> There is an inode X which has, say, N data blocks.
>> Now, through his own kernel module and some changes to the file system,
>> he wants to create a new inode Y in the FS and physically copy all the
>> data from the old inode to the new inode.
>
> Errr... I must be missing something. Why do you need to copy the
> data blocks for this? If you just copy the old inode to the new
> inode, you have already copied the direct and indirect block
> pointers, right? That will not take much time, and if you then free
> the old inode, you have effectively transferred ownership of the old
> blocks to the new inode.
>
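(For reference, a minimal in-kernel sketch of this "just move the
pointers" idea, assuming both inodes live on the same ext2 filesystem.
The ext2_inode_info fields are real, but the helper name is made up,
and locking, quota, and indirect-tree details are all glossed over:)

	/* Sketch: transfer block pointers from inode X to inode Y.
	 * Caller must hold whatever locks serialize access to both
	 * inodes; nothing here is safe against concurrent I/O.
	 */
	static void steal_blocks(struct inode *x, struct inode *y)
	{
		struct ext2_inode_info *xi = EXT2_I(x);
		struct ext2_inode_info *yi = EXT2_I(y);
		int i;

		/* 12 direct + 3 indirect block pointers. */
		for (i = 0; i < EXT2_N_BLOCKS; i++) {
			yi->i_data[i] = xi->i_data[i];
			xi->i_data[i] = 0;
		}

		/* Carry the sizes over so Y describes the same data. */
		y->i_size = x->i_size;
		y->i_blocks = x->i_blocks;
		x->i_size = 0;
		x->i_blocks = 0;

		mark_inode_dirty(x);
		mark_inode_dirty(y);
	}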
The problem is not replacing the inode; I want to physically move the
data. That means if inode X and its data blocks are in block group 1,
and the new inode is in block group 100, then I will allocate data
blocks in block group 100 and copy the data from inode X to inode Y.
That way I will be able to physically relocate a file, and change the
directory entry to point to inode Y.
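(A rough sketch of the copy step for a single block, going through the
buffer cache. The allocator call is hypothetical, since its name and
signature vary across kernel versions, but sb_bread()/brelse() and the
dirty-buffer helpers are the standard ones:)

	struct super_block *sb = x->i_sb;
	struct buffer_head *src, *dst;
	unsigned long new_blk;
	int err = 0;

	/* Ask the allocator for a block near inode Y, i.e. in Y's
	 * block group (hypothetical helper). */
	new_blk = alloc_block_near(y, &err);

	src = sb_bread(sb, old_blk);	/* old data block of X */
	dst = sb_bread(sb, new_blk);	/* freshly allocated block */
	if (src && dst) {
		memcpy(dst->b_data, src->b_data, sb->s_blocksize);
		mark_buffer_dirty(dst);
		sync_dirty_buffer(dst);
	}
	brelse(src);	/* brelse() tolerates NULL */
	brelse(dst);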

> The problem I can see with this approach is that if the new inode is
> not in the same block group as the old inode, you have *kind of
> broken* ext2's intelligence of allocating a file's blocks in the
> same block group as its inode.
>
> CMIIW. BTW, this thread is interesting :-)

Yes, it's interesting. :-)

I haven't actually broken ext2's intelligence completely; I have only
put restrictions on the allocation of the inode and data blocks.
It works fine with the existing optimizations.

The major issue is relocating files between different block group ranges.
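(For reference, mapping a block number to its block group is simple
superblock arithmetic, assuming "es" points at the on-disk struct
ext2_super_block:)

	/* Which block group does block "blk" live in? */
	unsigned long group = (blk - le32_to_cpu(es->s_first_data_block))
			      / le32_to_cpu(es->s_blocks_per_group);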


> Thanks -
> Manish.
>
>>
>> Then release the old inode and its data blocks, and update the
>> dentry with the new inode number.
>> PS: The file system remains completely frozen during this time.
>>
>>
>> On Wed, Jan 7, 2009 at 4:02 AM, Om <om.turyx@xxxxxxxxx> wrote:
>>> Erik Mouw wrote:
>>>>
>>>> On Tue, 6 Jan 2009 23:16:14 +0530 "Rohit Sharma" <imreckless@xxxxxxxxx>
>>>> wrote:
>>>>>
>>>>> On Tue, Jan 6, 2009 at 11:09 PM, Manish Katiyar <mkatiyar@xxxxxxxxx>
>>>>> wrote:
>>>>>>
>>>>>> Apart from performance, is there anything else you are worried
>>>>>> about ?
>>>>>
>>>>> Performance is the only bottleneck; this can be done in
>>>>> userland, but a kernel-space solution will be more efficient.
>>>>
>>>> Hardly more efficient. Your main bottleneck will be IO from/to the
>>>> device. If you are worried about copying between kernel and userland,
>>>> you could use the tee(2) and splice(2) system calls. They are relatively
>>>> new, so your system might not yet have manual pages for them. In that
>>>> case, see http://linux.die.net/man/2/tee and
>>>> http://linux.die.net/man/2/splice .
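(A minimal user-space sketch of such a splice(2)-based copy, moving
data through a pipe so it never has to be buffered in userland; error
handling is trimmed to the bare minimum:)

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>

	/* Copy in_fd to out_fd via a pipe using splice(2). */
	static int splice_copy(int in_fd, int out_fd)
	{
		int pfd[2];
		ssize_t n, w;

		if (pipe(pfd) < 0)
			return -1;

		/* file -> pipe, then pipe -> file, 64 KiB at a time */
		while ((n = splice(in_fd, NULL, pfd[1], NULL,
				   65536, SPLICE_F_MOVE)) > 0) {
			while (n > 0) {
				w = splice(pfd[0], NULL, out_fd, NULL,
					   n, SPLICE_F_MOVE);
				if (w <= 0) {
					close(pfd[0]);
					close(pfd[1]);
					return -1;
				}
				n -= w;
			}
		}
		close(pfd[0]);
		close(pfd[1]);
		return n == 0 ? 0 : -1;
	}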
>>>
>>> Hm.. that is pretty enlightening...
>>> Thanks,
>>> Om.
>>>
>>>
>>
>> --
>> Regards,
>> Sandeep.
>>
>> "To learn is to change. Education is a process that changes the learner."
>>
>

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ

