in-place file splitter

Igor Gueths staggered into view and mumbled:
>
>Hi Ralph. Very interesting technique. Two things. First, don't you have to
>specify the chunk size which will be read? Also, if you have a very large
>file (1 MB), and the chunk size is approximately 5 KB (roughly 5000 bytes),
>this could take a while.

The data chunk size will have to be defined somehow in any splitter
program.  This technique allows the chunk size to be specified on the
command line or elsewhere, hard-coded in the program, or a
combination of the two that sets a default the user can override as
required.  As for execution speed, so much disk I/O is required that
the program essentially runs at disk drive speed rather than CPU or
RAM speed.  The more chunks a file is split into, the longer the
process will take; a large file split into many small chunks could
require an appreciable amount of time.  The main advantage of this
technique is that it minimizes disk and RAM space usage while it
runs.  Other techniques will certainly run faster, but they may have
their own disadvantages.  The best anyone can do is make a 'best
guess' at what will work for the given situation.
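
For illustration, here is a minimal sketch of one way such an
in-place splitter might look.  The original program is not shown in
this thread, so the function name, the piece-file naming scheme, and
the hard-coded 64 KB default below are my own assumptions; the point
is only that working backwards from the end of the file and
truncating after each piece keeps the extra disk usage to a single
chunk and the RAM usage to a single read buffer.

#!/usr/bin/env python
# Hypothetical sketch of an in-place file splitter (not the original
# program from this thread).  It copies the last chunk of the file
# into a numbered piece file, truncates the original, and repeats.

import os
import sys

def split_in_place(path, chunk_size):
    size = os.path.getsize(path)
    # Number of pieces, counting a possibly short final piece.
    pieces = (size + chunk_size - 1) // chunk_size
    with open(path, "r+b") as f:
        # Work backwards so truncation never discards unread data.
        for index in range(pieces - 1, -1, -1):
            start = index * chunk_size
            f.seek(start)
            data = f.read(chunk_size)           # one chunk held in RAM
            with open("%s.%03d" % (path, index), "wb") as piece:
                piece.write(data)
            f.truncate(start)                   # release the space at once

if __name__ == "__main__":
    # Usage: split.py FILE [CHUNKSIZE]
    # Chunk size from the command line, with a hard-coded default.
    chunk = int(sys.argv[2]) if len(sys.argv) > 2 else 65536
    split_in_place(sys.argv[1], chunk)

Reassembly is then just a matter of concatenating the numbered pieces
back together in order.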

Have a _great_ day!


-- 
Ralph.  N6BNO.  Wisdom comes from central processing, not from I/O.
rreid at sunset.net  http://personalweb.sunset.net/~rreid
Opinions herein are either mine or they are flame bait.
1 = x^0



