On 11/27/2016 10:40 PM, drago01 wrote:
On Monday, November 28, 2016, Py <py@xxxxxxxxx> wrote:
>>> Have you ever made Nautilus copy/move a huge directory tree and then
>>> started a similar task for other directories while Nautilus was
>>> still working on the first task?
>>
>> Moving a directory containing 10,000 1 MiB files to another directory
>> completes immediately. Copying takes a while, as expected, and
>> multiple concurrent copies show the behavior you describe.
>>
>An SSD might not have this problem, but a spinning disk definitely
>will. You should never run multiple copies on the same disk at once if
>you want them to finish in a reasonable time. With one copy you can do
>long contiguous reads and writes, but with multiple copies running at
>the same time, the read/write head will be bouncing all over the disk.
>________________________________________
So ideally it is the file manager's job to queue copy operations.
That lets it do the right thing even when the user gets it wrong, or
wants to launch a big copy before coffee.
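
A minimal sketch of the idea in Python, assuming a hypothetical file
manager that funnels every copy request through a single worker thread
(the names and paths are illustrative, not actual Nautilus API):

import queue
import shutil
import threading

# One global queue of (source, destination) copy jobs.
copy_jobs: queue.Queue = queue.Queue()

def worker() -> None:
    # Only one bulk copy touches the disk at any moment.
    while True:
        src, dst = copy_jobs.get()   # blocks until a job arrives
        try:
            shutil.copytree(src, dst)
        except OSError as exc:
            print(f"copy {src} -> {dst} failed: {exc}")
        finally:
            copy_jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The user can launch several big copies "before coffee"; they run
# back to back instead of fighting over the disk head.
copy_jobs.put(("/data/photos", "/backup/photos"))
copy_jobs.put(("/data/music", "/backup/music"))
copy_jobs.join()   # wait until every queued job has finished
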
____________________________________________
No. The kernel (the I/O scheduler) is supposed to order requests to
avoid this scenario. Also, sequential reads/writes only happen for
large files if there is no fragmentation.
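
For what it's worth, you can check which I/O scheduler the kernel is
using for a given disk through sysfs; the active one is shown in
brackets (sda here is just an example device):

import pathlib

# Prints something like "noop deadline [cfq]".
print(pathlib.Path("/sys/block/sda/queue/scheduler").read_text().strip())

____________________________________________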
How could the kernel ever schedule this nicely? Is it going to hold up
one process until the other one is finished? Also, ext4 has minimal
fragmentation unless the disk is quite full, and it is designed so that
files in the same folder are relatively close on the disk. But if you
are running two or more different copy operations, you are most likely
grabbing data from all over the disk, which is going to kill the
performance of every one of those copies.
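
A rough way to see this effect, assuming two large files on the same
spinning disk (the paths are placeholders): copy them one after the
other, then concurrently, and compare the wall-clock times.

import shutil
import threading
import time

def copy(src: str, dst: str) -> None:
    shutil.copy(src, dst)

def timed(label: str, fn) -> None:
    start = time.monotonic()
    fn()
    print(f"{label}: {time.monotonic() - start:.1f}s")

def sequential() -> None:
    copy("/tank/a.img", "/tank/a.copy")
    copy("/tank/b.img", "/tank/b.copy")

def concurrent() -> None:
    # Two copies at once: the head seeks back and forth between the
    # two streams instead of reading each file contiguously.
    threads = [
        threading.Thread(target=copy, args=("/tank/a.img", "/tank/a2.copy")),
        threading.Thread(target=copy, args=("/tank/b.img", "/tank/b2.copy")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Drop the page cache between runs (as root:
# echo 3 > /proc/sys/vm/drop_caches) so the second run really re-reads
# from disk instead of from RAM.
timed("sequential", sequential)
timed("concurrent", concurrent)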