On 12 September 2010 19:07, Milan Broz <mbroz@xxxxxxxxxx> wrote:
> On 09/12/2010 10:36 AM, dave b wrote:
>>
>> Should I forward my 'bug' to the linux kernel mailing list ?
>
> Better report it to kernel bugzilla, it is better for tracking.
> (Also see https://bugzilla.kernel.org/show_bug.cgi?id=17892 )
>
> Anyway, there are some patches waiting for inclusion in DM tree
> for weeks and all fixes must follow these changes. Also replacing
> io barriers in 2.6.37 can interfere here (fsync uses barriers).
>
> Anyway, if you have some tests which you found useful for dm-crypt
> testing, attach them to bugzilla too. I would like them to run
> for all kernels in the future to avoid performance regressions.

Right, I see the issue as the following: requesting a lot of writes and
then a number of read operations from different processes leads to a
*poor* outcome. I say this because if I do
"dd if=/dev/zero of=/tmp/DELETEME", after a short time the entire
system stalls and it really is *very* difficult to end the dd, which is
writing to the disk :) (A rough sketch of the test I mean is at the end
of this mail.)

I found
http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html ,
http://lwn.net/Articles/152277/ , and
http://archives.postgresql.org/pgsql-performance/2007-08/msg00234.php
interesting. (The dirty-page sysctls those posts discuss are also
sketched at the end.)

I haven't found a tweakable for giving preference to reads over writes
for CFQ; deadline does seem to have such tweakables (I have put the
deadline knobs I mean at the end of this mail too).

There is only that one modified test I posted before, which also tests
*read* times as well as fsync times.

I will give you the 'story' as I see it (for two user types):

Background:
  Given there is a system with a 'fast' (hard drive based) permanent
  storage device

Feature: As an administrator of a file server
  I want to be able to write a large file to permanent storage
  So that I can profit by providing a 'fast' (hard drive based) file
  sharing service!

  Scenario: A user requests to store a large file on my service and
  then 5 other users request existing files (not in cache)
    Given user '0' requests to store a *large* file on my service
    When the system starts to write the data
    And users '1', '2', '3' and '4' request files 'A', 'B', 'C', 'D'
    Then I should see the system able to honour the requests
    And I should not see the system stall due to the large file being
    written

Feature: As a Desktop User
  I want to be able to use my desktop while I save a large file to
  permanent storage
  So that I can profit as I can continue with my other tasks instead
  of sitting on my hands!

  Scenario: I want to copy a large file off an e-sata / usb drive to
  my hard drive
    Given I have a large file on my removable storage device
    When I copy the file to my hard drive
    And I try to open a new firefox window
    And I try to go to google.com
    And I try to open nautilus
    Then I should see the system responding to my requests within a
    reasonable time frame
    And the system should not be completely stalled for more than a
    few seconds

The other thing to note is that kcryptd was at 66% cpu time while
dd'ing files <=10gb.
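
For completeness, here is roughly the kind of test I mean: a big
sequential write racing against cold-cache reads from other processes.
This is only a sketch; the paths, sizes and file names below are
placeholders, not my actual test script, and dropping caches needs
root:

  #!/bin/bash
  # Rough repro sketch. Paths, sizes and file names are placeholders.

  # Start a large sequential write in the background.
  dd if=/dev/zero of=/tmp/DELETEME bs=1M count=10240 &
  writer=$!

  # Drop the page cache so the reads below are really cold (needs root).
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # Time reads from "other users" while the write is in flight.
  for f in /srv/files/A /srv/files/B /srv/files/C /srv/files/D; do
      time cat "$f" > /dev/null
  done

  kill $writer
  rm -f /tmp/DELETEME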
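
The deadline tweakables I was referring to live under the queue's
iosched directory. A sketch, assuming the device is sda; the defaults
noted are from my reading of Documentation/block/deadline-iosched.txt,
so verify them on your kernel:

  # Switch the queue to the deadline scheduler (assumes /dev/sda).
  echo deadline > /sys/block/sda/queue/scheduler

  # read_expire / write_expire: deadlines in ms before a request must
  # be serviced; reads already expire sooner than writes by default.
  cat /sys/block/sda/queue/iosched/read_expire    # default 500
  cat /sys/block/sda/queue/iosched/write_expire   # default 5000

  # writes_starved: how many read batches may run before a write batch
  # is considered; raising it biases the scheduler further to reads.
  echo 4 > /sys/block/sda/queue/iosched/writes_starved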
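
And on the 'write cache mystery' angle from the links above, the usual
knobs are the VM dirty-page thresholds. Another sketch; the values are
only examples of biasing toward earlier, smaller writeback, not
recommendations:

  # How much of RAM (percent) may be dirty before background writeback
  # starts, and before a writing process is throttled outright.
  sysctl vm.dirty_background_ratio vm.dirty_ratio

  # Example: start writeback earlier and throttle heavy writers sooner,
  # so a huge dd cannot pile up gigabytes of dirty pages.
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10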