-------- Original Message --------
On 08/02/2013 03:58 PM, Steinar H. Gunderson wrote:
> On Fri, Aug 02, 2013 at 09:23:44AM -0400, Mike Snitzer wrote:
>> Yes, it is surprising. Curious to know if the promotions aren't
>> happening due to the IO scheduler somehow merging all your random small
>> IO. We don't yet have a discrete counter to show the number of
>> migrations that were skipped due to sequential_threshold, but that is
>> something we can add.
>>
>> But you can effectively disable the sequential_threshold by setting it
>> really high, e.g.:
>>
>> dmsetup message cache 0 sequential_threshold 16384
> I tried this, and it still doesn't appear to promote anything at all:
>
> cache: 0 23440891904 cache 913/8192 0 7021239 0 2049048 0 0 0 0 0 2 migration_threshold 2048 4 random_threshold 8 sequential_threshold 16384
>
> It's only been running for a few minutes, though.
>
> FWIW, earlier I ran it on only one single partition, and then it worked.
> So it's not like my kernel is completely broken, at least.

Doesn't seem so. Since it succeeded on a partition, would you mind trying a small mapping of a few GB, with a smaller block size, on top of your big RAID? Then take the configuration you used for your partition and adapt it to the large RAID?

If that fails as well, it looks like some strange device-specific issue is causing the failure. If it succeeds, it seems to be related to the large size.

Heinz

>
>> Please write a file that is smaller than your specified
>> sequential_threshold, and then read it numerous times via direct IO,
>> e.g.:
>>
>> dd if=<your file> of=/dev/null iflag=direct bs=16K
> I did, with a 16 kB file (that should certainly be small enough, right?),
> executing the dd command 10000 times. Still nothing cached.
>
> /* Steinar */
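For reference, the small-mapping test suggested above could look roughly like the sketch below. The device names are placeholders (assumptions, not from this thread), it must run as root, and the table follows the dm-cache target syntax from the kernel's Documentation/device-mapper/cache.txt: "start length cache <metadata dev> <cache dev> <origin dev> <block size> <#feature args> <feature args> <policy> <#policy args>". 512 sectors = a 256 KiB cache block size, smaller than typical defaults.

```shell
# Placeholder devices -- substitute your own:
ORIGIN=/dev/md0      # assumption: the big RAID array
META=/dev/sdb1       # assumption: SSD partition for cache metadata
CACHE=/dev/sdb2      # assumption: SSD partition for cache blocks

# Map only the first ~4 GiB of the origin (size given in 512-byte sectors).
SIZE=$((4 * 1024 * 1024 * 2))

dmsetup create cachetest --table \
  "0 $SIZE cache $META $CACHE $ORIGIN 512 1 writethrough default 0"

# Repeat the direct-IO read test against the small mapping, then check
# whether the promotion counter in the status line moves.
dd if=/dev/mapper/cachetest of=/dev/null iflag=direct bs=16K count=1
dmsetup status cachetest
```

If promotions show up here but not on the full-size mapping, that would point at a size-related problem rather than a broken setup.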
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel