Throttling performance due to backing device latency

I am currently using bcache in a configuration where one cache SSD is
caching IO for three backing HDDs. The cache mode is writeback, and I
have disabled the sequential cutoff and the congestion thresholds so
that all write IO goes to the cache device:
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us
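
For completeness, this is how I verify the settings after applying them
(assuming the same bcache0 device; <cache set> is the cache set UUID
directory under /sys/fs/bcache):
cat /sys/block/bcache0/bcache/cache_mode
cat /sys/block/bcache0/bcache/sequential_cutoff
cat /sys/fs/bcache/<cache set>/congested_read_threshold_us
cat /sys/fs/bcache/<cache set>/congested_write_threshold_us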
 
I have also increased the minimum writeback rate so that dirty data
never fills 100% of the available cache space:
echo 16384 > /sys/block/bcache0/bcache/writeback_rate_minimum
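
To confirm this is working I watch the amount of dirty data and the
current writeback rate (same bcache0 device assumed; the
writeback_rate_debug file may not be present on every kernel version):
cat /sys/block/bcache0/bcache/dirty_data
cat /sys/block/bcache0/bcache/writeback_rate
cat /sys/block/bcache0/bcache/writeback_rate_debug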
 
 
I have noticed that during periods of higher throughput, the
performance of fsynced writes (and other high-IOPS workloads) suffers
disproportionately. The throughput is something the SSD should be able
to handle easily, and it still has enough free cache space to buffer
the writes, so I am confused as to why performance degrades to such a
degree. Are there any internal mechanisms that measure latency to the
backing devices and then throttle IO? If so, how would I go about
tuning them?
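
For reference, these are the writeback rate tunables I can see on my
system but have left at their defaults, in case they are related (the
exact set of files may differ between kernel versions):
cat /sys/block/bcache0/bcache/writeback_percent
cat /sys/block/bcache0/bcache/writeback_rate_update_seconds
cat /sys/block/bcache0/bcache/writeback_rate_p_term_inverse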
 
Regards,
 
Benard



