Hi Kent,

I've noticed something when I use a large bucket size (bucket_size >= 2MB).
Once the cache becomes full (cache_available_percent drops to 1%), new data
keeps bypassing the cache, and the cache stays in that state. By default the
moving garbage collector is disabled (copy_enabled = 0), so we don't get free
buckets back by cleaning up invalidated data. And since writes bypass the
cache at such high utilization, there are no insertions and no new
allocations, so nothing ever replaces (invalidates) the existing buckets
either.

Is this the logic that's supposed to happen, or am I missing something? (In
case my description isn't clear, I've put a tiny pseudocode sketch of the
loop I have in mind at the bottom of this mail, below the quoted thread.)

Thanks,
Sheng

On Mon, Jul 29, 2013 at 3:13 PM, Kent Overstreet <kmo@xxxxxxxxxxxxx> wrote:
> On Sun, Jul 28, 2013 at 06:57:42PM -0500, sheng qiu wrote:
>> Generally the flash size is 40GB and RAM is 8GB. The workload first
>> writes about 65GB of data and then does reads/writes (10 threads) on
>> that data. It runs for less than an hour during the read/write step,
>> and the problem happened nearly every time. I'm now testing with fewer
>> threads to see whether that makes a difference.
>
> Some kind of garbage collection bug... :/
>
> Could you try the bcache-testing branch? It's got significantly improved
> garbage collection code - if it's not fixed there, at least that code
> should be easier to debug...

--
Sheng Qiu
Texas A & M University
Room 332B Wisenbaker
email: herbert1984106@xxxxxxxxx
College Station, TX 77843-3259
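P.S. Here is the sketch I mentioned above. This is only my mental model of
the stuck state, not the actual bcache code -- all of the names
(cache_model, bypass_write, can_free_buckets) are made up for illustration.

/* Hypothetical model of the stuck state -- not actual bcache code. */
#include <stdbool.h>
#include <stdio.h>

struct cache_model {
	unsigned available_percent;	/* like cache_available_percent */
	bool copy_gc_enabled;		/* like the copy_enabled knob above */
};

/* New writes bypass the cache once almost nothing is allocatable. */
static bool bypass_write(const struct cache_model *c)
{
	return c->available_percent <= 1;
}

/* Buckets only get invalidated/reused when an insert needs to allocate
 * or when the moving garbage collector compacts mostly-invalid buckets. */
static bool can_free_buckets(const struct cache_model *c)
{
	bool alloc_pressure = !bypass_write(c);	/* inserts would allocate */
	return alloc_pressure || c->copy_gc_enabled;
}

int main(void)
{
	struct cache_model c = { .available_percent = 1, .copy_gc_enabled = false };

	/* With bypass on and copy GC off, nothing ever frees a bucket, so
	 * available_percent never rises and bypass never turns off. */
	printf("bypass=%d, can_free_buckets=%d\n",
	       bypass_write(&c), can_free_buckets(&c));
	return 0;
}

If that model is right, then once cache_available_percent hits the cutoff the
state is self-sustaining until copy GC is enabled or buckets get freed some
other way.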