On Tue, Apr 30, 2013 at 9:29 AM, Shankar Brahadeeswaran
<shankoo77@xxxxxxxxx> wrote:
> Question:
> On occasions when we return because of the lock unavailability, what
> could be the worst-case number of ashmem pages that are left
> unfreed (lru_count)? Will it be very large and have side effects?

On that VM shrink path, all of them, but they'll go on the next pass.

Even if they didn't, however, that is fine: the ashmem cache
functionality is advisory. User space doesn't even know when VM
pressure will occur, so it can't possibly care.

> To get the answer to this question, I added some instrumentation code
> to the ashmem_shrink function on top of the patch. I ran Android monkey
> tests with a lot of memory-hungry applications so as to hit the
> low-memory situation more frequently. After running this for almost a
> day, I did not see a situation where the shrinker did not have the
> mutex. In fact, what I found is that (in this use case at least) most
> of the time the lru_count is zero, which means the application has not
> unpinned the pages, so the shrinker has no work to do (basically,
> shrink_slab does not call ashmem_shrink a second time). So in the
> worst case, if we do hit a scenario where the shrinker is called, I'm
> sure the lru_count would be very low, and even if the shrinker returns
> without freeing those pages (because the lock is unavailable), it's
> not going to be costly.

That is expected. This race window is very, very small.

> After this experiment, I too think that this patch (returning from
> ashmem_shrink if the lock is not available) is good enough and does
> not seem to have any major side effects.
>
> PS: Any plans to submit this patch formally?

Sure. Greg? :)

Robert
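
For anyone who hasn't seen the patch being discussed, the idea is
roughly the sketch below, written against the staging ashmem.c of this
era; ashmem_mutex, ashmem_lru_list, lru_count, lru_del() and
range_size() are the driver's own helpers, and the exact posted patch
may differ in detail.

/*
 * Sketch only, not the exact patch: shows the trylock early-return
 * being discussed, layered over the existing ashmem_shrink() shape.
 */
static int ashmem_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	struct ashmem_range *range, *next;

	/* We might recurse into filesystem code, so bail out if necessary. */
	if (sc->nr_to_scan && !(sc->gfp_mask & __GFP_FS))
		return -1;

	/* A zero nr_to_scan is just a query for the reclaimable count. */
	if (!sc->nr_to_scan)
		return lru_count;

	/*
	 * The change under discussion: don't sleep on ashmem_mutex from
	 * the shrinker.  If someone else holds it, skip this pass; the
	 * unpinned ranges stay on the LRU and are purged on a later pass.
	 */
	if (!mutex_trylock(&ashmem_mutex))
		return -1;

	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
		loff_t start = range->pgstart * PAGE_SIZE;
		loff_t end = (range->pgend + 1) * PAGE_SIZE;

		/* Punch a hole over the unpinned range and drop it from the LRU. */
		do_fallocate(range->asma->file,
			     FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			     start, end - start);
		range->purged = ASHMEM_WAS_PURGED;
		lru_del(range);

		sc->nr_to_scan -= range_size(range);
		if (sc->nr_to_scan <= 0)
			break;
	}
	mutex_unlock(&ashmem_mutex);

	return lru_count;
}

The only behavioral change is the mutex_trylock(): rather than sleeping
on ashmem_mutex during reclaim, the shrinker gives up the pass and
leaves the LRU intact, which is harmless precisely because the cache is
advisory and the ranges remain available for the next pass.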