2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) <thunder.leizhen@xxxxxxxxxx>:
>
>
> On 2016/3/8 9:54, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2016/3/8 2:42, Laura Abbott wrote:
>>> On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:
>>>>
>>>>
>>>> On 2016/3/7 12:34, Joonsoo Kim wrote:
>>>>> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>>>>>> On 2016/3/4 14:38, Joonsoo Kim wrote:
>>>>>>> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
>>>>>>>> On 2016/3/4 12:32, Joonsoo Kim wrote:
>>>>>>>>> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>>>>>>>>>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>>>>>>>>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>>>>>>>>>>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott <labbott@xxxxxxxxxx>:
>>>>>>>>>>>>> (cc -mm and Joonsoo Kim)
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I came across a suspicious error in a CMA stress test:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Before the test, I got:
>>>>>>>>>>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>>>>>>>>>>>>> CmaTotal:         204800 kB
>>>>>>>>>>>>>> CmaFree:          195044 kB
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> After running the test:
>>>>>>>>>>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>>>>>>>>>>>>> CmaTotal:         204800 kB
>>>>>>>>>>>>>> CmaFree:         6602584 kB
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So the freed CMA memory is more than the total..
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also, MemFree is more than MemTotal:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -bash-4.3# cat /proc/meminfo
>>>>>>>>>>>>>> MemTotal:       16342016 kB
>>>>>>>>>>>>>> MemFree:        22367268 kB
>>>>>>>>>>>>>> MemAvailable:   22370528 kB
>>>>>>>>>>> [...]
>>>>>>>>>>>>> I played with this a bit and can see the same problem. The sanity
>>>>>>>>>>>>> check of CmaFree < CmaTotal generally triggers in
>>>>>>>>>>>>> __move_zone_freepage_state in unset_migratetype_isolate.
>>>>>>>>>>>>> This also seems to be present as far back as v4.0, which was the
>>>>>>>>>>>>> first version to have the updated accounting from Joonsoo.
>>>>>>>>>>>>> Were there known limitations with the new freepage accounting,
>>>>>>>>>>>>> Joonsoo?
>>>>>>>>>>>> I don't know. I also played with this and it looks like there is an
>>>>>>>>>>>> accounting problem; however, in my case the number of free pages is
>>>>>>>>>>>> slightly less than the total. I will take a look.
>>>>>>>>>>>>
>>>>>>>>>>>> Hanjun, could you tell me your malloc_size? I tested with 1 and it
>>>>>>>>>>>> doesn't look like your case.
>>>>>>>>>>> I tested with a malloc_size of 2M, and it grows much bigger than with
>>>>>>>>>>> 1M. I also did some other tests:
>>>>>>>>>> Thanks! Now I can reproduce the erroneous situation you mentioned.
>>>>>>>>>>
>>>>>>>>>>> - run with a single thread 100000 times: everything is fine.
>>>>>>>>>>>
>>>>>>>>>>> - I hacked cma_alloc() and free as below [1] to see if it's a locking
>>>>>>>>>>>   issue; with the same test with 100 threads, I got:
>>>>>>>>>> [1] would not be sufficient to close this race.
>>>>>>>>>>
>>>>>>>>>> Try the following [A]. And, for a more accurate test, I changed the
>>>>>>>>>> code a bit more to prevent kernel page allocation from the CMA area
>>>>>>>>>> [B]. This prevents kernel page allocation from the CMA area
>>>>>>>>>> completely, so we can focus on the cma_alloc/release race.
>>>>>>>>>>
>>>>>>>>>> Although this is not the correct fix, it could help us guess where
>>>>>>>>>> the problem is.
>>>>>>>>> A more correct fix is something like below.
>>>>>>>>> Please test it.
>>>>>>>> Hmm, this is not working:
>>>>>>> Sad to hear that.
>>>>>>>
>>>>>>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>>>>>>
>>>>>>
>>>>>> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>>>>>
>>>>> Hmm... that's the same as mine.
>>>>>
>>>>> Below is a similar fix that prevents buddy merging when one of the
>>>>> buddies' migrate types, but not both, is MIGRATE_ISOLATE.
>>>>> In fact, I have no idea why the previous fix (the more correct fix)
>>>>> doesn't work for you. (It works for me.) But maybe there is a bug in
>>>>> that fix, so I made a new one in a more general form. Please test it.
>>>>
>>>> Hi,
>>>> Hanjun Guo has gone to Thailand on business, so I am helping him run
>>>> this patch. The result shows that the "CmaFree:" count is OK now. But
>>>> it sometimes printed messages like these:
>>>>
>>>> alloc_contig_range: [28500, 28600) PFNs busy
>>>> alloc_contig_range: [28300, 28380) PFNs busy
>>>>
>>>
>>> Those messages aren't necessarily a problem. They indicate that
>> OK.
>>
>>> those pages weren't able to be isolated. Given that the test here is a
>>> concurrency test, I suspect some concurrent allocation or free prevented
>>> isolation, which is to be expected sometimes. I'd only be concerned if
>>> those messages were accompanied by allocation failures or some other
>>> notable impact.
>> I chose memory block sizes of 512K, 1M, and 2M and ran the test several
>> times; there was no memory allocation failure.
>
> Hi Joonsoo:
> This new patch worked well. Do you plan to upstream it in the near
> future?

Of course! But I should think about it more, because it touches the
allocator's fastpath and I'd like to avoid that. If I fail to find a
better solution, I will send it as is, soon.

Thanks.