On 2019-12-05 20:01:41 [-0600], Frank Rowand wrote:
> Is there a memory usage issue for the systems that led to this thread?

No, no memory issue led to this thread. I was just testing my patch and
assumed that I did something wrong in the counting/lock drop/lock
acquire/allocate path because the array was hardly used. So I started to
look deeper…

Once I figured out that everything was fine, I was curious whether everyone
is aware of the different phandle creation by dtc vs POWER, and I posted
the mail in the thread. Once you confirmed that everything is "known / not
an issue" I was ready to drop it [0]. Later more replies came in, such as
one mail [1] from Rob describing the original reason with 814 phandles.
_Here_ I was just surprised that 1024 entries were used instead of 64 for a
benefit of 60ms. I understand that this is of low concern to you because
that memory is released if module support is not enabled. I usually see
module support left enabled.

However, Rob suggested / asked about a fixed-size array (this is how I
understood it):

|And yes, as mentioned earlier I don't like the complexity. I didn't
|from the start and I'm still of the opinion we should have a
|fixed or 1 time sized true cache (i.e. smaller than total # of
|phandles). That would solve the RT memory allocation and locking issue
|too.

so I attempted to ask whether we should have the fixed-size array, maybe
indexed with hash_32() instead of the mask. This would make my other patch
obsolete because the fixed-size array should not have an RT issue. The
hash_32() part here would address the POWER issue where the cache is
currently not used efficiently.

If you want to keep things as-is instead, that is okay from my side. If you
want to keep this cache off on POWER, I could contribute a patch doing so.

[0] https://lore.kernel.org/linux-devicetree/20191202110732.4dvzrro5o6zrlpax@xxxxxxxxxxxxx/
[1] https://lore.kernel.org/linux-devicetree/CAL_JsqKieG5=teL7gABPKbJOQfvoS9s-ZPF-=R0yEE_LUoy-Kw@xxxxxxxxxxxxxx/

> -Frank

Sebastian
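
P.S.: below is a rough, untested sketch of the hash_32() variant I have in
mind. The 64-entry size and all names here are placeholders to illustrate
the idea, not the current drivers/of/base.c code:

  #include <linux/hash.h>
  #include <linux/of.h>

  /* assumed fixed size, picked only for illustration */
  #define OF_PHANDLE_CACHE_BITS	6
  #define OF_PHANDLE_CACHE_SZ	(1U << OF_PHANDLE_CACHE_BITS)

  static struct device_node *phandle_cache[OF_PHANDLE_CACHE_SZ];

  /*
   * hash_32() spreads both dtc-style phandles (1, 2, 3, ...) and the
   * large, sparse values seen on POWER across the whole array, unlike
   * a plain (handle & (OF_PHANDLE_CACHE_SZ - 1)) mask.
   */
  static unsigned int of_phandle_cache_idx(phandle handle)
  {
  	return hash_32(handle, OF_PHANDLE_CACHE_BITS);
  }

  static struct device_node *of_phandle_cache_lookup(phandle handle)
  {
  	struct device_node *np = phandle_cache[of_phandle_cache_idx(handle)];

  	if (np && np->phandle == handle)
  		return np;
  	/* miss: the caller falls back to the full tree walk */
  	return NULL;
  }

Since the array is fixed at build time, there is no allocation under the
devtree lock, which is what made the RT side of this simple.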