> On 28 Feb 2018, at 22:54, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>
> I was trying to understand the target entropy level, and I'm worried
> it's a bit biased. For example, if the first allocation lands at 1/4th
> of the memory space, the next allocation (IIUC) has a 50% chance of
> falling on either side of it. If it goes on the small side, it then
> has much less entropy than if it had gone on the other side. I think
> this may be less entropy than choosing a random address and just
> seeing if it fits or not. Dealing with collisions could be done either
> by pushing the address until it doesn't collide or picking another
> random address, etc. This is probably more expensive, though, since it
> would need to walk the vma tree repeatedly. Anyway, I was ultimately
> curious about your measured entropy and what alternatives you
> considered.

Let me start with the options we have here.

Suppose we need to choose a random address from the pool of free
memory, and suppose we keep an array of gaps sorted by size in
descending order. First we find the highest index whose gap still
satisfies the requested length; every gap at a lower index is also
suitable. For each suitable gap we count how many pages in it satisfy
the request, and we sum those counts into a total. Then we take a
random value modulo that total and subtract each gap's count from it
until the value is smaller than the current gap's count: that gives us
the gap and an offset inside it. Adding the gap's start to the offset
yields a uniformly chosen suitable address (rough sketch below).

In this scheme we have to maintain the array of gaps: every time the
address space changes we have to keep the array consistent. That is a
big overhead on every change, so pure random looks really expensive.

Let's try to improve on that. We can't just pick a random address and
retry again and again until something fits - that approach is
non-deterministic, and nobody knows when it stops. The same is true if
we walk the tree in a random direction.

We can instead walk the tree, build an array of suitable gaps, and
choose one from there. In my current approach (a proof of concept) the
array length is 1, which is why the last gaps seen are chosen with
higher probability - I agree with you there. It is possible to enlarge
the array at the cost of some memory: for example, struct mm could
hold an array of 1024 gaps. We do the same as before - walk the tree
and randomly fill this array (everything locked under mmap_sem taken
for write). Once the array is full, or the whole tree has been walked,
we choose a gap from it at random (second sketch below). What do you
think about it?

Thanks,
Ilya
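
Here is a rough userspace sketch of the weighted selection described
above. struct gap, the example gap array and choose_weighted() are all
made up for illustration; the kernel would have to derive and maintain
the array from the vma tree:

/*
 * Rough userspace model of the weighted scheme above.  All names and
 * numbers are made up for illustration; addresses and sizes are in
 * pages.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct gap {
	unsigned long start;	/* first free page of the gap */
	unsigned long pages;	/* gap size in pages */
};

/* Sorted by size, descending, as in the scheme described above. */
static struct gap gaps[] = {
	{ .start = 0x7f0000, .pages = 512 },
	{ .start = 0x400000, .pages = 128 },
	{ .start = 0x100000, .pages = 16 },
};

/*
 * Pick a start page for a request of @len pages, uniformly over all
 * start pages that fit in any gap.  Returns 0 if nothing fits.
 */
static unsigned long choose_weighted(unsigned long len)
{
	size_t i, n = sizeof(gaps) / sizeof(gaps[0]);
	unsigned long total = 0, r;

	/*
	 * Descending order means that once one gap is too small, all
	 * the following ones are too.  A gap of P pages offers
	 * P - len + 1 possible start pages.
	 */
	for (i = 0; i < n && gaps[i].pages >= len; i++)
		total += gaps[i].pages - len + 1;
	if (!total)
		return 0;

	r = (unsigned long)rand() % total;

	/* Subtract per-gap counts until r falls inside a gap. */
	for (i = 0; ; i++) {
		unsigned long fits = gaps[i].pages - len + 1;

		if (r < fits)
			return gaps[i].start + r;
		r -= fits;
	}
}

int main(void)
{
	srand((unsigned)time(NULL));
	printf("chosen start page: 0x%lx\n", choose_weighted(64));
	return 0;
}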
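
And a rough sketch of the fixed-size array idea. Reservoir sampling is
one way to "randomly fill" the array so that, after the whole tree has
been walked, every suitable gap has an equal chance of being picked;
the fake walk in main() stands in for iterating the vma tree under
mmap_sem, and all the names here are again illustrative only:

/*
 * Rough model of the 1024-entry gap array filled by reservoir
 * sampling during the tree walk.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define GAP_ARRAY_SIZE 1024

struct gap {
	unsigned long start;
	unsigned long pages;
};

static struct gap slots[GAP_ARRAY_SIZE];
static unsigned long seen;	/* suitable gaps seen so far */

/* Call once for every suitable gap found during the walk. */
static void reservoir_add(struct gap g)
{
	if (seen < GAP_ARRAY_SIZE) {
		slots[seen] = g;	/* array not full yet */
	} else {
		/* Keep the new gap with probability SIZE / (seen + 1). */
		unsigned long j = (unsigned long)rand() % (seen + 1);

		if (j < GAP_ARRAY_SIZE)
			slots[j] = g;
	}
	seen++;
}

/* After the walk: every gap seen had an equal chance to end up here. */
static struct gap *reservoir_pick(void)
{
	unsigned long n = seen < GAP_ARRAY_SIZE ? seen : GAP_ARRAY_SIZE;

	if (!n)
		return NULL;
	return &slots[(unsigned long)rand() % n];
}

int main(void)
{
	unsigned long i;
	struct gap *g;

	srand((unsigned)time(NULL));
	/* Fake tree walk: feed 5000 made-up suitable gaps. */
	for (i = 0; i < 5000; i++)
		reservoir_add((struct gap){ .start = i * 512, .pages = 64 });

	g = reservoir_pick();
	if (g)
		printf("chosen gap start: 0x%lx\n", g->start);
	return 0;
}

Note that this picks per gap, not per suitable page as in the first
sketch, so a small gap is as likely to be chosen as a big one; getting
page-weighted selection back would mean also storing per-gap counts in
the array.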