Hi,

It looks to me like systems with long uptimes will eventually exhaust the entire kernel memory, since there is no limit on the measurement list size. The problem is especially bad on systems running processes that create temporary files, such as a browser: each new site visited can consume some kernel memory that is never reclaimed (!).

While this can be tackled via the policy, in some of these cases it is very hard to write safe policy statements that strictly define what exactly needs to be measured. Besides, some of these systems support applications that the users themselves install, so such policy statements may even be impossible, as the administrator could never know about them in advance.

Now, we can attempt to tackle this if there is common agreement on what to do about it. The first thing that comes to my mind, based on a comment from Mimi concerning Dave's prior work on the topic, is that the measurement list should probably be exported periodically to a file together with its own measurement. The rest of the measurement entries would then be freed, so the system would start again from a clean state, i.e. a state where the measurement list contains only one entry: the name and measurement of the previous-generation list.

For remote attestation of the system you would then concatenate all the lists and verify their validity by walking down the chain, starting from the in-kernel measurement that is kept secure. In other words, each exported list would carry a measurement of the previous-generation list, so we would build a simple list chain.
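To make the chain walk concrete, here is a rough userspace sketch of what the verifier side could look like. The format is invented purely for illustration (a real exported list would of course keep the normal entry data): I'm assuming SHA-1 digests, that each generation's file begins with the raw digest of the previous generation's file (all zeroes for the first generation), and that the digest the kernel still holds covers the newest exported file.

/*
 * Rough sketch only, not a real format: assume each exported list file
 * begins with the raw SHA-1 of the previous generation's file (all
 * zeroes for the first generation), and that the digest still held in
 * the kernel covers the newest exported file.
 *
 * Build: cc verify_chain.c -lcrypto
 * Usage: ./verify_chain <hex sha1 held by kernel> <newest> ... <oldest>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/sha.h>

static unsigned char *read_file(const char *path, size_t *len)
{
	FILE *f = fopen(path, "rb");
	unsigned char *buf = NULL;

	if (!f)
		return NULL;
	fseek(f, 0, SEEK_END);
	*len = ftell(f);
	rewind(f);
	buf = malloc(*len);
	if (buf && fread(buf, 1, *len, f) != *len) {
		free(buf);
		buf = NULL;
	}
	fclose(f);
	return buf;
}

int main(int argc, char **argv)
{
	unsigned char expected[SHA_DIGEST_LENGTH];
	int i;

	if (argc < 3 || strlen(argv[1]) != 2 * SHA_DIGEST_LENGTH) {
		fprintf(stderr, "usage: %s <hex-sha1> <newest> ... <oldest>\n",
			argv[0]);
		return 1;
	}
	/* Trusted anchor: the measurement the kernel still keeps secure. */
	for (i = 0; i < SHA_DIGEST_LENGTH; i++)
		sscanf(argv[1] + 2 * i, "%2hhx", &expected[i]);

	/*
	 * Walk newest -> oldest: every file must hash to the digest that
	 * the next-newer generation (or the kernel itself) recorded.
	 */
	for (i = 2; i < argc; i++) {
		unsigned char digest[SHA_DIGEST_LENGTH];
		size_t len;
		unsigned char *buf = read_file(argv[i], &len);

		if (!buf || len < SHA_DIGEST_LENGTH) {
			fprintf(stderr, "%s: unreadable or too short\n", argv[i]);
			return 1;
		}
		SHA1(buf, len, digest);
		if (memcmp(digest, expected, SHA_DIGEST_LENGTH)) {
			fprintf(stderr, "%s: measurement mismatch\n", argv[i]);
			return 1;
		}
		/* First bytes of this file name the previous generation. */
		memcpy(expected, buf, SHA_DIGEST_LENGTH);
		free(buf);
	}
	printf("chain verified, %d generation(s)\n", argc - 2);
	return 0;
}

Thoughts?

-- Janne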