On Tue, Jun 22, 2021 at 2:23 AM Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> On Mon, 14 Jun 2021 10:50:09 +0300
> "Tzvetomir Stoyanov (VMware)" <tz.stoyanov@xxxxxxxxx> wrote:
>
> > When reading a trace.dat file of version 7, uncompress the trace data.
> > The trace data for each CPU is uncompressed in a temporary file, located
> > in the /tmp directory with prefix "trace_cpu_data".
>
> With large trace files, this will be an issue. Several systems set up the
> /tmp directory as a ramfs file system (that is, it is located in RAM, and
> not backed up on disk). If you have very large trace files, which you would
> if you are going to bother compressing them, then uncompressing them into
> /tmp could take up all the memory of the machine, or easily fill the
> /tmp limit.

There are a few possible approaches to solving that:
 - use the same directory where the input trace file is located
 - use an environment variable for a user-specified temp directory for these files
 - check if there is enough free space on the file system before uncompressing

> Simply uncompressing the entire trace data is not an option. The best we
> can do is to uncompress on an as-needed basis. That would require having
> metadata that is stored to know what pages are compressed.

I can modify that logic to compress page by page, as the data is loaded
by pages. Or use some of the above approaches?

> -- Steve

-- 
Tzvetomir (Ceco) Stoyanov
VMware Open Source Technology Center
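[For reference, a minimal sketch of the page-by-page scheme being discussed, in Python rather than trace-cmd's C. It assumes zlib compression and a fixed 4 KiB page; the function names and the in-memory list of compressed blobs are illustrative, not trace-cmd's actual on-disk layout, which would instead record per-page offsets in the file's metadata so a single page can be seeked to and decompressed on demand.]

```python
import zlib

PAGE_SIZE = 4096  # assumed page size for illustration

def compress_pages(data: bytes) -> list[bytes]:
    """Compress the trace data one page at a time.

    Each page is compressed independently, so any single page can
    later be decompressed without touching the rest of the data.
    """
    return [
        zlib.compress(data[off:off + PAGE_SIZE])
        for off in range(0, len(data), PAGE_SIZE)
    ]

def read_page(blobs: list[bytes], page_index: int) -> bytes:
    """Decompress only the requested page, on an as-needed basis."""
    return zlib.decompress(blobs[page_index])
```

The point of the scheme is the access pattern: a reader loading page N decompresses one small blob instead of inflating the whole per-CPU buffer into /tmp, so memory use stays bounded by the pages actually visited.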