Hi!

On 09:45 Fri 23 May, Robert P. J. Day wrote:
> the scenario: a single-board computer (SBC) running linux, limited RAM,
> which opens a potentially very large regular or device file, then possibly
> hops all over the place processing the contents. currently, this
> processing uses lseek() to move around in the file, and read() to get the
> relevant data.

How big is "very large"?

> is there any theoretical performance hit to using mmap() for this
> instead? i'm assuming that, regardless of the size of the actual file, all
> i will ever really need in RAM are the blocks currently being processed,
> right?

I would not bet on this. When the chunk is not already in memory, the page
fault handler is called and the data is loaded from disk. This is as slow as
lseek/read.

You can reduce lseek/read to a single call named pread() (a short sketch
follows below the message).

> so ... would mmap() have any inherent drawbacks in this situation? thanks.

It consumes address space. On 32-bit systems, you only have 3 GB of address
space (it can be lower if the kernel is configured with a different address
space split). There are other things which need address space as well. If you
expect large files, you probably have to map chunks of the file on demand
(see the second sketch below).

See also:
http://lists.freebsd.org/pipermail/freebsd-questions/2004-June/050371.html

	-Michi

-- 
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com
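
Sketch 1: a minimal example of collapsing an lseek()+read() pair into a single
pread() call. The file name, buffer size, and offset are made up for
illustration; they are not from the original post.

    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
            char buf[4096];
            int fd = open("/path/to/largefile", O_RDONLY);  /* placeholder path */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Instead of:
             *     lseek(fd, 1234567, SEEK_SET);
             *     read(fd, buf, sizeof(buf));
             * one syscall reads from an explicit offset without moving
             * the file position: */
            ssize_t n = pread(fd, buf, sizeof(buf), 1234567);
            if (n < 0)
                    perror("pread");

            close(fd);
            return 0;
    }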
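
Sketch 2: one way to map chunks of a large file on demand instead of mapping
the whole thing, so only a fixed-size window of address space is used at a
time. The window size, file name, and offsets are assumptions for the example;
the real code would pick a window that fits its access pattern, and would need
to remap when it moves outside the current window.

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    #define WINDOW_SIZE (4UL * 1024 * 1024)  /* 4 MiB, a multiple of the page size */

    int main(void)
    {
            int fd = open("/path/to/largefile", O_RDONLY);  /* placeholder path */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            off_t wanted = 123456789;                           /* arbitrary file offset */
            off_t base   = wanted & ~((off_t)WINDOW_SIZE - 1);  /* align window start */

            /* Map only WINDOW_SIZE bytes around the offset we need right now. */
            char *win = mmap(NULL, WINDOW_SIZE, PROT_READ, MAP_PRIVATE, fd, base);
            if (win == MAP_FAILED) {
                    perror("mmap");
                    close(fd);
                    return 1;
            }

            char byte = win[wanted - base];  /* access within the mapped window */
            printf("byte at %lld: %d\n", (long long)wanted, byte);

            /* Drop the window before mapping the next chunk, so address space
             * usage stays bounded no matter how big the file is. */
            munmap(win, WINDOW_SIZE);
            close(fd);
            return 0;
    }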