Neeraj Singh <nksingh85@xxxxxxxxx> writes:

>> >> -#define WRITE_BUFFER_SIZE 8192
>> >> +#define WRITE_BUFFER_SIZE (128 * 1024)
>> >>  static unsigned char write_buffer[WRITE_BUFFER_SIZE];
>> >>  static unsigned long write_buffer_len;
>> >
>> > [...]
>> >
>> > Very nice.
>>
>> I wonder if we gain more by going to, say, a 4M buffer size or even
>> larger?
>>
>> Is this something we can make the system auto-tune itself? This is
>> not about reading but writing, so we already have enough information
>> to estimate how much we would need to write out.
>>
>> Thanks.
>>
>
> Hi Junio,
> At some point the cost of the memcpy into the filesystem cache begins to
> dominate the cost of the system call, so increasing the buffer size
> has diminishing returns.

Yes, I know that kind of "general principle". If I recall correctly,
we used to pass too large a buffer to a single write(2) system call
(I do not know if it was for the index---I suspect it was for some
other data), found out that it made the response to ^C take too long,
and tuned the buffer size down.

I was asking where the sweet spot for this codepath would be, and
whether we can take a measurement to make a better decision than "8k
feels too small and 128k turns out to be better than 8k". A single
comparison like that does not tell us whether 128k would always do
better than 64k or 256k, for example.

I suspect that the sweet spot would depend on many parameters (not
just the operating system, but also the relative speed among memory,
"disk", and cpu, and the size of the index), and wonder if we can
devise a way to auto-tune it so that we do not have to worry about
it.

Thanks.
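
To make the auto-tuning idea a bit more concrete, here is a minimal
sketch of one possible heuristic (not a patch against read-cache.c;
the helper name choose_write_buffer_size() and the clamp values are
made up for illustration): size the buffer from the estimated number
of bytes we are about to write, rounded up to a power of two and
clamped between a floor where syscall overhead dominates and a
ceiling that keeps each write(2) short enough for prompt ^C response.

#include <stddef.h>

/* Illustrative clamp values, not measured numbers. */
#define WRITE_BUFFER_MIN (8 * 1024)     /* below this, syscall overhead dominates */
#define WRITE_BUFFER_MAX (1024 * 1024)  /* above this, a single write(2) delays ^C */

/*
 * Pick a write buffer size for an estimated payload of "estimate"
 * bytes: the smallest power of two at or above the estimate,
 * clamped to [WRITE_BUFFER_MIN, WRITE_BUFFER_MAX].
 */
static size_t choose_write_buffer_size(size_t estimate)
{
	size_t size = WRITE_BUFFER_MIN;

	while (size < estimate && size < WRITE_BUFFER_MAX)
		size *= 2;
	return size;
}

With that shape, a small index would stay at 8k while a multi-megabyte
index would get the full 1M buffer; whether those clamp values are
anywhere near the sweet spot is exactly the measurement question
raised above.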