Re: shared memory seems to allow size of 32K **1KB** segments (32MB)...

Linda W wrote:
Amos Jeffries wrote:
We are still limited to one page,
---
1 page or 1 segment/item?
----
I don't know who 'we' is... but on x86_64 Linux, I was able to use
Perl's IPC::SysV calls (shmget/shmwrite/shmread/shmctl) to allocate up to
my system's run-time limit (which I can raise if needed) of 32MB per shm segment.

It looks like there is an underlying granularity of 8KB, but shouldn't
it be easy to simply use the shm interface and allocate exact-size segments
to hold shared files?

Hmmm......

Either that, or allocate the largest chunk size available and sub-divide it.

As for disk -- if the processes shared an index of the files stored on
disk -- why couldn't they share a file cache? Certainly you don't want
two separate processes downloading the same file at the same time -- that
would really hurt bandwidth...


