shared memory seems to allow size of 32K **1KB** segments (32MB)...

Amos Jeffries wrote:
We are still limited to one page,
---
One page, or one segment/item?

Looking at the output of ipcs, it shows a max seg
size of 32768 (32k), but the units are kbytes, not bytes,
so the real limit looks more like 32MB.

Are you sure that limit was 32K and not 32k kbytes (i.e. 32M)?

I found that the "ipcs -l" command shows the limits; it describes things
in terms of segments, with the number of segments being fairly limited
compared to the amount of memory:
util-linux> ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1
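
For reference, on Linux those same numbers are also exposed under /proc,
so a program can check them directly. A minimal sketch (the /proc paths
are Linux-specific; note shmmax is in bytes and shmall in pages):

/* Read the kernel's SysV shared memory limits from /proc -- the same
 * values "ipcs -l" reports, though with different units. */
#include <stdio.h>

static unsigned long read_ul(const char *path)
{
    unsigned long v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%lu", &v) != 1)
            v = 0;
        fclose(f);
    }
    return v;
}

int main(void)
{
    printf("max seg size (bytes):  %lu\n", read_ul("/proc/sys/kernel/shmmax"));
    printf("max segments (shmmni): %lu\n", read_ul("/proc/sys/kernel/shmmni"));
    printf("max total (pages):     %lu\n", read_ul("/proc/sys/kernel/shmall"));
    return 0;
}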

-------------
Do you think it might be a better use of resources to subdivide
one large segment, rather than trying to use many small segments,
so as not to use up segment descriptors?
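
Something along these lines, say, where one big segment is carved into
fixed-size items by offset (a sketch only; ITEM_SIZE and NUM_ITEMS are
made-up values, not anything Squid actually uses):

/* Attach ONE large SysV segment and hand out chunks by offset, so only
 * a single segment descriptor is consumed no matter how many items. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define ITEM_SIZE 4096   /* one "page" per item (illustrative) */
#define NUM_ITEMS 1024   /* 4MB total, well under a 32MB seg limit */

int main(void)
{
    int id = shmget(IPC_PRIVATE, (size_t)ITEM_SIZE * NUM_ITEMS,
                    IPC_CREAT | 0600);
    if (id == -1) { perror("shmget"); return 1; }

    char *base = shmat(id, NULL, 0);
    if (base == (char *)-1) { perror("shmat"); return 1; }

    /* "Allocate" item n by computing its offset into the one segment. */
    char *item = base + 42 * ITEM_SIZE;
    snprintf(item, ITEM_SIZE, "hello from item 42");
    printf("%s\n", item);

    shmdt(base);
    shmctl(id, IPC_RMID, NULL);   /* destroy once everyone detaches */
    return 0;
}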

Maybe a pseudo extent-based scheme like xfs has (or maybe it's feasible
to strip out the extent allocator/manager from xfs and use it to manage
a memory region)? I don't know if it would have any benefit over
a standard alloc/malloc model, but it might minimize fragmentation
over time.
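
To illustrate the idea (this is a toy, nothing actually lifted from xfs):
free space is tracked as (offset, length) extents over the big region,
and allocation splits an extent first-fit; freeing here just prepends
without coalescing, to keep the sketch short.

/* Toy first-fit extent allocator over a flat region. */
#include <stdio.h>
#include <stdlib.h>

struct extent { size_t off, len; struct extent *next; };
static struct extent *free_list;

void extents_init(size_t region_len)
{
    free_list = malloc(sizeof *free_list);
    free_list->off = 0;
    free_list->len = region_len;
    free_list->next = NULL;
}

/* Return an offset into the region, or (size_t)-1 if nothing fits. */
size_t extent_alloc(size_t len)
{
    for (struct extent *e = free_list; e; e = e->next) {
        if (e->len >= len) {
            size_t off = e->off;
            e->off += len;   /* shrink the free extent from the front */
            e->len -= len;
            return off;
        }
    }
    return (size_t)-1;
}

void extent_free(size_t off, size_t len)
{
    struct extent *e = malloc(sizeof *e);
    e->off = off;
    e->len = len;
    e->next = free_list;     /* a real allocator would coalesce here */
    free_list = e;
}

int main(void)
{
    extents_init(32UL * 1024 * 1024);   /* pretend 32MB segment */
    size_t a = extent_alloc(4096), b = extent_alloc(8192);
    printf("a=%zu b=%zu\n", a, b);
    extent_free(a, 4096);
    return 0;
}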




