Re: aio: questions with ioctx_alloc() and large num_possible_cpus()

Hi Benjamin,

On 10/05/2016 02:41 PM, Benjamin LaHaise wrote:
> I'd suggest increasing the default limit by changing how it is calculated.
> The current number came about 13 years ago when machines had orders of
> magnitude less RAM than they do today.

Thanks for the suggestion.

Does the default also have implications other than memory usage?
For example, the concurrency/performance impact of that many aio contexts
running, or whether userspace could somehow exploit a larger limit?

I'm wondering because the default could be scaled by num_possible_cpus(),
but that can be really large on high-end systems.

Regards,

--
Mauricio Faria de Oliveira
IBM Linux Technology Center

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


