Re: Caching/buffers become useless after some time

On Wed, 31 Oct 2018 at 18:01, Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Wed 31-10-18 15:53:44, Marinko Catovic wrote:
> [...]
> > Well, caching any operations done by find/du is not necessary imho
> > anyway, since walking over all these millions of files in that time
> > period is really not worth caching at all - if there is a way, as you
> > mentioned, to limit the commands there, that would be great.
>
> One possible way would be to run this find/du workload inside a memory
> cgroup with its high limit set to something reasonable (that will likely
> require some tuning). I am not 100% sure how that will behave for a
> metadata-mostly workload with almost no pagecache to reclaim, so it might
> turn out that this results in other issues. But it is definitely worth trying.

Hm, how would that be possible? Every user has their own UID, and the
group cannot be a factor either, since such a memory restriction would
then apply to all users; find/du run as UID 0 to have access to
everyone's data.
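
(If I follow the suggestion correctly, the limit would be attached to the
find/du invocation itself rather than to any UID or group, so the UID-0
part should not matter. Only a rough sketch of how that might look, not
something from this thread - it assumes cgroup v2 mounted at /sys/fs/cgroup
with the memory controller enabled for child groups; the group name "scan"
and the 1G memory.high value are placeholders that would need the tuning
Michal mentions:)

/*
 * Sketch: move the current process into a memory-limited cgroup v2
 * group and then exec the nightly scan. Must run as root.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	fprintf(f, "%s", val);
	fclose(f);
}

int main(void)
{
	char pid[32];

	/* create the child group; a group left over from a previous run is fine */
	if (mkdir("/sys/fs/cgroup/scan", 0755) && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* memory.high is a soft limit: reclaim is forced above it, no OOM kill */
	write_file("/sys/fs/cgroup/scan/memory.high", "1G");

	/* move ourselves (and therefore the exec'd child) into the group */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/sys/fs/cgroup/scan/cgroup.procs", pid);

	/* the scan itself; the path is only an example */
	execlp("find", "find", "/home", "-type", "f", (char *)NULL);
	perror("execlp");
	return 1;
}

(The same could be done from a small shell wrapper around the cron job; the
point is just that the limit follows the process, not the user.)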

So what is the conclusion from this issue now, btw? Is it something
that will be changed/fixed at some point?
As I understand it, everyone would run into this issue when extensive
walking over files is performed, so basically any `cloud`, shared hosting
or storage system should experience it, true?



