On Tue 09-07-24 20:47:30, xiujianfeng wrote:
> 
> On 2024/7/9 0:04, Michal Hocko wrote:
> > On Mon 08-07-24 21:40:39, xiujianfeng wrote:
> >>
> >> On 2024/7/8 20:48, Michal Hocko wrote:
> >>> On Wed 03-07-24 13:38:04, Andrew Morton wrote:
> >>>> On Wed, 3 Jul 2024 10:45:56 +0800 xiujianfeng <xiujianfeng@xxxxxxxxxx> wrote:
> >>>>
> >>>>> On 2024/7/3 9:58, Andrew Morton wrote:
> >>>>>> On Tue, 2 Jul 2024 12:57:28 +0000 Xiu Jianfeng <xiujianfeng@xxxxxxxxxx> wrote:
> >>>>>>
> >>>>>>> Introduce peak and rsvd.peak to v2 to show the historical maximum
> >>>>>>> usage of resources, as in some scenarios it is necessary to configure
> >>>>>>> the value of max/rsvd.max based on the peak usage of resources.
> >>>>>>
> >>>>>> "in some scenarios it is necessary" is not a strong statement. It
> >>>>>> would be helpful to fully describe these scenarios so that others can
> >>>>>> better understand the value of this change.
> >>>>>
> >>>>> Hi Andrew,
> >>>>>
> >>>>> Is the following description acceptable to you?
> >>>>>
> >>>>> Since HugeTLB doesn't support page reclaim, enforcing the limit at
> >>>>> page fault time implies that the application will get a SIGBUS signal
> >>>>> if it tries to fault in HugeTLB pages beyond its limit. Therefore the
> >>>>> application needs to know exactly how many HugeTLB pages it uses
> >>>>> beforehand, and the sysadmin needs to make sure that there are enough
> >>>>> available on the machine for all the users to avoid processes getting
> >>>>> SIGBUS.
> >>>
> >>> Yes, this is pretty much the definition of hugetlb.
> >>>
> >>>>> When running some open-source software, it may not be possible to know
> >>>>> the exact amount of hugetlb it consumes, so we cannot correctly
> >>>>> configure the max value. If there is a peak metric, we can run the
> >>>>> open-source software first and then configure the max based on the
> >>>>> peak value.
> >>>
> >>> I would push back on this.
> >>> Hugetlb workloads pretty much require knowing the number of hugetlb
> >>> pages ahead of time, because you need to preallocate them for the
> >>> global hugetlb pool. What I am really missing in the above
> >>> justification is an explanation of how you know how to configure the
> >>> global pool but do not know that for a particular cgroup. How exactly
> >>> do you configure the global pool then?
> >>
> >> Yes, in this scenario it's indeed challenging to determine the
> >> appropriate size for the global pool. Therefore, a feasible approach is
> >> to initially configure a larger value. Once the software is running
> >> successfully within the container, the maximum value for the container
> >> and the size of the system's global pool can be determined based on the
> >> peak value; otherwise, increase the size of the global pool and try
> >> again. So I believe the peak metric is useful for this scenario.
> >
> > This sounds really backwards to me. Not that I care much about the peak
> > value itself. It is not really anything disruptive to add or maintain,
> > but this approach to configuring the system just feels completely wrong.
> > You shouldn't really be using the hugetlb cgroup controller if you do
> > not have a very specific idea about the expected, and therefore allowed,
> > hugetlb pool consumption.
> 
> Thanks for sharing your thoughts.
> 
> Since the peak metric exists in the legacy hugetlb controller, do you
> have any idea what scenario it's used for? I found it was introduced by
> commit abb8206cb077 ("hugetlb/cgroup: add hugetlb cgroup control
> files"), however there is no description of the scenario.

I do not remember, but I suspect this mimics other cgroupv1 interfaces.

-- 
Michal Hocko
SUSE Labs
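[Editor's note: the trial-run workflow discussed above (run with a generous limit, read the peak, then size the real limit) could be sketched in shell as below. The cgroup path, the 2MB page size, and the 20% headroom policy are illustrative assumptions, not part of the patch; only the `hugetlb.2MB.peak`/`hugetlb.2MB.max` file names come from the interface under discussion.]

```shell
CG=/sys/fs/cgroup/trial   # hypothetical cgroup used for the trial run

# suggest_max PEAK_BYTES: add ~20% headroom to the observed peak and
# round up to a whole number of 2MB hugetlb pages.
suggest_max() {
    peak=$1
    pagesz=$((2 * 1024 * 1024))
    want=$(( peak + peak / 5 ))                 # ~20% headroom
    pages=$(( (want + pagesz - 1) / pagesz ))   # round up to whole pages
    echo $(( pages * pagesz ))
}

# After the trial run: read the historical peak (if the kernel exposes
# it) and use it to configure the real limit.
if [ -r "$CG/hugetlb.2MB.peak" ]; then
    peak=$(cat "$CG/hugetlb.2MB.peak")
    suggest_max "$peak" > "$CG/hugetlb.2MB.max"
fi
```

As Michal notes, this only helps size the per-cgroup limit; the global hugetlb pool still has to be preallocated large enough for the trial run to succeed in the first place.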