On Fri 21-06-24 10:10:58, Andrew Morton wrote:
> On Fri, 21 Jun 2024 16:42:38 +0200 Jan Kara <jack@xxxxxxx> wrote:
>
> > The dirty throttling logic is interspersed with assumptions that dirty
> > limits in PAGE_SIZE units fit into 32-bit (so that various
> > multiplications fit into 64 bits). If limits end up being larger, we
> > will hit overflows, possible divisions by 0, etc. Fix these problems by
> > never allowing such large dirty limits, as they have dubious practical
> > value anyway. For the dirty_bytes / dirty_background_bytes interfaces we
> > can just refuse to set such large limits. For dirty_ratio /
> > dirty_background_ratio it isn't so simple, as the dirty limit is computed
> > from the amount of available memory, which can change due to memory
> > hotplug etc. So when converting dirty limits from ratios to numbers of
> > pages, we just don't allow the result to exceed UINT_MAX.
>
> Shouldn't this also be cc:stable?

So this is a root-only triggerable problem and kind of a "don't do it when
it hurts" issue (who really wants to set dirty limits to > 16 TB?). So I'm
not sure CC stable is warranted, but I won't object.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
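
For readers who want to see the clamping idea from the quoted changelog in
isolation, here is a minimal userspace sketch (not the actual kernel patch;
the helper name, the 4 KB page size, and the 128 TB figure are made up for
illustration). It converts a dirty ratio to a page count and caps the result
so the limit in PAGE_SIZE units always fits in 32 bits:

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper: convert a percentage of available memory into a
 * number of pages, clamping at UINT32_MAX (the changelog's UINT_MAX on
 * typical kernel configs) so later 64-bit multiplications cannot overflow.
 */
static uint64_t dirty_ratio_to_pages(uint64_t available_pages, unsigned int ratio)
{
	uint64_t pages = available_pages * ratio / 100;

	if (pages > UINT32_MAX)
		pages = UINT32_MAX;
	return pages;
}

int main(void)
{
	/* Assume 128 TB of RAM in 4 KB pages, i.e. 2^35 pages. */
	uint64_t available_pages = (128ULL << 40) >> 12;

	/* A 20% ratio would be ~6.9e9 pages, so it gets capped here. */
	printf("dirty limit = %llu pages\n",
	       (unsigned long long)dirty_ratio_to_pages(available_pages, 20));
	return 0;
}

Run against that assumed 128 TB machine, the 20% ratio is capped at
4294967295 pages (16 TB of dirty data with 4 KB pages), which is the kind of
ceiling the changelog describes; the dirty_bytes / dirty_background_bytes
case is simpler since an over-large value can just be rejected at write time.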