On 10/20/24 9:29 PM, Kent Overstreet wrote:
> On Sun, Oct 20, 2024 at 01:19:42PM -0700, Linus Torvalds wrote:
>> On Sun, 20 Oct 2024 at 13:10, Kent Overstreet <kent.overstreet@xxxxxxxxx> wrote:
>>> And the INT_MAX check wouldn't catch truncation anyways - it'd only
>>> catch integer _underflow_, but allocation size calculations pretty much
>>> as a rule never use subtractions, so I don't think this check was ever
>>> worth much to begin with.
>> It fixed a real security issue.
> Which you quite conveniently aren't naming.
>> Enough said, and you're just making shit up to make excuses.
>>
>> Also, you might want to start looking at latency numbers in addition to
>> throughput. If your journal replay needs an *index* that is 2G in
>> size, you may have other issues.
> Latency for journal replay?
>
> No, journal replay is only something that happens at mount after an unclean
> shutdown. We can afford to take some time there, and journal replay
> performance hasn't been a concern.
Then why are you arguing about there being an "artificial cap on
performance", if you can "afford to take some time there"?
Am I missing something?
- Joshie 🐸✨
>> Your journal size is insane, and your "artificial cap on performance"
>> had better come with numbers.
> I'm not going to run custom benchmarks just for a silly argument, sorry.
>
> But on a fileserver with 128 GB of RAM and a 75 TB filesystem (yes,
> that's likely a dedicated fileserver), we can quite easily justify a
> btree node cache of perhaps 10 GB, and on random update workloads the
> journal does need to be that big - otherwise our btree node write size
> goes down and throughput suffers.
>> Why do you keep on being the person who creates all these pointless
>> arguments? Not just with me, btw.
> That's only going to get the biggest eyeroll ever.