On 5/22/22 7:22 PM, Jens Axboe wrote:
> On 5/22/22 6:42 PM, Al Viro wrote:
>> On Sun, May 22, 2022 at 02:03:35PM -0600, Jens Axboe wrote:
>>
>>> Right, I'm saying it's not _immediately_ clear which cases are what when
>>> reading the code.
>>>
>>>> up a while ago. And no, turning that into indirect calls ended up with
>>>> arseloads of overhead, more's the pity...
>>>
>>> It's a shame, since indirect calls make for nicer code, but it's always
>>> been slower and these days even more so.
>>>
>>>> Anyway, at the moment I have something that builds; hadn't tried to
>>>> boot it yet.
>>>
>>> Nice!
>>
>> Boots and survives LTP and xfstests... Current variant is in
>> vfs.git#work.iov_iter (head should be at 27fa77a9829c). I have *not*
>> looked into the code generation in primitives; the likely/unlikely on
>> those cascades of ifs need rethinking.
>
> I noticed too. Haven't fiddled much in iov_iter.c, but for uio.h I had
> the below. iov_iter.c is a worse "offender" though, with 53 unlikely and
> 22 likely annotations...

Here it is...

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 6570b688ed39..52baa3c89505 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -163,19 +163,17 @@ static inline size_t copy_folio_to_iter(struct folio *folio, size_t offset,
 static __always_inline __must_check
 size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 {
-	if (unlikely(!check_copy_size(addr, bytes, true)))
-		return 0;
-	else
+	if (check_copy_size(addr, bytes, true))
 		return _copy_to_iter(addr, bytes, i);
+	return 0;
 }
 
 static __always_inline __must_check
 size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 {
-	if (unlikely(!check_copy_size(addr, bytes, false)))
-		return 0;
-	else
+	if (check_copy_size(addr, bytes, false))
 		return _copy_from_iter(addr, bytes, i);
+	return 0;
 }
 
 static __always_inline __must_check
@@ -191,10 +189,9 @@ bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
 static __always_inline __must_check
 size_t copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
 {
-	if (unlikely(!check_copy_size(addr, bytes, false)))
-		return 0;
-	else
+	if (check_copy_size(addr, bytes, false))
 		return _copy_from_iter_nocache(addr, bytes, i);
+	return 0;
 }
 
 static __always_inline __must_check

-- 
Jens Axboe
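
For context on the trade-off discussed above, here is a minimal, self-contained
userspace sketch of the two dispatch styles being weighed: the cascade of type
checks that the likely/unlikely hints decorate, versus one indirect call per
iterator type, which reads better but pays an indirect branch on every call.
The names (demo_iter, demo_copy_cascade, and so on) are hypothetical; this is
not the kernel's actual iov_iter code.

/*
 * Hypothetical sketch of branch-cascade dispatch vs. indirect-call
 * dispatch; not the kernel's iov_iter implementation.
 */
#include <stdio.h>
#include <stddef.h>

enum demo_iter_type { DEMO_UBUF, DEMO_IOVEC, DEMO_BVEC, DEMO_KVEC };

struct demo_iter;
typedef size_t (*demo_step_fn)(struct demo_iter *i, void *dst, size_t bytes);

struct demo_iter {
	enum demo_iter_type type;
	demo_step_fn step;	/* used only by the indirect variant */
};

/* Stand-ins for the per-type copy routines. */
static size_t demo_step_user(struct demo_iter *i, void *dst, size_t bytes)
{ (void)i; (void)dst; return bytes; }
static size_t demo_step_bvec(struct demo_iter *i, void *dst, size_t bytes)
{ (void)i; (void)dst; return bytes; }
static size_t demo_step_kvec(struct demo_iter *i, void *dst, size_t bytes)
{ (void)i; (void)dst; return bytes; }

/* Style 1: branch cascade; each test can carry a likely/unlikely hint. */
static size_t demo_copy_cascade(struct demo_iter *i, void *dst, size_t bytes)
{
	if (i->type == DEMO_UBUF || i->type == DEMO_IOVEC)
		return demo_step_user(i, dst, bytes);
	if (i->type == DEMO_BVEC)
		return demo_step_bvec(i, dst, bytes);
	return demo_step_kvec(i, dst, bytes);
}

/* Style 2: one indirect call; tidier, but costs an indirect branch. */
static size_t demo_copy_indirect(struct demo_iter *i, void *dst, size_t bytes)
{
	return i->step(i, dst, bytes);
}

int main(void)
{
	char buf[16];
	struct demo_iter it = { .type = DEMO_BVEC, .step = demo_step_bvec };

	printf("%zu %zu\n",
	       demo_copy_cascade(&it, buf, sizeof(buf)),
	       demo_copy_indirect(&it, buf, sizeof(buf)));
	return 0;
}

Both variants compute the same result; the difference is only in how the
per-type branch is resolved, which is where the hint annotations and the
indirect-call overhead mentioned in the thread come into play.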