On Sat, Jun 13, 2020 at 01:51:26PM +0100, Al Viro wrote:
> On Sat, Jun 13, 2020 at 05:34:32PM +0530, afzal mohammed wrote:
>
> > The observation is that the maximum number of pages reaching
> > copy_{from,to}_user() is 2, with the observed maximum of n (number
> > of bytes) being one page size. I think the C library cuts any read
> > or write larger than a page down to page size and invokes the
> > system call. The page count reaches 2 when 'n' crosses a page
> > boundary; this has been observed with small-size requests as well
> > as with requests of exactly page size (but not page aligned).
> >
> > Even with dd of various sizes >4K, the number of pages required to
> > be mapped never exceeds 2 (even with 'dd bs=1M').
> >
> > I have a worry (perhaps an unnecessary one): even if we improve
> > performance for large copy sizes, it might end up as sluggishness
> > in the user experience, because most user copy calls (hence a high
> > volume of them) are for a few bytes, and that is where the penalty
> > is higher. And a benchmark would not detect anything abnormal,
> > since usercopy is tested on large sizes.
> >
> > Quickly comparing boot time on a Beagle Bone White, boot time
> > increases by only 4%; perhaps this worry is irrelevant, but I
> > thought I would put it across.
>
> Do stat(2) of the same tmpfs file in a loop (on tmpfs, to eliminate
> the filesystem playing silly buggers). And I wouldn't expect anything
> good there...

Incidentally, what about get_user()/put_user()? _That_ is where it's
going to really hurt...
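
For concreteness, a minimal sketch of the stat(2) loop suggested above;
each stat() ends in a small copy_to_user() of struct stat, so the
per-call overhead should dominate. The path (assuming /dev/shm is
mounted as tmpfs, as on most distros) and the iteration count are
arbitrary illustrative choices:

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
	/* Assumption: /dev/shm is tmpfs; adjust the path if not. */
	const char *path = "/dev/shm/statbench";
	struct stat st;
	struct timespec t0, t1;
	long i, iters = 1000000;
	int fd;

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }
	close(fd);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		if (stat(path, &st)) { perror("stat"); return 1; }
	clock_gettime(CLOCK_MONOTONIC, &t1);

	/* Report total time and the amortized cost per stat() call. */
	double secs = (t1.tv_sec - t0.tv_sec)
		    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%ld stat() calls: %.3f s (%.0f ns/call)\n",
	       iters, secs, secs * 1e9 / iters);
	unlink(path);
	return 0;
}

Running that before and after the change should expose any regression
on small copies that the large-copy benchmarks would miss.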
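
To illustrate the get_user() side, here is a hypothetical kernel-style
helper, not taken from any real code path, showing the shape of access
in question: a single-word fetch that would pay the full per-call
mapping cost under the proposed scheme.

#include <linux/uaccess.h>
#include <linux/errno.h>

/* Hypothetical example: read one int from userspace. get_user()
 * returns 0 on success or -EFAULT on a bad user address; under the
 * proposed scheme even this one-word access would go through the
 * page-mapping path. */
static int read_user_int(int __user *uptr, int *out)
{
	int val;

	if (get_user(val, uptr))
		return -EFAULT;
	*out = val;
	return 0;
}

Single-word accesses like this are scattered all over hot syscall
paths, which is why the per-call overhead matters far more there than
for bulk copies.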