Re: [PATCH 3/3] btrfs: Avoid live-lock in search_ioctl() on hardware with sub-page faults

On Wed, Nov 24, 2021 at 08:03:58PM +0000, Matthew Wilcox wrote:
> On Wed, Nov 24, 2021 at 07:20:24PM +0000, Catalin Marinas wrote:
> > +++ b/fs/btrfs/ioctl.c
> > @@ -2223,7 +2223,8 @@ static noinline int search_ioctl(struct inode *inode,
> >  
> >  	while (1) {
> >  		ret = -EFAULT;
> > -		if (fault_in_writeable(ubuf + sk_offset, *buf_size - sk_offset))
> > +		if (fault_in_exact_writeable(ubuf + sk_offset,
> > +					     *buf_size - sk_offset))
> >  			break;
> >  
> >  		ret = btrfs_search_forward(root, &key, path, sk->min_transid);
> 
> Couldn't we avoid all of this nastiness by doing ...

I made a similar attempt initially but concluded that it doesn't work:

https://lore.kernel.org/r/YS40qqmXL7CMFLGq@xxxxxxx

> @@ -2121,10 +2121,9 @@ static noinline int copy_to_sk(struct btrfs_path *path,
>                  * problem. Otherwise we'll fault and then copy the buffer in
>                  * properly this next time through
>                  */
> -               if (copy_to_user_nofault(ubuf + *sk_offset, &sh, sizeof(sh))) {
> -                       ret = 0;
> +               ret = __copy_to_user_nofault(ubuf + *sk_offset, &sh, sizeof(sh));
> +               if (ret)

There is no requirement for the arch implementation to be exact and
copy the maximum number of bytes possible. It can fail early while
there are still some bytes left that would not fault. The only
requirement is that, when restarted from the position it reported, it
makes some progress (on arm64 it copies at least one extra byte).
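
To make that concrete, here is a toy model of the permitted semantics
(a userspace sketch only, not the actual arm64 copy routine; GRANULE
and FAULT_AT are made up for illustration). Like copy_to_user(), it
returns the number of bytes *not* copied:

#include <stddef.h>

#define GRANULE		16	/* sub-page granularity, e.g. an MTE tag granule */
#define FAULT_AT	37	/* first buffer byte that genuinely faults */

/* Copy len bytes starting at buffer offset off; return bytes NOT copied. */
static size_t toy_copy(size_t off, size_t len)
{
	size_t avail = off < FAULT_AT ? FAULT_AT - off : 0;
	size_t copied;

	if (avail > len)
		avail = len;
	/* Inexact: allowed to stop at a granule boundary short of the
	 * real fault... */
	copied = avail & ~(size_t)(GRANULE - 1);
	/* ...but a restart from the reported position must make some
	 * progress, here at least one byte. */
	if (copied == 0 && avail > 0)
		copied = 1;
	return len - copied;
}

toy_copy(0, 64) reports 32 bytes left even though bytes 32..36 would
not fault; restarted at offset 32 it copies one more byte, and so on
until the real fault at offset 37. Called again from offset 0,
however, it returns the same 32 every time.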

>                         goto out;
> -               }
>  
>                 *sk_offset += sizeof(sh);
> @@ -2196,6 +2195,7 @@ static noinline int search_ioctl(struct inode *inode,
>         int ret;
>         int num_found = 0;
>         unsigned long sk_offset = 0;
> +       unsigned long next_offset = 0;
>  
>         if (*buf_size < sizeof(struct btrfs_ioctl_search_header)) {
>                 *buf_size = sizeof(struct btrfs_ioctl_search_header);
> @@ -2223,7 +2223,8 @@ static noinline int search_ioctl(struct inode *inode,
>  
>         while (1) {
>                 ret = -EFAULT;
> -               if (fault_in_writeable(ubuf + sk_offset, *buf_size - sk_offset))
> +               if (fault_in_writeable(ubuf + sk_offset + next_offset,
> +                                       *buf_size - sk_offset - next_offset))
>                         break;
>  
>                 ret = btrfs_search_forward(root, &key, path, sk->min_transid);
> @@ -2235,11 +2236,12 @@ static noinline int search_ioctl(struct inode *inode,
>                 ret = copy_to_sk(path, &key, sk, buf_size, ubuf,
>                                  &sk_offset, &num_found);
>                 btrfs_release_path(path);
> -               if (ret)
> +               if (ret > 0)
> +                       next_offset = ret;

So after this point, ubuf+sk_offset+next_offset is writeable
according to fault_in_writeable(). If copy_to_user() were attempted
starting at ubuf+sk_offset+next_offset, all would be fine, but
copy_to_sk() restarts the copy from ubuf+sk_offset, so it returns
exactly the same ret as in the previous iteration.
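
Plugging that model into the proposed loop shows the fixpoint
directly (still a userspace toy; this main() builds on the toy_copy()
sketch above):

#include <stdio.h>

int main(void)
{
	size_t buf_size = 64, sk_offset = 0, next_offset = 0, ret;
	int i;

	for (i = 0; i < 4; i++) {	/* stands in for while (1) */
		/*
		 * fault_in_writeable(ubuf + sk_offset + next_offset, ...)
		 * would succeed here: the page is present and writeable,
		 * and the sub-page fault at FAULT_AT is invisible to it.
		 */

		/* ...but the copy restarts from sk_offset every time: */
		ret = toy_copy(sk_offset, buf_size - sk_offset);
		if (ret > 0)
			next_offset = ret;

		/* Prints "ret=32 next_offset=32" on every iteration:
		 * nothing ever advances, i.e. the loop live-locks. */
		printf("iter %d: ret=%zu next_offset=%zu\n",
		       i, ret, next_offset);
	}
	return 0;
}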

-- 
Catalin
