Re: [PATCH 2/3] Revert "bpf: Fix issue in verifying allow_ptr_leaks"

On Thu, Sep 14, 2023 at 10:24 AM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 9/14/23 6:20 PM, Alexei Starovoitov wrote:
> > On Wed, Sep 13, 2023 at 5:30 AM Luis Gerhorst <gerhorst@xxxxxxxxx> wrote:
> >>
> >> This reverts commit d75e30dddf73449bc2d10bb8e2f1a2c446bc67a2.
> >>
> >> To mitigate Spectre v1, the verifier relies on static analysis to deduce
> >> constant pointer bounds, which can then be enforced by rewriting pointer
> >> arithmetic [1] or index masking [2]. This relies on the fact that every
> >> memory region to be accessed has a static upper bound and that every byte
> >> below that bound is accessible. The verifier can only rewrite pointer
> >> arithmetic or insert masking instructions to mitigate Spectre v1 if such
> >> a static upper bound, below which every access is valid, can be given.
> >>
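> >> For reference, the masking sequence the verifier emits for pointer
> >> arithmetic [1] looks roughly like this sketch (using "rX" for the
> >> verifier-internal scratch register; the exact instruction sequence
> >> differs between kernel versions):
> >>
> >> // sanitized form of "ptr += off" with a verifier-known alu_limit
> >> rX = alu_limit
> >> rX -= off
> >> rX |= off      // sign bit set iff off < 0 or off > alu_limit
> >> rX = -rX
> >> rX s>>= 63     // all-ones if off was in bounds, all-zeroes otherwise
> >> rX &= off      // off if in bounds, 0 otherwise
> >> ptr += rX      // replaces the original "ptr += off"
> >>
> >> Without a constant alu_limit, no such sequence can be emitted.
> >>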
> >> When allowing packet pointer comparisons, this introduces a way for the
> >> program to effectively construct an accessible pointer for which no
> >> static upper bound is known. Intuitively, this is obvious: a packet
> >> might be of any size, and therefore 0 is the only statically known
> >> upper bound below which every byte is always accessible (i.e., there
> >> is none).
> >>
> >> To clarify, the problem is not that comparing two pointers can be used
> >> for pointer leaks in the same way that comparing a pointer against a
> >> known scalar can be. That is because the "secret" components of the
> >> addresses cancel each other out if the pointers are into the same
> >> region.
> >>
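> >> For example, for two pointers p1 = pkt_base + i and p2 = pkt_base + j
> >> into the same packet, the comparison p1 < p2 reduces to i < j: the
> >> secret pkt_base cancels out, so the comparison result reveals nothing
> >> about the address itself.
> >>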
> >> With [3] applied, the following malicious BPF program can be loaded into
> >> the kernel without CAP_PERFMON:
> >>
> >> r2 = *(u32 *)(r1 + 76) // data
> >> r3 = *(u32 *)(r1 + 80) // data_end
> >> r4 = r2
> >> r4 += 1
> >> if r4 > r3 goto exit
> >> r5 = *(u8 *)(r2 + 0) // speculatively read secret
> >> r5 &= 1 // choose bit to leak
> >> // ... side channel to leak secret bit
> >> exit:
> >> // ...
> >>
> >> This is jited to the following amd64 code, which still contains the
> >> gadget:
> >>
> >>     0:   endbr64
> >>     4:   nopl   0x0(%rax,%rax,1)
> >>     9:   xchg   %ax,%ax
> >>     b:   push   %rbp
> >>     c:   mov    %rsp,%rbp
> >>     f:   endbr64
> >>    13:   push   %rbx
> >>    14:   mov    0xc8(%rdi),%rsi // data
> >>    1b:   mov    0x50(%rdi),%rdx // data_end
> >>    1f:   mov    %rsi,%rcx
> >>    22:   add    $0x1,%rcx
> >>    26:   cmp    %rdx,%rcx
> >>    29:   ja     0x000000000000003f // branch to mispredict
> >>    2b:   movzbq 0x0(%rsi),%r8 // speculative load of secret
> >>    30:   and    $0x1,%r8 // choose bit to leak
> >>    34:   xor    %ebx,%ebx
> >>    36:   cmp    %rbx,%r8
> >>    39:   je     0x000000000000003f // branch based on secret
> >>    3b:   imul   $0x61,%r8,%r8 // leak using port contention side channel
> >>    3f:   xor    %eax,%eax
> >>    41:   pop    %rbx
> >>    42:   leaveq
> >>    43:   retq
> >>
> >> Here I'm using a port contention side channel because storing the secret
> >> to the stack causes the verifier to insert an lfence for unrelated
> >> reasons (SSB mitigation) which would terminate the speculation.
> >>
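> >> For illustration, a variant that spills the secret to the stack:
> >>
> >> r5 = *(u8 *)(r2 + 0)     // speculative load of secret
> >> *(u64 *)(r10 - 8) = r5   // stack spill; the verifier's Spectre v4
> >>                          // mitigation emits BPF_NOSPEC (jited to
> >>                          // lfence on x86) around such spills
> >>
> >> would have its speculation window terminated before the secret can be
> >> transmitted, hence the port contention channel instead.
> >>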
> >> As Daniel already pointed out to me, data_end is even attacker-
> >> controlled, as one could send many packets of sufficient length to
> >> train the branch predictor into assuming data + 1 > data_end will
> >> never be true. When the attacker then sends a packet with insufficient
> >> data, the Spectre v1 gadget leaks the chosen bit of some value that
> >> lies behind data_end.
> >
> > The above analysis is correct, but unlike traditional spec_v1
> > the attacker doesn't control data/data_end.
> > The attacker can send many large packets to train that data + X < data_end
> > and then send a small packet where the CPU will mispredict that branch
> > and data + X will speculatively read past data_end,
> > so the attacker can extract a bit past data_end,
> > but data/data_end themselves cannot be controlled.
> > So whether this bit is 0 or 1 has no bearing.
> > The attack cannot be repeated for the same location.
> > The attacker can read one bit 8 times in a row, and all of them
> > will come from different locations in memory.
> > That is the same as reading 8 random bits from 8 random locations.
> > Hence I don't think this revert is necessary.
> > I don't believe you can craft an actual exploit.
> >
> > Your patch 3 says:
> >         /* Speculative access to be prevented. */
> > +       char secret = *((char *) iph);
> > +
> > +       /* Leak the first bit of the secret value that lies behind data_end to a
> > +        * SMP sibling thread that also executes imul instructions. If the bit
> > +        * is 1, the sibling will experience a slowdown. */
> > +       long long x = secret;
> > +       if (secret & 1) {
> > +               x *= 97;
> > +       }
> >
> > the comment is correct, but speculative access alone is not enough
> > to leak data.
>
> What you write makes sense; it will probably be hard to craft an exploit.
> Where it's a bit more of an unknown to me is whether struct skb_shared_info
> could have e.g. destructor_arg rather static (at least the upper address
> bits), so that you could leak out kernel addresses.

You mean since skb_shared_info is placed after skb->end,
and in the zero-copy case destructor_arg may be initialized with the same
kernel pointer for multiple skbs?
The attacker cannot construct the address from data_end.
The verifier explicitly prohibits any ALU with PTR_TO_PACKET_END.
But the attacker can do skb->data + X.
The idea is that they can train the branch to mispredict with
a large packet and then send a small one, so that the shared_info
after skb->end has the same uarg pointer in all packets?
So every skb->data + X is a different location, but all of them
point to data that has uarg == destructor_arg?
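
For context, a simplified picture of the layout in question (adapted
from include/linux/skbuff.h; details vary by version and config):

struct skb_shared_info {
        /* ... frag list, zerocopy flags, etc. ... */

        /* must be last field, see pskb_expand_head() */
        void *destructor_arg;
};

/* shared info lives directly past the packet data buffer: */
#define skb_shinfo(SKB) ((struct skb_shared_info *)(skb_end_pointer(SKB)))

So a mispredicted bounds check on skb->data + X can speculatively reach
destructor_arg, which msg_zerocopy reuses as the uarg pointer across skbs.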

That would be feasible in theory, but in order to speculate the loads
the branch mispredict has to be reliable.
The spec v1 attack requires that one of the two loads feeding
into the compare operation be slow.
In this case both the data and data_end loads are going to be fast.
The attacker cannot evict skb->data or skb->data_end from the cache.
Remember that we rearranged the 'max_entries' field in struct bpf_map
specifically to be in a different cache line vs the fields
controlled by user space. That was a necessary part of the spec v1 attack.
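
Roughly, simplified from include/linux/bpf.h (the exact fields have
changed over time):

struct bpf_map {
        const struct bpf_map_ops *ops;
        /* ... read-mostly fields used on the fast path ... */
        u32 max_entries;

        /* Fields bumped from user-space-reachable paths live in a
         * later cache line to avoid false sharing:
         */
        atomic64_t refcnt ____cacheline_aligned;
        atomic64_t usercnt;
        /* ... */
};

Without that separation, user space could keep the cache line holding
max_entries bouncing, making the bounds-check load slow enough to open
a reliable speculation window.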

So I still believe this revert is unnecessary and this speculative
execution is not exploitable.



