Re: [PATCH] bpf: optimize constant blinding

On 14/06/2019 16:13, Jiong Wang wrote:
> Just an update to keep people posted.
>
> Working on the linked-list-based approach; the implementation looks like
> the following, mostly a combination of the discussions that have happened
> and Naveen's patch.  Please feel free to comment.
>
>   - Use the reserved opcode 0xf0 with BPF_ALU as a new pseudo insn code
>     BPF_LIST_INSN.  (0xf0 is also used with the BPF_JMP class for tail calls.)
>
>   - Introduce patch pool into bpf_prog->aux to keep all patched insns.
It's not clear to me what the point of the patch pool is, rather than just
 doing the patch straight away.  The idea is that while the prog is
 half-patched, insn idxs / jump offsets / etc. all still refer to the original
 locations; it's just that some of those locations might now be a list rather
 than a single insn.  Concretely:

struct bpf_insn_list { struct bpf_insn insn; struct list_head list; };
orig prog: array of struct bpf_insn[3] = {cond jump +1, insn_foo, exit};
at start of day the verifier converts this into an array of struct
 bpf_insn_list[3], each with new.insn = orig.insn; INIT_LIST_HEAD(&new.list).
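Roughly, as a sketch (wrap_prog is just a name I'm making up here, and it
assumes the usual <linux/bpf.h>, <linux/list.h>, <linux/slab.h> context):

static struct bpf_insn_list *wrap_prog(const struct bpf_insn *insns, int len)
{
	struct bpf_insn_list *prog;
	int i;

	prog = kcalloc(len, sizeof(*prog), GFP_KERNEL);
	if (!prog)
		return NULL;
	for (i = 0; i < len; i++) {
		prog[i].insn = insns[i];	/* new.insn = orig.insn */
		INIT_LIST_HEAD(&prog[i].list);	/* no patched insns yet */
	}
	return prog;
}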

Say the verifier decides to patch insn_foo into {insn_bar, insn_baz};
so we allocate struct bpf_insn_list *x; and then do:
insn_foo.insn = insn_bar;
x->insn = insn_baz;
list_add_tail(&x->list, &insn_foo.list);
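
Or as a general helper, patching one node into cnt insns (patch_insn_list is
again a made-up name, just to illustrate the idea):

static int patch_insn_list(struct bpf_insn_list *node,
			   const struct bpf_insn *repl, int cnt)
{
	int i;

	node->insn = repl[0];		/* first repl insn replaces the original */
	for (i = 1; i < cnt; i++) {	/* the rest hang off node's list, in order */
		struct bpf_insn_list *x = kzalloc(sizeof(*x), GFP_KERNEL);

		if (!x)
			return -ENOMEM;
		x->insn = repl[i];
		list_add_tail(&x->list, &node->list);
	}
	return 0;
}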

the cond jump +1 is _not_ changed at this point, because it counts
 bpf_insn_lists and not insns.

Then at end of day, to linearise, we just have int new_idx[3]:
 - populate it by iterating over the array of bpf_insn_list, adding on the
   list length each time (so here we get {0, 1, 3});
 - allocate an output prog of the total length (here 4, which that same
   pass calculates as effectively the off-the-end entry of new_idx);
 - iterate again to write out the output prog; when we reach the 'cond
   jump +1' at old_idx 0, we compute (new_idx[2] - new_idx[0] - 1) = 2, so
   it becomes 'cond jump +2'.
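
In code that might look roughly like the below (jump handling deliberately
oversimplified - BPF_JMP32, call/exit and jumps inside patched-in insns
would all need proper treatment):

/* pass 1: new_idx[i] = output index of old insn i; returns total length */
static int calc_new_idx(struct bpf_insn_list *prog, int len, int *new_idx)
{
	struct bpf_insn_list *x;
	int i, out = 0;

	for (i = 0; i < len; i++) {
		new_idx[i] = out++;
		list_for_each_entry(x, &prog[i].list, list)
			out++;
	}
	return out;			/* 4 in the example above */
}

/* pass 2: write out insns, recomputing jump offsets via new_idx[] */
static void emit_prog(struct bpf_insn_list *prog, int len,
		      const int *new_idx, struct bpf_insn *out)
{
	struct bpf_insn_list *x;
	int i, o = 0;

	for (i = 0; i < len; i++) {
		struct bpf_insn insn = prog[i].insn;

		if (BPF_CLASS(insn.code) == BPF_JMP && insn.off)
			/* old target was i + 1 + off; recount in new idxs */
			insn.off = new_idx[i + 1 + insn.off] - new_idx[i] - 1;
		out[o++] = insn;
		list_for_each_entry(x, &prog[i].list, list)
			out[o++] = x->insn;
	}
}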


This seems to me like it'd be easier to work with than making the whole
 program one big linked list (where you have to separately keep track of
 what each insn's idx was before patching).  But I haven't tried to code
 it yet, so anything could happen ;-)  It does rely on the assumption
 that only original insns (or the first insn of a list they get patched
 with) will ever be jump targets or otherwise need their insn_idx taken.

-Ed
