On 27/04/2019 04:11, Alexei Starovoitov wrote:
> instead of converting all insns into lists of 1 before all patching
> it can be done on demand:
> convert from insn to list only when patching is needed.
Makes sense.

> Patched insn becomes a pointer to a block of new insns.
> We have reserved opcodes to recognize such situation.
It's not clear to me where you can fit everything, though.  The pointer is
64 bits, which is the same size as struct bpf_insn.  Are you suggesting
relying on kernel pointers always starting with 0xff?

> The question is how to linearise it once at the end?
Walk the old prog once to calculate out_insn_idx for each in_insn (since we
will only ever be jumping to the first insn of a list, or to a non-list
insn, that's all we need), as well as out_len.
Allocate enough pages for out_len (let's not try to do any of this
in-place, that would be painful), then walk the old prog to copy it
insn-by-insn into the new one, recalculating any jump offsets by looking up
the dest insn's out_insn_idx and subtracting our own out_insn_idx (plus an
offset if we're not the first insn in the list, of course).
While we're at it we can also fix up e.g. linfo[].insn_off: if in_insn_idx
matches linfo[li_idx].insn_off, then set linfo[li_idx++].insn_off =
out_insn_idx.
If we still need aux_data at this point we can copy that across too.

Runtime is O(out_len), and it gets rid of all the adjusts on
patch_insn_single: branches, linfo, subprog_starts, aux_data.

Have I missed anything?  If I have time I'll put together an RFC patch in
the next few days.

-Ed
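
P.S. here's a rough user-space sketch of the two-pass scheme above, in case
it helps.  All the names (struct slot, calc_out_idx, linearise) are invented
for illustration and the structures are much simpler than the verifier's
real ones; in particular it assumes jump offsets in patch bodies stay
relative to the slot's in_insn index, which the real patch would have to
pin down properly.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy insn: a jump flag, a pc-relative offset (dest = pc + off + 1,
 * as in BPF), and an opaque opcode for identification. */
struct insn { int is_jump; int off; int op; };

/* A slot in the old prog: either one unpatched insn, or a pointer to a
 * list of replacement insns (standing in for the reserved-opcode trick). */
struct slot {
	struct insn *list;	/* NULL => unpatched */
	size_t list_len;	/* length of list if patched */
	struct insn one;	/* the insn if unpatched */
};

/* Pass 1: compute out_insn_idx for each in_insn and return out_len.
 * One index per slot suffices, since jumps only ever target the first
 * insn of a list (or a non-list insn). */
static size_t calc_out_idx(const struct slot *prog, size_t in_len,
			   size_t *out_idx)
{
	size_t i, out = 0;

	for (i = 0; i < in_len; i++) {
		out_idx[i] = out;
		out += prog[i].list ? prog[i].list_len : 1;
	}
	return out;
}

/* Pass 2: copy insn-by-insn into the preallocated output, rewriting each
 * jump offset as out_idx[dest] - our own out idx - 1 (the extra j term is
 * the offset for not being the first insn in a list). */
static void linearise(const struct slot *prog, size_t in_len,
		      const size_t *out_idx, struct insn *out)
{
	size_t i, j;

	for (i = 0; i < in_len; i++) {
		const struct insn *src = prog[i].list ? prog[i].list
						      : &prog[i].one;
		size_t n = prog[i].list ? prog[i].list_len : 1;

		for (j = 0; j < n; j++) {
			struct insn ins = src[j];

			if (ins.is_jump) {
				size_t dest_in = i + ins.off + 1;

				ins.off = (int)out_idx[dest_in] -
					  (int)(out_idx[i] + j) - 1;
			}
			out[out_idx[i] + j] = ins;
		}
	}
}
```

Fixing up linfo[].insn_off would be one more comparison inside the outer
loop of pass 2, exactly as described above.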