On Wed, May 31, 2023 at 12:30 PM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
>
> On Wed, 2023-05-31 at 11:29 -0700, Andrii Nakryiko wrote:
> > On Wed, May 31, 2023 at 10:21 AM Eduard Zingerman <eddyz87@xxxxxxxxx> wrote:
> > >
> > > On Tue, 2023-05-30 at 14:37 -0700, Andrii Nakryiko wrote:
> > > [...]
> > > > Also, it might make sense to drop SCALAR register IDs as soon as we
> > > > have only one instance of it left (e.g., if "paired" register was
> > > > overwritten already). I.e., aggressively drop IDs when they become
> > > > useless. WDYT?
> > >
> > > I added modification which resets sole scalar IDs to zero before
> > > states comparison, it shows some speedup but is still slow:
> > >
> > > Filter        | Number of programs | Number of programs
> > >               | patch #1           | patch #1 + sole scalar ID pruning
> > > ----------------------------------------------------------------------
> > > states_pct>10 | 40                 | 40
> > > states_pct>20 | 20                 | 19
> > > states_pct>30 | 15                 | 13
> > > states_pct>40 | 11                 | 8
> > >
> > > (Out of 177 programs).
> > >
> > > I'll modify mark_chain_precision() to propagate precision marks for
> > > find_equal_scalars(), so that it could be compared to current patch #3
> > > in terms of code complexity and verification performance.
> > >
> > > If you have any thoughts regarding my previous email, please share.
> > >
> >
> > Yep, I do. Given SCALAR registers with the same ID are meant to "share
> > the destiny", shouldn't it be required that when we mark r6 as precise
> > we should automatically mark linked r7 as precise at the same point.
> > So in your example:
> >
> > 7: r9 += r6
> >
> > should be where we request both r6 and r7 (and whatever other register
> > has the same ID) to be marked as precise. It should be very easy to
> > implement, especially given my recent refactoring with
> > mark_chain_precision_batch.
>
> Ok, I'll modify the `struct backtrack_state` as follows:
>
>     struct backtrack_state {
>             struct bpf_verifier_env *env;
>             u32 frame;
>             u32 reg_masks[MAX_CALL_FRAMES];
>     +       u32 reg_ids[MAX_CALL_FRAMES];
>             u64 stack_masks[MAX_CALL_FRAMES];
>     +       u64 stack_ids[MAX_CALL_FRAMES];
>     };
>
> And add corresponding logic to backtrack_insn().

I don't see why you need to change anything about backtrack_state at
all, so we are not on the same page. What I propose is that in
mark_chain_precision(), when given regno, we go over all *current*
registers with the same ID and set all of them as "to be marked
precise", and then call mark_chain_precision_batch(). See
propagate_precision() for how we do similar stuff for bulk precision
propagation. (A rough sketch of this idea is appended at the end of
this message.)

backtrack_state shouldn't need to know about IDs, unless I'm missing
something.

> > The question I have (and again, haven't spent any time thinking about
> > any other corner cases, sorry) is whether that alone would be a proper
> > fix?
>
> As far as I understand, in terms of correctness it would be a proper fix.
> In terms of performance, I hope that it would be enough but we will see.

Ok, let's try that then, before we commit to the u32_hashset stuff.

> > As for this u32_hashset, it just feels like a big overkill, tbh. If we
> > have to do something like that, wouldn't it be better to, say, set
> > highest bit in reg->id (for all linked registers, of course) to mark
> > it as "used for range checks" instead of maintaining a separate check?
>
> Unfortunately no, because this ID change would have to be propagated
> backwards. It was the first implementation I tried.
>
> > But just the whole idea of keeping track of some special circumstances
> > under which IDs are meaningful seems wrong... All this logic is
> > complicated, now we are adding another layer of complexity on top. And
> > the complexity is not in the code, it's in thinking about all possible
> > scenarios and their interactions.
>
> I agree that adding more layers is a complication in itself.
> Thank you for your input.
>
> > > [...]
>
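Purely for illustration, here is a rough sketch of the proposal above:
mark every *current* SCALAR register that shares the ID of regno, then
do a single batch precision propagation. This is not actual verifier
code: the function name mark_chain_precision_for_id() is made up for
this sketch, and the env->bt / bt_set_frame_reg() way of recording the
batch request is only assumed from the recent precision-backtracking
refactoring, so real helper names and signatures may differ.

static int mark_chain_precision_for_id(struct bpf_verifier_env *env, int regno)
{
	struct bpf_verifier_state *st = env->cur_state;
	struct bpf_func_state *func;
	struct bpf_reg_state *reg;
	u32 id = st->frame[st->curframe]->regs[regno].id;
	int i, fr;

	/* no linked registers -> regular single-register precision mark */
	if (!id)
		return mark_chain_precision(env, regno);

	/* request a precision mark for every current register sharing the ID */
	for (fr = 0; fr <= st->curframe; fr++) {
		func = st->frame[fr];
		for (i = 0; i < BPF_REG_FP; i++) {
			reg = &func->regs[i];
			if (reg->type != SCALAR_VALUE || reg->id != id)
				continue;
			/* bt_set_frame_reg()/env->bt are assumed from the
			 * backtrack_state refactoring; the exact helper for
			 * populating the batch request may differ
			 */
			bt_set_frame_reg(&env->bt, fr, i);
		}
	}

	/* one backward pass propagates precision for all requested registers */
	return mark_chain_precision_batch(env);
}

With something along these lines, the call site handling the
"7: r9 += r6" example would mark both r6 and its linked r7 as precise
at the same point, without backtrack_state having to know anything
about IDs.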