Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx> wrote:
> On Mon, Feb 15, 2016 at 01:54:46PM +0100, Florian Westphal wrote:
> > Patrick McHardy <kaber@xxxxxxxxx> wrote:
> >
> > Hi Patrick
> >
> > > On 04.02, Florian Westphal wrote:
> > > > In fact, doing the scaling via precision_type seems to
> > > > be a lot simpler as then its applied only in this one case of the
> > > > prandom META_TEMPLATE while keeping this detail limited to meta.c.
> > >
> > > Yes, on second thought I agree, sorry. Maybe the work is not lost though,
> > > what does seem to make sense is to use a float basetype and derive your
> > > probability type from that.
> >
> > I can do this.
> > However, I don't currently see any other type that could be derived from
> > that. Would you be OK with leaving things as-is and adding a float
> > type later on once a use case presents itself?
>
> I also think you can add a new TYPE_FLOAT.

Yes, but what I'm asking is: 'What for'?

> Then, from the evaluation step make sure that META_PRANDOM is under
> the valid limits (0, 1].

That's not so simple.  META_PRANDOM is scaled so that 1.0 is represented
as UINT32_MAX and 0.0 as 0.  We can't do that for TYPE_FLOAT, since it
would mean the type could not represent values > 1.0.

Doing the scaling in the eval step is possible, but it's a bit ugly.

> These TYPE_* will be part of the public API of the high level library
> at some point, they describe the datatype that are used in set
> definitions in the kernel (through the NFTA_SET_DATATYPE netlink
> attribute and the new NFTA_SET_USERDATA through TLVs).

I understand, but the proposed float and probability types are very
different and allow for very little re-use.

For example, when printing, the probability type has to undo the scaling
so that we print 1.0 instead of $bignum.
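
[Editor's note: the following is a minimal sketch of the scaling discussed
above, not the actual nft/nftables code; the helper names prob_to_scaled()
and scaled_to_prob() are hypothetical.  It illustrates how a probability in
(0, 1] maps onto the u32 range with 1.0 == UINT32_MAX, and why printing has
to undo that scaling to show 1.0 rather than a large integer.]

        /*
         * Illustration only -- hypothetical helpers, not the nft code.
         * A probability p in (0, 1] is mapped onto the u32 range so that
         * 1.0 == UINT32_MAX and 0.0 == 0, matching the representation
         * used for META_PRANDOM comparisons.
         */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t prob_to_scaled(double p)
        {
                /* clamp to the valid range before scaling */
                if (p <= 0.0)
                        return 0;
                if (p >= 1.0)
                        return UINT32_MAX;
                return (uint32_t)(p * UINT32_MAX);
        }

        static double scaled_to_prob(uint32_t v)
        {
                /* undo the scaling, e.g. when printing "1.0" instead of $bignum */
                return (double)v / UINT32_MAX;
        }

        int main(void)
        {
                uint32_t v = prob_to_scaled(0.25);

                printf("0.25 -> %u -> %.2f\n", v, scaled_to_prob(v));
                return 0;
        }

[Because 1.0 already occupies the top of the u32 range in this encoding, a
generic TYPE_FLOAT built on the same representation could not express values
greater than 1.0, which is the limitation described above.]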