Re: [PATCH v4 1/2] netfilter: Introduce new 64-bit helper functions

On Thu, Aug 15, 2019 at 11:46:04AM +0200, Ander Juaristi wrote:
> 
> 
> On 13/8/19 20:58, Pablo Neira Ayuso wrote:
> > On Tue, Aug 13, 2019 at 08:38:19PM +0200, Ander Juaristi wrote:
> > [...]
> >> diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
> >> index 9b624566b82d..aa33ada8728a 100644
> >> --- a/include/net/netfilter/nf_tables.h
> >> +++ b/include/net/netfilter/nf_tables.h
> >> @@ -2,6 +2,7 @@
> >>  #ifndef _NET_NF_TABLES_H
> >>  #define _NET_NF_TABLES_H
> >>  
> >> +#include <asm/unaligned.h>
> >>  #include <linux/list.h>
> >>  #include <linux/netfilter.h>
> >>  #include <linux/netfilter/nfnetlink.h>
> >> @@ -119,6 +120,16 @@ static inline void nft_reg_store8(u32 *dreg, u8 val)
> >>  	*(u8 *)dreg = val;
> >>  }
> >>  
> >> +static inline void nft_reg_store64(u32 *dreg, u64 val)
> >> +{
> >> +	put_unaligned(val, (u64 *)dreg);
> >> +}
> >> +
> >> +static inline u64 nft_reg_load64(u32 *sreg)
> >> +{
> >> +	return get_unaligned((u64 *)sreg);
> >> +}
> > 
> > Please add these function definitions below _load16() and _store16().
> 
> You mean you'd like them ordered from smallest to largest?
> 
> nft_reg_store8
> nft_reg_load8
> nft_reg_store16
> nft_reg_load16
> nft_reg_store64
> nft_reg_load64

Yes, please.
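
For reference, the helper block in include/net/netfilter/nf_tables.h would then read roughly as below. This is only a sketch: the 8- and 16-bit bodies are reproduced from the existing header to the best of my reading, and the new 64-bit helpers use put_unaligned()/get_unaligned() because registers are laid out as an array of u32, so a 64-bit slot is not guaranteed to be 8-byte aligned.

static inline void nft_reg_store8(u32 *dreg, u8 val)
{
	*dreg = 0;
	*(u8 *)dreg = val;
}

static inline u8 nft_reg_load8(u32 *sreg)
{
	return *(u8 *)sreg;
}

static inline void nft_reg_store16(u32 *dreg, u16 val)
{
	*dreg = 0;
	*(u16 *)dreg = val;
}

static inline u16 nft_reg_load16(u32 *sreg)
{
	return *(u16 *)sreg;
}

static inline void nft_reg_store64(u32 *dreg, u64 val)
{
	/* registers are only u32-aligned, so use the unaligned accessors */
	put_unaligned(val, (u64 *)dreg);
}

static inline u64 nft_reg_load64(u32 *sreg)
{
	return get_unaligned((u64 *)sreg);
}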

> >> +
> >>  static inline u16 nft_reg_load16(u32 *sreg)
> >>  {
> >>  	return *(u16 *)sreg;
> >> diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
> >> index e06318428ea0..a25a222d94c8 100644
> >> --- a/net/netfilter/nft_byteorder.c
> >> +++ b/net/netfilter/nft_byteorder.c
> >> @@ -43,14 +43,14 @@ void nft_byteorder_eval(const struct nft_expr *expr,
> >>  		switch (priv->op) {
> >>  		case NFT_BYTEORDER_NTOH:
> > 
> > This is network-to-host byteorder.
> > 
> >>  			for (i = 0; i < priv->len / 8; i++) {
> >> -				src64 = get_unaligned((u64 *)&src[i]);
> >> -				put_unaligned_be64(src64, &dst[i]);
> >> +				src64 = nft_reg_load64(&src[i]);
> >> +				nft_reg_store64(&dst[i], cpu_to_be64(src64));
> > 
> > This looks inverted; it should be:
> > 
> > 				nft_reg_store64(&dst[i], be64_to_cpu(src64));
> > 
> > right?
> > 
> >>  			}
> >>  			break;
> >>  		case NFT_BYTEORDER_HTON:
> > 
> > Here, host-to-network byteorder:
> > 
> >>  			for (i = 0; i < priv->len / 8; i++) {
> >> -				src64 = get_unaligned_be64(&src[i]);
> >> -				put_unaligned(src64, (u64 *)&dst[i]);
> >> +				src64 = be64_to_cpu(nft_reg_load64(&src[i]));
> > 
> > and this:
> > 
> >                                 src64 = (__force __u64)
> >                                         cpu_to_be64(nft_reg_load64(&src[i]));
> > 
> 
> My bad. Yes, I've just fixed them.

Great.
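
With both fixes folded in, the 64-bit branch of nft_byteorder_eval() in net/netfilter/nft_byteorder.c would read roughly as below. This is a sketch assembled from the suggestions above, not the committed hunk; src, dst, src64 and i keep the types they already have in that function.

		switch (priv->op) {
		case NFT_BYTEORDER_NTOH:
			for (i = 0; i < priv->len / 8; i++) {
				/* register holds network order, store host order */
				src64 = nft_reg_load64(&src[i]);
				nft_reg_store64(&dst[i], be64_to_cpu(src64));
			}
			break;
		case NFT_BYTEORDER_HTON:
			for (i = 0; i < priv->len / 8; i++) {
				/* register holds host order, store network order */
				src64 = (__force __u64)
					cpu_to_be64(nft_reg_load64(&src[i]));
				nft_reg_store64(&dst[i], src64);
			}
			break;
		}

The (__force __u64) cast is only there to keep sparse quiet about assigning the __be64 result of cpu_to_be64() to a plain u64 register value.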


