Re: Regarding floating point, exponential and power calculation in kernel. (fwd)


Hi
	Sorry to intrude, but can we at least do the basic operations (+ - * /)
on FP inside the kernel, or is even that forbidden?

and yes, thanks for the valuable information.

manojs


> On Mon, Mar 19, 2001 at 01:57:53PM +0530, Chandrashekhar S wrote:
> > Currently I am writing a module which will be inserted in IP layer. In my
> > module I need to calculate exp and pow of some numbers.
> > My problem is how to calculate exponential and power functions, because I
> > cannot use the standard library functions exp() and pow() in kernel space.
> > Am I supposed to write my own exponential and power functions in that case
> > in the kernel?
> > If so kindly help me by giving algorithms or source code for calculating
> > exponential and power functions.
> 
>         Repeat after me: NO FLOATING POINT IN THE KERNEL.
>         (I will point you to a way to do it below, unless you
>          can be persuaded to do it in smarter way.)
> 
>         Now, what exactly are the whole equations you want to use ?
> 
>         What are the value spaces you want to handle ?
> 
>         Are there very small numbers (below 0.001 or so)
>         which need to be represented and not underflow to zero,
>         or very large numbers (above about 100 000) ?
> 
>         How many significant digits (or bits) are needed ?
> 
>         With suitable constraints you can do the thing by using
>         "scaled integers" a.k.a. "fixed point".
> 
>         The simplest form is a 'signed long' for data storage with an
>         IMPLIED scale factor of 16384 - which gives you about 4 digits
>         (14 bits) of value beyond the decimal point.
>         Other factors can be used too, all depending on expected value
>         ranges.  Instead of X you now have X/S with constant value at S.
>         (Use some power-of-2 for S, and the division can be turned into
>          a shift.)
> 
>         The mathematics for doing these things is described in every
>         math textbook that covers e.g. Taylor series.  The (small)
>         challenge is in turning the equations into program code, and
>         taking care not to lose precision (neither underflowing nor
>         overflowing any intermediate results) while evaluating the
>         approximation series.
> 
>         Expand equations until your math consists entirely of operators
>         +, -, *, /   Nothing else is allowed, or needs function calls.
>         (E.g. SQRT() needs a function call, but X^3 expands to X*X*X.)
> 
>         Rewrite the equations out with X/S in place of X, and see if some
>         manual algebra can help to eliminate S in some cases -- try your
>         hand at first with the Newton iteration for the square-root, you
>         will be surprised...
> 
>         Alternatives to the series expansions include table approximations,
>         where you have a table of (say) 128 X&Y values of the function
>         space.  Approximate the space in between two points with a linear
>         function, and determine Y = Y0 + (Y1-Y0)*(X-X0)/(X1-X0)  where
>         X0 < X < X1, and Y0/Y1 are values for X0/X1 respectively.
>         (When X is exactly one of listed values, Y is also listed.)
>         (Consider adding scale term  /S  to every variable.  What happens to
>          the result of the equation ?    Will the resulting binary value be
>          different ?  Why ?)
> 
>         Extending your algebra exercise over the entire mathematics
>         you want to do -- expanding the series approximations and
>         simplifying common terms out (e.g. S, maybe others) -- should
>         get you faster resulting INTEGER code without needing to
>         resort to nasty FP.
> 
>         This is standard stuff for DSP programmers.  (With possible
>         exception that DSP programmers try to avoid generic division.)
> 
>         As the expanded basic-algebra-only algorithms are very hard to
>         understand without (possibly lengthy) comments describing their
>         derivation from the original equations, I suggest you make those
>         derivations into comments of your code.
> 
>         And of course if you *really* want to be lazy and use FP, you
>         should do the things which RAID XOR code does when it uses MMX
>         facility for XORing blocks. -- save and restore the FPU state,
>         which is *not* a small lightweight thing in itself!
> 
>         There is a definite reason for doing "lazy FPU save and restore"
>         on i386 systems - which is also the root reason for forbidding
>         the use of FP in the kernel -- if you do careless FP in the
>         kernel, you muck up the FP-coprocessor state, which shows up in
>         userspace as sporadic FP math failures.
> 
>         If you want to make your code strictly i386-only, then you can
>         do all manner of nasty things like that, and copy inline assembly
>         code from  /usr/include/bits/mathinline.h  (glibc 2.2.2).
>         (Of course your code will be terribly slow due to the FPU
>          save&restore needed to allow a couple of lines of simple math,
>          but that does not trouble you ?)
>
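
For reference, the save/restore the RAID XOR code does is wrapped by
the kernel_fpu_begin()/kernel_fpu_end() pair on x86.  This is an
outline only, not working code: the header location has moved between
kernel versions, the region disables preemption and must not sleep,
and the kernel normally builds without FP support at all.

```c
#include <asm/fpu/api.h>	/* location varies; older kernels used <asm/i387.h> */

/* All FP work, and the conversion back to integers, must happen
 * between begin and end. */
static long cube_scaled(long x)		/* x and result are value*16384 */
{
	long r;

	kernel_fpu_begin();		/* save the user's FPU state */
	{
		double v = (double)x / 16384.0;
		r = (long)(v * v * v * 16384.0);
	}
	kernel_fpu_end();		/* restore it for userspace */
	return r;
}
```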
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org

