Gotta say, you guys impress me. I think embedded programming is pretty tough. I bombed my FPGA class last spring--I gave up too soon for that class, but I haven't given up altogether. There's a lot of value for rt-audio there.

One topic of research where I'm at (ITTC/KU) concerns compilation from Haskell (a functional language) to Verilog or VHDL for synthesis on FPGAs--not going through the usual chain of defining a soft processor, but building the specific functions directly in logic (greater utilization that way, as I understood it). Maybe someday Faust (the functional audio language) will have a similar compiler target too.
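To give a flavour of the idea, here's a tiny sketch in plain Haskell--not the research code, and not tied to any particular compiler; the one-pole filter and its coefficient are just made up for illustration. A DSP block written as a pure step function over its state maps naturally onto a register plus some combinational logic, which is what lets a compiler build the function directly instead of targeting a soft CPU:

-- One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
-- The step function is pure: given the coefficient, the old state and
-- one input sample it returns the new state and the output sample.
-- In hardware terms the state becomes a register and the arithmetic
-- becomes the combinational logic around it.
onePoleStep :: Double -> Double -> Double -> (Double, Double)
onePoleStep a y x = let y' = y + a * (x - y) in (y', y')

-- Software simulation: thread the state through an input list.
onePole :: Double -> [Double] -> [Double]
onePole a = go 0.0
  where
    go _ []       = []
    go y (x : xs) = let (y', out) = onePoleStep a y x in out : go y' xs

-- Print the first few samples of the impulse response.
main :: IO ()
main = mapM_ print (take 8 (onePole 0.25 (1.0 : repeat 0.0)))

A real hardware version would use fixed-point types rather than Double, but the shape of the description stays the same.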
On Fri, Feb 1, 2013 at 4:12 PM, Fons Adriaensen <fons@xxxxxxxxxxxxxx> wrote:

On Fri, Feb 01, 2013 at 08:07:46PM +0000, Kelly Hirai wrote:
> fpga seems a natural way to express in silicon, data flow languages like
> pd, chuck, csound, ecasound. regarding the stretch, the idea that one
> could code in c or c++ might streamline refactoring code, but i'm still
> trying to wrap my head around designing graph topology for code that is
> tied to the program counter register. nor do i see the right peripherals
> for sound. perhaps the g.711 codec support is software implementation
> and could be rewritten. need stats on the 8 bnc to dvi adapter audio port.

There are many ways to use an fpga. I've got a friend who's a real
wizard in this game, and his approach is quite unusual but very
effective.
In most cases, after having analysed the problem at hand, he'll design
one or more ad-hoc processors in vhdl. They are always very minimal,
maybe having 5 to 20 carefully chosen instructions, usually all of them
conditional (ARM style), and coded if necessary in very wide instruction
words so there's no microcode and these processors are incredibly fast.
It takes him a few hours to define such a processor, and a few hours
more to create an assembler for it in Python. Then he starts coding the
required algorithms using these processors.
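To make that concrete, here's a toy sketch of how small such an assembler can be. The instruction set below is invented purely for illustration--it is not his design, and his real assemblers are written in Python rather than the Haskell used here. Every instruction carries a condition field, and the assembler does nothing more than pack the fields into one wide word:

import Data.Bits (shiftL, (.|.))
import Data.Word (Word64)
import Numeric (showHex)

-- Every instruction is conditional, ARM style.
data Cond = Always | IfZero | IfNeg | IfCarry deriving (Enum)

-- A handful of carefully chosen operations.
data Op = Nop | Ld | St | Add | Mul | Mac | Jmp deriving (Enum)

-- Condition, operation, destination register, source A, source B / immediate.
data Insn = Insn Cond Op Int Int Int

-- Pack one instruction into a wide 64-bit word: fixed fields, no microcode.
encode :: Insn -> Word64
encode (Insn c o d a b) =
      fromIntegral (fromEnum c) `shiftL` 60
  .|. fromIntegral (fromEnum o) `shiftL` 52
  .|. fromIntegral d            `shiftL` 40
  .|. fromIntegral a            `shiftL` 28
  .|. fromIntegral b

-- A three-instruction fragment: a multiply-accumulate loop body.
program :: [Insn]
program =
  [ Insn Always Ld  1 0 0    -- r1 <- mem[r0]
  , Insn Always Mac 2 1 3    -- r2 <- r2 + r1 * r3
  , Insn IfZero Jmp 0 0 16   -- if the zero flag is set, jump to address 16
  ]

-- Emit each instruction word as 16 hex digits, one per line.
main :: IO ()
main = mapM_ (putStrLn . pad . flip showHex "" . encode) program
  where pad s = replicate (16 - length s) '0' ++ s

A real design would need more fields, labels, and a symbol table, but the whole thing stays small enough to rewrite in an afternoon whenever the processor changes.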
If necessary, he'll revise the processor design until it's perfectly
matched to the problem. In all cases I've watched, this results in
something that most other designers couldn't even dream of in terms
of speed and efficiency - not only of the result, but also of the
design process and hence the economics.
All of this is of course very 'politically incorrect' - he just throws
away the whole canon of 'high level tools' or rather replaces it with
his own vision of it - with results that I haven't seen matched ever.
All the same, it's a bit limited when compared to x86_64 instruction
sets. I think it's nearly impossible to get pd running on an FPGA with
decent performance (I can speak only of the software I know well enough).
But there's the rub anyway--your x86_64 processors don't have access to
interfaces by themselves. On OpenCores there's an FPGA QPI endpoint.
Wouldn't it be cool to build an audio interface *directly* off the
processor's QPI lanes? I know... I'm dreaming.
Chuck
_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user