I _think_ the assumption / idea was that "toggle" implies that the output
is connected to a pull-up resistor and that the pin either floats or is
pulled down to ground, causing the signal to toggle. I don't know if/how
that works in practice, though.
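
A rough sketch of what that assumption would look like in code (purely
illustrative, not the actual driver code): release the pin to input so
the external pull-up takes the line high, or actively drive it low, and
alternate between the two on every ping.

#include <linux/gpio/consumer.h>
#include <linux/types.h>

/*
 * Hypothetical toggle-style ping: the pin either floats (the external
 * pull-up takes the line high) or is driven low, so each call flips
 * the level seen by the watchdog chip.
 */
static void toggle_ping_sketch(struct gpio_desc *gpiod, bool *state)
{
	*state = !*state;
	if (*state)
		gpiod_direction_input(gpiod);		/* float: pulled high externally */
	else
		gpiod_direction_output(gpiod, 0);	/* drive the line low */
}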

Guenter

>   drivers/watchdog/gpio_wdt.c | 13 +++++--------
>   1 file changed, 5 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/watchdog/gpio_wdt.c b/drivers/watchdog/gpio_wdt.c
> index 0923201ce874..f7686688e0e2 100644
> --- a/drivers/watchdog/gpio_wdt.c
> +++ b/drivers/watchdog/gpio_wdt.c
> @@ -108,7 +108,6 @@ static int gpio_wdt_probe(struct platform_device *pdev)
>  	struct device *dev = &pdev->dev;
>  	struct device_node *np = dev->of_node;
>  	struct gpio_wdt_priv *priv;
> -	enum gpiod_flags gflags;
>  	unsigned int hw_margin;
>  	const char *algo;
>  	int ret;
> @@ -122,17 +121,15 @@ static int gpio_wdt_probe(struct platform_device *pdev)
>  	ret = of_property_read_string(np, "hw_algo", &algo);
>  	if (ret)
>  		return ret;
> -	if (!strcmp(algo, "toggle")) {
> +
> +	if (!strcmp(algo, "toggle"))
>  		priv->hw_algo = HW_ALGO_TOGGLE;
> -		gflags = GPIOD_IN;
> -	} else if (!strcmp(algo, "level")) {
> +	else if (!strcmp(algo, "level"))
>  		priv->hw_algo = HW_ALGO_LEVEL;
> -		gflags = GPIOD_OUT_LOW;
> -	} else {
> +	else
>  		return -EINVAL;
> -	}
>
> -	priv->gpiod = devm_gpiod_get(dev, NULL, gflags);
> +	priv->gpiod = devm_gpiod_get(dev, NULL, GPIOD_OUT_LOW);
>  	if (IS_ERR(priv->gpiod))
>  		return PTR_ERR(priv->gpiod);
>

*****

Get the best out of YouTube encoding with GPL QFFT codecs for
Windows, Linux & Android #RockTheHouseGoogle!

Advanced FFT & 3D Audio functions for CPU & GPU
https://gpuopen.com/true-audio-next/

Multimedia Codec SDK https://gpuopen.com/advanced-media-framework/

(c)Rupert S https://science.n-helix.com

***
Decoder CB 2021 Codecs

kAudioDecoderName "FFmpegAudioDecoder"
kAudioTracks [{"bytes per channel":2,"bytes per frame":4,"channel
layout":"STEREO","channels":2,"codec":"aac","codec delay":0,"discard
decoder delay":false,"encryption scheme":"Unencrypted","has extra
data":false,"profile":"unknown","sample format":"Signed
16-bit","samples per second":48000,"seek preroll":"0us"}]

kVideoDecoderName "MojoVideoDecoder"
kVideoPlaybackFreezing 0.10006
kVideoPlaybackRoughness 3.048
kVideoTracks [{"alpha mode":"is_opaque","codec":"h264","coded
size":"426x240","color space":"{primaries:BT709, transfer:BT709,
matrix:BT709, range:LIMITED}","encryption scheme":"Unencrypted","has
extra data":false,"hdr metadata":"unset","natural
size":"426x240","orientation":"0°","profile":"h264 baseline","visible
rect":"0,0 426x240"}]

info "Selected FFmpegAudioDecoder for audio decoding, config: codec:
mp3, profile: unknown, bytes_per_channel: 2, channel_layout: STEREO,
channels: 2, samples_per_second: 44100, sample_format: Signed 16-bit
planar, bytes_per_frame: 4, seek_preroll: 0us, codec_delay: 0, has
extra data: false, encryption scheme: Unencrypted, discard decoder
delay: true"
kAudioDecoderName "FFmpegAudioDecoder"
kAudioTracks [{"bytes per channel":2,"bytes per frame":4,"channel
layout":"STEREO","channels":2,"codec":"mp3","codec delay":0,"discard
decoder delay":true,"encryption scheme":"Unencrypted","has extra
data":false,"profile":"unknown","sample format":"Signed 16-bit
planar","samples per second":44100,"seek preroll":"0us"}]
kBitrate 192000

kAudioDecoderName "FFmpegAudioDecoder"
kAudioTracks [{"bytes per channel":4,"bytes per frame":8,"channel
layout":"STEREO","channels":2,"codec":"opus","codec
delay":312,"discard decoder delay":true,"encryption
scheme":"Unencrypted","has extra
data":true,"profile":"unknown","sample format":"Float 32-bit","samples
per second":48000,"seek preroll":"80000us"}]

kVideoDecoderName "VpxVideoDecoder"
kVideoTracks [{"alpha mode":"is_opaque","codec":"vp9","coded
size":"1920x1080","color space":"{primaries:BT709, transfer:BT709,
matrix:BT709, range:LIMITED}","encryption scheme":"Unencrypted","has
extra data":false,"hdr metadata":"unset","natural
size":"1920x1080","orientation":"0°","profile":"vp9 profile0","visible
rect":"0,0 1920x1080"}]

kAudioDecoderName "FFmpegAudioDecoder"
kAudioTracks [{"bytes per channel":2,"bytes per frame":4,"channel
layout":"STEREO","channels":2,"codec":"aac","codec delay":0,"discard
decoder delay":false,"encryption scheme":"Unencrypted","has extra
data":false,"profile":"unknown","sample format":"Signed
16-bit","samples per second":44100,"seek preroll":"0us"}]

kVideoDecoderName "MojoVideoDecoder"
kVideoTracks [{"alpha mode":"is_opaque","codec":"h264","coded
size":"1920x1080","color space":"{primaries:BT709, transfer:BT709,
matrix:BT709, range:LIMITED}","encryption scheme":"Unencrypted","has
extra data":false,"hdr metadata":"unset","natural
size":"1920x1080","orientation":"0°","profile":"h264 main","visible
rect":"0,0 1920x1080"}]
***

PlayStation 5 and Xbox Series Spatial Audio Comparison | Technalysis
Audio 3D tested: Tempest, Atmos, DTX, DTS

https://www.youtube.com/watch?v=vsC2orqiCwI

*

Waves & Shape FFT original QFFT Audio device & CPU/GPU : (c)RS

The use of a simple FFT unit to output sound & other content directly,
such as a BLENDER or DAC content: (c)RS

FFT Examples :

Analogue smoothed audio:
using a capacitor on the pin output to a micro diode laser (for analogue fibre).

Digital output using:
8- to 128-bit multiple high-frequency burst mode

(multi-phase stepping at higher frequency & smooth interpolation);
an analogue wave converted to digital in key steps through a DAC at
higher frequency & amplitude.

For many systems an analogue wave makes sense when high-speed crystal
digital is too expensive.

Multiple overlapped digital signals at different frequencies, combined
with a time formula, are also possible.

The mic works by calculating angle on a drum, light, and timing & dispersion...
The audio works by QFFT replication of the audio function...
The DAC works by quantifying as analogue, digital or metric matrix...
The CPU/GPU works by interpreting the data of logic, space & timing...

We need to calculate; Quantum is not the necessary feature,

but it is the highlight of our:
data storage cache,
our temporary RAM,
our data transport,
of our fusion future.

FFT & fast, precise wave operations in SIMD

Several features included for audio & video: add to audio & video
drivers & SDKs, i love you <3 DL

In particular I want Bluetooth audio optimized with SIMD/AVX vector
instructions & DSP process drivers.

The opportunity presents itself to improve the DAC, in particular in
video cards & audio devices & hard drives & Blu-ray player record
& load functions of the fluctuating laser.
More than that, FFT is logical and fast, precise & adaptive; FP & SIMD
present these opportunities with correct FFT operations & SDKs.

3D surround is optimised the same way, in particular with FFT-efficient
code; as one imagines, video is also affected by FFT.

Video colour & representation & wavelet compression & sharpness restoration;
vivid presentation of audio & video & 3D objects and textures, for
example DOT compression & image/audio presentation.

SSD & HD technology presents unique opportunities for magnetic waves
and amplitude speculation & presentation.

FFT : FMA : SIMD instructions & speed : application examples: audio,
colour palette, rainbows, LUTs, blood corpuscles with audio & vibration
interaction, rain with environmental effects & gravity. There are
many application examples of transforms in action (more and more
complex by example).

High performance SIMD modular arithmetic for polynomial evaluation

FFT examples: in the SIMD folder...

Evaluation of FFT and polynomial X array algebra is handled here to
over 50 bits...
As we understand it, the maths depends on a 64-bit value with a 128-bit ..
as explained in the article, values have to be in identical ranges
bit-wise; however, odd bit-depth sizes are non-conforming (God, I need
coffee!)

In one example (page 9), most of the maths is 64-bit & one value 128-bit:
"We therefore focus in this article on the use of floating-point (FP)
FMA (fused multiply-add) instructions for floating-point based modular
arithmetic. Since the FMA instruction performs two operations (a · b +
c) with one single final rounding, it can indeed be used to design a
fast error-free transformation of the product of two floating-point
numbers"

Our latest addition is a quite detailed example for us:
"High performance SIMD modular arithmetic for
polynomial evaluation" (2020)

Pierre Fortin, Ambroise Fleury, François Lemaire, Michael Monagan

https://hal.archives-ouvertes.fr/hal-02552673/document

It contains multiple algorithm examples & is open about the computer
operations in use.

Advanced FFT & 3D Audio functions for CPU & GPU
https://gpuopen.com/true-audio-next/

Multimedia Codec SDK https://gpuopen.com/advanced-media-framework/

(c)Rupert S https://science.n-helix.com

*****

Let's face it, Realtek could well resource the QFFT audio device &
transformer/DAC

(c)Rupert S https://science.n-helix.com

document work examples :

https://eurekalert.org/pub_releases/2021-01/epfd-lpb010621.php

"Light-based processors boost machine-learning processing
ECOLE POLYTECHNIQUE FÃ?DÃ?RALE DE LAUSANNE

Research News

[Image: schematic representation of a processor for matrix multiplications which runs on light. Credit: University of Oxford]

The exponential growth of data traffic in our digital age poses some
real challenges on processing power. And with the advent of machine
learning and AI in, for example, self-driving vehicles and speech
recognition, the upward trend is set to continue. All this places a
heavy burden on the ability of current computer processors to keep up
with demand.

Now, an international team of scientists has turned to light to tackle
the problem. The researchers developed a new approach and architecture
that combines processing and data storage onto a single chip by using
light-based, or "photonic" processors, which are shown to surpass
conventional electronic chips by processing information much more
rapidly and in parallel.

The scientists developed a hardware accelerator for so-called
matrix-vector multiplications, which are the backbone of neural
networks (algorithms that simulate the human brain), which themselves
are used for machine-learning algorithms. Since different light
wavelengths (colors) don't interfere with each other, the researchers
could use multiple wavelengths of light for parallel calculations. But
to do this, they used another innovative technology, developed at
EPFL, a chip-based "frequency comb", as a light source.

"Our study is the first to apply frequency combs in the field of
artificially neural networks," says Professor Tobias Kippenberg at
EPFL, one the study's leads. Professor Kippenberg's research has
pioneered the development of frequency combs. "The frequency comb
provides a variety of optical wavelengths that are processed
independently of one another in the same photonic chip."

"Light-based processors for speeding up tasks in the field of machine
learning enable complex mathematical tasks to be processed at high
speeds and throughputs," says senior co-author Wolfram Pernice at
Münster University, one of the professors who led the research. "This
is much faster than conventional chips which rely on electronic data
transfer, such as graphics cards or specialized hardware like TPUs
(Tensor Processing Units)."

After designing and fabricating the photonic chips, the researchers
tested them on a neural network that recognizes hand-written
numbers. Inspired by biology, these networks are a concept in the
field of machine learning and are used primarily in the processing of
image or audio data. "The convolution operation between input data and
one or more filters - which can identify edges in an image, for
example - is well suited to our matrix architecture," says Johannes
Feldmann, now based at the University of Oxford Department of
Materials. Nathan Youngblood (Oxford University) adds: "Exploiting
wavelength multiplexing permits higher data rates and computing
densities, i.e. operations per area of processor, not previously
attained."

"This work is a real showcase of European collaborative research,"
says David Wright at the University of Exeter, who leads the EU
project FunComp, which funded the work. "Whilst every research group
involved is world-leading in their own way, it was bringing all these
parts together that made this work truly possible."

The study is published in Nature this week, and has far-reaching
applications: higher simultaneous (and energy-saving) processing of
data in artificial intelligence, larger neural networks for more
accurate forecasts and more precise data analysis, large amounts of
clinical data for diagnoses, enhancing rapid evaluation of sensor data
in self-driving vehicles, and expanding cloud computing
infrastructures with more storage space, computing power, and
applications software.

###

Reference

J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers,
M. Le Gallo, X. Fu, A. Lukashchuk, A.S. Raja, J. Liu, C.D. Wright, A.
Sebastian, T.J. Kippenberg, W.H.P. Pernice, H. Bhaskaran. Parallel
convolution processing using an integrated photonic tensor core.
Nature 07 January 2021. DOI: 10.1038/s41586-020-03070-1"
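
For a sense of what is being accelerated: the core operation named in
the release is an ordinary matrix-vector multiplication. A plain scalar
C version (illustrative only, nothing to do with the photonic
implementation) looks like this:

/*
 * y = A * x for an m x n matrix A stored row-major.
 * Each row's dot product is independent work, roughly the kind of
 * parallelism the release says is spread across light wavelengths.
 */
static void matvec(const double *a, const double *x, double *y,
		   int m, int n)
{
	for (int i = 0; i < m; i++) {
		double sum = 0.0;
		for (int j = 0; j < n; j++)
			sum += a[i * n + j] * x[j];
		y[i] = sum;
	}
}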

Time Measurement

"Let's Play" Station NitroMagika_LightCaster



