On 06/04/2016 07:12 PM, Erik Steffl wrote:
On 06/04/2016 12:21 PM, Ralf Mardorf wrote:
On Sat, 4 Jun 2016 05:09:02 -0400, tom haddington wrote:
One might observe that the machine wrote bad music. Well, humans are
already doing that, too, so Magenta has gotten at least that far! As
with chess machines, it may be a matter of time.
The point isn't whether a machine is able to fake music; it doesn't
matter whether the faked music is good or bad. What the machine
generates is completely uninteresting to me, since a machine has no
emotions I'm interested in. A machine has no emotions at all, so even
if it generated "good music", it would be fake "good music" -
emotional fraud. Human impostors are able to fake love, for example.
Victims often feel more loved by an impostor than by somebody who
really loves them. Fraud can make us feel good, but we dislike fraud
anyway. That just shows what kind of company Google is. A human might
be an untalented musician, but at least a human usually has real
emotions. A machine that is able to fake "good music" has absolutely
nothing to do with progress; it does damage. Developing something
like this shows the unimaginativeness of the developers. Nobody needs
it, it's good for absolutely nothing, and it's not even a useful step
toward learning something for useful AI projects.
Great emotional impact on the audience does not require great
emotional investment from the performer/author. Think of the ocean,
or a sunset, or flowers: no emotions, yet they are beautiful and
impactful. (You might now shift the argument to what's natural or
not, but that's a separate argument; I'm only pointing out the
emotions here.)
Interestingly enough, color and music don't objectively exist. They
are both processed perceptions. Color is heavily processed in the
retina, before the signal even reaches the brain via the optic nerve.
Sound, on the other hand, goes from the nerve cells (connected to the
hair cells of the inner ear) directly into
the brain, where parts of the brain analyze it into frequency components
while other parts analyze it into temporal components while other parts
analyze its timbre while other parts analyze the apparent spatial
location of the source, etc. Even our old reptilian brain is involved,
identifying emotional elements. (After all, one function of our
marvelous audio processing system in the past was to separate out the
single sound made by a tiger coming up behind you, and make you run like
hell before it could pounce!) All of this is kept in memory and analyzed
in real time by other parts of the brain as the brain builds an internal
model of the music, constantly predicts what will come next, and gets a
pleasure jolt every time its prediction proves correct. It also gets a
pleasure jolt when its prediction is wrong but the brain successfully
reworks its model of the music to achieve another correct prediction.
(Damn, music processing is one of the most complex things brains do!)
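
To make that predict-and-reward loop concrete, here's a toy Python
sketch. Everything in it is my own made-up illustration (a bigram
note model and a couple of counters), not a claim about how neurons
actually do it:

from collections import defaultdict, Counter

def listen(notes):
    """Toy predictive-processing loop: a bigram model guesses each
    next note; a correct guess is a confirmation, a wrong guess
    forces a rework of the model."""
    model = defaultdict(Counter)  # note -> counts of what followed it
    confirmations = reworks = 0
    for prev, nxt in zip(notes, notes[1:]):
        followers = model[prev]
        if followers:
            guess = followers.most_common(1)[0][0]  # best prediction
            if guess == nxt:
                confirmations += 1  # prediction proved correct: jolt
            else:
                reworks += 1        # surprise: rework the model below
        model[prev][nxt] += 1       # update the internal model
    return confirmations, reworks

# A melody with a repeated motif and one twist at the end:
print(listen(list("CDECCDECCDEF")))  # -> (5, 3)

Of course the real system runs this sort of loop hierarchically, over
rhythm, harmony, timbre, and phrase structure all at once, which is
part of why it's so hard to fake convincingly.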
All of that processing is done so that we can take in a barrage of
sound and separate it into the bass playing this, the keyboard playing
that, the rhythm guitar strumming this way, the singer singing a line,
and the lead guitar playing a short riff at the end of each line - all
while being aware, just from how she sounds, that the hot chick beside
you at the concert is really getting turned on ...
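
Machines, by contrast, only manage the crudest slice of that. A
throwaway numpy sketch (my own toy example, nothing to do with
Magenta): mix a 110 Hz "bass" with a 440 Hz "singer" and let an FFT
pull the two apart again, which is roughly the level of "hearing"
that is easy to automate:

import numpy as np

# One second of "bass" (110 Hz) plus a quieter "singer" (440 Hz):
rate = 8000                                  # samples per second
t = np.arange(rate) / rate
mix = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

# FFT the mixture and report the two strongest frequency components:
spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(mix.size, d=1 / rate)
print(np.sort(freqs[np.argsort(spectrum)[-2:]]))  # -> [110. 440.]

Finding the peaks is trivial; knowing that one of them is a singer,
and what she means by it, is the part the brain does and the machine
fakes.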
--
David W. Jones
gnome@xxxxxxxxxxxxx
authenticity, honesty, community
http://dancingtreefrog.com