On Thu, 13 Oct 2016 17:36:16 -0400, jonetsu wrote:

> artificial intelligence

Should the network of plugins read the user's mind?

After asking the above question, I read
https://www.izotope.com/en/community/blog/tips-tutorials/2016/10/using-the-track-assistant-in-neutron.html

So some detection algorithm (not necessarily pattern recognition, in other words "AI" might not be the correct term) is supposed to guess what an audio engineer might want. To some degree this might be possible, since audio engineering techniques and quality standards are assessable within bounds. This guessing game is absolutely impossible for the creative process of making music. After playing guitar and singing with a broken voice similar to Bob Dylan's over a circle-of-fifths composition, should the network recommend adding fluid-dssi with a harp soundfont and an arpeggiator playing random pentatonic notes?

I don't want an algorithm to suggest anything about audio engineering or about how to make music. The most I would do is learn for myself, e.g. read about different recording techniques or chord progressions. An app that helps me look up a recording technique or a chord progression, completely disconnected from my recording project, is fine for my taste. In the end I want to do the recording and mixing myself, based on my own knowledge and taste.

What is useful, in my opinion, are interconnected plugins that sync to each other, compensate for latency (see the sketch in the P.S. below), and can transmit and receive the capabilities of special virtual keyboards that perhaps can't be expressed as MIDI messages. This more or less already seems to be possible, but as Godfrey's rat already noticed, it could be overwhelming.

Regards,
Ralf
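
P.S.: By "compensate for latency" I mean something like the plugin delay compensation a DAW host does: every plugin reports its processing latency in samples, serial latencies add up per track, and parallel tracks get delayed to match the slowest one. A minimal sketch of that idea in Python (the Plugin/Chain names are just illustrative, not any real plugin API):

    # Each plugin reports its own processing latency in samples.
    class Plugin:
        def __init__(self, name, latency_samples):
            self.name = name
            self.latency_samples = latency_samples

    # A serial chain of plugins on one track; latencies simply add up.
    class Chain:
        def __init__(self, plugins):
            self.plugins = plugins

        def total_latency(self):
            return sum(p.latency_samples for p in self.plugins)

    # Delay every parallel track so all outputs line up with the slowest one.
    def compensation_delays(chains):
        worst = max(c.total_latency() for c in chains)
        return [worst - c.total_latency() for c in chains]

    vocals = Chain([Plugin("compressor", 64), Plugin("linear-phase EQ", 2048)])
    guitar = Chain([Plugin("amp sim", 128)])
    print(compensation_delays([vocals, guitar]))  # -> [0, 1984]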