Re: music from image metadata

On Tue, April 1, 2014 9:43 am, Thomas Mayer wrote:
> Hi,
>
> On 31.03.2014 11:11, Patrick Shirkey wrote:
>> Hi,
>>
>> Can anyone think of a way to automate the creation of a music track from
>> the metadata embedded in an image track?
>
> I haven't worked with metadata yet, but did some experiments in
> sonification. Here is what I would do:
>
> Analyse the format of EXIF data. What is actually encoded?

Apparently it is a selfie taken by one of the passengers on flight 370. So
it appears to be someone sitting in a dark room wearing a black hood over
their head.


>  What is
> varying from image to image, camera to camera? Try to get a fair sample
> size to see the data.
>
> Try to convert the data to numbers that can be interpreted as notes:
> frequency, duration, volume. If not all parameters can be derived from
> the data, then use some sane defaults.
>
> My guess is that there is too little data in EXIF to really interpret,
> and that the data is too disparate to create a melody:
>

I suppose I could translate it into ASCII and then use that. I vaguely
recall that someone has written an ASCII-to-audio tool.

However, I am wondering if anyone knows of a tool to translate binary
metadata into audio. The JSON parser you wrote looks interesting, but I'm
thinking of something like the opposite of synaesthesia, which turns
audio into visual data.
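For what it's worth, a binary-metadata-to-audio translation can be done with nothing but the Python standard library. This is only a minimal sketch of the idea, not an existing tool: it takes an arbitrary byte string (imagine the raw EXIF block dumped from an image; the input here is just a placeholder) and maps each byte to a sine-wave note, writing a WAV file.

```python
# Minimal sketch: sonify raw metadata bytes as a sequence of tones.
# The input byte string is a stand-in for a real extracted EXIF block.
import math
import struct
import wave

RATE = 44100          # sample rate in Hz
NOTE_LEN = 0.1        # seconds of audio per byte

def byte_to_freq(b):
    # Map a byte (0-255) onto the two octaves above 220 Hz.
    return 220.0 * 2 ** (b / 128.0)

def sonify(data, path="metadata.wav"):
    frames = bytearray()
    for b in data:
        freq = byte_to_freq(b)
        for n in range(int(RATE * NOTE_LEN)):
            sample = int(16000 * math.sin(2 * math.pi * freq * n / RATE))
            frames += struct.pack("<h", sample)   # 16-bit little-endian PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# Placeholder input: the first bytes of an EXIF header.
sonify(b"Exif\x00\x00")
```

Duration, volume, or scale quantisation could be driven by other metadata fields in the same way.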


> To take the example from Wikipedia
> (http://en.wikipedia.org/wiki/Exchangeable_image_file_format#Example),
> how many notes can you create from the data to create a melody, using an
> algorithm that you can explain to users in just a few sentences?
>
> If not all data can be made to create a melody, then create one note
> from each image and use series of images, a slideshow of sorts.
>
> You could combine that with converting the images to sound, interpreting
> the y-axis as frequency and x-axis as time, similar to "Sheet music" by
> Johannes Kreidler:
> http://www.youtube.com/watch?v=vdbpJmsaNAw
>
> An example for Twitter sonification is included in my Pd extension
> PuREST JSON: http://ix.residuum.org/pd/purest_json.html
>
> Here's an example that uses the returned data from a search, including
> a description of the algorithm used to generate the sound:
> https://soundcloud.com/residuum/twitter-sonification
>

It's an interesting piece. Quite soothing in its own way. Did you select
specific instruments/noises, or did you let it automate that process too?

Do you have others in that style?
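The y-axis-as-frequency, x-axis-as-time mapping you describe can also be sketched in a few lines of stdlib Python. This is a toy, not the Kreidler method: each image column becomes one time slice, each row drives a sine partial whose amplitude is the pixel brightness. A real version would read pixels with something like PIL; here a tiny synthetic grayscale "image" stands in so the sketch is self-contained.

```python
# Sketch of y-axis -> frequency, x-axis -> time sonification.
import math
import struct
import wave

RATE = 44100
SLICE_LEN = 0.25      # seconds of audio per image column

def row_freq(row, rows, lo=200.0, hi=2000.0):
    # Top row = highest frequency, bottom row = lowest.
    return hi - (hi - lo) * row / max(rows - 1, 1)

def image_to_audio(pixels, path="image.wav"):
    rows, cols = len(pixels), len(pixels[0])
    frames = bytearray()
    for x in range(cols):                      # left-to-right = time
        for n in range(int(RATE * SLICE_LEN)):
            t = n / RATE
            s = sum(pixels[y][x] / 255.0 *
                    math.sin(2 * math.pi * row_freq(y, rows) * t)
                    for y in range(rows))      # sum of weighted partials
            frames += struct.pack("<h", int(8000 * s / rows))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# A synthetic 4x3 "image": a bright streak sweeping from low to high.
image_to_audio([[255, 0, 0],
                [0, 255, 0],
                [0, 255, 0],
                [0, 0, 255]])
```

The amplitude scaling (dividing by the row count) is just a crude way to avoid clipping; a proper version would normalise over the whole image.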



--
Patrick Shirkey
Boost Hardware Ltd
_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user
