Hi,

On 31.03.2014 11:11, Patrick Shirkey wrote:
> Hi,
>
> Can anyone think of a way to automate the creation of a music track from
> the metadata embedded in an image track?

I haven't worked with metadata yet, but I have done some experiments in
sonification. Here is what I would do:

Analyse the format of EXIF data. What is actually encoded? What varies
from image to image and from camera to camera? Try to get a fair sample
size to see the data.

Try to convert the data to numbers that can be interpreted as notes,
frequency, duration and volume. If not all parameters can be set, use
some sane defaults.

My guess is that there is too little data in EXIF to really interpret,
and that the data is too disparate to create a melody: taking the
example from Wikipedia
(http://en.wikipedia.org/wiki/Exchangeable_image_file_format#Example),
how many notes can you create from the data to form a melody, using an
algorithm that you can explain to users in just a few sentences?

If not all the data can be made into a melody, then create one note
from each image and use a series of images, a slideshow of sorts.

You could combine that with converting the images to sound,
interpreting the y-axis as frequency and the x-axis as time, similar to
"Sheet music" by Johannes Kreidler:
http://www.youtube.com/watch?v=vdbpJmsaNAw

An example of Twitter sonification is included in my Pd extension
PuREST JSON: http://ix.residuum.org/pd/purest_json.html

Here's an example that uses the returned data from a search, including
a description of the algorithm used to generate the sound:
https://soundcloud.com/residuum/twitter-sonification

Have fun,

Thomas

--
"From the perspective of communication analysis, government is not an
instrument of law and order, but of law and disorder." (Gracchus Gruad
in: Robert Shea & Robert A. Wilson, The Golden Apple)
http://www.residuum.org/
_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user
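[Editor's note] The "EXIF values to notes, frequency, duration, volume" idea from the message above can be sketched roughly as follows. This is not code from the thread: the chosen EXIF fields, the mapping ranges, and the one-note-per-image scheme are all illustrative assumptions; real EXIF data would be read with a library such as Pillow or exifread.

```python
# Sketch: map a few EXIF-style values to MIDI-like note parameters.
# All field choices and scalings here are assumptions, not a standard.

def exif_to_note(exif):
    """Map an EXIF-like dict to (midi_note, duration_s, velocity)."""
    # Focal length (mm) picks the pitch: folded into one octave
    # above middle C (MIDI 60-72).
    focal = exif.get("FocalLength", 50)
    note = 60 + int(focal) % 13

    # Exposure time (s) sets the note duration, capped at 2 s.
    duration = min(exif.get("ExposureTime", 0.01) * 100, 2.0)

    # ISO speed sets the velocity (loudness), clamped into 1-127.
    iso = exif.get("ISOSpeedRatings", 100)
    velocity = max(1, min(127, iso // 8))

    return note, duration, velocity

# One note per image: a "slideshow" of images becomes a note sequence.
images = [
    {"FocalLength": 35, "ExposureTime": 0.005, "ISOSpeedRatings": 200},
    {"FocalLength": 50, "ExposureTime": 0.02, "ISOSpeedRatings": 400},
]
melody = [exif_to_note(e) for e in images]
```

Missing fields fall back to the sane defaults the message suggests, so images with sparse EXIF data still produce a note.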
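[Editor's note] The image-to-sound idea (y-axis as frequency, x-axis as time) can be sketched as a simple additive synthesis loop. This is my own illustration, not Kreidler's method: the sample rate, slice length, frequency range, and the tiny 4x3 "image" are all made-up values.

```python
import math

SR = 8000     # sample rate in Hz (assumption)
SLICE = 0.05  # seconds of audio per pixel column (assumption)

def image_to_samples(image, f_lo=200.0, f_hi=2000.0):
    """image: list of rows (top row first), brightness values 0.0-1.0."""
    rows = len(image)
    cols = len(image[0])
    # Row 0 is the top of the image, so it gets the highest frequency.
    freqs = [f_hi - (f_hi - f_lo) * r / max(rows - 1, 1) for r in range(rows)]
    samples = []
    n_slice = int(SR * SLICE)
    for c in range(cols):               # x-axis: time
        for n in range(n_slice):
            t = (c * n_slice + n) / SR
            # y-axis: each row is a sine oscillator, brightness = amplitude.
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)    # normalise to stay within [-1, 1]
    return samples

image = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
audio = image_to_samples(image)
```

The resulting sample list could be written out with Python's standard `wave` module; brighter pixels higher up in the image come through as louder, higher-pitched partials.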