Thursday, Jan 11, 2007 | dorkbot-atl by Decius at 2:23 am EST, Jan 6, 2007
Some recent audio/video projects use generative processes to yield surprising results. "Temporide" does a pixel-by-pixel delay on a video, showing many time lapses simultaneously. Spectral splicing, morphing, and reconstitution create new audio based on what you feed it. And "Ghost Jockey" generates a continuous stream of mashup audio and video. Daniel Iglesia makes electronic music and video, or more accurately, is lazy and creates machines that do it for him.

----

In this talk, I will present my work on sound source separation with applications to music. Music is repetitious in nature, and this repetition actually informs the source separation process. I derive an automated statistical approach based entirely on repetitive structure to separate sound sources. In addition, spectrograms contain time-frequency structure. This structure may be factored into note-like components containing a spectral shape modulated by an amplitude envelope. When multiple spectrograms are available, I show how to incorporate this additional spatial information to separate components and combine them to form the original source signals.

Mitchell Parry is a Ph.D. candidate in the College of Computing working with Irfan Essa. His research interests include source separation, signal processing, visualization, and music information retrieval.
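For anyone wondering how a "pixel-by-pixel delay" can show many time lapses at once, here is a rough Python/NumPy sketch of the general idea. The delay map (a simple left-to-right ramp, applied per column rather than truly per pixel) and the array layout are my own assumptions for illustration, not how Temporide actually does it.

import numpy as np

def per_pixel_delay(video: np.ndarray, max_delay: int) -> np.ndarray:
    """Give every pixel column its own temporal offset, so each output frame
    mixes many different moments of the source video.

    video: array of shape (frames, height, width, channels), already decoded.
    """
    n_frames, height, width, _ = video.shape
    # Hypothetical delay map: 0 frames at the left edge, max_delay at the right.
    delay = np.linspace(0, max_delay, width).astype(int)
    out = np.empty_like(video)
    for t in range(n_frames):
        for x in range(width):
            src = max(t - delay[x], 0)      # clamp so early frames don't wrap
            out[t, :, x, :] = video[src, :, x, :]
    return out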
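On the separation side, the abstract doesn't spell out the algorithm, so the following is only a generic illustration of how repetition can drive source separation: a REPET-style median filter over repeating spectrogram segments, not Parry's actual method. It assumes a mono signal mix, its sample rate sr, and a known repeating period in seconds.

import numpy as np
from scipy.signal import stft, istft

def separate_repeating(mix, sr, period_s, nperseg=2048):
    """Split a mixture into a repeating background and a non-repeating foreground."""
    _, _, Z = stft(mix, fs=sr, nperseg=nperseg)           # complex spectrogram
    mag = np.abs(Z)
    hop = nperseg // 2                                    # scipy's default overlap
    period = max(1, int(round(period_s * sr / hop)))      # repeating period in frames
    n = (mag.shape[1] // period) * period                 # whole number of periods
    # Stack the repeating segments and take a median: repeated (background)
    # energy survives, non-repeating (foreground) energy is suppressed.
    segs = mag[:, :n].reshape(mag.shape[0], -1, period)
    repeating = np.tile(np.median(segs, axis=1), segs.shape[1])
    model = np.minimum(repeating, mag[:, :n])
    mask = model / (mag[:, :n] + 1e-8)                    # soft time-frequency mask
    _, background = istft(mask * Z[:, :n], fs=sr, nperseg=nperseg)
    _, foreground = istft((1 - mask) * Z[:, :n], fs=sr, nperseg=nperseg)
    return background, foreground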
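The "note-like components" idea (a spectral shape modulated by an amplitude envelope) is the kind of structure that nonnegative matrix factorization recovers, so here is a minimal NMF sketch as a point of reference. These are the standard Lee-Seung multiplicative updates, offered only as an illustration and not necessarily the formulation used in the talk.

import numpy as np

def nmf(V, n_components, n_iters=200, eps=1e-9):
    """Factor a magnitude spectrogram V (freq x time) as V ~= W @ H, where each
    column of W is a spectral shape and the matching row of H is its amplitude
    envelope over time."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, n_components)) + eps   # spectral shapes
    H = rng.random((n_components, T)) + eps   # amplitude envelopes
    for _ in range(n_iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # multiplicative updates keep
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # both factors nonnegative
    return W, H

# A single note-like component k, reconstructed as its own spectrogram:
#   component_k = np.outer(W[:, k], H[k, :])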
DorkBot! In Atlanta! On Thursday! Who is with me!?
RE: Thursday, Jan 11, 2007 | dorkbot-atl by Palindrome at 11:36 am EST, Jan 6, 2007
Decius wrote: Some recent audio/video projects use generative processes to yield surprising results. [...]
DorkBot! In Atlanta! On Thursday! Who is with me!?
This is cool
RE: Thursday, Jan 11, 2007 | dorkbot-atl by k at 12:10 pm EST, Jan 7, 2007
Decius wrote: [...] In this talk, I will present my work on sound source separation with applications to music. Music is repetitious in nature, and this repetition actually informs the source separation process.
DorkBot! In Atlanta! On Thursday! Who is with me!?
I'm very curious about that... I've long been interested in sound source separation, though I never did anything about it, thus proving that success only marginally has anything to do with inspiration. Lots of smart people think the same thought, but only one or two sack up and do anything about it. I'm very disappointed in myself.