“Ah-Ha! Moments”: Where Science Meets Art and Practice in Digital Sound, Part 1

Presentation Transcript

  1. “Ah-Ha! Moments”: Where Science Meets Art and Practice in Digital Sound, Part 1 Jennifer Burg CCLI Workshop Series: “Linking Science, Art, and Practice through Digital Sound” Workshop 1, August 11 and 12, 2008, Wake Forest University This work was funded by National Science Foundation CCLI grant DUE-0717743. Jennifer Burg, PI; Jason Romney, Co-PI

  2. “Ah-Ha Moments” are moments when the lights go on, and you see something clearly for the first time.

  3. The relationships between things • Why something is so • How something works • What something applies to • Often this requires seeing the thing from a different perspective or in context

  4. The Effect of Bit Depth in Quantization • Rounding to discrete quantization levels causes error. • The error is itself a wave. • Demonstrations in Audition and in MATLAB
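The effect is easy to reproduce outside Audition or MATLAB. A minimal sketch in Python (used here for illustration; the 440 Hz tone, 8 kHz rate, and 3-bit depth are assumptions chosen to exaggerate the error):

```python
import numpy as np

# Illustrative choices, not from the slides: 440 Hz sine, 8 kHz rate, 3 bits.
rate = 8000
t = np.arange(rate) / rate
x = np.sin(2 * np.pi * 440 * t)                # original wave, range [-1, 1]

half_levels = 2 ** 3 // 2 - 1                  # 3 bits -> 7 signed steps
x_q = np.round(x * half_levels) / half_levels  # round to discrete levels

error = x - x_q                                # the error is itself a wave
print(np.max(np.abs(error)))                   # at most half a quantization step
```

Plotting `error` shows a wave correlated with the input, which is why quantization noise is heard as distortion rather than as hiss.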

  5. Visualizing the Error from Quantization

  6. Dithering • Add a random amount between -1 and 1 (scaled to the bit depth of the audio file) to each sample before quantizing. • There will be fewer consecutive samples that round to the same amount. Rounding to 0 is the worst thing, causing breaks. • In Audition
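A sketch of that procedure in Python (the 8-bit scale and the deliberately quiet tone are illustrative assumptions; without dither, every sample rounds to 0 and the sound breaks up into silence):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 8000
t = np.arange(rate) / rate
x = 0.001 * np.sin(2 * np.pi * 440 * t)    # very quiet tone (illustrative)

scale = 2 ** 7 - 1                         # 8-bit signed scale: steps of 1/127
dither = rng.uniform(-1, 1, size=x.shape)  # random amount in [-1, 1] step units

plain = np.round(x * scale) / scale              # no dither: all samples round to 0
dithered = np.round(x * scale + dither) / scale  # dither added before quantizing

print(np.all(plain == 0), np.any(dithered != 0))
```

Averaged over time, the dithered samples still carry the tone: the signal survives as a slightly noisy but unbroken wave instead of vanishing.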

  7. Noise Shaping • Raise the error wave above the Nyquist frequency. • Do this by making the error go up if it was previously down and down if it was previously up. This raises the error wave’s frequency. • The amount added to a sample depends on the error in the previous sample. • In Audition
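One common way to realize this is first-order error feedback. A Python sketch (the coarse 3-bit-style quantizer and sine input are assumptions for illustration, not necessarily the slides' exact method):

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate
x = np.sin(2 * np.pi * 440 * t)
scale = 2 ** 3 - 1                       # coarse quantizer so the error is large

# Each sample is corrected by the previous sample's quantization error,
# which flips the error up/down and pushes its energy toward high frequencies.
shaped = np.empty_like(x)
err = 0.0
for i, s in enumerate(x):
    v = s - err                          # compensate for the previous error
    q = np.round(v * scale) / scale      # quantize
    err = q - v                          # error made on this sample
    shaped[i] = q

spectrum = np.abs(np.fft.rfft(shaped - x))
half = len(spectrum) // 2
print(spectrum[half:].sum() > spectrum[:half].sum())  # error sits mostly up high
```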

  8. Ah-ha! original sound file Spectral view of quantization noise. (Click picture to hear.) Spectral view of quantization noise with dithering. (Click picture to hear.) Spectral view of quantization noise with dithering and noise shaping. (Click picture to hear.)

  9. Frequency Components • The spectral views just shown introduce the idea of frequency components of a sound wave. • The digitized sound wave can be stored in one of two ways: • Time domain – a list of values representing the amplitude of the sound wave at evenly-spaced moments in time • Frequency domain – a list of values representing “how much” of each frequency is present in the wave
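The two representations are connected by the Fourier transform. A small Python sketch (the 50 Hz and 120 Hz components are illustrative choices):

```python
import numpy as np

rate = 1000
t = np.arange(rate) / rate                      # one second of samples
# Time domain: amplitudes at evenly spaced moments in time.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Frequency domain: "how much" of each frequency is present.
spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2)
freqs = np.fft.rfftfreq(len(x), d=1 / rate)

peaks = freqs[spectrum > 0.25]
print(peaks)    # the two component frequencies, 50 and 120 Hz
```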

  10. Graphs of the same wave in the frequency domain and in the time domain

  11. The Relationship Between a Frequency Response and an Impulse Response Frequency Response (how much of each frequency will be retained after filtering, on a scale of 0 to 1) Impulse Response (a graph of the values in the convolution mask, in the time domain) The Fourier transform of the impulse response gives the frequency response.
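That relationship is easy to check numerically. A Python sketch using a simple moving-average mask as an assumed example filter:

```python
import numpy as np

# Impulse response / convolution mask: a 5-point moving average (illustrative).
h = np.ones(5) / 5

# Its Fourier transform is the frequency response: how much of each
# frequency is retained, on a scale of 0 to 1.
H = np.abs(np.fft.rfft(h, n=256))

print(H[0])          # gain 1.0 at DC: low frequencies pass
print(H[-1])         # much smaller at the Nyquist bin: high frequencies cut
```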

  12. Impulse Response = Convolution Mask • Filtering can be done in the time domain by applying a convolution mask to the sound samples. • The impulse response is a convolution mask.

  13. How Does Convolution Work to Filter Frequencies of a Sound Wave? x is a list of sound samples – it’s the digitized sound wave. h is a list of values that constitute the convolution mask – i.e., the filter in the time domain. y is a list of sound samples that constitute the sound wave after it has been filtered.
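Convolution can be written out directly. A Python sketch with tiny assumed example values for x and h:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.5, -0.5, -1.0, 0.0])  # x: the digitized sound wave
h = np.array([0.25, 0.5, 0.25])                  # h: the convolution mask / filter

y = np.convolve(x, h)                            # y: the filtered sound wave

# The same thing by the definition y[n] = sum over k of h[k] * x[n - k]:
# each output sample is a weighted combination of nearby input samples.
y_manual = np.zeros(len(x) + len(h) - 1)
for n in range(len(y_manual)):
    for k in range(len(h)):
        if 0 <= n - k < len(x):
            y_manual[n] += h[k] * x[n - k]

print(np.allclose(y, y_manual))   # True
```

Because this particular h averages neighboring samples, it smooths the wave, i.e., it acts as a low-pass filter.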

  14. Impulse Response vs. Frequency Response • impulse response = convolution mask = filter in time domain • frequency response = a graph of what the filter does in terms of how much each frequency is boosted or attenuated • impulse response = inverse Fourier transform of frequency response • frequency response = Fourier transform of impulse response

  15. Frequency Response of Idealized vs. Realistic Low-Pass Filter idealized low-pass filter realistic low-pass filter

  16. Going from Frequency Response to Impulse Response • If you know what frequency response you want from a filter, how do you get the corresponding impulse response? • In the ideal case, where the frequency response is a rectangular function, the frequency response and impulse response are both Fourier transforms of each other and both inverse Fourier transforms of each other.

  17. Fourier Transform

  18. To get the impulse response from the idealized frequency response, take the Fourier transform of the frequency response.

  19. Creating a Low-Pass Filter • You can do it yourself in MATLAB: • Create the filter using the given function, sin(2πfcn)/(πn) • Read in an audio clip • Since this is a filter in the time domain, convolve the audio clip with the filter • Listen to the result • Graph the frequencies of the filtered clip against the unfiltered clip. (Do this by taking the Fourier transform of each first.) • See the demonstration and worksheet for details.
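A sketch of those steps in Python rather than MATLAB (the cutoff, filter length, test-tone frequencies, and the Hamming window are illustrative assumptions, not taken from the worksheet):

```python
import numpy as np

rate = 8000
fc = 1000 / rate                    # normalized cutoff: 1 kHz at 8 kHz sampling
N = 101                             # filter length (odd, so the mask is symmetric)
n = np.arange(N) - (N - 1) / 2

# 1. Create the filter: h(n) = sin(2*pi*fc*n)/(pi*n), computed via np.sinc;
#    a Hamming window softens the ripple from truncating the ideal response.
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)

# 2. "Read in" an audio clip: here a passband tone plus a stopband tone.
t = np.arange(rate) / rate
clip = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)

# 3. This is a time-domain filter, so convolve the clip with it.
filtered = np.convolve(clip, h, mode='same')

# 4. Compare spectra of filtered vs. unfiltered (Fourier transform first).
spec = np.abs(np.fft.rfft(filtered)) / (len(filtered) / 2)
print(spec[200], spec[3000])        # 200 Hz survives, 3000 Hz is attenuated
```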

  20. Creating a Vocoder in MATLAB From http://www.paia.com/ProdArticles/vocodwrk.htm

  21. Creating a Vocoder in MATLAB
function output = vocoder(input1, input2, s, window)
    % Overlap-add vocoder: multiply the spectra of the two inputs window by window.
    input1 = input1';
    input2 = input2';
    q = s - window;
    output = zeros(1, s);
    input1fft = zeros(1, s);   % accumulated magnitude spectra (for inspection)
    input2fft = zeros(1, s);
    for i = 1:window/4:q       % hop by a quarter window (75% overlap)
        b = i + window - 1;
        input1partfft = fft(input1(i:b));
        input2partfft = fft(input2(i:b));
        input1fft(i:b) = input1fft(i:b) + abs(input1partfft);
        input2fft(i:b) = input2fft(i:b) + abs(input2partfft);
        mult = input1partfft .* input2partfft;   % spectral product = vocoded window
        output(i:b) = output(i:b) + ifft(mult);  % overlap-add back to time domain
    end
    output = output / max(output);   % normalize
end
Demonstration

  22. Digital Filters

  23. That led me to an ah-ha moment this morning! • Audition’s vocoder • Comparison of my vocoder and Audition’s

  24. Kinds of Ah-Ha Moments • Ah-ha! I know how that works now! • Ah-ha! I know why someone would be interested in using that! • Ah-ha! I know why it matters to know that!

  25. How does knowledge of the math and science help to make the real work better? • Hands-on work for some real purpose: music, theatre, television, movie-making • Application environments: Audition, Logic Pro, Sound Forge, Pro Tools, Sonar, Music Creator, Reason • How things work: mathematics, algorithms, and technology

  26. Interplay Between Science, Art, and Practice • If you put artist/practitioners together with computer scientists, does one group shed light on the work of another? • What kinds of ah-ha moments emerge? • Do artist/practitioners create better products if they understand more about how things work? • Computer scientists like to understand how things work. But would it help them to know why it matters?

  27. If artist/practitioners know how things work, they can be more purposefully experimental (as opposed to “click on things and see what happens”). • They have more power over their tools to be original and creative. • Consider this in the visual arts…

  28. With the click of the mouse, I’m an artist! ???

  29. What Jason and I learned in our Digital Sound Production Workshop • Yes, definitely, when music students understand their tools, they use them more effectively. • Yes, definitely, when computer science students see what musicians want from their tools, they have ideas for how to create new and better tools. Also, understanding something about the music sheds light on how to make the tools work.

  30. Ah-ha Moments for Music Students • Rewiring Cakewalk Music Creator • Combining digital audio and MIDI • Demonstration

  31. Ah-ha Moments for Music Students • Multi-band dynamics processing - L3-LL Multimaximizer plug-in • Unprocessed audio • Processed audio after using the plug-in

  32. Ah-ha Moments for Music Students • In order to understand this tool, students needed to know something about frequency, dynamic range, and ADSR envelopes • Dynamics Processing Plug-In from Audition

  33. Ah-ha Moments for Music Students • Creating and editing MIDI samples • Samplers and synthesizers are not the same. • There’s a lot going on in a sample bank. • You can edit a sample bank yourself. • A MIDI message can mean whatever you want it to mean. • You can create your own sample banks. • Loops and samples aren’t the same thing. • It’s actually possible to work creatively with loops. You can edit the samples from which they’re created, or edit the way the loops are put together.

  34. The Instrument Editor in Logic Pro There’s a lot going on in a sample bank, and you can have access to it!

  35. Ah-ha! A MIDI message can mean whatever I want it to mean! A control change message (or a pitch bend message or whatever message you choose) can be defined to mean “Go to the next instrument in the EXS24 set of instruments.”

  36. Ah-ha! You can make your own sample bank! Making a sample bank of bird songs in Reason. Then rewiring Cakewalk Music Creator through Reason, using this sample bank, and playing a jazz piece with the bird songs as instruments. Listen Then compose your own piece to use the bird samples! Listen

  37. Using loops isn’t “cheating.” You can edit loops and put them together creatively! Editing loops in Reason

  38. Working Creatively with Loops • Turkish Nights by Dan Applegate

  39. Ah-ha Moments for Computer Science Students • A MIDI message can mean whatever you want it to mean. • It really makes more sense to think of MIDI messages in hexadecimal rather than decimal.

  40. Status Byte • 10000000 is the lowest value a status byte can hold. • 10000000 = 80 in hex, 128 in decimal • Generally we deal with MIDI bytes in hex because it makes it easier to program. Slide courtesy of Jason Romney. Thanks. 

  41. Status Bytes n=channel number, in hexadecimal Slide courtesy of Jason Romney. Thanks. 

  42. Data Byte • MIDI data bytes follow a MIDI status byte • Status bytes tell a MIDI device what to do. Data bytes tell the MIDI device how to do it. • Data bytes are bytes with the 8th bit turned off. Consequently, data bytes cannot carry a value larger than 127 (7FH). Slide courtesy of Jason Romney. Thanks. 

  43. Channel Voice Messages n = channel number Slide courtesy of Jason Romney. Thanks. 
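The byte layout described above can be decoded with a few bitwise operations. A Python sketch (the specific Note On message is an assumed example):

```python
# A Note On message: status byte 0x90 (Note On, channel 1), then two data bytes.
msg = [0x90, 60, 0x40]              # note 60 = middle C, velocity 0x40

status = msg[0]                     # high bit set -> status byte
command = status & 0xF0             # upper nibble: what to do (0x90 = Note On)
channel = (status & 0x0F) + 1       # lower nibble n: channel, displayed as 1-16
note, velocity = msg[1], msg[2]     # data bytes: high bit off, so values 0-127

# Note number to frequency, using the same formula as John Brock's C program:
freq = 8.1758 * 2 ** (note / 12.0)
print(hex(command), channel, freq)  # middle C is about 261.6 Hz
```

Working in hex makes the split obvious: the command is the first hex digit, the channel is the second.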

  44. A C program for reading MIDI messages, converting them to frequencies, and sending them to the sound card to be played. • Programming assignment created for first- or second-year programming students by John Brock
while(j){
    if(k & 0x80){
        d1 = fgetc(midi);
        if(k != 0xC0 && k != 0xD0){
            d2 = fgetc(midi);
        }
        if(k == 0xFF && d1 == 0x2F)
            j = 0;
        if(k == 0x90){
            freq = 8.1758 * pow(2, d1/12.0);
            if(d2 != 0){
                for(i = 0; i < shmsz; i++){
                    samp[i] += sin(freq * (2.0*M_PI/RATE) * i);
                }
….. and so forth
You can see that John is working in hexadecimal.

  45. Hey! Let’s try using an autotuner! • The music student wants to use it. • The computer science student wants to make his own and understand how it works. • Ah-ha, they both say! The human voice has harmonic frequencies, and if you don’t know that, you can’t create a vocoder! • Ah-ha, they both say! Human perception of sound is non-linear! Now I know what that means and why it matters!

  46. Creating an Autotuner • Experiment in thinking about autotuning: demo