
Beyond Static Sound: Incorporating Dynamic, Realtime Sound into Multimedia Applications

Dr. Dan Hosken, Assistant Professor of Music, California State University, Northridge
Presented at: The National Meeting of The Association for Technology in Music Instruction, San Francisco, CA





Presentation Transcript


  1. Beyond Static Sound: Incorporating Dynamic, Realtime Sound into Multimedia Applications. Dr. Dan Hosken, Assistant Professor of Music, California State University, Northridge. Presented at: The National Meeting of The Association for Technology in Music Instruction, San Francisco, CA, November 6, 2004

  2. Traditional Multimedia Authoring Model • Media elements created/found externally. For example: • Soundfiles recorded/edited in Peak • MIDI files recorded/edited in Digital Performer • Soundfiles or MIDI files from a library (or internet)

  3. Traditional Multimedia Authoring Model • Media elements are imported into authoring programs and manipulated using a scripting language. For example: • Dreamweaver (JavaScript) • Flash (ActionScript) • Director (Lingo)

  4. Advantages of Traditional Model • “Production Team” approach • Sound Designer • Composer • Authoring environment need not “reinvent the wheel” • Adequate model for many applications (e.g., CAI)

  5. Drawbacks of Traditional Model • Sound/MIDI can be manipulated but not transformed • Low-level control of parameters not possible (e.g., changing index of modulation of a sound produced by FM)

  6. Lingo control of sound (Director)
  • Play (8 possible sound channels):
    sound(channel).play(member("mySound"))
  • Loop:
    sound(channel).play([#member: member("mySound"), #loopStartTime: ms, #loopEndTime: ms, #loopCount: repeats])
  • Rateshift:
    sound(channel).play([#member: member("mySound"), #rateShift: semitones])
  • Volume (0–255) and Pan (-100 to +100):
    sound(channel).volume = volumeValue
    sound(channel).pan = panValue
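The #rateShift parameter above is specified in semitones. Under equal temperament, a shift of n semitones corresponds to multiplying the playback rate by 2^(n/12), which is the relationship Director applies internally. A quick sketch of that math (in Python, since the deck's own code is Lingo):

```python
def rate_shift_factor(semitones: float) -> float:
    """Playback-rate multiplier for a pitch shift of `semitones`
    (equal temperament: one octave = 12 semitones = a factor of 2)."""
    return 2.0 ** (semitones / 12.0)

print(rate_shift_factor(12))   # one octave up: rate doubles (2.0)
print(rate_shift_factor(-12))  # one octave down: rate halves (0.5)
```

Note that rate shifting changes pitch and duration together, which is why the traditional model's inability to transform sound at a lower level (next slide) matters.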

  7. Why not just use Max/MSP?…or Supercollider or Csound or Reaktor or … • You can! • The ubiquity of Director and Flash in the MM world makes collaboration easier • Director and Flash are built for MM, so a number of authoring operations are simpler

  8. So what are my options? • Option 1: Use Director for animation and interaction and send messages to, say, Max/MSP • MIDI • Sequence Xtra (temporarily off market) • MIDI I/O (not up to date on Mac) • XMIDI • OSC (Open Sound Control) • OSCar (not in active development) • Flosc (Flash)
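OSC, the second transport listed above, is a simple binary format sent over UDP: a null-padded address string, a type-tag string, then big-endian arguments. As an illustration of what a bridge such as Flosc shuttles between the authoring environment and Max/MSP, here is a minimal encoder (a Python sketch; the function names and port number are mine, not part of any tool listed above):

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message carrying float32 arguments."""
    type_tags = "," + "f" * len(args)
    packet = _pad(address.encode("ascii")) + _pad(type_tags.encode("ascii"))
    for value in args:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

# To deliver this to Max/MSP, send the bytes over UDP to a patch with a
# [udpreceive] object, e.g. (port 7400 is illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/freq", 440.0), ("127.0.0.1", 7400))
```

The payoff of OSC over MIDI here is the address namespace ("/freq") and full-resolution floats rather than 7-bit controller values.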

  9. So what are my options? • Option 2: Use an add-on synthesis engine • Koan/Intent (in transition) • PD (Pure Data) web browser plug-in (new) • Jsyn (for use within java applets) • CPS

  10. What is CPS? • Program written by Niels Gorisse • Standalone Patch Editor • Object/Patching paradigm (Max/MSP-ish) • Based on MPEG4-SA (Csound-ish) • Patches savable as Lingo, java, and C++ • CPS engine provided as Director Xtra • Free Downloadable Client (an Xtra for use by Shockwave player)

  11. CPS/Director Example: Aliasing CPS Patch:

  12. CPS/Director Example: Aliasing • CPS Patch Saved as Lingo:
    set oscil0 to CPSgetObject("oscil")
    set audioOut1 to CPSgetObject("audioOut")
    set upsamp2 to CPSgetObject("upsamp")
    set freq to CPSgetObject("numberField")
    set multiply4 to CPSgetObject("*")
    CPSOBJConversate(multiply4, "_UP")
    CPSOBJConversate(multiply4, "_UP")
    CPSOBJConversate(multiply4, "_UP")
    set scale to CPSgetObject("numberField")
    set autoStart6 to CPSgetObject("autoStart")
    set table7 to CPSgetObject("table")
    CPSOBJConversate(table7, "harm 1024 1")
    set constant8 to CPSgetObject("constant")
    CPSOBJConversate(constant8, "R0.100000")
    CPSgetConnection(upsamp2, 30, oscil0, 10)
    CPSgetConnection(oscil0, 30, multiply4, 10)
    CPSgetConnection(scale, 40, multiply4, 20)
    CPSgetConnection(multiply4, 30, audioOut1, 10)
    CPSgetConnection(multiply4, 30, audioOut1, 11)
    CPSgetConnection(table7, 40, oscil0, 20)
    CPSgetConnection(constant8, 40, upsamp2, 21)
    CPSgetConnection(freq, 40, upsamp2, 20)
    CPSOBJkin(freq, 100.0, 0)
    CPSOBJkin(scale, 0.5, 0)
    CPSOBJkin(constant8, 0.1, 1) -- a 'constant' object
    CPSsetSamplerate(11025)
    CPSstartAudio()

  13. CPS/Director Example: Aliasing • Director Interface:

  14. CPS/Director Example: Aliasing • Director Interface:

  15. CPS/Director Example: Aliasing • Changing the contents of the table object with Lingo:
    global table7
    sineButton = sprite("radio_sine").spriteNum
    threeButton = sprite("radio_three").spriteNum
    case me.spriteNum of
      sineButton: -- sine wave
        CPSOBJConversate(table7, "harm 1024 1")
      threeButton: -- three partials
        CPSOBJConversate(table7, "harm 1024 1 .5 .33")
    end case
  • Changing the value of the frequency number box with Lingo:
    global freq -- name assigned to frequency number box
    property pValue
    on setCPSValues me
      CPSOBJkin(freq, float(pValue), 0)
    end

  16. CPS/Director Example: Basic Synth • CPS Patch:

  17. CPS/Director Example: Basic Synth • Director Interface (waveform menu):

  18. CPS/Director Example: Basic Synth • Director Interface (filter menu):

  19. CPS/Director Example: Basic Synth • Director Interface (Envelope menu):

  20. CPS/Director Example: Sound Space I • Uses a game-style interface to exercise multi-dimensional control over sound • User manipulates an onscreen character through virtual terrain • “Effort” of character is determined by the nature of the terrain at the character’s position • Effort is mapped to synthesis parameters (Ring-modulated waveshaping and sub-audio phasor) • Activity Index measures how long the character has been engaging in high-effort/low-effort activities • Activity index mapped to pitch collection expansion and register
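The deck does not give a formula for the Activity Index; one common way to realize "how long the character has been engaging in high-effort activity" is a leaky integrator (an exponential moving average of instantaneous effort). A sketch under that assumption, in Python (the parameter alpha and the function name are mine):

```python
def update_activity(activity: float, effort: float, alpha: float = 0.05) -> float:
    """Leaky integrator: the activity index drifts toward the current
    effort level. Smaller alpha = longer memory of past effort.
    (alpha is an assumed parameter; the deck does not specify the mapping.)"""
    return (1.0 - alpha) * activity + alpha * effort

# Sustained high effort pushes the index up gradually rather than instantly:
a = 0.0
for _ in range(100):
    a = update_activity(a, 1.0)
```

An index like this responds to sustained behavior rather than momentary spikes, which suits the slide's use of it to drive slower-moving musical parameters (pitch-collection expansion and register).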

  21. Sound Space I Visual Interface Cindercone, Lassen National Park, CA

  22. Sound Space I CPS: Cloud Control

  23. Sound Space I Mapping: Cloud Control • Ostinato with pitches chosen randomly from specific collection • Global parameters: “Inter-onset” time, inter-onset randomness, pitch expansion, pitch collection, note duration, register • Effort associated with various zones and Activity Level determine parameters
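The note-scheduling logic the slide describes can be sketched as follows (a Python illustration; the function and parameter names are mine): each ostinato note takes a random pitch from the current collection, shifted by register, with the inter-onset time jittered by the randomness parameter.

```python
import random

def next_note(collection, inter_onset, onset_randomness, register=0):
    """Choose the next ostinato note: a random pitch from `collection`
    (MIDI note numbers, shifted by `register` octaves) and a jittered
    inter-onset time in seconds."""
    pitch = random.choice(collection) + 12 * register
    # jitter the inter-onset time by up to +/- onset_randomness seconds
    ioi = inter_onset + random.uniform(-onset_randomness, onset_randomness)
    return pitch, max(ioi, 0.0)
```

Effort and Activity Level then steer the global parameters themselves (widening the collection, shortening the inter-onset time, shifting register) rather than picking individual notes.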

  24. Sound Space I CPS: Cloud Synth

  25. Sound Space I Mapping: Cloud Synth • Non-linear waveshaping instrument with ring modulation • Note-level parameters include distortion index and ring modulation frequency factor • Effort determines these parameters
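Per sample, the instrument described above amounts to: waveshape a sinusoidal carrier by the distortion index, then multiply by a second sinusoid whose frequency is a factor of the note frequency (ring modulation). A sketch using tanh as a stand-in transfer function (the actual CPS shaping function is not given in the deck):

```python
import math

def cloud_synth_sample(t, freq, distortion, rm_factor):
    """One sample of a waveshaped, ring-modulated tone at time t (seconds).
    `distortion` scales the carrier into the nonlinear region (the index);
    `rm_factor` sets the ring modulator as a multiple of the note frequency.
    tanh is an assumed transfer function, not the one used in the deck."""
    carrier = math.sin(2 * math.pi * freq * t)
    shaped = math.tanh(distortion * carrier)        # nonlinear waveshaping
    modulator = math.sin(2 * math.pi * freq * rm_factor * t)
    return shaped * modulator                        # ring modulation
```

Raising the distortion index brightens the tone by adding partials; a non-integer rm_factor makes the ring-modulated spectrum inharmonic, giving the two note-level handles the slide mentions.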

  26. Sound Space I CPS: “Engine”

  27. Sound Space I Mapping: Engine • Sawtooth waveform at nearly sub-audio frequency run through low-pass resonant filter • Horizontal position → Pan • Speed → Frequency (ca. 16 to 20 Hz) • “Effort” → Filter Cutoff
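The three mappings above are each one-dimensional scalings. A sketch of how they might be combined (Python; the 16–20 Hz range is from the slide and the pan range follows Director's -100..+100 convention, but the cutoff ceiling and the normalization of the inputs are my assumptions):

```python
def engine_mapping(x_position, speed, effort,
                   min_freq=16.0, max_freq=20.0, max_cutoff=4000.0):
    """Map game state to Engine synth parameters.
    x_position, speed, and effort are assumed normalized to 0..1.
    Frequency range 16-20 Hz is from the slide; pan uses Director's
    -100..+100 range; max_cutoff (Hz) is an assumed ceiling."""
    pan = -100.0 + 200.0 * x_position             # -100 (left) .. +100 (right)
    freq = min_freq + (max_freq - min_freq) * speed
    cutoff = max_cutoff * effort                  # filter opens with effort
    return pan, freq, cutoff
```

Because the sawtooth sits at the threshold of pitch perception (16–20 Hz), the resonant filter's cutoff, driven by effort, carries most of the audible change.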

  28. Contact Dan.hosken@csun.edu Presentation and Examples posted soon: http://www.csun.edu/~dwh50750/
