
Genetic Programming System for Music Generation


Presentation Transcript


  1. Genetic Programming System for Music Generation With Automated Fitness Raters

  2. What is Genetic Programming (GP)?
  • An evolutionary algorithm-based methodology inspired by biological evolution
  • A new generation of solutions is created from the previous generations
  • Three steps:
    • Mutation: change random bits
    • Crossover: exchange parts of two parents
    • Selection: get rid of bad examples
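These three steps can be sketched as a simple generational loop. The snippet below is only an illustration, not the GP-Music implementation: the fixed-alphabet individuals, the toy fitness function, and all parameter values are hypothetical placeholders.

```python
import random

def evolve(population, rate, generations=50, mutation_prob=0.05):
    """Generic generational loop: selection, crossover, mutation.

    `population` is a list of individuals (here, lists of symbols) and
    `rate` is a fitness function returning a number (higher is better).
    """
    for _ in range(generations):
        ranked = sorted(population, key=rate, reverse=True)
        survivors = ranked[:len(ranked) // 2]            # selection: drop the worst half
        children = []
        while len(children) < len(population):
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, len(mom))          # crossover: one cut point
            child = mom[:cut] + dad[cut:]
            child = [random.choice("ABCDEFG") if random.random() < mutation_prob
                     else gene for gene in child]        # mutation: flip random genes
            children.append(child)
        population = children
    return max(population, key=rate)

# Toy usage: evolve 8-symbol "melodies" toward a made-up fitness (count of 'C').
pop = [[random.choice("ABCDEFG") for _ in range(8)] for _ in range(20)]
best = evolve(pop, rate=lambda ind: ind.count("C"))
```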

  3. An Example: The 8-Queens Problem

  4. Fitness Function:
  • How good is each string?
  • In this case, it depends on the number of non-attacking pairs of queens.
  • Selection:
    • Choose parents randomly according to the fitness function
  • Crossover:
    • Make new children from parents
    • Choose a cut point, swap halves
  • Mutation:
    • Change random bits with low probability
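To make the 8-queens example concrete, here is a minimal sketch of the fitness function and the two operators. The board encoding (one column index per row) and the mutation probability are the usual textbook choices, not taken from the presentation.

```python
import random

def fitness(board):
    """Count non-attacking pairs of queens; 28 is the best possible score.

    board[i] is the column of the queen in row i, so two queens attack each
    other only if they share a column or a diagonal (rows are distinct by design).
    """
    pairs = 0
    for i in range(len(board)):
        for j in range(i + 1, len(board)):
            same_col = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if not same_col and not same_diag:
                pairs += 1
    return pairs

def crossover(mom, dad):
    """Choose a cut point and swap halves to make a child."""
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def mutate(board, prob=0.1):
    """Change random positions with low probability."""
    return [random.randrange(8) if random.random() < prob else col for col in board]

print(fitness([0, 4, 7, 5, 2, 6, 1, 3]))   # a known solution scores 28
```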

  5. Back to the GP-Music System
  • An interactive system which allows users to evolve short musical sequences.
  • Focuses on creating short melodic sequences.
  • Does not attempt to evolve polyphony or the actual waveforms of the instruments.
  • Creates only a set of notes and pauses.
  • Uses the XM format to store musical pieces rather than straight digital audio.

  6. Function and Terminal Sets
  • Function set: play_two (2 arguments), add_space (1 argument), play_twice (1 argument), shift_up (1 argument), shift_down (1 argument), mirror (1 argument), play_and_mirror (1 argument)
  • Terminal set:
    • Notes: C-4, C#4, D-4, D#4, E-4, F-4, F#4, G-4, G#4, A-5, A#5, B-5
    • Pseudo-chords: C-Chord, D-Chord, E-Chord, F-Chord, G-Chord, A-Chord, B-Chord
    • Other: RST (used to indicate one beat without a note)

  7. Function and Terminal Sets Cont.

  8. Sample Music Program Interpreter
  Each node in the tree propagates up a musical note string, which is then modified by the next higher node. In this way a complete sequence of notes is built up, and the final string is returned by the root node.
  Program: (shift-down (add-space (play-and-mirror (play-two (play-two (play-two (play-two B-5 B-5) (shift-down A-5)) (shift-down A-5)) F-4))))
  • play-two subtree → B-5,B-5,G-4,G-4,F-4
  • play-and-mirror → B-5,B-5,G-4,G-4,F-4,F-4,G-4,G-4,B-5,B-5
  • add-space → B-5,RST,B-5,RST,G-4,RST,G-4,RST,F-4,RST,F-4,RST,G-4,RST,G-4,RST,B-5,RST,B-5,RST
  • shift-down (root) → A-5,RST,A-5,RST,F-4,RST,F-4,RST,E-4,RST,E-4,RST,F-4,RST,F-4,RST,A-5,RST,A-5,RST
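The bottom-up evaluation above can be mimicked with a small recursive interpreter. The sketch below is an illustration only: the underscore function names, the natural-note SCALE table, and the exact semantics of shift_up/shift_down (one scale step, clamped at the ends, with chords and RST passed through) are assumptions made for this example rather than the GP-Music system's definitions. Under those assumptions it reproduces the final string shown on the slide.

```python
# Minimal sketch of a GP-Music-style interpreter (assumed semantics, see above).
SCALE = ["C-4", "D-4", "E-4", "F-4", "G-4", "A-5", "B-5"]   # natural notes from the terminal set

def shift(note, steps):
    """Move a natural note up/down one scale step; RST and chords pass through."""
    if note not in SCALE:
        return note
    return SCALE[max(0, min(len(SCALE) - 1, SCALE.index(note) + steps))]

def interpret(tree):
    """Each node returns a note list, built bottom-up as on the slide."""
    if isinstance(tree, str):                 # terminal: note, pseudo-chord, or RST
        return [tree]
    op, *args = tree
    parts = [interpret(a) for a in args]
    if op == "play_two":        return parts[0] + parts[1]
    if op == "play_twice":      return parts[0] + parts[0]
    if op == "add_space":       return [x for n in parts[0] for x in (n, "RST")]
    if op == "shift_up":        return [shift(n, +1) for n in parts[0]]
    if op == "shift_down":      return [shift(n, -1) for n in parts[0]]
    if op == "mirror":          return parts[0][::-1]
    if op == "play_and_mirror": return parts[0] + parts[0][::-1]
    raise ValueError(f"unknown function: {op}")

# The program from the slide:
prog = ("shift_down", ("add_space", ("play_and_mirror",
        ("play_two", ("play_two", ("play_two",
            ("play_two", "B-5", "B-5"), ("shift_down", "A-5")),
            ("shift_down", "A-5")), "F-4"))))
print(",".join(interpret(prog)))   # A-5,RST,A-5,RST,F-4,RST,F-4,RST,E-4,RST,...
```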

  9. Fitness Selection
  A human using the system is asked to rate the musical sequences that are created for each generation of the GP process.

  10. User Bottleneck
  • A user can only rate a small number of sequences in a sitting, limiting the number of individuals and generations that can be used.
  • The GP-Music System takes the rating data from a user's run and uses it to train a neural-network-based automatic rater.

  11. Neural Networks
  • A trainable mathematical model for finding boundaries
  • Inspired by biological neurons
  • Neurons collect input from receptors and other neurons
  • If enough stimulus is collected, the neuron fires
  • Inputs of artificial neurons are variables or the outputs of other neurons
  • The output is an “activation” determined by the weighted inputs
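A single artificial neuron can be written down in a few lines. This sketch is generic, with an assumed sigmoid activation and made-up weights; the actual activation function and topology of the GP-Music auto-rater are not specified here.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, squashed by a sigmoid "activation".

    The sigmoid and the example numbers below are illustrative assumptions.
    """
    stimulus = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-stimulus))   # approaches 1 ("fires") as the stimulus grows

print(neuron(inputs=[0.2, 0.9], weights=[1.5, -0.8], bias=0.1))
```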

  12. Back Propagation Algorithm
  • We have some error and we want to assign “blame” for it proportionally to the various weights in the network
  • We can compute the error derivative for the last layer
  • Then, distribute “blame” to the previous layer

  13. Backprop Training
  • Start with random weights
  • Run the network forward
  • Calculate the error based on the outputs
  • Propagate the error backward
  • Update the weights
  • Repeat with the next training example until you stop improving
  • Cross-validation data is very important here
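Slides 12 and 13 together describe one full training loop. The following is a minimal sketch of backpropagation for a one-hidden-layer network with sigmoid units and squared error; the network size, learning rate, and XOR training data are illustrative assumptions, not the configuration of the GP-Music auto-rater.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, hidden=4, lr=0.5, epochs=5000):
    """Backprop for a tiny one-hidden-layer, single-output network."""
    n_in = len(samples[0][0])
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]  # random initial weights
    b1 = [0.0] * hidden
    W2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Forward pass.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
            y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
            # Error derivative at the output layer (squared error, sigmoid output).
            d_out = (y - target) * y * (1 - y)
            # Distribute "blame" backward to the hidden layer.
            d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            # Update the weights.
            for j in range(hidden):
                W2[j] -= lr * d_out * h[j]
                b1[j] -= lr * d_hid[j]
                for i in range(n_in):
                    W1[j][i] -= lr * d_hid[j] * x[i]
            b2 -= lr * d_out
    return W1, b1, W2, b2

# Toy usage: learn XOR (in a real run, held-out cross-validation data decides when to stop).
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
weights = train(xor)
```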

  14. Each of the top-level nodes has ‘Level Spread’ connections to lower nodes. The weights on these connections are all shared, so the weight on the first input to each upper-level node is identical. Each lower-level node affects two upper-level nodes.
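The weight-sharing idea can be illustrated with a sliding-window layer: every upper-level node applies the same weight vector to a group of adjacent lower-level nodes, so with a window of two and a stride of one each interior lower-level node feeds two upper-level nodes. The function below is only a sketch of that idea; the actual Level Spread value and connection pattern of the auto-rater are not reproduced.

```python
def shared_weight_layer(lower, weights, bias=0.0):
    """Apply one shared weight vector across sliding windows of the lower layer.

    With len(weights) == 2 and stride 1, each interior lower node contributes
    to exactly two upper nodes, as described on the slide. The numbers are
    illustrative; this is not the GP-Music auto-rater's exact topology.
    """
    spread = len(weights)
    return [sum(w * x for w, x in zip(weights, lower[i:i + spread])) + bias
            for i in range(len(lower) - spread + 1)]

print(shared_weight_layer([1.0, 2.0, 3.0, 4.0], weights=[0.5, -0.25]))
```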

  15. Auto-Rater Runs
  • The weights and biases of the auto-rater network trained for 850 cycles were used in several runs of the GP-Music System.
  • To evaluate how well the auto-rater works in larger runs, runs with 100 and 500 sequences per generation over 50 generations were made. The resulting best individuals are shown on the next slide.

  16. Best individuals from two runs: 50 generations with 100 individuals per generation, and 50 generations with 500 individuals per generation

  17. Future Work
  • It would be interesting to analyze the weights learned by the network to see what sort of features it is looking for.
  • It may also be possible to improve the structure of the auto-raters themselves by feeding them extra information or modifying their topology.

  18. Reference
  The research was done by Brad Johanson and Riccardo Poli.
  Brad Johanson, Stanford University, Rains Apt. 9A, 704 Campus Dr., Stanford, CA 94305, bjohanso@stanford.edu
  Riccardo Poli, School of Computer Science, The University of Birmingham, Birmingham B15 2TT, R.Poli@cs.bham.ac.uk
