
Application of Genetic Algorithms and Neural Networks to the Solution of Inverse Heat Conduction Problems


Presentation Transcript


  1. Application of Genetic Algorithms and Neural Networks to the Solution of Inverse Heat Conduction Problems: A Tutorial. Keith A. Woodbury, Mechanical Engineering Department

  2. Paper/Presentation/Programs • Not on the CD • Available from www.me.ua.edu/inverse

  3. Overview • Genetic Algorithms • What are they? • How do they work? • Application to simple parameter estimation • Application to Boundary Inverse Heat Conduction Problem

  4. Overview • Neural Networks • What are they? • How do they work? • Application to simple parameter estimation • Discussion of boundary inverse heat conduction problem

  5. MATLAB® • Integrated environment for computation and visualization of results • Simple programming language • Optimized algorithms • Add-in toolbox for Genetic Algorithms

  6. Genetic Algorithms • What are they? • GAs perform a random search of a defined N-dimensional solution space • GAs mimic processes in nature that led to evolution of higher organisms • Natural selection (“survival of the fittest”) • Reproduction • Crossover • Mutation • GAs do not require any gradient information and therefore may be suitable for nonlinear problems

  7. Genetic Algorithms • How do they work? • A population of genes is evaluated using a specified fitness measure • The best members of the population are selected as parents for reproduction; the next generation inherits its characteristics from these parents • Random mutations occur to introduce new characteristics into the new generation

  8. Genetic Algorithms • Rely heavily on random processes • A random number generator will be called thousands of times during a simulation • Searches are inherently computationally intensive • Usually will find the global max/min within the specified search domain

  9. Genetic Algorithms • Basic scheme • (1) Initialize the population • (2) Evaluate the fitness of each member • (3) Reproduce with the fittest members • (4) Introduce random mutations into the new generation • Continue (2)-(3)-(4) until the prespecified number of generations is complete (a minimal MATLAB sketch of this loop follows below)
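The following is a minimal MATLAB sketch of this loop, assuming a generic fitness function (lower is better), real-number genes, and the reproduction, mutation, and elitism operators described on the later slides. All names and parameter values are illustrative; this is not the tutorial's actual SimpleGA code.

  % Minimal GA loop sketch; fitness, operators, and parameter values are illustrative
  fitness = @(x) sum(x.^2);                       % placeholder fitness (lower = better)
  Npop = 100; Nbest = 10; Ngen = 100; Nvar = 2;   % population size, parents kept, generations, unknowns
  low = -5; high = 5; mut_chance = 0.9;           % search domain and mutation threshold
  pop = low + (high - low)*rand(Npop, Nvar);      % (1) initialize population
  for gen = 1:Ngen
      fit = zeros(Npop, 1);
      for i = 1:Npop
          fit(i) = fitness(pop(i,:));             % (2) evaluate fitness of each member
      end
      [~, idx] = sort(fit);
      parents = pop(idx(1:Nbest), :);             % (3) reproduce with the fittest members
      newpop = zeros(Npop, Nvar);
      for i = 1:Npop
          A = parents(randi(Nbest), :);           % two randomly chosen parents
          B = parents(randi(Nbest), :);
          w = rand;
          newpop(i,:) = w*A + (1 - w)*B;          % weighted-average reproduction (slide 16)
      end
      for i = 1:Npop                              % (4) random mutations (slide 17 convention)
          if rand > mut_chance
              newpop(i,:) = low + (high - low)*rand(1, Nvar);
          end
      end
      newpop(1:Nbest, :) = parents;               % elitism: carry the best members forward
      pop = newpop;
  end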

  10. Role of Forward Solver • Provides evaluations of the candidates in the population • Similar to its role in a conventional inverse problem

  11. Elitism • Keep the best members of a generation to ensure that their characteristics continue to influence subsequent generations

  12. Encoding • Population stored as coded “genes” • Binary encoding • Represents data as strings of binary numbers • Useful for certain GA operations (e.g., crossover) • Real number encoding • Represents data as arrays of real numbers • Useful for engineering problems

  13. Binary Encoding – Crossover Reproduction
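The crossover figure on this slide is not reproduced in the transcript; the following is a generic single-point crossover sketch in MATLAB, with illustrative bit strings, showing the kind of operation the slide depicts.

  % Single-point crossover of two binary-encoded parents (illustrative bit strings)
  parentA = [1 0 1 1 0 0 1 0];
  parentB = [0 1 0 0 1 1 0 1];
  cut = randi(length(parentA) - 1);               % random crossover point
  childA = [parentA(1:cut), parentB(cut+1:end)];  % swap the tails beyond the cut
  childB = [parentB(1:cut), parentA(cut+1:end)];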

  14. Binary Encoding • Mutation • Generate a random number for each “chromosome” (bit); if the random number is greater than a “mutation threshold” selected before the simulation, then flip the bit
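A short MATLAB sketch of this bit-flip rule, using the slide's convention that a bit is flipped when its random draw exceeds the mutation threshold; the threshold value and bit string are illustrative.

  % Bit-flip mutation: flip a bit when its random draw exceeds the threshold
  mut_threshold = 0.95;                           % illustrative value, chosen before the simulation
  gene = [1 0 1 1 0 0 1 0];
  mask = rand(size(gene)) > mut_threshold;        % one random number per bit
  gene(mask) = 1 - gene(mask);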

  15. Real Number Encoding • Genes stored as arrays of real numbers • Parents selected by sorting population best to worst and taking the top “Nbest” for random reproduction

  16. Real Number Encoding Reproduction • Weighted average of the parent arrays: Ci = w*Ai + (1-w)*Bi, where w is a random number, 0 ≤ w ≤ 1 • If the sequence of values within the arrays is relevant, use a crossover-like scheme on the children
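In MATLAB the weighted-average reproduction is a one-liner; the parent values below are illustrative.

  % Weighted-average reproduction of two real-number parents: Ci = w*Ai + (1-w)*Bi
  A = [0.8 1.9];  B = [1.2 2.1];                  % illustrative parent arrays
  w = rand;                                       % random weight, 0 <= w <= 1
  C = w*A + (1 - w)*B;                            % child array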

  17. Real Number Encoding • Mutation • If mutation threshold is passed, replace the entire array with a randomly generated one • Introduces large changes into population

  18. Real Number Encoding • Creep • If a “creep threshold” is passed, scale the member of the population with Ci = (1 + w)*Ci, where w is a random number in the range 0 ≤ w ≤ wmax. Both the creep threshold and wmax must be specified before the simulation begins • Introduces small-scale changes into population
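A short MATLAB sketch of the creep operator; the threshold convention is assumed to mirror the mutation rule, and the parameter values are illustrative.

  % Creep: small multiplicative perturbation of a population member, Ci = (1 + w)*Ci
  creep_chance = 0.9;  creep_amount = 0.1;        % creep threshold and wmax (illustrative values)
  C = [0.95 2.05];                                % a member of the population
  if rand > creep_chance                          % threshold convention assumed to match mutation
      w = creep_amount*rand;                      % 0 <= w <= wmax
      C = (1 + w).*C;
  end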

  19. Simple GA Example • Given two or more points that define a line, determine the “best” value of the intercept b and the slope m • Use a least squares criterion to measure fitness: S = Σ [yi - (b + m*xi)]^2, summed over the data points
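A possible MATLAB fitness function for this least-squares criterion (an illustrative sketch, not necessarily the tutorial's implementation), with a quick check against the data generated on the next slide:

  % Least-squares fitness for the line fit; gene(1) = intercept b, gene(2) = slope m
  fitness = @(gene, xvals, yvals) sum((yvals - (gene(1) + gene(2)*xvals)).^2);
  fitness([1 2], [1 2 3 4 5], [3 5 7 9 11])       % returns 0 for the exact (b, m) = (1, 2)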

  20. Make up some data • >> b = 1; m = 2; • >> xvals = [1 2 3 4 5]; • >> yvals = b*ones(1,5) + m*xvals • Output: yvals = 3 5 7 9 11

  21. Parameters • Npop – number of members in population • (low, high) – real number pair specifying the domain of the search space • Nbest – number of the best members to use for reproduction at each new generation

  22. Parameters • Ngen – total number of generations to produce • Mut_chance – mutation threshold • Creep_chance – creep threshold • Creep_amount – parameter wmax

  23. Parameters • Npop = 100 • (low, high) = (-5, 5) • Nbest = 10 • Ngen = 100

  24. SimpleGA – Results (exact data)

  25. SimpleGA – Convergence History

  26. SimpleGA – Results (1% noise)

  27. SimpleGA – Results (10% noise)

  28. SimpleGA – 10% noise

  29. Heat function estimation • Each member of the population is an array of N unknown values representing the piecewise-constant heat flux components • A discrete Duhamel’s summation is used to compute the temperature response of the 1-D domain (see the sketch below)
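A sketch of a discrete Duhamel's summation in MATLAB: the piecewise-constant flux components are convolved with a unit-step temperature response. The semi-infinite-body response used for phi here is an illustrative assumption, not necessarily the tutorial's 1-D model, and the flux values are placeholders.

  % Discrete Duhamel's summation: T(i) = sum_j q(j)*[phi((i-j+1)*dt) - phi((i-j)*dt)]
  dt  = 0.001;
  q   = ones(1, 50);                              % piecewise-constant flux components (placeholders)
  phi = @(t) 2*sqrt(t/pi);                        % unit-step response; semi-infinite body, k = alpha = 1 assumed
  N   = numel(q);
  T   = zeros(1, N);
  for i = 1:N
      for j = 1:i
          T(i) = T(i) + q(j)*(phi((i-j+1)*dt) - phi((i-j)*dt));
      end
  end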

  30. Make up some data • Use Duhamel’s summation with Δt = 0.001 • Assume classic triangular heat flux

  31. “Data”

  32. Two data sets • “Easy” Problem – large Δt • Choose every third point from the generated set • Harder Problem – small Δt • Use all the data from the generated set

  33. GA program modifications • Let Ngen, mut_chance, creep_chance, and creep_amount be vectors • Facilitates a dynamic strategy • Example: Ngen = [100 200], mut_chance = [0.7 0.5] means let mut_chance = 0.7 for 100 generations and then let mut_chance = 0.5 until 200 generations (a lookup sketch follows below)
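One way this staged lookup could be coded in MATLAB (an illustrative sketch, not the tutorial's actual implementation):

  % Staged-parameter lookup: use the value of the first stage whose Ngen limit covers gen
  Ngen       = [100 200];
  mut_chance = [0.7 0.5];
  gen = 150;                                      % current generation
  stage = find(gen <= Ngen, 1);                   % stage index (here 2)
  current_mut_chance = mut_chance(stage);         % 0.5 for generations 101-200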

  34. GA Program Modifications • After completion of each pass of the Ngen array, redefine (low,high) based on (min,max) of the best member of the population • Nelite = 5

  35. “Easy” Problem • Δt = 0.18 • First try, let • Npop = 100 • (low, high) = (-1, 1) • Nbest = 10 • Ngen = 100

  36. “Easy” Problem • Npop = 100, Nbest = 10, Ngen = 100

  37. “Easy” Problem • Npop = 100, Nbest = 10, Ngen = 100

  38. “Easy” Problem • Npop = 100, Nbest = 10, Ngen = 100

  39. “Easy” Problem – another try • Use variable parameter strategy • Nbest = 20 • Ngen = [200 350 500 650 750] • mut_chance = [0.9 0.7 0.5 0.3 0.1] • creep_chance = [0.9 0.9 0.9 0.9 0.9] • creep_amount = [0.7 0.5 0.3 0.1 0.05]

  40. “Easy” Problem – another try

  41. “Easy” Problem – another try

  42. “Easy” Problem – another try

  43. “Hard” Problem • Has small time step data, Δt = 0.06 • Use the same parameters as before • Nbest = 20 • Ngen = [200 350 500 650 750] • mut_chance = [0.9 0.7 0.5 0.3 0.1] • creep_chance = [0.9 0.9 0.9 0.9 0.9] • creep_amount = [0.7 0.5 0.3 0.1 0.05]

  44. “Hard” Problem

  45. “Hard” Problem

  46. “Hard” Problem

  47. “Hard” Problem • What’s wrong? • Ill-posedness of the problem is apparent as Δt becomes small • Solution • Add a Tikhonov regularizing term to the objective function (see the sketch below)
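A sketch of the regularized objective in MATLAB. First-order (flux-difference) Tikhonov regularization with parameter α1 is assumed here, since the following slides quote a value for α1; the exact form of the tutorial's penalty term is not given in the transcript.

  % Regularized objective: least-squares mismatch plus a first-order Tikhonov penalty (assumed form)
  alpha1 = 1e-3;                                  % regularization parameter (value quoted on the next slides)
  S = @(Y, T, q) sum((Y - T).^2) + alpha1*sum(diff(q).^2);
  % Y = measured temperatures, T = temperatures computed from the flux estimate q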

  48. “Hard” Problem • With α1 = 1.e-3

  49. “Hard” Problem • With α1 = 1.e-3

  50. “Hard” Problem • With α1 = 1.e-3
