This research examines the application of simple neural network models to optimize the Traveling Salesman Problem (TSP), a classic combinatorial challenge of finding the shortest route through a set of cities. Utilizing a neural framework with excitatory and inhibitory connections, the study investigates various neural architectures and learning dynamics that affect convergence and solution quality. Key questions focus on network performance, initial activations, and the similarities to traditional heuristic methods. Results are presented for multiple city configurations, contributing insights into computational models inspired by neural dynamics.
Neural Networks for Optimization Bill Wolfe California State University Channel Islands
Neural Models • Simple processing units • Lots of them • Highly interconnected • Exchange excitatory and inhibitory signals • Variety of connection architectures/strengths • “Learning”: changes in connection strengths • “Knowledge”: connection architecture • No central processor: distributed processing
Simple Neural Model • a_i : activation • e_i : external input • w_ij : connection strength • Assume: w_ij = w_ji ("symmetric" network), so W = (w_ij) is a symmetric matrix
Dynamics • Basic idea: each unit follows its net input, da_i/dt = net_i = Σ_j w_ij a_j + e_i
Lower Energy • With E(a) = -½ aᵀW a - eᵀa, the dynamics da/dt = net = -grad(E) seek lower energy
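The descent property above can be sketched numerically. The snippet below assumes the energy E(a) = -½ aᵀW a - eᵀa, so that net = W a + e = -grad(E); the random network, step size, and clipping to the unit hypercube are illustrative choices, not taken from the talk.

```python
import numpy as np

# Gradient dynamics on a random symmetric network (illustrative sketch).
rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                 # enforce symmetry: w_ij = w_ji
np.fill_diagonal(W, 0.0)          # no self-connections
e = rng.standard_normal(n)
a = rng.uniform(0.0, 1.0, n)      # activations start inside [0,1]^n

def energy(a):
    # E(a) = -1/2 a^T W a - e^T a, so -grad(E) = W a + e = net
    return -0.5 * a @ W @ a - e @ a

E0 = energy(a)
dt = 0.01
for _ in range(1000):
    net = W @ a + e               # net input = -grad(E)
    a = np.clip(a + dt * net, 0.0, 1.0)   # Euler step, clipped to the box
```

For a small enough step size, each Euler step (with clipping to the hypercube) does not increase the energy, which is the "seeks lower energy" behavior described above.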
• Keeps the activation vector inside the hypercube boundaries • Encourages convergence to corners
Summary: The Neural Model • a_i : activation • e_i : external input • w_ij : connection strength • W = (w_ij), with w_ij = w_ji (symmetric)
Example: Inhibitory Networks • Completely inhibitory: w_ij = -1 for all i, j • k-winner • Inhibitory grid: neighborhood inhibition
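A k-winner network is a small worked case of the dynamics above: with mutual inhibition w_ij = -1 between distinct units plus a suitable excitatory bias, roughly the k units with the largest external inputs converge to 1 and the rest to 0. The bias value, step size, and iteration count below are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

# k-winner sketch: mutual inhibition plus a bias near k selects the
# k units with the largest external inputs.
n, k = 6, 2
e = np.array([0.9, 0.1, 0.7, 0.3, 0.2, 0.4])   # external inputs
W = -1.0 * (np.ones((n, n)) - np.eye(n))        # w_ij = -1 for i != j

a = np.full(n, 0.5)                             # neutral starting point
dt = 0.05
for _ in range(2000):
    net = W @ a + (k - 0.5) + e                 # bias ~ k picks k winners
    a = np.clip(a + dt * net, 0.0, 1.0)

# Units 0 and 2 (inputs 0.9 and 0.7) end up near 1; the rest near 0.
```

At a corner with exactly k winners, each winner's net input stays positive and each loser's stays negative, so the corner is a stable fixed point of the clipped dynamics.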
Traveling Salesman Problem • Classic combinatorial optimization problem • Find the shortest "tour" through n cities • n!/(2n) distinct tours
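The tour count follows from fixing the starting city (removing the n rotations of each tour) and dividing by 2 (a tour and its reversal have the same length), giving n!/(2n) = (n-1)!/2:

```python
from math import factorial

def distinct_tours(n):
    """Number of distinct undirected tours through n cities: n!/(2n)."""
    return factorial(n) // (2 * n)

print(distinct_tours(10))   # 181440 tours for just 10 cities
```

The factorial growth is why exhaustive search fails quickly and heuristics (or neural dynamics) are of interest.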
An Effective Heuristic for the Traveling Salesman Problem S. Lin and B. W. Kernighan Operations Research, 1973 http://www.jstor.org/view/0030364x/ap010105/01a00060/0
Neural Network Approach [figure: neuron grid]
Tours – Permutation Matrices • tour: CDBA • Permutation matrices correspond to the "feasible" states.
• Only one city per time stop • Only one time stop per city • Inhibitory rows and columns
Distance Connections: Inhibit the neighboring cities in proportion to their distances.
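The two kinds of connections above can be combined into one weight matrix. In the sketch below, neuron (i, t) means "city i at time stop t"; same-row and same-column pairs get uniform inhibition -A, and cities at adjacent time stops inhibit each other in proportion to their distance, -B·d_ij. The penalty weights A and B and the tiny distance matrix are illustrative assumptions, not values from the talk.

```python
import numpy as np

def tsp_weights(D, A=1.0, B=0.5):
    """Build the n^2 x n^2 connection matrix for an n-city TSP network."""
    n = D.shape[0]
    W = np.zeros((n * n, n * n))
    idx = lambda city, stop: city * n + stop
    for i in range(n):
        for t in range(n):
            for j in range(n):
                for s in range(n):
                    if (i, t) == (j, s):
                        continue          # no self-connection
                    w = 0.0
                    if i == j:            # same city, two time stops
                        w -= A
                    if t == s:            # same time stop, two cities
                        w -= A
                    if i != j and (s - t) % n in (1, n - 1):
                        w -= B * D[i, j]  # neighbors in the tour order
                    W[idx(i, t), idx(j, s)] = w
    return W

# Toy 3-city distance matrix (assumed for illustration).
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
W = tsp_weights(D)
```

The row/column terms push the activations toward permutation matrices (feasible tours), while the distance terms bias the network toward short tours; note W comes out symmetric, as the model requires.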
Research Questions • Which architecture is best? • Does the network produce: • feasible solutions? • high quality solutions? • optimal solutions? • How do the initial activations affect network performance? • Is the network similar to “nearest city” or any other traditional heuristic? • How does the particular city configuration affect network performance? • Is there any way to understand the nonlinear dynamics?
Initial Phase [figures: fuzzy tour, neural activations]
Monotonic Phase [figures: fuzzy tour, neural activations]
Nearest-City Phase [figures: fuzzy tour, neural activations]
Fuzzy Tour Lengths [plot: tour length vs. iteration]
Average Results for n=10 to n=70 cities (50 random runs per n) [plot: tour length vs. # cities]
DEMO 2 Applet by Darrell Long http://hawk.cs.csuci.edu/william.wolfe/TSP001/TSP1.html
Conclusions • Neurons inspire intriguing computational models. • The models are complex, nonlinear, and difficult to analyze. • The interaction of many simple processing units is difficult to visualize. • The neural model for the TSP mimics some of the properties of the nearest-city heuristic. • Much work remains to be done to understand these models.
Brain • Approximately 10^10 neurons • Neurons are relatively simple • Approximately 10^4 fan-out • No central processor • Neurons communicate via excitatory and inhibitory signals • Learning is associated with modifications of connection strengths between neurons