

LECTURE FIVE: ARTIFICIAL NEURAL NETWORKS AND NEUROSEMANTICS 人工神经元网络以及神经语义学



  1. LECTURE FIVE ARTIFICIAL NEURAL NETWORKS AND NEUROSEMANTICS 人工神经元网络以及神经语义学

  2. THE CONNECTIONS BETWEEN THIS LECTURE AND THE LAST ONE • If eliminativism is right, then types of mental states cannot be reduced to anything else; that is, no type of mental state can survive as a natural kind under a strict implementation of the eliminativist program. • The case is somewhat similar to this one: from an eliminativist perspective, “the Great French Revolution” is not a legitimate label that picks out a single historical event. Rather, it should be viewed as a label attached to a loose collection of the behaviors of numerous individuals. Historians need to rely on this label when the behaviors of individuals are epistemologically inaccessible to them, but they also need to be prepared to abandon it when new data become available.

  3. SO THE MORAL IS Philosophers of mind and cognitive scientists need to be prepared to abandon the mental vocabulary when new data about the human neural system become available.

  4. THREE TASKS LEFT ON THE AGENDA To learn something from neuroscience; to seek some way of making the neurological story more universal (with, say, the help of AI); and to try to reconstruct the mental architecture out of the findings of neuroscience and AI.

  5. WHAT NEUROSCIENCE CAN TELL • By definition, “neurons are basic signaling units of the nervous system of a living being, in which each neuron is a discrete cell whose several processes arise from its cell body”. • The biological neuron has four main structural regions. The cell body, or soma, has two kinds of offshoots: the dendrites (树突) and the axon (轴突), which ends in pre-synaptic terminals (突触前末端). The cell body is the heart of the cell: it contains the nucleus (细胞核) and maintains protein synthesis (蛋白质合成). A neuron has many dendrites, which form a tree-like structure and receive signals from other neurons.

  6. MOREOVER … • A single neuron usually has one axon, which extends from a part of the cell body called the axon hillock (轴丘). The axon’s main purpose is to conduct the electrical signals generated at the axon hillock down its length. These signals are called action potentials (动作电位). • The other end of the axon may split into several branches, each ending in a pre-synaptic terminal. The electrical signals (action potentials) that neurons use to convey information in the brain are all identical; the brain determines which type of information is being received from the path the signal travels. • The case is similar to this one: if I send the same message to different media outlets, the authority of each outlet will change the weight of what I said from the audience’s perspective.

  7. The Mathematical Model • When building an artificial functional model from the biological neuron, we must take into account three basic components. First of all, the synapses of the biological neuron are modeled as weights. Recall that the synapse of the biological neuron is what interconnects the neural network and gives the strength of the connection. For an artificial neuron, the weight is a number that represents the synapse: a negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The next component of the model represents the actual activity of the neuron cell: all inputs are modified by the weights and summed together, an activity referred to as a linear combination. Finally, an activation function controls the amplitude (幅值) of the output; for example, an acceptable range of output is usually between 0 and 1, or between -1 and 1.
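  A minimal sketch of this neuron model in Python (the function and variable names are illustrative assumptions, not from the lecture or any particular library):

    def artificial_neuron(inputs, weights, activation):
        # Weights model synapses: negative = inhibitory,
        # positive = excitatory.
        # Linear combination: all inputs modified by the weights
        # and summed together.
        v = sum(x * w for x, w in zip(inputs, weights))
        # The activation function controls the amplitude of the output.
        return activation(v)

    # Example: a step-activated neuron that fires when the weighted
    # sum reaches 0.
    out = artificial_neuron([0.5, 1.0], [0.8, -0.3],
                            lambda v: 1.0 if v >= 0.0 else 0.0)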

  8. Activation functions (激发函数) • As mentioned previously, the activation function acts as a squashing function (压缩函数), such that the output of a neuron in a neural network lies between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(·).

  9. Threshold Function (阈值函数) • First, there is the threshold function, which takes the value 0 if the summed input v is less than a certain threshold value, and the value 1 if the summed input is greater than or equal to the threshold.

  10. Piecewise-Linear Function (分片线性函数) • Secondly, there is the piecewise-linear function. This function again can take the values 0 or 1, but it can also take values in between, depending on the amplification factor within a certain region of linear operation.

  11. Sigmoid Function (S形函数) • This function can range between 0 and 1, but it is also sometimes useful to use the range -1 to 1. An example of a sigmoid function with the latter range is the hyperbolic tangent function (双曲正切函数).
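  The three activation functions can be written out directly. A hedged sketch in Python: the default threshold value, the width of the linear region, and the choice of logistic curve are illustrative, not fixed by the slides.

    import math

    def threshold(v, theta=0.0):
        # Threshold function: 0 below the threshold, 1 at or above it.
        return 1.0 if v >= theta else 0.0

    def piecewise_linear(v):
        # 0 or 1 outside the linear region, linear inside it
        # (amplification factor 1 over the region -0.5 < v < 0.5).
        if v >= 0.5:
            return 1.0
        if v <= -0.5:
            return 0.0
        return v + 0.5

    def sigmoid(v):
        # Logistic sigmoid: smooth squashing into (0, 1).
        return 1.0 / (1.0 + math.exp(-v))

    def tanh_sigmoid(v):
        # Hyperbolic tangent: a sigmoid for the (-1, 1) range.
        return math.tanh(v)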

  12. MORE ILLUSTRATIONS OF THE S-FUNCTIONS

  13. THE ARTIFICIAL NEURAL NETWORK • Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i), which receive data from outside the neural network; output units (indicated by an index o), which send data out of the neural network; and hidden units (indicated by an index h), whose input and output signals remain within the neural network.
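  As an illustration of the three unit types, here is a toy forward pass in Python, with the index conventions i, h, o from the slide. The layer sizes and random weights are invented for the example.

    import math
    import random

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    def layer(inputs, weights):
        # Each unit: weighted sum of incoming signals, then squashing.
        return [sigmoid(sum(x * w for x, w in zip(inputs, row)))
                for row in weights]

    random.seed(0)
    n_i, n_h, n_o = 3, 4, 2  # input (i), hidden (h), output (o) units
    w_ih = [[random.uniform(-1, 1) for _ in range(n_i)] for _ in range(n_h)]
    w_ho = [[random.uniform(-1, 1) for _ in range(n_h)] for _ in range(n_o)]

    data_in = [0.2, 0.7, 0.1]       # received from outside the network
    hidden = layer(data_in, w_ih)   # signals stay inside the network
    data_out = layer(hidden, w_ho)  # sent out of the network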

  14. WHY IS THIS NETWORK SPECIAL? • Each unit performs a relatively simple job: receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units. • Apart from this processing, a second task is the adjustment of the weights. • The system is inherently parallel in the sense that many units can carry out their computations at the same time. • During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time. In some cases the latter model has some advantages.
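  The contrast between the two update regimes can be shown in a few lines of Python. This is a sketch under the assumption that each unit’s new activation is some function update_rule(i, activations) of the current state; the rule itself is left abstract.

    import random

    def synchronous_step(activations, update_rule):
        # All units update simultaneously, each reading the same
        # old state.
        return [update_rule(i, activations)
                for i in range(len(activations))]

    def asynchronous_step(activations, update_rule):
        # Only one randomly chosen unit updates at time t; the rest
        # keep their current activation.
        new = list(activations)
        i = random.randrange(len(activations))
        new[i] = update_rule(i, activations)
        return new

    # Example rule (an assumption for illustration): a unit switches
    # on when the mean activation of the whole network reaches 0.5.
    rule = lambda i, acts: 1.0 if sum(acts) / len(acts) >= 0.5 else 0.0
    state = synchronous_step([0.0, 1.0, 1.0], rule)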

  15. SO THE MORAL IS Semantic content is distributed across a huge network whose topological structure evolves as new inputs come in, rather than being stored at a fixed location in the brain. Or, to put it the other way around, your belief-token of something is not encoded by this neuron or that one, but by a huge network!

  16. HENCE A CRAZY IDEA COMES TO MIND: What if the brain could be scanned and mathematically re-modeled? That is, maybe we could download your thoughts and re-implement them in another brain!

  17. This idea appears in AVATAR!!! • Avatars are Na'vi–human hybrids operated by genetically matched humans. So if the human operator is Jack, his avatar will share the same mental states as Jack while being operated: Avatar-Jack looks like a mental duplicate of Jack.

  18. A FEW STEPS IN FINE-TUNING AVATAR'S BRAIN • 1. Get Jack’s brain scanned; • 2. Establish a mathematical model of Jack’s brain using the data acquired from the scan; • 3. Upload the model to Avatar-Jack’s brain; • 4. Then Avatar-Jack would have the same psychological states as Jack.

  19. A philosophical problem arises here: • What is meaning now? • Answer: Structured Activation Spaces as Conceptual Frameworks!!

  20. Figure 8.9. (a) Cottrell’s face-discrimination network. (b) Six possible input-layer activation patterns for this network. Each constitutes the “preferred stimulus” of exactly one of the eighty middle-layer neurons. Each serves as one of eighty “templates” to which any input image is “compared,” so that each input receives a highly individual, eighty-dimensional “profile” of middle-layer activations.

  21. Figure 8.2. (a) Two distinct networks trained to discriminate photos of faces as belonging to one of four hillbilly families. (b) The two activation spaces of the respective middle layers of the two networks. Each has acquired a structured family of “prototypical” family regions, within which facial inputs from each of the four families typically produce an activation pattern.
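  The neurosemantic proposal behind these figures can be made concrete: a concept corresponds to a region of the middle-layer activation space, and an input falls under whichever concept’s prototype its activation pattern lands nearest to. Here is a hedged two-dimensional sketch in Python, with invented prototype coordinates (a real middle layer, such as Cottrell’s, has far more dimensions):

    import math

    # Invented prototype points for the four family regions in a
    # toy 2-D middle-layer activation space.
    prototypes = {
        "family_A": (0.2, 0.8),
        "family_B": (0.8, 0.8),
        "family_C": (0.2, 0.2),
        "family_D": (0.8, 0.2),
    }

    def classify(activation):
        # The 'meaning' of the input is the prototype region its
        # activation pattern falls into (nearest prototype point).
        return min(prototypes,
                   key=lambda name: math.dist(activation, prototypes[name]))

    print(classify((0.25, 0.75)))  # -> family_A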

  22. CRITICISMS OF NEUROSEMANTICS BY FODOR AND LEPORE Jerry Alan Fodor (born 1935); Dr. Ernest Lepore, Acting Director of the Rutgers Center for Cognitive Science (RuCCS)

  23. Anti-holism (反整体论) • Fodor has made many and varied criticisms of holism. He identifies the central problem with all the different notions of holism as the idea that the determining factor in semantic evaluation is the notion of an "epistemic bond". Briefly, P is an epistemic bond of Q if the meaning of P is considered by someone to be relevant for the determination of the meaning of Q. Meaning holism strongly depends on this notion. The identity of the content of a mental state, under holism, can only be determined by the totality of its epistemic bonds. And this makes the realism of mental states an impossibility: • "If people differ in an absolutely general way in their estimations of epistemic relevance, and if we follow the holism of meaning and individuate intentional states by way of the totality of their epistemic bonds, the consequence will be that two people (or, for that matter, two temporal sections of the same person) will never be in the same intentional state. Therefore, two people can never be subsumed under the same intentional generalizations. And, therefore, intentional generalization can never be successful. And, therefore again, there is no hope for an intentional psychology."

  24. FURTHER READING: • http://plato.stanford.edu/entries/connectionism/ • Paul Churchland (University of California, San Diego), Neurophilosophy at Work, Chapter 8: "Neurosemantics"

  25. THE END
