
CSE202: Introduction to Formal Languages and Automata Theory

Presentation Transcript


  1. CSE202: Introduction to Formal Languages and Automata Theory Chapter 10 Other Models of Turing Machines These class notes are based on material from our textbook, An Introduction to Formal Languages and Automata, 4th ed., by Peter Linz.

  2. Variations on TMs • Most variations neither add to nor subtract from the power of the standard TM • Additions: • Stay instruction: in each move, the R/W head moves right, moves left, or stays over the same cell • Tape is infinite in both directions • Multiple tapes (see proof in textbook) • Restrictions: • Each move either writes to the tape or moves, but doesn’t do both

  3. Computational power • No attempt to extend the computational power of Turing machines yields a model of computation more powerful than the standard one-tape, one-head, deterministic Turing machine • By computational power, we mean what can be computed -- not how fast it can be computed. Your desktop may run faster than a Turing machine, but it can’t compute anything that a Turing machine can’t also compute.

  4. Off-line Turing machine • What if the TM has a second tape that holds the original string, while the main tape is used for processing, so you never have to write over the original string? Does this add any power to the TM? • No. Imagine writing the string onto the main tape, then inserting a special mark on the tape, then copying the string after the mark and doing all the processing after the mark.

  5. Multiple tapes • Consider a TM with k tapes and a separate head for each tape. It reads and writes on these tapes in parallel. • We can show that this does not increase the computational power of a TM by showing that any multi-tape TM can be simulated by a standard, single-tape TM. • The details of the simulation involve dividing a single tape into multiple tracks -- using an alphabet consisting of tuples, with one element for each track.

  6. Multiple tapes [Figure: a TM with 3 tapes, simulated by . . .]

  7. [Figure: . . . a TM with 1 tape but 3 tracks on the tape, or a TM with 1 track and tuples in each cell . . .]

  8. [Figure: . . . or a TM with “words” instead of tuples, or a TM with the “words” replaced by individual symbols.]
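
To make the track-and-tuple construction concrete, here is a minimal Python sketch (our own illustration, not from Linz; the alphabet and tape contents are made-up examples). The cells at the same position on the three tapes collapse into one tuple cell, and each distinct tuple can then be renamed to a single fresh symbol, just as in the figures above.

```python
# A toy illustration (not from Linz) of the track/tuple construction:
# the cells at position i on three simulated tapes collapse into one
# 3-tuple cell, and each distinct tuple can be renamed to a fresh symbol.
from itertools import product

BLANK = "_"
SIGMA = [BLANK, "a", "b"]        # assumed tape alphabet for the example

# One cell of the combined tape holds (tape-1 symbol, tape-2 symbol, tape-3 symbol).
tuple_alphabet = list(product(SIGMA, repeat=3))

# Rename tuples to individual symbols, as on slide 8.
flat_symbol = {t: f"s{i}" for i, t in enumerate(tuple_alphabet)}

def combine(tapes, i):
    """Return the combined-tape cell at position i (blank past a tape's end)."""
    return tuple(t[i] if i < len(t) else BLANK for t in tapes)

tapes = ["ab", "ba", "a"]        # contents of the three simulated tapes
cell = combine(tapes, 0)         # ('a', 'b', 'a')
print(cell, "->", flat_symbol[cell])
```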

  9. Multiple heads • In this case, there is a single tape but k heads that can read/write at different places on the tape at the same time. • We show that this does not increase the computational power of TMs by showing that a multiple-head TM can be simulated by a standard single-head TM. • The simulation details are similar to those for a multi-tape TM.

  10. Multiple heads [Figure: a tape with multiple heads, simulated by a tape with a single head that reads and writes tuples . . .]

  11. Two-dimensional tapes • A 2-dimensional tape is a grid that extends infinitely downward as well as to the right. • The head can move in 4 directions: right, left, up, and down. • This TM can also be simulated by a TM with a single, one-dimensional tape.

  12. Two-dimensional tapes [Figure: a two-dimensional tape]
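
The slides do not spell out the one-dimensional simulation; one standard construction (an assumption here, not necessarily the textbook's) stores the grid on a single tape by translating 2-D coordinates into 1-D positions, for example with the Cantor pairing function:

```python
# One standard trick (an assumption, not necessarily the book's construction):
# store the quarter-plane grid on one tape by mapping 2-D coordinates to 1-D
# positions with the Cantor pairing function, a bijection N x N -> N.

def pair(x, y):
    """Cantor pairing function."""
    return (x + y) * (x + y + 1) // 2 + y

tape = {}                              # sparse 1-D tape: position -> symbol
BLANK = "_"

def write2d(x, y, symbol):
    tape[pair(x, y)] = symbol          # a write at (x, y) lands at pair(x, y)

def read2d(x, y):
    return tape.get(pair(x, y), BLANK)

write2d(2, 3, "a")
print(read2d(2, 3))                    # a
print(read2d(0, 0))                    # _ (blank)
```

A real one-tape TM would do this address arithmetic by shuttling across its tape; the dictionary here merely stands in for the sparse one-dimensional tape.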

  13. Random-access Turing machine • Instead of accessing data on the tape sequentially, imagine a TM that has random-access memory and can go to any cell of the tape in one step. To allow this, the TM has registers that can store memory addresses. • We can simulate this by a multi-tape TM in which one tape is used as memory and the extra tapes are used as registers.

  14. Random-access Turing machine [Figure: a random-access TM with a memory tape and registers (Register 1, Register 2)]

  15. Nondeterministic Turing machine • A nondeterministic TM (NTM) has more than one transition with the same left-hand part, which means more than one transition can be taken in the same configuration. • Nondeterminism allows a TM to have different outputs for the same input. This does not make sense when computing a function, but makes sense for language-recognition in the same way as before. A string is accepted if some computation leads to the halting state.

  16. Non-determinism Back when we looked at finite state machines, we discovered that, although it might take fewer moves to process a string in a regular language with a non-deterministic finite automaton, we could always build a deterministic finite automaton to recognize the same strings.

  17. Non-determinism The relationship between deterministic and nondeterministic Turing machines is similar: it may be possible to do things faster with a non-deterministic TM, but it is always possible to build an equivalent deterministic TM that recognizes the same language.

  18. Nondeterminism and computational power • Nondeterminism does not increase the computational power of a TM. • We can show this by showing that any NTM can be simulated by a DTM using a technique that the book calls “dovetailing.”
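
As a rough illustration of the dovetailing idea (the data layout, names, and fuel bound below are our own assumptions, not the book's construction), a deterministic simulator can explore the NTM's configurations breadth-first, so every nondeterministic branch advances in turn and no single non-halting branch starves the others:

```python
# A rough sketch of dovetailing: explore the NTM's configurations
# breadth-first, level by level, so every branch gets its turn.
from collections import deque

BLANK = "_"

# Nondeterministic transitions: (state, symbol) -> list of (state, write, move).
delta = {
    ("q0", "a"): [("q0", "a", 1), ("qh", "b", 1)],   # two choices: a guess
    ("q0", "b"): [("q0", "b", 1)],
}

def accepts(tape, start="q0", halt="qh", fuel=1000):
    """Accept iff some branch reaches the halt state within the fuel bound."""
    frontier = deque([(start, tuple(tape), 0)])
    seen = set()
    while frontier and fuel > 0:
        fuel -= 1
        state, cells, pos = frontier.popleft()
        if state == halt:
            return True
        if (state, cells, pos) in seen:              # skip repeated configs
            continue
        seen.add((state, cells, pos))
        symbol = cells[pos] if pos < len(cells) else BLANK
        for nstate, write, move in delta.get((state, symbol), []):
            new = list(cells) + [BLANK] * (pos + 1 - len(cells))
            new[pos] = write
            frontier.append((nstate, tuple(new), max(pos + move, 0)))
    return False                                     # no branch halted

print(accepts("aab"))   # True: one branch guesses correctly and halts
```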

  19. Nondeterminism and efficiency • Although nondeterminism does not increase the computational power of a TM, it lets a TM compute some things more efficiently by guessing the right thing to do. • Although a DTM can always simulate an NTM, the DTM may be far less efficient because it has to try all possibilities to find the right one.

  20. Nondeterminism and efficiency • Surprisingly, the question whether a DTM can simulate an NTM efficiently is still unresolved. It is the famous question of whether P = NP. • P stands for “can be solved by a standard deterministic Turing machine in polynomial time” • NP stands for “can be solved by a non-deterministic Turing machine in polynomial time”

  21. Nondeterministic TMs Non-determinism doesn’t give a TM any power to solve harder problems. We can always simulate an ordinary TM on a non-deterministic Turing machine (NTM) simply by not using the freedom to be non-deterministic. Theorem: A non-deterministic Turing machine (NTM) can be simulated exactly by a deterministic Turing machine. So TMs and NTMs are equivalent.

  22. Variations of TM that limit its power • What if we change the transition rules so that the read/write head can only move right? Or delete the finite state controller? Wouldn’t those changes limit the power of a TM? • Yes! In fact, those changes would limit the power of the TM so much that you really couldn’t call it a TM any more.

  23. Variations of TM that limit its power • Restricting the amount of tape that a TM can use limits its computational power. • Theory tells us that this is the only modification to the standard TM that can limit the power of the TM.

  24. Variations of TM that limit its power • What if we limit the size of the tape to some arbitrary constant, no matter what language we are trying to recognize? • We may have a string too long to fit on the tape. The resulting machine is weaker than a Push-Down Automaton, which has an infinite stack. The advantages of a tape can’t make up for the lack of adequate storage. • Such a machine is equivalent to a Finite State Automaton (with tape size = 0, it is exactly one).

  25. Variations of TM that limit its power • What if we limit the size of the tape to the size of the input string? • This gives us a Linear-Bounded Automaton (LBA). An LBA can accept all context-free languages plus other languages like {a^n b^n c^n | n ≥ 0} and {ww | w ∈ {a,b}*}, but not some of the other languages accepted by a standard TM. • It is more powerful than a PDA but less powerful than a TM.

  26. Universal Turing Machines The Universal Turing machine simulates any other TM running on any tape. The UTM’s tape holds a description of another TM, followed by an encoding of the tape that that machine will run on. The Universal Turing machine decodes and simulates the represented TM. This corresponds to Turing’s and von Neumann’s “stored program machine”.

  27. Encoding function A specific TM is defined primarily by its transition function. Each move of a TM is described by the formula δ(p, a) = (q, b, D), where:
p is the state before the move
a is the current character on the tape
q is the state moved to
b is the character written on the tape
D is the direction the tape head moves

  28. Encoding function Suppose that we represent a move such as δ(q3, a) = (q4, Δ, R) like this: q3 a q4 Δ R, where:
q3 — the state before the move
a — the current character on the tape
q4 — the state moved to
Δ — the character written on the tape
R — the direction the tape head moves
Given just the condensed form q3 a q4 Δ R, can you tell what it is supposed to represent?

  29. Encoding function So here is our “condensed” rule: q3 a q4 D R Now let’s encode each of these 5 components as a sequence of 0’s, separated by 1’s. For example • the halt state will be represented by a single 0 • q0 will be represented by two 0’s • q1 will be represented by three 0’s • etc.

  30. Encoding function
Characters: Δ = 0, a = 00, b = 000
States: halt = 0, q0 = 00, q1 = 000
Directions: Stay = 0, Left = 00, Right = 000

  31. Encoding function But 00 can stand for both the character a and the state q0; won’t we get confused? No, because there are 5 parts to each rule, each part is followed by a single 1, and the parts always come in the same order. So: 001010001010001 unambiguously represents: q0 Δ q1 Δ R
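
Decoding works the same way in reverse: split on the 1’s and count the lengths of the runs of 0’s. A minimal Python sketch (the function name is our own; the tables come from slide 30):

```python
# A minimal sketch of decoding one condensed rule: drop the final 1, split
# the remaining 0-runs on the 1 separators, and look the run lengths up in
# the tables of slide 30.

def decode_rule(bits):
    runs = [len(run) for run in bits.rstrip("1").split("1")]
    p, a, q, b, d = runs
    states = {1: "halt", 2: "q0", 3: "q1", 4: "q2"}
    chars = {1: "Δ", 2: "a", 3: "b"}
    moves = {1: "S", 2: "L", 3: "R"}
    return states[p], chars[a], states[q], chars[b], moves[d]

print(decode_rule("001010001010001"))   # ('q0', 'Δ', 'q1', 'Δ', 'R')
```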

  32. Change leftmost a to b [Figure: state diagram of the TM] This TM has 6 transition rules.

  33. Change leftmost a to b [Figure: state diagram with states q0, q1, q2, qhalt and edges labeled Δ/Δ,R; b/b,R; a/b,L; Δ/Δ,L; b/b,L; Δ/Δ,S] This TM has 6 transition rules:
q0 Δ q1 Δ R
q1 b q1 b R
q1 a q2 b L
q1 Δ q2 Δ L
q2 b qh b L
q2 Δ qh Δ S

  34. Change leftmost a to b We use 11 to separate the rules from each other. So this TM can be represented by: • q0Δq1ΔR = 0010100010100011 • q1bq1bR = 000100010001000100011 • q1aq2bL = 00010010000100010011 • q1Δq2ΔL = 00010100001010011 • q2bqhbL = 0000100010100010011 • q2ΔqhΔS = 00001010101011

  35. Change leftmost a to b We can also encode the input string. The string baa would be encoded as: 11000100100 We use 11 to separate this string from the TM. So, an encoding of the entire TM, plus the string that it is supposed to process, looks like this: 001010001010001100010001000100010001100010010000100010011000101000010100110000100010100010011000010101010111111000100100
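
The slides’ encodings are easy to regenerate mechanically. A minimal Python sketch (the function and variable names are our own) that reproduces the rule codes of slide 34 and the input-string code of slide 35:

```python
# A minimal sketch of the slides' encoding: each component becomes a run of
# 0's followed by a 1, each rule gets an extra trailing 1 (hence the 11
# separators of slide 34), and the input string is prefixed by its own 11.

code = {
    "Δ": "0", "a": "00", "b": "000",                    # characters
    "qh": "0", "q0": "00", "q1": "000", "q2": "0000",   # states
    "S": "0", "L": "00", "R": "000",                    # directions
}

def encode_rule(p, read, q, write, move):
    return "".join(code[part] + "1" for part in (p, read, q, write, move))

rules = [("q0", "Δ", "q1", "Δ", "R"), ("q1", "b", "q1", "b", "R"),
         ("q1", "a", "q2", "b", "L"), ("q1", "Δ", "q2", "Δ", "L"),
         ("q2", "b", "qh", "b", "L"), ("q2", "Δ", "qh", "Δ", "S")]

tm_code = "".join(encode_rule(*r) + "1" for r in rules)   # rules end in 11
input_code = "1" + "".join("1" + code[c] for c in "baa")  # 11-prefixed string

print(encode_rule(*rules[0]) + "1")   # 0010100010100011, as on slide 34
print(input_code)                      # 11000100100, as on slide 35
```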

  36. How does the Tu work? The universal Turing machine, Tu, will have 3 tapes. The first tape will be the input/output tape, and initially it contains the entire string, representing both the specific TM we want to simulate, plus the string the TM is supposed to process. The second tape is the work tape. We will move the encoded string to this tape.

  37. How does the Tu work? The third tape will be used to represent the state that the simulated TM is currently in. We start off by copying the initial state of the TM (q0, or 00, in this case) to tape 3.

  38. How does the Tu work?
Tape 1: the input/output tape
Tape 2: the work tape; contains the encoded string
Tape 3: the state the simulated TM is in

  39. How does the Tu work? You can see how the Tu is going to work: The precondition of any transition rule is the current state the TM is in (available on tape 3), and the character on the TM’s tape that we are currently reading (available on tape 2). We then look on tape 1 to find the rule whose precondition matches this one.

  40. How does the Tu work? Finally, we execute the postcondition part, changing the TM’s state to the new state (replacing the old state on tape 3), writing a character onto the TM’s tape (on Tu’s tape 2), and moving the tape head (on tape 2) left, right, or staying.

  41. Does the Tu model the encoded TM? Yes. Why? Because the encoded TM is deterministic, its behavior on a given input is completely determined: it crashes, halts, or runs forever. Tu will crash when the encoded TM does, halt when the encoded TM does, and run forever when the encoded TM does.

  42. Does the Tu model the encoded TM? Crash: • If the encoded TM crashes, Tu will not find a transition and will crash. Halt: • If the encoded TM halts, Tu notices this when it tries to write a single 0 (the halt state) to tape 3 (which keeps track of the current state the TM is in). At this point it erases tape 1, copies tape 2 onto tape 1 and halts.
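
Putting slides 36 through 42 together, the heart of the Tu is a fetch/decode/execute loop. The sketch below is our own compressed illustration (assumed details: it works on decoded rules rather than the raw 0-and-1 codes, and Python values stand in for the three tapes), showing that loop with the crash and halt cases just described:

```python
# A sketch of the Tu's fetch/decode/execute loop. delta plays the role of
# tape 1 (the encoded rules), tape/pos of tape 2 (the simulated TM's tape),
# and state of tape 3 (the simulated TM's current state).

def universal(delta, tape, start="q0", halt="qh", blank="Δ"):
    """Run the encoded TM; return its tape if it halts, or None if it crashes."""
    tape = list(tape)
    state, pos = start, 0
    while state != halt:
        if (state, tape[pos]) not in delta:   # no matching rule: the TM crashes
            return None
        state, write, move = delta[(state, tape[pos])]
        tape[pos] = write                     # write on the simulated tape
        pos += {"L": -1, "S": 0, "R": 1}[move]
        if pos < 0:                           # fell off the left end: crash
            return None
        if pos == len(tape):                  # extend the tape with blanks
            tape.append(blank)
    return "".join(tape)

# The "change leftmost a to b" machine of slide 33.
delta = {
    ("q0", "Δ"): ("q1", "Δ", "R"),
    ("q1", "b"): ("q1", "b", "R"),
    ("q1", "a"): ("q2", "b", "L"),
    ("q1", "Δ"): ("q2", "Δ", "L"),
    ("q2", "b"): ("qh", "b", "L"),
    ("q2", "Δ"): ("qh", "Δ", "S"),
}

print(universal(delta, "Δbaa"))   # Δbba: the leftmost a has become b
```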

  43. Conclusion: Anything that is effectively calculable can be executed on a TM. A universal TM can compute anything that any other Turing machine can compute. The universal TM is itself a standard TM. A CPU with RAM is a finite version of a TM; it has the power of a TM up to the point where it runs out of memory. Languages or hardware that provide comparisons, loops, and increments are termed Turing complete and can also compute anything that is effectively calculable.
