
Phonological constraints as filters in SLA






Presentation Transcript


  1. Phonological constraints as filters in SLA Raung-fu Chung rfchung@mail.nsysu.edu.tw

  2. 1. Introduction The main components of this article are: • The framework of Optimality Theory • Acquisition and learnability in OT • Our model • Concluding remarks

  3. 2. The framework of Optimality Theory (1) The model of OT

  4. For instance, the English plural morpheme /s/ can be realized as either [s] or [z], depending on the stem-final sound:

  5. (2) cat [kæt] cats [kæts] dog [dɔg] dogs [dɔgz] hen [hɛn] hens [hɛnz] The input form is taken to be [s]. Then we propose the following constraint: the Voiced Obstruent Prohibition.

  6. (3) Voiced Obstruent Prohibition, VOP No obstruents can be voiced.

  7. Another constraint called for is: (4) Obstruent Voicing Harmony, OVH Adjacent obstruents must share the same value for [voice]. The third constraint is a universal faithfulness constraint of the Ident-IO family, here referred to as Ident-IO(voice) (Ident = identical, IO = Input-Output). (5) Ident-IO(voice) The value of the [voice] feature in the Output must be identical with that of the Input.

  8. As for the ranking, it is obvious, as shown below. (6) OVH >> VOP ('>>' = outranks) Adding Ident-IO(voice), we have the following ranking of all three constraints just proposed. (7) OVH >> Ident-IO(voice) >> VOP
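The interplay of the three constraints can be sketched computationally. Below is a minimal Python sketch of OT evaluation, with GEN's candidates supplied by hand; the transcriptions and violation counting are illustrative assumptions, while the constraint definitions and the ranking OVH >> Ident-IO(voice) >> VOP follow the text:

```python
# Minimal OT evaluation for the English plural data: violation profiles are
# compared lexicographically, mirroring strict domination.
VOICED = set("bdgvzʒ")          # illustrative obstruent inventory
VOICELESS = set("ptkfsʃ")
OBSTRUENTS = VOICED | VOICELESS

def vop(inp, out):
    """Voiced Obstruent Prohibition: one mark per voiced obstruent."""
    return sum(1 for seg in out if seg in VOICED)

def ovh(inp, out):
    """Obstruent Voicing Harmony: adjacent obstruents agree in [voice]."""
    return sum(1 for a, b in zip(out, out[1:])
               if a in OBSTRUENTS and b in OBSTRUENTS
               and (a in VOICED) != (b in VOICED))

def ident_io_voice(inp, out):
    """Ident-IO(voice): output segments keep the input's [voice] value."""
    return sum(1 for i, o in zip(inp, out)
               if i in OBSTRUENTS and o in OBSTRUENTS
               and (i in VOICED) != (o in VOICED))

RANKING = [ovh, ident_io_voice, vop]    # OVH >> Ident-IO(voice) >> VOP

def eval_ot(inp, candidates):
    """EVAL: pick the candidate with the lexicographically best profile."""
    return min(candidates, key=lambda c: tuple(f(inp, c) for f in RANKING))

print(eval_ot("dɔgs", ["dɔgs", "dɔgz"]))   # → dɔgz (suffix assimilates)
print(eval_ot("kæts", ["kæts", "kætz"]))   # → kæts (faithful output wins)
```

Tuple comparison captures strict domination: a single mark on a higher-ranked constraint outweighs any number of marks further down.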

  9. (8) [tableau not reproduced in the transcript]

  10. 3. Acquisition and learnability in OT

  11. 3.1 The notion of learnability a. Formal learnability in the sense of Tesar & Smolensky (1993) assumed that “all constraints started out being unranked.” Later empirical studies (e.g. Gnanadesikan, 1996; Levelt, 1995) pointed out that outputs are initially governed by markedness constraints rather than by faithfulness constraints. This leads to the proposal that in the initial state of the grammar all markedness constraints outrank all faithfulness constraints, or “M >> F” for short (Kager, Pater & Zonneveld, 2004; Hayes, 2004; Prince & Tesar, 2004).

  12. b. There are two algorithms accounting for the learnability of constraint rankings: the Constraint Demotion Algorithm (CDA) and the Gradual Learning Algorithm (GLA). The Constraint Demotion Algorithm, proposed by Tesar & Smolensky (1993, 1998, 2000), ranks a set of constraints on the basis of positive input. For example, L1 acquisition can be interpreted as constraint demotion (Tesar & Smolensky, 1996).
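The demotion step can be sketched as follows — a simplified Python rendering of constraint demotion over winner-loser pairs, with violation profiles for the plural example supplied by hand (Tesar & Smolensky's full algorithm additionally covers mark cancellation and error-driven data streams):

```python
def constraint_demotion(constraints, pairs):
    """Simplified Constraint Demotion: start with all constraints unranked
    in one stratum, then demote every loser-preferring constraint to just
    below the highest stratum containing a winner-preferring constraint.
    pairs: (winner_marks, loser_marks), each a dict constraint -> violations.
    Assumes the data are consistent with some ranking (else this loops)."""
    strata = [list(constraints)]

    def rank(c):
        return next(i for i, s in enumerate(strata) if c in s)

    changed = True
    while changed:
        changed = False
        for win, lose in pairs:
            prefers_winner = [c for c in constraints
                              if lose.get(c, 0) > win.get(c, 0)]
            prefers_loser = [c for c in constraints
                             if win.get(c, 0) > lose.get(c, 0)]
            if not prefers_winner:
                continue
            h = min(rank(c) for c in prefers_winner)
            for c in prefers_loser:
                r = rank(c)
                if r <= h:                      # must be dominated: demote
                    strata[r].remove(c)
                    while len(strata) <= h + 1:
                        strata.append([])
                    strata[h + 1].append(c)
                    changed = True
    return [s for s in strata if s]

# Plural data: winner [dɔgz] vs loser [dɔgs], plus a hypothetical pair
# separating Ident-IO(voice) from VOP (e.g. faithful [dɔg] beating [dɔk]).
print(constraint_demotion(
    ["OVH", "Ident", "VOP"],
    [({"Ident": 1, "VOP": 2}, {"OVH": 1, "VOP": 1}),
     ({"VOP": 1}, {"Ident": 1})]))
# → [['OVH'], ['Ident'], ['VOP']]
```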

  13. c. The Gradual Learning Algorithm (GLA), developed in Boersma (1997, 1998) and Boersma & Hayes (2001), handles variation in the input and accounts for gradient well-formedness. The GLA is helpful in accounting for the categorization errors a learner makes in both production and perception. L2 learners with restricted constraint sets have to learn to rerank the constraints gradually by raising or lowering the existing ones.
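A toy version of the GLA's update rule can illustrate this gradual re-ranking. The sketch below applies the update unconditionally for clarity, whereas the actual algorithm updates only when the learner's stochastically evaluated output mismatches the datum; the constraint names and all numeric values are illustrative:

```python
import random

def gla_update(values, observed_marks, error_marks, plasticity=0.1):
    """One GLA-style step: demote constraints violated more by the observed
    datum, promote constraints violated more by the learner's erroneous
    output.  values maps constraint names to real ranking values."""
    for c in values:
        obs, err = observed_marks.get(c, 0), error_marks.get(c, 0)
        if obs > err:
            values[c] -= plasticity     # disfavors the datum: demote
        elif err > obs:
            values[c] += plasticity     # disfavors the error: promote

def sample_ranking(values, noise=2.0):
    """Stochastic evaluation: add Gaussian noise to each ranking value and
    sort by the resulting selection points, highest-ranked first."""
    return sorted(values, key=lambda c: values[c] + random.gauss(0, noise),
                  reverse=True)

# A learner keeps hearing [dɔgz] (2 VOP marks, 1 Ident mark) where its own
# grammar outputs [dɔgs] (1 OVH mark, 1 VOP mark): VOP sinks, OVH rises.
values = {"OVH": 100.0, "Ident": 100.0, "VOP": 100.0}
for _ in range(200):
    gla_update(values,
               observed_marks={"VOP": 2, "Ident": 1},
               error_marks={"OVH": 1, "VOP": 1})
print(sample_ranking(values, noise=0.0))   # noiseless: OVH now ranked first
```

Because updates move values by a small plasticity step, re-ranking is gradual rather than abrupt, which is what lets the model describe intermediate interlanguage stages.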

  14. 4. Our model • L1 Filter Hypothesis & OT: [model diagram: the L2 Input passes through the L1 filter to an Interlanguage ranking, which yields the Interlanguage output; constraint re-ranking, drawing on UG's universal constraints (UC), moves the Interlanguage ranking toward the Native-like ranking, which yields native-like L2 Output]

  15. Empirical arguments: • An OT-based analysis of VOT production by Taiwanese EFL learners • An OT-based analysis of Mandarin and English diphthong construction by Taiwanese learners • Errors in the production of [yi] and [wu] by Mandarin EFL learners

  16. An OT-based Analysis of VOT Production by Taiwanese EFL Learners • Acoustic values of VOT: (Liou, 2005) • Note: NSE = native speakers of English; HEFL = high-proficiency EFL learners; LEFL = low-proficiency EFL learners; MAN = Mandarin; SM = Southern Min

  17. Constraints for VOT: • 1. The *CATEG(ORIZE) family, which penalizes producing categories with certain acoustic values. For example, *CATEG(VOT: /91.5ms/) is against producing /91.5ms/ as a particular category. • 2. The *WARP family, which demands that every segment be produced as a member of the most similar available category. For instance, *WARP(VOT: 9.3ms) requires that an acoustic segment with a VOT of 91.5ms not be produced as any VOT 'category' that is 9.3ms off (or more), i.e. as /82.2ms/ or /100.8ms/ or anything even farther away.
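These two families can be given a rough computational reading. The sketch below scores each available VOT category by its *WARP distance from the target plus a *CATEG penalty and produces the cheapest one; the summed score is a harmony-style simplification of strict ranking, and every millisecond value other than the 91.5ms target from the text is illustrative:

```python
def produce_vot(target_ms, categories, categ_penalty=None):
    """Choose the VOT category used to produce a target acoustic value.
    *WARP penalizes producing the target as a category far from it;
    *CATEG(cat) penalizes using that category at all.  Penalties are
    combined as a simple sum here (strict OT ranking would compare
    violation profiles lexicographically instead)."""
    categ_penalty = categ_penalty or {}
    def cost(cat):
        warp = abs(cat - target_ms)             # how far the token is warped
        return warp + categ_penalty.get(cat, 0.0)
    return min(categories, key=cost)

# Native speaker of English: a long-lag category near the target exists.
print(produce_vot(91.5, [9.0, 91.5]))           # → 91.5

# Learner with Mandarin-like categories only (illustrative values): the
# token is warped onto the nearest existing category.
print(produce_vot(91.5, [14.0, 77.0]))          # → 77.0

# Low-proficiency learner: the English category exists, but its *CATEG
# constraint is still highly ranked (large penalty), so it is avoided.
print(produce_vot(91.5, [77.0, 91.5], {91.5: 50.0}))   # → 77.0
```

This reproduces the proficiency contrast in the tables that follow: re-ranking (lowering the *CATEG penalty on the English category) is what lets HEFL learners approach native-like VOT.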

  18. Constraint-ranking for English [ph] by NSE • Tableau 1 English [ph] of NSE

  19. Constraint-ranking for [ㄆ] by Taiwanese EFL learners • Tableau 2 Mandarin [ㄆ] by Taiwanese EFL learners

  20. Constraint-ranking for Interlanguage [ph] by Taiwanese EFL learners • Tableau 3 Interlanguage [ph] by HEFL

  21. Tableau 4 Interlanguage [ph] by LEFL

  22. An OT-based Analysis of Mandarin and English diphthongs for Taiwanese EFL (MSL) learners

  23. (1)

  24. (2) [-back] vowels for: • (3) [+back] vowels for: • (4)

  25. (4) *N (N = nucleus, = = same) [+back] [−back]

  26. (5) Different back features for: • (6) • (7) • (8)

  27. (9) *N (N = nucleus, = = same, * = ungrammatical) [+back] [+back]

  28. 5. Concluding remarks • Theoretical implications • Empirical support

  29. The end
