
ACT-R/S: Extending ACT-R to make big predictions


Presentation Transcript


  1. ACT-R/S: Extending ACT-R to make big predictions
  Christian Schunn, Tony Harrison, Xiaohui Kong, Lelyn Saner, Melanie Shoup, Mike Knepp, … University of Pittsburgh

  2. Approach
  Combine functional analysis
  • Computational level (Marr); Knowledge level (Newell); Rational level (Anderson)
  with neuroscience understanding
  • most elaborated at the level of gross structure
  to build a spatial cognitive architecture for problem solving

  3-4. Need for 3 Systems
  Computational Considerations
  • Some tasks need to ignore size, orientation, and location
  • Some tasks need highly metric 3D part representations
  • Some tasks need relative 3D locations of blob objects

  5. ACT-R/S: Three Visuospatial Systems
  • Visual - object identification (the traditional “what” system)
  • Configural - navigation (the traditional “where” system)
  • Manipulative - grasping & tracking

  6. [Diagram: visual input of a nearby chair feeding the Visual, Manipulative, and Configural Representations]

  7. Allocentric vs. egocentric representations
  • All ACT-R/S representations are inherently egocentric => allocentric viewpoints must be inferred (computed)
  • Q: What about data suggestive of allocentric representations?

  8. Configural System Representation

  9. Configural Buffer
  [Diagram: the Configural Buffer and Path Integrator hold landmark chunks (Triangle-T1 … Triangle-TN, Circle-T1 … Circle-TN), each with slots: vectors, identity-tag. Pairs of landmarks combine into configural-relation chunks (Circ-Tri-T1 … Circ-Tri-TN) with slots: Triangle-ID, Circle-ID, delta-heading, delta-pitch, triangle-range, circle-range]
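  To make the slot structure concrete, here is a minimal Python sketch of the two chunk types. The slides give only the slot names, so the field types, the vector encoding, and the dataclass framing are assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LandmarkChunk:
    """One landmark held in the configural buffer (e.g. Triangle-T1)."""
    identity_tag: str                    # slot: identity-tag
    vectors: Tuple[float, float, float]  # slot: vectors (egocentric range, heading, pitch; assumed encoding)

@dataclass
class ConfiguralRelationChunk:
    """Relation between two landmarks (e.g. Circ-Tri-T1), with the slots named on the slide."""
    triangle_id: str       # slot: Triangle-ID
    circle_id: str         # slot: Circle-ID
    delta_heading: float   # heading difference between the two landmark vectors
    delta_pitch: float     # pitch difference
    triangle_range: float  # distance to the triangle landmark
    circle_range: float    # distance to the circle landmark
```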

  10. “Place-cells”
  • Pyramidal cells in rodent hippocampus (CA1/CA3)
  • Fire maximally with respect to the rodent’s location - regardless of orientation
  • Span many modalities (aural, olfactory, visual, haptic & vestibular)
  • Stable across time
  • Plot cell-firing rate across space
  [Figure: a single place-cell, from Muller, 1984]

  11. “Place-cells” (the not-so-pretty picture)
  Cell firing within a rat is also correlated with:
  • Goal (Shapiro & Eichenbaum, 1999)
  • Direction of travel (O’Keefe, 1999)
  • Duration in the environment (Ludvig, 1999)
  • Relative configuration of landmarks (Tanila, Shapiro & Eichenbaum, 1997; Fenton, Csizmadia, & Muller, 2000)
  [Figure: from Burgess, Jackson, Hartley & O’Keefe, 2000]

  12. ACT-R/S and “Place-cells”
  • A configural representation (vectors to a single landmark) supports the lowest level of navigation - but defines an infinite set of locations
  • A configural relationship (between two landmarks) establishes a unique location in space
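  The geometric point can be checked directly: a range to one landmark only constrains the observer to a circle, while two ranges plus the sign of the relative bearing pin down one point. A minimal sketch, assuming 2D landmark positions in an arbitrary map frame (the model itself stores no such frame; this is purely illustrative):

```python
import math

def locate(p1, p2, r1, r2, delta_heading_deg):
    """Recover a unique observer position from a two-landmark relation.

    With only r1, the observer could be anywhere on a circle around p1
    (an infinite set of locations). Adding r2 leaves two mirror-image
    intersection points, and the sign of delta-heading (is landmark 2
    to the left or to the right of landmark 1?) selects one of them.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # foot of the perpendicular along the baseline
    h = math.sqrt(max(r1**2 - a**2, 0.0))  # perpendicular offset of the intersections
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    candidates = [(mx - h * dy / d, my + h * dx / d),
                  (mx + h * dy / d, my - h * dx / d)]
    want = math.sin(math.radians(delta_heading_deg))
    for qx, qy in candidates:
        # the cross product of the two egocentric vectors has the same sign
        # as sin(delta-heading) at the true position (sign convention assumed)
        cross = (p1[0] - qx) * (p2[1] - qy) - (p1[1] - qy) * (p2[0] - qx)
        if cross * want > 0:
            return (qx, qy)
    return candidates[0]  # degenerate case: observer collinear with the landmarks
```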

  13. Egocentric Representation / Allocentric Interpretation
  [Diagram: the egocentric landmark chunks Triangle-TN and Circle-TN (slots: vectors, identity-tag) combine into the relation chunk Circ-Tri-TN (slots: Triangle-ID, Circle-ID, delta-heading, delta-pitch, triangle-range, circle-range)]
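  A short sketch of how the allocentric interpretation can fall out of purely egocentric inputs. The slide names the slots but not the arithmetic, so the vector encoding and the slot derivations here are assumptions:

```python
def configural_relation(tri_vec, circ_vec):
    """Build the Circ-Tri relation slots from two egocentric landmark
    vectors, each given as (range, heading_deg, pitch_deg) in the
    viewer's frame.

    Subtracting the two headings cancels the viewer's facing direction,
    so the resulting chunk picks out a location regardless of
    orientation; this matches the orientation-invariance of place cells.
    """
    tri_range, tri_heading, tri_pitch = tri_vec
    circ_range, circ_heading, circ_pitch = circ_vec
    delta_heading = (circ_heading - tri_heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return {
        "Triangle-ID": "Triangle-TN",
        "Circle-ID": "Circle-TN",
        "delta-heading": delta_heading,
        "delta-pitch": circ_pitch - tri_pitch,
        "triangle-range": tri_range,
        "circle-range": circ_range,
    }
```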

  14. Foraging Model
  • Virtual rat searching for food
  • Square environment with each wall as a landmark (obstacle free)
  • When no food is available, the rat free-roams or returns to a previously successful location
  • Food is placed semi-randomly to force the rat to cover the entire environment multiple times
  • Record activation across time and space for preselected configural relationships
  • (Add Gaussian noise; see the sketch below)
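  A minimal sketch of the roam-and-record loop, assuming two wall landmarks, a clipped random walk as a stand-in for the foraging policy, and a similarity kernel in place of the real ACT-R activation equation (none of these specifics come from the slides):

```python
import math
import random

ARENA = 1.0                               # square, obstacle-free arena
LM_TRI, LM_CIRC = (0.0, 0.5), (0.5, 1.0)  # two wall landmarks (positions assumed)

def relation(x, y):
    """Orientation-invariant configural relation from (x, y):
    (range to each landmark, angle subtended between them)."""
    v1 = (LM_TRI[0] - x, LM_TRI[1] - y)
    v2 = (LM_CIRC[0] - x, LM_CIRC[1] - y)
    delta = math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])
    return math.hypot(*v1), math.hypot(*v2), math.remainder(delta, math.tau)

STORED = relation(0.3, 0.7)  # chunk encoded at an earlier successful location

def activation(x, y, noise_sd=0.05):
    """Stand-in for chunk activation: high where the current relation
    matches the stored chunk, plus Gaussian noise (the slide's
    "Add Gaussian noise"). The kernel is an assumption, not ACT-R's equation."""
    mismatch = sum((a - b) ** 2 for a, b in zip(relation(x, y), STORED))
    return math.exp(-mismatch / 0.02) + random.gauss(0.0, noise_sd)

# free roam as a clipped random walk; food placement and goal returns omitted
x = y = 0.5
samples = []                 # (x, y, activation) triples -> firing-rate map
for _ in range(20000):
    x = min(max(x + random.uniform(-0.02, 0.02), 0.0), ARENA)
    y = min(max(y + random.uniform(-0.02, 0.02), 0.0), ARENA)
    samples.append((x, y, activation(x, y)))
```

  Plotting the mean recorded activation per spatial cell yields the simulated analogue of a place-field firing-rate map.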

  15. “Single-Chunk” Recording
  • Stable fields are a function of regularities in the learned attending pattern.
  • Multiple passes through the same region reactivate the same configural-relation chunk.
  • Multi-modal peaks are likewise influenced by the goal (same landmarks, different order).

  16. What about humans?
  • Small-scale orientation and navigation data typically report egocentric representations
  • Diwadkar & McNamara, 1997; Roskos-Ewoldsen, McNamara, Shelton, & Carr, 1998; Shelton & McNamara, 1997
  • One famous counter-example: Mou & McNamara, 2002

  17. Mou & McNamara (2002)
  • Subjects study a view of objects from 315º
  • Study it as if from the intrinsic axis (0º): A-B, C-D-E, F-G
  • Testing asks subjects to imagine standing at X, looking at Y, and to point to Z
  • Plot pointing error as a function of imagined heading (X-Y)
  • 0º, 90º, 180º, 270º show much lower error!
  [Figure: the object layout (A-G), with the view position at 315º and the intrinsic axis at 0º]

  18. Zero-parameter egocentric prediction
  • Hierarchical task analysis of training and testing
  • Extra boost from encoding configural chunks (egocentric vectors, as in ACT-R/S)
  • Count the number of times any specific chunk will be accessed
  • Compute the probability of successfully retrieving each chunk (location, facing, pointing), using the basic ACT-R chunk learning and retrieval functions, default parameters, and a delay of 10 minutes (see the sketch below)
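  For reference, the standard ACT-R equations behind that last step, as a minimal Python sketch. The access counts, their spacing, and the noise and threshold values below are illustrative, since the slide says only “default parameters”:

```python
import math

def base_level(access_times, now, d=0.5):
    """ACT-R base-level learning: B = ln(sum_j t_j^-d) over the times
    since each past access, with the default decay d = 0.5 (times in seconds)."""
    return math.log(sum((now - t) ** -d for t in access_times))

def p_retrieval(B, tau=0.0, s=0.25):
    """Probability of successful retrieval under logistic activation noise:
    P = 1 / (1 + exp(-(B - tau) / s)). tau and s are illustrative values here."""
    return 1.0 / (1.0 + math.exp(-(B - tau) / s))

# e.g. a location chunk accessed 4 times during study, tested 10 minutes later
study_times = [0.0, 5.0, 10.0, 15.0]  # assumed spacing during training
now = study_times[-1] + 600.0         # the 10-minute delay from the slide
print(p_retrieval(base_level(study_times, now)))
```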

  19. Modeling Frames of Reference
  • Data (Exp 1) vs. the zero-parameter prediction
  • Playing with the noise parameter(s) and the retrieval threshold (τ) improves the absolute fit (RMSE); see the sketch below
  • All (reasonable) parameter values produce a similar qualitative fit
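  A minimal sketch of what that parameter exploration could look like; the model callable, the grids, and the setup are hypothetical stand-ins, not the authors’ fitting code:

```python
import math

def rmse(pred, obs):
    """Root-mean-squared error between model predictions and data."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def sweep(model, obs, taus, noises):
    """Exhaustively sweep retrieval-threshold (tau) and activation-noise (s)
    values; model(tau, s) should return predicted pointing errors, one per
    imagined heading. Returns the best (rmse, tau, s): the sweep shrinks
    RMSE, while every reasonable pair keeps the same qualitative pattern."""
    return min((rmse(model(tau, s), obs), tau, s)
               for tau in taus for s in noises)
```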

  20. More data
  • Mats on the floor that emphasize the allocentric frame of reference: no effect (as predicted)
  • Square vs. round room: no effect (as predicted)
  • Training order from an egocentric vs. allocentric orientation: big effect (as predicted)

  21. Training Order - Mou & McNamara (2002), Exp 2
  [Figure: data vs. model pointing error for the “Allocentric” and “Egocentric” training orders; model fits of r=.62 and r=.85]
