Agenda

  1. Agenda
  • What we have done on which tasks
  • Further specification of work on all our tasks
  • Planning for deliverable writing this autumn (due in December)
  • Administrative matters

  2. What we have done on which tasks • First quarter

  3. Task 1.1: Grammatical Framework and TrindiKit. Work has been carried out towards a new TrindiKit release, in particular a GUI which enables inspection of dialogues held with the system (scheduled for June). Work has also been carried out towards a new GF-to-CFG grammar compiler which will facilitate the integration of TrindiKit with GF.
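
For illustration, a context-free grammar of the kind such a compiler might emit could be used directly for parsing in the dialogue system. The rules and the VCR vocabulary below are invented for this sketch, not taken from the actual TALK grammars.

```python
# A minimal sketch of the kind of CFG a GF-to-CFG compiler could emit for a VCR
# domain (hypothetical rules). Nonterminals are uppercase; terminals are words.

CFG = {
    "COMMAND": [["play"], ["stop"], ["record", "CHANNEL", "TIME"]],
    "CHANNEL": [["channel", "DIGIT"]],
    "TIME":    [["at", "DIGIT", "DIGIT"]],
    "DIGIT":   [[str(d)] for d in range(10)],
}

def expansions(symbol, words, start):
    """Yield end positions of parses of `symbol` over words[start:]."""
    if symbol not in CFG:                      # terminal symbol
        if start < len(words) and words[start] == symbol:
            yield start + 1
        return
    for rule in CFG[symbol]:                   # try each production in turn
        positions = [start]
        for part in rule:
            positions = [end for pos in positions
                         for end in expansions(part, words, pos)]
        yield from positions

def recognises(utterance):
    words = utterance.split()
    return len(words) in set(expansions("COMMAND", words, 0))

print(recognises("record channel 4 at 2 0"))   # True
print(recognises("rewind the tape"))           # False
```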

  4. Task 1.6: Proof of concept dialogue system. A baseline version of a dialogue system for a video recorder has been completed. A small corpus of user interactions has been collected. (This work has been carried out in part with other project funding.) Plans have been made to integrate the video recorder with a smart house demo, including lights sensitive to the position of the user and a personal agenda.

  5. WP5. Göteborg has conducted work towards a new release of TrindiKit which will make it possible to use it as an OAA agent, and thus make it available to users who do not wish to have TrindiKit as the main agent in an OAA configuration. The work on the TrindiKit GUI has been divided between WP1 and WP5, since it will allow other users to inspect corpora for applications other than those collected in connection with the WP1 work.
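
As a rough sketch of the intent only (hypothetical interfaces, not the real OAA or TrindiKit APIs): the dialogue-move engine would sit behind a thin wrapper advertising a single service, so that some other component can act as the main agent and simply call in when it needs the dialogue manager.

```python
# Hypothetical wrapper illustrating the idea of exposing a dialogue manager as an
# agent service rather than as the main control loop. Not the real OAA API.

class DialogueManagerWrapper:
    def __init__(self, dme):
        self.dme = dme                      # the wrapped dialogue-move engine

    def solvables(self):
        # Service names advertised to a facilitator; callers need not know TrindiKit.
        return ["interpret_and_respond(Utterance, Response)"]

    def solve(self, goal, utterance):
        if goal.startswith("interpret_and_respond"):
            return self.dme.process(utterance)
        raise ValueError(f"cannot solve {goal}")

class ToyDME:
    """Stand-in for the dialogue-move engine; returns a canned acknowledgement."""
    def process(self, utterance):
        return f"ack({utterance!r})"

wrapper = DialogueManagerWrapper(ToyDME())
print(wrapper.solve("interpret_and_respond", "record channel four"))
```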

  6. Second quarter

  7. GF stuff
  • CF compiler, parser (WP1)
  • Java program compiler (OAA) -- only generation so far (WP5)
  • generate corpus from grammar (WP1; sketched below)
  • multimodal grammars (WP1)
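
Corpus generation from a grammar could work roughly as follows; the toy grammar is invented for the sketch, and exhaustive bottom-up expansion is only one possible strategy.

```python
# Sketch of corpus generation from a grammar (toy rules, not the project grammars):
# exhaustively expand every production to enumerate the strings the grammar covers.

import itertools

CFG = {
    "COMMAND": [["play"], ["stop"], ["record", "CHANNEL"]],
    "CHANNEL": [["channel", "DIGIT"]],
    "DIGIT":   [["1"], ["2"], ["3"]],
}

def generate(symbol):
    """Return every terminal string derivable from `symbol`."""
    if symbol not in CFG:
        return [symbol]
    sentences = []
    for rule in CFG[symbol]:
        parts = [generate(part) for part in rule]
        for combo in itertools.product(*parts):
            sentences.append(" ".join(combo))
    return sentences

print(generate("COMMAND"))
# ['play', 'stop', 'record channel 1', 'record channel 2', 'record channel 3']
```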

  8. TrindiKit stuff
  • new release 3.1, with GUI (WP1, WP5)
  • video recorder, small corpus [audio + transcription] (WP1)
  • lights (not really beyond planning) (WP1)
  • agenda (updating old version to the new TrindiKit) -- perhaps interface to route planning, Cambridge map (WP1)

  9. Current & future work • Trindikit 4 (increased integration with OAA, simpler to use - increased access from OAA) wp1,5 • OAA shells for input/output: Festival synthesis, RUTH • talking head, FestVox, SPHINX recognition wp5 • OAA shells for various devices: internet radio, agenda, X10 lights wp5 • GF grammar for GoDiS VCR application wp1

  10. 2. Further specification of work on all our tasks

  11. Task 1.1: Integration of the Grammatical Framework with TrindiKit and DELFOS NCL. We will integrate the Grammatical Framework into TrindiKit and DELFOS NCL, providing examples of grammars which can be used in issue-based dialogue systems.

  12. Task 1.2: Extending the TALK Grammar Library to include multimodality. Extend the use of abstract and concrete syntax from multilinguality to multimodality. We will produce some working examples, early in the project, to establish the viability of the idea of unifying multimodality and multilinguality. We will produce examples of multimodal grammars which take abstract representations of information and relate them both to natural languages and to abstract representations used for, e.g., tabular or graphical representation.
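
A minimal way to picture the unification of multilinguality and multimodality (the representation below is invented for illustration, not the TALK grammar library format): one abstract object is linearised both as an English sentence and as a table.

```python
# One abstract representation, two "concrete syntaxes": English text and a table.
# The field names and linearisation functions are hypothetical.

abstract = {"action": "record", "channel": 4, "start": "20:00", "end": "21:30"}

def linearise_english(a):
    return f"Record channel {a['channel']} from {a['start']} to {a['end']}."

def linearise_table(a):
    rows = [("action", a["action"]), ("channel", a["channel"]),
            ("start", a["start"]), ("end", a["end"])]
    width = max(len(k) for k, _ in rows)
    return "\n".join(f"{k:<{width}}  {v}" for k, v in rows)

print(linearise_english(abstract))
print(linearise_table(abstract))
```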

  13. Task 1.5: Relating abstract representations to existing KR and domain specifications. We will show how existing knowledge representations can be related to abstract representations of the kind developed in Task 1.2.

  14. Task 1.6: Proof of concept dialogue system using the multimodal grammar library. This task will showcase an experimental dialogue system in the smart house domain.

  15. ?Task 2.1: Integration of ontological knowledge with the ISU approach. (a) Flexible dialogue. Develop an ISU interaction manager which can use ontologies for automatic clarification of values (e.g., "Did you mean London Gatwick or Heathrow?") and tasks (e.g., "Do you want to plan a route or to know about traffic conditions?"). (b) Multimodal presentation. Use ontological structure to provide appropriate options for particular modalities (small vs. large screen, audio or visual, etc.). This task links to WP3.
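
A minimal sketch of (a), assuming a toy ontology that maps a city name to its airports; the data and function names are invented for illustration.

```python
# Ontology-driven clarification: if a user-supplied value maps to several more
# specific entities, the interaction manager asks which one was meant.

AIRPORT_ONTOLOGY = {
    "london": ["London Gatwick", "London Heathrow", "London Stansted"],
    "paris":  ["Paris Charles de Gaulle", "Paris Orly"],
}

def clarify_value(value):
    candidates = AIRPORT_ONTOLOGY.get(value.lower(), [value])
    if len(candidates) == 1:
        return None                         # unambiguous, no clarification needed
    options = ", ".join(candidates[:-1]) + " or " + candidates[-1]
    return f"Did you mean {options}?"

print(clarify_value("London"))
# Did you mean London Gatwick, London Heathrow or London Stansted?
print(clarify_value("Gothenburg"))          # None
```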

  16. (c) Semantic interpretation. Provide semantic interpretation routines which can be reconfigured automatically by plugging in a new ontology. (d) Filtering speech recognition hypotheses. Evaluate the results of re-ranking recognition hypotheses according to plausibility in the domain; generation of clarifications. (e) Reuse of existing ontological knowledge. Examine existing ontological resources and show what can be reused in a dialogue context. Provide recommendations for ontology formats.
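
For (d), re-ranking could look roughly like this; the domain vocabulary, scores and weighting are invented for the sketch, and a real system would draw on the ontology and proper confidence scores.

```python
# Re-rank an n-best list of recognition hypotheses by plausibility in the domain:
# hypotheses that mention known in-domain entities are promoted before selection.

DOMAIN_TERMS = {"route", "traffic", "gatwick", "heathrow", "motorway"}

def rerank(nbest, weight=0.2):
    """nbest: list of (hypothesis, acoustic_score) pairs, scores in [0, 1]."""
    def plausibility(hyp):
        words = hyp.lower().split()
        return sum(w in DOMAIN_TERMS for w in words) / max(len(words), 1)
    return sorted(nbest,
                  key=lambda h: h[1] + weight * plausibility(h[0]),
                  reverse=True)

nbest = [("plan a root to get wick", 0.61),
         ("plan a route to gatwick", 0.58)]
print(rerank(nbest)[0][0])                  # plan a route to gatwick
```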

  17. ?Task 2.2: Dynamic Reconfiguration. Extend work on device plug-and-play in networked home environments to more general task or service plug-and-play in a multimodal setting. The work will build on existing standards in this area (such as UPnP) which address plug-and-play but do not include the necessary linguistic resources.
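
One way to picture the extension (the descriptor format is invented; UPnP itself covers device discovery and control but not linguistic resources): a device announcement also carries a small lexicon or grammar fragment that the dialogue system merges in at runtime.

```python
# Hypothetical plug-and-play registry that accepts a device descriptor bundling
# API calls with a lexicon fragment, so new devices become talkable-about at runtime.

class DeviceRegistry:
    def __init__(self):
        self.actions = {}                    # phrase -> (device, api_call)

    def plug_in(self, device_name, descriptor):
        for phrase, api_call in descriptor["lexicon"].items():
            self.actions[phrase] = (device_name, api_call)

    def interpret(self, utterance):
        return self.actions.get(utterance.lower())

registry = DeviceRegistry()
registry.plug_in("x10_lights", {
    "lexicon": {"turn on the lights": "set_power(on)",
                "turn off the lights": "set_power(off)"},
})
print(registry.interpret("turn off the lights"))
# ('x10_lights', 'set_power(off)')
```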

  18. ?Task 2.3: Programming Devices and Services. Bridge the gap between instructions given in natural language and the APIs presented by various devices and applications. This will include the ability for users to define simple programs over both devices and services.
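
As a sketch of what a user-defined program over devices might bridge to (hypothetical rule format and device calls): an instruction such as "when I leave home, turn off the lights and pause the radio" becomes a trigger plus a list of API calls.

```python
# Hypothetical trigger-action structure a simple user program could compile to.

from dataclasses import dataclass, field

@dataclass
class UserProgram:
    trigger: str                                   # an event the house can detect
    actions: list = field(default_factory=list)    # (device, api_call) pairs

    def run(self, event):
        if event == self.trigger:
            for device, call in self.actions:
                print(f"{device}.{call}")          # stand-in for a real device call

program = UserProgram(
    trigger="user_left_home",
    actions=[("x10_lights", "set_power(off)"),
             ("internet_radio", "pause()")],
)
program.run("user_left_home")
```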

  19. ?Task 3.1: Extended information state modelling. To support advanced multimodal presentations, we will (i) define what information should be retained in the extended information state beyond local context, (ii) develop representations of the extended information state in a structured way useful for dialogue management, output presentation and input interpretation, and (iii) develop methods to maintain the extended information state during a dialogue.
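
A possible shape for such a structured representation is sketched below; the fields are chosen purely for illustration and are not the task's actual definition.

```python
# Illustrative extended information state: alongside the usual local context it
# keeps a longer dialogue history and a record of what was shown in which modality.

from dataclasses import dataclass, field

@dataclass
class ExtendedInformationState:
    qud: list = field(default_factory=list)            # questions under discussion
    shared_commitments: list = field(default_factory=list)
    dialogue_history: list = field(default_factory=list)   # (speaker, move) pairs
    presented: list = field(default_factory=list)       # (modality, content) pairs

    def update(self, speaker, move):
        self.dialogue_history.append((speaker, move))

    def record_presentation(self, modality, content):
        self.presented.append((modality, content))

state = ExtendedInformationState()
state.update("user", "ask(?x.departure_time(x))")
state.record_presentation("screen", "timetable(rows=12)")
print(state.dialogue_history, state.presented)
```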

  20. ?Task 3.2: Using extended information state for multimodal turn planning. This task is concerned with developing methods to determine the contextualized content from the proto-content on the basis of the extended information state, and also methods for taking the extended information state into account when distributing the contextualized content over the available modalities.
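
A toy sketch of the distribution step; the allocation heuristics are invented, and the task itself will develop principled methods driven by the extended information state.

```python
# Toy modality allocation: content already on screen is only referred to in speech,
# and long listings prefer the screen when one is available.

def allocate(contextualised_content, state):
    """state: dict with 'screen_available' and 'already_on_screen' entries."""
    plan = []
    for item in contextualised_content:
        if item in state["already_on_screen"]:
            plan.append(("speech", f"as shown, {item}"))
        elif state["screen_available"] and item.startswith("list:"):
            plan.append(("screen", item))
        else:
            plan.append(("speech", item))
    return plan

state = {"screen_available": True, "already_on_screen": {"route_overview"}}
print(allocate(["route_overview", "list:turn_instructions"], state))
```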

  21. ?Task 3.3: Modality-specific Resources. We will develop modality-specific resources to realize the contextualized content in various modalities, such that information structure is properly reflected. We will evaluate the capability of each modality to present the given contextualized content, to support the adaptive dynamic allocation of modality-specific resources; we will develop metrics for this purpose.

  22. Task 5.1: Infrastructure. The goal of integrating components from different partners into the laboratory system and also the final in-car and in-home showcase depends on a well-defined software infrastructure as the basis for flexible, efficient and real-time communication between the modules. Based on existing middleware for connecting modules, and in close cooperation with the various providers of libraries, we will develop a project standard for module interfaces. Important factors are distributability across multiple machines and independence from operating systems, as well as the easy integration of third-party software, e.g., for speech synthesis.
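
The kind of module-interface standard in question might amount to little more than a small, serialisable message envelope that any module can exchange over the middleware, independently of machine or operating system; the field names below are invented for illustration.

```python
# Hypothetical message envelope for inter-module communication, serialised to JSON
# so it is independent of language, machine and operating system.

import json, time, uuid

def make_message(sender, receiver, performative, content):
    return {
        "id": str(uuid.uuid4()),          # unique message id for logging/tracing
        "timestamp": time.time(),
        "sender": sender,
        "receiver": receiver,
        "performative": performative,     # e.g. "request", "inform"
        "content": content,               # module-specific payload
    }

msg = make_message("asr", "dialogue_manager", "inform",
                   {"nbest": [("plan a route to gatwick", 0.58)]})
print(json.dumps(msg, indent=2))
```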

  23. ?Task 5.2: Integration. Given the initial versions of the libraries, the task of WP5 will be the integration of the components into the laboratory system, which will be the basis for data collection and evaluation and eventually lead to in-car and in-home showcase systems based on libraries developed in TALK. To get an early start, partially existing technology will be used to develop a first baseline system based on the in-car scenario.

  24. 3. Planning for deliverable writing this autumn (due in December)
  • Deliverables: D1.2a [UGOT] MM and multilingual grammars (Peter and Aarne, with input from Robin)
  • Status reports: T1.6s1 [UGOT] smart house system (Staffan and Robin coordinate, input from David, Stina, Rebecca, summer interns?, Seville, ...)

  25. Contributions to other status reports?
  • T2.1s1 [LING] ontologies and ISU -- in-car
  • T2.2s1 [LING] reconfiguration -- smart house
  • T3.1s1 [SAAR] extended ISU modelling
  • T3.2s1 [DFKI] plan library for MM output
  • T3.3s1 [SAAR] modality-specific resources
  • T5.2s1 [BMW] in-car baseline system
  ?? Where else would our contributions to WP5 go? Why is there no T5.1s1?

  26. 4. Administrative matters
  • Job specifications, time sheets
  • Email, action, report and contact duty (Peter autumn term 2004, Stina spring term 2005, ...)
  • Göteborg website? TALK area on the dialog lab site? Can all get access?
  • Six-monthly report
  • Site report
  • WP1 report
