
Fall Meeting October 2013

International Technology Alliance in Network and Information Sciences

Human-Machine Conversations to Support Coalition Missions with QoI Trade-Offs

Alun Preece (Cardiff), Dave Braines (IBM UK), Diego Pizzocaro (Cardiff), Christos Parizas (Cardiff), Tom La Porta (PSU)



Presentation Transcript


  1. International Technology Alliance in Network and Information Sciences • Human-Machine Conversations to Support Coalition Missions with QoI Trade-Offs • Alun Preece (Cardiff) • Dave Braines (IBM UK) • Diego Pizzocaro (Cardiff) • Christos Parizas (Cardiff) • Tom La Porta (PSU) • Fall Meeting, October 2013

  2. Human-machine information interaction in coalition mission support
  Coalition mission support:
  • High-level tasking of network resources in terms of mission objectives, e.g. "Locate & track high value targets in Border Zone"
  • Enabling exploitation of soft (human) sources in addition to physical sensing assets
  • Information quality in coalition mission support networks is highly variable, due both to the nature of the sources and to the network capacity
  Natural language-based approach:
  • "Conversational" model for human-machine and machine-machine interactions
  • Interactions flow from natural language (NL) to Controlled English (CE) and back
  • Users request information from the network (ask), while also being sources of information (tell)
  • Scenario-based experimentation

  3. Human-machine conversations
  Contributions:
  • A natural language (NL) based conversational approach that includes support for: requests for information (with specified QoI requirements); provision of information (at specified QoI levels); and human-machine reasoning and information fusion (with aggregated QoI levels)
  • We show how the model can be used to support realistic information exchanges in a coalition mission support context, offering flexibility in dealing with quality-of-information (QoI) trade-offs.
  Human-machine example: a soldier on patrol reports a suspicious vehicle by means of a text message from their mobile device.
  Machine-human example: a software agent sends a brief "gist" report to a human analyst indicating the vehicle is associated with a known high-value target (HVT).
  Machine-machine example: a broker agent queries or updates a database agent managing HVT sightings.

  4. A new approach using Controlled Natural Language
  TA6 context:
  • P4.3: Coalition Context-Aware Assistance for Decision Makers
  • Link to P4.1: how the coalition communications environment affects cognition
  • Link to P4.2: advanced fact extraction & reasoning
  • Link to P5.2: high-level CE-based policies for asset management
  Types of interaction:
  • NL to CE query or CE facts (confirm)
  • CE query to CE facts (ask-tell)
  • exchange of CE facts (tell)
  • gist to full CE (expand)
  • CE to CE rationale (why)

  5. Controlled English Conversation Cards (CE-Cards)
  We conceptualise a conversation as a series of cards exchanged between agents, including humans and software services. A conversation unfolds through a series of primitive communicative acts (e.g. queries, assertions, requests).
  3 kinds of card content:
  • natural language
  • Controlled English
  • a form of template-based CE that provides the "gist" of complex sets of CE sentences, for brevity and easier human readability
  CE card model: cards are modelled in CE with types as shown and attributes that include:
  • is from
  • is to
  • is in reply to
  • has content
  • has timestamp
  • has resource (for linked non-text content)
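The card model above can be sketched as a simple data structure. The attribute names follow the slide; the Python class, field names, and the set of card types are our own illustrative choices, not the project's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Card types drawn from the interaction types on slide 4 -- an
# illustrative set, not necessarily the model's complete type system.
CARD_TYPES = {"tell", "ask", "confirm", "expand", "why", "gist"}

@dataclass
class CECard:
    """One conversation card, mirroring the CE attributes on the slide:
    'is from', 'is to', 'is in reply to', 'has content',
    'has timestamp', 'has resource'."""
    card_type: str                         # e.g. "tell", "ask", "gist"
    is_from: str                           # sending agent
    is_to: str                             # receiving agent
    has_content: str                       # NL, CE, or gist text
    is_in_reply_to: Optional[str] = None   # name of the card replied to
    has_resource: Optional[str] = None     # link to non-text content
    has_timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        if self.card_type not in CARD_TYPES:
            raise ValueError(f"unknown card type: {self.card_type}")

# Example: the patrol's initial report as a confirm-style card.
card = CECard("confirm", "patrol", "broker",
              "there is a vehicle that has 'black saloon' as description")
```

Because cards carry a reply-to reference, a whole conversation can be reconstructed as a thread by following `is_in_reply_to` links.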

  6. CE-Cards Conversation Policies
  Conversation sequence rules can be defined in CE as part of the model.
  Examples:
  • gist: "the red SUV is a threat"
  • expand: "red SUV"
  • tell: "there is a vehicle named v12345 that has 'red SUV' as description and has XYZ456 as registration and…"
  • tell: "there is a vehicle named v12345 that is a threat and is located at central junction and…"
  • why: "v12345 is a threat"
  • tell: "v12345 is owned by HVT John Smith and…"
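Conversation sequence rules of this kind can be checked mechanically. The reply table below is our reading of the interaction types on slide 4 and the gist/expand/tell/why examples above; it is a sketch, not the project's definitive policy set.

```python
# Which card types may legally reply to which -- inferred from the
# confirm / ask-tell / tell / expand / why interactions; illustrative only.
ALLOWED_REPLIES = {
    "gist":    {"expand"},        # a gist may be expanded on request
    "ask":     {"tell"},          # ask-tell
    "confirm": {"tell"},          # NL confirmed as CE facts
    "expand":  {"tell"},          # expansion answered with full CE
    "why":     {"tell"},          # rationale delivered as CE facts
    "tell":    {"why", "tell"},   # a tell may be challenged or extended
}

def reply_is_valid(previous_type: str, reply_type: str) -> bool:
    """Check one conversation-sequence rule: may a card of
    `reply_type` answer a card of `previous_type`?"""
    return reply_type in ALLOWED_REPLIES.get(previous_type, set())

# The gist -> expand -> tell -> why -> tell chain above is a valid sequence:
sequence = ["gist", "expand", "tell", "why", "tell"]
ok = all(reply_is_valid(a, b) for a, b in zip(sequence, sequence[1:]))
```

In the actual approach these rules live in CE as part of the model itself, so agents can reason over them rather than hard-coding a table as done here.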

  7. Vignette I
  Interacting agents:
  • human patrol (location A)
  • human intelligence analyst
  • broker software agent that mediates between humans and other agents
  • tasker software agent that handles access to database and sensor resources
  Step 1: The patrol on North Road (A) reports a suspicious black saloon car, vehicle registration ABC123, moving south. [confirm interaction between patrol & broker; QoI on certainty of observation]

  8. Vignette II (interacting agents as in Vignette I)
  Step 2: The broker sends the patrol's report to the tasker agent, and a DB query reveals the vehicle is linked to an HVT. [ask-tell interaction between broker & tasker; QoI on certainty that the vehicle is linked to the HVT]

  9. Vignette III (interacting agents as in Vignette I)
  Step 3: The broker sends a request to the tasker to track the vehicle; the tasker assigns a UAV to perform this task. [tell interaction between broker and tasker; QoI requirements on accuracy of tracking]

  10. Vignette IV (interacting agents as in Vignette I)
  Step 4: The UAV reports that the vehicle stops near Central Junction (B); the broker sends an alert to the analyst. [tell interactions between tasker, broker, & analyst; aggregate QoI from steps 1-3]
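Step 4 attaches an aggregate QoI drawn from steps 1-3. One simple illustrative scheme (our assumption here, not the method from the slides) treats each step's certainty as an independent probability and multiplies them, so the alert is only as certain as the weakest chain of evidence behind it.

```python
from functools import reduce

def aggregate_qoi(certainties):
    """Naively aggregate per-step certainties (each in 0..1) by
    multiplication. This is one simple illustrative scheme; richer
    aggregation (e.g. subjective logic) is discussed on slide 15."""
    return reduce(lambda a, b: a * b, certainties, 1.0)

# Hypothetical certainties for steps 1-3: patrol observation,
# DB linkage to the HVT, and UAV tracking accuracy.
step_certainties = [0.8, 0.9, 0.95]
alert_certainty = aggregate_qoi(step_certainties)  # ~0.684
```

A product rule like this is pessimistic by design: adding steps can only lower the aggregate, which matches the intuition that a long inference chain deserves less confidence than any single link.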

  11. Summary of interactions (Steps 1-4)
  there is a tell card named '#2b' that
    is from the agent tasker and
    is to the agent broker and
    is in reply to the card '#2a' and
    has content the CE content 'there is an HVT sighting named h00453 that has the vehicle v01253 as target vehicle and has the person p670467 as hvt candidate'.
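A card held as structured data can be rendered back into a CE sentence of the form shown above. The renderer below is a sketch that reproduces this one sentence pattern; the dict keys mirror the CE attribute names, and the rendering logic is ours, not the project's CE toolchain.

```python
def card_to_ce(card):
    """Render a tell card as a CE sentence of the form shown above
    (a sketch covering this one pattern only)."""
    return (f"there is a {card['type']} card named '{card['name']}' that "
            f"is from the agent {card['is from']} and "
            f"is to the agent {card['is to']} and "
            f"is in reply to the card '{card['is in reply to']}' and "
            f"has content the CE content '{card['has content']}'.")

# The '#2b' card: tasker tells broker about the HVT sighting.
card_2b = {
    "type": "tell", "name": "#2b",
    "is from": "tasker", "is to": "broker",
    "is in reply to": "#2a",
    "has content": "there is an HVT sighting named h00453 that has the "
                   "vehicle v01253 as target vehicle and has the person "
                   "p670467 as hvt candidate",
}
sentence = card_to_ce(card_2b)
```

Because CE is a controlled language, the same mapping can run in reverse: a parser can recover the structured card from the sentence without open-ended NL understanding.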

  12. Detailed interactions: Step 1

  13. Detailed interactions: Step 2

  14. Detailed interactions: Step 3

  15. Handling Quality of Information (QoI) Issues
  Example 1 – QoI from the patrol (step 1): If the patrol is a trained team, it is likely that the word "suspicious" has a codified meaning and can be directly mapped to some kind of certainty range (symbolic or numeric). For a message from an untrained team or from social media, the subjective meaning of "suspicious" would need to be estimated and the certainty recorded, e.g. based on past performance. These cases can be handled by the receiver of the message – the broker in our vignette – using rules (conversational pragmatics).
  Example 2 – quality of information retrieved from the database (step 2): We may have uncertainty that the vehicle is associated with HVT John Smith (the information may be out of date or inaccurate).
  • Subjective logic may play a role in combining different uncertainty representations (symbolic, numeric).
  • CE rationale plus provenance information offers a means of communicating the compounded result.
  • Gist offers a way to make it easily digestible while also being expandable for users on request (or during subsequent audit).
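To make the subjective-logic idea concrete, the sketch below fuses two uncertain pieces of evidence using the standard cumulative fusion rule over opinions (belief, disbelief, uncertainty). The opinion values for "suspicious" and for the database association are purely hypothetical numbers chosen for illustration; how the project actually maps symbolic terms to opinions is not specified on the slide.

```python
def fuse(op_a, op_b):
    """Cumulative fusion of two subjective-logic opinions
    (belief, disbelief, uncertainty), each summing to 1.
    The degenerate case u_a = u_b = 0 is not handled in this sketch."""
    b_a, d_a, u_a = op_a
    b_b, d_b, u_b = op_b
    k = u_a + u_b - u_a * u_b
    return ((b_a * u_b + b_b * u_a) / k,
            (d_a * u_b + d_b * u_a) / k,
            (u_a * u_b) / k)

# Hypothetical opinions: the trained patrol's codified "suspicious",
# and the (possibly stale) DB association of the vehicle with the HVT.
patrol = (0.7, 0.1, 0.2)   # fairly confident observation
db     = (0.6, 0.1, 0.3)   # weaker, may be out of date
b, d, u = fuse(patrol, db)
```

Note that fusion reduces the residual uncertainty below either source's alone, which is the behaviour wanted when independent evidence agrees; the compounded opinion (plus its provenance) is what the CE rationale would then communicate.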

  16. Architecture for experimentation
  • A CE-Card server supports the exchange & storage of cards.
  • Software agents representing human and machine parties can be implemented in any programming language.
  • We are experimenting with a speech-based interface, in conjunction with a Google Glass-style display for gist content.
  • We envisage full CE being directed to a user's handheld device.
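The server's role in this architecture can be mimicked with a minimal in-memory stand-in: agents post cards and poll for cards addressed to them. The `post`/`poll` interface here is hypothetical; the actual CE-Card server's API is not described in the slides.

```python
from collections import defaultdict

class CardServer:
    """Minimal in-memory stand-in for the CE-Card server: stores every
    card and lets each agent poll for new cards addressed to it.
    This interface is an illustrative assumption, not the real API."""
    def __init__(self):
        self._store = []                  # all cards, in arrival order
        self._cursor = defaultdict(int)   # per-agent read position

    def post(self, card):
        self._store.append(card)

    def poll(self, agent):
        """Return cards addressed to `agent` since its last poll."""
        new = [c for c in self._store[self._cursor[agent]:]
               if c["is to"] == agent]
        self._cursor[agent] = len(self._store)
        return new

# The broker posts the step-4 alert; the analyst polls and receives it.
server = CardServer()
server.post({"type": "tell", "is from": "broker", "is to": "analyst",
             "has content": "the vehicle v01253 stops near Central Junction"})
received = server.poll("analyst")
```

Keeping the store language-neutral (cards as plain key-value records) is what lets agents in any programming language participate, as the slide notes.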

  17. Conclusions & future work
  • The conversational model appears robust enough to handle realistic scenarios.
  • The model can be extended with pragmatics rules to handle a range of QoI cases.
  Next:
  • Incorporate (richer) QoI cases / pragmatics.
  • Extend/enrich the scenario.
  • Extend and link the approach across P4 and to P5.
  • Develop and run experiments with human subjects.
