Trust in engineered systems requires ongoing assessment and adaptation to ensure reliability over time. Unlike short-term trust experiences, long-term trust hinges on consistency, dependability, and predictability. While human-human trust dynamics inform this process, the question remains: How closely should human-machine interactions mimic these relationships? Considerations include machine responsiveness, empathy, and alignment with user goals, alongside the effects of superficial social cues. Continuous monitoring of trust levels is vital, especially under varying workload conditions, to foster enduring relationships with technology.
Trust is dynamic • What works short-term need not translate to long-term • The system may be learning and changing over time • Your trust in the system might, and perhaps should, change accordingly • Trust should be monitored over time, and we should help the user do so
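The "monitor trust over time" point could be operationalized in many ways; one minimal sketch is an exponentially weighted moving average over reliance decisions (did the user accept or override the system?). The class name, the alpha parameter, and the binary reliance signal are all illustrative assumptions, not from the source.

```python
# Hypothetical sketch: tracking a user's trust in an automated system
# over time with an exponentially weighted moving average (EWMA) of
# reliance signals. The 0/1 "relied" signal is an assumed proxy for
# trust; a real study would use richer behavioral and self-report data.

class TrustMonitor:
    def __init__(self, alpha: float = 0.2, initial: float = 0.5):
        self.alpha = alpha    # weight given to the newest observation
        self.level = initial  # estimated trust level in [0, 1]

    def update(self, relied: bool) -> float:
        """Record whether the user relied on (vs. overrode) the system."""
        signal = 1.0 if relied else 0.0
        self.level = (1 - self.alpha) * self.level + self.alpha * signal
        return self.level

monitor = TrustMonitor()
for relied in [True, True, False, True]:
    level = monitor.update(relied)
```

Because the average decays old observations, the estimate adapts as the system (and the user's experience of it) changes, which is the point of the bullet above: long-term trust calibration is a moving target, not a one-time measurement.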
How much should human-machine look like human-human trust for long-term trust? • Human-human trust is largely established by consistency, dependability, and predictability (e.g., Apple), whereas human-machine trust is currently largely situational. Should that change?
How much should human-machine look like human-human trust for long-term trust? • Must machines exhibit (substantive) responsiveness to user’s goals and circumstances? • Exhibit empathy • Share goals • Model users
How much should human-machine look like human-human trust for long-term trust? • What do superficial affect and social cues achieve (e.g., smiley-face cars)? • Do their effects persist over time? • Are they sufficient? (At minimum, designers need to be aware of them)
Other themes • The role of transparency (Is ignorance bliss?) • Remote operation blunts emotional engagement and may elicit a different set of requirements for trust • Implications of socially constructed trust • Trust under high workload