
Incentive compatibility in data security







  1. Incentive compatibility in data security Felix Ritchie, ONS (Richard Welpton, Secure Data Service)

  2. Overview • Research data centres • Traditional perspectives • A principal-agent problem? • Behaviour re-modelling • Evidence and impact

  3. Research data centres • Controlled facilities for access to sensitive data • Enjoying a resurgence as ‘virtual’ RDCs • Exploit benefits of an RDC • Avoid physical access problems • ‘People risk’ key to security

  4. The traditional approach

  5. Parameters of access • NSI: wants research; hates risk; sees security as essential • Researcher: wants research; sees security as a necessary evil • A classic principal-agent problem?

  6. NSI perspective • Be careful • Be grateful

  7. Researcher perspective • Give me data • Give me a break!

  8. Objectives • V_NSI = U(risk−, Research+) − C(control+) • V_i (researcher i) = U(research_i+, control−) • risk = R(control−, trust−) < R_min • Research = f(V_i+)
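The tension in these objectives can be made explicit by differentiating V_NSI with respect to control (my derivation from the slide's notation, not in the original; subscripts denote partial derivatives):

```latex
\frac{dV_{\mathrm{NSI}}}{d\,\mathrm{control}}
  = \underbrace{U_{\mathrm{risk}}\, R_{\mathrm{control}}}_{>0:\ \text{risk falls}}
  \;+\; \underbrace{U_{\mathrm{Research}}\, f'(V_i)\,
        \frac{\partial V_i}{\partial\,\mathrm{control}}}_{<0:\ \text{research falls}}
  \;-\; \underbrace{C'(\mathrm{control})}_{>0:\ \text{direct cost}}
```

The first term is positive (U falls in risk and R falls in control), the second negative (V_i falls in control, so research falls), so tighter controls buy risk reduction at the price of lost research plus a direct cost.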

  9. A principal-agent problem? • NSI: Trust = T(law_fixed), i.e. training is itself pinned down by the fixed legal framework (T = T(training(law_fixed), law_fixed)), so trust is effectively fixed; maximise Research s.t. maximum risk, with the constraint binding: Risk = Risk_min • Researcher: Control = Control_fixed; maximise research
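Written out as a constrained-optimisation problem (my transcription of the slide into standard notation):

```latex
\text{NSI:}\quad
  \max_{\mathrm{control}} \ \mathrm{Research}
  \quad\text{s.t.}\quad
  R(\mathrm{control}, \mathrm{trust}) \le \mathrm{Risk}_{\min},
  \qquad \mathrm{trust} = T(\mathrm{law}_{\mathrm{fixed}})
\\[4pt]
\text{Researcher } i:\quad
  \max \ \mathrm{research}_i
  \quad\text{s.t.}\quad
  \mathrm{control} = \mathrm{control}_{\mathrm{fixed}}
```

Each side optimises taking the other's variable as given, and neither can move trust; that missing instrument is what drives the inefficiency discussed on the next slides.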

  10. Dependencies [Diagram: dependencies among V_NSI, Research, Risk, V_i, research_i, control and trust, with the choice variables marked]

  11. Consequences: inefficiency? • NSI • Little incentive to develop trust • Limited gains from training • Access controls focus on deliberate misuse • Researcher • Access controls are a cost of research • No incentive to build trust

  12. More objectives, more choices [Diagram: as slide 10, with training and effort added as choice variables]
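The efficiency gain from widening the choice set can be illustrated with a toy numerical sketch (hypothetical functional forms and numbers, not from the presentation): risk falls in both control and trust, research falls in control, and the NSI must keep risk under a ceiling. When trust becomes a choice variable (e.g. via training), the same risk ceiling can be met with looser controls.

```python
import math

R_MIN = 0.2  # assumed maximum tolerable risk

def risk(control, trust):
    # Assumed: risk decreasing in both control and trust.
    return math.exp(-(control + trust))

def research(control):
    # Assumed: tighter controls deter research.
    return 1.0 / (1.0 + control)

def best_research(trust):
    # Minimal control satisfying risk <= R_MIN, and the research it permits.
    control = max(0.0, -math.log(R_MIN) - trust)
    return research(control)

low_trust = best_research(0.0)   # traditional model: trust is not a choice variable
high_trust = best_research(1.0)  # trust raised via training
```

With these assumed forms, investing in trust lifts permitted research from about 0.38 to about 0.62 at the same risk ceiling; the point is qualitative, not the numbers.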

  13. Intermission: what do we know?

  14. Conversation pieces • Researchers are malicious ☒ • Researchers are untrustworthy ☑ • Researchers are not security-conscious ☒ • NSIs don’t care about research ☒ • NSIs don’t understand research ☑ • NSIs are excessively risk-averse ☑

  15. Some evidence • Deliberate misuse: low credibility of legal penalties; probability of detection more important; driven by ease of use; researchers don’t see ‘harm’ • Accidental misuse: security seen as NSI’s responsibility; contact affects value

  16. Developing true incentive compatibility

  17. Incentive compatibility for RDCs • Align aims of NSI & researcher • Agree level of risk • Agree level of controls • Agree value of research • Design incentive mechanism for default • Minimal reward system • Significant punishments • Bad economics?

  18. Changing the message (1): behaviour of researchers • Aim: researchers see risk to the facility as risk to them • Message: we’re all in this together; no surprises, no incongruities; we all make mistakes • Outcome: shopping; fessing

  19. Changing the message (2): behaviour of NSI • Aim: positive engagement with researchers; realistic risk scenarios • Message: research is a repeated game; researchers will engage if they know how; contact with researchers is of value per se; we all make mistakes • Outcome: improved risk tolerance

  20. Changing the message (3): clearing research output • Aim: clearances reliably good and delivered speedily • Message: we’re human, with finite resources/patience; you live with crude measures, but you tell us when it’s important; we all make mistakes • Outcome: few repeat offenders; high volume, quick response, wide range; user input into rules

  21. Changing the message (4): VML-SDS transition • Aim: get VML users onto SDS with minimal fuss • Message: we’re human, with finite resources/patience; don’t ask us to transfer data unless it’s important • Outcome: most users just transfer syntax; (mostly) good arguments for data transfer

  22. Changing the message: summary • we all know what we all want • we all know each other’s concerns • we’ve all agreed the way forward • we are all open to suggestions • we’re all human

  23. IC in practice • Cost: VML at full operation c. £150k p.a.; Secure Data Service c. £300k; Denmark, Sweden, NL €1m-€5m p.a. • Failures: some refusals to accept objectives; VML bookings; limited knowledge/exploitation of research; limited development of risk tolerance

  24. Summary • ‘Them and us’ model of data security is inefficient • Punitive model of limited effectiveness • Lack of information causes divergent preferences • Possible to align preferences directly • It works!

  25. Felix Ritchie, Microdata Analysis & User Support, ONS

  26. Objectives • V_NSI = U(risk−, Research+) − C(control+) • V_i (researcher i) = U(risk−, research_i+, control−) • risk = R(control, trust) • control = C(compliance, trust) • trust = T(training, compliance)
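Compared with slide 8, risk now enters the researcher's utility directly and control, trust and compliance are jointly determined. Assuming control falls in trust and trust rises in training (signs consistent with the earlier slides; my reading, not stated on the slide), training now feeds through to looser controls:

```latex
\mathrm{control} = C(\mathrm{compliance}, \mathrm{trust}), \qquad
\mathrm{trust} = T(\mathrm{training}, \mathrm{compliance})
\;\Longrightarrow\;
\frac{\partial\,\mathrm{control}}{\partial\,\mathrm{training}}
  = C_{\mathrm{trust}}\, T_{\mathrm{training}} < 0
```

In the fixed-trust model of slide 9 this channel was shut, which is why slide 11 found limited gains from training.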
