
Benchmarking for e-Maturity: Why and how


Presentation Transcript


  1. Benchmarking for e-Maturity: Why and how Professor Paul Bacsich Matic Media Ltd Team leader, BELA team, working for the Higher Education Academy

  2. Benchmarking (Xerox) a process of self-evaluation and self-improvement through the systematic and collaborative comparison of practice [process] and performance [metrics, KPIs] with competitors [or comparators] in order to identify one's own strengths and weaknesses, and learn how to adapt and improve as conditions change.

  3. Benchmarking and e-learning • Includes best practice statements • Covers quality assurance and quality enhancement • And can be used for these (to some extent) • Not primarily about standards (which are too detailed) but can cover the standards processes

  4. UK HE benchmarking trials • Foreshadowed in HEFCE e-learning strategy • Run by HE Acad in tandem with JISC • 12-institution pilot, in 3 groups (2006) • 38 institutions in Phase 1 (2006-07) • 28 institutions in Phase 2 (ongoing)

  5. Pilot: Chester, Leicester, Manchester, Staffordshire, Hertfordshire, Bristol, UWIC, Strathclyde, Institute of Education, Warwick, Brookes, Coventry

  6. Phase 1: Birmingham, Bradford, Brighton, Brunel, Buckinghamshire Chilterns, Canterbury Christ Church, Central Lancashire, Cumbria Institute of the Arts, De Montfort, Derby, East London, Edge Hill, Exeter, Glasgow Caledonian, Gloucestershire, Greenwich, Hull, Keele, Kingston, Lincoln, London South Bank, Manchester Metropolitan, Middlesex, Newman College, Northampton, Nottingham Trent, Ravensbourne College, Reading, Robert Gordon, Royal Academy of Music, Sheffield Hallam, St George's, St Mark and St John, Sunderland, Teesside, Thames Valley, Trinity College, West Nottinghamshire College, West of England, Westminster, Wimbledon School of Art and Design, Wolverhampton

  7. Methodologies • Pick&Mix (mine) • eMM (Marshall) • ELTI (from JISC) • OBHE (from the Observatory itself) • MIT90s

  8. Commonalities • Orientation is to processes • Backed up by output measures where relevant • (Input measures less relevant) • A set of criteria is produced (15-50 is best) • Some underlying theory or fieldwork is used to justify inclusion/exclusion of criteria • Criteria are clustered into around 6 groups • Evidence for criteria is produced, often via an “Institutional Report” (cf QAA) • Scores (usually 1-5) may be generated
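
To make this common shape concrete, here is a minimal sketch in Python of how one such criterion might be recorded (all names and the example entry are illustrative, not part of any of the methodologies): a criterion belongs to one of roughly six groups, carries evidence drawn from an Institutional Report, and receives a 1-5 score.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One benchmarking criterion, in the common shape the methodologies share."""
    code: str            # short identifier, e.g. "L1" or "18"
    statement: str       # the best-practice statement being assessed
    group: str           # one of roughly six clusters
    evidence: list[str] = field(default_factory=list)  # pointers into the Institutional Report
    score: int | None = None  # usually 1-5, set once evidence has been reviewed

# Hypothetical example entry; the wording is illustrative only.
example = Criterion(
    code="18",
    statement="Systematic staff training for e-learning",
    group="Organisational",
)
example.evidence.append("Institutional Report, section 4.2")
example.score = 3
```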

  9. Public Criterion/indicator-based • Adapt or create a set of criteria publicly known and slowly changing • Preferably 12-50 criteria, 20-30 even better • Preferably in “open source” form • OER >> OEM • Preferably core and optional ones • However, criteria will evolve and new optional criteria will emerge • Examples: Pick&Mix, MIT90s, ELTI 2.0, eMM

  10. eMM: e-Learning Maturity Model • Originated in New Zealand; a sector trial has taken place • Originally 43 process criteria (recently reduced) • Scoring system with five dimensions (.1 to .5): • Delivery, Planning, Definition, Management, Optimisation • Groupings are: Learning/Pedagogy, Development, Coordination, Evaluation and Quality, Organisational
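
Because the five dimensions apply to every process criterion, an eMM assessment is essentially a process-by-dimension grid. A minimal sketch, assuming a simple in-memory representation in Python (the process codes follow the next slide; the numeric ratings are invented placeholders, not eMM's official rating scheme):

```python
# The five eMM dimensions named above, in order.
DIMENSIONS = ["Delivery", "Planning", "Definition", "Management", "Optimisation"]

# Each process code (e.g. L1, D2) gets one rating per dimension.
# The ratings below are hypothetical placeholders.
assessment: dict[str, dict[str, int]] = {
    "L1": {"Delivery": 3, "Planning": 2, "Definition": 2, "Management": 1, "Optimisation": 0},
    "D2": {"Delivery": 4, "Planning": 3, "Definition": 2, "Management": 2, "Optimisation": 1},
}

def dimension_profile(process: str) -> list[int]:
    """Return one process's ratings in dimension order."""
    return [assessment[process][d] for d in DIMENSIONS]

print(dimension_profile("D2"))  # -> [4, 3, 2, 2, 1]
```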

  11. Some eMM criteria • L1: Student work is subject to clearly communicated timescales and deadlines • D2: The reliability of the technology delivery system is as failsafe as possible • C8: Questions addressed to student service personnel are answered accurately & quickly • O2: A documented technology plan is in place and operational to ensure quality of delivery

  12. MIT90s framework: groupings Success is a function of: • External environment • Strategy • Individuals and their roles • staff and students • Structures • Technology • Management Processes

  13. OBHE • OBHE: Observatory on Borderless Higher Education • A line of development from ACU/CHEMS work on benchmarking in the 1990s • Also links to European work • Little was public until recently, but the method and outline IRD structure have now been released

  14. OBHE groupings • Strategy development • Management (of e-Learning) • Delivery (of e-Learning) • Resources (for e-L) and value for money • Students • Staff • Collaboration and partnership (external) • Communications, evaluation and review

  15. ELTI • Derived from JISC work in HE and FE in the early 2000s, led by Bristol U • Four parts: • Criteria, Roles, Skills survey, Policy & planning • Focus was on “embedding” but part 1 can be recast as benchmarking

  16. ELTI groupings and criteria • Learners and their experience of learning • How (well) do learners acquire and develop skills for e-learning? • Pedagogic culture and expertise • How far do practitioners have the skills to deliver e-learning? • Infrastructure • How well do physical spaces support the use of innovative technologies? • Organisational strategy • (To what extent) does the organisation have a realistic long-term business strategy for e-learning?

  17. Pick&Mix Overview and history

  18. Pick & Mix overview • Focussed purely on e-learning, but can be extended more widely e.g. for TQEF… • Picks up on “hot” agenda items • Draws on several sources and methodologies • Not linked to any particular style of e-learning (e.g. distance or on-campus or blended) • Oriented to institutions past “a few projects” • Suitable for desk research on comparators as well as “in-depth” on one’s own institution

  19. Pick & Mix history • Initial version developed in early 2005 in response to a request from Manchester Business School for a 12-competitor study • Since then refined by literature search, discussion, feedback, presentations (UK, Brussels, Berlin, Sydney, Melbourne) and workshops (ALT-C and Online Educa) – and by 10 institutions in 2006 and 2007

  20. Pick&Mix Criteria and metrics

  21. Pick&Mix: 20 core criteria • Composited some ur-criteria together • Removed any not specific to e-learning (but can put them back) • Was careful about any which are not provably critical success factors • Left out of the core some criteria where there was not (yet) UK consensus – e.g. IPR • Institutions will wish to add specific ones to monitor their objectives and KPIs. This is allowed; the system is extensible. • Many supplementary criteria are now agreed

  22. Pick&Mix Metrics • Use a 6-point scale (1-6) • The usual 1-5 Likert range plus 6 for “excellence” • 1 is nil activity or sector minimum • Backed up by continuous metrics where possible • Also contextualised by narrative • Some criteria are really “criteria bundles” • There are always issues of judging progress and “best practice”; say “better practice”: • e.g. VLE convergence

  23. Criterion 18 “Training” • No systematic training for e-learning • Some systematic training, e.g. in some projects and departments • Uni-wide training programme but little monitoring of attendance or encouragement to go • Uni-wide training programme, monitored and incentivised • All staff trained in “VLE” use, training appropriate to job type – and retrained when needed • Staff increasingly keep themselves up to date in a “just in time, just for me” fashion except in situations of discontinuous change
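
To show how the 6-point scale from slide 22 pairs with a criterion's level descriptors, here is a minimal sketch in Python (the helper name `describe` is hypothetical; the descriptors are taken from the slide above, with 1 the lowest level and 6 the “excellence” level):

```python
# Pick&Mix Criterion 18 "Training": score 1-6 mapped to the level descriptors
# listed on the slide above.
CRITERION_18_LEVELS = {
    1: "No systematic training for e-learning",
    2: "Some systematic training, e.g. in some projects and departments",
    3: "Uni-wide training programme but little monitoring of attendance or encouragement to go",
    4: "Uni-wide training programme, monitored and incentivised",
    5: "All staff trained in VLE use, training appropriate to job type, and retrained when needed",
    6: "Staff increasingly keep themselves up to date in a 'just in time, just for me' fashion "
       "except in situations of discontinuous change",
}

def describe(score: int) -> str:
    """Return the level descriptor for a 1-6 score on Criterion 18."""
    if score not in CRITERION_18_LEVELS:
        raise ValueError("Pick&Mix scores run from 1 to 6")
    return CRITERION_18_LEVELS[score]

print(describe(4))  # -> "Uni-wide training programme, monitored and incentivised"
```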

  24. Current issues… From Phase 1 and Pilot benchmarking

  25. Benchmarking vocab • Criteria: what you score and describe • Sub-criteria: aspects of these • Levels (dimensions): specific aspects in a hierarchy (e.g. eMM) • Slices: chunks of a Uni that are separately benchmarked, e.g. Business School, postgrad, DL, FDs – NB must benchmark overall • Comparators: significant others that you also benchmark • Transversals: issues or KPIs that cut across many criteria (but should such exist if they are “hot”?)

  26. Broadenings (cf TQEF and IT) TQEF examples: • 06: e-Learning Strategy becomes • 06bis: Learning and Teaching Strategy • 18: Staff recognition for e-Learning becomes • 18bis: Staff recognition for Learning and Teaching Information Technology examples: • 53: Underpinning IT/comms reliability becomes • 53ter: For all IT systems not only e-learning systems

  27. Phase 1 questions at start • Time delay between Pilot and Phase 1: • entropy issues, staff and mission changes • “Due Diligence” needed • Most institutions do not read (any) reports… – need teaching • Not quite the same lure of money • Running many BM systems is expensive since all need updating and documentation, some a lot • Can a new system enter or old systems return? • Is there really a scholarship of benchmarking??

  28. Phase 1 tentative conclusions • Organisation, commitment and staffing are important (see blog for guidance) • It still takes time • No decision yet on methodologies, but: • No new ones under consideration • Framework proposed (see blog), with suggested focus on student experience (?) • Some work being done on concordances

  29. Phase 1 main outcomes • Timescales: • September-July better – more aligned • Pathfinder bid deadline “not ideal” • Team work: all methodologies gained from their cohort meetings – MIT90s had four • Not always necessary to gather lots of new data • Small cohort size may be beneficial – so how to get subgroups of cohorts

  30. Phase 1 - tools • All BELA methodologies “worked” • Scope issues on e-learning an “irritation” not a fundamental problem • MIT90s needed substantial start-up work and was more HEI-driven, yet consultant model worked for that tool too – could be lessons here for eMM(-lite/Scots)

  31. Phase 1 – outcomes for the sector • Scholarship: not much sign – yet… • But perhaps after the project rather than within it • Many BELA HEIs aim to produce some kind of shortish public report; several will produce longer ones for circulation within the cohort • But timescale issues – May/June • HEFCE Measures of Success not monitored at HEI level – but some work done on concordance • Less focus on “hard” issues; more on “soft” (including management issues and student-facing), but also on “tools”, especially e-assessment, not now only e-portfolios • Costs and workload areas still weakish • Strategy and planning thin out as one goes down the organisation

  32. Onwards… Phase 2 just started

  33. Phase 2 • Public criterion-based • Pick&Mix: 10 • eMM: 7 • OBHE: 11 • No other old methodologies gained traction • No new methodologies were permitted • Cohort size remains an issue

  34. Onwards… Beyond Phase 2

  35. Interesting for the future • Re-benchmarking (some will be done) • Cross-benchmarking • Revisit the Framework, but with more focus on categories, not criteria • International benchmarking – build on seeds from WUN, DTUs, and Australian and Dutch work, but also re-review EU methodologies • HE in FE: HE, FE or both?

  36. Thank you for listening. Any questions? Professor Paul Bacsich Email bacsich@matic-media.co.uk URLs: www.matic-media.co.uk/benchmarking.htm and www.heacademy.ac.uk/benchmarking.htm
