
Presentation Transcript


  1. VV40: Committee on Verification & Validation for Modeling & Simulation of Medical Devices, Technical Symposium, Subgroup: Orthopaedics. A Case Study: Examination of RAM/CAM Application for Evaluation of a Coupled Musculoskeletal-FEA Model. Anthony Petrella, PhD, Colorado School of Mines, Golden, CO; AnyBody Technology Inc, Cambridge, MA; Ruxi Marinescu, PhD; Brian McKinnon, Smith & Nephew, Inc, Memphis, TN; Jeff Bischoff, PhD, Zimmer, Inc, Warsaw, IN. SwRI, San Antonio, TX, January 22, 2014

  2. Aims of this Case Study • Explore application of RAM/CAM to a realistic modeling scenario used in orthopaedic implant development • Attempt to address a more complex modeling workflow comprising multiple scales and simulation methods • Consider a common modeling context in device new product development – comparison to a predicate device reference • Specifically, we sought to examine the question: How well do the RAM/CAM evaluation criteria work, in their current form, for a coupled musculoskeletal-FE analysis used for device evaluation?

  3. Literature Used • Kim et al., “Evaluation of Predicted Knee-Joint Muscle Forces during Gait Using an Instrumented Knee Implant,” JOR, pp. 1326-1331, Oct 2009. • Lin et al., “Simultaneous prediction of muscle and contact forces in the knee during gait,” J Biomech, 43, pp. 945-952, 2010. • Pegg et al., “Evaluation of Factors Affecting Tibial Bone Strain after Unicompartmental Knee Replacement,” JOR, pp. 821-828, May 2013. • Disclaimers: The authors did not organize the details of their articles with the intention of being “scrutinized” in the context of the CAM; the evaluators have limited experience with RAM/CAM

  4. Summary of Modeling Workflow [workflow diagram with citations to Lin et al., Kim et al. (2009), and Pegg et al. (2013)]

  5. COU • Decision/Question… Have we satisfied design verification requirements for this UKR design? • Some patient needs: • Avoid subsidence of the device • Avoid chronic pain • Related design inputs: • (a) Good coverage, modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s) • (b) Modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s) • The model serves as the sole source of information to test whether the design inputs regarding bone strain have been met

  6. V&V Workflow • COU not explicitly defined in Draft v1.1 • One definition of COU is: • Decision/question to be addressed • Influence of the model on the decision • Risk to the patient • Any of the three elements can change independently and affect the COU • How can risk be separate from the COU? [RAM diagram]

  7. RAM – VV40 Guide Draft v1.1 • Influence of the model on the decision: • Moderate: M&S is considered to address only a part of the decision... There are ample data from similar sources... • Major: M&S is not the sole source of information... Data are available from similar sources to support the decision but no data are available from the actual environment... • Controlling: no data are available from other sources for essential aspects of the system and the M&S plays a key role in the decision. • Consequence (risk to the patient): A. No adverse health consequences B. Limited (transient, minor impairment or complaints) C. Temporary or reversible (without medical intervention) D. Necessitates medical or surgical intervention E. Results in permanent impairment of body function or permanent damage to a body structure F. Life-threatening (death could occur) G. Hazard cannot be assessed. (An illustrative encoding of such a matrix is sketched below.)
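To make the structure of the matrix concrete, here is a purely illustrative Python encoding. The two axes follow the slide (model influence; consequence categories A-G), but the risk levels placed in the cells are placeholders, not the VV40 draft's actual mapping.

```python
# Purely illustrative encoding of a risk assessment matrix (RAM):
# model influence (rows) x consequence category (columns) -> risk level.
# The risk levels in the cells are placeholders, NOT the VV40 draft's mapping.
CONSEQUENCE = list("ABCDEFG")  # A = no adverse health consequences ... G = hazard cannot be assessed

RISK_TABLE = {
    "Moderate":    dict(zip(CONSEQUENCE, ["low", "low", "low", "medium", "medium", "high", "high"])),
    "Major":       dict(zip(CONSEQUENCE, ["low", "low", "medium", "medium", "high", "high", "high"])),
    "Controlling": dict(zip(CONSEQUENCE, ["low", "medium", "medium", "high", "high", "high", "high"])),
}

def ram_risk(influence: str, consequence: str) -> str:
    """Look up the (placeholder) risk level for a model-influence / consequence pair."""
    return RISK_TABLE[influence][consequence]

# Example: a model with "Major" influence and consequence category D
print(ram_risk("Major", "D"))  # -> "medium" with this placeholder table
```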

  8. COU Summary • Decision/Question… Have we satisfied design verification requirements (inputs a, b) for this UKR design? • Related design inputs: • (a) Good coverage, modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s) • (b) Modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s)

  9. Summary of Modeling Workflow [workflow diagram, steps 1 and 2, with citations to Lin et al., Kim et al. (2009), and Pegg et al. (2013)]

  10. CAM A. Software Verification • Verification of the Software Code & Solution: • 0. Insufficient • 1. Minimal • 2. Some testing conducted • 3. Some peer review conducted • 4. All algorithms tested, independent peer review conducted • Matlab model; geometry registration in Geomagic. Based on the information provided, we do not know whether the authors reviewed the verification activities to determine if they were relevant to this application; the score is based on prior analysis. • CAM A = 1 • Note: element types and adequate mesh size are not applicable here.

  11. CAM B. Validation: computational model • B1. System configuration: • 0. Insufficient • 1. Minimal – abstraction of geometry • 2. Simplified – single patient-specific case, captures major features • 3. Minor/major features captured, ranges of possible geometry, multiple cases • 4. All features captured, multiple cases, statistically relevant • Skeletal model – • Implant geometric model – patient-specific post-surgery CT data, CAD models of the patient’s implant components (Patient 1) • Bone geometric model – MRI-derived bone models from another patient (Patient 2) • Muscle model – 11 muscles (values from literature), MRI-derived models from Patient 2 with muscle and patellar ligament origin/insertion locations • Articular contact model – TF and PF, elastic model • CAM B1 = 3?

  12. CAM B. Validation: computational model • B2. Governing equations: • 0. Insufficient • 1. Substantially simplified • 2. Model forms are based and tuned on data from related systems • 3. Representation of all important processes, tuning needed • 4. Key physics captured, minimal need for tuning • Patient-specific inverse dynamics model. The equations of motion were derived using the Autolev symbolic manipulation software. The complete knee model was implemented in Matlab (OpenSim for Pegg et al. 2012) • CAM B2 = 3?

  13. CAM B. Validation: computational model • B3. System properties (biological, physical properties) • 0. Insufficient • 1. Simplified properties, sensitivities not addressed • 2. Nominal properties, uncertainties • 3. Distribution of properties, uncertainties identified • 4. Key properties captured, sensitivity analysis • Skeletal model – • Implant geometric model – linear elastic isotropic materials (Pegg et al. 2012) • Bone geometric model – calculated from HU in CT scans? (Pegg et al. 2012) • Muscle model – muscles (strength values from literature), patellar ligament (data from literature); mapping of the muscle attachment sites (Matlab, Pegg et al. 2012) • Articular contact model – TF and PF, elastic model • CAM B3 = 3?

  14. CAM B. Validation: computational model • B4. Boundary conditions (e.g., applied loading) • 0. Insufficient • 1. Significant simplification • 2. Some simplification of BCs • 3. Representative BCs, uncertainties identified • 4. No simplifications, appropriate distribution of variation, comprehensive sensitivity analysis • The femur was fixed to ground, while the tibia and patella were allowed to move relative to it; two-level optimization approach (Matlab – minimization of the sum of the 3 compressive contact forces; a simplified sketch of this type of optimization follows below). • CAM B4 = 3?
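As a rough illustration of the kind of optimization referred to above (not the authors' two-level Matlab implementation), the sketch below solves a single-level static muscle-force problem with scipy. The moment-arm matrix, joint moments, force bounds, and the linear surrogate for contact force are all hypothetical placeholders.

```python
# Simplified single-level static optimization sketch (illustrative only; the actual
# study used a two-level approach in Matlab). Minimize a linear surrogate for the
# compressive knee contact force subject to joint-moment equilibrium, with
# non-negative, bounded muscle forces. All numerical values are placeholders.
import numpy as np
from scipy.optimize import linprog

n_muscles = 11                                     # the MS model used 11 muscles
rng = np.random.default_rng(0)
R = rng.uniform(-0.05, 0.05, size=(3, n_muscles))  # moment arms (m), placeholder
tau = np.array([40.0, -15.0, 5.0])                 # required joint moments (N*m), placeholder
c = rng.uniform(0.5, 1.5, size=n_muscles)          # contact-force contribution per unit muscle force, placeholder
f_max = np.full(n_muscles, 1500.0)                 # upper bound on each muscle force (N), placeholder

# Minimize c @ f  subject to  R @ f = tau  and  0 <= f <= f_max
res = linprog(c, A_eq=R, b_eq=tau, bounds=[(0.0, fm) for fm in f_max])
if res.success:
    print("muscle forces (N):", np.round(res.x, 1))
    print("surrogate contact force (N):", round(float(c @ res.x), 1))
else:
    print("no feasible solution for these placeholder values")
```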

  15. CAM C. Validation: Evidence-based comparator • C1. System configuration: • 0. Insufficient • 1. Locations for data collection are roughly measured; geometry of parts is assumed • 2. Locations for data collection are prescribed and measured; geometry of parts is coarsely measured (?); calibrated system or signal/noise ratio > 1 • 3. Locations for data collection are prescribed and error collected; geometry of parts is measured to machine tolerance; signal/noise ratio is high • 4. All dimensions known to greater than machine precision; high precision • Pegg et al. 2012 • Force-measuring tibial prosthesis • Overground walking trials (normal and medial-lateral trunk sway) • Experimentally measured contact forces (medial and lateral sides of the tibial tray – load magnitude, direction, position and contact area recorded; custom Python script) • CAM C1 = 3

  16. CAM C. Validation: Evidence-based comparator • C2. System Properties: • 0. Insufficient • 1. Material properties are average, homogeneous, non-specific; environment conditions unknown • 2. Material properties are average, homogeneous, specific to the system; environment conditions known • 3. Key material properties are measured and heterogeneity captured • 4. All material properties are measured, environmental effects accounted for • Pegg et al. 2012 • Adult male subject • Force-measuring tibial prosthesis • Gait analysis • Experimentally measured contact forces • CAM C2 = 4

  17. CAM C. Validation: Evidence-based comparator • C3. Boundary Conditions: • 0. Insufficient • 1. System states are not specifically measured • 2. System states are specifically measured or perturbations are measured • 3. System states are specifically measured, affected degrees of freedom are known and perturbations are measured • 4. System states are specifically measured, degrees of freedom are known and perturbations are measured; variability known • Pegg et al. 2012 • Adult male subject • Force-measuring tibial prosthesis • Gait analysis, overground walking trials (normal and medial-lateral trunk sway) • Experimentally measured contact forces • CAM C3 = 4?

  18. CAM C. Validation: Evidence-based comparator • C4. Sample Size: • 0. Insufficient • 1. Single case or few cases • 2. Several cases or statistically relevant sample size • 3. Several cases and statistically relevant sample size • 4. Comprehensive parameter variability and statistically relevant sample size for all parameters • Pegg et al. 2012 • Single adult male subject • CAM C4 = 1

  19. CAM D. Determine model credibility • Discrepancy between the model and the comparator • “Model was implemented with the equivalent comparator conditions” (3 or 4? Variability?) • Comparison: qualitative or quantitative • “Quantitative comparison of single achievable case” (2) • Applicability of V&V activities to Context of Use • “Embodies key CoU features and captures key system properties” (3) • Single adult male subject (Patient 1) • Musculoskeletal model • Skeletal model – • Implant model – patient-specific post-surgery CT (Patient 1) • Bone geometric model – MRI-derived bone models (Patient 2) • Muscle model – 11 muscles, MRI-derived models (Patient 2) • Articular contact model – TF and PF, elastic model • CAM D = 2.67 (8/12, i.e., 66.7%); the arithmetic is sketched below
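A minimal sketch of how the 8/12 composite appears to be computed, assuming the discrepancy factor is counted as 3 (the slide lists "3 or 4?") and each factor has a maximum score of 4:

```python
# Composite of the three model-vs-comparator credibility factors from this slide.
scores = {"discrepancy": 3, "comparison": 2, "applicability": 3}  # values read from the slide
max_per_factor = 4

total = sum(scores.values())                              # 8
mean_score = total / len(scores)                          # 2.67
percent = 100.0 * total / (max_per_factor * len(scores))  # 66.7

print(f"CAM D: {total}/{max_per_factor * len(scores)} = {mean_score:.2f} ({percent:.1f}%)")
```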

  20. Credibility assessment matrix: MS Model [chart of CAM factor scores; values shown: 1, 1, 2, 2, 3, 3, 3, 4, 4]

  21. CAM. Verification • A. Code • Mimics, CT segmentation • MATLAB, ICP & muscle force sites • SolidWorks, bone resection • Mimics + custom code, material calculations & mapping • Abaqus, FE solution • Python scripting for Abaqus • Application region for native contact load • Analytical field for implant load to bone • von Mises strain values (an illustrative calculation is sketched below) • Probabilistic variation of loading • PASW Statistics, statistical analysis • Score = 1
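The slide does not give the formula used by the custom scripts; as an assumption, the sketch below uses the common equivalent (von Mises) strain definition, sqrt(2/3 * e_dev : e_dev), applied to a placeholder strain tensor.

```python
# Equivalent (von Mises) strain from a small-strain tensor, assuming the common
# definition eps_vm = sqrt(2/3 * e_dev : e_dev); illustrative only.
import numpy as np

def von_mises_strain(eps: np.ndarray) -> float:
    """eps: symmetric 3x3 small-strain tensor."""
    e_dev = eps - np.trace(eps) / 3.0 * np.eye(3)           # deviatoric part
    return float(np.sqrt(2.0 / 3.0 * np.tensordot(e_dev, e_dev)))

# Example with a placeholder strain state (microstrain-scale values)
eps = np.array([[500e-6, 100e-6,    0.0],
                [100e-6, -200e-6,   0.0],
                [0.0,      0.0,  -150e-6]])
print(f"von Mises strain: {von_mises_strain(eps):.1e}")
```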

  22. CAM. Verification • B. Solution • FE mesh convergence study performed (an illustrative convergence check is sketched below) • FE simplifications had no significant effect on results • Direct load vs. using the actual implant • Implant interface, tie vs. rough/friction • Full-length tibia vs. truncated • Probabilistic variation in load magnitudes based on errors reported in the MS model • Material properties from CT mapping reported consistent with a previous publication • Score = 2
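For illustration only, a generic mesh-convergence check of the kind implied above; the element counts, strain values, and tolerance are placeholders, not the authors' actual results.

```python
# Illustrative mesh-convergence check (not the authors' actual study):
# accept the mesh once the outcome metric changes by less than a tolerance
# between successive refinements. All values below are placeholders.
mesh_results = [            # (approx. element count, peak von Mises strain)
    (20_000, 4.10e-3),
    (45_000, 3.70e-3),
    (90_000, 3.58e-3),
    (180_000, 3.55e-3),
]
tolerance = 0.05  # 5 % relative change between refinements

for (n_prev, s_prev), (n_next, s_next) in zip(mesh_results, mesh_results[1:]):
    rel_change = abs(s_next - s_prev) / abs(s_prev)
    status = "converged" if rel_change < tolerance else "refine further"
    print(f"{n_prev:>7} -> {n_next:>7} elements: change = {rel_change:.1%} ({status})")
```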

  23. CAM. Model Validation • A. Configuration • Tibia bone geometry from a single patient, extracted from CT using a “previously validated method” • No information about the implant model • Loads applied directly to the bone surfaces • Bone cuts made in accordance with the surgical technique published by the implant company • Score = 2

  24. CAM. Model Validation • B. Governing Equations • Structural FE methods are well established for stress/strain calculation • Physics… • Static FE simulation • Bone modeled as linear elastic and isotropic – no rate effects • Non-homogeneous material property mapping from CT • Muscle and joint loading derived from gait simulation and an instrumented TKR • Score = 3

  25. CAM. Model Validation • C. System Properties • Non-homogeneous material properties mapped from CT based on published equations (a generic mapping sketch follows below) • Properties not compared to the human subject, but “consistent with previous” published data • No sensitivity analysis reported for the material properties • Score = 2
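For illustration, a generic sketch of the kind of CT-based mapping referred to above; the HU calibration and the density-modulus coefficients below are placeholders, not the published equations actually used in the study.

```python
# Generic HU -> density -> modulus mapping sketch (placeholder coefficients,
# not the published equations actually used in the study).
import numpy as np

def hu_to_density(hu, a=0.0, b=0.0008):
    """Linear CT calibration: apparent density (g/cm^3) from Hounsfield units (placeholder)."""
    return a + b * np.asarray(hu, dtype=float)

def density_to_modulus(rho, c=6.95, p=1.49):
    """Power-law density-modulus relation, E (GPa) = c * rho**p (placeholder constants)."""
    return c * np.asarray(rho, dtype=float) ** p

hu_values = np.array([200, 600, 1000, 1400])   # example element-averaged HU values
rho = hu_to_density(hu_values)
E = density_to_modulus(rho)
for h, r, e in zip(hu_values, rho, E):
    print(f"HU {h:>5}: rho = {r:.2f} g/cm^3, E = {e:.2f} GPa")
```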

  26. CAM. Model Validation • D. Boundary Conditions • Contact loading applied directly to the bone surface; compared to a case with the implant to confirm no significant effect on outcomes • Muscle forces from the MS model applied to the bone • Load BCs associated with level gait only • Uncertainty in load magnitudes due to upstream errors (MS model) incorporated using Monte Carlo simulation with (only) 40 random cases (an illustrative sampling sketch follows below) • Several custom Python scripts employed with no direct verification – creates doubt • Score = 2
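An illustrative sketch of the Monte Carlo idea described above: perturb the applied load magnitudes around nominal values to generate the 40 load cases. The nominal loads and the 10% coefficient of variation are assumptions, not the errors reported for the MS model.

```python
# Illustrative Monte Carlo perturbation of applied load magnitudes
# (placeholder means and error levels, not the values from the MS model).
import numpy as np

rng = np.random.default_rng(42)
n_cases = 40                                 # the study reportedly used 40 random cases

nominal_loads = {"medial_contact": 1200.0,   # N, placeholder
                 "lateral_contact": 600.0,   # N, placeholder
                 "quadriceps": 900.0}        # N, placeholder
cov = 0.10                                   # assumed 10 % coefficient of variation

samples = {name: rng.normal(mean, cov * mean, n_cases)
           for name, mean in nominal_loads.items()}

# Each row is one load case that would be passed to the FE model
cases = np.column_stack(list(samples.values()))
print("first 3 sampled load cases (N):")
print(np.round(cases[:3], 1))
```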

  27. CAM. Comparator Validation • Comparator A. Configuration • Comparator B. Governing Equations • Comparator C. Properties • Comparator D. Sample Size • There was NO COMPARATOR; the model was used only to assess relative change in the outcome metric (bone strain) • Scores = 0, 0, 0, 0

  28. CAM. Validation – Model/Comparator • A. Discrepancy • B. Comparison of Outputs • There was NO COMPARATOR; the model was used only to assess relative change in the outcome metric (bone strain) • Scores = 0, 0

  29. CAM. Validation – Model/Comparator • C. Applicability of V&V to COU… • Decision/Question… Have we satisfied design verification requirements (design inputs a, b) for this UKR design? • (a) Good coverage, modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s) • (b) Modest resection, no excessive rise in periprosthetic bone strain relative to successful predicate(s) • Score = 4

  30. CAM Summary [chart of the CAM factor scores from slides 21-29; values shown: 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 4]

  31. CAM Summary – Complete Workflow • Is the multiscale workflow acceptable? • It is not obvious how to create a composite score • Does the precursor (MS) model even need to be evaluated, or is it simply captured in the BC evaluation for the FE model?

  32. Comments • RAM/CAM application for a coupled MS-FEA modeling workflow • Value in looking at published models? • This will probably be common in practice, and information is often lacking in the literature • Perhaps publication standards need to evolve • A Library of Models and MS model repositories will help • A model is rarely based on a single piece of code • How best to apply the guidelines? CAM separate or combined? • Is the CAM even needed explicitly for both (all) models? When? • An explicit definition of COU and how to identify it will probably be needed for general users • We defined COU as: Question/Decision + RAM (influence, risk) • Any of the three elements can independently change the COU • Utility of the RAM for orthopaedic applications may be limited • Always “Medium”? • Same influence, same risk for any COU? Spine, joints – same risk for all? • Will some standard simplifications evolve for specific industries?

  33. Comments • Should the CAM be a measuring tool or a checklist? • “Credibility = 2.6” could be misleading • Acceptance criteria? What is good enough? What does a “4” look like? • Perhaps some “grandfathering” of accepted modeling paradigms will occur • What impact does an incremental shift in COU have on acceptance? • Removing the numbers could make a subtle but positive psychological difference – not just post-hoc scoring, but planning for specific CAM levels before model development • Moving “applicability” considerations to the beginning of the CAM may be more effective • Model Validation, BCs • Level 4 = “no simplifications” • Does this make sense for a “model”, which is inherently simplified? • Comparator evaluation is difficult for human subjects • Especially for subject-specific modeling efforts • If a model compares well to a single subject, does that mean it is extensible to others? • Are all comparators equal? Is a weighting factor appropriate for human subjects?

  34. Comments • The 510(k) pathway with comparison to a predicate device is extremely common in orthopaedics • Predicate device comparison (relative analysis) • Strictly speaking, there is no direct comparator for the outcome metric, but… • The predicate will have a controlling influence on the “decision” • Should the predicate model be evaluated separately from the primary model? • Should the predicate evidence be critically considered, and where/how? • The incremental increase in value (CAM score) vs. the increased cost to improve the model is a consideration [diagram: COU, Model, Clinical History, Predicate Model]
