
Developing an adverse event prediction system : A neural network and Bayesian pilot study






Presentation Transcript


  1. Developing an adverse event prediction system : A neural network and Bayesian pilot study Associate Professor Liza Heslop & Mahdi Bazargani Acknowledgements: Dean Athan and Gitesh Raikundalia May 14th 2013 vu.edu.au CRICOS Provider No: 00124K

  2. Three year study with three stages • Stage One (Pilot study): develop a structured neural network based on first day admission case mix indicators to discover the most sensitive indicators that impact on AEs and to refine the neural network threshold values – Neural Networks and Bayesian approach • Stage Two: Daily aggregate adverse events based on daily hospital workload indicators (DHWI) – a Bayesian approach • Stage Three: Discovering the relationship of common comorbidity indices with patients' different main CHADx adverse event categories – a Bayesian approach

  3. Surgeons blame pressure from management for poor safety at Lincolnshire trust. BMJ 2013;346:f1094. Fourteen hospital trusts are to be investigated for higher than expected mortality rates. BMJ 2013;346:f960. "It has been estimated that across the 14 hospitals around 6000 more patients died than expected, with mortality rates 20% higher." BMJ 2013;346:f960

  4. How has current research developed understandings of hospital-based workload intensity? Nurse workforce (measured as nurse overtime working hours) and nurse-sensitive patient outcome indicators are positively correlated (Liu et al. 2012). Nurse staffing (fewer RNs), increased workload, and unstable nursing unit environments were linked to negative patient outcomes including falls and medication errors on medical/surgical units in a mixed method study combining longitudinal data (5 years) and primary data collection (Duffield et al. 2012). Workload levels and sources of stressors can vary across different professional groups (Mazur et al. 2012)

  5. Current measures/variables of workload intensity

  6. Workforce intensity measures – no common standard. A range of internal and external research instruments such as audit, subjective responses to surveys, and administrative and clinical data records. Very few sourced coded episode-based hospital administrative data (HAD). It is necessary to accurately measure workload: a factor that impacts upon the safety and quality of health care, and a useful measure to validate in the prediction model

  7. Objectives of the first stage pilot study • Develop a structured neural network based on first day admission case mix indicators • Discover the sensitivity of each input and controlling indicator toward occurrences of AEs • Know the most sensitive indicators that impact on AEs • Establish neural network threshold values • Compare two main machine learning algorithms - Neural Networks (NN) and Naive Bayes Classifier (NBC)

  8. Methodological objective: Develop a complex computational relational model Machine learning methodologies – Neural Network (NN) and Naive Bayes Classifier (NBC). Both contribute in different ways. While the NN has a complex structure, it is suitable for establishing the relational model, which is based on dependent and inter-correlated indicators. The NBC was employed with a pre-optimization algorithm and was trained with independent indicators. The accuracy of the two methodologies (NN and NBC) is compared using 'confusion matrices' and the rates of true positive and true negative AEs. Sensitivity analyses are reported based on the NN model, which is finally established using all incorporated indicators.
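The confusion-matrix comparison described above can be sketched as follows. This is an illustrative sketch only: the helper names and the example labels are mine, not the study's, and the real comparison would run over the coded episode dataset.

```python
def confusion(actual, predicted):
    """Count true positives, true negatives, false positives, false negatives
    for binary AE labels (1 = episode contains an adverse event)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    return tp, tn, fp, fn

def rates(actual, predicted):
    """Sensitivity (true-positive rate for AEs), specificity (true-negative
    rate) and overall accuracy, as used to compare NN against NBC."""
    tp, tn, fp, fn = confusion(actual, predicted)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical labels for five episodes
sens, spec, acc = rates([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

The same two functions would be applied to both classifiers' outputs, so the comparison rests on identical metrics.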

  9. Generalized Feed Forward Multilayer Perceptron Neural Network (input, hidden and output layers) The hidden layer consists of four processing elements (PEs) using the TanhAxon function as the transfer function. Weights are updated by back propagation using the momentum rule (momentum = 0.7, step size = 0.1) and batch learning. Batch learning improves the speed of training/learning
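The architecture above can be sketched in a few lines of numpy. This is a minimal sketch under stated assumptions: the input dimension and the toy data are invented (the study's real inputs are the Table 1 and Table 2 indicators), and only the named hyperparameters (4 hidden PEs, tanh transfer, momentum 0.7, step size 0.1, batch learning) come from the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 10, 4            # 4 processing elements (PEs); input size is illustrative
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))

def forward(X):
    """Feed-forward pass: tanh hidden layer (TanhAxon), sigmoid AE-likelihood output."""
    H = np.tanh(X @ W1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2)))

step, momentum = 0.1, 0.7             # step size and momentum from the slide
v1, v2 = np.zeros_like(W1), np.zeros_like(W2)

def batch_update(X, y):
    """One batch-learning epoch: back-propagate the error, apply the momentum rule."""
    global W1, W2, v1, v2
    H, p = forward(X)
    err = p - y.reshape(-1, 1)                           # logistic-loss gradient at the output
    g2 = H.T @ err / len(X)
    g1 = X.T @ ((err @ W2.T) * (1.0 - H ** 2)) / len(X)  # back-prop through tanh
    v2 = momentum * v2 - step * g2; W2 += v2
    v1 = momentum * v1 - step * g1; W1 += v1

# Toy data standing in for first-day admission indicators
X = rng.normal(size=(32, n_inputs))
y = (X[:, 0] > 0).astype(float)
for _ in range(500):
    batch_update(X, y)
```

Because the whole batch contributes to each gradient step, the update is computed once per epoch rather than once per episode, which is the speed advantage the slide attributes to batch learning.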

  10. Neural Network is employed for developing a prediction model based on dependent and inter-correlated input indicators There are many structures for a Neural Network and many methods for training them. For the neural network method, this study employed a Generalized Feed Forward Multilayer Perceptron Neural Network with three layers - input, hidden and output. This simple structure is suitable for the current coded-episode static dataset in the absence of any time-series objective for prediction of AEs. The input layer is composed of independent variables based on first day of admission information: Table 1, DHVIs; Table 2, patient demographic information and patients' diagnosis and episode characteristics (used as controlling input indicators); and a numeric score derived from comorbidity indices.

  11. Conceptual design for building the relational model Daily Hospital Volume Indicators (DHVIs) Likelihood of Patient Adverse Events (CHADx) Patient demographic information; Patient’s diagnosis and episode characteristics; Comorbidity indices

  12. Identify daily hospital volume indicators Daily hospital volume indicators (DHVIs) – measures of work intensity. DHVI capability for extraction from a coded Australian episode data set

  13. A defined set of controlling indicators Controlling indicators: patient demographic information and patients' diagnosis and episode characteristics. These indicators are itemised within Table 2

  14. Determine each patient's corresponding LOS scores related to patient-specific characteristics: primary procedure, secondary procedure, primary diagnosis, secondary diagnosis. Assignment of LOS scores from the NHCDC (2001) for each primary and secondary diagnosis and procedure for each patient in the coded episode data set.
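The LOS-score assignment is essentially a table lookup per coded field. A minimal sketch, assuming a dictionary keyed by diagnosis/procedure code; the codes, score values, and field names below are invented for illustration, not taken from the NHCDC tables.

```python
# Hypothetical NHCDC (2001) LOS score table; real entries would be loaded
# from the published cost-data tables, keyed by ICD-10-AM / procedure code.
NHCDC_LOS = {"I21.0": 6.2, "E11.9": 3.1, "30443-00": 4.5}

def episode_los_scores(episode):
    """Sum the LOS scores over all coded diagnosis and procedure fields
    of one patient episode; unknown codes contribute zero."""
    fields = ["primary_diagnosis", "secondary_diagnoses",
              "primary_procedure", "secondary_procedures"]
    total = 0.0
    for f in fields:
        codes = episode.get(f, [])
        if isinstance(codes, str):          # allow a single code per field
            codes = [codes]
        total += sum(NHCDC_LOS.get(c, 0.0) for c in codes)
    return total

ep = {"primary_diagnosis": "I21.0",
      "secondary_diagnoses": ["E11.9"],
      "primary_procedure": "30443-00"}
score = episode_los_scores(ep)
```

The resulting per-episode total is the kind of numeric score that feeds the network's input layer.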

  15. Comorbidity classification indices used for obtaining comorbidity LOS scores

  16. Dealing with a coded data set without 'onset flag' This study used a coded data set that did not have an onset flag. Onset flags were introduced in 2008, whereby hospital acquired conditions (HAC) were flagged in the codes. Hence a difficulty of this dataset was the absence of an indicator (represented by an onset flag) on each secondary diagnosis to identify it as a comorbidity or a complication. Identifying possible comorbidities and complications in the absence of an onset flag: a step was necessary to identify the possible comorbidities from the coded episode dataset.
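One pragmatic rule for this step, offered here as an assumption rather than the study's exact algorithm, is to treat a secondary diagnosis as a probable pre-existing comorbidity when its code appears in a published comorbidity index code list, and otherwise as a possible in-hospital complication. The code stems below are illustrative placeholders.

```python
# Illustrative 3-character ICD-10-AM stems drawn from the idea of an
# Elixhauser-style comorbidity list; NOT the actual published list.
COMORBIDITY_STEMS = {"E11", "I50", "J44", "N18"}

def classify_secondary(code):
    """Heuristic in the absence of an onset flag: a secondary diagnosis that
    matches a comorbidity-index stem is labelled a comorbidity; anything
    else is retained as a possible hospital-acquired complication."""
    return ("comorbidity" if code[:3] in COMORBIDITY_STEMS
            else "possible complication")
```

A rule like this over-includes (some listed conditions can also arise in hospital), which is why the slide calls the workaround a limitation to be removed once an onset-flagged dataset is available.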

  17. Identify an operation for AEs – output from the Neural Network Each patient episode of care is identified as containing an AE if it satisfies any CHADx major categories' business rules. According to Utz et al. (2012): "The CHADx offers a comprehensive classification of hospital-acquired conditions available for use with ICD-10-AM. The CHADx was developed as a tool for use within hospitals, allowing hospitals to monitor (assuming constant casemix) and reduce hospital-acquired illness and injury. Within Queensland in 2010/2011, 9.0% of all admissions included at least one hospital-acquired condition (as defined by the CHADx)".
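The output flag reduces to an any-match over the CHADx category rules. A sketch follows; the category names and code sets are placeholders, not the published CHADx business rules, which are considerably more detailed.

```python
# Placeholder stand-ins for CHADx major-category business rules: each
# category maps to a set of qualifying hospital-acquired diagnosis codes.
CHADX_RULES = {
    "post-procedural": {"T81.0", "T81.4"},
    "drug-related":    {"Y40.0", "Y45.1"},
}

def has_adverse_event(episode_codes):
    """The NN's target variable: True if any code in the episode satisfies
    any CHADx major category's rule set."""
    return any(code in rule_set
               for rule_set in CHADX_RULES.values()
               for code in episode_codes)
```

Applied over the whole dataset, this yields the binary AE label that the network is trained to predict.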

  18. Results: building of the relational model (training and validation components) The relational model will ascertain relationships between all inter-correlated (dependent) input and controlling indicators toward the output variable AEs. To some extent these variables are inter-correlated, for example emergency admissions are correlated with the number of admissions.

  19. Discussion on Table 3 Most of the DHVIs have small sensitivity toward the output. The 'number of adverse events' and 'emergency admissions' on the date of admission have the most sensitivity toward AE occurrences. Among patient diagnosis indicators, all show strong sensitivity toward the likelihood of adverse events, with Secondary Diagnosis LOS having the most effect among all employed input indicators in this pilot study. Among comorbidity indices, Elixhauser and Charlson show rather strong sensitivity values. Sex and Age have the highest sensitivities among demographic characteristic indicators

  20. Discussion on Table 4 The Neural Network with different thresholds achieves higher overall accuracy than an optimized NBC. As the goal of this prediction is to obtain a higher true positive rate for AEs (sensitivity), the thresholds 0.15 (sensitivity 85%) and 0.20 (sensitivity 78%) were selected, with the latter achieving higher overall accuracy (74% versus 70%). Selection of these thresholds may also depend on the problem specification and application of the prediction model. By contrast, the NBC's overall accuracy was lower than those values (64%) and a low rate of sensitivity was obtained (33%).
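The threshold trade-off works by cutting the network's continuous AE likelihood at different points: a lower cut flags more episodes (raising sensitivity), a higher cut flags fewer (here raising overall accuracy). The scores below are invented to illustrate the mechanism, not taken from Table 4.

```python
def classify(scores, threshold):
    """Turn continuous NN output scores into binary AE predictions."""
    return [s >= threshold for s in scores]

scores = [0.05, 0.12, 0.18, 0.22, 0.40]   # hypothetical NN outputs for 5 episodes

low_cut  = classify(scores, 0.15)   # more episodes flagged: higher sensitivity
high_cut = classify(scores, 0.20)   # fewer flagged: higher overall accuracy in Table 4
```

Which threshold to deploy then depends, as the slide notes, on whether missed AEs or false alarms are costlier in the target application.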

  21. Summary of key findings A Neural Network and an NBC were trained on the fewest indicators that achieve the highest accuracy. Ordering of the sensitivity values: number of adverse events and number of emergency admissions on the date of admission showed the most sensitivity within DHVIs; Elixhauser and Shwartz indices showed the most sensitivity within comorbidity indices; Sex and Age showed the most sensitivity within patient characteristic information toward occurrences of an AE. Results show the superiority of the Neural Network, with an overall accuracy of 74% (threshold = 0.2) versus 64% for the Naive Bayes Classifier

  22. Lessons for the three stage study Indicators are very sensitive to the current state of the trained neural network and may be different if the network is trained with a different structure and if new indicators are employed

  23. Outcomes A simply-structured relational model and neural network that can generate complex computational calculations based on several weights for each node as well as several input and hidden nodes – a first step toward a relational model to predict AEs. Various training iterations were conducted to achieve the highest accuracy on the validation dataset, avoiding the overtraining and over-fitting of the network on which the sensitivity analyses are based. Sensitivity values for the independent indicators have been obtained

  24. Study limitations Use of a coded episode data set without an onset flag. Inclusion of complicated steps to distinguish complications arising after admission. Results are not conclusive without further machine computational processing

  25. Implications of this pilot study for the next stages of this research The procedures to overcome the lack of an onset flag have been complex. The accuracy of identifying hospital-acquired conditions in the overall relational model will be improved in the main study. The sensitivity results will help with refinements to this pilot study when a larger data set (including an onset flag) is used. The DHVIs on the date of admission may be eliminated as they do not show sufficient strength for AE prediction. Comorbidity diseases and demographic characteristics along with diagnosis types are involved: 1. age; 2. sex; 3. primary procedure; 4. secondary diagnosis; 5. Elixhauser. Workload indicators did not appear to be involved in the highest accuracy of this relational model, but this finding will require validation in the refinements to the pilot study. It may not support the many research findings which suggest that workload indicators are heavily associated with adverse events.

  26. Next stage of research (continued) To develop a case mix of input indicators (CMI) among all employed indicators to reach the highest possible accuracy of classification based on the employed machine learning algorithm. This CMI will hold the least number of indicators that achieve the highest accuracy of classification. To firmly establish which indicators to eliminate, as their inclusion will not improve the overall accuracy of the model

  27. Direction of change as a result of the pilot study Further testing of machines other than Neural Network and Bayesian Network; consider an ensemble of REPTree, which may yield further accuracy. There are different machines (e.g. Bayes, Neural Networks, Decision Trees, Logistic Regression) involving different optimization algorithms (Greedy Search, Genetic Algorithm, Ensembles). The next stage will be to obtain the episode data indicators that yield the highest possible accuracy for each machine and each corresponding optimization algorithm. Correlation (tipping point or non-linear relationships) may be examined in stage three based on the average rate of DHWIs during all days of the patient hospitalization, rather than the first day of admission. Correlation analysis based on Neural Networks is very complex and suitable only for classification and prediction results – hence Bayes is recommended for this study instead

  28. Development of a composite measure of hospital workload intensity A composite measure of hospital workload intensity may be valuable to policy and health service officials at many levels. The future outcome of a valid and reliable workload intensity composite measure will: help clinicians define suitable workload standards for hospital organisations; help hospital organisational officials to monitor their hospitals' workload intensity and even possibly capacity; support health services researchers to standardize measures of workload intensity for benchmarking; and help examine relationships between practice environment features (for example, as rated on measures of job satisfaction, turnover intentions and assessments of quality of care) and workload intensity in a systematic and standardized way

  29. Development of a composite measure of hospital workload intensity (cont’d) Make better use of coded activity-based data to improve the effectiveness of operational decision-making For example, Pedroja (2008:36), who used composite indexes to measure hospital workload intensity suggested: “Through the identification of a set of indicators that predict stresses on the system, leaders would have the ability to provide additional resources or system fixes that would make the operation less vulnerable to health care error and patient harm” Support national studies that may like to develop a systemic picture of workload intensity. Most current studies on workload intensity use a range of proxy measures in small scale or localised studies to measure the effort needed for inpatient medical and nursing work or workload intensity

  30. References Australian Commission on Safety and Quality in Health Care. Classification of Hospital Acquired Diagnoses (CHADx), 2011. Thomas JW, Guire KE, Horvat GG. Is patient length of stay related to quality of care? Hospital & Health Services Administration 1997;42(4):489-507.

  31. CONTACT DETAILS NAME: Liza Heslop DEPARTMENT: Western Centre for Health Research and Education, Sunshine Hospital PHONE: +0407886201 EMAIL: liza.heslop@vu.edu.au www.vu.edu.au
