Abstract
Using large and small data metrics can provide insight into the effectiveness of continuing professional development (CPD) programs. These data can serve to inform learners, medical education supporters and content creators. Individual program data may provide meaningful insights into targeted educational needs or barriers to elevated practice. Aggregated data may be used to set benchmarks for learners, demonstrate purposeful educational methodology and showcase bigger themes in content design and creation. Here we outline uses for data, big and small, while also calling on all CPD/CME members to rethink the use of data to drive decisions and change for patients.
Introduction
The rate of published literature is at an all-time high, with MEDLINE indexing roughly two articles every minute in 2021.1 This figure excludes the global research appearing in non-indexed open access journals, virtual poster presentations, healthcare provider (HCP) congresses and other widely accessible outlets. This abundance of scientific data is readily available and often overwhelming to consume. Combined with the increasing demands on healthcare providers and diminished time for professional development, it is unsurprising that there is a lag in adopting changing standards of care among medical professionals.2
Continuing professional development (CPD) for healthcare professionals can help facilitate understanding and application of important and accurate medical information. Additionally, nuances such as potential data limitations, important adverse events, impacts on health disparities and future directions can make it difficult to apply the literature broadly, necessitating a deeper dive into the content.
Medical education organizations and partnerships can support the provision of timely and relevant medical information for healthcare providers. However, programs are often focused on a single therapeutic area, disease state or treatment pathway. Likewise, knowledge retention fades over time, requiring repeat exposure to content to solidify and reinforce understanding.3–5 This presents a challenge to assessing how much an individual program truly impacts patient care.
In this article, we seek to start a discussion about how to use data as signals in the CME/CPD space. We hope to encourage thoughtful use of individual data metrics to help shape a larger understanding within CME/CPD. We will share uses for both internal and external data stakeholders, identifying immediate solutions and longer-term creative applications.
Small Data, Big Signals of Importance
We first look at “small data”. Small data are distinct from “big data” in that they derive from limited or discrete educational activities or learner samples, such as individual events or activities, the initial set of learners within an on-demand educational activity, or small teams or groups of healthcare professionals from disparate community settings. Such small data may yield insights with distinct value and deserve recognition for their place in a comprehensive data evaluation effort.6 Here, we detail three areas in which small data are uniquely valuable as sources of meaningful signals within a CPD initiative.
First, small data serve as a source to identify areas of confusion or ambiguity in evidence adoption among learners. Baseline data collected prior to exposure to education, as well as CPD activity data, even within a relatively small sample, will point to gaps in understanding or areas where understanding is inconsistent among learners. One example of such signaling comes from intra-activity data, when closed-ended polling or activity questions draw responses that suggest supposition or show a majority selecting an incorrect answer. Complex topics, including new drug mechanisms and new standards of care, are areas where we may see this speculation more commonly. Early or limited data can serve as signals that warrant careful review of follow-up or continuously collected data to assess whether a persistent gap is revealed.
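As a concrete illustration of this kind of signal detection, the following is a minimal sketch in Python that flags polling questions where a majority selected an incorrect answer or where responses spread nearly evenly, suggesting guesswork. The function name, data layout and cutoff values are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flag intra-activity questions whose response patterns
# signal a possible learning gap. Assumes responses are tallied as counts
# per answer choice; the thresholds are illustrative, not prescriptive.

def flag_gap_signals(question_tallies, correct_answers,
                     majority_wrong_cutoff=0.5, guessing_spread=0.15):
    """Return question IDs whose responses suggest confusion or guessing."""
    flagged = {}
    for qid, tallies in question_tallies.items():
        total = sum(tallies.values())
        if total == 0:
            continue
        shares = {choice: n / total for choice, n in tallies.items()}
        correct_share = shares.get(correct_answers[qid], 0.0)
        # Signal 1: a majority of learners chose an incorrect answer.
        majority_wrong = correct_share < majority_wrong_cutoff
        # Signal 2: responses spread nearly evenly, suggesting supposition.
        near_uniform = max(shares.values()) - min(shares.values()) < guessing_spread
        if majority_wrong or near_uniform:
            flagged[qid] = {"correct_share": round(correct_share, 2),
                            "near_uniform": near_uniform}
    return flagged

# Hypothetical example: 40 learners answer two polling questions (choices A-D).
tallies = {
    "Q1": {"A": 6, "B": 22, "C": 8, "D": 4},    # majority chose B; answer is A
    "Q2": {"A": 11, "B": 10, "C": 9, "D": 10},  # near-uniform spread: guessing
}
print(flag_gap_signals(tallies, {"Q1": "A", "Q2": "C"}))
```

Even with a small sample such as this, the flagged questions become a watch list for review against follow-up or continuously collected data.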
Second, small data signals within CPD can identify barriers or challenges for healthcare providers, especially in rapidly evolving areas or where emerging data challenge standards of care. Insights from such small data may reveal latent or unexpected effects of new guidelines, standards or therapeutic options. These data may provide insights into emerging variation in care, such as among healthcare roles within the medical team, across pockets of care delivery such as regional variation, or among patients with varying abilities to access care. Often these data also provide clues to better focus mitigation or intervention efforts, such as through education or other tactics, as part of a comprehensive assessment of quality of care and patient outcomes.
Third, small data point to areas to improve the efficacy of the education delivered. Examining these data in the formative stages of a program enables CPD planners to update or revise content to address identified gaps more effectively. Assessing early data signals of comprehension may yield opportunities to rewrite a question, add questions to further characterize a gap, or address it through additions such as decision tools or reference aids. Paying attention to small data can yield data-supported insights early in a program cycle, provide clues on how to adjust, and offer a mechanism to assess the impact of any refinement or revision. Used this way, data provide an objective approach to assessing programs, in formative as well as summative analysis, that is generally more effective than assessment without such data review.
Systematic approaches to analyzing data in the formative stages and looking for data signals can yield valuable insights to ensure CPD is targeted appropriately and addresses the needs of the intended learners. Such data have the potential to reveal emerging or nascent areas of need that warrant attention to fulfill the full potential of CPD efforts.
Big Data as More Than Claims Data
Individual activity or programmatic data can be powerful for internal and external stakeholders. However, a comprehensive evaluation of data across therapeutic areas or learner types can yield even stronger context. Merging data can be time intensive and cumbersome; however, data warehouses can serve as a meaningful repository. One such use is the potential to trend the impact of CME/CPD over time and predict future impact, taking leading data (outcome levels 1-4) and building predictions for lagging data (outcome levels 5-7).6
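To make the leading-to-lagging idea concrete, the sketch below fits an ordinary least squares model that predicts a lagging outcome from leading metrics across past programs. All field names and figures are hypothetical, and a real model would require far more data and proper validation; this simply illustrates the mechanics.

```python
# Illustrative sketch: use leading outcome metrics (e.g., participation,
# satisfaction, knowledge gain, competence gain; outcome levels 1-4) from
# past programs to predict a lagging metric (e.g., a level 5 performance
# measure). All values are hypothetical.
import numpy as np

# Each row is one past program:
# [participation_rate, satisfaction (1-5), knowledge_gain, competence_gain]
leading = np.array([
    [0.62, 4.1, 0.18, 0.12],
    [0.75, 4.4, 0.25, 0.20],
    [0.58, 3.9, 0.10, 0.08],
    [0.81, 4.6, 0.30, 0.24],
    [0.69, 4.2, 0.21, 0.15],
    [0.77, 4.5, 0.27, 0.22],
])
# Observed lagging outcome for the same programs (e.g., proportion reporting
# practice change at six months).
lagging = np.array([0.22, 0.35, 0.15, 0.41, 0.27, 0.38])

# Ordinary least squares with an intercept term.
X = np.hstack([np.ones((len(leading), 1)), leading])
coef, *_ = np.linalg.lstsq(X, lagging, rcond=None)

# Predict the lagging outcome for a new program from its leading data.
new_program = np.array([1.0, 0.70, 4.3, 0.22, 0.17])
print(f"Predicted practice change: {new_program @ coef:.2f}")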
Using past performance to predict future impact is not a new concept, and the creation of an Education Data Warehouse (EDW) has been briefly described in the literature.7–9 To date, most descriptions come from graduate medical education. Carney et al describe the creation of an EDW for 14 medical residency programs; qualitative and quantitative data were collected and analyzed, and the information gathered was used, in part, to create metrics for graduate competency.10 Similarly, Cook et al describe an EDW opportunity in educational research across medical schools as a response to an evolving need for collaborative medical education research.11 The implications for CPD are clear, with the opportunity to track a learner over time and predict successful program engagement and performance. Conclusions about intraprogram benefits, across a single topic or therapeutic area, and interprogram benefits, across the CME/CE provider’s entire CPD program, can be deduced from an EDW. Other benefits, such as identifying learning gaps, barriers and future needs, are all readily attainable from a functioning EDW.
Creating an EDW is outside the scope of this article; however, a previously published guide covers significant considerations.12 Importantly, one must be purposeful in selecting which data should be included, how often they should be updated, and what value they bring to the learner and organization. Lastly, veracity must be considered, understanding that data integrity is of the utmost importance.
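To suggest what these selection decisions might look like in practice, here is a minimal sketch of an EDW core using SQLite. The tables, columns and comments are illustrative assumptions, not a prescribed design; a production warehouse would add provenance tracking, refresh cadence and validation checks to safeguard veracity.

```python
# A minimal sketch of an EDW core schema using SQLite. All table and column
# names are hypothetical, chosen to illustrate the kinds of data one might
# deliberately include: who learned, from what, and how they responded.
import sqlite3

conn = sqlite3.connect("edw_sketch.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS learners (
    learner_id    TEXT PRIMARY KEY,
    profession    TEXT,             -- e.g., physician, pharmacist, NP/PA
    setting       TEXT              -- e.g., academic, community, rural
);
CREATE TABLE IF NOT EXISTS activities (
    activity_id   TEXT PRIMARY KEY,
    topic         TEXT,             -- therapeutic area or disease state
    format        TEXT,             -- live, on-demand, enduring
    release_date  TEXT
);
CREATE TABLE IF NOT EXISTS responses (
    learner_id    TEXT REFERENCES learners(learner_id),
    activity_id   TEXT REFERENCES activities(activity_id),
    question_id   TEXT,
    outcome_level INTEGER,          -- outcome level the item maps to (1-7)
    is_correct    INTEGER,          -- 1/0; NULL for non-scored items
    answered_at   TEXT              -- timestamp supports trending over time
);
""")
conn.commit()
```

Even this skeleton shows why update frequency matters: the `answered_at` timestamps are what make intra- and interprogram trending over time possible.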
Once established, the EDW could feed into a learner dashboard, helping participants assess their progress and improvement compared to historic performance or an ongoing peer group.13 The dashboard could provide benchmark data for overall program progress and allow learners to identify weak areas on which to focus future education. Partnering internal data with publicly available or regional health metrics would allow prediction models to be built, bridging the gap to higher-level outcome achievement.12,14
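A dashboard benchmark of this kind could be as simple as the following sketch, which compares a learner's score to a peer group's mean and percentile rank. The scores and grouping are hypothetical.

```python
# Minimal sketch: compute a learner's standing against a peer benchmark
# for a dashboard view. All scores are hypothetical.
from statistics import mean

def benchmark(learner_score, peer_scores):
    """Return the learner's score, the peer mean, and percentile rank."""
    below = sum(1 for s in peer_scores if s < learner_score)
    return {
        "score": learner_score,
        "peer_mean": round(mean(peer_scores), 1),
        "percentile": round(100 * below / len(peer_scores)),
    }

peers = [62, 70, 74, 78, 81, 85, 88, 90]  # hypothetical peer-group scores
print(benchmark(83, peers))
# -> {'score': 83, 'peer_mean': 78.5, 'percentile': 62}
```

Run against successive activities in a topic area, the same comparison shows a learner's trajectory relative to peers, not just a single snapshot.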
Success Strategies
Successful utilization of an EDW is built on a foundation of data integrity and standardization. To get meaningful predictions and correlations, quality questions must be asked and timely data must be available. Detailed item analysis to assess question performance would allow for developing question inclusion or retention thresholds. It will also be essential to normalize data points across intra- and interprogram metrics, ensuring consistency and standardization that support data validity and reporting.
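As one way such an item analysis could work, the sketch below computes two classical statistics per question, a difficulty index and an upper/lower-group discrimination index, and applies an illustrative retention rule. The cutoffs are common psychometric rules of thumb, not standards drawn from this article.

```python
# Sketch of a classical item analysis: difficulty (proportion correct) and
# a discrimination index from upper/lower score groups, with an illustrative
# retention threshold.
def item_analysis(item_matrix):
    """item_matrix: one row per learner, each a list of 1/0 item scores."""
    n_items = len(item_matrix[0])
    by_total = sorted(item_matrix, key=sum)
    k = max(1, len(by_total) * 27 // 100)       # ~27% upper/lower groups
    lower, upper = by_total[:k], by_total[-k:]
    report = []
    for i in range(n_items):
        difficulty = sum(row[i] for row in item_matrix) / len(item_matrix)
        discrimination = (sum(r[i] for r in upper) - sum(r[i] for r in lower)) / k
        # Illustrative retention rule: drop items nearly everyone gets
        # right/wrong or that fail to separate strong from weak performers.
        retain = 0.2 <= difficulty <= 0.9 and discrimination >= 0.2
        report.append((i, round(difficulty, 2), round(discrimination, 2), retain))
    return report

# Hypothetical example: eight learners, three scored items.
scores = [
    [1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 1, 0],
    [1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 0],
]
for item, diff, disc, keep in item_analysis(scores):
    print(f"item {item}: difficulty={diff}, discrimination={disc}, retain={keep}")
```

Here the third item fails the difficulty cutoff (almost no one answers it correctly), which is exactly the kind of objective signal a retention threshold is meant to surface.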
Next, understanding and mitigating potential bias will help preserve data integrity. Association, recall and sampling bias are just a few potential pitfalls that may compromise analyses; the aforementioned item analysis and contextual statistical analysis are two strategies to minimize them. The importance of data privacy and security measures also cannot be overstated, especially if the EDW incorporates patient outcomes or HCP performance data. Finally, being knowledgeable and realistic about learner growth over time is essential. Cognitive overload, competing interests and human nature all contribute to a decline in learned information over time.3,4,15,16
Call to Action
Providers and consumers of CPD should apply small and big data carefully when interpreting success. Evaluating for potential bias, understanding item analyses and trending data over time are steps toward interpreting CPD data with validity and reliability. Understanding how to harness the power of data in a meaningful way is essential for CPD providers and supporters. Advocating for educational methodology that supports retention, while also understanding how to use data as a signal for downstream impact, can help us all advance the future of patient care.
References
1. MEDLINE® Citation Counts by Year of Publication. National Library of Medicine.
2. Balas EA, Boren SA. Managing Clinical Knowledge for Health Care Improvement. Yearb Med Inform. 2000;(1):65-70.
3. Semb GB, Ellis JA. Knowledge taught in school: What is remembered? Rev Educ Res. 1994;64(2):253-286.
4. Sisson JC, Swartz RD, Wolf FM. Learning, retention and recall of clinical information. Med Educ. 1992;26(6):454-461. doi:10.1111/j.1365-2923.1992.tb00205.x
5. Fisher K, Williams S, Roth J. Qualitative and quantitative differences in learning associated with multiple‐choice testing. J Res Sci Teach. 1981;18(5):449-464.
6. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15. doi:10.1002/chp.20001
7. Triola MM, Pusic MV. The education data warehouse: a transformative tool for health education research. J Grad Med Educ. 2012;4(1):113-115. doi:10.4300/JGME-D-11-00312.1
8. Arora VM. Harnessing the Power of Big Data to Improve Graduate Medical Education: Big Idea or Bust? Acad Med J Assoc Am Med Coll. 2018;93(6):833-834. doi:10.1097/ACM.0000000000002209
9. Collichio FA, Hess BJ, Muchmore EA, et al. Medical Knowledge Assessment by Hematology and Medical Oncology In-Training Examinations Are Better Than Program Director Assessments at Predicting Subspecialty Certification Examination Performance. J Cancer Educ Off J Am Assoc Cancer Educ. 2017;32(3):647-654. doi:10.1007/s13187-016-0993-6
10. Carney PA, Eiff MP, Saultz JW, et al. Assessing the impact of innovative training of family physicians for the patient-centered medical home. J Grad Med Educ. 2012;4(1):16-22. doi:10.4300/JGME-D-11-00035.1
11. Cook DA, Andriole DA, Durning SJ, Roberts NK, Triola MM. Longitudinal research databases in medical education: facilitating the study of educational outcomes over time and across institutions. Acad Med J Assoc Am Med Coll. 2010;85(8):1340-1346. doi:10.1097/ACM.0b013e3181e5c050
12. Nasir K, Javed Z, Khan SU, Jones SL, Andrieni J. Big Data and Digital Solutions: Laying the Foundation for Cardiovascular Population Management CME. Methodist DeBakey Cardiovasc J. 2020;16(4):272-282. doi:10.14797/mdcj-16-4-272
13. Boscardin C, Fergus KB, Hellevig B, Hauer KE. Twelve tips to promote successful development of a learner performance dashboard within a medical education program. Med Teach. 2018;40(8):855-861. doi:10.1080/0142159X.2017.1396306
14. Yang T, Yang Y, Jia Y, Li X. Dynamic prediction of hospital admission with medical claim data. BMC Med Inform Decis Mak. 2019;19(Suppl 1):18. doi:10.1186/s12911-019-0734-y
15. Mattarella-Micke A, Beilock SL. Capacity Limitations of Memory and Learning. In: Seel NM, ed. Encyclopedia of the Sciences of Learning. Springer US; 2012:498-501. doi:10.1007/978-1-4419-1428-6_603
16. Leutner D, Leopold C, Sumfleth E. Cognitive load and science text comprehension: Effects of drawing and mentally imagining text content. Comput Hum Behav. 2009;25(2):284-289. doi:10.1016/j.chb.2008.12.010
Sarah Nisly, PharmD, MEd, BCPS, FCCP, vice president, outcomes and clinical impact for Clinical Education Alliance, earned her Doctorate in Pharmacy at the University of Kansas before completing her PGY1 residency at Greenville Health System in Greenville, South Carolina, and a PGY2 Internal Medicine residency at the University of Tennessee in Knoxville, Tennessee. Sarah has over 15 years of clinical academic experience, most recently as a professor with Wingate University and clinical pharmacist at Wake Forest Baptist Health. She has worked extensively with physicians, advanced practice providers, pharmacists, student and resident learners, and other members of the healthcare team. Sarah was active in the didactic and experiential curriculum, led post-graduate efforts for students, and participated in curricular assessment for the school. In addition to her doctorate, Sarah has completed a master’s in education, with a focus on measurement, evaluation, statistics and assessment from the University of Illinois Chicago. She has a passion for figuring out how people learn, evaluating new and innovative teaching techniques, and assessing change over time. She is active within several professional organizations and enjoys working collaboratively on projects with other individuals, both locally and nationally, to advance healthcare and education.
Caroline O. Pardo, PhD, CHCP, FACEHP, is a longstanding member of the Alliance community, with professional experience in independent medical education outcomes, strategy, research and grant management. Pardo currently serves as senior vice president of education strategy, Clinical Education Alliance. She is always up for a brainy challenge, like maintaining a Wordle streak, and is excited to be a part of the live conferences within the Alliance professional community this year.