Authors

  1. Benham, Brian MSN, APRN, CRNA
  2. Hawley, Diane PhD, RN, CCNS, CNE

Abstract

Review question/objective: The objective of this review is to identify the effectiveness of tools used to evaluate critical decision making skills for applicants to healthcare graduate educational programs.

 

Review questions:

 

Do unique tools to assess critical decision making skills evaluate the likelihood of student success in healthcare graduate educational programs?

 

Do traditional Graduate Record Examination (GRE) scores or grade point average (GPA) evaluate the likelihood of student success in healthcare graduate educational programs?

 

Background: Students leave healthcare academic programs for a variety of reasons, including financial hardship, personal and family health issues, and the realization that a program may be too rigorous to complete. When students attrite, it is disappointing for them as well as their faculty. In addition, there can be residual debt from student loans and potential for conflict and legal dispute if the student was dismissed from the program.1 Because the number of available spots is limited, students who are accepted into graduate educational programs and later attrite also block other students from enrolling. Both students and faculty would prefer that every applicant initially selected graduate; unfortunately, this does not always occur.

 

In a review of attrition from participating Certified Registered Nurse Anesthetist (CRNA) programs in the United States in 2005, the second most common reason for student attrition was academic dismissal, which was responsible for 30% of all attrition. Clinical deficiencies accounted for 15% of student attrition. According to the review, overall attrition nationwide was 9%, ranging from a low of zero students to a high of 41.3%.1 It could be surmised that academic and clinical deficiencies are related to inadequate critical decision making (CDM) skills. If accurate, this deficit is responsible for nearly half of all attrition.

 

Advanced practice nursing and other healthcare professions require not only extensive academic preparation, but also the ability to critically evaluate patient care situations. Inherent to this process are assessing the situation, analyzing options, narrowing possible interventions and implementing the chosen action. This is followed by evaluating the effect of the action and correcting course if needed. These steps constitute the process known as CDM. The ability to critically evaluate a situation is not innate; CDM skills are higher-level skills that are difficult to assess.2 For the purpose of this review, CDM and critical thinking (CT) skills refer to the same construct and will be referred to globally as CDM skills.

 

Quantitative, cognitive measures such as grade point average (GPA) and scores on the Graduate Record Examination (GRE) are frequently used to determine whether applicants will succeed in graduate healthcare programs. Though these two measures do elucidate an applicant's probable ability to complete the required course work, the applicant's ability to engage in CDM and their mastery of non-cognitive skills are harder to evaluate.3 A 2009 study by Megginson analyzed various methods for assessing non-cognitive constructs in graduate admissions.4 Results highlighted that non-cognitive attributes, such as motivation, problem solving and maturity, were generally assessed through letters of recommendation, interviews and personality inventories.

 

According to O'Sullivan, former Program Director of the U.S. Army's nurse anesthesia program (personal communication, March 2013), most CRNA programs rely on applicant interviews and letters of recommendation from the applicant's previous supervisors to assess CDM and non-cognitive attributes. However, Megginson illustrates that traditional narrative letters of recommendation (NLOR) exhibit low validity in predicting performance in graduate programs.4 Areas of NLOR weakness included leniency, less than optimal knowledge of the applicant, low reliability and other extraneous factors. These findings suggest that the NLOR, the primary tool for assessing CDM and non-cognitive factors in applicants, holds little utility.

 

In 2010, Fero and colleagues published an experimental study related to assessing CT.5 This quasi-experimental crossover study examined the CT scores of a convenience sample of 36 undergraduate nursing students. The numeric data were derived from videotaped vignettes (VTV), patient care scenarios utilizing high-fidelity human patient simulators (HFHS), and two established tests for assessing CT skills: the California Critical Thinking Disposition Inventory (CCTDI) and the California Critical Thinking Skills Test (CCTST). Videotaped vignettes are recorded simulated situations, with an actor portraying the patient in a given scenario. The subjects watched the VTV, then provided a written assessment of the situation with proposed actions and rationales. The subjects were administered the CCTDI and CCTST, then randomized into two groups, A and B. Group A was presented a VTV involving a pulmonary embolism followed by the same scenario using the HFHS; Group B started with the HFHS, then watched and evaluated the VTV. The randomization and the altered sequence of evaluated events constituted the quasi-experimental crossover design stated by the authors. The results demonstrated that subjects exhibiting strong critical thinking dispositions on the CCTDI showed a greater ability to identify clinical problems, report essential data, initiate nursing interventions and prioritize care. The implication of this finding is that HFHS simulated scenarios were as effective as the CCTDI in assessing CDM.

 

A 2005 study presented the possibility of utilizing online responses to case scenarios as one tool to evaluate CT skills.6 This observational study involved 53 master's degree nursing students enrolled in three online courses. The authors developed a 10-item Likert-scaled tool. The online written work analyzed involved case scenarios related to one of the three academic courses sampled. The three scenarios included a crisis-intervention and decision-making situation, a primary care clinic encounter, and a communication problem between a student nurse and a staff nurse in a clinical setting. Analysis of the data illustrated that inter-rater reliability problems prevented the tool from yielding reliable data.

 

Multiple tools are available to evaluate CT and CDM, and the implication from the literature is that these tools should be used for their predictive value in admissions processes. However, a descriptive correlational study reported data that confounds this conclusion.7 A nonprobability convenience sample of nurses enrolled in a master's-level family nurse practitioner program was utilized. The author's research hypotheses stated that nurses who scored higher on the CCTST would demonstrate multiple higher-level clinical skills, as evaluated by the Clinical Decision-Making in Nursing Scale (CDMNS) and preceptor evaluation tools. Correlational analysis illustrated an absence of statistically significant relationships between CT and any of the evaluated areas of study. One positive finding was that nurses with critical care nursing experience demonstrated higher scores on the CCTST scales.

 

A study published in a nurse anesthesia journal in 2012 addressed a CRNA program's use of high-fidelity simulation as an integral portion of the applicant appraisal.8 This retrospective study assessed possible correlations between HFHS performance scores and other applicant characteristics, including undergraduate GPA, scores on two written assignments (a goals statement and an interview essay), years of critical care and general nursing experience, professional involvement, GRE scores and initial nursing degree earned (Associate Degree in Nursing or Bachelor of Science in Nursing). The simulation component was evaluated using an author-derived tool, the simulation interview evaluation tool (SIET). The tool covered eight areas of problem solving: recognition, deductive reasoning, causation, treatment plan development, assistance seeking, treatment initiation, communication and leadership traits. The subjects were a convenience sample of 70 applicants from the 2008 class selection cycle. Two CRNA simulation coordinators performed the simulation assessment portion to assist with inter-rater reliability. Results of the study demonstrated a positive correlation between face-to-face interview scores and SIET scores (p=0.003). Limitations of this study included the small sample size, the retrospective nature of data collection, and the unvalidated nature of the SIET evaluation tool. What this study accomplishes is the introduction of an anesthesia-derived, simulation-based scoring matrix that functions in parallel to currently established and accepted admission standards.

 

A 1996 article by Adams, Whitlow, Stover and Johnson presented an evaluation of four tools used to assess CT.10 This expert opinion piece was based on a review of the available literature and discusses the Watson Glaser Critical Thinking Appraisal (WGCTA), the CCTST, the Ennis-Weir Critical Thinking Essay Test (EWCTET) and the Cornell Critical Thinking Tests (CCTT). The authors' analysis found all four tools to be valid, providing a measure of the abstract concepts of CT. The reported strength of these tools is limited by the use of convenience samples and small numbers of subjects. While the authors support the WGCTA as a validated instrument, they question its utility in nursing due to inconsistent results in the published literature. The authors view the CCTST as lacking validation in the field of nursing. The CCTT and EWCTET are described as having 'a reservoir of untapped potential' due to their underuse as predictive tools for CT.

 

What these studies and articles represent is current thought and a historical perspective on the evaluation of CT and CDM skills. While these works have validity and provide information, a consensus in the form of a systematic review of these tools in use is lacking. A thorough review of all available databases, including those devoted to systematic reviews, failed to identify works focusing on critical decision making assessment in applicants to graduate healthcare educational programs. Though two meta-analyses were discovered, one evaluating the predictive validity of the GRE for graduate student selection and performance11 and one evaluating the validity of the Pharmacy College Admission Test (PCAT) and grade predictors of pharmacy student performance,12 neither work followed a systematic review protocol. A quality systematic review could illuminate the path to acceptance in the application process for graduate healthcare education programs. As healthcare monies decrease and educational funds become more difficult to acquire, a consistent tool for admitting the best qualified applicants to graduate healthcare programs would help ensure that education funds are spent on those with the greatest likelihood of success, ultimately increasing the number of providers.

 


Inclusion criteria

Types of participants

This review will consider studies published in the English language that include applicants, students enrolled and/or recent graduates (within one year from completion) of healthcare graduate educational programs.

 

Types of intervention(s)/phenomena of interest

This review will consider studies that evaluate the utilization of unique tools (e.g. the CCTST [California Critical Thinking Skills Test], CTOE [Arnett Critical Thinking Outcome Evaluation], HCTSR [Holistic Critical Thinking Scoring Rubric], WGCTA [Watson Glaser Critical Thinking Appraisal], Prevue, simulation-based evaluation, and others as yet unidentified), as well as standard measures such as the GRE or GPA, to evaluate critical decision making skills in graduate healthcare program applicants.

 

Types of outcomes

This review will consider studies that include the following outcome measures:

 

Successful quantitative evaluations based on specific field of study standards.

 

Types of studies

This review will consider any experimental study design including randomized controlled trials (RCTs), quasi-experimental and before and after studies. Analytical epidemiological study designs including prospective and retrospective cohort studies, case control studies and analytical cross sectional studies will also be evaluated.

 

Search strategy

The search strategy aims to find both published and unpublished studies. A three-step search strategy will be utilized in this review. An initial limited search of MEDLINE and CINAHL will be undertaken, followed by analysis of the text words contained in the title and abstract, and of the index terms used to describe the article. A second search using all identified keywords and index terms will then be undertaken across all included databases. Thirdly, the reference lists of all identified reports and articles will be searched for additional studies. Studies published in English after 1970 will be considered for inclusion in this review. This date was chosen because 1970 is the earliest publication found using any of the tools mentioned in the review question.

 

The databases to be searched include:

 

JBI Database of Systematic Reviews and Implementation Reports

 

Cochrane Library

 

CINAHL

 

ProQuest

 

MEDLINE

 

ERIC

 

The search for unpublished studies will include:

 

New York Academy of Medicine Grey Literature Report

 

MEDNAR

 

ProQuest database for theses and dissertations

 

OpenSIGLE

 

Virginia Henderson Library

 

Initial keywords to be used will be:

 

Critical thinking

 

Decision making

 

Academic

 

Graduate program

 

Masters program

 

Doctoral program

 

Healthcare
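As an illustration, the initial keywords above could be combined into a single Boolean search string for the second search step. The exact syntax varies by database, and the grouping of keywords into concepts below is an assumption for illustration only; this Python sketch assumes a generic AND/OR grammar:

```python
# Illustrative grouping of the review's initial keywords: synonyms are
# ORed within a concept group, and the concept groups are ANDed together.
concept_groups = [
    ["critical thinking", "decision making"],
    ["graduate program", "masters program", "doctoral program"],
    ["academic", "healthcare"],
]

def build_query(groups):
    """Compose a Boolean search string from keyword concept groups."""
    clauses = []
    for group in groups:
        quoted = ['"{}"'.format(term) for term in group]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

query = build_query(concept_groups)
# → ("critical thinking" OR "decision making") AND ("graduate program" OR
#   "masters program" OR "doctoral program") AND ("academic" OR "healthcare")
```

A string of this form would then be adapted to each database's field tags and controlled vocabulary (e.g. MeSH terms in MEDLINE, CINAHL headings in CINAHL).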

 

All studies identified during the database search will be assessed for relevance to the review based on the information provided in the title, abstract and descriptor/MeSH terms. The full text will be retrieved for all studies that meet the inclusion criteria (see Appendix I). Studies identified from reference list searches will be assessed for relevance based on the study title.

 

Assessment of methodological quality

Papers selected for retrieval will be assessed by two independent reviewers for methodological validity prior to inclusion in the review using standardized critical appraisal instruments from the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI) (Appendix V). Any disagreements that arise between the reviewers will be resolved through discussion, or with a third reviewer if a consensus cannot be reached.

 

Data collection

Data will be extracted independently by each reviewer from papers included in the review using the standardized data extraction tool from JBI-MAStARI (Appendix VI). The data extracted will include specific details about the interventions, populations, study methods and outcomes of significance to the review question and specific objectives. Where there is missing information or data in retrieved articles, the author(s) will be contacted for clarification and data when possible.

 

Data synthesis

Quantitative data will, where possible, be pooled in statistical meta-analysis using JBI-MAStARI. All results will be subject to double data entry. Effect sizes expressed as odds ratios (for categorical data) and weighted mean differences (for continuous data) and their 95% confidence intervals will be calculated for analysis. Heterogeneity will be assessed statistically using the standard Chi-square and also explored using subgroup analyses based on the different study designs included in this review as applicable. Where statistical pooling is not possible, the findings will be presented in narrative form including tables and figures to aid in data presentation where appropriate. Subgroup analysis will be used where appropriate when variations in effects are noted.
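The pooling described above can be sketched concretely. The following Python example shows a fixed-effect inverse-variance pooling of study effect sizes (e.g. weighted mean differences) with a 95% confidence interval and Cochran's Q, the chi-square heterogeneity statistic. The function and the three example studies are illustrative assumptions, not data from this review; in practice JBI-MAStARI performs these calculations:

```python
import math

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes.

    Returns the pooled estimate, its 95% confidence interval, and
    Cochran's Q heterogeneity statistic with its degrees of freedom.
    """
    # Each study is weighted by the inverse of its variance.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q: weighted squared deviations from the pooled estimate,
    # referred to a chi-square distribution with k - 1 degrees of freedom.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    return pooled, ci, q, df

# Three hypothetical studies: mean differences and their variances.
pooled, (lo, hi), q, df = pool_fixed_effect([0.4, 0.6, 0.5], [0.04, 0.09, 0.05])
```

For categorical outcomes the same weighting is typically applied to log odds ratios, with the pooled result exponentiated back to an odds ratio; a large Q relative to its degrees of freedom would prompt the subgroup analyses by study design described above.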

 

Conflicts of interest

No known conflicts of interest exist.

 

Acknowledgements

The Texas Christian University Centre for Evidence-based Healthcare: a Collaborating Centre of the Joanna Briggs Institute.

 

References

 

1. Dosche M, Jarvis S, Schlosser K. Attrition in nurse anesthesia educational programs as reported by program directors: The class of 2005. AANA J [Internet]. 2008;76(4):277-281. Available from: http://www.aana.com/newsandjournal/Documents/attrition_naprogs0808_p277-281.pdf

 

2. Cohen M, Freeman T, Thompson B. Critical thinking skills in tactical decision making: A model and a training strategy [Internet]. n.d. Available from: http://www.cog-tech.com/papers/chapters/tadmus/tadmus.pdf

 

3. Hulse JA, Chenowith T, Lebedovych L, Dickinson P, Cavanaugh B, Garrett N. Predictors of student success in the U.S. Army Graduate Program in Anesthesia Nursing. AANA J. 2007;75(5):339-346.

 

4. Megginson L. Noncognitive constructs in graduate admissions: An integrative review of available instruments. Nurse Educ. 2009;34(6):254-261.

 

5. Fero L, O'Donnell J, Zullo T, Dabbs A, Kitutu J, Samosky J, Hoffman L. Critical thinking skills in nursing students: Comparison of simulation-based performance with metrics. J Adv Nurs. 2010;66(10):2182-2193. doi: 10.1111/j.1365-2648.2010.05385.x

 

6. Ali NS, Bantz D, Siktberg L. Validation of critical thinking skills in online responses. J Nurs Educ. 2005;44(2):90-94.

 

7. Gorton KL. An investigation into the relationship between critical thinking skills and clinical judgment in the nurse practitioner student [PhD dissertation]. Available from: ProQuest Dissertations and Theses.

 

8. Penprase B, Mileto L, Bittinger A, Hranchook AM, Atchley JA, Bergakker SA, Franson HE. The use of high-fidelity simulation in the admissions process: One nurse anesthesia program's experience. AANA J. 2012;80(1):43-48.

 

9. Reid HV. The correlation between a general critical thinking skills test and a discipline-specific critical thinking test for associate degree nursing students [PhD dissertation]. Available from: ProQuest Dissertations and Theses.

 

10. Adams MH, Whitlow JF, Stover LM, Johnson KW. Critical thinking as an educational outcome: An evaluation of current tools of measurement. Nurse Educ. 1996;21(3):23-32.

 

11. Kuncel NR, Hezlett SA, Ones DS. A comprehensive meta-analysis of the predictive validity of the Graduate Record Examinations: Implications for graduate student selection and performance. Psychol Bull. 2001;127:162-181. doi: 10.1037/0033-2909.127.1.162

 

12. Kuncel NR, Crede M, Thomas LL, Klieger DM, et al. A meta-analysis of the validity of the Pharmacy College Admission Test (PCAT) and grade predictors of pharmacy student performance. Am J Pharm Ed. 2005;69(1-5):339-347.

Appendix I: Appraisal instruments

 

MAStARI appraisal instrument

Appendix II: Data extraction instruments

 

MAStARI data extraction instrument

 

Keywords: Critical thinking; decision making; academic; graduate program; masters program; doctoral program; healthcare