Authors

  1. Windey, Maryann PhD, MS, MSN, RN-BC
  2. Lawrence, Carol PhD, MS, BSN, RNC-OB, CBC
  3. Guthrie, Kimberly PhD, MS, MSN, RN
  4. Weeks, Debra DNP, MSN, RN-BC
  5. Sullo, Elaine MLS, MAEd
  6. Chapa, Deborah W. PhD, ACNP-BC, FNAP, FAANP

Abstract

Increases in newly licensed nurses and experienced nurses changing specialties create a challenge for the nursing professional development specialist (NPDS), who must use the best available evidence in designing programs. A systematic review of interventions for developing preceptors is needed to inform the NPDS of best practices. A search was conducted for full-text, quantitative, and mixed-methods articles published after the year 2000. Over 4,000 titles were initially identified, yielding 12 research studies for evaluation and synthesis. Results identified a limited body of evidence, reflecting a need for the NPDS to increase efforts in measuring the effectiveness of preceptor development initiatives.

 

(See CE Video, Supplemental Digital Content 1, http://links.lww.com/JNPD/A9)

 

Article Content

Building a comprehensive nurse preceptor development program is essential for acute care systems in today's healthcare environment. Acute care organizations are challenged with an overwhelming number of nursing students obtaining clinical practice hours, newly licensed nurses entering the profession, and experienced nurses seeking opportunities in new practice specialties (Auerbach, Buerhaus, & Staiger, 2011). Meeting the psychosocial and developmental needs of these nurses transitioning into new roles falls to the nursing professional development specialist (NPDS). The NPDS serves a vital role in the creation of preceptor development programs and relies on best practices as identified in the literature (American Nurses Association & National Nursing Staff Development Organization, 2010). Prepared preceptors can also contribute to nurses' improved satisfaction and improved retention rates (Lee, Tzeng, Lin, & Yeh, 2009; Sandau, Cheng, Pan, Gaillard, & Hammer, 2011).

 

BACKGROUND AND SIGNIFICANCE

Nursing Turnover and Replacement

The NPDS must keep informed of nursing workforce trends, such as turnover rates, projected shortages, and changing demographics, and their implications when planning preceptor development interventions. The turnover rate of new nurses has been reported anywhere between 35% and 61% during the first year of practice (Anderson, Linden, Allen, & Gibbs, 2009; Beecroft, Kunzman, & Krozek, 2001). Moreover, the cost of replacing one nurse is at least $44,000, with one study estimating up to $67,100 (Halfer, Graf, & Sullivan, 2008; Jones, 2005). Estimates that account for inflation, and are arguably more realistic, are closer to $82,000 if vacancies are filled with experienced nurses (Jones, 2008).

 

Surge of New Nurses

Federal and state legislators have worked for years to address concerns over the nursing shortage. It has been reported that 850,000 nurses in the United States are between 50 and 64 years old (Buerhaus, Auerbach, Staiger, & Muench, 2013). The 2004 National Sample Survey of Registered Nurses reported that over 55% of nurses intend to retire between 2011 and 2020. In response, new nursing programs have appeared throughout the country, and postsecondary schools have expanded their programs (Dracup & Morris, 2007). This surge of new nurses is predicted to swell toward the end of this decade and to increase dramatically between 2020 and 2030 (Auerbach et al., 2011). These trends point toward an overwhelming need for prepared nursing preceptors to assist with the transition of nurses into the workforce.

 

Identified Gap in the Literature

Although the nursing literature addresses preceptorship extensively, interventional research studies attesting to successful outcomes related to the development of preceptors are limited. Billay and Myrick (2007) completed an integrative review summarizing how allied health disciplines describe preceptorship; however, that review did not address preceptor development. Mann-Salinas et al.'s (2014) systematic review of evidence-based preceptor programs found a paucity of evidence-based strategies to support preceptor development; moreover, that review excluded studies involving preceptors of students. This identified gap in the literature is a challenge for the NPDS, who is tasked with gathering the available evidence to provide for the developmental needs of both students and staff requiring preceptor support during role transition.

 

Preceptor Development

Preceptor development is one intervention that the NPDS uses to address the developmental needs of those entering new roles within the acute care organization. Luhanga, Dickieson, and Mossey (2010) state that the success of orientation to the environment depends on proper preparation of the preceptor, supported by a formalized educational program. When Billay and Myrick (2007) conducted their integrative review on allied health preceptorship, education of the nursing preceptor was a prominent theme in the literature. The need for preceptor development programs is well documented in the nursing literature (Almada, Carafoli, Flattery, French, & McNamara, 2004; Luhanga et al., 2010). Moreover, one study reported that 49% of preceptors did not feel they were adequately prepared for the preceptor role (Yonge, Hagler, Cox, & Drefs, 2008).

 

PURPOSE OF THE STUDY

A formalized systematic review is essential to help the NPDS evaluate best practices for preceptor development programs. Levels of evidence reside on a hierarchy, with systematic reviews ranking the highest (Bettany-Saltikov, 2012). The purpose of this study was to review, assess, analyze, and synthesize the best available evidence on interventions that support preceptor development to inform NPDS practice.

 

SYSTEMATIC REVIEW METHODOLOGY

Study Design

A systematic review was conducted, guided by processes recommended by the Evidence-based Practice Centers funded by the Agency for Healthcare Research and Quality (2014). Processes were developed to identify and select relevant articles, review and rate the individual articles, and then synthesize results and grade the evidence. No meta-analysis was planned because considerable heterogeneity across articles was anticipated with regard to participant samples, definitions of outcomes, length of follow-up, and settings.

 

Literature Search and Eligibility

A literature search was conducted as recommended by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement (Moher, Liberati, Tetzlaff, Altman, & PRISMA Group, 2009). Study eligibility criteria were established a priori. Inclusion criteria were primary studies of nursing preceptors of students, new graduates, or nurses changing specialties; full text; published; peer reviewed; and English language, originating from any country. Quantitative studies about nursing preceptor development were included if the setting was an acute care hospital or inpatient rehabilitation facility and the study reported at least one intervention and one measurable outcome. Excluded were unpublished dissertations and studies focused on preceptors of advanced practice nurses.

 

Search strategies were adapted from Cochrane and National Institute for Health and Clinical Excellence protocols to systematically search the PubMed, CINAHL (EBSCOhost), Dissertations & Theses (ProQuest), ERIC, Scopus, and Cochrane Libraries of Systematic Reviews and Clinical Trials (Ovid) databases from 2000 through March 2014 (Chandler, Churchill, Higgins, Lasserson, & Tovey, 2013). The searches were designed for high sensitivity to locate any study of preceptor development. The search was limited to articles published between January 2000 and March 2014 to capture a timely body of research, consistent with the findings of Billay and Myrick (2007), who reported that most articles pertaining to education of nursing preceptors were published after 2000. Search selection was conducted in a stepwise fashion by a team of five reviewers: Two reviewers independently examined all titles against the inclusion criteria, and consensus was reached; abstracts were then reviewed independently by two reviewers, and consensus was reached; finally, the full-text articles were randomly assigned and examined by two reviewers. Bibliographies of full-text articles were searched to locate additional articles, and 94 were found (see Figure 1).

  
FIGURE 1. Flow diagram: review of records for interventions to support preceptor development.

Data Extraction

Data were divided among the research team. Each section of data was extracted by two reviewers with both clinical and methodological expertise. Detailed evidence tables were completed from the data extraction performed. Data were rechecked against the original articles for accuracy. If discrepancies were discovered, these were discussed by the team, resolved, and corrected.

 

Quality Assessment Tools

Medical education research study quality instrument

The Medical Education Research Study Quality Instrument (MERSQI) and Best Evidence in Medical Education (BEME) global scale were used to rate study quality and were selected because of their frequent use in quality assessment of medical and nursing education (Cook, Levinson, & Garside, 2011; Reed et al., 2008; Sullivan, 2011; Yucha, Schneider, Smyer, Kowalski, & Stowers, 2011). The MERSQI contains 10 items that rate study quality in six domains of research quality: study design, sampling, type of data (subjective or objective), validity, data analysis, and outcomes (Reed et al., 2008). The maximum score for each domain is 3, with a maximum MERSQI score of 18 and a potential range of 5-18. Domain scores that had a "not applicable" response option were adjusted to the percent of total achievable points for that domain to allow for total scale scoring (Reed et al., 2008). The MERSQI has been found to have strong content validity, interrater reliability (r = .72-.998), and internal consistency reliability (α = .57-.92), as well as adequate predictive validity and criterion validity against other variables, such as published versus rejected manuscripts (Cook et al., 2011; Reed et al., 2007, 2008; Yucha et al., 2011). Internal consistency of the MERSQI in nursing education is also supported (α = .55; Yucha et al., 2011).
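 
As a minimal illustration of this adjustment, the following sketch is hypothetical (not drawn from Reed et al., 2008); it assumes that items marked "not applicable" are simply dropped from a domain's achievable total before the earned points are rescaled to the fixed 3-point domain maximum:

# Hypothetical sketch of the "not applicable" adjustment described above:
# items marked "not applicable" are removed from the domain's achievable points,
# and the earned points are rescaled to the fixed 3-point domain maximum.
def adjusted_domain_score(points_earned, points_achievable, domain_max=3.0):
    """Rescale a MERSQI domain score to the percent of achievable points."""
    if points_achievable <= 0:
        raise ValueError("At least one applicable item is required.")
    return (points_earned / points_achievable) * domain_max

# Example: a domain in which one of three 1-point items was "not applicable"
# and the study earned 1 of the remaining 2 achievable points.
print(adjusted_domain_score(points_earned=1, points_achievable=2))  # 1.5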

 

Best evidence in medical education

The BEME global scale assesses two domains: the strength of the evidence (range = 1-5, from 1 = no clear conclusions can be drawn to 5 = results are unequivocal) and outcomes based on Kirkpatrick's levels of educational outcomes (see Table 1; Hammick, Dornan, & Steinert, 2010; Littlewood et al., 2005). Limited validity and reliability evidence for the BEME was located in the literature; however, positive correlations have been found between the MERSQI and BEME instruments (r = .58-.62; Cook et al., 2011). Two reviewers independently rated the quality of each study with an agreement rate of 100%. The research team discussed but did not rank three additional items as recommended by Colthart et al. (2008): (a) the appropriateness of the study design to answer the research questions posed, (b) how well the design was implemented, and (c) the appropriateness of the analysis, with elaboration on any concerns.

  
TABLE 1 Descriptive Statistics for Quality Variables

RESULTS OF THE STUDY

A total of 4,501 articles were identified through database searching and other sources. Twelve articles were selected for qualitative synthesis (see Figure 1). The 12 interventional research articles selected for quality review are summarized in Table 2. Ten of the research articles were quasi-experimental, and two were of experimental design. Seven of the articles used a longitudinal design, and five used a cross-sectional design. In addition, 11 of the studies used a prospective design, whereas one used both retrospective and prospective dimensions. In 6 of the 12 articles, researchers reported using a theoretical or conceptual model as a framework for their studies (see Table 2). Ten studies used workshops as the primary intervention, which may have included various instructional methodologies such as group discussion, role play, and/or printed materials (see Table 2). The two remaining studies used self-directed learning via CD-ROM or a printed manual.

  
TABLE 2 Description of the Articles Selected for Qualitative Analysis

Content Topics

Study authors reported the inclusion of a variety of content topics as part of the preceptor development intervention (see Table 2). The content most frequently reported was giving and receiving feedback (83%), effective communication (75%), facilitating adult learning (58%), reviewing the roles and responsibilities of the preceptor role (58%), and the development and evaluation of clinical judgment (50%). Content such as evidence-based practice, mentoring, time management, diversity, rewards and benefits, and motivation was reported infrequently, with inclusion in only one study each. Many evaluation methods (dependent variables) were used to determine the effectiveness of the intervention (see Table 2). Dependent variables as reported by the study authors ranged from low-level participant satisfaction measures to high-level patient safety quality indicators, such as decreases in medication errors, patient falls, and incidents.

 

Quality Assessment Scores

MERSQI and BEME scores were calculated based on the rigor of the research design and the level of outcomes reported (see Table 3). The range of MERSQI scores was 7-15, with a mean of 11.38 (SD = 2.21; see Table 1). The range of BEME strength scores for the 12 articles was 2-4, with a mean of 3.08 (SD = 0.67). The BEME outcome scores were predominantly lower level outcomes (25.0% Level 2a, attitudes or perceptions; 41.7% Level 2b, knowledge and skills; 16.7% Level 3, behavioral change; 8.3% Level 4a, organizational practice; 8.3% Level 4b, patient benefits). The correlation between the two tools' strength scores was positive but weak (r = .13) and not statistically significant (p > .05).

  
TABLE 3 Quality Assessment Summary for the Final Sample of Articles

Methodological Concerns

After addressing the three additional discussion questions recommended by Colthart et al. (2008), the research team identified methodological concerns. Two of the 12 studies used a design that was inappropriate for the study question: One study used a posttest-only design, and another used a dependent variable (evaluation) that was inconsistent with the research questions. Seven of the studies (58.3%) had a design that was not well implemented; examples of concern were high attrition rates, small sample sizes, and/or lack of fidelity in administering the intervention reliably. Additional concerns ranged from unreported validity of the instrumentation to a risk of Type I error from failure to control for pretest scores in t tests. Six of the studies (50%) reported an analysis appropriate for their study. The discussion also identified strengths in the diversity of interventions, sample selections, and designs and analyses.

 

DISCUSSION

This systematic review provided a rigorous analysis of the current state of evidence pertaining to preceptor development. Most studies reported success with a variety of instructional strategies, many of which were offered during workshops. Multiple creative modalities were implemented, such as CD-ROMs, learner-directed modules, and other resources. Most studies reported outcomes that predominantly addressed participant satisfaction and self-efficacy rather than higher level outcomes based on Kirkpatrick's levels of educational outcomes (Littlewood et al., 2005). One critical finding was the lack of rigorous interventional studies designed with valid and reliable assessment tools, control groups, and control for extraneous variables. The findings of this review highlight the challenges of experimental educational research in the nursing professional development specialty.

 

The study findings also add to the understanding of the psychometric properties of the MERSQI and BEME instruments. The MERSQI mean score of 11.38 (SD = 2.21) in this study is consistent with that reported by Reed et al. (2008; mean = 10.7, SD = 2.5) for manuscripts accepted for publication in medical education, supporting the MERSQI as a valid and reliable instrument. The weak, nonsignificant correlation between the MERSQI and BEME strength scores (r = .13, p > .05) is inconsistent with Cook et al. (2011), who found a significant, moderate positive correlation (r = .58, p = .001). However, these findings are conceptually logical given that greater sensitivity can be obtained with an instrument containing a greater number of items, and they suggest that the BEME and MERSQI measure different dimensions of quality.

 

LIMITATIONS OF THE STUDY

This review has several limitations. First, the studies included in the review were implemented in a variety of inpatient clinical settings and may not be generalizable to all healthcare environments. Second, the exclusion of qualitative studies potentially limits the depth and richness of the information synthesized. Third, although a high volume of synonyms was used in the search strategy, it is possible that a relevant study was inadvertently omitted.

 

PRACTICE IMPLICATIONS FOR THE NPDS

The major practice implication is the limited body of knowledge supporting specific interventions and their efficacy in developing preceptors. The NPDS is tasked with evaluating preceptor development programs' impact on organizational results and patient outcomes, in addition to evaluating participant satisfaction. Implications for further research include the need for more reliable and valid instruments to measure learning and application, more rigorous research designs, and measurement of organizational and patient benefits.

 

CONCLUSION

This systematic review found a limited body of literature evaluating interventions to support preceptor development. Of the studies that were located, many had design and methodological concerns. Most of the studies evaluated multimodal interventions; therefore, assessment of the impact of any particular component was problematic. Future research should focus on more rigorous study design and evaluation using high-level outcome measures.

 

ACKNOWLEDGMENTS

The authors would like to acknowledge and thank the Association for Nursing Professional Development for supporting this research study through a grant.

 

References

 

Agency for Healthcare Research and Quality. (2014). The effective health care program stakeholder guide: Chapter 2: Effective health care program activities. Retrieved from http://www.ahrq.gov/research/findings/evidence-based-reports/stakeholderguide/ch

 

Al-Hussami M., Saleh M. Y., Darawad M., Alramly M. (2011). Evaluating the effectiveness of a clinical preceptorship program for registered nurses in Jordan. Journal of Continuing Education in Nursing, 42(12), 569-576. doi:10.3928/00220124-20110901-01

 

Almada P., Carafoli K., Flattery J. B., French D. A., McNamara M. (2004). Improving the retention rate of newly graduated nurses. Journal for Nurses in Staff Development, 20(6), 268-273.

 

American Nurses Association & National Nursing Staff Development Organization. (2010). Nursing professional development: Scope and standards of practice. Silver Spring, MD: Nursesbooks.org.

 

Anderson T., Linden L., Allen M., Gibbs E. (2009). New graduate RN work satisfaction after completing an interactive nurse residency. The Journal of Nursing Administration, 39(4), 165-169.

 

Auerbach D. I., Buerhaus P. I., Staiger D. O. (2011). Registered nurse supply grows faster than projected amid surge in new entrants ages 23-26. Health Affairs, 30(12), 2286-2292.

 

Beecroft P. C., Kunzman L., Krozek C. (2001). RN internship: Outcomes of a one-year pilot program. The Journal of Nursing Administration, 31(12), 575-582.

 

Bettany-Saltikov J. (2012). How to do a systematic literature review in nursing. New York, NY: McGraw-Hill.

 

Billay D., Myrick F. (2007). Preceptorship: An integrative review of the literature. Nurse Education in Practice, 8, 258-266.

 

Bradley C., Erice M., Halfer D., Jordan K., Lebaugh D., Opperman C., Stephen J. (2007). The impact of a blended learning approach on instructor and learner satisfaction with preceptor education. Journal for Nurses in Staff Development, 23(4), 164-170.

 

Buerhaus P. I., Auerbach D. I., Staiger D. O., Muench U. (2013). Projections of the long-term growth of the registered nurse workforce: A regional analysis. Nursing Economics, 31(1), 13-17.

 

Chandler J., Churchill R., Higgins J., Lasserson T., Tovey D. (2013). Methodological standards for the conduct of new Cochrane intervention reviews (MECIR), v. 2.3. Retrieved from http://www.editorial-unit.cochrane.org/sites/editorial-unit.cochrane.org/files/u

 

Colthart I., Bagnall G., Evans A., Allbutt H., Haig A., Illing J., McKinstry B. (2008). The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME guide no. 10. Medical Teacher, 30(2), 124-145. doi:10.1080/01421590701881699

 

Cook D. A., Levinson A. J., Garside S. (2011). Method and reporting quality in health professions education research: A systematic review. Medical Education, 45(3), 227-238. doi:10.1111/j.1365-2923.2010.03890.x

 

Dracup K., Morris P. E. (2007). Nurse residency programs: Preparing for the next shift. American Journal of Critical Care, 16(4), 328-330.

 

Hagler D., Mays M. Z., Stillwell S. B., Kastenbaum B., Brooks R., Fineout-Overholt E., Jirsak J. (2012). Preparing clinical preceptors to support nursing students in evidence-based practice. Journal of Continuing Education in Nursing, 43(11), 502-508. doi:10.3928/00220124-20120815-27

 

Halfer D., Graf E., Sullivan C. (2008). The organizational impact of a new graduate pediatric nurse mentoring program. Nursing Economics, 26(4), 243-249.

 

Hallin K., Danielson E. (2009). Being a personal preceptor for nursing students: Registered nurses' experiences before and after introduction of a preceptor model. Journal of Advanced Nursing, 65(1), 161-174. doi:10.1111/j.1365-2648.2008.04855.x

 

Hammick M., Dornan T., Steinert Y. (2010). Conducting a best evidence systematic review. Part 1: From idea to data coding. BEME guide no. 13. Medical Teacher, 32(1), 3-15. doi:10.3109/01421590903414245

 

Horton C. D., DePaoli S., Hertach M., Bower M. (2012). Enhancing the effectiveness of nurse preceptors. Journal for Nurses in Staff Development, 28(4), E1-E7.

 

Jones C. B. (2005). The cost of nurse turnover, part 2: Application of the nursing turnover cost calculation methodology. The Journal of Nursing Administration, 35(1), 41-49.

 

Jones C. B. (2008). Revisiting nurse turnover costs: Adjusting for inflation. The Journal of Nursing Administration, 38(1), 11-18.

 

Komaratat S., Oumtanee A. (2009). Using a mentorship model to prepare newly graduated nurses for competency. Journal of Continuing Education in Nursing, 40(10), 475-480.

 

Lee T. Y., Tzeng W. C., Lin C. H., Yeh M. L. (2009). Effects of a preceptorship programme on turnover rate, cost, quality and professional development. Journal of Clinical Nursing, 18(8), 1217-1225.

 

Littlewood S., Ypinazar V., Margolis S. A., Scherpbier A., Spencer J., Dornan T. (2005). Early practical experience and the social responsiveness of clinical education: Systematic review. British Medical Journal, 331(7513), 387-391.

 

Luhanga F. L., Dickieson P., Mossey S. D. (2010). Preceptor preparation: An investment in the future generation of nurses. International Journal of Nursing Education Scholarship, 7(1), 1-18.

 

Mann-Salinas E., Hayes E., Robbins J., Sabido J., Feider L., Allen D., Yoder L. (2014). A systematic review of the literature to support an evidence-based precepting program. Burns, 40(3), 374-387. http://dx.doi.org/10.1016/j.burns.2013.11.008

 

Moher D., Liberati A., Tetzlaff J., Altman D. G., PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151(4), 264-269.

 

Parker F. M., Lazenby R. B., Brown J. L. (2012). Mission possible CD ROM: Instructional tool for preceptors. Nurse Education Today, 32(5), 561-564. doi:10.1016/j.nedt.2011.08.009

 

Reed D. A., Beckman T. J., Wright S. M., Levine R. B., Kern D. E., Cook D. A. (2008). Predictive validity evidence for medical education research study quality instrument scores: Quality of submissions to JGIM's medical education special issue. Journal of General Internal Medicine, 23(7), 903-907. doi:10.1007/s11606-008-0664-3

 

Reed D. A., Cook D. A., Beckman T. J., Levine R. B., Kern D. E., Wright S. M. (2007). Association between funding and quality of published medical education research. JAMA: Journal of the American Medical Association, 298(9), 1002-1009.

 

Riley-Doucet C. (2008). A self-directed learning tool for nurses who precept student nurses. Journal for Nurses in Staff Development, 24(2), E7-E14.

 

Sandau K. E., Cheng L. G., Pan Z., Gaillard P. R., Hammer L. (2011). Effect of a preceptor education workshop: Part 1. Quantitative results of a hospital-wide study. Journal of Continuing Education in Nursing, 42(3), 117-126. doi:10.3928/00220124-20101101-01

 

Smedley A., Morey P., Race P. (2010). Enhancing the knowledge, attitudes, and skills of preceptors: An Australian perspective. Journal of Continuing Education in Nursing, 41(10), 451-461.

 

Sorensen H. A., Yankech L. R. (2008). Precepting in the fast lane: Improving critical thinking in new graduate nurses. Journal of Continuing Education in Nursing, 39(5), 208-216. doi:10.3928/00220124-20080501-07

 

Sullivan G. M. (2011). Deconstructing quality in education research. Journal of Graduate Medical Education, 3(2), 121-124.

 

Yonge O., Hagler P., Cox C., Drefs S. (2008). Listening to preceptors: Part B. Journal for Nurses in Staff Development, 24(1), 21-26.

 

Yucha C. B., Schneider B. S., Smyer T., Kowalski S., Stowers E. (2011). Methodological quality and scientific impact of quantitative nursing education research over 18 months. Nursing Education Perspectives, 32(6), 362-368.