Development of evaluation criteria for dental hygiene intra-school clinical practice operation using CIPP evaluation model

Korean Society of Dental Hygiene
Mi-Hwa Jang1*

Abstract

Objectives: The purpose of this study was to develop operation and management evaluation criteria for the efficient operation of clinical practice courses conducted on campus and to verify their validity and reliability. Methods: A draft of the evaluation criteria was derived based on the CIPP program evaluation model. To verify content validity, a Delphi survey was conducted with 30 dental hygiene professors and clinical dental hygienists who had been in charge of practical education for more than 5 years. The main survey was conducted with 252 professors and clinical dental hygienists in charge of practical education. Results: Through exploratory factor analysis, a total of 36 items across 7 factors were confirmed. Verification of the internal consistency of the final evaluation criteria yielded a Cronbach's α of 0.914 for the entire domain. Conclusions: According to the above results, the validity and reliability of the evaluation criteria for evaluating the operation and management of in-school clinical practice were verified to be appropriate, and the criteria can be used in follow-up studies on the operation and management of practical courses.


Introduction

With developments and changes in the dental care environment, dental hygienists need practical skills as well as the ability to cope actively with various clinical situations [1], and an adequate curriculum is required to cultivate the professional workforce demanded by the clinical environment. This requires the provision of sufficient practical experience and time in the dental hygiene curriculum, and the number of preclinical practice hours must be increased to improve the clinical proficiency of dental hygiene students [2]. However, owing to the current coronavirus disease 2019 (COVID-19) pandemic, opportunities for hands-on training at clinical sites and on campus are limited. To provide the best education in a limited environment, the quality of practical training needs systematic management. Quality control of the curriculum not only improves the curriculum and satisfies the needs of students but also improves the quality of vocational education for nurturing talent with on-site practical skills [3]. In Korea, the Dental Hygiene Education Evaluation Preparatory Committee has been established for the quality control of education and the development of evaluation indicators. The evaluation items for in-school clinical practice subjects suggested by the preparatory committee include the implementation of a practical curriculum, the cost of practical training per student, the organization and use of a practice guidebook, the placement of students in clinical practice, and autonomous training [4]. However, specific standards for evaluating quality management in in-school clinical practical training are lacking.

In this study, the systematically structured CIPP evaluation model was used to establish evaluation criteria for the operation and management of in-school clinical practical training, to expand the evaluation target, and to enable rational decision-making by decision-makers operating the curriculum. The CIPP evaluation model comprises four stages: context evaluation, input evaluation, process evaluation, and product evaluation. Context evaluation detects undesirable circumstances or unmet needs and diagnoses problems, while input evaluation provides information on resource utilization for the successful achievement of goals. Process evaluation identifies the shortcomings of program implementation and procedural measures and collects information for revising and supplementing program procedures and implementation methods. Product evaluation measures and interprets the results at the end of the program [5,6].

Previous studies on evaluation criteria have assessed job satisfaction, infection, supplementary education, and on-site clinical practical training; however, studies on in-school clinical practical training are lacking. Therefore, the purpose of this study was to develop operational management evaluation criteria for in-school clinical practical training and provide basic data for quality management and standardization of clinical practice management.

Methods

1. Study participants

This study was approved by the Korean Public Institutional Review Board (IRB) (P01-202109-22-005).

An expert interview was conducted with two dental hygiene professors who had been in charge of practical training for more than 20 years and one expert with industry experience. The Delphi survey was conducted with 30 dental hygiene professors who had been in charge of practical training for more than 5 years and dental hygienists working in clinics; it was administered via e-mail from October 1 to October 10, 2021. The main survey was conducted from October 20 to November 20, 2021 on 252 clinical dental hygienists and professors in charge of practical training. Based on a previous finding that the number of participants should be five to ten times the total number of items [7], and allowing for participant drop-out, 252 participants were recruited; the purpose of the study was explained to them, and all provided written consent. The final number of participants was five times the total number of items, and the survey was conducted through face-to-face interviews, e-mail, or Naver forms. Data from 240 participants were included in the final analysis, excluding those who left items unanswered or responded insincerely.

2. Procedures and methods

1) Evaluation criteria draft composition

The evaluation criteria draft was based on previous study findings on the CIPP evaluation model [8-11], training evaluation [12,13], dental hygiene clinical practice [14], and expert opinions. The evaluation areas and items were classified according to the CIPP evaluation stages, and similar items were merged into one. As a result, a total of 53 items were devised.

2) Validity verification

The evaluation criteria draft was revised to a total of 44 items based on expert interviews. The items were then evaluated for content validity and suitability through the Delphi survey; items below the cut-off values were deleted, leaving a total of 37 items. The main survey was conducted to assess the construct validity of the revised items, and exploratory factor analysis yielded the final 36-item evaluation criteria.

3) Data analysis

SPSS 25.0 (IBM Corp., Armonk, NY, USA) was used to analyze the collected data. Frequency analyses were conducted for the general characteristics of the participants, and the mean, standard deviation, and content validity ratio (CVR) were calculated for the Delphi survey data. Items with an average value below 4.0 or a CVR below 0.33 were deleted after discussion with the experts [16]. The main survey data were analyzed through exploratory factor analysis (EFA). The Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity were calculated to check the suitability of the data for EFA. Principal component analysis was conducted with Varimax rotation; the cut-off value for factor loadings and communalities was 0.4 [17,18]. The internal consistency of the items was measured using Cronbach's α.
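For illustration, Lawshe's CVR used as the Delphi cut-off is straightforward to compute: CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. The sketch below uses hypothetical panel counts, not the study's data; with a 30-member panel, the critical CVR is approximately 0.33:

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR: (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example: with a 30-member panel (as in this study),
# an item rated "essential" by 20 panelists yields:
cvr = content_validity_ratio(20, 30)  # (20 - 15) / 15
print(round(cvr, 2))  # 0.33
```

A CVR of 0 means exactly half the panel rated the item essential; items at or above the critical value for the panel size are retained.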

Results

1. General participant characteristics

Participants aged 30 to 39 years accounted for 40.0% of the Delphi survey and 31.7% of the main survey. The ratio of professors to clinical dental hygienists was higher in the main survey (54.6%) than in the Delphi survey. Approximately 33.3% and 31.7% of the participants in the Delphi and main surveys, respectively, had 15 to 19 years of experience. The largest proportion of participants in the Delphi survey (33.3%) lived in Seoul, while that in the main survey (52.1%) lived in Gyeonggi-do <Table 1>.

Table 1. General characteristics of the subject

http://dam.zipot.com:8080/sites/KSDH/images/N0220220202_image/Table_KSDH_22_02_02_T1.png

2. Content validity verification

The evaluation criteria draft comprised 53 items in total, and the content validity of the items was assessed through expert interviews and the Delphi survey. Context evaluation comprised six items on demand analysis and two items on goal setting. Input evaluation comprised seven items on the operation plan and five items on the practical training support system. Process evaluation comprised five items on practical training operation, nine items on practical training guidance, four items on practical training evaluation, and five items on practical training support. Product evaluation comprised two items on satisfaction and eight items on results and achievement.

1) Expert interview

To construct universally valid and objective evaluation criteria, three experts with more than 20 years of education and clinical experience were interviewed. As a result, a total of 44 items were constructed according to the discussions on the necessity of items, appropriateness of domain classification, added and deleted contents, and duplicated and revised contents <Table 2>.

Table 2. Draft evaluation criteria and expert opinion

http://dam.zipot.com:8080/sites/KSDH/images/N0220220202_image/Table_KSDH_22_02_02_T2.png

2) The Delphi survey

The evaluation criteria revised through expert interviews were further evaluated using the Delphi survey. Items with an average value below 4.0 or a CVR below 0.33 were deleted after discussions with the experts. As a result, six items were deleted in the first Delphi round and one in the second, leaving a total of 37 items.

In the context evaluation, the item 'likelihood of reaching the practical training goal' (M 3.97, CVR 0.13) presented an average value of less than 4.0 and a CVR of less than 0.33. According to the expert opinion, the item was similar to 'assessment of practical training goal achievement'; hence, it was deleted.

In the input evaluation, the items 'guidance plan for students who have trouble adapting to practical training' (M 3.87, CVR 0.13), 'sharing practical guidance information' (M 3.94, CVR 0.47), and 'securing a practice room' (M 3.87, CVR 0.13) each presented an average value of less than 4.0. The item 'guidance plan for students who have trouble adapting to practical training' was similar to 'support for students showing no improvement in practical training', and the item 'sharing practical guidance information' was also assessed in 'sharing practical training results'. Additionally, the item 'securing a practice room' was deleted, as practical training is mostly provided only after a practice room is secured. The item 'budget management plan' (M 3.97, CVR 0.46) also presented an average value of less than 4.0; however, according to the expert opinion, it was an essential element of practical training and was therefore retained.

In the process evaluation, the item 'appropriateness of practical training location' (M 4.43, CVR 0.26) presented an average value above 4.0, suggesting its relative importance; however, its CVR was less than 0.33. According to the expert opinion, this item could be included in the item 'appropriateness of practical training environment'; hence, it was deleted. In the product evaluation, the items 'assessment of changes in attitude toward practical training participation' (M 3.42, CVR 0.20) and 'documentation of practical training results' (M 3.90, CVR 0.46) presented average values of less than 4.0. The former was deleted because, given time constraints and the lack of an evaluation scale that can clearly measure changes in attitude, it was difficult to measure. The latter was deleted according to the expert opinion that the syllabus and teaching guidance plans already include this general information <Table 3>.

Table 3. Delphi research results of draft evaluation criteria

http://dam.zipot.com:8080/sites/KSDH/images/N0220220202_image/Table_KSDH_22_02_02_T3.png

M: mean; SD: standard deviation; CVR: content validity ratio

3. Construct validity verification

To determine the suitability of the data for EFA, the KMO measure and Bartlett's test of sphericity were calculated. The KMO value was 0.895, and the approximate chi-square for Bartlett's test was 5,181.213 (p<0.001), satisfying the conditions for factor analysis [19]. Factor analysis extracted seven factors with eigenvalues greater than one and a cumulative explained variance of 64.413%.
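Bartlett's test of sphericity reported above follows the formula χ² = −((n − 1) − (2p + 5)/6)·ln|R| with p(p − 1)/2 degrees of freedom, where R is the p×p item correlation matrix and n the number of observations. A minimal pure-Python sketch on a toy two-variable matrix (illustrative values, not the study's data):

```python
import math

def det(m):
    """Determinant via Gaussian elimination with partial pivoting (works on a copy)."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))  # pivot row
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d  # row swap flips the sign
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def bartlett_sphericity(corr, n_obs):
    """Chi-square statistic and degrees of freedom for Bartlett's test."""
    p = len(corr)
    chi2 = -((n_obs - 1) - (2 * p + 5) / 6) * math.log(det(corr))
    return chi2, p * (p - 1) // 2

# Toy example: two variables correlated at r = 0.5, n = 100 observations.
chi2, df = bartlett_sphericity([[1.0, 0.5], [0.5, 1.0]], 100)
print(round(chi2, 2), df)  # 28.05 1
```

A significant chi-square (as in this study, 5,181.213 with p<0.001) rejects the hypothesis that R is an identity matrix, i.e., the items are sufficiently correlated for factor analysis.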

Factor Ⅰ was grouped into eight items with an explained variance of 11.416%. Factor Ⅰ was named “practical training operation” as it comprised the following items on practical training operation: ‘composition of practical guidance manpower’, ‘plan for expanding training equipment’, ‘appropriateness of practical training environment’, ‘budget operation plan’, ‘appropriateness of trainees’, ‘appropriateness of training contents’, ‘appropriateness of operating budget’, and ‘appropriateness of practical training time’.

Factor Ⅱ was grouped into six items with an explained variance of 11.414%. Factor Ⅱ was named “communication and interaction” as it comprised the following items of communication and interaction between professors and students: ‘inducing student cooperation’, ‘supporting students stuck in practical training’, ‘motivating students’, ‘feedback on demands’, and ‘practical training-related communication’.

Factor Ⅲ was grouped into five items with an explained variance of 9.767%. Factor Ⅲ was named “practical training results and satisfaction” as it comprised the following items: ‘satisfaction survey’, ‘reflection of satisfaction results in next practical training plan and improvement’, ‘reflection of practical training results in educational improvement and plan establishment’, ‘assessment of goal achievement’, and ‘sharing practical training results’.

Factor Ⅳ was grouped into four items with an explained variance of 8.570%. Factor Ⅳ was named “practical training evaluation” as it comprised the following items: ‘presentation of practical training evaluation standards’, ‘objectivity and appropriateness of practical training evaluation standards’, ‘feedback of practical training evaluation results’, and ‘selection of evaluation items and contents’.

Factor Ⅴ was grouped into seven items with an explained variance of 8.496%. Factor Ⅴ was named “operation plan” as it comprised the following items: ‘orientation’, ‘practical training instruction method selection’, ‘practical training goal presentation’, ‘development of practical training guide’, ‘setting practical training goal’, ‘preparation of lesson plan’, and ‘selection of practical training content’.

Factor Ⅵ was grouped into four items with an explained variance of 8.330%. Factor Ⅵ was named ‘demand analysis’ as it comprised the following items on demand investigation and analysis for setting goals in designing the education process: ‘analysis of work environment’, ‘task analysis’, ‘education process analysis’, and ‘industrial needs survey’.

Factor Ⅶ was grouped into three items with an explained variance of 6.420%. Factor Ⅶ was named ‘infection and safety management’ as it comprised the following items on safety accidents: ‘infection prevention and control guidance’, ‘laboratory safety management guidance’, and ‘efforts to share and solve infections and safety management’ <Table 4>.

Factor Ⅰ: practice operation; Factor Ⅱ: communication and interaction; Factor Ⅲ: practice results and satisfaction; Factor Ⅳ: practice evaluation; Factor Ⅴ: practice operation plan; Factor Ⅵ: demand analysis; Factor Ⅶ: infection and safety management

Table 4. Exploratory factor analysis results of evaluation criteria

http://dam.zipot.com:8080/sites/KSDH/images/N0220220202_image/Table_KSDH_22_02_02_T4.png

4. The final evaluation criteria

Factor analysis showed that the item 'appropriateness of practical training time' in factor Ⅰ had a factor loading and communality of less than 0.4; as a result, the item was deleted. Some other items were statistically grouped into factors inconsistent with their meanings and were therefore reassigned. The items 'composition of practical training guidance manpower', 'plan for expanding training equipment', and 'budget operation plan' were moved from factor Ⅰ to factor Ⅴ 'operation plan', and the items 'orientation' and 'practical training goal presentation' were moved from factor Ⅴ to factor Ⅰ 'practical training operation'. Thus, the final evaluation criteria comprised 36 items in seven factors: four items on demand analysis, eight on operation plan, six on practical training operation, six on communication and interaction, three on infection and safety management, four on practical training evaluation, and five on practical training results and satisfaction <Table 5>.

Table 5. Final evaluation criteria

http://dam.zipot.com:8080/sites/KSDH/images/N0220220202_image/Table_KSDH_22_02_02_T4.png

5. Reliability of variable measurements

The Cronbach's α values for the seven developed factors were as follows: 0.760 for demand analysis, 0.757 for operation plan, 0.732 for practical training operation, 0.855 for communication and interaction, 0.753 for infection and safety management, 0.812 for practical training evaluation, and 0.826 for practical training results and satisfaction. Cronbach's α for the total domain was 0.914. Cronbach's α was greater than 0.7 for all factors, suggesting that the criteria were reliable [20] <Table 5>.
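The Cronbach's α reported here is α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item, and σ²ₜ the variance of the summed scores. A minimal sketch with hypothetical 5-point ratings (not the study's data):

```python
def cronbach_alpha(items):
    """items: list of item score lists (one list per item, one entry per respondent).
    Returns k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's summed score
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical responses from 5 raters on 3 items (5-point scale):
scores = [[4, 5, 3, 4, 5],
          [4, 4, 3, 5, 5],
          [5, 5, 2, 4, 4]]
print(round(cronbach_alpha(scores), 2))  # 0.81
```

Values above 0.7 are conventionally taken as acceptable internal consistency, which is the criterion applied to the seven factors above.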

Discussion

In this study, we developed evaluation criteria and assessed their validity and reliability using literature review, expert review, and the Delphi and main surveys for the efficient operation and management of in-school clinical practice. The sub-factors of evaluation criteria were demand analysis, operation plan, practical training operation, communication and interaction, infection and safety management, practical training evaluation, and practical training results and satisfaction, including a total of 36 items.

Demand analysis comprised items basic to the establishment of a vocational education curriculum. Changes in the dental clinical environment must be assessed to provide opportunities for realistic practical training within the limited in-school clinical practice environment. Moreover, the duties and curriculum of dental hygienists must be analyzed to select the contents of practical training. Job analysis of dental hygienists provides a guideline for selecting contents and providing relevant training in the education environment. Taichman et al. [21] emphasized the need for a job training program based on job analysis to improve the profession, including skills to cope with the rapidly changing environment, interaction with patients and other health care providers, and job improvement.

The operation plan comprised items on the plan and support system for the operation of the practical training. The selection of practical training goals is an essential element that must be conducted in the planning of the curriculum and is selected based on the analysis of the job environment and needs. After selecting the goals, realistic and standardized practical contents and methods of instruction are selected based on the proficiency and performance of the students. Then, a practical training guideline is developed. Additionally, through proper allocation of budget, practical training cost is secured, and a plan to expand the training equipment is established for adequate operation of the training system.

Practical training operation comprised items required for the implementation of the practical training program. In the first stage of the practical training, the goals of the program must be presented in detail through a comprehensive assessment of the curriculum characteristics, overall progress, and practical training based on the learning outcomes and core dental hygiene competencies [4]. As there may be individual differences in the performance of students, the overall performance level of students must be understood to determine the appropriateness of the contents and the number of trainees and accordingly modify the next training program. The practice room environment affects not only in-school but also on-site clinical practices. Shim et al. [22] have shown that students who were highly satisfied with the in-school clinical practice environment were highly satisfied with the on-site clinical experience. An appropriate operation budget is an essential element of practical training. In dental hygiene evaluation and certification standards, 80% or more of the total practical training budget must be used. More than 200,000 won must be secured for each student, and a detailed list and supply of required equipment and tools must be presented [4].

Communication and interaction comprised items related to exchanges between students and professors or among students during practical training. Motivation and communication that induce active participation in training activities are important factors in determining the effects of practical training. Attitudes, such as learning motivation, enthusiasm, values, and determination, can influence learning, and learners with an active learning attitude can participate in a motivated and efficient way that increases learning efficiency [23]. In-school clinical practice is mostly conducted in a limited time by a single instructor who demonstrates skills directly, or through mutual practice, to a large number of students without on-site clinical experience. In the course of practical training, students may find the program difficult, fail to adapt, or face problematic situations in mutual practice. Thus, based on the individual differences of students, customized practical training that allocates time and autonomous practice outside class hours must be provided. In a previous study, Kim [24] reported that developmental learning outcomes are observed when an appropriate teaching/learning method is provided according to the learning type during interviews with and coaching of trainees, suggesting the importance of developing learning types and customized teaching and learning methods.

Infection and safety management comprised items on infection and injuries that may occur during practical training and safety management of training equipment. Current infection and safety management in practical training includes mandatory attendance for the training, provision of first-aid supplies while preparing for possible accidents during training, connecting with the health center in cases of potential accidents, and insurance coverage. Students participating in practical training lack the understanding of safety accidents and infections. Therefore, proper management and guidance must be provided to the students for their safety, and the occurrence of various problematic incidents during training must be shared to seek countermeasures.

Practical training evaluation comprised items that evaluated the performance of students during training. Evaluation of the practical training performance may be a sensitive interest for students. Thus, it is essential to select appropriate and objective evaluation items, contents, and standards that meet the goals of training and provide a clear explanation to the students. After completing the evaluation, feedback on the results must be assessed by the students for an opportunity to self-assess and improve their performance.

Practical training results and satisfaction comprised items that were implemented at the end of the training. A satisfaction survey of trainees is a tool to evaluate the effectiveness of the practical training curriculum. It acts as an important factor that can modify or improve the direction of the program. In addition, sharing and discussing the overall contents and the results of practical training with the department members can improve the quality of the program through step-by-step improvements and supplementation of limitations for the establishment of the next plans.

This study is meaningful in that it developed evaluation criteria for the operation and management of in-school clinical practice. However, because of regional bias and the limited number of participants, the findings cannot be generalized; a larger sample is necessary to obtain generalizable results in the future. In addition, specific evaluation indicators and scales appropriate to the criteria should be derived to verify their effectiveness and supplement the limitations of the evaluation.

Conclusions

This study was conducted to develop evaluation criteria for the quality control of the in-school clinical practical training course. The results were as follows.

1. The evaluation criteria draft comprising 53 preliminary items was developed through a literature review.

2. The content validity of the items was verified through expert interviews and Delphi surveys. Based on the results of expert interviews, the number of items was reduced to 44. Following Delphi surveys, a total of 37 items were constructed.

3. The construct validity was verified through EFA in the main survey. As a result, the final criteria comprised 36 items, including four items on demand analysis, eight items on operation plan, six items on practical training operation, six items on communication and interaction, three items on infection and safety management, four items on practical training evaluation, and five items on practical training results and satisfaction.

4. Cronbach’s α was 0.760 for demand analysis, 0.757 for operation plan, 0.732 for practical training operation, 0.855 for communication and interaction, 0.753 for infection and safety management, 0.812 for practical training evaluation, and 0.826 for practical training results and satisfaction, and all values were satisfactory.

Based on these results, the developed evaluation criteria for in-school clinical practice operation and management presented adequate validity and reliability. Future studies with specific evaluation indicators and scales need to be developed for the quality management of the in-school clinical practice operation.

Conflicts of Interest

The author declared no conflict of interest.

References

1. Park IS, Song GS. Effect of cooperative learning on learning strategies, academic self-efficacy and class satisfaction among dental hygiene students. J Korean Soc Dent Hyg 2012;12(1):93-101. https://doi.org/10.13065/jksdh.2012.12.1.093

2. Mann NK, Sellers PA. Survey for teaching patient education in the dental hygiene curriculum. J Dent Hyg 2003;77(3):168-72.

3. Hwang GH, Kim KJ, Ohn JD, Chun YY. Exploring the meaning of total quality management of curriculum and its implications. J Educ Stud 2013;44(4):99-121.

4. Korea Dental Hygiene Education and Evaluation Institute Establishment Promotion Committee. Public hearing for establishment of dental hygiene education evaluation and certification system. Korean Dental Hygienists Association 2017.

5. Stufflebeam DL. The relevance of the CIPP evaluation model for educational accountability. J Res Dev Educ 1971;5(1):19-25.

6. Bae HS. Theory-driven educational program evaluation. Seoul: Wonmisa; 2008: 1-163.

7. Floyd FJ, Widaman KF. Factor analysis in the development and refinement of clinical assessment instruments. Psychol Assess 1995;7(3):286-99. https://doi.org/10.1037/1040-3590.7.3.286

8. Shin YJ, Kim ST, Song HD. Development of evaluation indicators for job capability strengthening program for vocational high school with application of CIPP evaluation model. J Vocational Educ Res 2018;37(3):1-23. https://doi.org/10.37210/JVER.2018.37.3.1

9. Kwon JS, Kim HY, Yune SJ. Developing Korean LID program assessment criterion based on CIPP evaluation model. JOEC 2019;25(4):51-72. https://doi.org/10.24159/joec.2019.25.4.51

10. Cha BE, Shon MH. A study on the development of evaluation indicators for the education process in medical school. JOEC 2019;25(4):239-61. https://doi.org/10.24159/joec.2019.25.4.239

11. Bae GM, Wu HJ, Choi ML, Yoon GS. Diagnosis and improvement plan study of CIPP model-based vocational competency development training teacher qualification training. J Vocational Educ Res 2017;36(2):1-27. https://doi.org/10.37210/JVER.2017.36.2.51

12. Yun MH, Kim JW, Kim HH, Park SS. Evaluation of industry-university cooperation internship programs. J Vocational Educ Res 2006;25(3):183-206.

13. Kang JN, Park KO, Chang YJ. Development of evaluation indicators on hospice and palliative care curriculum using Delphi methods. KJHRDQ 2013;15(1):107-34. https://doi.org/10.18211/kjhrdq.2013.15.1.005

14. Jang MH, Kim JH. Influencing factors on attitude, stress and satisfaction in clinical practice. J Korean Soc Dent Hyg 2014;14(3):407-15. https://doi.org/10.13065/jksdh.2014.14.03.407

15. Lynn MR. Determination and quantification of content validity. Nurs Res 1986;35(6):382-6. https://doi.org/10.1097/00006199-198611000-00017

16. Lawshe CH. A quantitative approach to content validity. Pers Psychol 1975;28(4):563-75. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x

17. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 7th ed. Upper Saddle River, NJ: Prentice Hall; 2010: 91-146.

18. Field A. Exploratory factor analysis. In: Discovering statistics using IBM SPSS statistics. London: Sage; 2013: 670-8.

19. Kaiser HF. An index of factorial simplicity. Psychometrika 1974;39(1):31-6.

20. Uysal MS, Sirakaya-Turk E. Factor analytical procedure and scale reliability. Boston: CABI; 2017: 320-38.

21. Taichman RS, Green TG, Polverini PJ. Creation of a scholars program in dental leadership (SPDL) for dental and dental hygiene students. J Dent Educ 2009;73(10):1139-43. https://doi.org/10.1002/j.0022-0337.2009.73.10.tb04805.x

22. Sim SJ, Back HO, Um JS, Jung HY, Ji HM, Hwang SN, et al. The factors to impact on the satisfaction of field practice among dental hygiene students. J Kor Acad Dent Admin 2017;5(1):22-30. https://doi.org/10.22671/JKADA.2017.5.1.22

23. Lee YS. Effects of problem-based instruction on learning attitude and academic achievement by learner's cognitive style [Master's thesis]. Chungju: Chungju National University of Education; 2002.

24. Kim ME. Cognitive and affective domains outcome of students in the department of dental hygiene according to teaching and learning methods by learning style. J Korea Contents Assoc 2021;21(1):363-72. https://doi.org/10.5392/JKCA.2021.21.01.363