Standard 1: Content and Pedagogical Knowledge – The provider ensures that candidates develop a deep understanding of the critical concepts and principles of their discipline and, by completion, are able to use discipline-specific practices flexibly to advance the learning of all students toward attainment of college- and career- readiness standards.
Candidate Knowledge, Skills, and Professional Dispositions
1.1 Candidates demonstrate an understanding of the 10 InTASC standards at the appropriate progression level(s) in the following categories: the learner and learning; content; instructional practice; and professional responsibility.
1.2 Providers ensure that candidates use research and evidence to develop an understanding of the teaching profession and use both to measure their P-12 students’ progress and their own professional practice.
1.3 Providers ensure that candidates apply content and pedagogical knowledge as reflected in outcome assessments in response to standards of Specialized Professional Associations (SPA), the National Board for Professional Teaching Standards (NBPTS), states, or other accrediting bodies (e.g., National Association of Schools of Music – NASM).
1.4 Providers ensure that candidates demonstrate skills and commitment that afford all P-12 students access to rigorous college- and career-ready standards (e.g., Next Generation Science Standards, National Career Readiness Certificate, Common Core State Standards).
1.5 Providers ensure that candidates model and apply technology standards as they design, implement, and assess learning experiences to engage students and improve learning; and enrich professional practice.
The Professional Education Unit Council (PEUC) strives to provide candidate learning and practicum experiences that are carefully sequenced to ensure progressive opportunities for the development of content and pedagogical knowledge (see Sequence of Courses). Within this framework, students move through courses that build upon the knowledge and understanding gained and demonstrated in previous courses, allowing them to apply for a Juncture at key points in the program. Students may pass a Juncture once they demonstrate completion of the required coursework and assessments (see Juncture documents).
Through faculty discussion and analysis of the 17 program areas offered at SU, five common key assessments have been identified: PRAXIS Content (specific to each program area), PRAXIS PLT, TPA, Pro 05, and ST-11. Considered holistically, along with individual program area SPA reports and their respective key assessments, these assessments illustrate that all components of Standard 1 are addressed at the levels of understanding, application, and impact on P-12 students. Formal data from three semesters of each of these key assessments have been disaggregated to illustrate the level of competence of our candidates across all program areas. Content acquisition has been considered within individual programs, as evidenced through SPA reports, in a meaningful, thoughtful manner over the past five years. Collectively, however, there was minimal holistic analysis and evaluation of data across programs until the 2016-2017 school year.
The PRAXIS Content assessment data (see same-titled document) and the PRAXIS PLT data (see same-titled document) present some challenges to meaningful analysis. Historically, students were not required to take these assessments prior to student teaching, allowing them the option of graduating without a recommendation for West Virginia state certification. As a result, some candidates do not have passing scores for some tests. A new WV state policy taking effect Fall 2017 will require passing the WV-required PRAXIS Content test(s) prior to student teaching, which will improve the completeness of the data sets. Many of our graduates seek certification in other states that require different or fewer PRAXIS Content tests. For example, WV elementary certification requires passage of five content tests (nearly $1,000 in cost to students), while neighboring Maryland requires only one test combining all elementary content areas. Thus, this policy change is seen as financially punitive to out-of-state candidates. Consequently, while SU could make a policy change and require the PLT prior to graduation, we have chosen not to do so; we therefore anticipate the continuation of incomplete data sets for the PLT. Because SU does not have a cohort system, it should be recognized that candidates have taken their PRAXIS tests at various points during, and in some cases after, their student careers. Nonetheless, cohorts were formed based on when candidates student taught, with test scores available by the time the data were analyzed. The present data analysis examined only composite scores. Based on feedback from PEUC members, efforts will be made to redesign the database so that we can generate reports allowing more detailed analysis of student data.
The Teacher Performance Assessment (see same-titled document) tool has undergone several recent changes, so while three data sets have been provided for this category, four different assessments were in place during those three semesters. Prior to Spring 2015, all student teachers completed a Teacher Work Sample (TWS) during student teaching. The TWS had five tasks: Contextual Analysis, Profile of a Struggling Learner, Design and Implementation of Instruction, Analysis of Pre/Post Assessment Results, and Reflection on Practice. Instructions were relatively generic, resulting in a narrative of roughly 40 pages with an appendix containing lesson plans, assessments, and supporting materials. A single rating was given to each task, which did not allow a breakdown of the components critical to each task and was inadequate for identifying the specific knowledge and skills needed to analyze program strengths and weaknesses. In Fall 2014, the WVDOE told EPPs it would be adopting one of two proprietary TPAs to replace the PRAXIS PLT. To determine which TPA to adopt, the WVDOE conducted a pilot, inviting EPPs to participate. SU participated in both Spring and Fall 2015 (see PEUC minutes 2014-2015). Students were divided evenly between the two TPAs. One instructor provided support for all students using the PPAT in both semesters. Two instructors provided support for students on the edTPA during the first semester it was used, with one, the trained instructor, providing support during the second semester of implementation. At the conclusion of each semester, student performance data were shared with and analyzed by the PEUC.
TPA survey information and feedback (see same-titled document) from students regarding their experiences with each assessment were collected at both the state and university levels. This feedback, along with instructor impressions, was shared and discussed in the PEUC each semester. The pilot concluded in Fall 2015, with new state policy to be announced following Spring 2016. While awaiting the state policy decision, the Teacher Work Sample was revised for use in Spring 2016 to reflect faculty learning from the pilot. The TWS-R (see same-titled document) yielded more detailed data regarding program strengths and weaknesses, which was presented to and discussed in the PEUC. The TWS-R was as rigorous as the two commercial TPAs, and specialization coordinators praised the new and improved rubrics as they used them to extract data needed for SPA reports. After that semester, the WVDOE announced a policy that would not dictate a specific TPA, allowing each EPP to determine whether to adopt or to develop a TPA using CAEP-approved processes and standards. During a WV-EPP meeting in May 2016, Stevie Chepko of CAEP strongly encouraged WV EPPs to develop their own TPA and provided training on the process. SU joined 12 other WV EPPs in developing the WV-TPA, which was piloted in Fall 2016. Detailed notes and feedback were collected throughout the semester from both instructors and candidates. While reliability in scoring was achieved, as determined at state-run reliability trainings, student feedback was scathing regarding the instructions, writing prompts, and rubrics. This feedback led to some improvements in the WV-TPA (see same-titled document), which is again being piloted during the current semester.
Anecdotal feedback from candidates and instructors thus far indicates that the changes resolved some issues, but a great deal of work is still needed to make this instrument a viable TPA for SU to adopt for future use. If the problems noted in the second pilot are not addressed, SU plans to use the CAEP-established procedures to develop the TWS-R into an instrument that incorporates the best of all the TPAs used over the past three years, creating a TPA that will provide insightful data for evaluating the SU-EPP.
Each of the four TPAs used in the past three semesters has its own unique characteristics and defines the elements of good instruction in different ways; nevertheless, challenges remain. The data clearly demonstrate that the SU-EPP is preparing its candidates well to meet the demands of instruction. Since the beginning of the first piloted TPA, the data have prompted four changes to coursework to help prepare students for more effective instruction.
A greater emphasis on understanding language demands and the supports necessary for academic language has been included in the Reading in the Content Area course taken by secondary education students. In this class, students are now required to analyze video clips and their own lesson plans for the receptive and expressive language demands placed on students in terms of language purpose, language structures, specialized and technical vocabulary, and language mechanics. They then link this analysis explicitly to the language supports built into the lesson. These supports take many forms, from scaffolding provided by the teacher or lesson structure to explicit instruction in the language-based skills needed for success.
The elementary pedagogy classes have built more explicit requirements into their cross-curricular unit plans with regard to designing authentic assessments that collect information about students' entry-level skills, so that instruction can be better designed at each student's zone of proximal development, and to collecting appropriate evidence of learning for each unit learning objective.
All three programs (early, elementary, and secondary) have been introduced to a four-step analytical reflection process that requires candidates to (1) determine the level of learning students achieved for the stated learning objectives; (2) evaluate the instructional components that led to the achieved outcome, for better or worse; (3) support the use of those instructional components, or identify weaknesses in them, using explicit ties to theory and research; and (4) identify changes that need to be made to improve the effectiveness of instruction in the future.
While data from the various TPAs regarding overall assessment practices are inconsistent, candidates and instructors note that tasks requiring assessment remain problematic for many candidates, who require a great deal of one-to-one support and conferencing to successfully complete the assessment-related components of the TPAs. This needs to be examined in more detail across the various measures of candidate skill to determine how to better address assessment in the courses leading up to student teaching.
The Pro 05 (see same-titled document) was developed by the SU-EPP to assess candidates' dispositions, both personal and professional, as they relate to criteria identified as necessary for success as a classroom teacher. The Pro 05 is implemented in all education courses, is reviewed each semester by students' advisors, and is used when necessary as the basis for disposition improvement plans, as seen in the Dispositional Assessment Case Study (see same-titled document). Positive Pro 05 ratings are required at each Juncture, but evaluation at Juncture 1 is particularly important. The only course that all students, regardless of program area, have completed when they apply for Juncture 1 is EDUC 200. Consequently, recent analysis of Pro 05 data (see same-titled document) by the PEUC focused on this class. Because all sections were taught by the same professor in each of the three semesters in which data were gathered, inter-rater reliability is not a concern. Data patterns highlighted areas of concern that had previously led to the formation of a committee to research other available instruments (see PEUC Pro 05 minutes). This committee gathered sample disposition assessments, but none met CAEP-required development criteria. At present, the PEUC plans to follow this process in creating a new disposition assessment that will have clearly articulated descriptive criteria for fewer levels of stakeholder-identified dispositions.
The ST-11, short for the Student Teaching 2011 Evaluation Tool (see same-titled document), is the student teaching evaluation completed by candidates, their cooperating teachers, and their university supervisors. ST-11 data (see same-titled document) were collected from all three sources, though the consensus of the PEUC was that the most meaningful data for future consideration come from comparing cooperating teacher and university supervisor scores to determine whether and where there are significant discrepancies, as well as from identifying areas of weakness within and across programs. The ST-11 was created in response to a mandate from the West Virginia Department of Education requiring all EPPs to realign all courses and observational instruments with the WV Professional Teaching Standards. The student teacher evaluation tool in use at that time had been created in 1976. Rather than revise this outdated tool, a faculty committee developed a new instrument based on the state teacher evaluation standards and criteria. With input from the PEUC, as well as an examination of practices at other universities, a subcommittee of faculty members developed the ST-11 to rate candidates against a benchmark of beginning-teacher performance, evaluating whether they approach, meet, or exceed that standard. The state requires student teachers to meet all five standards; if they do, based in part on the cooperating teacher's input along with the university supervisor's recommendation, they can be recommended for certification. The instrument was piloted, an ST-11 training video (see same-titled document) was developed for cooperating teachers, university supervisors, and student teachers, and guidelines based on the descriptive rubrics used by the state were created for evaluating each function. Training sessions to establish inter-rater reliability were held in December 2016 and again in January 2017.
Prior to student teaching, the ST-11 is used (in an adapted form) throughout the Level 2 and Level 3 practicums to accustom candidates to being evaluated on their teaching.
- Sequence of Courses
- Juncture 1 and 2 Documents
- PRAXIS Content Assessment Data
- PRAXIS PLT Data
- Teacher Performance Assessment
- PEUC Minutes 2014-2015
- TPA Survey Information and Feedback
- Pro 05
- Dispositional Assessment Case Study
- Pro 05 Data
- PEUC Pro 05 Minutes
- Student Teaching 2011 Evaluation Tool
- ST-11 Data
- ST-11 Training
- Data Analysis Minutes
- Syllabi with Technology and Diversity Components Highlighted