Author: jhall152

Supporting STEM Teachers with Actionable Content-Based Feedback

January 25, 2024 by jhall152

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 2 of a four-part series on actionable feedback. Stay tuned for the next posts that will focus on Leadership Content Knowledge (LCK) and teacher feedback in the areas of STEM, Literacy, and Early Childhood Education.

Missed the beginning of the series? Read Part 1 on making teacher feedback count!

For school leaders, providing teachers with feedback in unfamiliar subject areas can be a challenge. At the same time, we know that teachers highly value feedback on their teaching content area as well as general pedagogical practices. When school leaders deepen their understanding of different subjects, it can prove a powerful lever to giving teachers the feedback they deserve and desire. Today, we’ll discuss ways to support teachers in the STEM (Science, Technology, Engineering and Math) area.

Imagine you are scheduled to observe a STEM lesson, an area where you might not feel confident. What might be some ways to prepare for this observation? Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark Bloom recommend post-holing. In the context of building, this refers to digging holes deep enough to anchor fenceposts. As it pertains to your work, post-holing means engaging in an in-depth, but targeted exploration of the content area.

Another strategy is joining a STEM instructional coach or specialist for an observation and debrief. A third way to learn is to attend a STEM-focused professional development for teachers. These activities can help you think more deeply about the content and how it is taught.

In addition, you can identify subject-specific best practices to integrate into a pre-observation or post-observation conversation. This might look like adapting a subset of evaluation questions to specifically reflect STEM objectives. For example:

  1. Poses scenarios or identifies a problem that students can investigate (Bybee, et al., 2006).
  2. Fosters “an academically safe classroom [that] honors the individual as a mathematician and welcomes him or her into the social ecosystem of math” (Krall, 2018).
  3. Avoids imprecise language and overgeneralized tips or tricks (e.g., carry, borrow, FOIL) and instead uses precise mathematical language grounded in conceptual mathematical understanding (e.g., trade, regroup, distributive property) (Karp et al., 2014, 2015).
  4. Uses models to communicate complex scientific concepts, emphasizing that models are only approximations of the actual phenomena and are limited simplifications used to explain them (Krajcik & Merritt, 2013).

Let’s imagine that meaningful mathematical talk emerges as an important practice from your post-holing in mathematics. In a pre-observation conference, you might ask the teacher about their plans for creating meaningful mathematical talk in the lesson. During the observation, you can note whether those questions appeared and when moments of meaningful mathematical talk took place. In a post-observation conference, you might ask the teacher to reflect on the moments they felt meaningful mathematical talk was occurring, and what inputs yielded those outcomes.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next two posts, which will offer concrete ways to provide feedback to teachers in the areas of Literacy and Early Childhood Education.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in STEM discipline areas with specific pre-observation and post-observation prompts and key practices for observation, see Chapter 8 by Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark A. Bloom.

Filed Under: News

Making the Most of Your Survey Items: Item Analysis

January 15, 2024 by jhall152

By Louis Rocconi, Ph.D. 

Hi, blog world! My name is Louis Rocconi, and I am an Associate Professor and Program Coordinator in the Evaluation, Statistics, and Methodology program at The University of Tennessee, and I am MAD about item analysis. In this blog post, I want to discuss an often overlooked tool to examine and improve survey items: Item Analysis.

What is Item Analysis?

Item analysis is a set of techniques used to evaluate the quality and usefulness of test or survey items. While item analysis techniques are frequently used in test construction, they are helpful when designing surveys as well. Unlike scale-level statistics such as Cronbach’s alpha, item analysis focuses on individual items rather than the entire set of items. Item analysis techniques can be used to identify how individuals respond to items and how well items discriminate between those with high and low scores. Item analysis can also be used during pilot testing to help choose the best items for inclusion in the final set. While there are many methods for conducting item analysis, this post will focus on two: item difficulty/endorsability and item discrimination.

Item Difficulty/Endorsability

Item difficulty, or item endorsability, is simply the mean, or average, response (Meyer, 2014). For test items that have a “correct” response, we use the term item difficulty, which refers to the proportion of individuals who answered the item correctly. However, when using surveys with Likert-type response options (e.g., strongly disagree, disagree, agree, strongly agree), where there is no “correct” answer, we can think of the item mean as item endorsability or the extent to which the highest response option is endorsed. We often divide the mean, or average response, by the maximum possible response to put endorsability on the same scale as difficulty (i.e., ranging from 0 to 1).

A high difficulty value (i.e., close to 1) indicates an item that is too easy, while a low difficulty value (i.e., close to 0) suggests an overly difficult item or an item that few respondents endorse. Typically, we look for difficulty values between 0.3 and 0.7. Allen and Yen (1979) argue this range maximizes the information a test provides about differences among respondents. While Allen and Yen were referring to test items, surveys with Likert-type response options generally follow the same recommendation. An item with low endorsability indicates that people have a difficult time endorsing the item or selecting higher response options such as strongly agree; an item with high endorsability indicates that the item is easy to endorse. Very high or very low values for difficulty/endorsability may indicate that we need to review the item. Examining the proportions for each response option is also useful because it shows how frequently each response category was used. If a response category is not used or is selected by only a few respondents, the item may be ambiguous or confusing.
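To make the computation concrete, here is a minimal sketch in base R using simulated 4-point Likert data (the item names and data are illustrative, not from any actual survey), showing how endorsability and response-option proportions might be computed:

```r
# Simulate three Likert items (1 = strongly disagree ... 4 = strongly agree)
# driven by a common latent trait so the items hang together
set.seed(42)
n <- 200
trait  <- rnorm(n)
likert <- function(x) pmin(pmax(round(2.5 + x), 1), 4)   # map to a 1-4 scale
items  <- data.frame(
  item1 = likert(trait + rnorm(n, 0, 0.8) + 0.7),  # easier to endorse
  item2 = likert(trait + rnorm(n, 0, 0.8)),        # moderate
  item3 = likert(trait + rnorm(n, 0, 0.8) - 0.7)   # harder to endorse
)

# Endorsability: mean response divided by the maximum possible response,
# putting it on the same 0-to-1 metric as item difficulty
endorsability <- colMeans(items) / 4
round(endorsability, 2)

# Proportion of respondents selecting each response option for each item;
# rarely used categories may flag an ambiguous or confusing item
round(sapply(items, function(x) prop.table(table(factor(x, levels = 1:4)))), 2)
```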

Item Discrimination

Item discrimination is a measure of the relationship between scores on an item and the overall score on the construct the survey is measuring (Meyer, 2014). It measures the degree to which an item differentiates individuals who score high on the survey from those who score low on the survey. It aids in determining whether an item is positively or negatively correlated with the total performance. We can think of item discrimination as how well an item is tapping into the latent construct. Discrimination is typically measured using an item-total correlation to assess the relationship between an item and the overall score. Pearson’s correlation and its variants (i.e., point-biserial correlation) are the most common, but other types of correlations such as biserial and polychoric correlations can be used.

Meyer (2014) suggests selecting items with positive discrimination values between 0.3 and 0.7 and items that have large variances. When the item-total correlation exceeds 0.7, the item may be redundant; a content analysis or expert review panel could help decide which items to keep. A negative discrimination value suggests that the item is negatively related to the total score. This may indicate a data entry error, a poorly written item, or an item that needs to be reverse coded. Whatever the case, negative discrimination is a flag to inspect that item. Items with low discrimination tap into the construct poorly and should be revised or eliminated. Very easy or very difficult items can also cause low discrimination, so it is good to check whether that is the reason as well. Examining discrimination coefficients for each response option is also helpful. We typically want to see a pattern where lower response options (e.g., strongly disagree, disagree) have negative discrimination coefficients, higher response options (e.g., agree, strongly agree) have positive coefficients, and the magnitude of the coefficients is highest at the ends of the response scale (we would look for the opposite pattern if the item is negatively worded).
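Continuing the simulated example above, one common way to estimate discrimination is a corrected item-total correlation, which correlates each item with the total score of the remaining items. A minimal base R sketch:

```r
# Corrected item-total correlation: correlate each item with the total
# score of the *other* items so an item is not correlated with itself
total <- rowSums(items)
discrimination <- sapply(names(items), function(i) cor(items[[i]], total - items[[i]]))
round(discrimination, 2)

# Items with negative values, or values outside the 0.3-0.7 range suggested
# by Meyer (2014), are flagged for a closer look
names(discrimination)[discrimination < 0.3 | discrimination > 0.7]
```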

Conclusion

Item difficulty/endorsability and item discrimination are two easy techniques researchers can use to help improve the quality of their survey items. These techniques can easily be implemented alongside other statistics such as internal consistency reliability.

___________________________________________________________________

References

Allen, M. & Yen, W. (1979). Introduction to measurement theory. Wadsworth.

Meyer, J. P. (2014). Applied measurement with jMetrik. Routledge.

Resources

I have created some R code and output to demonstrate how to implement and interpret an item analysis.

The Standards for Educational and Psychological Testing

Filed Under: Evaluation Methodology Blog

Educational Leadership and Policy Studies Researcher Recognized by Education Week

January 4, 2024 by jhall152

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

Rachel White’s Superintendent Research is a Top-10 Education Study for 2023

2023 has been quite the year for Rachel White, an assistant professor in the Department of Educational Leadership and Policy Studies. She’s been nationally recognized for her early-career work in the field of educational leadership with the Jack A. Culbertson Award from the University Council for Educational Administration. She’s also been selected to serve on a United States Department of Education Regional Advisory Committee to provide advice and recommendations concerning the educational needs in the Appalachian region and how those needs can be most effectively addressed. However, it is her research into superintendent attrition and gender gaps that has put her in the national spotlight.

[Photo: Rachel White]

Recently, Education Week named White’s study on attrition and gender gaps among K-12 district superintendents a Top-10 Educational Study of 2023. First published in the journal Educational Researcher, the study demonstrates the magnitude of the gender gap in part through superintendent first names: White finds that one out of every five superintendents in the United States is named Michael, David, James, Jeff, John, Robert, Steven, Chris, Brian, Scott, Mark, Kevin, Jason, Matthew, or Daniel. Education Week and EdSurge brought the story to national attention with the articles “There’s a Good Chance Your Superintendent Has One of These 15 Names” and “What Are the Odds Your Superintendent is Named Michael, John, or David?”

In order to diversify the superintendency, women superintendents must be hired to replace outgoing men. However, drawing on the most recent data update of her National Longitudinal Superintendent Database, White recently published a data brief showing that over the last five years, 50% of the time a man turned over, he was replaced by another man, and a woman replaced a woman 10% of the time. A man replaced a woman 18% of the time, and a woman replaced a man 22% of the time.

When thinking about the importance of this research, White shared, “Nearly ten years ago, the New York Times reported a similar trend among large companies: more S&P 1500 firms were being run by men named John than women, in total. The emulation of this trend in the K12 education sector, in 2024, is alarming. Public schools are often touted as ‘laboratories of democracy’: places where young people learn civic engagement and leadership skills to participate in a democratic society. Yet, what young people see in K12 public schools is that leadership positions—the highest positions of power in local K-12 education institutions—are primarily reserved for men.”

One thing is for certain: we have a way to go when it comes to balanced gender representation in school district leadership. White’s research has shown that, while over 75 percent of teachers and 56 percent of principals are women, the pace at which the superintendent gender gap is closing feels glacial. The current five-year national average gender gap closure rate is 1.4 percentage points per year; at this rate, the estimated year of national gender equality in the superintendency is 2039.

“Superintendents are among the most visible public figures in a community, interfacing with students, educators, families, business, and local government officials on a daily basis,” White shared. “A lack of diversity in these leadership positions can convey that a district is unwelcoming of diverse leaders that bring valuable insights and perspectives to education policy and leadership work.”

White continued, “Not only do we need to recruit and hire diverse leaders to the superintendency, but school boards and communities need to be committed to respecting, valuing, and supporting diverse district superintendents. New analyses of the updated NLSD show that women’s attrition rates spiked from 16.8% to 18.2% over the past year, while men’s remained stable around 17% for the past three years. We need to really reflect and empirically examine why this pattern has emerged, and what school boards, communities, and organizations and universities preparing and supporting women leaders can do to change this trajectory.”

 White has doubled down on her commitment to establishing rigorous and robust research on superintendents with the launch of The Superintendent Lab—a hub for data and research on school district superintendency. In fact, The Superintendent Lab is home to The National Longitudinal Superintendent Database, with data on over 12,500 superintendents across the United States, updated annually. With the 2023-24 database update completed, the NLSD now houses over 65,000 superintendent-year data points. The database allows the lab team to learn more about issues related to superintendent labor markets over time, and even produce interactive data visualizations for the public to better understand trends in superintendent gender gaps and attrition.

Along with a team of 10 research assistants and lab affiliates, White hopes to foster a collaborative dialogue among policy leaders that may lead to identifying ways to create more inclusive and equitable K-12 school systems.

“A comprehensive understanding of the superintendency in every place and space in the United States has really never been prioritized or pursued. My hope is that, through The Superintendent Lab, and the development of rigorous and robust datasets and research, I can elevate data-driven dialogue to advance policies and practices that contribute to more equitable and inclusive spaces in education. And, along the way, I am passionate about the Lab being a space for students from all levels to engage in meaningful research experiences, potentially igniting a spark in others to use their voice and pursue opportunities that will contribute to greater equity and inclusion in K12 education leadership,” said White.

Filed Under: News

Kelchen Once Again Named Top Scholar Influencer

January 4, 2024 by jhall152

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

We’ve all heard the term “influencer.” Many of us think of an influencer as someone with a large following on social media, such as Instagram or YouTube, who sets trends or promotes products. But did you know that there is a select group of scholar influencers who help shape educational practice and policy?

[Photo: Robert Kelchen]

One of those scholar influencers is Robert Kelchen, who serves as department head of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). Kelchen is ranked 41st out of 20,000 scholars nationwide in Education Week’s Edu-Scholar Public Influence Rankings for 2024. In fact, Kelchen is the only scholar from the University of Tennessee, Knoxville, to make the list.

“As a faculty member at a land-grant university, it is my job to help share knowledge well beyond the classroom or traditional academic journals,” said Kelchen. “I am thrilled to have the opportunity to work with policymakers, journalists, and college leaders on a regular basis to help improve higher education.”

For 14 years, Education Week has selected the top 200 scholars (out of an eligible pool of 20,000) from across the United States as having the most influence on issues and policy in education. The list is compiled by opinion columnist Rick Hess, resident scholar at the American Enterprise Institute and director of Education Policy Studies.

The selection process includes a 38-member Selection Committee made up of university scholars representing public and private institutions from across the United States. The committee calculates scores based on Google Scholar scores, book points, Amazon rankings, Congressional Record mentions, and media and web appearances, and then ranks the scholars accordingly. Kelchen is considered a “go-to” source for reporters covering issues in higher education, with over 200 media interviews year after year. If there is a story about higher education in the media, you’ll more than likely find a quote from Kelchen as an expert source.

“In the last year, I have had the pleasure of supporting several states on their higher education funding models, presenting to groups of legislators, and being a resource to reporters diving into complex higher education finance topics. These engagements help strengthen my own research and give me the opportunity to teach cutting-edge classes to ELPS students,” said Kelchen.

In addition, Kelchen received national recognition by the Association for the Study of Higher Education (ASHE) for his research on higher education finance, accountability policies and practices, and student financial aid. ASHE’s Council on Public Policy in Higher Education selected Kelchen for its Excellence in Public Policy Higher Education Award.

Through its eight departments and 12 centers, the UT Knoxville College of Education, Health, and Human Sciences enhances the quality of life for all through research, outreach, and practice. Find out more at cehhs.utk.edu

Filed Under: News

Are Evaluation PhD Programs Offering Training in Qualitative and Mixed Design Methodologies?

January 1, 2024 by jhall152

By Kiley Compton

Hello! My name is Kiley Compton and I am a fourth-year doctoral student in UT’s Evaluation, Statistics, and Methodology (ESM) program. My research interests include program evaluation, research administration, and sponsored research metrics.  

One of the research projects I worked on as part of the ESM program examined curriculum requirements in educational evaluation, assessment, and research (EAR) doctoral programs. Our team was composed of first- and second-year ESM doctoral students with diverse backgrounds, research interests, and skill sets.

An overwhelming amount of preliminary data forced us to reconsider the scope of the project. The broad focus of the study was not manageable, so we narrowed the scope and focused on the prevalence of mixed method and qualitative research methodology courses offered in U.S. PhD programs.  Experts in the field of evaluation encourage the use of qualitative and mixed method approaches to gain an in-depth understanding of the program, process, or policy being evaluated (Bamberger, 2015; Patton, 2014).  The American Evaluation Association developed a series of competencies to inform evaluation education and training standards, which includes competency in “quantitative, qualitative, and mixed designs” methodologies (AEA, 2018). Similarly, Skolits et al. (2009) advocate for professional training content that reflects the complexity of evaluations.  

This study was guided by the following research question: What is the prevalence of qualitative and mixed methods courses in Educational Assessment, Evaluation, and Research PhD programs? Sub-questions include: 1) To what extent are the courses required, elective, or optional? and 2) To what extent are these courses offered at more advanced levels? For the purpose of this study, elective courses are those that fulfill a specific, focused requirement, while optional courses are those that are offered but do not fulfill elective requirements.

Methods 

This study focused on PhD programs similar to UT’s ESM program. PhD programs from public and private institutions were selected based on the U.S. Department of Education’s National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) assignment. Programs under the 13.06 “Educational Assessment, Evaluation, and Research” CIP umbrella were included.  We initially identified a total of 50 programs. 

Our team collected and reviewed available program- and course-level data from program websites, handbooks, and catalogs, and assessed which elements were necessary to answer the research questions. We created a comprehensive data code book based on agreed upon definitions and met regularly throughout the data collection process to assess progress, discuss ambiguous data, and refine definitions as needed. More than 14 program-level data points were collected, including program overview, total credit hours required, and number of dissertation hours required. Additionally, available course data were collected, including course number, name, type, level, requirement level, description, and credit hours. While 50 programs were identified, only 36 of the 50 programs were included in the final analysis due to unavailable or incomplete data. After collecting detailed information for the 36 programs, course-level information was coded based on the variables of interest: course type, course level, and requirement level.  

Results 

Prevalence of qualitative and mixed methods courses

The team analyzed data from 1,134 courses representing 36 programs, both in aggregate and within individual programs. Results show that only 14% (n=162) of the courses offered or required to graduate were identified as primarily qualitative and only 1% (n=17) of these courses were identified as mixed methods research (MMR). Further, only 6% (n=70) of these courses were identified as evaluation courses (Table 1). Out of 36 programs, three programs offered no qualitative courses. Qualitative courses made up somewhere between 1% and 20% of course offerings for 28 programs. Only five of the programs reviewed exceeded 20%. Only 12 programs offered any mixed methods courses and MMR courses made up less than 10% of the course offerings in each of those programs. 

Table 1. 

Aggregate Course Data by Type and Representation


Course Type             n (%)            Program Count
Quantitative Methods    409 (36%)        36 (100%)
Other                   317 (28%)        36 (100%)
Qualitative Methods     162 (14%)        33 (92%)
Research Methods        159 (14%)        36 (100%)
Program Evaluation       70 (6%)         36 (100%)
Mixed Methods            17 (1%)         12 (33%)
Total                 1,134 (100%)       –

 

Requirement level of qualitative and mixed method courses 

Out of 162 qualitative courses, 41% (n=66) were listed as required, 43% (n=69) were listed as elective, and 16% (n=26) were listed as optional (figure 2). Out of 17 mixed methods research courses, 65% (n=11) were listed as required and 35% (n=6) were listed as elective.  

Course level of qualitative and mixed-method courses 

Out of 162 qualitative courses, 73% (n=118) were offered at an advanced level and 27% (n=44) were offered at an introductory level. Out of 17 mixed methods research courses, 71% (n=12) were offered at an advanced level and 29% (n=5) were offered at an introductory level.

Discussion 

Findings from the study provide valuable insight into the landscape of doctoral curriculum in Educational Assessment, Evaluation, and Research programs. Both qualitative and mixed methods courses were underrepresented in the programs analyzed. However, the majority of course offerings were required and classified as advanced. Given that various methodologies are needed to conduct rigorous evaluations, it is our hope that these findings will encourage doctoral training programs to include more courses on mixed and qualitative methods, and that they will encourage seasoned and novice evaluators to seek out training on these methodologies.

This study highlights opportunities for collaborative work in the ESM program and ESM faculty’s commitment to fostering professional development.  The project began as a project for a research seminar. ESM faculty mentored us through proposal development, data collection and analysis, and dissemination. They also encouraged us to share our findings at conferences and in journals and helped us through the process of drafting and submitting abstracts and manuscripts. Faculty worked closely with our team through every step of the process, serving as both expert consultants and supportive colleagues.  

The study also highlights how messy data can get. Our team even affectionately nicknamed the project “messy MESA,” a nod to the common acronym for measurement, evaluation, statistics, and assessment (MESA) and to challenges that included changes to the scope, missing data, and changes to the team as students left and joined. While I hope that the product of our study will contribute to the fields of evaluation, assessment, and applied research, the process itself has made me a better researcher.

References 

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Bamberger, M. (2015). Innovations in the use of mixed methods in real-world evaluation. Journal of Development Effectiveness, 7(3), 317–326. https://doi.org/10.1080/19439342.2015.1068832 

Capraro, R. M., & Thompson, B. (2008). The educational researcher defined: What will future researchers be trained to do? The Journal of Educational Research, 101, 247-253. doi:10.3200/JOER.101.4.247-253 

Dillman, L. (2013). Evaluator skill acquisition: Linking educational experiences to competencies. The American Journal of Evaluation, 34(2), 270–285. https://doi.org/10.1177/1098214012464512 

Engle, M., Altschuld, J. W., & Kim, Y. C. (2006). 2002 Survey of evaluation preparation programs in universities: An update of the 1992 American Evaluation Association–sponsored study. American Journal of Evaluation, 27(3), 353-359.  

LaVelle, J. M. (2020). Educating evaluators 1976–2017: An expanded analysis of university-based evaluation education programs. American Journal of Evaluation, 41(4), 494-509. 

LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 39–52.

Leech, N. L., & Goodwin, L. D. (2008). Building a methodological foundation: Doctoral-Level methods courses in colleges of education. Research in the Schools, 15(1). 

Leech, N. L., & Haug, C. A. (2015). Investigating graduate level research and statistics courses in schools of education. International Journal of Doctoral Studies, 10, 93-110. Retrieved from http://ijds.org/Volume10/IJDSv10p093-110Leech0658.pdf 

Levine, A. (2007). Educating researchers. Washington, DC: The Education Schools Project. 

Mathison, S. (2008). What is the difference between evaluation and research—and why do we care. Fundamental Issues in Evaluation, 183-196. 

McAdaragh, M. O., LaVelle, J. M., & Zhang, L. (2020). Evaluation and supporting inquiry courses in MSW programs. Research on Social Work Practice, 30(7), 750-759. doi:10.1177/1049731520921243

McEwan, H., & Slaughter, H. (2004). A brief history of the college of education’s doctoral degrees. Educational Perspectives, 2(37), 3-9. Retrieved from https://files.eric.ed.gov/fulltext/EJ877606.pdf

National Center for Education Statistics. (2020). The Classification of Instructional Programs [Data set]. https://nces.ed.gov/ipeds/cipcode/default.aspx?y=56.  

Page, R. N. (2001). Reshaping graduate preparation in educational research methods: One school’s experience. Educational Researcher, 30(5), 19-25. 

Patton, M.Q. (2014). Qualitative evaluation and research methods (4th ed.). Sage Publications. 

Paul, C. A. (n.d.). Elementary and Secondary Education Act of 1965. Social Welfare History Project. Retrieved from https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/

Seidling, M. B. (2015). Evaluator certification and credentialing revisited: A survey of American Evaluation Association members in the United States. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 87–102.

Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295. 

Standerfer, L. (2006). Before NCLB: The history of ESEA. Principal Leadership, 6(8), 26-27. 

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25(2), 255-272. 

Warner, L. H. (2020). Developing interpersonal skills of evaluators: A service-learning approach. American Journal of Evaluation, 41(3), 432-451. 

 

Filed Under: Evaluation Methodology Blog, News

Learning to Learn New Research Methods: How Watching YouTube Helped Me Complete My First Client-Facing Project

December 15, 2023 by jhall152

By Austin Boyd

Every measurement, evaluation, statistics, and assessment (MESA) professional has their own “bag of tricks” to help them get the job done: their go-to set of evaluation, statistical, and methodological skills and tools that they are most comfortable applying. For many, these are the skills and tools they were taught directly while obtaining their MESA degrees. But what do we do when we need new tools and methodologies that we weren’t taught directly by a professor?

My name is Austin Boyd, and I am a researcher, instructor, UTK ESM alumnus, and, most importantly, a lifelong learner. I have had the opportunity to work on projects in several different research areas, including psychometrics, para-social relationships, quality in higher education, and social network analysis. I seek out opportunities to learn about new areas of research while applying my MESA skill set wherever I can. My drive to enter new research areas often leads me to realize that, while I feel confident in the MESA skills and tools I currently possess, they are only a fraction of what I could be using on a given project. This leaves me two options: 1) use a method that I am comfortable with but that might not be the perfect choice for the project, or 2) learn a new method that fits the needs of the project. Obviously, we have to choose option 2, but where do we even start learning a new research method?

In my first year of graduate school, I took on an evaluation client who had recently learned about Social Network Analysis (SNA), a method of visually displaying the social structure among social objects in terms of their relationships (Tichy & Fombrun, 1979). The client decided that this new analysis would revolutionize the way they looked at their professional development attendance but had no idea how to use it. This is where I came in, a new and excited PhD student, ready to take on the challenge. Except SNA wasn’t something we would be covering in class. In fact, it wasn’t covered in any of the classes I could take. I had to begin teaching myself something that I had only just heard of. This is where I learned two of the best starting points for any new researcher: Google and YouTube.

Although they aren’t the most conventional starting points for learning, you would be surprised how convenient they can be. I could have begun by looking in the literature for articles or textbooks that covered SNA. However, I didn’t have time to work through an entire textbook on the topic in addition to my normal coursework, and most of the articles I found were applied research, far above my current understanding. What I needed was an entry point that began with the basics of conducting an SNA. Google, unlike the journal articles, took me to several websites covering the basics of SNA and even led me to free online trainings on SNA for beginners. YouTube supplemented this knowledge with step-by-step video instructions on how to conduct my own SNA, both in software I was already proficient in and in Gephi (Bastian, Heymann, & Jacomy, 2009), a software designed specifically for this kind of analysis. For examples of these friendly starting points, see the SNA resources below.

[Image: Marvel Cinematic Universe social network]

 

These videos and websites weren’t perfect, and certainly weren’t what I ended up citing in my final report to my client, but they were a starting point: a stepping stone that got me to a place where reading the literature didn’t leave me confused, frustrated, and scared that I would have to abandon the project. This allowed me to successfully complete my first client-facing research project, and the client was equally thrilled with the results. Eventually, I even became comfortable enough to see areas for improvement in the literature, leading me to author my own paper creating a function that reformats data for use in one- and two-mode undirected social network analysis (Boyd & Rocconi, 2021). I’ve even used my free time to apply what I learned for fun, creating social networks for the Marvel Cinematic Universe (pictured above) and the Pokémon game franchise.
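For readers curious what a first attempt might look like in code, below is a minimal sketch using the igraph package in R. The attendance records and object names are made up for illustration, and this is not the reformatting function from Boyd & Rocconi (2021); it simply builds a small two-mode (teacher-by-workshop) undirected network and projects it onto a one-mode teacher network.

```r
library(igraph)

# Hypothetical professional development attendance records: each row links
# a teacher (one mode) to a workshop they attended (the other mode)
attendance <- data.frame(
  teacher  = c("Ava", "Ava", "Ben", "Ben", "Cara", "Cara", "Dev"),
  workshop = c("W1",  "W2",  "W1",  "W3",  "W2",   "W3",   "W3")
)

# Build an undirected two-mode (bipartite) graph from the edge list
g <- graph_from_data_frame(attendance, directed = FALSE)
V(g)$type <- V(g)$name %in% attendance$workshop   # TRUE = workshop, FALSE = teacher

# Degree: how many workshops each teacher attended, and vice versa
degree(g)

# Project onto a one-mode teacher network: teachers are tied when they
# attended at least one workshop together
teacher_net <- bipartite_projection(g)$proj1
plot(teacher_net, vertex.label.color = "black")
```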

It is unrealistic to expect to master every type of data analysis method that exists in just four years of graduate school. And even if we could, the field continues to expand every day, with new methods, tools, and programs being added to aid in conducting research. This requires us all to be lifelong learners who aren’t afraid to pick up new skills, even if it means starting by watching some YouTube videos.

 

References

Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An open source software for exploring and manipulating networks. International AAAI Conference on Weblogs and Social Media.

Boyd, A. T., & Rocconi, L. M. (2021). Formatting data for one and two mode undirected social network analysis. Practical Assessment, Research & Evaluation, 26(24). Available online: https://scholarworks.umass.edu/pare/vol26/iss1/24/  

Tichy, N., & Fombrun, C. (1979). Network analysis in organizational settings. Human Relations, 32(11), 923–965. https://doi.org/10.1177/001872677903201103

SNA Resources 

Aggarwal, C. C. (2011). An Introduction to Social Network Data Analytics. Social Network Data Analytics. Springer, Boston, MA 

Yang, S., Keller, F., & Zheng, L. (2017). Social network analysis: methods and examples. Los Angeles: Sage. 

https://visiblenetworklabs.com/guides/social-network-analysis-101/ 

https://github.com/gephi/gephi/wiki 

https://towardsdatascience.com/network-analysis-d734cd7270f8 

https://virtualitics.com/resources/a-beginners-guide-to-network-analysis/ 

https://ladal.edu.au/net.html 

Videos 

https://www.youtube.com/watch?v=xnX555j2sI8&ab_channel=DataCamp 

https://www.youtube.com/playlist?list=PLvRW_kd75IZuhy5AJE8GUyoV2aDl1o649 

https://www.youtube.com/watch?v=PT99WF1VEws&ab_channel=AlexandraOtt 

https://www.youtube.com/playlist?list=PL4iQXwvEG8CQSy4T1Z3cJZunvPtQp4dRy 

 

Filed Under: Evaluation Methodology Blog

Make Your Feedback to Teachers Matter: Leadership Content Knowledge is Key

December 12, 2023 by jhall152

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 1 of a four-part series on actionable feedback. Stay tuned for the next posts that will focus on Leadership Content Knowledge (LCK) and teacher feedback in the areas of STEM, Literacy, and Early Childhood Education.

The most important job of a school leader is to focus on the central purpose of schools—teaching and learning. Feedback to teachers on how to improve instructional practice is a critical element in promoting school success.

On average, principals spend 9 hours a week observing, providing feedback, and discussing instruction with teachers. Including documentation, this equates to nearly six 40-hour work-weeks and as much as 25% of a principal’s time.

Besides taking up principals’ time, these tasks are costly. It costs $700 million a year to observe all 3.1 million K-12 public school teachers just twice a year. All these efforts are based on the belief that, when school leaders observe teachers, they provide teachers with meaningful feedback — and that feedback, in turn, improves teaching and learning.

So, how does a school leader ensure that their feedback impacts practice? Feedback only matters when it can be acted upon, so what makes feedback actionable? We can all agree that for feedback to be actionable it must be timely, concrete, and clear. But it must also relate to the task at hand—teaching subject matter content.

When researchers ask teachers about the feedback they receive from school leaders, half report that the feedback from principals is not useful. Teachers say that they rarely receive feedback about their teaching content. Yet we know that pedagogical content knowledge is important for effective teaching and for student learning.

If you want to make your feedback to teachers matter, emphasize a teacher’s curriculum subject matter content as a part of your feedback. This requires differentiation for each teacher by subject matter and context of the classroom. Differentiation personalizes the feedback and emphasizes that the subject, content, and context of the classroom matters.

How can school leaders meet this lofty goal and possess expertise in every content area? First, a strong background in effective teaching practices is an important start. Second, leaders need deep content knowledge of the subject, how students learn it, and how it is taught; building this knowledge one subject at a time is sometimes referred to as post-holing.

Principals can gain content expertise in many ways. For example:

  • Work with a content PLC team
  • Learn the standards for the subject
  • Review discipline-specific best practice research
  • Attend a subject-specific conference

Post-holing provides a great opportunity to align with other activities that might be occurring in the school, and demonstrates that you care about the subject matter and the teacher by providing deeper differentiated feedback. Challenge yourself to tackle one subject matter each year.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next three posts, which will apply Leadership Content Knowledge (LCK) to offer concrete ways to provide feedback to teachers in the areas of STEM, Literacy, and Early Childhood Education.

If you want to dig into this content (pun intended!) a bit more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on differentiated feedback, see Chapter 3 by Ellie Drago-Severson and Jessica Blum-DeStefano.

Filed Under: News

The What, When, Why, and How of Formative Evaluation of Instruction

December 1, 2023 by jhall152

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year Ph.D. student in the Evaluation, Statistics, and Methodology program here at UT Knoxville. I currently work in higher education assessment as Director of Assessment at East Tennessee State University’s College of Pharmacy. As part of my duties, I am frequently called upon to conduct classroom assessments.

Higher education assessment often relies on summative evaluation of instruction at the end of a course, also commonly known as course evaluations, summative assessment of instruction (SAI), or summative evaluation of instruction (SEI), among other titles. At my institution, the purpose of summative evaluation of instruction is primarily centered on evaluating faculty for tenure, promotion, and retention. What if there were a more student-centered approach to gathering classroom evaluation feedback that not only benefits students in future classes (as summative assessment does), but also benefits students currently enrolled in the class? Enter formative evaluation of instruction (FEI).

 

What is FEI? 

FEI, sometimes referred to as midterm evaluations, entails seeking feedback from students prior to the semester midpoint to make mid-stream changes that will address each cohort’s individual learning needs. Collecting such meaningful and actionable FEI can prove to be challenging. Sometimes faculty may prefer to not participate in formative evaluation because they do not find the feedback from students actionable, or they may not value the student input. Furthermore, there is little direction on how to conduct this feedback and how to use it for continual quality improvement in the classroom. While there exists a lot of literature on summative evaluation of teaching, there seems to be a dearth of research surrounding best practices for formative evaluation of teaching. The few articles that I have been able to discover offer suggestions for FEI covered later in this post. 

 

When Should We Use FEI? 

In my opinion, every classroom can benefit from formative evaluation. When to administer it is as much an art as it is a science. Timing is everything and the results can differ greatly depending on the timing of the administration of the evaluation. In my time working as a Director of Assessment, I have found that the most meaningful feedback can be gathered in the first half of the semester, directly after a major assessment. Students have a better understanding of their comprehension of the material and the effectiveness of the classroom instruction. There is very little literature to support this, so this is purely anecdotal. None of the resources I have found have prescribed precisely when FEI should be conducted, but the name implies that the feedback should be sought at or around the semester midpoint. 

 

Why Should We Conduct FEI? 

FEI Can Help:

  • Improve student satisfaction as reflected on summative evaluations of instruction (Snooks et al., 2007; Veeck et al., 2016)
  • Drive substantive changes to the classroom experience, including textbooks, examinations/assessments of learning, and instructional methods (Snooks et al., 2007; Taylor et al., 2020)
  • Strengthen teaching and improve rapport between students and faculty (Snooks et al., 2007; Taylor et al., 2020)
  • Support faculty development, including promotion and tenure (Taylor et al., 2020; Veeck et al., 2016), and encourage active learning (Taylor et al., 2020)
  • Bolster communication of expectations in a reciprocal relationship between instructor and student (Snooks et al., 2007; Taylor et al., 2020).

 

How Should We Administer the FEI? 

Research has suggested a wide variety of practices, including, but not limited to, involving a facilitator for the formative evaluation, asking open-ended questions, using no more than ten minutes of classroom time, keeping it anonymous, and keeping it short (Holt & Moore, 1992; Snooks et al., 2007; Taylor et al., 2020), and even having students work in groups or in conferences to provide the feedback (Fluckiger et al., 2010; Veeck et al., 2016).

Hanover (2022) concluded that a formative evaluation should include a 7-point Likert-scale question asking how the course is going for the student, followed by an open-ended question asking the student to explain that rating; “Keep, Stop, Start” items with open-ended responses; and, finally, open-ended questions that allow students to suggest changes and provide additional feedback on the course and/or instructor. The “Keep, Stop, Start” model simply asks students what they would like the instructor to keep doing, stop doing, and/or start doing (Best Practices in Designing Course Evaluations, 2022).

In the College of Pharmacy, we use the approach Hanover presents: we ask students to self-evaluate how well they feel they are doing in the class and then explain their rating in an open-ended, free-response field. The purpose is to collect and analyze the themes associated with the different levels of the evaluation rating. This has only been in practice at the college for the past academic year, and, anecdotally from conversations with faculty, the data collected have generally been more actionable. Like all evaluations, it is not a perfect system, and sometimes some of the data are not actionable, but in our college FEI is an integral part of indirect classroom assessment.

The most important step, however, is to close the feedback loop in a timely manner (Fluckiger et al., 2010; Taylor et al., 2020; Veeck et al., 2016). Closing the feedback loop, for our purposes, means asking the course coordinator to respond to the feedback given in the FEI, usually within a week’s time, and to detail what changes, if any, will be made in the classroom and learning environment. Obviously, not all feedback is actionable, and in some cases student suggestions conflict with best practices in the literature, but it is important for students to know what can be changed, what cannot or will not be changed, and why.
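As a purely illustrative sketch (the data frame and column names below are hypothetical, not our college’s actual instrument or data), a first pass at summarizing a cohort’s FEI responses in R before closing the loop might look like this:

```r
# Hypothetical midterm (FEI) responses for one course: a 1-7 rating of how
# the course is going plus open-ended "Keep, Stop, Start" comments
fei <- data.frame(
  rating = c(6, 5, 7, 4, 6, 3, 5),
  keep   = c("case discussions", "case discussions", "practice quizzes",
             "recorded lectures", "case discussions", "practice quizzes", "office hours"),
  stop   = c("reading-heavy slides", "", "late exam feedback", "reading-heavy slides",
             "", "late exam feedback", ""),
  start  = c("worked examples", "study guides", "worked examples", "",
             "study guides", "worked examples", "")
)

# Overall tone of the cohort on the 7-point scale
summary(fei$rating)
table(fei$rating)

# Tally the open-ended suggestions so recurring themes stand out for the
# course coordinator's response
sort(table(fei$keep[fei$keep != ""]),   decreasing = TRUE)
sort(table(fei$stop[fei$stop != ""]),   decreasing = TRUE)
sort(table(fei$start[fei$start != ""]), decreasing = TRUE)
```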

 

What Remains? 

Some accrediting bodies (like the Accreditation Council for Pharmacy Education, or ACPE) require colleges to have an avenue for formative student feedback as part of their standards. I believe that formative evaluation benefits students and faculty alike, and while it may be too early to require FEI at every higher education institution, there is value in educating faculty and assessment professionals about its benefits. Although it is outside the scope of this short blog post, adopting FEI as a common practice should be approached carefully, intentionally, and with best practices for organizational change management.

Some final thoughts: to get students engaged in providing good feedback, the practice of FEI ideally has to be championed by the faculty. It could be mandated by administration, but that would likely not engender as much buy-in, and if the faculty, who are the primary touch-points for students, aren’t sold on the practice or participate begrudgingly, the data collected will not be optimal or actionable. Students also talk with each other across cohorts; if students in upper classes have a negative opinion of the process, that will have a trickle-down effect. What is the best way to make students disengage? Don’t close the feedback loop.

 

References and Resources 

Best Practices in Designing Course Evaluations. (2022). Hanover Research. 

Fluckiger, J., Tixier, Y., Pasco, R., & Danielson, K. (2010). Formative Feedback: Involving Students as Partners in Assessment to Enhance Learning. College Teaching, 58, 136–140. https://doi.org/10.1080/87567555.2010.484031 

Holt, M. E., & Moore, A. B. (1992). Checking Halfway: The Value of Midterm Course Evaluation. Evaluation Practice, 13(1), 47–50. 

Snooks, M. K., Neeley, S. E., & Revere, L. (2007). Midterm Student Feedback: Results of a Pilot Study. Journal on Excellence in College Teaching, 18(3), 55–73. 

Taylor, R. L., Knorr, K., Ogrodnik, M., & Sinclair, P. (2020). Seven principles for good practice in midterm student feedback. International Journal for Academic Development, 25(4), 350–362. 

Veeck, A., O’Reilly, K., MacMillan, A., & Yu, H. (2016). The Use of Collaborative Midterm Student Evaluations to Provide Actionable Results. Journal of Marketing Education, 38(3), 157–169. https://doi.org/10.1177/0273475315619652 

 

Filed Under: Evaluation Methodology Blog

Evaluation in the Age of Emerging Technologies

November 15, 2023 by jhall152

By Richard Amoako

Greetings! My name is Richard Dickson Amoako. I am a second-year PhD student in the Evaluation, Statistics, and Methodology program at the University of Tennessee, Knoxville. My research interests focus on areas such as program evaluation, impact evaluation, higher education assessment, and emerging technologies in evaluation.

As a lover of technology and technological innovation, I am intrigued by technological advancements in all spheres of our lives. The most recent is the rapid development and improvement of artificial intelligence (AI) and machine learning (ML). As an emerging evaluator, I am interested in learning about the implications of these technologies for evaluation practice.

In this blog post, I explore the implications of these technologies for evaluation, including the technologies most relevant to the field, how they can change the conduct of evaluation, the benefits and opportunities they offer evaluators, and the challenges and issues that come with their use.

 

Relevant Emerging Technologies for Evaluation 

Emerging technologies are new and innovative tools, techniques, and platforms that can transform the evaluation profession. These technologies can broadly be categorized into four groups: data collection and management tools, data visualization and reporting tools, data analysis and modeling tools, and digital and mobile tools. Three examples of the most popular emerging technologies relevant to evaluation are artificial intelligence, machine learning, and big data analytics.

  • Data collection and analysis: AI and ML can help evaluators analyze data faster and more accurately. These technologies can also identify patterns and trends that may not be apparent to the naked eye. Emerging technologies have also led to new data collection methods, such as crowdsourcing, social media monitoring, and web analytics, which give evaluators access to a wider range of data sources and allow them to collect more comprehensive and diverse data. 
  • Increased access to data: Social media, mobile devices, and other technologies have made it easier to collect data from a wider range of sources. This can help evaluators gather more diverse perspectives and ideas. 
  • Improved collaboration: Evaluators can collaborate more effectively with the help of video conferencing, online collaboration platforms, and project management software, regardless of where they are located. 
  • Improved visualization: Evaluators can present their findings in a more engaging and understandable way by using emerging technologies like data visualization software and virtual reality. 

 

Challenges and Issues Associated with Emerging Technologies in Evaluation 

While emerging technologies offer many exciting opportunities for evaluators, they also come with challenges. One of the main challenges is keeping up to date with the latest technologies and trends. Evaluators should have a solid understanding of the technologies they use, as well as the limitations and potential biases associated with those technologies. In some cases, emerging technologies can be expensive or require specialized equipment, which can be a barrier for evaluators with limited resources. 

Another challenge is the need to ensure emerging technologies are used ethically and responsibly. As the use of emerging technologies in evaluation becomes more widespread, there is a risk that evaluators may inadvertently compromise the privacy and security of program participants. In addition, they may inadvertently misuse data. To address these challenges, our profession needs to develop clear guidelines and best practices for using these technologies in evaluation. 

To conclude, emerging technologies are revolutionizing the evaluation landscape, opening new opportunities for evaluators to collect, analyze, and use data. With artificial intelligence and machine learning, as well as real-time monitoring and feedback, emerging technologies are changing evaluation and increasing the potential for action-based research. However, as with any advancing technology, there are also challenges to resolve. Evaluators must keep up to date with the latest technologies and develop clear guidelines and best practices. They must also ensure that these technologies are used ethically and responsibly. 

 

Resources 

Adlakha D. (2017). Quantifying the modern city: Emerging technologies and big data for active living research. Frontiers in Public Health, 5, 105. https://doi.org/10.3389/fpubh.2017.00105 

Borgo, R., Micallef, L., Bach, B., McGee, F., & Lee, B. (2018). Information visualization evaluation using crowdsourcing. STAR – State of The Art Report, 37(7). Available at: https://www.microsoft.com/en-us/research/uploads/prod/2018/05/InfoVis-Crowdsourcing-CGF2018.pdf 

Dimitriadou, E., & Lanitis, A. A. (2023).  Critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learn. Environ, 10, 12. https://doi.org/10.1186/s40561-023-00231-3 

Huda, M., Maseleno, A., Atmotiyoso, P., Siregar, M., Ahmad, R., Jasmi, K. A., & Muhamad, N. H. N. (2018). Big data emerging technology: Insights into innovative environment for online learning Resources. International Journal of Emerging Technologies in Learning (iJET), 13(01), pp. 23–36. https://doi.org/10.3991/ijet.v13i01.6990 

Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall. 

World Health Organization. (2016). Monitoring and evaluating digital health interventions: A practical guide to conducting research and assessment. WHO Press. Available at:  https://saluddigital.com/wp-content/uploads/2019/06/WHO.-Monitoring-and-Evaluating-Digital-Health-Interventions.pdf 

Filed Under: Evaluation Methodology Blog

Martinez Coaching Ugandan Olympian in 2024 Paris Games

November 10, 2023 by jhall152

Kathleen Noble, a 2020 Olympic singles rower from Uganda, is being coached by Dr. James Martinez, an assistant professor in the Department of Educational Leadership & Policy Studies (ELPS). Dr. Martinez, himself a five-time U.S. National and Olympic team member from 1993 to 1998, began working with Mrs. Noble this past July after she moved to Knoxville with her husband, Nico.

“Kathleen is an exceptionally competitive athlete, and an even better person,” says Dr. Martinez. A 28-year-old graduate of Princeton University, Mrs. Noble was an internationally competitive youth swimmer, having competed at the 2012 Short-Course World Championships in Istanbul. The holder of many Ugandan national records in freestyle and butterfly events, she started rowing as a walk-on athlete in her sophomore year of college and ultimately competed at the 2019 Under-23 World Rowing Championships.

Competing for Uganda in the 2020 Olympics (held in 2021 due to COVID), Mrs. Noble is the first rower ever to compete for her country. “Kathleen is a world-class athlete in every sense of the word,” says Martinez. “Her passion to understand every aspect of the sport, from racing, to nutrition, to training, to rigging the boat, is inspiring.” Dr. Martinez and Mrs. Noble recently returned from the African Olympic Qualification Regatta in Tunisia, where she placed fourth among fifteen women single scullers, qualifying her for the Paris Games.

Dr. Martinez balances his UTK research (focused on school administrator self-efficacy), teaching and service demands, and family responsibilities while supervising Mrs. Noble’s preparation for the Olympics. “Days are pretty full,” he says, “but no more so than when I was a schoolteacher and in training myself while raising our young children back in the day.”

Dr. Martinez credits his wife, Elizabeth, who earned her Master’s degree from the University of Tennessee, Knoxville’s School of Landscape Architecture, for her incredible support. “She’s the glue that holds it all together,” he says.

Filed Under: News
