
Educational Leadership and Policy Studies


Archives for February 2024

Evaluation Capacity Building: What is it, and is a Job Doing it a Good Fit for Me?

February 15, 2024 by jhall152

By Dr. Brenna Butler

Hi, I’m Dr. Brenna Butler, and I’m currently an Evaluation Specialist at Penn State Extension (https://extension.psu.edu/brenna-butler). I graduated from the ESM Ph.D. program in May 2021, and in my current role, a large portion of my job involves evaluation capacity building (ECB) within Penn State Extension. What does ECB look like day-to-day, and would a job with an ECB component be a good fit for you? This blog post covers some of my thoughts and opinions on what ECB may look like in a job. Keep in mind that these opinions are exclusively mine and don’t represent those of my employer.

Evaluation capacity building (ECB) is the process of increasing the knowledge, skills, and abilities of individuals in an organization to conduct quality evaluations. This is often done by evaluators (like me!) providing the tools and information for individuals to conduct sustained evaluative practices within their organization (Sarti et al., 2017). The amount of literature covering ECB is on the rise (Bourgeois et al., 2023), indicating that evaluators taking on ECB roles within organizations may also be increasing. Although there are formal models and frameworks in the literature that describe ECB work within organizations (the article by Bourgeois and colleagues (2023) provides an excellent overview of these), I will cover three specific qualities of what it takes to be involved in ECB in an organization.

ECB Involves Teaching

Much of my role at Penn State Extension involves mentoring Extension Educators on how to incorporate evaluation into their educational programming. Sometimes this mentorship looks like a more formal teaching role: conducting webinars and trainings on topics such as writing good survey questions or developing a logic model. Other times it takes a more informal route, such as answering questions Extension Educators email me about data analysis or about ways to enhance their data visualizations for a presentation. Enjoying teaching and assisting others with all aspects of evaluation are key qualities of an effective evaluator who leads ECB in an organization.

ECB Involves Leading

Taking on an ECB role involves providing a great deal of guidance and serving as the go-to expert on evaluation within the organization. Individuals will often look to the evaluator in these positions for direction on evaluation and assessment projects. This requires speaking up in meetings to advocate for strong evaluative practices (“Let’s maybe not send out a 30-question survey where every single question is open-ended”). An evaluator involved in ECB work needs to be comfortable speaking up and pushing back against “how the organization has always done something.”

One way evaluators can tackle this “we’ve always done it this way” mentality is through an evaluation community of practice. Each meeting is organized around a different evaluation topic; members of the organization are invited to talk about what has and hasn’t worked well for them in that area and to showcase work they have conducted in collaboration with the evaluator. The intention is that these community of practice meetings, open to the entire organization, can be one way of moving toward evaluation best practices and leaning less on old habits.

ECB Involves Being Okay with “Messiness”

An organization may invest in hiring an evaluation specialist who can guide the group toward better evaluative practices precisely because it lacks in-house evaluation expertise. If this is the case, evaluation plans may not exist, and your role as the organization’s evaluator will be to develop evaluative processes from scratch. Alternatively, evaluations may already be occurring in the organization but not following best practices, and you will be tasked with leading the effort to improve them.

Work in this scenario can become “messy” in the sense that tracking down historical evaluation data, collected before an evaluator was guiding these efforts, can be very difficult. For example, there may be no centralized location or method for storing paper survey data. One version of the data may be tally marks on a sheet of paper indicating the number of responses to each question, while another version of the same survey data may sit in an Excel file with unlabeled rows. These scenarios require the evaluator to discern whether the historical data are worth combing through and combining so that they can be analyzed, or whether starting from scratch and collecting new data will ultimately save time and effort. Being part of ECB in an organization means being up for the challenge of working through these “messy,” complex scenarios.
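
To make the data-wrangling side of that concrete, here is a minimal sketch in Python with pandas of reconciling two such versions of the same survey. The file names and column labels are hypothetical; the point is simply that each format has to be coerced into a common shape before any combined analysis is possible.

```python
import pandas as pd

# Version 1 (hypothetical file): hand-tallied counts per response option,
# typed into a CSV with columns: question, option, count.
tallies = pd.read_csv("2019_survey_tallies.csv")

# Version 2 (hypothetical file): raw responses in an Excel file whose rows
# and columns were never labeled.
raw = pd.read_excel("2021_survey_raw.xlsx", header=None)
raw.columns = ["q1", "q2", "q3"]  # labels recovered from the paper form

# Collapse the raw responses into the same question/option/count shape...
raw_counts = (
    raw.melt(var_name="question", value_name="option")
       .value_counts()
       .rename("count")
       .reset_index()
)

# ...so both years can be combined and compared on a common structure.
combined = pd.concat(
    [tallies.assign(year=2019), raw_counts.assign(year=2021)],
    ignore_index=True,
)
print(combined.groupby(["year", "question"])["count"].sum())
```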

Hopefully, this post has provided a brief overview of some of the work evaluators do in ECB within organizations and can help you discern whether a position involving ECB may be in your future (or not!).

 

Links to Explore for More Information on ECB

https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/strengthen-evaluation-capacity

https://www.oecd.org/dac/evaluation/evaluatingcapacitydevelopment.htm

http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf

https://wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf

https://scholarsjunction.msstate.edu/cgi/viewcontent.cgi?article=1272&context=jhse

 

References

Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29-49. https://doi.org/10.1177/10982140221106991

Sarti, A. J., Sutherland, S., Landriault, A., DesRosier, K., Brien, S., & Cardinal, P. (2017). Understanding of evaluation capacity building in practice: A case study of a national medical education organization. Advances in Medical Education and Practice, 8, 761-767. https://doi.org/10.2147/AMEP.S141886

Filed Under: Evaluation Methodology Blog

Supporting Literacy Teachers with Actionable Content-Based Feedback

February 6, 2024 by jhall152

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 3 of a four-part series on actionable feedback, covering Leadership Content Knowledge (LCK) and teacher feedback in the areas of STEM, Literacy, and Early Childhood Education. Stay tuned for the final post in the series.

Missed the beginning of the series? Click here to read Part 1 on making teacher feedback count!

A strong literacy foundation in students’ early years is critical for success in their later ones. School leadership plays a significant part in establishing this foundation by equipping teachers with the right professional development.

Many (but not all) school leaders are versed in effective literacy instruction. Given its foundational importance, it is wise for principals — and others who observe and mentor teachers — to leverage the key elements of effective literacy instruction in the observation cycle. In this blog post, we outline two ways to do so.

Jan Dole, Parker Fawson, and Ray Reutzel suggest that one way to use research-based supervision and feedback practices in literacy instruction is to include in the observation cycle tools, guides, and checklists that specifically focus on literacy instruction, such as:

  • The Protocol for Language Arts Teaching Observations (PLATO; Grossman, 2013)
  • The Institute of Education Sciences’ (IES) K-3 School Leader’s Literacy Walkthrough Guide (Kosanovich et al., 2015)
  • The Institute of Education Sciences’ (IES) Grades 4-12 School Leaders Literacy Walkthrough Guide (Lee et al., 2020)

These tools use a rubric or checklist to highlight key concepts, or “look-fors,” of literacy-rich environments. Some examples follow:

  • Strategy Use and Instruction: The teacher’s ability to teach strategies and skills that support students in reading, writing, speaking, listening, and engaging with literature (PLATO)
  • Literacy Texts: Retell familiar stories, including key details (IES K-3; Kosanovich et al., 2015)
  • Vocabulary and Advanced Word Study: Explicit instruction is provided in using context clues to help students become independent vocabulary learners using literary and content area text (IES 4-12; Lee et al., 2020)

A second way is to develop professional learning communities (PLCs) to extend literacy supervision and feedback. Successful literacy-focused PLCs:

  • Establish a shared literacy mission, vision, values, and goals,
  • Engage in regular collective inquiry on evidence-based literacy practices, and
  • Promote continuous literacy instruction improvement among staff.

These strategies can be used by school leaders directly or to complement the work of a school literacy coach. Ready to create a learning community in your school or district? Read KickUp’s tips for setting PLCs up for success.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next post that will focus on concrete ways to provide feedback to Early Childhood Education teachers.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in literacy, see Chapter 9 by Janice A. Dole, Parker C. Fawson, and D. Ray Reutzel.

Filed Under: News

Timing is Everything… Or Is It? How Do We Incentivize Survey Participation?

February 1, 2024 by jhall152

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as Director of Assessment at East Tennessee State University’s College of Pharmacy.

Let me tell you a story, and you are the main character!

4:18pm Friday Afternoon:

Aaaaaand *send*.

You put the finishing touches on your email. You’ve had a busy but productive day. Your phone buzzes. You reach down to the desk and turn on the screen to see a message from friends you haven’t seen in a while.

Tonight still good?

“Oh no! I forgot!” you tell yourself as you flop back in your office chair. “I was supposed to bring some drinks and a snack to their house tonight.”

As it stands – you have nothing.

You look down at your phone while you recline in your office chair, searching for “grocery stores near me.” You find the nearest result and bookmark it for later. You have a lot left to do, and right now, you can’t be bothered.

Yes! I am super excited! When is everyone arriving? You type hurriedly in your messaging app and hit send.

You can’t really focus on anything else. One minute passes by and your phone lights up again with the notification of a received text message.

Everyone is getting here around 6. See you soon!

Thanks! Looking forward to it!

You lay your phone down and dive back into your work.

4:53pm:

Work is finally wrapped up. You pack your laptop into your backpack, grab a stack of papers, and joggle them on your desk to get them at least a little orderly before jamming them into the backpack. You shut your door and rush to your vehicle. You start your car and navigate to the grocery store you bookmarked earlier.

“17 minutes to your destination,” your GPS says.

5:12pm:

It took two extra minutes to arrive because, as usual, you caught the stoplights on the wrong rotation. You finally find a parking spot, shuffle out of your car and head toward the entrance.

You freeze for a moment. You see them.

You’ve seen them many times, and you always try to avoid them. You know there is going to be the awkward interaction of a greeting and a request of some sort, usually for money. Your best course of action is to ignore them. Everyone knows that you hear them, but pretending otherwise is a small price to pay in your hurry.

Sure enough, “Hello! Can you take three minutes of your time to answer a survey? We’ll give you a free t-shirt for your time!”

You shoot them a half smile and a glance as you pick up your pace, rushing past the pop-up canopy and a table stacked with items you barely notice.

Shopping takes longer than you’d hoped. The lines are long at this time of day. You don’t have much, just an armful of goods, but no matter, you must wait your turn. Soon, you make your way out of the store to be unceremoniously accosted again.

5:32pm:

You have to drive across town, and now you won’t even have enough time to go home and change before your dinner engagement. You rush toward the door. The sliding doors part as you head out, right past them.

“Please! If you will take three minutes, we will give you a T-shirt. We really want your opinion on an important matter in your community!”

You gesture with your hand and explain, “I’m sorry, but I’m in a terrible rush!”

——————————————————————————————–

So, what went wrong for the survey researchers? Why didn’t you answer the survey? They were at the same place at the same time as you. They offered you an incentive to participate. They specified that it would take only three minutes of your time. So, why did you brush them off, as you have so many other charities and survey givers stationed in front of your store of choice in the past?

Oftentimes, we are asked for our input or our charity, but before we even receive the first invitation, we have already determined that we will not participate. Why? In this scenario, you were in a hurry, and the incentive they were offering was not motivating to you.

Would it have changed your willingness to participate if they offered a $10 gift card to the store you were visiting? Maybe, maybe not.

The question, more and more, is how do we incentivize participation in a survey? Paper, online, person-to-person: all of these modes are suffering from the conundrum of falling response rates (Lindgren et al., 2020), and that impacts the validity of your research study. How can you ensure that you are getting a heterogeneous sample from your population? How can you be sure that you are getting the data you need from the people you want to sample? This can be a challenge.
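
As a rough illustration of that heterogeneity check, here is a minimal sketch in Python (with made-up numbers) that compares respondent demographics against known population benchmarks and flags underrepresented groups. The 80%-of-benchmark cutoff is just an illustrative rule of thumb, not an established standard.

```python
# Hypothetical benchmarks for the target population (e.g., from census data).
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical counts of completed surveys by age group.
respondents = {"18-34": 42, "35-54": 128, "55+": 230}

total = sum(respondents.values())
for group, target in population.items():
    observed = respondents[group] / total
    # Flag groups whose share falls well below the benchmark (illustrative cutoff).
    flag = "  <-- underrepresented" if observed < 0.8 * target else ""
    print(f"{group}: sample {observed:.1%} vs. population {target:.1%}{flag}")
```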

In recently published work on survey incentives, many studies acknowledge that time and place affect participation, but we don’t quite understand how. Some studies, such as Lindgren et al. (2020), have tried to determine the best time of day and day of the week to invite survey participants, but they themselves discuss a limitation endemic to many studies: a lack of heterogeneity among participants and the interplay of response and nonresponse bias:

While previous studies provide important empirical insights into the largely understudied role of timing effects in web surveys, there are several reasons why more research on this topic is needed. First, the results from previous studies are inconclusive regarding whether the timing of the invitation e-mails matter in web survey modes (Lewis & Hess, 2017, p. 354). Secondly, existing studies on timing effects in web surveys have mainly been conducted in an American context, with individuals from specific job sectors (where at least some can be suspected to work irregular hours and have continuous access to the Internet). This makes research in other contexts than the American, and with more diverse samples of individuals, warranted (Lewis & Hess, 2017, p. 361; Sauermann & Roach, 2013, p. 284). Thirdly, only the Lewis and Hess (2017), Sauermann and Roach (2013), and Zheng (2011) studies are recent enough to provide dependable information to today’s web survey practitioners, due to the significant, and rapid changes in online behavior the past decades. (p. 228)

Timing, place/environment, and matching the incentive to the situation and participant (and maybe even the topic, if possible) all influence response rates. Best practices indicate that pilot testing survey items can help create a better survey, but how about finding what motivates your target population to even agree to begin the survey in the first place? That is less explored, and I think it is an opportunity for further study.

This gets even harder when you are trying to reach hard-to-reach populations. Many times, it takes a variety of approaches, but what is less understood is how to decide on your initial approach. The challenge that other studies have run into, and something I think will continue to present itself as a hurdle, is this: because of the lack of research on timing and location, and because of the lack of heterogeneity in the studies that do exist, the generalizability of existing findings is limited, if not altogether impractical. So, that leads me full circle back to pilot testing incentives and timing for surveys. Get to know your audience!
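
As a concrete (and entirely hypothetical) illustration of such a pilot, here is a minimal sketch in Python using scipy. It compares response rates across invented invitation-timing and incentive conditions with a chi-square test of independence; the condition names and counts are made up for illustration only.

```python
from scipy.stats import chi2_contingency

# Hypothetical pilot results: for each condition, counts of
# [responded, did not respond] out of the invitations sent.
pilot = {
    "weekday_morning_no_incentive": [18, 182],
    "weekday_morning_gift_card":    [41, 159],
    "weekend_evening_no_incentive": [25, 175],
    "weekend_evening_gift_card":    [52, 148],
}

# Test whether response rates differ across conditions overall.
chi2, p, dof, expected = chi2_contingency(list(pilot.values()))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Descriptive response rates per condition to guide the main study design.
for condition, (yes, no) in pilot.items():
    print(f"{condition}: {yes / (yes + no):.1%}")
```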

Cool Citations to Read:

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., Crankshaw, E., Delahanty, J., & Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. J Med Internet Res, 20(6), e197. https://doi.org/10.2196/jmir.9461

Lindgren, E., Markstedt, E., Martinsson, J., & Andreasson, M. (2020). Invitation Timing and Participation Rates in Online Panels: Findings From Two Survey Experiments. Social Science Computer Review, 38(2), 225–244. https://doi.org/10.1177/0894439318810387

Robinson, S. B., & Leonard, K. F. (2018). Designing Quality Survey Questions. SAGE Publications, Inc. [This is our required book in Survey Research!]

Smith, E., Loftin, R., Murphy-Hill, E., Bird, C., & Zimmermann, T. (2013). Improving developer participation rates in surveys. 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), 89–92. https://doi.org/10.1109/CHASE.2013.6614738

Smith, N. A., Sabat, I. E., Martinez, L. R., Weaver, K., & Xu, S. (2015). A Convenient Solution: Using MTurk To Sample From Hard-To-Reach Populations. Industrial and Organizational Psychology, 8(2), 220–228. https://doi.org/10.1017/iop.2015.29

Neat Websites to Peek At:

https://blog.hubspot.com/service/best-time-send-survey (limitations again: no understanding of demographics; they do say not to send during high-volume work times, but not everyone works the same M-F 8:00am-4:30pm job)

https://globalhealthsciences.ucsf.edu/sites/globalhealthsciences.ucsf.edu/files/tls-res-guide-2nd-edition.pdf (this one is targeted directly at certain segments of hard-to-reach populations; again, generalizability challenges, but the idea is there)

Filed Under: Evaluation Methodology Blog
