Educational Leadership and Policy Studies

Author: jhall152

How Do I Critically Consume Quantitative Research?

May 1, 2024 by jhall152

By Austin Boyd 

Every measurement, evaluation, statistics, and assessment (MESA) professional, whether an established educator and practitioner or an aspiring student, engages with academic literature in some capacity. Sometimes for work, other times for pleasure, but always in the pursuit of new knowledge. But how do we as consumers of research determine whether the quantitative research we engage with is high quality?

My name is Austin Boyd, and I am a researcher, instructor, and ESM alumnus. I have read my fair share of articles over the past decade and was fortunate enough to publish a few of my own. I have read articles in the natural, formal, applied, and social sciences, and while they all shared the title of peer-reviewed publication, there was definitely variability in the quality of quantitative research from one article to the next. Initially, it was difficult for me to even consider the idea that a peer-reviewed publication would be anything less than perfect. However, as I have grown as a critical consumer of research, I have devised six questions to keep in mind when reading articles with quantitative analyses that allow me to remain objective in the face of exciting results.

  1. What is the purpose of the article?

The first question to keep in mind when reading an article is, “What is its purpose?” Articles may state this in the form of research questions or even in the title by using words such as “empirical,” “validation,” and “meta-analysis.” While the purpose of an article has no bearing on its quality, it does impact the type of information a reader should expect to obtain from it. Do the research questions indicate that the article will be presenting exploratory research on a new phenomenon or attempting to validate previous research findings? Remaining aware of the article’s purpose allows you to determine whether the information is relevant and within the scope of what it should be providing.

  2. What information is provided about obtaining participants and about the participants themselves?

The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader. Information on the population of interest, how the selected participants were recruited, who they are, and why their results were or were not included in the analysis is essential for understanding the context of the research. Beyond the article itself, the demographics of the participants are also important for planning future research. While research participants are drawn largely from Western, educated, industrialized, rich, and democratic (WEIRD; Henrich et al., 2010) societies, it should not be assumed that this is the case for all research. The author(s) of an article should disclose demographic information about the participants so the readers understand the context of the data and the generalizability of the results, and so that researchers can accurately replicate or expand the research to new contexts.
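
To make this concrete, here is a minimal sketch (Python with pandas, using entirely made-up participant records and fields) of the kind of demographic summary an article should let you reconstruct: center and spread for continuous characteristics, counts and proportions for categorical ones.

```python
import pandas as pd

# Hypothetical participant records; a real study would report whatever
# categories fit its population of interest.
participants = pd.DataFrame({
    "age":    [19, 22, 34, 28, 41, 23, 30, 26],
    "gender": ["F", "M", "F", "F", "M", "NB", "M", "F"],
    "region": ["US", "US", "UK", "US", "CA", "US", "UK", "US"],
})

# Continuous demographics: report center and spread
print(participants["age"].agg(["mean", "std", "min", "max"]).round(1))

# Categorical demographics: report counts and proportions
print(participants["gender"].value_counts())
print(participants["region"].value_counts(normalize=True).round(2))
```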

  3. Do the analyses used make sense for the data and proposed research question(s)?

In order to obtain results from the quantitative data collected, some form of analysis must be conducted. The most basic methods of exploring quantitative data are called statistics (Sheard, 2018). The selected statistical analysis should align with the variables presented in the article and answer the research question(s) guiding the project. Variables measured on a nominal scale should not be used as the outcome variable in analyses that compare group means, such as t-tests and ANOVAs, while ratio-scale variables should not be used in analyses of frequency distributions, such as chi-square tests. However, there are analyses that require the same variable types, making them seemingly interchangeable. For example, t-tests, logistic regressions, and point-biserial correlations all use two variables, one continuous and one binary. Yet each of these analyses addresses a different research question: “Is there a difference between groups?”, “Can we predict an outcome?”, and “Is there a relationship between the variables?” So while there is a level of subjectivity as to which statistical analysis can be used to analyze data, there are objectively incorrect analyses given the overarching research questions and the scale of measurement of the available variables in the data.
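
To illustrate the interchangeability point, here is a minimal sketch (Python, with simulated data rather than data from any real study) that runs all three analyses on the same pair of variables, one binary and one continuous. Each one computes without complaint, but each answers a different question.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=200)             # binary variable (0/1)
score = 50 + 5 * group + rng.normal(0, 10, 200)  # continuous variable

# "Is there a difference between groups?" -- independent-samples t-test
t, p_t = stats.ttest_ind(score[group == 0], score[group == 1])

# "Is there a relationship between the variables?" -- point-biserial correlation
r, p_r = stats.pointbiserialr(group, score)

# "Can we predict an outcome?" -- logistic regression predicting group from score
logit = sm.Logit(group, sm.add_constant(score)).fit(disp=0)

print(f"t-test:         t = {t:.2f}, p = {p_t:.4f}")
print(f"point-biserial: r = {r:.2f}, p = {p_r:.4f}")
print(f"logistic reg.:  slope = {logit.params[1]:.3f}, p = {logit.pvalues[1]:.4f}")
```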

  4. What results are provided?

While this is a seemingly straightforward question, there is a lot of information that can be provided about a given analysis. The most basic, and least informative, is a blanket statement about statistical significance. Even if there is no statistically significant result to report, a blanket statement is not sufficient given all the different values that can be reported for each analysis. For example, a t-test has a t value, degrees of freedom, p value, confidence interval, power level, and effect size, all of which provide valuable information about the results. While having some of these values does allow the reader to calculate the missing ones, the onus should not be put on the reader to do so (Cohen, 1990). Additionally, depending on the type of statistical analysis chosen, additional tests must be conducted to determine whether the data meet the assumptions necessary for the analysis. The results of these tests of assumptions, and the decisions made based on them, should be reported and supported by the existing literature.
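
As a sketch of what fuller reporting looks like, the following snippet (Python, simulated data; the power analysis is omitted for brevity) computes the t value, degrees of freedom, p value, 95% confidence interval for the mean difference, and Cohen's d, rather than stopping at a blanket significance statement.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(52, 10, 60)  # hypothetical scores, group 1
b = rng.normal(48, 10, 60)  # hypothetical scores, group 2

t, p = stats.ttest_ind(a, b)
df = len(a) + len(b) - 2

# Pooled standard deviation, then Cohen's d (effect size)
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df)
d = (a.mean() - b.mean()) / sp

# 95% confidence interval for the mean difference
diff = a.mean() - b.mean()
se = sp * np.sqrt(1 / len(a) + 1 / len(b))
tcrit = stats.t.ppf(0.975, df)
ci_low, ci_high = diff - tcrit * se, diff + tcrit * se

print(f"t({df}) = {t:.2f}, p = {p:.4f}, "
      f"95% CI [{ci_low:.2f}, {ci_high:.2f}], d = {d:.2f}")
```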

  5. Is there any discussion of limitations?

Almost every article has limitations of some form or another, which should be made known to the reader. If an article truly had no limitations, the author would make a point of stating as much. Limitations include limits to the generalizability of the findings, confounding variables, or simply time constraints. While these might seem negative, they are not immediate reasons to discredit an article entirely. As was the case for the demographics, the limitations provide further context about the research. They can even be useful in providing direction for follow-up studies, in the same way a future research section would.

  6. Do you find yourself still having questions after finishing the article?

The final question to keep in mind once you have finished reading an article is “Do you still have questions?” At the end of an article, you shouldn’t find yourself needing more information about the study. You might want to know more about the topic or similar research, but you shouldn’t be left wondering about pieces of the research design or other methodological aspects of the study. High-quality research deserves an equally high-quality article, which includes ample information about every aspect of the study. 

While not an exhaustive list, these six questions are designed to provide a starting point for determining if research with quantitative data is of high quality. Not all research is peer-reviewed, including conference presentations, blog posts, and white papers, and simply being peer-reviewed does not make a publication infallible. It is important to understand how to critically consume research in order to successfully navigate the ever-expanding body of scientific research. 

Additional Resources:

https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/  

https://statmodeling.stat.columbia.edu/2021/06/16/wow-just-wow-if-you-think-psychological-science-as-bad-in-the-2010-2015-era-you-cant-imagine-how-bad-it-was-back-in-1999/ 

https://totalinternalreflectionblog.com/2018/05/21/check-the-technique-a-short-guide-to-critical-reading-of-scientific-papers/ 

https://undsci.berkeley.edu/understanding-science-101/how-science-works/scrutinizing-science-peer-review/ 

https://www.linkedin.com/pulse/critical-consumers-scientific-literature-researchers-patients-savitz/ 

References:

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. https://doi.org/10.1037/0003-066X.45.12.1304

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Sheard, J. (2018). Quantitative data analysis. In K. Williamson & G. Johanson (Eds.), Research methods (2nd ed., pp. 429–452). Chandos Publishing. https://doi.org/10.1016/B978-0-08-102220-7.00018-2

Filed Under: Evaluation Methodology Blog

Engaging Students in Online, Asynchronous Courses: Strategies for Success

April 15, 2024 by jhall152

By S. Nicole Jones, Ph.D. 

Hello! My name is Nicole Jones, and I am a 2022 graduate of the Evaluation, Statistics, and Methodology (ESM) PhD program at the University of Tennessee, Knoxville (UTK). I currently work as the Assessment & Accreditation Coordinator in the College of Veterinary Medicine (CVM) at the University of Georgia (UGA). I also teach online, asynchronous program evaluation classes for UTK’s Evaluation, Statistics, & Methodology PhD and Evaluation Methodology MS programs. My research interests include the use of artificial intelligence in evaluation and assessment, competency-based assessment, and outcomes assessment. 

Prior to teaching part-time for UTK, I served as a graduate teaching assistant in two online, synchronous ESM classes while enrolled in the PhD program: Educational Research and Survey Research. In addition, I taught in-person first-year seminars to undergraduates for many years in my previous academic advising roles. However, it wasn’t until I became involved in a teaching certificate program offered by UGA’s CVM this year that I truly began to reflect more on my own teaching style, and explore ways to better engage students, especially in an online, asynchronous environment. For those who are new to teaching online classes or just need some new ideas, I thought it would be helpful to share what I’ve learned about engaging students online.  

Online Learning 

While many online courses meet synchronously, meaning they meet virtually at a scheduled time through platforms like Zoom or other Learning Management System (LMS) tools, there are also online classes that have no scheduled meeting times or live interactions. These classes are considered asynchronous. If you have taken an online, asynchronous course, you likely already know that it can be easy to forget about the class, primarily because there is no scheduled class time to keep you on track. When I worked as an academic advisor, I would often encourage my students who registered for these types of courses to set aside certain days or times of the week to devote to those classes. Many college students struggle with time management, especially in the first year, so this was one way to help them stay engaged in the class and up to date with assignments.

While it is certainly important for students to show up (or log in) and participate, it’s even more important for instructors to create an online environment that will motivate students to do so. As discussed by Conrad and Donaldson (2012), online engagement is related to student participation and interaction in the classroom, and learning in the classroom (online or in-person) rests upon the instructor’s ability to create a sense of presence and engage students in the learning process. The key to engaging online learners is for students to be engaged and supported so they take responsibility for their own learning (Conrad & Donaldson, 2012). So, how might you create an engaging online environment for students?

Engaging Students in Online Classes 

Below are some strategies I currently use to engage students in my online, asynchronous program evaluation classes:  

  • Reach out to the students prior to the start of class via welcome email 
  • Post information about myself via an introduction post – also have students introduce themselves via discussion posts 
  • Develop a communication plan – let students know the best way to get in touch with me 
  • Host weekly virtual office hours – poll students about their availability to find the best time 
  • Clearly organize the course content by weekly modules 
  • Create a weekly checklist and/or introduction to each module 
  • Use the course announcements feature to send out reminders of assignment due dates  
  • Connect course content to campus activities, workshops, events, etc.  
  • Utilize team-based projects 
  • Provide opportunities for students to reflect on learning (i.e., weekly reflection journals) 
  • Provide feedback on assignments in a timely manner 
  • Allow for flexibility and leniency  
  • Reach out to students who miss assignment due dates – offer to meet one-on-one if needed 

In addition to these strategies, the Center for Teaching and Learning at Northern Illinois University has an excellent website with even more recommendations for increasing student engagement in online courses. Their recommendations focus on the following areas: 1) set expectations and model engagement, 2) build engagement and motivation with course content and activities, 3) initiate interaction and create faculty presence, 4) foster interaction between students and create a learning community, and 5) create an inclusive environment. I also recommend checking your current institution’s Center for Teaching and Learning to see if they have tips or suggestions as they may be more specific for the LMS your institution uses. Lastly, you may find the following resources helpful if you wish to learn more about student engagement and online teaching and learning. 

Helpful Resources 

American Journal of Distance Education: https://www.tandfonline.com/toc/hajd20/current  

Fostering Connection in Hybrid & Online Formats:
https://www.ctl.uga.edu/_resources/documents/Fostering-Connection-in-Hybrid-Online-Formats.pdf  

Conrad, R. M., & Donaldson, J. A. (2012). Continuing to engage the online learner: More activities and resources for creative instruction. San Francisco, CA: Jossey-Bass.

Groccia, J. E. (2018). What is student engagement? New Directions for Teaching and Learning, 154, 11-20.  

How to Make Your Teaching More Engaging: Advice Guide 

https://www.chronicle.com/article/how-to-make-your-teaching-more-engaging/?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_3030574_nl_Academe-Today_date_20211015&cid=at&source=ams&sourceid=&cid2=gen_login_refresh 

 How to Make Your Teaching More Inclusive:  

https://www.chronicle.com/article/how-to-make-your-teaching-more-inclusive/ 

Iowa State University Center for Excellence in Learning and Teaching: https://www.celt.iastate.edu/learning-technologies/engaging-students/ 

Khan, A., Egbue, O., Palkie, B., & Madden, J. (2017). Active learning: Engaging students to maximize learning in an online course. The Electronic Journal of e-Learning, 15(2), 107-115. 

Lumpkin, A. (2021). Online teaching: Pedagogical practices for engaging students synchronously and asynchronously. College Student Journal, 55(2), 195-207. 

Northern Illinois University Center for Teaching and Learning. (2024, March 1). Recommendations to Increase Student Engagement in Online Classes. https://www.niu.edu/citl/resources/guides/increase-student-engagement-in-online-courses.shtml.   

Online Learning Consortium: https://onlinelearningconsortium.org/read/olc-online-learning-journal/  

Watson, S., Sullivan, D. P., & Watson, K. (2023). Teaching presence in asynchronous online classes: It’s not just a façade. Online Learning, 27(2), 288-303. 

Filed Under: Evaluation Methodology Blog

Mad with Methods and Measures

Careers in Program Evaluation: Finding and Applying for a Job as a Program Evaluator

April 1, 2024 by jhall152

By Jennifer Ann Morrow, Ph.D. 

Introduction: 

Hi! My name is Jennifer Ann Morrow and I’m an Associate Professor in Evaluation, Statistics, and Methodology at the University of Tennessee-Knoxville. I have been training emerging assessment and evaluation professionals for the past 22 years. My main research areas are training emerging assessment and evaluation professionals, higher education assessment and evaluation, and college student development. My favorite classes to teach are survey research, educational assessment, program evaluation, and statistics.

What’s Out There for Program Evaluators? 

What kind of jobs are out there for program evaluators? What organizations hire program evaluators? Where should I start my job search? What should I submit with my job application? These are typical questions my students ask me as they consider joining the evaluation job market. Searching for a job can be overwhelming, and with so many resources and websites available it is easy to get lost in all of the information. Here are some strategies that I share with my students as I help them navigate the program evaluation job market; I hope you find them helpful!

First, I ask students to describe the skills/competencies they have and what types of skills they believe they are strong in (and hopefully enjoy using!). In our program we use the American Evaluation Association Competencies (https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies) in a self-assessment in which students rate how confident they are in their ability to perform each competency. They rate themselves each year that they are in our program, and we provide strategies for how to remedy any deficiencies. Conducting a self-assessment of your skills/competencies and strengths and weaknesses is a great way to figure out what types of jobs best fit your skillset. It is also helpful when crafting a cover letter! Check out the resources below for additional examples of self-assessments!

Second, I have students create or update their curriculum vitae (CV) and resume. Depending on the jobs they plan on applying for, they may need a CV or a resume. I tell them to use the information from their skills self-assessment and their graduate program of study to craft their CV/resume. I also have them develop a general cover letter (these should be tailored for each specific job) that showcases their experience, skills, and relevant work products. There are a ton of resources available online (see some listed below), and I share example CVs/resumes and cover letters from some of our graduates. I also encourage them to get feedback on these from faculty and peers before using them in a job application.

Third, I encourage students to develop (or clean up) their social media presence. I highly recommend creating a LinkedIn profile (My LinkedIn Profile). Make sure your profile showcases your skills, education, and experiences, and make connections with others in the program evaluation field. LinkedIn is also a great place to search for evaluation jobs! I also recommend that students create an academic website (Dr. Rocconi’s Website), where you can go into more detail about your experiences and share work products (e.g., publications, presentations, evaluation reports). Make sure you put your LinkedIn and website links at the top of your CV/resume!

Fourth, I provide my students with tips for where and how to search for program evaluation jobs. I encourage them to draft relevant search terms (e.g., program evaluator, evaluation specialist, program analyst, data analyst) and make a list of job sites (see the resources below for some of my favorites!) to use in the search. On a lot of these job sites you can search by key terms, job title, location, salary, etc., to help narrow down the results. Many of them also let you sign up for job alerts based on your search terms, so you receive an email when a new job fits. I also encourage students to join their major professional organizations (e.g., AEA) and sign up for their newsletters or listservs, as many job opportunities are posted there.

Lastly, I tell students to create an organized job search plan. I typically do this in Excel, but you can organize your information in a variety of formats and platforms. I create an Excel file that contains all of the jobs that I apply for (i.e., name of organization, link to job ad, contact information, date applied) and a list of when and where I am searching for jobs. When I was actively searching for jobs, I dedicated time each week to go through listserv emails and search job sites for relevant openings, and I updated my Excel file each week during my search. It helps to keep things organized in case you need to follow up with organizations regarding the status of your application.
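
For readers who prefer code to spreadsheets, here is a minimal sketch of the same tracking idea (Python with pandas; the column names, entries, and file name are all hypothetical, and writing the .xlsx file requires the openpyxl package):

```python
import pandas as pd

# Columns mirror the tracking file described above
tracker = pd.DataFrame(
    columns=["organization", "job_ad_link", "contact", "date_applied"]
)

# One hypothetical application; add a row per job you apply for
tracker.loc[len(tracker)] = [
    "Example Evaluation Org",
    "https://example.org/jobs/123",
    "recruiter@example.org",
    "2024-04-01",
]

tracker.to_excel("job_search_tracker.xlsx", index=False)
```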

So, good luck on your job search and I hope that my tips and resources are helpful as you start your journey to becoming a program evaluator! 

 

Resources 

American Evaluation Association Competencies: https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies  

Article about How to Become a Program Evaluator: https://www.evalcommunity.com/careers/program-evaluator/ 

Article about Program Evaluation Careers: https://money.usnews.com/money/careers/articles/2008/12/11/best-kept-secret-career-program-evaluator 

Article about Program Evaluation Jobs: https://www.evalcommunity.com/job-search/program-evaluation-jobs/ 

Creating a LinkedIn Profile: https://blog.hubspot.com/marketing/linkedin-profile-perfection-cheat-sheet  

Creating an Academic Website: https://theacademicdesigner.com/2023/how-to-make-an-academic-website/  

Evaluator Competencies Assessment: https://www.khulisa.com/wp-content/uploads/2021/02/2020-Evaluator-Competencies-Assessment-Tool-ECAT_Final_2020.07.27.pdf  

Evaluator Qualities: https://www.betterevaluation.org/frameworks-guides/managers-guide-evaluation/scope-evaluation/determine-evaluator-qualities 

Evaluator Self-Assessment: https://www.cdc.gov/evaluation/tools/self_assessment/evaluatorselfassessment.pdf  

Program Evaluation Curriculum Vita Tips: https://wmich.edu/sites/default/files/attachments/u1158/2021/Showcasing%20Your%20Eval%20Competencies%20in%20Your%20Resume%20or%20Vita%20for%20PDF.pdf  

Program Evaluation Resume Tips: https://www.zippia.com/program-evaluator-jobs/skills/#  

Resume and CVs Resources: https://www.careereducation.columbia.edu/topics/resumes-cvs  

Resume and Job Application Resources: https://academicguides.waldenu.edu/careerservicescenter/resumesandmore  

Six C’s of a Good Evaluator: https://www.evalacademy.com/articles/2019/9/26/what-makes-a-good-evaluator  

UTK’s Evaluation Methodology MS program (distance ed): https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/ 

AAPOR Jobs: https://jobs.aapor.org/jobs/?append=1&quick=industry%7Csurvey&jas=3 

American Evaluation Association Job Bank: https://careers.eval.org/ 

Evaluation Jobs: https://evaluationjobs.org/ 

Higher Ed Jobs: https://www.higheredjobs.com/ 

Indeed.com: https://www.indeed.com/ 

Monitoring and Evaluation Career Website: https://www.evalcommunity.com/ 

NCME Career Center: https://www.ncme.org/community/community-network2/careercenter 

USA Government Job Website: https://www.usajobs.gov/ 

 

Filed Under: Evaluation Methodology Blog

Brian Mells Recognized as Field Award Recipient

March 25, 2024 by jhall152

Dr. Brian Mells, Principal at Whites Creek High School in Metro Nashville Public Schools, has been named the recipient of the William J. and Lucille H. Field Award for Excellence in Secondary Principalship for the State of Tennessee.

The Field Award was established to recognize one outstanding secondary school leader each year who demonstrates leadership excellence through commitment to the values of civility, candor, courage, social justice, responsibility, compassion, community, persistence, service, and excellence. Administered by the College of Education, Health, and Human Sciences at the University of Tennessee, the Field Award identifies a Tennessee secondary school principal whose life and work are characterized by leadership excellence and encourages secondary school principals to pause and reflect upon their current leadership practice and to consider their experience, challenges, and opportunities in light of the personal values that they embody. 

 

The Field Award recipient for this year is Dr. Brian Mells, Principal of Whites Creek High School in Metro Nashville Public Schools. A secondary principal since 2016, Dr. Mells holds a bachelor’s degree from the University of Tennessee, a master’s from Trevecca Nazarene University, and an EdS and an EdD from Carson-Newman University. During Dr. Mells’ tenure at Whites Creek High School, he has led his campus to excellence by supporting academic rigor and student achievement and by strengthening positive relationships with all stakeholders. Dr. Mells is an exceptional school leader who has taken the initiative to implement numerous programs on his campus, inspire instructional innovation, and improve student achievement. Dr. Mells stated that the “core belief of [his] leadership is that all students can achieve and grow academically, socially, and emotionally, when the appropriate systems and structures are in place for them to be successful.”

Under Dr. Mells’ leadership, Whites Creek High School increased academic achievement outcomes for all students and earned an overall composite TVAAS of Level 5 for the first time in school history, a status it has maintained for the past two years. Dr. Mells was nominated for the Field Award by MNPS superintendent Adrienne Battle and endorsed by Chief of Innovation Renita Perry. Perry commented, “Dr. Mells is an innovative school leader who is passionate about developing collective efficacy and collective accountability among his faculty and staff to ensure that they achieve excellence for all stakeholders.” The Department of Educational Leadership and Policy Studies at the University of Tennessee is proud to name Dr. Mells as this year’s Field Award winner. Congratulations, Dr. Brian Mells!

Filed Under: Uncategorized

Mad with Methods and Measures

How My Dissertation Came to be through ESM’s Support and Guidance

March 15, 2024 by jhall152

By Meredith Massey, Ph.D. 

Who I am

Greetings! I’m Dr. Meredith Massey. I finished my PhD in Evaluation, Statistics, and Methodology (ESM) at UTK in the Fall of 2023. In addition to my PhD in ESM, I also completed graduate certificates in Women, Gender, and Sexuality and Qualitative Research Methods in Education. While I was a part-time graduate student, I also worked full-time as an evaluation associate at Synergy Evaluation Institute, a university-based evaluation center. By day, I worked for clients evaluating their STEM education and outreach programs. By night, I was an emerging scholar in ESM. During my time in the program, my research interests grew to include andragogical issues in applied research methods courses, classroom measurement and assessment, feminist research methods, and evaluation.

How my dissertation came to be

In the ESM program, students can choose to complete a three-manuscript dissertation rather than a traditional five-chapter dissertation. When it came time to start deciding what my dissertation would look like, my faculty advisor, Dr. Leia Cain, suggested I consider the three-manuscript option. As someone with varied interests, I found this option appealing because it allowed me the flexibility to work on three separate but related studies. My dissertation flowed from a research internship that I completed with Dr. Cain, in which I interviewed qualitative faculty about their assessment beliefs and practices within their qualitative methods courses. I wrote a journal article on that study to serve as my comprehensive exam writing requirement. Using my original internship study as the basis for my first dissertation manuscript was an expedient strategy, as it allowed me to structure my second and third manuscripts on the findings of my first study. I presented my ideas for my second and third manuscripts to my committee in my dissertation proposal, accepted their feedback on how to proceed with my studies, and then got to work.

Dissertation topic and results

In my multi-paper dissertation entitled “Interviews, rubrics and stories (Oh my!): My journey through a three-manuscript dissertation,” I chose to center faculty and students’ perspectives on assessment and learning. To that end, my first and second research studies both focused on those two issues, while the third paper went further into exploring the students’ perspective through my story of the parallel formations of my scholarly identity and my new identity as part of a married couple.

In the first study, “Interviewing the Interviewers: How qualitative faculty assess interviews,” I reported how faculty use interview assignments in their courses and how they engage with assessment tools such as rubrics for those interview assignments. We learned that the faculty view interview assignments as the best and most comprehensive assignment their students can complete to give them experience as qualitative researchers. Regarding assessment tools such as rubrics, while instructors had differing opinions on whether rubrics were an appropriate tool to use in their assessment practices, all the instructors believed that giving students feedback was an essential assessment practice. My findings in that manuscript helped shape the plan to implement the second study.

In “I can do qualitative research: Using student-designed rubrics to teach interviewing,” I detailed testing out an innovative student-created rubric for an interview assignment in an introductory qualitative research methods course and used student reflections as the basis for writing an ethnodrama about how students experience their first interview assignments and how they engaged with their rubric. From this study, we learned that students grew in their confidence in conducting interviews, experienced a transformation in their paradigm, and were conflicted about using the student-designed rubric in that some students found it useful, and some did not.

Both manuscripts informed my third manuscript, an autoethnography detailing the parallel transitions in my identity from an evaluator to a scholar and my identity from a single person to a married person. I wrote interweaving stories chronicling the parallels between the similar and contrasting methods I use as an evaluator and researcher, how this tied into my growing identity as a scholar, and how I noticed my identity changing throughout my engagement and being newly married to my longtime boyfriend, now husband. These studies contributed valuable knowledge to the existing, though limited, andragogical literature on qualitative research methods. My hope going forward is that qualitative faculty continue this focus, beginning conversations about their classroom assessments to complete their own andragogical studies determining the impact of their teaching on their students’ learning.

What’s next?

Now that I’m finished with my dissertation and my studies, I am happy to report that I have accepted a promotion at my job at Synergy Evaluation Institute, and I’ve also been given the opportunity to teach qualitative research methods courses as an adjunct in the ESM program. I’m excited to continue being associated with the program and teach future ESM students. Being in the ESM program at UTK, while difficult at times, has also been a joy. The ESM program encouraged me to explore my varied interests and ultimately supported me as I grew professionally as an evaluator and scholar. The program accommodated and respected me as a working professional, and I highly recommend the program to any student with an interest in working with data as an evaluator, assessment professional, statistician, qualitative researcher, faculty, or all of the above. There’s a place for all in ESM.

Resources

Journal article citation

Massey, M.C., & Cain, L.K. (In press). Interviewing the interviewers: How qualitative faculty assess interviews. The Qualitative Report.

Books specifically about Qualitative Research Methods Andragogy

Eisenhart, M., & Jurow, A. S. (2011). Teaching qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 699-714). Sage.

Hurworth, R. E. (2008). Teaching qualitative research: Cases and issues. Sense Publishers.

Swaminathan, R., & Mulvihill, T. M. (2018). Teaching qualitative research: Strategies for engaging emerging scholars. Guilford Publications.

Books to read to become familiar with Ethnodrama as a method

Leavy, P. (2015). Handbook of Arts-Based Research (2nd ed.). The Guilford Press.

Leavy, P. (2018). Handbook of Arts-Based Research (3rd ed.). The Guilford Press.

Saldana, J. (2016). Ethnotheatre: Research from page to stage. Routledge. http://doi.org/10.4324/9781315428932

Most useful citations to become familiar with autoethnography as a method

Cooper, R., & Lilyea, B. V. (2022). I’m Interested in Autoethnography, but How Do I Do It?. The Qualitative Report, 27(1), 197-208. https://doi.org/10.46743/2160-3715/2022.5288

Ellis, C. (2004). The ethnographic I: A methodological novel about autoethnography. AltaMira.

Ellis, C. (2013). Carrying the torch for autoethnography. In S. H. Jones, T. E. Adams, & C. Ellis (Eds.), Handbook of autoethnography (pp. 9-12). Left Coast Press.

Filed Under: Evaluation Methodology Blog

Introducing the Evaluation Methodology MS Program at UTK!

March 1, 2024 by jhall152

By Dr. Jennifer Ann Morrow 

Hi everyone! My name is Dr. Jennifer Ann Morrow, and I’m the program coordinator for the University of Tennessee, Knoxville’s new distance education master’s program in Evaluation Methodology. I’m happy to announce that we are currently taking applications for our first cohort, which will start in Fall 2024. In a world driven by data, the EM master’s program gives you the skills to make evidence-based decisions!

So Why Should You Join Our Program? 

Fully Online Program 

Our new program is designed for the working professional: all courses are fully online and asynchronous, which enables students to complete assignments at times convenient for them. Although our courses are asynchronous, our faculty offer optional weekly synchronous student hours/help sessions for additional assistance and mentorship. Students also participate in both group and individual advising sessions each semester, where they receive mentorship, practical experience suggestions, and career exploration guidance.

Applied Coursework 

Our 30-credit program is designed to be completed in just under 2 years (5 semesters, only 2 courses per semester!). Each class is designed to include hands-on, applied experience with the entire program evaluation process, including evaluation design, data collection, data analysis, and data dissemination. In their first year, students take a two-semester program evaluation course sequence, statistics 1, introduction to qualitative research 1, evaluation designs and data collection methods, and an elective. In their second year, students take survey research, disseminating evaluation results, and a two-semester evaluation practicum course sequence in which they finalize a portfolio of their evaluation experiences to fulfill the comprehensive exam requirement. Students who are unable to take 6 credits a semester have up to 6 years to complete the degree at a slower pace.

Experienced Faculty 

Our faculty are experienced educators! All faculty work as evaluators or in related jobs such as assessment professional, applied researcher, or psychometrician. They are dedicated faculty who understand what skills and competencies are needed in the evaluation field and ensure that their classes focus on them. All are actively involved in their professional organizations (e.g., American Evaluation Association, American Psychological Association, Association for the Assessment of Learning in Higher Education, Association for Institutional Research) and publish their scholarly work in peer-reviewed journals.

How to Apply 

It’s easy to apply! Go to the UTK Graduate Admissions Portal (https://apply.gradschool.utk.edu/apply/) and fill out your application. You will need 2-3 letters of recommendation (provide your recommenders’ contact information and UTK will reach out to them), college transcripts, a goals statement (a letter introducing yourself and explaining why you want to join our program), and the application fee. No GRE scores are needed! Applications are due by July 1st of each year (though we will review them early if you submit before then!). Tuition is $700 per graduate credit ($775 out of state).

 

Contact Me for More Information 

If you have any questions about our program just reach out! 

 

Jennifer Ann Morrow Ph.D.
jamorrow@utk.edu
(865)-974-6117
https://elpsclone.flywheelsites.com/people/jennifer-ann-morrow-phd/

Helpful Resources 

Evaluation Methodology Program Website: https://elpsclone.flywheelsites.com/evaluation-methodology-ms/  

Evaluation Methodology Program VOLS Online Website: https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/  

Evaluation Methodology Program Student Handbook: https://elpsclone.flywheelsites.com/wp-content/uploads/2023/11/EM-MASTERS-HANDBOOK-2023.pdf  

UTK Educational Leadership and Policy Studies Website: https://elpsclone.flywheelsites.com/  

UTK Educational Leadership and Policy Studies Facebook Page: https://www.facebook.com/utkelps/?ref=embed_page  

UTK Graduate School Admissions Website: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/  

UTK Graduate School Admission Requirements: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/admission-requirements/  

UTK Graduate School Application Portal: https://apply.gradschool.utk.edu/apply/  

UTK Distance Education Graduate Fees: https://onestop.utk.edu/wp-content/uploads/sites/63/2023/11/Spring-24-GRAD_Online.pdf  

UTK Graduate Student Orientations: https://gradschool.utk.edu/future-students/graduate-student-orientations/  

American Evaluation Association: https://www.eval.org/ 

AEA Graduate Student and New Evaluator TIG: https://www.facebook.com/groups/gsnetig/ 

Filed Under: Evaluation Methodology Blog

Evaluation Capacity Building: What is it, and is a Job Doing it a Good Fit for Me?

February 15, 2024 by jhall152

By Dr. Brenna Butler

Hi, I’m Dr. Brenna Butler, and I’m currently an Evaluation Specialist at Penn State Extension (https://extension.psu.edu/brenna-butler). I graduated from the ESM Ph.D. program in May 2021, and in my current role, a large portion of my job involves evaluation capacity building (ECB) within Penn State Extension. What does ECB look like day-to-day, and is ECB a component of a job that would be a good fit for you? This blog post covers some of my thoughts and opinions about what ECB may look like on the job in general. Keep in mind that these opinions are exclusively mine and don’t represent those of my employer.

Evaluation capacity building (ECB) is the process of increasing the knowledge, skills, and abilities of individuals in an organization to conduct quality evaluations. This is often done by evaluators (like me!) providing the tools and information for individuals to conduct sustained evaluative practices within their organization (Sarti et al., 2017). The amount of literature covering ECB is on the rise (Bourgeois et al., 2023), indicating that evaluators taking on ECB roles within organizations may also be increasing. Although there are formal models and frameworks in the literature that describe ECB work within organizations (the article by Bourgeois and colleagues (2023) provides an excellent overview of these), I will cover three specific qualities of what it takes to be involved in ECB in an organization.

ECB Involves Teaching

Much of my role at Penn State Extension is providing mentorship to Extension Educators on how to incorporate evaluation in their educational programming. This mentorship sometimes takes the form of formal teaching, such as conducting webinars and trainings on topics like writing good survey questions or developing a logic model. Other times, it takes a more informal route, as when I answer questions Extension Educators email me regarding data analysis or ways to enhance their data visualizations for a presentation. Enjoying teaching and assisting others in all aspects of evaluation are key qualities of an effective evaluator who leads ECB in an organization.

ECB Involves Leading

Taking on an ECB role involves providing a great deal of guidance and being the go-to expert on evaluation within the organization. Individuals will often look to the evaluator in these positions for direction on evaluation and assessment projects. This requires speaking up in meetings to advocate for strong evaluative practices (“Let’s maybe not send out a 30-question survey where every single question is open-ended”). Being willing to speak up and push back against “how the organization has always done something” is something an evaluator involved in ECB work needs to be comfortable doing.

One way evaluators can tackle this “we’ve always done it this way” mentality is through an evaluation community of practice. Each meeting is held around a different evaluation topic, and members of the organization are invited to talk about what has and hasn’t worked well for them in that area and to showcase some of the work they have conducted through collaboration with the evaluator. The intention is that these community-of-practice meetings, open to the entire organization, can be one way of moving toward adopting evaluation best practices and leaning less on old habits.

ECB Involves Being Okay with “Messiness”

An organization may invest in hiring an evaluation specialist who can guide the group to better evaluative practices because they lack an expert in evaluation. If this is the case, evaluation plans may not exist, and your role as an evaluator in the organization will be to start from scratch in developing evaluative processes. Alternatively, it could be that evaluations have been occurring in the organization but may not be following best practices, and you will be tasked with leading the efforts to improve these practices.

Work in this scenario can become “messy” in the sense that tracking down historical evaluation data, collected before an evaluator was guiding these efforts in the organization, can be very difficult. For example, there may not be a centralized location or method for storing paper survey data. One version of the data may be tally marks on a sheet of paper indicating the number of responses to each question, while another version of the same survey data may be stored in an Excel file with unlabeled rows. These scenarios require the evaluator to discern whether the historical data are worth combing through and combining so that they can be analyzed, or whether starting from scratch and collecting new data will ultimately save time and effort. Being part of ECB in an organization means being up for the challenge of working through these “messy,” complex scenarios.
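
As a minimal sketch of that reconciliation (Python with pandas; the file name, response options, and counts are all hypothetical), one version of the data arrives as hand-transcribed tallies and another as an unlabeled Excel sheet:

```python
import pandas as pd

# Version 1: tally marks from a paper sheet, transcribed into counts
tally_counts = pd.Series({"Strongly agree": 12, "Agree": 7, "Disagree": 3})

# Version 2: raw responses stored in an Excel file with no header row
raw = pd.read_excel("old_survey.xlsx", header=None, names=["response"])
excel_counts = raw["response"].value_counts()

# Combine both versions into a single frequency table
combined = tally_counts.add(excel_counts, fill_value=0).astype(int)
print(combined)
```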

Hopefully, this provided a brief overview of some of the work done by evaluators in ECB within organizations and can help you discern if a position involving ECB may be in your future (or not!).

 

Links to Explore for More Information on ECB

https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/strengthen-evaluation-capacity

https://www.oecd.org/dac/evaluation/evaluatingcapacitydevelopment.htm

http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf

https://wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf

https://scholarsjunction.msstate.edu/cgi/viewcontent.cgi?article=1272&context=jhse

 

References

Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29-49. https://doi.org/10.1177/10982140221106991

Sarti, A. J., Sutherland, S., Landriault, A., DesRosier, K., Brien, S., & Cardinal, P. (2017). Understanding of evaluation capacity building in practice: A case study of a national medical education organization. Advances in Medical Education and Practice, 761-767. https://doi.org/10.2147/AMEP.S141886

Filed Under: Evaluation Methodology Blog

Supporting Literacy Teachers with Actionable Content-Based Feedback

February 6, 2024 by jhall152

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 3 of a four-part series on actionable feedback. Stay tuned for the next post, which will focus on Leadership Content Knowledge (LCK) and teacher feedback in Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1 on making teacher feedback count!

A strong literacy foundation in students’ early years is critical for success in their later ones. School leadership plays a significant part in establishing this foundation by equipping teachers with the right professional development.

Many (but not all) school leaders are versed in effective literacy instruction. Given its foundational importance, it is wise for principals — and others who observe and mentor teachers — to leverage the key elements of effective literacy instruction in the observation cycle. In this blog post, we outline two ways to do so.

Jan Dole, Parker Fawson, and Ray Reutzel suggest that one way to use research-based supervision and feedback practices in literacy instruction is to include in the observation cycle tools, guides, and checklists that specifically focus on literacy instruction, such as:

  • The Protocol for Language Arts Teaching Observations (PLATO; Grossman, 2013)
  • The Institute of Education Sciences’ (IES) K-3 School Leader’s Literacy Walkthrough Guide (Kosanovich et al., 2015)
  • The Institute of Education Sciences’ (IES): Grades 4-12 School Leaders Literacy Walkthrough Guide (Lee et al., 2020)

These tools highlight key concepts or what can be called “look-fors” of literacy rich environments by using a rubric or checklist. Some examples follow:

  • Strategy Use and Instruction: The teacher’s ability to teach strategies and skills that support students in reading, writing, speaking, listening, and engaging with literature (PLATO)
  • Literacy Texts: Retell familiar stories, including key details (IES K-3; Kosanovich et al., 2015)
  • Vocabulary and Advanced Word Study: Explicit instruction is provided in using context clues to help students become independent vocabulary learners using literary and content area text (IES 4-12; Lee et al., 2020)

A second way is to develop professional learning communities (PLCs) to extend literacy supervision and feedback. Successful literacy-focused PLCs:

  • establish a shared literacy mission, vision, values, and goals,
  • engage in regular collective inquiry on evidence-based literacy practices, and
  • promote continuous improvement in literacy instruction among staff.

These strategies can be used by school leaders or complement the work of a school literacy coach. Ready to create a learning community in your school or district? Read KickUp’s tips for setting PLCs up for success.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next post that will focus on concrete ways to provide feedback to Early Childhood Education teachers.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in literacy, see Chapter 9 by Janice A. Dole, Parker C. Fawson, and D. Ray Reutzel.

Filed Under: News

Timing is Everything… Or Is It? How Do We Incentivize Survey Participation?

February 1, 2024 by jhall152

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as Director of Assessment at East Tennessee State University’s College of Pharmacy.

Let me tell you a story, and you are the main character!

4:18pm Friday Afternoon:

Aaaaaand *send*.

You put the finishing touches on your email. You’ve had a busy, but productive day. Your phone buzzes. You reach down to the desk and turn on the screen to see a message from your friends you haven’t seen in a while.

Tonight still good?

“Oh no! I forgot!” You tell yourself as you flop back in your office chair. “I was supposed to bring some drinks and a snack to their house tonight.”

As it stands – you have nothing.

You look down at your phone while you recline in your office chair, searching for “grocery stores near me.” You find the nearest result and bookmark it for later. You have a lot left to do, and right now, you can’t be bothered.

Yes! I am super excited! When is everyone arriving? You type hurriedly in your messaging app and hit send.

You can’t really focus on anything else. One minute passes by and your phone lights up again with the notification of a received text message.

Everyone is getting here around 6. See you soon!

Thanks! Looking forward to it!

You lay your phone down and dive back into your work.

4:53pm:

Work is finally wrapped up. You pack your laptop into your backpack, grab a stack of papers, and joggle them on your desk to get them at least a little orderly before you jam them in the backpack. You shut your door and rush to your vehicle. You start your car and navigate to the grocery store you bookmarked earlier.

“17 minutes to your destination,” your GPS says.

5:12pm:

It took two extra minutes to arrive because, as usual, you caught the stoplights on the wrong rotation. You finally find a parking spot, shuffle out of your car and head toward the entrance.

You freeze for a moment. You see them.

You’ve seen them many times, and you always try to avoid them. You know there is going to be the awkward interaction of a greeting and a request of some sort, usually for money. Your best course of action is to ignore them. Everyone knows that you hear them, but it is a small price to pay in your hurry.

Sure enough, “Hello! Can you take three minutes of your time to answer a survey? We’ll give you a free t-shirt for your time!”

You shoot them a half smile and a glance as you pick up your pace and rush past the pop-up canopy and table stacked with items you barely pay attention to as you pass.

Shopping takes longer than you’d hoped. The lines are long at this time of day. You don’t have much, just an armful of goods, but no matter, you must wait your turn. Soon, you make your way out of the store to be unceremoniously accosted again.

5:32pm:

You have to drive across town. Now, you won’t even have enough time to go home and change before your dinner engagement. You rush toward the door. The sliding doors part as you head out, right past them.

“Please! If you will take three minutes, we will give you a T-shirt. We really want your opinion on an important matter in your community!”

You gesture with your hand and explain, “I’m sorry, but I’m in a terrible rush!”

——————————————————————————————–

So, what went wrong for the survey researchers? Why didn’t you answer the survey? They were at the same place at the same time as you. They offered you an incentive to participate. They specified that it would take only three minutes of your time to complete. So, why did you brush them off, as you have so many other charities and survey givers situated in front of your store of choice in the past?

Oftentimes, we are asked for our input, or our charity, but before we even receive the first invitation, we have already determined that we will not participate. Why? In this scenario, you were in a hurry. The incentive they were offering was not motivating to you.

Would it have changed your willingness to participate if they offered a $10 gift card to the store you were visiting? Maybe, maybe not.

The question, more and more, is how do we incentivize participation in a survey? Paper, online, person-to-person: all are suffering from the conundrum of falling response rates (Lindgren et al., 2020). This impacts the validity of your research study. How can you ensure that you are getting a heterogeneous sample from your population? How can you be sure that you are getting the data you need from the people you want to sample? This can be a challenge.

In recently published work on survey incentives, many studies acknowledge that time and place affect participation, but we don’t quite understand how. Some studies, such as Lindgren et al. (2020), have tried to determine the best time of day and day of the week to invite survey participants, but they themselves discuss a limitation of their study that is endemic to many others: the lack of heterogeneity of participants and the interplay of response and nonresponse bias:

While previous studies provide important empirical insights into the largely understudied role of timing effects in web surveys, there are several reasons why more research on this topic is needed. First, the results from previous studies are inconclusive regarding whether the timing of the invitation e-mails matter in web survey modes (Lewis & Hess, 2017, p. 354). Secondly, existing studies on timing effects in web surveys have mainly been conducted in an American context, with individuals from specific job sectors (where at least some can be suspected to work irregular hours and have continuous access to the Internet). This makes research in other contexts than the American, and with more diverse samples of individuals, warranted (Lewis & Hess, 2017, p. 361; Sauermann & Roach, 2013, p. 284). Thirdly, only the Lewis and Hess (2017), Sauermann and Roach (2013), and Zheng (2011) studies are recent enough to provide dependable information to today’s web survey practitioners, due to the significant, and rapid changes in online behavior the past decades. (p. 228)

Timing, place/environment, and matching the incentive to the situation and participant (and maybe even topic, if possible) are all influential in improving response rates. Best practices indicate that pilot testing survey items can help create a better survey, but what about finding what motivates your target population to agree to begin the survey in the first place? That is less explored, and I think it is an opportunity for further study.

This gets even harder when you are trying to reach hard-to-reach populations. Many times, it takes a variety of approaches, but what is less understood is how to decide on your initial approach. The challenge that other studies have run into, and something that I think will continue to present itself as a hurdle, is this: because of the lack of research on timing and location, and because of the lack of heterogeneity in the studies that do exist, the generalizability of existing findings is limited, if not altogether impractical. So, that leads me full circle back to pilot-testing incentives and timing for surveys. Get to know your audience!
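
As one illustration of what pilot-testing an incentive could look like, here is a minimal sketch (Python with SciPy; the pilot numbers are invented) comparing response rates between two incentive conditions with a chi-square test of independence:

```python
import numpy as np
from scipy import stats

# Rows: incentive condition; columns: [responded, did not respond]
table = np.array([
    [18, 82],  # free t-shirt offered to 100 invitees
    [31, 69],  # $10 gift card offered to 100 invitees
])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```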

Cool Citations to Read:

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., Crankshaw, E., Delahanty, J., & Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. J Med Internet Res, 20(6), e197. https://doi.org/10.2196/jmir.9461

Lindgren, E., Markstedt, E., Martinsson, J., & Andreasson, M. (2020). Invitation Timing and Participation Rates in Online Panels: Findings From Two Survey Experiments. Social Science Computer Review, 38(2), 225–244. https://doi.org/10.1177/0894439318810387

Robinson, S. B., & Leonard, K. F. (2018). Designing Quality Survey Questions. SAGE Publications, Inc. [This is our required book in Survey Research!]

Smith, E., Loftin, R., Murphy-Hill, E., Bird, C., & Zimmermann, T. (2013). Improving developer participation rates in surveys. 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), 89–92. https://doi.org/10.1109/CHASE.2013.6614738

Smith, N. A., Sabat, I. E., Martinez, L. R., Weaver, K., & Xu, S. (2015). A Convenient Solution: Using MTurk To Sample From Hard-To-Reach Populations. Industrial and Organizational Psychology, 8(2), 220–228. https://doi.org/10.1017/iop.2015.29

Neat Websites to Peek At:

https://blog.hubspot.com/service/best-time-send-survey (limitations again: no understanding of demographics; they did say not to send it during high-volume work times, but not everyone works the same type of M-F 8:00am-4:30pm job)

https://globalhealthsciences.ucsf.edu/sites/globalhealthsciences.ucsf.edu/files/tls-res-guide-2nd-edition.pdf (this is targeted directly towards certain segments of hard-to-reach populations. Again, generalizability challenges, but the idea is there)

Filed Under: Evaluation Methodology Blog

Dueñas Highlighted as a 2024 Emerging Scholar by Diverse Issues in Higher Education

January 31, 2024 by jhall152

Courtesy of the College of Education, Health, and Human Sciences

Mary Dueñas is passionate about student success, especially among underrepresented and marginalized student populations. Because of her passion for helping students thrive in a higher education environment, she dedicates a large portion of her scholarship to examining equity and access issues in higher education.


Her work hasn’t gone unnoticed. Diverse Issues in Higher Education recently named Dueñas “An Equity and Access Champion” and one of its Top 15 Emerging Scholars in its January 18, 2024, issue. The publication highlights emerging scholars making an impact on education on college campuses nationwide.

“Receiving this national recognition is wonderful, and I’m honored to share this platform with other outstanding scholars from different disciplines,” said Dueñas.

Dueñas is an assistant professor in the Department of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). In addition, she serves as program coordinator for the master’s program in College Student Personnel (CSP).

Using both quantitative and qualitative research methods, Dueñas focuses on Latina/o/x/e college students’ sense of belonging and their experience with imposter syndrome. She uses holistic frameworks and critical theory to share stories and explain systemic inequities that marginalized communities face in higher education.

“My research examines the ways in which larger social processes affect students and their overall well-being while also addressing underrepresented and marginalized students in relation to retention and success,” said Dueñas.

Cristobal Salinas, Jr., an associate professor of educational leadership and research methodology at Florida Atlantic University, nominated her for this prestigious national recognition. In his nomination letter, Salinas commended Dueñas for her commitment to scholarship that pushes the boundaries of higher education through novel perspectives and an innovative approach to research.

“This commitment to pioneering scholarship has been complemented by her unwavering dedication to teaching and mentoring the next generation of scholars, which is an integral part of her academic mission,” explains Salinas.

Despite having a full plate at CEHHS, Dueñas has authored several peer-reviewed journal articles, been a guest on a podcast, and has several works she is authoring or co-authoring under review. One is “Síndrome del impostor: The Impact of the COVID-19 Pandemic on Latinx College Students’ Experiences with Imposter Syndrome.” She is co-authoring “Culturally Responsive Mentoring: A Psychosociocultural Perspective on Sustaining Students of Color Career Aspirations in STEM”.

Dueñas takes a glass-half-full approach to her work, focusing on the whole student. In other words, she says it’s about the positives that make a student’s experience successful and asking questions about what works.

“There is a changing landscape in how we think about higher education,” Dueñas says. “It’s not so much about the students adapting to higher education; it’s more about how higher education institutions support and serve students.”

 

Filed Under: News

