
Nicole M. Deterding, PhD

Qualitative Interviewing and Program Evaluation Methods

Flexible Coding of In-Depth Interviews

A 21st Century Approach

Nicole M. Deterding and Mary C. Waters

2021. Sociological Methods & Research, 50 (2): 708-739.

 

    Plain Language Summary

Grounded theory coding methods, rooted in the 1960s technology of colored pens and index cards, often struggle to meet the demands of contemporary semi-structured interview research. We argue that the classic grounded theory framework is frequently a poor fit for today's large-scale studies, which often involve multi-person teams and mixed-method designs. Drawing on years of experience with diverse datasets, we propose a "flexible coding" approach that leverages the power of modern qualitative data analysis (QDA) software to enhance rigor and transparency. By streamlining the organization and retrieval of data, this method allows researchers to manage complex projects more effectively while facilitating the reanalysis and secondary use of interview data.
 

    Key Takeaways

  • Team-Friendly Methods: Offers strategies specifically designed for large-scale projects, multi-person research teams, and mixed-methods designs.

  • Software-Based Workflow: Flips line-by-line coding logic to question-first indexing to aid data search and reduction, using the organizational power of QDA software.

  • Transparency & Rigor: Provides a clear, step-by-step approach to coding that makes the analytical process more visible and easier for others to follow.

  • Data Longevity: Emphasizes organizing data in ways that facilitate secondary analysis and future use by other scholars.

Lessons from the Social Innovation Fund

Supporting Evaluation to Assess Program Effectiveness and Build a Body of Research Evidence

Lily Zandniapour and Nicole M. Deterding

2018. American Journal of Evaluation, 39 (1): 27-41.

    Plain Language Summary

As funders increasingly prioritize evidence-based social programs, the federal Social Innovation Fund (SIF) served as an important experiment in using "tiered evidence" funding to scale what works while supporting newer programs as they build their evidence base. Drawing on SIF's experience overseeing more than 130 evaluations, this article explains how the SIF model translated ambitious federal goals into practical requirements, templates, and technical assistance for nonprofits. By examining the public-private partnership at the heart of SIF, we highlight how funders can effectively build the evaluation capacity of small and mid-sized organizations. Ultimately, these lessons offer a roadmap for evaluators and policymakers seeking to produce credible findings that contribute to a more robust evidence base for addressing pressing social problems.

 

    Key Takeaways

  • From Theory to Tools: Demonstrates how broad federal evidence goals were translated into the specific requirements and tools for nonprofit success.

  • Building Evaluation Capacity: Offers a blueprint for how funders can provide the technical assistance necessary for small and mid-sized nonprofits to conduct rigorous research.

  • The "Tiered Evidence" Model: Explains a tiered funding structure to scale proven programs while investing in the evidence base of emerging ones.

  • Actionable Evidence: Highlights the importance of producing findings that are not just scientifically credible, but also practically useful for improving program delivery and informing policy.

Building Evidence in Challenging Contexts

 

Introduction to the Special Section

Nicole M. Deterding and Anna R. Solmeyer

2018. American Journal of Evaluation, 39 (1): 24-26.
 

In 2016, I coordinated a two-day, federal-wide evaluation methods meeting and co-edited a special section of the American Journal of Evaluation focused on the practical and methodological hurdles of conducting rigorous program evaluations in real-world social service settings. Articles in the special section emphasize the importance of balancing scientific rigor with the flexibility needed to adapt to such "challenging contexts," so that research findings can more effectively inform policy and practice.

AI Transparency: I used Google Gemini v3 to speed the drafting of these plain language summaries. All AI-drafted language was revised for accuracy, nuance, and voice by me.
