From Evidence to Action
We build capability by designing practice that changes what professionals do next.
We ground learning in research and structure rehearsal, formative feedback, and reflection so that professionals perform reliably in real contexts, not just in training.
AI supports these practice cycles with timely prompts and consistent feedback while professionals retain full control over judgment and decision-making. By reducing cognitive load and clarifying next steps, AI also helps create the conditions for relational trust to grow.
What We Mean by Capability
Capability means reliable performance in real contexts.
It is not course completion.
It shows up in the quality of decisions professionals make under real constraints, with real stakes.
Capability is the difference between knowing what research says and applying it consistently when the situation is complex, time is limited, and the learner in front of you is sending mixed signals. We build it through structured practice that is specific, repeatable, and tied directly to the decisions that drive outcomes.
Our Theory of Change
Mindset CoPilot strengthens capability by changing what professionals practice repeatedly.
We build capability through a repeatable chain of change:
We focus practice on high-leverage, real-world decisions professionals must make under real constraints.
We deliver formative feedback tied directly to performance, so professionals know what to adjust next.
We structure reflection that links decisions to evidence and outcomes, strengthening judgment over time.
AI in Service of Human Judgment
Mindset CoPilot uses AI to strengthen practice and learning while professionals retain full control.
AI helps professionals:
Structure ongoing practice around high-leverage decisions
Surface patterns across attempts to support improvement over time
Focus attention on the most important next step
Receive timely prompts and feedback on performance
AI does not replace professional judgment or make decisions on behalf of learners. It supports learning by making practice cycles more consistent, more responsive, and easier to sustain in everyday work.
Mindset CoPilot participates in the EdSAFE AI Alliance Industry Council and aligns with the SAFE AI Framework, emphasizing Safety, Accountability, Fairness, and Efficacy.
Science Foundations
Guiding Principle
Mindset CoPilot treats evidence as infrastructure, not static proof points. Evidence shapes what we build, what we ask educators to practice, and how formative feedback strengthens capability over time.
Using evidence well requires a shift from "I" (personal preference) and "we" (shared tradition) to "it" (what the evidence supports). When research points to better approaches, we update routines even when it challenges convention.
Domain Science
Each application begins with deep domain research that defines what learners must know and be able to do. Literacy Mindset is our first application, grounded in Dr. Paige Pullen’s research on evidence-aligned literacy instruction spanning emergent literacy, fluency, vocabulary, and explicit instruction.
We ground each application in domain-specific evidence, not learning theory alone.
Instructional Science
We draw on instructional science to shape explicit instruction, sequencing, practice design, and formative feedback. This supports effective choices under real classroom conditions and reduces cognitive load during decision-making.
Learning Science and Adult Learning
We apply learning science and adult learning to strengthen retention, transfer, and motivation. Findings on spacing, retrieval, feedback, and reflection shape practice cycles so learning carries into daily instruction.
From Research to Learning Systems
We use evidence-centered design to connect claims about educator capability to observable performance and defensible evidence. We track participation in practice, decision quality across attempts, shifts in real-world instruction, and learner outcomes, disaggregated to examine equity.
Looking Ahead
We are collaborating with Georgia Southern University and the Kentucky Center for Reading Research at the University of Louisville to examine how practice-based professional learning systems support sustained instructional improvement over time. Dr. Paige Pullen is also authoring a new Literacy Canon for the Evidence Advocacy Center to guide evidence-aligned practice across the field.
Selected Evidence
The references below are a curated starting point for the research that informs our design decisions.
The Science of Learning
Hau, I. C. Love to Learn
Knowles, M. The Adult Learner
Mayer, R. Multimedia Learning
Bjork, R. A. and Bjork, E. L. Desirable difficulties and long-term retention
Instructional Science
Carnine, D. et al. Direct instruction and instructional design
Kirschner, P. A. and Hendrick, C. How Learning Happens / How Teaching Happens
Hattie, J. Visible Learning
Literacy Science
Pullen, P. C. Promising interventions for emergent literacy
Pullen, P. C. The complex nature of reading fluency
Pullen, P. C. A tiered intervention model for early vocabulary instruction
Pullen, P. C. Effects of explicit instruction on decoding
Handbook of Response to Intervention (RTI) and Multi-Tiered Systems of Support (MTSS)
University of Florida Lastinger Center for Learning. Literacy Matrix Annual Report
Assessment and Adaptive Learning Systems
Assessment in the Service of Learning Handbook Series, Volumes I–III
Mislevy, R. Evidence-centered design and principled assessment
Gunderia, S. Applied assessment design in adaptive instructional systems
Gunderia, S. et al. Personalized mastery learning ecosystems using Bloom’s objects of change