Evaluating Adaptive Edtech: 3 Guiding Questions


by Lewis Poché

“There’s an app for that,” one of Apple’s slogans, was trademarked 15 years ago, yet it is as relevant as ever. There are countless apps, websites, and other digital products available to educators. The edtech realm once faced the problem of too few options; now there are too many.

So, the question becomes: How do educators evaluate and select the best tools for their students? The University of Notre Dame’s Higher-Powered Learning team has found that educators are hungry for guidance on what works. That hunger led me to create our own evaluation tool and to seek out resources like the Workforce EdTech Tool Evaluation Criteria.

The first criterion is “Proven Effectiveness.” While there are repositories of edtech tool reviews (EdTech Index, for example), perhaps you want to review a tool’s effectiveness yourself. What are the hallmarks you should look for?

Defining Adaptive Software

The term “edtech” includes a wide swath of tools. Let’s narrow our focus to a subset of edtech tools called “adaptive software,” which can be defined as “programs that present content in a dynamic way to help meet content standards.” We aren’t talking about e-textbooks or other static content. Nor are we talking about general edtech tools that might support users along their journeys (GenAI tools, spell checkers, translators) but that don’t include specific learning objectives related to educational content.

Khan Academy is my adaptive software “first love.” When I taught middle school math, Khan Academy was the resource I turned to for personalizing and differentiating learning, and I had my students use it daily to apply what they had learned. With its vast repository of practice questions, students never run out of learning opportunities. You might consider Khan Academy a digital version of a worksheet.

That is not all that Khan Academy offers. Through its legendary YouTube channel, Khan Academy has shared thousands of instructional videos on topics ranging from unit rates to Renaissance art, which students can watch if they need additional instruction on a topic. Students also get immediate feedback when they answer a question, creating tight formative feedback loops. Furthermore, Khan Academy provides teachers with reports on how students are performing on practice questions, data that can inform their teaching.

3 Key Questions

The effectiveness of any tool lies not in the tool itself but in how it is implemented. Still, there are some questions we can ask about a tool before we use it. Here are the three basic questions I ask when reviewing adaptive software:

  1. Does it gather baseline data on a user’s proficiency through a diagnostic assessment?
  2. Does it provide instruction that mimics that of a good tutor through videos, hints, demonstrations, etc.?
  3. Does it adjust and personalize the learning path according to a user’s responses on assessments?

If a tool claims to be “adaptive” but doesn’t have these components, it is not taking advantage of the benefits of using technology to differentiate and personalize learning. We know that throwing inadequate technology at learners can be detrimental, or at the very least, less effective than tech-free learning.

To return to an earlier example, Khan Academy meets the three criteria for effective adaptive software in the following ways: 

  • gathering baseline student data through a Khan Academy assessment or a standardized diagnostic assessment platform (e.g., NWEA MAP Growth)
  • mimicking the instruction provided by a teacher through supportive resources if students get stuck
  • dynamically adapting the learning path according to student responses to practice questions
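
If it helps to see the whole checklist in one place, here is a minimal sketch in Python. It is purely illustrative: the criteria wording, function name, and output format are mine, not part of any official rubric.

```python
# Illustrative sketch only: recording a yes/no review against the
# three adaptive-software questions. The criteria wording and
# function name below are placeholders, not an official rubric.

ADAPTIVE_CRITERIA = [
    "Gathers baseline data through a diagnostic assessment",
    "Provides tutor-like instruction (videos, hints, demonstrations)",
    "Adjusts the learning path based on assessment responses",
]

def review_tool(name, answers):
    """Print a checklist and an overall verdict for one tool."""
    for criterion, met in zip(ADAPTIVE_CRITERIA, answers):
        print(f"  [{'x' if met else ' '}] {criterion}")
    verdict = "meets" if all(answers) else "does not yet meet"
    print(f"{name} {verdict} all three criteria.")

# Per the bullets above, Khan Academy checks all three boxes.
review_tool("Khan Academy", [True, True, True])
```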

The three questions above are just the tip of the iceberg when it comes to reviewing adaptive software. In our Adaptive Software Review Standards, we identify a couple dozen essential characteristics of effective adaptive software. (For a more in-depth explanation of these standards, read our three blog posts: Adaptive, Data & Intervention, and Engagement.)

In this approach, we attempt to boil down effective software to a handful of instructional standards graded on a 2-point scale. For many educators, this simplifies the evaluation process.
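
To picture how grading on a 2-point scale tallies up in practice, here is another short, purely illustrative sketch. It assumes the two points are simply “met” and “not met,” and the standard names are placeholders I invented, not our actual standards.

```python
# Illustrative only: tallying standards graded on a 2-point
# (met / not met) scale. The standard names are placeholders,
# not the actual Adaptive Software Review Standards.

standards = {
    "Diagnostic baseline assessment": True,
    "Tutor-like instructional supports": False,
    "Dynamic learning-path adjustment": True,
}

met = sum(standards.values())
print(f"{met} of {len(standards)} standards met")
```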

However, a pragmatic, comprehensive, qualitative approach is also useful when weighing many considerations at once. I was delighted to find that the Workforce EdTech Scoring Rubric offers exactly that: a formalized process built around a qualitative, “note-taking” approach to comparing software. Compared to our Adaptive Software Review Standards, the Workforce EdTech Rubric casts a broader net regarding which tools might meet your needs. Comparing the two is not an apples-to-apples exercise; they have different use cases and serve different functions.

If you’re reviewing adaptive software for its effectiveness, I encourage you to reflect on the three key questions I posed above and ask the pragmatic question, “Does this tool do the job I want it to do?” 

Lewis Poché is the Associate Program Director of Higher-Powered Learning (HPL) at the University of Notre Dame’s Alliance for Catholic Education. HPL empowers Catholic school teachers and leaders to leverage technology and related innovative research-based educational practices to meet the needs of all learners. To learn more about Higher-Powered Learning, please visit ace.nd.edu/hpl.

