Exploring the EdTech Maker Space: Our Coding Process


A goal of the Teaching Skills That Matter-SkillBlox (TSTM-SB) Instructional Pilot is to connect teachers to free and open digital resources they can use for instruction aligned with the TSTM framework. Together with our partners in this work, the American Institutes for Research, we want to empower teachers to locate, create, evaluate, and use free and open educational resources (OER). World Education has led a key activity supporting this goal: a unique form of professional development called the EdTech Makerspace (ETMS), a service-learning opportunity in which teachers 1) learn about OER and build skills to work with them and 2) collaborate to curate and create OER to add to the free application SkillBlox. Any teacher with a SkillBlox account can use it to locate activities and build instruction that helps learners build TSTM skills or develop competencies in the TSTM subject areas (i.e., workforce preparation; civics education; and financial, health, and digital literacies).

Another important purpose of the project is to conduct qualitative research to answer these questions: 

  • What characteristics of OER do teachers identify as relevant and useful for supporting TSTM-aligned instruction? 
  • What supports and resources are most productive for adult educators as they collaborate to produce and share TSTM-aligned OER?

Our research team believes in making our work as transparent as possible. In this and upcoming blog posts, we invite you behind the scenes of our research process to see how we have collected, reviewed, and analyzed data from each ETMS to gain meaningful insights about how teachers learn about OER and then leverage that knowledge to use or create them. By sharing both our methods and findings, we hope to build a stronger connection between research and practice to help educators better understand and apply our findings in their work. Let’s start by taking a look at the data we collected and our approach to reviewing it, known as coding.

Data Collection

First, some background. From June 2022 to December 2023, we worked with over 100 teachers in five ETMS events, each lasting 6-8 weeks and centered on either the curation or the creation of OER. We collected data from multiple sources to help us consider our research questions, including surveys completed by ETMS participants, notes and transcripts from observations of their engagement in the ETMS, and transcripts from focus group discussions with the participants and their facilitators. We anonymized our data (making sure that anyone reading our future reports would not be able to identify participants) and then used NVivo software to organize it and facilitate our coding work. NVivo allowed multiple researchers to code independently and simultaneously and then merge their analyses.
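If you are curious what a step like anonymization can look like in practice, here is a minimal sketch in Python. The names, labels, and sample sentence are invented for illustration; this is not the tooling or data we actually used, just a way to make the idea of swapping identifying names for neutral labels concrete.

    # A minimal sketch of pseudonymizing transcript text before analysis.
    # The names, labels, and sample sentence are hypothetical examples,
    # not actual participants or data from the study.
    import re

    pseudonyms = {
        "Maria": "Participant 01",
        "James": "Participant 02",
    }

    def anonymize(text, mapping):
        """Replace each known name with its neutral label, whole words only."""
        for name, label in mapping.items():
            text = re.sub(rf"\b{re.escape(name)}\b", label, text)
        return text

    print(anonymize("Maria said the activity worked well for James.", pseudonyms))
    # -> Participant 01 said the activity worked well for Participant 02.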

Data Coding: What and How

To make sense of the data collected, a team of four researchers qualitatively coded the data to locate themes that could lead to insights and help us answer the research questions introduced above. Coding is the process of assigning words or short phrases that label the content of text passages. Codes can summarize or distill data, capture its essence, or highlight salient or evocative points (Saldaña, 2016).
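If it helps to picture what coded data looks like, the small Python sketch below shows one way to represent a coded excerpt: a passage of text paired with one or more labels. The excerpt and code names are invented for illustration; our actual coding lived in NVivo, not in a structure like this.

    # Illustrative only: a coded excerpt is a text passage paired with one or
    # more short labels. The excerpt and code names here are invented examples,
    # not actual data or codes from the study.
    from dataclasses import dataclass, field

    @dataclass
    class CodedExcerpt:
        source: str                      # e.g., "focus group transcript"
        text: str                        # the passage being labeled
        codes: list = field(default_factory=list)

    excerpt = CodedExcerpt(
        source="focus group transcript",
        text="I wasn't sure how to tell whether a resource was openly licensed.",
        codes=["evaluating OER", "licensing questions"],
    )
    print(excerpt.codes)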

Like most coding processes, our approach has been iterative and collaborative. Stretching over two years of data collection and analysis, our process followed these steps: 

  1. After each of our five ETMSs, researchers reviewed new data independently, applying thematic labels (i.e., “codes”) to excerpts. 
  2. We then met to compare coded data, discussing similarities and differences in our interpretations, emerging themes, and questions to guide subsequent review of the data (see the simplified sketch after this list). 
  3. As new data was added from a completed ETMS, we built upon our prior coding work. We responded to new data to generate insights and added new codes as needed until we reached “saturation of codes” (i.e., the point where our codes adequately captured the themes that emerged from the data). 
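Here is the simplified sketch of step 2 mentioned above. It shows, in Python, how two researchers' independent coding of the same excerpt could be compared to surface agreements and differences. The excerpt ID and code labels are hypothetical; in practice this comparison happened in discussion, supported by NVivo.

    # A simplified, hypothetical sketch of comparing two researchers' codes for
    # the same excerpt. The excerpt ID and labels are invented for illustration.
    coder_a = {"excerpt_07": {"evaluating OER", "collaboration"}}
    coder_b = {"excerpt_07": {"evaluating OER", "time constraints"}}

    for excerpt_id in coder_a.keys() & coder_b.keys():
        shared = coder_a[excerpt_id] & coder_b[excerpt_id]
        differing = coder_a[excerpt_id] ^ coder_b[excerpt_id]
        print(f"{excerpt_id}: agreed on {sorted(shared)}; discussed {sorted(differing)}")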

This approach strengthened our analysis and interpretations by ensuring continuous and collaborative review and refinement of the collected data. Let’s now consider the key cycles of our iterative coding process. 

Cycle 1: Initial Inductive Coding 

Our first cycle of coding was inductive, based entirely on the data collected from the first ETMS. We read through the data looking for emergent patterns and themes. We discussed our initial themes and shared them with the larger TSTM-SkillBlox team, who had been present in the ETMS activities, to confirm the themes' relevance. This also meant the team could take insights gleaned from this conversation and refine upcoming Makerspaces. At this point, the purpose of our coding was to get a sense of what was happening during ETMS sessions and what participants said about their activities as makers. As we held the first few ETMSs, we continued this process of inductive coding. 

Figure 1. Second Cycle Codes

Cycle 2: Midpoint Coding Refinement 

After several ETMSs had occurred and researchers had identified commonalities across them, we organized the codes into families. Three researchers collaborated on this work, merging codes that were overlapping or essentially duplicative and grouping the rest by shared characteristics. We ended up with 15 top-level codes and 8 subcodes. Figure 1 shows our second cycle code list. 
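To picture what organizing codes into families means, the sketch below nests related subcodes under broader top-level codes and counts each. The code names are hypothetical stand-ins, not the actual second cycle codes shown in Figure 1.

    # Illustrative only: grouping codes into "families" amounts to nesting
    # related subcodes under a broader top-level code. These names are
    # hypothetical stand-ins, not the actual codes from Figure 1.
    code_families = {
        "Evaluating OER": ["licensing questions", "alignment to TSTM skills"],
        "Collaboration": ["peer feedback"],
        "Barriers": [],  # a top-level code may have no subcodes
    }

    top_level = len(code_families)
    subcodes = sum(len(children) for children in code_families.values())
    print(f"{top_level} top-level codes, {subcodes} subcodes")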

Cycle 3: Building a Thematic Coding Structure

By December 2022, the team had been using the codes for approximately four months. During this time, data collection was ongoing. Researchers continued analyzing new data as it was entered into NVivo and added new codes when the existing codes were insufficient to summarize what we observed in the data. At this point, the team revisited the coding structure, and the number of codes increased to 26. Sensing that we were nearing coding saturation (i.e., we saw that we had the codes we needed to analyze our data), we had confidence that the themes we observed were reflected in our codes, so we began creating our codebook, which we’ll discuss in our next blog post. 
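One rough way to picture saturation is to track how many genuinely new codes each round of analysis adds; when that number approaches zero, the existing codes are capturing what is in the data. The counts in the sketch below are invented for illustration, not our actual tallies.

    # A rough illustration of "saturation of codes": when each new round of
    # analysis adds few or no new codes, the code list is capturing the data.
    # The counts below are invented, not our actual tallies.
    new_codes_per_round = [14, 8, 4, 2, 0]  # hypothetical counts

    for round_num, added in enumerate(new_codes_per_round, start=1):
        status = "nearing saturation" if added <= 2 else "still generating codes"
        print(f"Round {round_num}: {added} new codes ({status})")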

 
