
DATA ANALYSIS

Attitude Survey

There were three main statements that I focused on from the attitude survey: "I feel engaged during math," "I feel challenged during math," and "I am good at math." I intentionally selected these three statements due to their alignment with my purpose statement.

[Charts: Pre- and post-survey responses to "I feel engaged during math instruction"]

The first statement I looked at was "I feel engaged during math instruction." The pre-data showed that only 13% of my students felt engaged during math instruction. After analyzing my data further, I found that the students who selected agree for this statement were those who typically fell within the “on-level” ability group. This told me that my instruction prior to my action research was primarily benefiting my on-level learners. The post-data illustrated increased engagement, with 31% of students agreeing with the statement and 69% feeling neutral toward it. Compared to the pre-data, where 38% of my students felt disengaged, 0% of my students felt disengaged after the implementation of differentiated math instruction, which indicated that it led to increased student engagement overall.

[Charts: Pre- and post-survey responses to "I feel challenged during math instruction"]

Another portion of the data concerned how challenged my students felt during math instruction. The pre-data showed that 25% of my class did not feel challenged prior to the implementation of differentiated math instruction. When I analyzed this piece of data, I found that the students who selected disagree for this statement were those who fell within the above-level ability group. The post-data showed me that through the implementation of differentiated instruction, I was able to meet all of my students' needs, including enrichment for my higher-ability learners. Because of the wide range of learners represented in my classroom, it was very important for me to provide my above-level learners with instruction within their zone of proximal development, meaning lessons that were neither so easy that they were unchallenging nor so difficult that they led to frustration. Based on the data I gained from the attitude survey, I can conclude that through the guided math rotations my students received the level of content they needed.

[Charts: Pre- and post-survey responses to "I am good at math"]

Based on my data, prior to beginning my action research, only 25% of my students felt they were good at math. This piece of data, as mentioned in my rationale, showed me there was a definite need for a different approach to math instruction. After analyzing the data, I concluded that differentiated instruction and guided math rotations contributed to increased confidence in math. On the post-survey, 50% of the students agreed and 44% felt neutral toward the statement "I am good at math." Through my observations and data analysis, I can infer that this boost in confidence was due to the mutual respect in our classroom and the implementation of math talk in our daily instruction. During our whole-group mini-lessons, the end goal was not for students to reach mastery of the concepts; rather, I wanted them to talk about their thinking and process instead of focusing only on the answers. This exposed my students to multiple perspectives and methods of thinking, which further supported the culturally responsive classroom environment I fostered.

Topic Pre and Post Tests

[Graph: Pre- and post-test scores by student]

A paired-samples t-test was conducted to determine the effect of the implementation of differentiated math instruction on student achievement. There was a significant difference between the scores prior to implementing differentiated instruction (M = 44.63, SD = 19.10) and after implementing it (M = 91.31, SD = 9.07); t(16) = 11.44, p < .001. The observed effect size (Cohen's d = 2.86) is large, indicating that the magnitude of the difference between the pre-test and post-test means is substantial. These results suggest that the implementation of differentiated math instruction had a positive effect on student achievement; specifically, they suggest that the use of differentiated instruction increased math achievement.
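
For readers who want to reproduce this kind of analysis, the sketch below shows how a paired-samples t-test and Cohen's d for paired data could be computed in Python with SciPy. The score lists are hypothetical placeholders, not my students' actual data.

```python
# Minimal sketch of a paired-samples t-test plus Cohen's d for paired data.
# The scores below are hypothetical placeholders, not actual student data.
from statistics import mean, stdev

from scipy.stats import ttest_rel

pre_scores = [4, 6, 8, 9, 10, 10, 11, 12, 12, 13, 14, 15, 16, 18, 20, 21, 22]
post_scores = [18, 19, 20, 20, 21, 21, 21, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23]

# Paired t-test: each student's pre-test score is compared to their own post-test score.
t_stat, p_value = ttest_rel(post_scores, pre_scores)

# Cohen's d for paired samples: mean of the per-student differences divided by their SD.
diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
d = mean(diffs) / stdev(diffs)

print(f"t({len(diffs) - 1}) = {t_stat:.2f}, p = {p_value:.3g}, d = {d:.2f}")
```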


Prior to exposure to the content, students were given a pre-test. The pre- and post-tests were identical and consisted of 23 questions. The pre-test was meant to gather information on the students at the time of the assessment, whereas the post-test was given to measure growth from the beginning of the action research to the end. As noted in my rationale, two of my students were left out of the data because they received their daily math instruction from the special education teacher and did not participate in differentiated math instruction in our classroom.

 

As shown in the graph, the initial scores, shown in orange, varied greatly. Some students scored as low as four points out of 23, whereas a few others received near-perfect scores. The post-test scores, shown in pink, demonstrate the growth that 100% of my students made throughout the data collection period. The majority of my students made significant growth from the pre-test to the post-test. The average score on the post-test was 21/23, which was significantly higher than the pre-test average of 10/23. Students A, J, and P, who all scored fairly low the first time, made significant growth on the post-test.

 

While analyzing the data, I found a common trend: students who were absent for an extended period of time demonstrated less growth from the pre-test to the post-test than their peers. For example, Student O missed two consecutive weeks of instruction and progressed only six points, compared to the class average of eleven points. Students who scored higher on the pre-test also showed lower growth. More specifically, Students F and G demonstrated a high understanding of the mathematical concepts on the pre-test and thus had only a few points of possible growth on the post-test, which both students achieved.
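
To illustrate how this kind of growth analysis can be scripted rather than done by hand, here is a small sketch that computes each student's growth and flags anyone who fell below the class average. The scores shown are hypothetical placeholders, not my students' actual data.

```python
# Sketch: compute per-student growth from pre-test to post-test and flag
# below-average growth. Scores are hypothetical placeholders.
from statistics import mean

scores = {
    "Student F": (21, 23),  # (pre-test, post-test), out of 23 points
    "Student G": (20, 23),  # high pre-test score, little room to grow
    "Student O": (9, 15),   # extended absence, below-average growth
    "Student P": (4, 19),
}

growth = {name: post - pre for name, (pre, post) in scores.items()}
avg_growth = mean(growth.values())

print(f"Class average growth: {avg_growth:.1f} points")
for name, points in sorted(growth.items()):
    flag = " <- below class average" if points < avg_growth else ""
    print(f"{name}: +{points}{flag}")
```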

Multiplication Timed Tests

[Image: Student multiplication timed test, pre-test]

[Image: Student multiplication timed test, post-test]

Two weeks into my action research, I administered a multiplication timed test to my students, as noted in my data collection methods. The purpose of this was to further differentiate in the workstations and provide my students with math games that included facts they had not yet mastered. At the conclusion of the action research, I gave my students the same multiplication timed test. The purpose of this second timed test was to analyze whether the workstations were effective in building fact fluency and automaticity in multiplication.

 

Through the analysis of the data, I found students were quicker at recalling multiplication facts on the post-test and were more accurate with their answers after the differentiation in the workstations. The student example above illustrates this progress: this student answered all but one problem in the time allotted on the post-test, as opposed to the first attempt, when she was unable to answer approximately one quarter of the test. This demonstrates the increase in automaticity this student experienced, which mirrors the trend the majority of my students experienced as well. The data also showed an increase in overall scores. More specifically, the average on the pre multiplication test was 75 out of 94 questions, whereas the average on the post multiplication test was 86 out of 94. All of my students made growth from one timed test to the next, other than the two students who received a perfect score the first time; although these students could not make growth, they maintained their perfect scores on the post multiplication test.

 


The growth my students made in their multiplication fact fluency showed that the workstations were effective in building their automaticity. I believe that pulling out the specific multiplication facts they missed on the pre-test and being intentional about the math games they played in the workstations contributed to their success.


Student Engagement Rubric

[Chart: Pre- and post-engagement rubric scores by student]

Prior to implementing differentiated math instruction, I created and introduced a rubric to monitor student engagement. Before beginning the action research, I scored my students on the rubric to gather baseline data for tracking whether this style of instruction led to increased student engagement. At the conclusion of the data collection period, I averaged the scores each student received across the rubrics to represent their general level of engagement in math. The chart above illustrates the growth my students made in their level of engagement from the beginning of the action research to the end. The pink bar represents each student's baseline level of engagement from the pre-data rubric, while the orange bar represents their average level of engagement across the entirety of the data collection period. After analyzing this data, I found that 100% of my students made growth when differentiated math instruction and guided math rotations were introduced. Therefore, I can conclude that this method of instruction led to increased student engagement for my population.


Another reason I believe engagement increased is the level of accountability all of my students had to take on. In our previous format of instruction, whole-group lecture, students were not held accountable for their learning. This lack of accountability caused my quieter and slower-processing students to not participate and to fall behind. Because of the structure of differentiated math instruction and the implementation of small-group rotations, all of my students were required to be accountable for their own learning, which resulted in increased engagement. For example, Student H was a student I would classify as introverted. Although this student was not disruptive, her level of engagement in the whole-group setting was lacking, which led to misunderstandings and the development of misconceptions. Through the implementation of guided math rotations, this student learned how to advocate for herself and finally felt confident enough to share her ideas with our group. It was evident through her post-data score that this student became significantly more engaged through this method of instruction.


When digging a little deeper, I also found that all but one of the eight students who scored 13 out of 15 or greater on the post engagement rubric scored 90% or higher on the post multiplication timed test. This leads me to conclude that students who stayed on task and engaged throughout the workstations performed better when tested on their multiplication fact fluency.
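
A quick way to check this kind of overlap between two data sets is to filter one by the other. The sketch below shows the idea with hypothetical engagement and timed-test scores; none of the values are my students' actual data.

```python
# Sketch: cross-reference engagement rubric scores with multiplication
# timed-test scores. All values are hypothetical placeholders.
engagement = {"A": 14, "B": 13, "C": 15, "D": 9, "E": 13}   # rubric score out of 15
timed_test = {"A": 96, "B": 91, "C": 99, "D": 78, "E": 88}  # percent correct

# Students at or above 13/15 on the post engagement rubric.
highly_engaged = [s for s, score in engagement.items() if score >= 13]

for student in highly_engaged:
    status = "90% or higher" if timed_test[student] >= 90 else "below 90%"
    print(f"Student {student}: engagement {engagement[student]}/15, timed test {status}")
```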


When comparing my observed level of student engagement to students' perceived level of engagement, I found similar trends. My baseline data from the engagement rubric showed some students were disengaged, based on their low scores. This mirrors the pre-data from the attitude survey, which suggested 38% of students felt disengaged during whole-group math instruction. At the conclusion of the data collection period, the majority of my students demonstrated engagement over the course of the action research, which matched their perceived level of engagement on the post-survey, where 0% of my students reported feeling disengaged after the implementation of differentiated math instruction. This supports my hypothesis that differentiated instruction boosts engagement.

Triangulation of Data

Each piece of data supported the purpose of my study: that differentiated math instruction would lead to increased student achievement and engagement. Overall, the post-test data from the unit tests demonstrated an increase in student scores in comparison to the pre-test scores. Furthermore, the data collected from the engagement rubric and attitude survey demonstrated an increase in student engagement. Therefore, both sets of data as a whole support the hypothesis that data-driven differentiated instruction in small groups increased student achievement and engagement. The differentiated groups, instruction, and games provided each student with equal opportunities to access content, regardless of their initial level of understanding. This method of individualized instruction helped foster a culturally responsive teaching and learning environment for my students.

 

Taken together, each data point indicated that differentiated instruction was successful in my classroom with my population of students. Differentiated instruction enriched my math block, while the data collected documented the growth that occurred in my students and their learning. For example, based on the post attitude survey, differentiated math instruction provided my students with an overall positive experience. All four data points showed an increase, which supports the purpose of my study.

Questions That Arose From the Data

Would differentiated math instruction be more effective if students were grouped by ability or by personality?

I came into the action research with the mindset of using flexible grouping, meaning a student's group assignment can change based on performance. However, I found this challenging to implement effectively due to the personalities in my classroom. I was forced to decide whether to advance students to the next group because they showed progress on a concept or to keep them in the same group because there were other students with whom they could not work. Over the six-week data collection period, I ended up moving only three students from their original group assignments. Because guided math rotations required large amounts of independence and accountability, I came to the conclusion that it would be better for my population of students to be grouped with peers they could work well with and stay focused alongside, rather than grouping solely by ability.


Does the method by which the attitude surveys were conducted impact the results?

I implemented surveys to gauge students' thoughts and attitudes toward math prior to the study and at its conclusion. I administered the survey through Google Forms. After analyzing the results, I began to wonder whether the method by which I collected this information had an impact on the results. More specifically, I wondered how the results would have compared if I had administered the survey as an interview or on paper. Would students have had a better understanding of the questions if they were asked through an interview, or would that have skewed their results? Also, if the survey had been anonymous, would that have resulted in different responses?
