Formative assessment has come into focus in recent years. In Sweden, formative assessment is emphasized in the curriculum for upper secondary schools. However, scientific studies report both positive effects and no effects at all of formative assessment on student performance.
Furthermore, formative assessment has proved to be time consuming, which is clearly a problem if it has no effect on learning. A new thesis by Daniel Larsson at Linnaeus University, Sweden, shows that multivariate data analysis, MVDA, can be used to give some answers about the effectiveness of such teaching practices.
What can multivariate data analysis tell us about the effectiveness of formative assessments used in upper secondary schools?
Whereas summative assessment can be described as a summary of the learning that has taken place, such as grades given at the end of a course, formative assessment can be described as an assessment of a student’s learning, with the purpose of developing the student’s abilities and skills, not just to evaluate them. For example, feedback is an important part of formative assessment.
However, in addition to its uncertain effects, there is substantial variation in how formative assessment is applied in schools. Furthermore, it is not clear what characterizes “good” or “not so good” practices, or how effects should be measured. Hence, solid methods are needed to distinguish between the formative assessment practices of different schools.
If different practices can be identified, the result can be used to assess the correlation between implementation of different practices and student learning. Daniel Larsson demonstrates in his thesis that MVDA can be used in this context.
Building models in SIMCA
Daniel Larsson used SIMCA 15 from the Umetrics Suite to build multivariate projection models to identify differences in formative assessment practices. Data was collected from six classes in three upper secondary schools, among students who had attended the course “Chemistry 2” (kemi 2). Students from one of the classes attended a private school, whereas the rest attended municipal schools. A secondary study was also conducted to analyze the frequency of completed teaching elements.
In order to use MVDA, fairly large amounts of data are needed. Hence, Daniel Larsson created a questionnaire with two sets of questions, 34 questions in total. The first set aimed to identify the students’ experiences of formative assessment practices, such as clarifying, sharing, and creating an understanding of learning goals, implementing effective discussions, and giving feedback. The second set aimed to assess the students’ experiences of the frequency of teaching elements, such as written lab reports, homework evaluated by the teacher, and oral presentations.
In the first set of questions, students were asked to rank their answers from 1 to 5, where 1 corresponded to “completely disagree” and 5 corresponded to “fully agree”. In the second set, students were asked to rate how often they had completed different teaching elements on a scale from 1 to 7, where 1 corresponded to “never” and 7 corresponded to “once a week”.
A principal component analysis, PCA, was then conducted in SIMCA 15 to assess the students’ experiences of formative assessment practices. Additionally, a partial least squares discriminant analysis, PLS-DA, was conducted to assess the students’ experiences of the frequency of teaching elements.
Results: visualization in SIMCA gives enhanced insights
The analyses showed that there is no clear connection between a particular class and specific formative assessment practices. Even students from the same class had widely varying experiences of formative assessment practices. A number of factors could explain this phenomenon, for example ambiguity in the wording of the questions and the fact that the experience of formative assessment is to some extent unique to each student. Instead of focusing on differences between classes, Daniel Larsson suggests that future studies could focus on differences between students. Do students with good experiences of formative assessment practices obtain higher grades?
However, the models revealed a difference between the municipal schools and the private school in terms of the students’ experiences of the frequency of teaching elements, which is easily visualized in SIMCA (see image below). The private school had a higher frequency of oral presentations and written exams, while the municipal schools had a higher frequency of completed laboratory tasks. The result should however be taken with caution due to the limited amount of data. Daniel Larsson suggests that larger groups from different schools could be used in a future study to further evaluate the method.
A great advantage of SIMCA is the visualization of results, which can add insights that are otherwise difficult to obtain.
The score plot shows the difference between the municipal schools (green dots) and the private school (blue dots). School number 1 has three participating classes, denoted as 1A, 1B, and 1C. School number 2 has two participating classes, denoted as 2A and 2B, and school number 3, the private school, has one participating class, denoted as 3A. The additional numbers denote the individual students in each class.
Want to know more?
Find out more about SIMCA online and get a free trial.
Read Daniel Larsson’s full thesis (in Swedish).