Verslag group 14

From Control Systems Technology Group

Revision as of 17:48, 3 April 2018



Introduction

Slim In Rekenen

For the course Robots Everywhere at the Eindhoven University of Technology, a project involving some form of robotics had to be carried out. The only requirement given was that robotics had to be part of the project in some way and that a product needed to be delivered at the end. Because of the large amount of freedom in this course, it was decided to create a smart quizzing program for use in class. The program can be used in groups 3 and 4 of schools with a traditional educational approach. Furthermore, all research is focused on these types of schools in the Netherlands.

This quiz will contain only questions about simple mathematics: addition, subtraction, multiplication, and division. The purpose of the program is to easily identify students that need some extra attention from the teacher, while stimulating all students on an individual level. This is done by finding the child’s knowledge boundary based on his or her answers to the previous questions. This way, the child will get questions that are still challenging enough to be interesting, but not so hard that the pupil will give up. Because the quizzing program is started at the same time for a whole class but all pupils get their own individual test after a while, it combines individual work with working as a class. This way nobody feels left out and nobody gets bored. All questions in the test are multiple choice to make it easier for the children to input their answers.

From the results, teachers will be able to see which children have mastered the subjects sufficiently and are ahead of the program, but also which pupils lag behind. Because these results are visible very quickly, the teacher knows sooner which children need some extra attention to catch up. This extra attention can then be given where the teacher sees fit.

To test whether teachers think the program could make an actual difference in their teaching, a questionnaire was sent to elementary school teachers. The questionnaire focused on their opinions and on whether they had tips to keep in mind while designing the program. An important question is whether they think the program will help them plan their time better, as this is one of our main goals, and whether they would implement the program in their lessons.

The following report is an elaborate documentation of the program, the questionnaire and its results.

Problem Description

The purpose of our program is to quickly recognize which children need extra help on a specific topic and to customize the test to each child, such that they can train specific topics at their own level. Quickly recognizing these children helps the teacher to step in immediately. For example, at the start of the year teachers do not know the weaknesses of the children, but our program can find them after a single test. Furthermore, the personalised test ensures that the children will not lose interest due to the questions being too easy, and ensures that they will not feel left out due to always getting the questions wrong. These personalised questions are determined based on the answers to previous questions.

Our program currently focuses on children in the third and fourth grade of elementary schools in the Netherlands using a traditional education system. The test consists of mathematical questions that are divided by subject, for example multiplication tables or adding numbers up to 20. These choices were made to ensure an easy starting point for the program.

Objective

Develop a smart quiz program for a computer that can assess the knowledge levels of the pupils, to give the teacher a better view of his pupils, and that asks questions at the pupils' personal boundary, so they learn effectively.

Users

The users of the program will be the pupils of the third and fourth grade of elementary schools in the Netherlands using a traditional education system, as well as the parents and teachers of these pupils. The users have the following requirements:

  • The pupils want:
    • A program that stimulates their imagination
    • A test that remains interesting throughout the whole session
    • A test that does not make them feel dumb or left out
  • The parents want:
    • A test that stimulates their children to learn and to perform the best they can
    • Their children to be happy
    • To be able to check their children's progress
    • The privacy of their children to be guaranteed
  • The teacher wants:
    • A test that motivates the pupils to learn and to perform the best they can
    • The system to match the curriculum, meaning the teacher must be able to change the settings of each test
    • To check the progress of each pupil in an easy way
    • To divide his or her time in the best way over all the pupils

For the program we did not put our focus on privacy, because this is only a small part of our program. The program will not contain an interface for the parents, due to the fact that the goal only incorporates the pupils and the teachers. However, this could be added later on if desired. Almost all of the other requirements will be implemented in our program by the use of the traditional didactic system.

Desk Research

Didactic Systems

In the book Het didactische werkvormenboek [1], a few important aspects of teaching are identified.

  • The best way for pupils to take in information is when they read and/or see it.
  • When asking questions, there need to be both open questions and closed questions.
  • Make sure pupils work with the whole class, but also occasionally in smaller groups or on their own.
  • Switching between teaching tactics stimulates learning. An example could be switching between a game and simple sums.

For our project we chose to focus on a traditional education system instead of a constructivist education system. This means that we will keep in mind that:

  • The teacher largely decides on the curriculum and the order in which subjects are taught.
  • The focus lies on teaching the whole class at once.
  • There is a curriculum in which some subjects are more central than others.
  • Learning is an individual activity.
  • Pupil’s progress is checked by means of tests.


The Use of Computer Programs in Dutch Education

From a government study [2], the following things became clear:

  • Separating the class in smaller groups in year 3 has a positive effect on the pupils’ performances as it causes more interaction.
  • More than 90 percent of the teachers in primary education use computers to teach. According to them using ICT can contribute to a more efficient, effective and likeable education.

The next article shows us that education often does not seem to fit the natural, experimental learning process that children have. In games it is possible to learn naturally, which is why using games can be relevant for primary education. There has to be a good balance between playing and effectively learning, to make sure that the advantages of learning in a fun way are not lost. This means the following things need to be taken into account when asking questions:

  • The information should not be presented in a way that is too abstract, make sure it speaks to the children’s imagination.
  • There needs to be a lot of repetition.
  • Important information should be taught in more than one way.
  • Take some time, it should not be taught too fast.
  • Use realistic characters.
  • Use animation, childlike dialogues, interaction and direct feedback.
  • Do not punish children for making mistakes.

The following factors influence the quality of a course.

  • From the international and Dutch literature it follows that all researched interventions that focused on the methods used during instructions and lectures were effective. It is nearly impossible to identify the elements that make a certain intervention work, as several effective interventions might have opposite starting points. Furthermore, there are not a lot of studies that directly compare different methods. The resulting idea is that `it works to manipulate the methods used'. This is in line with the report of the KNAW (2009) stating that the available Dutch research does not give a clear view on the relation between the didactics and the skills when it comes to teaching arithmetic.
  • Using technology for teaching arithmetic has a positive influence on the performance of children, just as using real three-dimensional objects does. This follows from both international and Dutch research, for example on the use of the programs Snappet and Rekentuin. It is not clear how much the use of these programs increased the available time to teach arithmetic, so this will need some further investigation.
  • The use of test results to improve the learning process has a positive relation with arithmetic performance. This can either be feedback to the teacher, for example by means of a digital system to track students, or direct feedback to the students per question that is answered.
  • Dividing students into different groups which each have their own level has a positive influence on the performance of the individual students. Note that further investigation is required with respect to the implementation of this division. In the Netherlands, it is common to divide students into three different groups. [3]


Individual or in the Class

  • As we can see in the following article [4], the program is a form of group differentiation with individual learning lines. This means that there is quite some external direction, but with a focus on the individuals. Consequently, the teacher has set goals and subjects on which the pupils need to work, and they will work on these subjects when the teacher tells them to. However, as the program asks questions at their personal knowledge levels, the individuals are also taken into account. This is a good combination of working with the class while working on one’s own.

GDmetId2.PNG

Working on Paper vs Working on an Electronic Device

  • As indicated in the following article [5], using computers instead of paper to administer achievement tests in elementary schools is supported. Using computers has multiple advantages, such as being able to use adaptive technology, which is the goal for the finished program. On the other hand, it is possible that children experience computer anxiety, resulting in lower achievements, but for this project it is assumed that this will not be the case, as most children these days work with computers from very early on.

Articles About the Layout of the Test

How to implement feedback

  • In this article [6] it becomes clear that attributional feedback to children is useful, as it is an effective way to promote rapid problem solving, self-efficacy, and achievement. This is probably because children have a sense of how well they are doing, and attributional feedback helps to support these self-perceptions and validates their sense of efficacy. Because of this, the children will stay motivated to keep working, leading to a better performance in the end. The best way to do this is by giving them ability attributional feedback, i.e. telling them they are either very smart, or not that good at math.
  • Feedback in between: the major findings evidenced in this research [7] were the following:
    • Feedback was generally effective for learning, but more so on the lower level (identical) questions than on the higher level (reworded) ones;
    • Feedback information had greater impact in the absence of supporting text than with supporting text;

In this article they used several feedback methods such as knowledge of correct response (KCR), which identifies the correct response, and answer until correct (AUC). These two were shown to be the most effective.

When implementing these articles about feedback in our program, the following decision was made: the KCR method will be used, with almost no supporting text except for the attributional feedback.

How to Construct Arithmetical Questions

When constructing arithmetic questions it is important to know how students put numbers and operations on numbers into context. In the article [8] it is explained what should be taken into consideration.

Mental Arithmetic

Mental arithmetic is insightfully doing calculations (arithmetic) with numbers, while the value of the numbers is kept in mind. While doing mental arithmetic one also makes use of ready knowledge, and properties of numbers, operations, and the underlying relations. There are three main methods for doing mental arithmetic:

  • Step-by-step mental arithmetic: [math]\displaystyle{ 36 + 12 \rightarrow 36 + 10 \rightarrow 46 + 2 = 48 }[/math].
  • Mental arithmetic by dividing the digits: [math]\displaystyle{ 36 + 12 \rightarrow 30 + 10 = 40 \rightarrow 6 + 2 = 8 \rightarrow 40 + 8 = 48 }[/math].
  • Handy mental arithmetic, for example by compensation: [math]\displaystyle{ 73 + 29 \rightarrow 72 + 30 }[/math].
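The last two methods can be sketched in code using the worked examples in the text (an illustrative sketch only, not part of the quiz program):

```python
def split_digits_add(a: int, b: int) -> int:
    """Mental arithmetic by dividing the digits: handle tens and ones apart."""
    tens = (a // 10 + b // 10) * 10  # for 36 + 12: 30 + 10 = 40
    ones = a % 10 + b % 10           # for 36 + 12: 6 + 2 = 8
    return tens + ones               # 40 + 8 = 48

def compensate_add(a: int, b: int) -> int:
    """Handy mental arithmetic by compensation: move units so that one
    term becomes a round number, e.g. 73 + 29 -> 72 + 30."""
    shift = (10 - b % 10) % 10       # amount needed to round b up
    return (a - shift) + (b + shift)

print(split_digits_add(36, 12))  # 48
print(compensate_add(73, 29))    # 102
```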

Use of Symbols in Mathematical Questions

In an article [9] about the mathematical development of children, it is mentioned that in order to develop a correct numerical magnitude representation, children need to practice with symbolic numbers (digits), but also with non-symbolic formats, like the flock questions in our program. Therefore, we have to keep this in mind in the layout of the questions in our program.

From these articles we conclude that we want to focus on learning in a likeable way, using a computer program. We will do this by asking simple maths questions using fun pictures of animals. This way we hope to intrinsically motivate the children to play with our program. We will also make sure the program does not go too fast, and we will use child-friendly language.

Program

In this section, we describe the actual program that was developed. The overview of a typical test scenario is as follows. First, the teacher launches the teacher interface of the program and specifies the parameters of the current test, such as the types of operations used in the questions, the amount of time the students have to answer the questions, etc. Next, all pupils launch the pupil interface of the program and wait until a connection between the teacher and pupil programs is established. All pupils fill out their name, and are added to an overview presented in the teacher interface. The teacher clicks the 'next question' button and a question is presented on all pupil interfaces. All pupils answer this question by clicking on one of the answer buttons, and receive immediate feedback on their answer. The teacher then clicks the 'next question' button again, and the cycle is repeated until all questions have been answered. Based on their answers to previous questions, the next question a pupil receives will either be an easier or a more challenging one, that properly matches the pupil's knowledge level. We will now go into the details of the program and the scenario outlined above.

Teacher initialization interface is started

According to this website [10] students from our target audience should be able to do the following things at the end of the school year:

  • Group 3 can add and subtract from 0 to 20.
  • Group 4 can multiply and add and subtract from 0 to 100.

For this reason exactly, we made sure the teacher can change the settings of the program to fit the needs of the current group. The settings of the current test are completely customizable to the topic and the level of knowledge the teacher wants to test. We developed our program such that it can at least accommodate the testing needs of our target audience, i.e. addition, subtraction and multiplication can be tested from 0 to 100. While developing, we also made sure that the program can easily be extended to the testing needs of higher groups. We will expand on this later.

Teacher initialization interface launched when the teacher starts a new test

When the teacher first starts a new test, the teacher initialization interface is launched. A form will be shown on which they can adjust the settings for the upcoming test. This form is shown on the right, including all the settings customizable by the teacher and their standard values. The standard settings shown on the right are set to accommodate our target audience of group 3, but can easily be customized to meet other testing needs.

First, the teacher chooses the numerical bounds between which the questions of the current test have to lie; initially, questions lie in the range [0,20]. The remaining settings are:

  • Integer or decimal numbers (standard: integer). The decimal option is currently disabled because of the question types we have implemented so far, but we still included it because our code is generic, so the test can easily be extended to include decimal numbers.
  • The operations to test: addition, subtraction, multiplication and/or division (standard: addition and subtraction).
  • The maximal amount of arguments per question (standard: 2). Even though our target audience will always use 2 arguments per question, we included this option with the possible future testing needs of higher groups in mind, which makes extending the program later a lot easier.
  • The amount of questions in the upcoming test (standard: 20).
  • The amount of time the pupils have to answer each question (standard: 60 seconds).
  • Whether the program should automatically move to the next question once all children have answered, and whether a timer that displays the remaining time to answer the question should be enabled or disabled.

When the teacher clicks the 'start' button, the program generates a test based on these settings and starts searching for pupil programs that are trying to connect to the current test. We will expand on the connection process later.
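The settings above can be summarized in a small configuration sketch (field names are illustrative and not the program's actual identifiers):

```python
from dataclasses import dataclass

@dataclass
class TestSettings:
    """Hypothetical container for the teacher-chosen test settings."""
    lower_bound: int = 0              # numerical range of the questions
    upper_bound: int = 20
    use_decimals: bool = False        # decimal option is currently disabled
    operations: tuple = ("+", "-")    # operations to test
    max_arguments: int = 2            # arguments per question
    num_questions: int = 20           # questions per test
    seconds_per_question: int = 60    # answer time per question
    auto_next: bool = False           # advance when all pupils have answered
    show_timer: bool = True           # display the remaining time
```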

Pupil interface is started

Waiting screen for children.

When the pupil interface is launched by the pupils, a waiting form will be displayed, as is shown on the right. At this point, the pupil program is trying to connect to a teacher program where a test was generated. We will expand on this connection process in the next subsection 'connection is established'. After the teacher has generated a quiz, i.e. after the teacher pressed the 'start' button on the initial teacher interface, this connection between the teacher and all pupil programs will be initiated. As soon as the connection has been established, a name form will be displayed to the pupils, as shown on the right.

Name form for pupils to enter their name and join the quiz

The pupils all fill in their name and click the "Start!" button. By clicking start, their name will appear in the teacher overview, which is not visible for the children. We will expand on this overview later. An empty question form is now displayed to all children, until the teacher presses the 'next question' button on the teacher overview interface.

Connection is established

TODO: dennis please fill in :)

Teacher overview during the quiz

Teacher overview with one pupil connected

After the teacher presses the 'start' button on the teacher setup screen, a teacher overview form is displayed to the teacher. Every time a pupil program establishes a connection with the current teacher program, the pupil's name appears in a new row of the teacher overview, as shown on the right. In this figure, one pupil called 'Dennis' is currently connected and able to take the test. The teacher overview has several other fields that give information on how the pupils are doing. The first column tells the teacher whether the pupil has answered the current question yet. The second column tells the teacher, if the pupil has answered the current question, whether he answered correctly or not. The third column shows the current score of the pupil, and the last column shows the level category of the pupil (see subsection 'score function'). This overview is an easy way for the teacher to see how all the pupils are doing, which pupils need extra attention, and which pupils can move on to more challenging topics. We kept the code for this overview screen as generic as possible, so that adding additional columns at a later stage is easily done.

When the teacher presses the 'next question' button, all pupils collectively move on to a new question. Note that not all pupils will get the same next question. Every pupil receives a personalized question, based on their previous performance, i.e. their score (which will be discussed in detail below). The way in which questions are generated is also discussed below.

Questions

Currently, our tests can generate two different question types. Again, the code supporting these question types is generic, so adding additional question types at a later stage is easily done. As mentioned in our research, we found that in order to develop a correct numerical magnitude representation, children need to practice with symbolic numbers (digits), but also with non-symbolic formats. Based on this finding, we developed two types of questions:

  • Flock questions, where children can add or subtract flocks of pigs, cows, or chickens. These questions are a way for the pupils to practice with non-symbolic formats, by counting amounts of animals, and training with symbolic numbers, by transforming the amount they just counted into an actual numerical answer.
  • Animals with small blackboards which have simple math questions on them. These questions are a way for the pupils to train their mathematical skills in a purely symbolic manner.

By combining these two kinds of questions, the pupils both train and test their numerical magnitude representation, as well as their understanding of basic mathematical symbols. All questions are multiple choice to make it easy for the young children to answer.

The two types of questions. Left: a flock question. Right: a regular question.

As shown in the picture, in both question types the question to be answered is shown large in the center of the screen. It is depicted either by animals displayed in flocks (left) or by a symbolic equation written on a blackboard (right). Underneath are four multiple-choice answers the pupils can choose from, one of which is the correct answer. At the bottom of the screen is a timer that indicates the time the pupil has left to answer the question. This timer can be enabled or disabled by the teacher, based on the way he/she wishes to test.
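A multiple-choice question as described above could be generated along these lines (an illustrative sketch under assumed names, not the project's actual code):

```python
import random

def make_question(lo: int, hi: int, op: str = "+"):
    """Generate one question within the teacher-set range [lo, hi],
    returning the question text, the correct answer and four options."""
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    if op == "-":
        a, b = max(a, b), min(a, b)  # keep the answer non-negative
        correct = a - b
    else:
        correct = a + b
    options = {correct}
    while len(options) < 4:          # add three distinct distractors
        options.add(max(0, correct + random.randint(-5, 5)))
    options = list(options)
    random.shuffle(options)
    return f"{a} {op} {b}", correct, options
```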

Feedback

As soon as a pupil clicks on one of the answers of a question, the pupil immediately receives feedback on their answer (rather than at the end of the test), as supported by our literature study on feedback. Because the pupils are young and only just learning to read, we use smiley faces to indicate to the children how they are doing, which is again supported by our literature study. A happy face indicates they answered the question correctly; a sad face means they answered the question wrong. If a pupil answers a question incorrectly, we want to highlight the correct answer, without focussing on the fact that they answered wrong, as supported by our literature study on how to visualize feedback for children. To achieve this, we visualized the feedback as follows. When a pupil answers correctly, their answer is highlighted with a green background and all other multiple-choice answers get a grey background. This is displayed on the left of the picture below. When a pupil answers incorrectly, the correct answer is highlighted with a green background and all other answers, including their own, get a grey background. We chose not to highlight the incorrect answer of the pupil, so that the focus lies on what the answer should have been, and not on the fact that they answered wrong. This situation is shown on the right of the picture below.

Feedback for the children. Left: this question was answered correctly. Right: this question was answered incorrectly.
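The feedback rule above boils down to a few lines (colour values and names are illustrative): the correct answer is always shown in green, every other option in grey, and a smiley reflects whether the pupil's own choice was right.

```python
GREEN, GREY = "#2ecc71", "#bdbdbd"   # assumed colour values

def feedback(options, correct, chosen):
    """Return a background colour per option and a happy/sad smiley."""
    colors = {opt: GREEN if opt == correct else GREY for opt in options}
    smiley = ":)" if chosen == correct else ":("
    return colors, smiley
```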

Score

In the program, a score function is used to determine the current level of a pupil. We chose to work with 3 level categories: easy, average, and hard. After each question a pupil answers, we use the score function to determine the knowledge level of the pupil, and thereby the difficulty of the next question the pupil will receive. This means pupils can receive questions of different level categories during the test; the pupils can shift between level categories based on their score up until now. The level category that the child ends up in determines whether the child needs extra help: children that end up in the easy level probably need some extra attention from the teacher, while children that end up in the hard level can probably move on to a more difficult topic. Note that this level category can also be viewed by the teacher, in the last column of the teacher overview form.

The score function we used in our program is as follows:

[math]\displaystyle{ score = score+\frac{correct \cdot 100 \cdot currentTime}{totalTime} }[/math].

Here, 'correct' is a variable that is 1 if the pupil answered the previous question correctly, and 0 otherwise. 'currentTime' is the time it took the pupil to answer the previous question, and 'totalTime' is the total time the pupils had to answer the question. To determine the level of the next question a pupil will receive, this score is then compared with two boundary variables. These two boundaries are:

[math]\displaystyle{ lowerBound= amountOfQuestions * 100 * \frac{1}{3} }[/math]

and

[math]\displaystyle{ upperBound= amountOfQuestions * 100 * \frac{2}{3} }[/math].

Here, 'amountOfQuestions' is the total amount of questions the pupils have already answered, and 100 is the maximum amount of points a pupil can receive per question; amountOfQuestions * 100 is therefore the total score a pupil could have earned up until now. If the pupil's score is lower than one third of this total, i.e. lower than the 'lowerBound' variable, the pupil is shifted to the easy level and his next question will be of the easy level category. If the pupil has a score higher than the 'upperBound' variable, the pupil is shifted to the hard level and his next question will be of that category. Otherwise, the pupil will be in the average level category, and his next question will be of average level.
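In code, the score update and the classification against the two boundaries look roughly as follows (the factor 100 is taken from the boundary definitions, since 100 is the stated maximum score per question; function and variable names are illustrative):

```python
def update_score(score, correct, current_time, total_time):
    """Add the points for the previous question: 0 for a wrong answer,
    up to 100 for a correct one, scaled by the answer time."""
    return score + correct * 100 * current_time / total_time

def level_category(score, answered_questions):
    """Classify a pupil as easy/average/hard based on the score so far."""
    total = answered_questions * 100          # maximum attainable score
    if score < total / 3:                     # below lowerBound
        return "easy"
    if score > 2 * total / 3:                 # above upperBound
        return "hard"
    return "average"
```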

Generating the next question

As explained above, when the teacher presses the 'next question' button on the teacher overview form, every pupil receives a question in their current level category (easy, average or hard), based on their score. We thus want to generate the next question of a child at the right level, while adhering to the settings set by the teacher in the initialization form.

Easy questions are generated in the same way as average questions, except that the range set by the teacher in the initialization form (standardly [0,20]) is narrowed by 25 percent. For our target group, this means they receive easier questions with lower numbers. Hard questions are also generated similarly to average questions, except that the range set by the teacher is expanded by 25 percent. For our target group, this means they receive harder questions with higher numbers.
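One way to read the 25 percent adjustment is as narrowing or widening the width of the teacher-set range (a sketch under that assumption; names are illustrative):

```python
def adjusted_bounds(lo, hi, level):
    """Adjust the question range by 25% of its width per level category."""
    width = hi - lo
    if level == "easy":
        return lo, round(hi - 0.25 * width)   # e.g. [0, 20] -> [0, 15]
    if level == "hard":
        return lo, round(hi + 0.25 * width)   # e.g. [0, 20] -> [0, 25]
    return lo, hi                             # average: unchanged
```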

Now, we will go into the details of how the questions are actually generated in our program.

TODO: dennis please fill in :)

End of the quiz

After the last question has been answered and feedback on it has been received, the pupil programs close automatically. The teacher now sees the teacher overview form, where every pupil is classified in a level category: easy, average or hard. Pupils in the easy level probably need extra attention from the teacher on the topic that was just tested, while pupils in the hard level can probably move on to more challenging topics. The teacher can thus use this final overview form to properly manage their time and find out which children found it particularly challenging to answer the questions on this topic. The final score of the pupils can be used by the teacher to compare the knowledge levels of all the pupils relative to each other.

Results

Survey

We have distributed a short survey with Google Forms in order to find out what actual elementary school teachers think about our program. A clear explanation of the program was given, including screenshots. The purpose of this survey was to find out the following things:

  • Is there a demand for programs like ours?
  • Would teachers actually implement our program into their lessons?
  • Do teachers think it would be a fun new addition that will help the children?
  • Will it help the teachers?
  • Are there any other important features the program should have?

The survey reached many people; however, due to the planning we only had 15 responses when it was time to analyze the data. Though this is not very much, 15 is enough to get some first results. We asked all participants to consent to filling out the survey. The most important results can be seen below, with screenshots from the actual survey. The survey was in Dutch, as we focus on the Dutch education system; all questions are translated here.

1groep14.PNG Zou u dit programma willen implementeren.PNG 2groep14.PNG 3groep14.PNG

  • Would you implement this program in your lessons? And why? As we can see, a decent 80% of the teachers indicated that they would in fact use this program in their lessons, mostly because it looks nice, could help with quickly seeing the children’s levels, and because they expect children to be engaged as it is on a computer. The more negative reactions usually come down to the opinion that children do not need extra testing.

Nuttig tijd indelen.PNG

  • Do you think this program would help you to use your time more optimally? Unfortunately, 60% of the teachers do not think that the program will help them with dividing their time. But as most teachers are still positive about the program, we do not think this is too bad: they could like it because it is fun and engaging for the children, even though they are already good at dividing their time. The results for this question could also be due to the fact that, in hindsight, the question was not formulated in the clearest way.

Traditionele toets vs dit.PNG

  • Do you think that the children will learn more with this program than with a ‘traditional’ test on paper, as the level of the test adapts to the children individually? This is practically a tie; the teachers are not convinced that this will effectively teach the children more than a paper test will. Obviously, this is not a very positive result, but as a bit more than half still believe it is better than paper, we do not have to throw out our program immediately. Also, keeping in mind that there are relatively few responses, this question could go either way when asking more people.

6groep14.PNG Persoonlijke vragen.PNG Toetsen per onderdeel.PNG 9groep14.PNG 10groep14.PNG 11groep14.PNG 12groep14.PNG

Conclusion

Part 1

  • The relation between the research and how the program looks and works (what follows from the literature study and what we used in practice)

Part 2

  • Conclusion of the survey

Discussion

The following is a summation of topics that would be of interest in further research.

  • Implementation and---more importantly---the comparison of different types of questions, e.g. reading the clock. This can be done in terms of plug-in forms in the program.
  • Machine learning classification of pupils in (possibly more than three) categories.
  • Validation of the score function and possibly incorporating other variables (such as answers that are 'almost' correct and the current score).
  • Changing the question difficulty based on more question properties (e.g. operations or 'tricky' multiple choice questions) and measuring the effect of these changes.
  • Implementation of the quiz on different platforms, as well as surveying the need for this.

Appendices

Contributions

References

  1. P. Hoogeveen, J. Winkels, from: "Het didactische werkvormenboek", The Netherlands, March 2014
  2. Inspectie van het Onderwijs, "De staat van het onderwijs", Onderwijsverslag 2011/2012, The Netherlands, April 2013
  3. M. Hickendorf et al. "Rekenen op de basisschool", Universiteit Leiden, The Netherlands, October 2017
  4. A. van Loon et al., [http://ixperium.nl/files/2014/08/dimensies-gepersonaliseerd-leren.pdf "Dimensies van gepersonaliseerd leren"], The Netherlands, October 2016
  5. https://ac.els-cdn.com/0747563287900069/1-s2.0-0747563287900069-main.pdf
  6. D. Schunk, "Ability Versus Effort Attributional Feedback: Differential Effects on Self-Efficacy and Achievement", Journal of Educational Psychology, 75, 848-856, 1983
  7. Roy B. Clariana, Steven M. Ross, Gary R. Morrison, "The effects of different feedback strategies using computer-administered multiple-choice questions as instruction", Volume 39, Issue 2, pp. 5–17, June 1991
  8. M. van Zanten et al., "Kennisbases", The Netherlands, 2009
  9. B. de Smedt et al., [https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/16152/1/DeSmedt,Noel,Gilmore%26Ansari(2013).pdf "How do symbolic and non-symbolic numerical magnitude processing relate to individual differences in children’s mathematical skills? A review of evidence from brain and behavior "], Belgium, 2013
  10. Website "Wijzer over de basisschool"