education's digital future

Second report from Keith Devlin's and Coursera’s Introduction to Mathematical Thinking MOOC

Notes 1. Small post-publication edits made on 14 June to improve flow and clarity. 2. This post has been republished on the London Mathematical Society's De Morgan Forum.

About a month ago I finished Keith Devlin’s 10 week introduction to mathematical thinking course. This report supplements the one I published in April, which I’d based on my experience and observations during the first six weeks of the course.

In what follows I will not repeat the earlier report's description of how the course worked.

Comments, questions and corrections welcome.

1. The numbers. With commendable openness, Keith Devlin reported the following data in his 3 June 2013 post, The MOOC will soon die. Long live the MOOR:

Total enrolment: 27,930

Number still active during final week of lectures: ca 4,000

Total submitting exam: 870

Number of students receiving a Statement of Accomplishment: 1,950

Number of students awarded a SoA with Distinction: 390
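To give a sense of scale, Devlin's figures can be turned into rough completion rates with a quick back-of-the-envelope calculation (the 4,000 active-in-final-week figure is his approximation, so the first percentage is indicative only):

```python
# Figures as reported by Keith Devlin (June 2013)
enrolled = 27_930
active_final_week = 4_000   # approximate
submitted_exam = 870
soa = 1_950                 # Statements of Accomplishment
soa_distinction = 390

def pct(part, whole):
    """Percentage of `part` in `whole`, to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(active_final_week, enrolled))  # ~14.3% still active in the final week
print(pct(submitted_exam, enrolled))     # ~3.1% submitted the exam
print(pct(soa, enrolled))                # ~7.0% earned an SoA
print(pct(soa_distinction, soa))         # 20.0% of SoAs were with Distinction
```

(The number of SoAs exceeds the number of exam submissions because, on this course, a Statement of Accomplishment did not require sitting the exam.)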

2. Making it to the finish. From my point of view as a learner, although I had the expected trouble "fitting it all in" - made worse by being drugged up to the eyeballs owing to a broken shoulder, as well as having to do a fair bit of work and social travelling - I enjoyed the second half of the course as much as the first. I continued to feel stretched. I liked the non-intuitive discomfort of learning about the properties of numbers. I surprised myself with my examination result (more on this later). I made it to the end. I received, and felt childishly pleased with, my Statement of Accomplishment.

3. Back to being a lone learner. All sense of being part of a group of learners (I'd been an on-off participant in a Google Groups based discussion group during the first half of the course) had evaporated by the mid-point, leaving me operating as a lone learner. I think this is pretty much inevitable in courses that, as this one did, encourage the formation of independent discussion groups, and in which a large proportion of those who start cease to be active: the "activity density" in any formed discussion group - and this is only a supposition - is likely to drop below the level necessary to keep the group alive.

4. Peer-assessment. I wrote my first report just after training in the peer-review process had begun. At that time I was very taken with the whole process, which consists of:

a) learning the subject matter of the course in part through marking sample answers to problems using a structured rubric, whilst

b) at the same time preparing yourself to take part in “high stakes” marking of fellow students' answers to final examination questions.

5. I remain taken with the overall idea, not least because it can be run economically at a very large scale. But I am not convinced that Keith Devlin and colleagues got the design of the process completely right this time. This is not a criticism: Devlin was candid with us students, as well as in his public writing, that he is improving the process iteratively, and I hope that feedback of this kind can contribute to that endeavour.

6. Training in peer marking. I've two main reservations about the training process as it developed on this particular course.

a) Sticking to the rubric. A marking rubric such as this one is an indispensable adjunct both to peer-marking and to learning. In the former case it helps ensure consistency of marking; in the latter it helps learners unfamiliar with a field understand what is expected of them, by showing what a good, clear answer to a problem (in this case a simple mathematical proof) should consist of. On this course, we were given recorded feedback on our peer-marking in the form of videos in which Keith Devlin summarised his own assessment of the problems we'd marked as part of our training. The problem here was that Devlin was prone to read between the lines of a proof, correctly judging it mathematically watertight while tending to ignore the presentational requirements of the rubric: this led him, it seemed to me, to mark more generously than the rubric dictated. Essentially, the rubric-based approach (or, more particularly, the approach encouraged by the rubric used for this course) led to over-harsh marking. I found this unsettling; the problem could be solved either by a less presentationally focused rubric, or by Devlin sticking more closely to the rubric.

b) Peer-marking by learners who are outside their comfort zone. Some parts of the course were more challenging than others. No complaints there. However, at the point where learners are being trained in peer-marking pertaining to content that they have not themselves confidently mastered, the process begins to break down a bit. Of course reviewing the “correct” marking of problems that are at the edge of your understanding can be an aid to developing your own grasp: but only up to a point. In short, there seem to be two flaws in being trained through marking material in which you are not confident:

i) it does not necessarily help you master the content;

ii) it does not necessarily make you an effective marker.

7. I think one possible solution to the first part of this problem may lie in running peer-marking between levels of learning - that is, by having learners on a higher level course mark the work of learners on a lower level course - rather than by having learners on the same course mark each other's work. To be fair to Keith Devlin, a different approach to weeding out poor markers was used on this course, to which I now turn; but this only addresses point 6a, not 6b.

8. The process of peer-based marking of the final exam. After we'd completed the untimed exam, we were given three sample exam submissions on which to practise grading. The purpose of this - a very clever approach, I think - was to filter out of the subsequent "high stakes" assessment of final exams those of us who were not up to standard as markers. (As mentioned previously, a full explanation of the process is available in my first report.) We then moved on to peer-marking three of our class-mates' exams, using a not very elegant web-based interface. (One of my three scripts was largely empty, which saved time...) In my case I made the mistake of doing my marking before the training, and I assume from how poor the quality of my marking was judged to be (it was found to be far too harsh) that the results of my marking of peers' exams will have been excluded from consideration. Finally, we self-evaluated our own work using the rubric, no doubt influenced by our reading of our peers' exam answers. You can see how peer and self-evaluation marks were presented in this PDF [18 pages, 2 MB], which contains my own exam result (and most of my answers). Note the small (but, towards the end, much larger) differences between the peer and self-evaluations. In truth I reckon the peer-markers were "taken in" by the superficial solidity of my answers to the final questions: when self-evaluating I knew perfectly well that I'd been bluffing; I suspect my peers, who may well, like me, have been struggling with this relatively difficult material, did not. My point 6b above is relevant here.

9. Possibilities for improvement. I conclude with a brief and admittedly sketchy list of suggestions for improvement. None of these should detract from the fact that this was a really good course: well organised, with a great deal of thought and commitment going into its design, production, and running.

a) Text transcripts of lectures. Prior to the exam I would have scan-read transcripts of some of the video lectures had these been available; this would have been a lot quicker than re-watching the lectures.

b) Course pace. The pace of the course was relentless, with what seemed like only a couple of days each week in which the problem sets were open for completion. I wonder whether the completion rate might have been higher if the course had been a bit "longer and thinner"? Certainly, the final week of the course was a killer, with three 30 minute lectures and two assignments with a total of 25 questions, covering material on real analysis that Devlin indicated represented a difficult transition. It covered enough ground, it felt to me, to have been spread over two weeks.

c) Marking rubric. A less presentationally focused marking rubric might help, for the reasons outlined in 6a above.

d) Community. Simply encouraging learners to form their own learning sets - which are then likely to dwindle into ineffectiveness as the number of active learners drops - is not enough; a different way of engendering a sense of community is needed. Here I think the "one big discussion group" approach that either evolved or was decided upon by Peter Norvig and Sebastian Thrun in their 2011 Artificial Intelligence (AI) MOOC has definite advantages. (I wrote extensively about that course during 2011.) The AI course used an OSQA-driven platform, with a modicum of moderation, through which several course leaders emerged who gained kudos for their pedagogically helpful interventions and responses. I think that encouraging learners to go off and do their own thing may have pushed away some people who would otherwise have played a leading and constructive role in a well-structured central forum.

None of these suggestions addresses the "6b challenge", and it is this that strikes me as the major difficulty in peer-based marking of high stakes assessments. The optimist in me says that the challenge can be solved.
