University of Cambridge
Natural Sciences Tripos, Part IA Mathematics
Final Examiners’ Report 2018–19
1 Examiners

Examiner   Name   Department
2 Structure of the Examination
As in previous years, the examination consisted of two three-hour written papers and a
Scientific Computing project, which was assessed earlier in the year.
Each paper had a Section A, consisting of 10 short questions adding up to 20 marks,
and a Section B, consisting of 10 long questions each worth 20 marks. The number of
marks available for each part of a question was indicated.
Section A of Paper 1 was based on the core A-level Mathematics syllabus, while Section
A of Paper 2 was based on the NST Mathematics Course A. Two of the questions in Section
B of each paper required knowledge of Course B; they were indicated with asterisks and
placed at the end of the paper.
Candidates were instructed to attempt all questions in Section A and at most five questions from Section B. Despite these instructions, a small number of candidates answered more than five questions from Section B; in such cases the Examiners accepted the five answers with the highest marks and removed the excess marks from the markbook. Students are nevertheless encouraged to follow the clear instruction to attempt at most five questions, as the decision on how to deal with excess attempts remains at the discretion of the Examiners. (It is noted that this year three attempts attracting in excess of 15 marks were discarded, including one 20-mark attempt.)
A total of 120 marks was therefore available on each paper, and the Scientific Computing mark was scaled to a maximum of 20. Each candidate thus received a raw mark out of 260.
3 Warnings
Several warnings were received concerning specific learning difficulties and colour blindness, but no specific action was required.
Examiners were alerted to one candidate sitting Paper 1 early (on the Friday preceding
the main cohort), and one sitting Paper 1 one day late. Measures were put in place to
accommodate this, but no issues were experienced.
4 Conduct of the Examinations and Complaints
4.1 Venue
The examinations were held in the Sports Hall of the University Sports Centre in West Cambridge, which can accommodate all NST candidates; an additional cohort of Computer Science candidates sat the examination upstairs in the Multi-Purpose Room on the same site. A relatively small number of candidates took the examination in Colleges or other venues.
4.2 Candidates
There were 542 candidates entered for the Mathematics IA examination, of whom 421 were from the Natural Sciences Tripos (NST) and 121 from the Computer Science Tripos (CST). (Although the CST candidates sat the same two written papers and were marked in an identical fashion, they did not undertake the same computational projects and were not considered explicitly when setting the examination boundaries. Instead, their raw marks were passed on to the CST1A Examiners.)
No reports of misconduct were received. One CST candidate was taken ill during Paper 2 and left a little early, but this did not cause a significant disturbance to the other candidates in the room. No other incidents relevant to the conduct of the examination occurred. Several Examiners remained in the Sports Hall throughout the examinations, and one Examiner remained in the Multi-Purpose Room upstairs throughout.
4.3 Corrections and announcements
Prior to the start of Paper 1, the Invigilators had placed on each desk in the Sports Hall a copy of the formulae booklet distributed for the NST1A Physics examination. Fortunately, this was established prior to the start of the exam and (with the aid of the Examiners) all copies of the booklet were collected again before the examination began (although candidates had begun to take their seats). While the first page of the examination paper correctly stated the requirements, and the requirements had been provided to the NST administrator prior to the examination period, Invigilators noted that they had received confusing instructions. Had this problem not been rectified, it could have caused significant unfairness, as the booklets contained information that would have helped significantly with the examination. (There is no evidence that this mistake was replicated across the colleges.)
The only announcement made during the examinations was to clarify that P1Q14R(a)(i) was worth six marks, as the corresponding “[6]” in the margin had been accidentally removed. This did not affect the candidates’ ability to pursue the question. There were no
corrections required to the question statements on either of the two examination papers,
and no adjustments were necessary to the marking to accommodate any ambiguity or
misrepresentation in the questions. There were only a few inconsequential queries from
candidates during the examination.
4.4 Complaints
No complaints were received prior to the Final Meeting of the Examiners.
5 Delivery and sorting of scripts
The Senior Examiner is currently responsible for transporting the large quantity of scripts from the Sports Hall to the CMS for sorting; this practice has been recommended in the past to ensure more rapid delivery of examination scripts to the Mathematics Faculty, given the tight schedule available for sorting and marking.
This year, the Senior Examiner accompanied the scripts in a taxi from the Sports Hall back to CMS. (Last year, an officer from the Undergraduate Office arrived with a taxi to collect the scripts.) On both days, the scripts were packed and ready for collection by 12:35pm, shortly after the 12:30pm for which the prepaid taxi was booked. (The booking and prepayment were organised by the Undergraduate Office.) The Senior Examiner, with help from another Examiner and an examination supervisor, moved the scripts from both halls to the taxi (using a trolley). The scripts were delivered to the sorting room (MR15) at around 1pm. The sorting by question and checking against coversheets was completed shortly before 6pm on both days.
Several candidates had misnumbered questions on their scripts, and in most cases such discrepancies were remedied during sorting. Similarly, some missing coversheets were filled in during sorting, although it is noted that a number of these contained errors.
One candidate (who sat their paper in college) recorded their candidate number with one digit illegible and a second incorrect. This was resolved without ambiguity using their script for Section A, a question attempted by all candidates who sat the paper in college.
During marking, one coversheet for Paper 1 Section A did not have a script attached.
A thorough search of the material delivered to CMS was conducted. The resolution of this
case is discussed in §10.
The plan to have the floating Assessor mark the scripts for P1Q11Z was not communicated to those doing the sorting until after the bulk of the sorting had been completed, causing some slight delay for Paper 1. This communication failure was resolved for Paper 2.
6 Assessors
Approximately 6,400 questions have to be marked and checked in about six days: (1 Section A question + 5 Section B questions) × 2 papers × 533 students = 6,396 questions. Given this load, as with last year, three additional Assessors were hired to mark Section A questions and selected questions from Section B:
Code   Name   Assignment
A1     —      Section A on Paper 1
A2     —      Section A on Paper 2
AF     —      P1Q11Z and P2Q13T
In all cases, the marking was completed with the oversight of the Examiner responsible
for setting the question.
This was the first involvement in Cambridge examinations for Assessor A1, while Assessor A2 had marked Section A of Paper 1 last year. The ‘floating’ Assessor, AF, marked two of the Section B questions last year, as well as previously acting as a Part III assessor.
The purpose of the Assessors is to decrease the marking load and distribute it fairly amongst the Examiners and, consequently, to minimise the likelihood of mistakes occurring in marking, checking and entering marks, as well as to provide cover in possible situations of accident or illness.
This year, the large Section A marking sets from both papers were undertaken by two of the Assessors, while the third dealt with two Section B questions. In addition, Examiners S and W between them marked P2Q15Y due to the very large number of scripts that Examiner Y would otherwise have needed to mark. (Examiner Y was responsible for all three questions on differential equations.) Each Assessor received the Examiner’s model solution and mark scheme, as well as a personal mark checker. For logistical reasons, Examiner S also assisted with the mark entry for some of the questions of Examiners Y and Z.
The work of the three Assessors, and the sharing of the workload for Examiner Y, contributed significantly to the delivery of the marks within the tight timescale available, which would otherwise have been unreasonable.
See the Recommendation regarding the use of Assessors and other load-sharing strategies.
7 Checking
Examiners either found their own Mark Checkers or were supplied with one by the Undergraduate Office. Mark Checkers verified that each page of every script had been marked,
that the marks had been totalled correctly and transferred to the coversheets, and that
the mark had been entered correctly into the database.
8 Raw Marks, Discrepancies and Computer Project
8.1 Raw marks.
As in previous years, each Examiner had a personalised web-based marksheet accessed via Raven and a passphrase. The Senior Examiner had access to all the markbooks, although this access was not concurrent. Marks for the Scientific Computing project were provided to the Senior Examiner in the form of a spreadsheet from the Computing Lab via the central NST administrator.
Examiners were responsible for entering the marks for the questions for which they were responsible into the online Markbook. While Faculty Board approval had been received for Assessors to themselves enter the marks for the questions they had marked, a robust mechanism for this was not in place within the online system, and so this task reverted to the Examiners. In the end, the Senior Examiner (S) entered marks for P1Q11Z and around half of P2Q15Y, in addition to Section A on Paper 1, P1Q12S and P2Q18S.
See recommendations for modifications to the mark entry system and future Assessor mark entry.
A DAMTP Computer Officer combined these marks into a single spreadsheet in order to generate a raw mark out of 260 for each candidate (120 marks from each of the papers, plus 20 marks from the Scientific Computing project).
8.2 Discrepancies.
Data from the Master Coversheets [MCS] (where each student ticked their questions attempted) were read by machine into another spreadsheet and compared with the markbook by the Computer Officer. A number of discrepancies arose which were dealt with as follows:
1. Where a candidate had not entered a question on the MCS but a mark was found in the markbook, the script was checked and the mark was accepted if it belonged to that candidate. In most such cases, the candidate (or the sorter, where the MCS had been missing) had misnumbered questions on their MCS (taking the wrong question number from the same Examiner). In some of those cases, candidates had corrected their wrong entry by crossing it out, but these were still read by the machine as attempted.
2. Where a candidate had entered a question on the MCS but a mark was not found in
the markbook, the packets of scripts were searched. In most cases this was caused
again by misnumbering, so no changes in the markbook were required.
3. In five cases a search of the scripts showed that the Examiner had entered a mark
under an adjacent candidate number and the error had not been detected by the
mark checker.
4. In three cases, the candidate had recorded their candidate number incorrectly, but
it was possible to unambiguously determine the correct number from other data
(e.g. desk number or the list of candidates who had completed the exam in college).
Handwriting was also compared to ensure the correct allocation was made.
5. In two cases, where the candidates had sat the papers in college, each claimed a single question (within the rubric) for which the script was not found, and a search of the rubbish was requested. (For one of these, the candidate had incorrectly recorded their candidate number on the Master Coversheets and blue coversheets.)
6. In one case, a script coversheet was found that was recorded on the MCS, but for
which no script could be found. A search of the rubbish was requested. Resolution
of this is discussed in §10.
All these errors were corrected in the overall markbook by the Computer Officer under
the supervision of the Senior Examiner.
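The reconciliation described above amounts to comparing two keyed datasets — MCS ticks and markbook entries, keyed by (candidate, question) — and classifying the mismatches. A minimal illustrative sketch follows; the candidate numbers and question labels are invented, and this is not the actual system used by the Computer Officer:

```python
# Sketch of the MCS-vs-markbook reconciliation described above.
# Candidate numbers and question labels are hypothetical examples.

# Questions each candidate ticked on the Master Coversheet (machine-read)
mcs = {("5001", "11Z"), ("5001", "14R"), ("5002", "13Y")}

# Questions for which a mark was entered in the markbook
markbook = {("5001", "11Z"), ("5002", "13Y"), ("5002", "16W")}

# Case 1: mark in markbook but no MCS tick -> check the script belongs
# to that candidate (often a misnumbered question on the MCS).
unticked_but_marked = markbook - mcs

# Case 2: MCS tick but no mark -> search the packets of scripts.
ticked_but_unmarked = mcs - markbook

print(sorted(unticked_but_marked))  # [('5002', '16W')]
print(sorted(ticked_but_unmarked))  # [('5001', '14R')]
```

Each mismatch then falls into one of the cases enumerated above (misnumbering, adjacent candidate number, missing script, and so on).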
8.3 Computer Project.
We are pleased to report that this year the provision of the Computer Project marks for NST1A candidates went smoothly. Although these were supplied keyed on CRSid rather than BCN, the Computer Officer had the necessary translation mechanism in place.
The CST1A candidates were separated from the NST1A candidates prior to the importation of the Computer Project marks, as they undertake a different project system. Incorporation of computing marks for CST1A candidates is now handled by CST.
9 Scaling and Classification
The scaling of marks and classification of candidates were carried out according to the instructions given to NST Senior Examiners. The guidelines state that approximately 25% of NST candidates should obtain a scaled mark of at least 70 out of 100 (the TOP partition), approximately 65% should obtain a scaled mark greater than or equal to 50 and less than 70 out of 100 (the MIDDLE partition), and approximately 10% should obtain less than 50 out of 100 (the BOTTOM partition). When determining the boundaries, CST and ET candidates, and those who had withdrawn, were excluded. This year the boundaries of the partitions in terms of raw marks were a = 210.5 and b = 128.5 (out of 260); see table 1. (For comparison, last year a = 198 and b = 124.5.)
Boundary   Raw mark   Scaled mark
max        260        100
a          210.5      70
b          128.5      50

Table 1: The boundaries of the partitions in terms of raw marks: max = the maximum number of marks allocated to the subject, a = the raw mark of the lowest candidate in the TOP section, b = the raw mark of the lowest candidate in the MIDDLE section.
The values of a and b were adjusted slightly (as permitted by the guidelines) to identify suitable gaps in the marks of the candidates which also arguably corresponded to identifiable differences in the quality of the candidates’ performance. We also considered the candidates who nominally failed (i.e. obtained a scaled mark of less than 40%, corresponding to a raw mark of around 104) and noted that they were few in number. All NST candidates were then assigned a total scaled mark out of 100 by a piecewise-linear scaling of the total raw mark. The class boundaries and distribution are given in Table 2.
Partition   Number   Percentage
TOP         102      24.8
MIDDLE      275      66.9
BOTTOM      35       8.3
Total       412      100
Withdrawn   9        -

Table 2: Distribution of candidates in each partition.
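The piecewise-linear scaling can be sketched as follows. This is an illustrative reconstruction from the boundaries in Table 1, not the actual code used: the segment endpoints (scaled marks of 70 and 50 at raw marks a = 210.5 and b = 128.5) come from the report, while the assumption that the BOTTOM segment runs linearly down to zero is ours.

```python
# Piecewise-linear scaling of raw marks (out of 260) to scaled marks
# (out of 100), using this year's boundaries from Table 1.
# Illustrative sketch only; the production code is not reproduced here.

A, B, MAX = 210.5, 128.5, 260.0  # raw-mark boundaries (a, b, maximum)

def scale(raw):
    """Map a raw mark to a scaled mark via the three linear segments."""
    if raw >= A:                       # TOP: [A, 260] -> [70, 100]
        return 70 + 30 * (raw - A) / (MAX - A)
    if raw >= B:                       # MIDDLE: [B, A) -> [50, 70)
        return 50 + 20 * (raw - B) / (A - B)
    return 50 * raw / B                # BOTTOM: [0, B) -> [0, 50), assumed linear

print(round(scale(260), 1))    # 100.0
print(round(scale(210.5), 1))  # 70.0
print(round(scale(128.5), 1))  # 50.0
```

The implied rates are 30/49.5 ≈ 0.606 scaled marks per raw mark in the TOP partition and 20/82 ≈ 0.244 in the MIDDLE partition, consistent with the figures quoted in §12; a raw mark of 104 maps to a scaled mark of about 40, matching the nominal fail threshold mentioned above.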
The corresponding maximal scaled marks for Paper 1, Paper 2 and Scientific Computing are approximately 46.2, 46.2 and 7.7, respectively. If the total scaled mark is partitioned among the three components in the same proportion as the raw marks for the three components, this can in principle lead to a scaled Scientific Computing mark that exceeds the maximum possible (i.e. 7.7). In such cases a ‘backward rescaling’ algorithm must be applied in order to redistribute the excess marks between the two papers. However, this situation did not arise this year.
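The proportional split and the ‘backward rescaling’ safeguard can be sketched as follows. The report does not specify the algorithm in detail, so the redistribution below is one plausible reading, and the example numbers are invented:

```python
# Illustrative sketch: split a candidate's total scaled mark among the
# three components in proportion to their raw marks, pushing any excess
# above the Scientific Computing cap back onto the two papers.
# One plausible reading of the report's description, not the actual code.

SC_CAP = 7.7  # approximate maximum scaled Scientific Computing mark

def split(scaled_total, raw_p1, raw_p2, raw_sc):
    raw_total = raw_p1 + raw_p2 + raw_sc
    p1 = scaled_total * raw_p1 / raw_total
    p2 = scaled_total * raw_p2 / raw_total
    sc = scaled_total * raw_sc / raw_total
    if sc > SC_CAP:  # 'backward rescaling': redistribute the excess
        excess = sc - SC_CAP
        sc = SC_CAP
        p1 += excess * raw_p1 / (raw_p1 + raw_p2)
        p2 += excess * raw_p2 / (raw_p1 + raw_p2)
    return p1, p2, sc

# Invented example where the computing component would exceed the cap:
p1, p2, sc = split(90.0, 100, 95, 20)
print(round(sc, 1))  # 7.7
```

By construction the three components still sum to the candidate’s total scaled mark after the redistribution.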
6
Figure 1 shows how the total raw marks (out of 260) and scaled marks (out of 100) are distributed across the candidates by plotting the total against the candidate rank. The vertical (red) lines indicate the boundaries between the TOP, MIDDLE and BOTTOM partitions. Note that while there is significant curvature towards either end of the distribution, the distribution is nearly linear across much of its middle range, this flatness being enhanced by the scaling process (note the slight kink in the curve for the scaled mark at the boundary between the TOP and MIDDLE partitions).
Figure 1: Plot showing the raw marks out of 260 (orange line) and the scaled marks out of 100
(blue line) plotted against the candidate rank. The vertical red lines show the boundaries between
the TOP, MIDDLE and BOTTOM partitions.
10 Missing Scripts
There were three candidates who claimed attempts (based on the Master Coversheets) for which Examiners, despite extensive searching of the scripts, could find no material to mark. Following the guidelines, in each case no marks were awarded for the missing attempts. However, Examiners note the following:
XXXXX This candidate claimed to have answered Section A on Paper 1. The script bundle
contained a blue coversheet for this question, but no answer. If the missing attempt
had been completed, the Examiners’ best estimate, based on the 11 questions we
have received and the correlation between attempts and total marks for the missing
one, is that the candidate would have gained in the region of 15.5 additional raw
marks. This would have increased their scaled marks from 58.49 to 62.27, moving
them 58 places up the ranking, but the candidate would remain in the MIDDLE
section.
XXXXX This candidate claimed to have answered question 16W on Paper 1. If the
missing attempt had been completed, the Examiners’ best estimate, based on the 11
questions we have received and the correlation between attempts and total marks for the
missing one, is that the candidate would have gained in the region of 13.9 additional
marks. This would have increased their scaled marks from 58.11 to 61.50, moving
them 43 places up the ranking, but the candidate would remain in the MIDDLE
section.
XXXXX This candidate claimed to have answered question 12R in Paper 2. If the missing attempt had been completed, based on the mean mark for the eight attempts across the two written papers (12 attempts were required for full marks), Examiners estimate the attempt may have been worth 6.5 raw marks. This would have increased their raw total from 52.00 to 58.50, and their ranking would have remained the same (the lowest ranked candidate). Indeed, had they scored a full 20 raw marks on that attempt, they would remain the lowest ranked candidate. Feedback from the college where this candidate sat the examination noted that the candidate finished and left the examination at around 10:49. The material left behind was described as “one sheet of rough paper with some angles drawn on it.” This description does not correspond to the missing attempt.
Prior to the final meeting, Examiners arranged to sift through the rubbish from the venues in which the first two candidates above sat Paper 1. Unfortunately, the rubbish was not available until after the final meeting. The results of this search are as follows:
XXXXX An additional Section B question coversheet was located during the search of the rubbish from the Sports Hall. This coversheet showed the candidate and desk number, but did not indicate a question number and was not attached to any material. It appears that this coversheet was filled in but not used. Its value arises from identifying the location within the rubbish of material collected from this candidate. Three sides of rough working were found immediately adjacent to the coversheet. Comparison of the handwriting, pen type and colour indicates these belong to 5362Q. The contents of the rough working also coincide with the attempts that were marked for the candidate. Crucially, there is evidence on the rough working that the candidate had attempted Section A. However, the fragment is too small to represent a markable unit. The conclusion is therefore that the candidate’s claim is correct and the script has gone missing. While there are a number of scenarios for its loss (including the candidate having taken it from the exam room), Examiners are not in a position to determine whether this loss occurred before or after the associated collection of scripts came into the care of the Examiners.
XXXXX This candidate sat their examination in the Graduate Centre. Fragments of answers for two questions from the NST1A Mathematics paper were found in the rubbish from this venue. Neither of these fragments was for the question claimed by the candidate, and the handwriting did not match that of the candidate. No rough workings were found.
11 Verification of Mark Processing
A computer system is in place that calculates all the original scaled marks from the raw
marks and outputs them on a single spreadsheet so that easy comparisons can be made
to see the consequences of the scaling procedures. The results were compared and were
found to be consistent. Assignment of computer project marks was also double-checked.
12 Performance of candidates and overall statistics
12.1 Overall

Raw mark   Mean     Median
Paper 1    81.70    83
Paper 2    81.13    84
Total      162.37   165

Table 3: Performance (raw marks) between the two papers.
Paper 1
Question    AX     11Z    12S    13Y    14R    15V    16W    17T    18Z    19V*   20R*
Attempts    533    431    98     477    214    331    303    402    274    33     98
Mean        15.63  12.20  12.58  15.90  14.42  11.39  14.08  13.81  10.96  8.15   13.66
Median      16.00  13.00  14.00  17.00  15.00  11.00  15.00  14.00  11.00  8.00   14.00
Std. Dev.   2.79   4.39   4.69   4.43   4.58   4.88   4.15   4.19   5.62   5.07   4.48
Corr.       0.524  0.682  0.515  0.667  0.690  0.712  0.512  0.634  0.608  0.797  0.734
alpha (%)   71.11  32.95  40.82  69.39  51.87  30.21  51.82  48.26  27.74  9.09   47.96
beta (%)    26.08  39.91  37.76  20.75  34.58  32.33  34.32  36.07  31.02  21.21  33.67

Paper 2
Question    AS     11X    12R    13T    14W    15Y    16V    17Z    18S    19T*   20Y*
Attempts    531    408    410    458    160    488    112    206    224    97     91
Mean        11.26  15.29  13.98  16.47  10.91  14.02  13.15  11.06  12.12  13.73  14.79
Median      11.00  16.00  15.00  18.00  11.00  15.00  15.00  11.00  13.00  14.00  20.00
Std. Dev.   3.94   3.57   4.17   4.21   5.21   5.58   5.68   5.23   5.81   4.23   6.30
Corr.       0.801  0.671  0.591  0.525  0.675  0.674  0.719  0.699  0.704  0.742  0.630
alpha (%)   21.85  65.44  50.73  82.97  27.50  56.76  50.89  32.52  41.52  49.48  58.24
beta (%)    44.63  25.74  35.37  8.73   31.25  20.49  23.21  26.21  26.79  32.99  14.29

Table 4: Performance (raw marks) for each question.
Tables 3 and 4 summarise the overall statistics for the examination. All marks are raw marks (out of 120 for each paper), with any excess marks resulting from more than five attempts at Section B removed from the statistical analysis.
This year the median raw mark (excluding withdrawals) on the two written papers
was 165 out of 240. A comparison with previous years is given in table 5. The highest
mark was 237 out of 240 and the lowest was 52. The mean raw mark was 162.4.
Year    2019  2018  2017  2016  2015  2014  2013  2012  2011  2010  2009  2008
Total   165   153   158   137   151   160   150   182   170   162   140   131

Table 5: Comparison of median raw marks for the two written papers (out of 240) with previous years.
The histograms shown in figures 2 and 3 show the distributions of the total raw mark (out of 260) and the total scaled mark (out of 100). These include the Scientific Computing project. While the histogram of the raw marks (figure 2) has a broadly predictable form, Examiners note the extreme distortion to the histogram that results from the scaling. In particular, the TOP partition is stretched by the required distribution, while the MIDDLE partition is severely compressed: in the TOP partition one raw mark equates to 0.606 scaled marks, while in the MIDDLE partition one raw mark equates to only 0.244 scaled marks. The discontinuity in scaling coincides approximately with the drop-off in numbers for marks above the mode, and thus substantially enhances that drop-off. Examiners note, however, that the rank-based algorithm currently used for NST1A overall means that the actual marks carry less weight, and so this distortion is less objectionable.
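The two scaling rates quoted above follow directly from the partition boundaries (a = 210.5, b = 128.5; Table 1):

```latex
\frac{100 - 70}{260 - 210.5} = \frac{30}{49.5} \approx 0.606,
\qquad
\frac{70 - 50}{210.5 - 128.5} = \frac{20}{82} \approx 0.244,
```

so a raw mark in the TOP partition attracts roughly two and a half times the scaled credit of one in the MIDDLE partition.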
Figure 2: Histogram of frequency distribution of raw marks (including scientific computing).
Figure 3: Frequency histogram of rescaled marks (including scientific computing).
As is customary, all questions were discussed with the Lecturers and received their approval before the camera-ready version was sent to Reprographics. A commentary on the performance on individual questions is given in Appendix A, and scatter plots and additional statistical information can be found in Appendix B.
13 Conclusions and Recommendations
We were satisfied with the successful conduct of the NST IA Mathematics Examinations for 2019. However, we would like to make several recommendations to ensure that they continue to run smoothly and that suitable contingencies are in place for foreseeable scenarios.
13.1 Recommendations carried over from previous years
The first of these recommendations carried forward represents an issue that has been of concern for over twenty years. The second has been reiterated each year since the papers were first sat at the Sports Hall. While we accept that logistics may make the first of these recommendations difficult to address without a major restructuring of all undergraduate exams, the second is relatively straightforward and there can be no excuse for not addressing it.
1. Timing of Exam. The late timing of this examination places the Examiners and Assessors under considerable pressure, especially given that the combined NST/CST IA Mathematics is arguably the largest examination in the University. The timing should be kept under review and small margins for improvement considered, including a later date for the central submission of marks.
2. Question corrections in the Sports Hall. The Sports Hall generally works well as a venue in which all candidates can be accommodated.
However, a serious question remains of how the Examiners would deal with a situation in which a question needed to be corrected during the examination and in which the correction was of a technical nature, involving a formula or graph, that could not easily be explained using the microphone.
While this problem (of a correction being required) has never occurred in previous years, if it does then a solution should be in place. The Registry needs to take this issue seriously and cease just making the insulting statement “Finally, we encourage departments to ensure that exam question papers are comprehensively checked in order to limit and ideally eliminate corrections and mistakes.”
For the present year, Examiners were promised three large whiteboards along the front of the hall with a further two located at the sides. These were not provided. Instead, there were just two modest-sized whiteboards at the front that would have been totally ineffective if corrections had been needed. Not only were the whiteboards of inadequate size and number, but they were not high enough to be visible over the heads of other candidates, and there was no means of writing characters of adequate size or weight on them. Figure 4 gives an indication of their lack of visibility.
Using a data projector which projects onto the front wall could be a solution. Either a simple screen (or better, a pair of screens) could be suspended from the steel structure supporting the roof, or a sufficiently powerful projector could even project directly onto the wall.

Figure 4: The poor visibility of the whiteboard at the front of the Hall, seen from the centre near the back of the hall. Some candidates were seated towards one side of the hall on one of the rows behind that from which this photo was taken. The whiteboard is located in the centre of the photograph, slightly to the right of the white nets hanging near the corner of the room. The second whiteboard was positioned in the same relative position on the other side of the Hall and was no more visible from this position. The photograph was taken about 40s after the end of Paper 1.
Examiners also recommend that the projector is attached to a visualiser to allow
either hand-written, hand-drawn or printed material to be projected readily. The
‘examination time’ should also be displayed on the projected image (this could be
provided by a clock placed within the field of view of the visualiser).
It is essential that the Registry seek an improved solution for future
examinations in the Hall. This has been an ongoing issue that has been
raised every year since the Hall was first brought into use.
Examiners ask that the Chair of the NST1A Examiners request the Chair of the NST Management Committee to write (or jointly write) to the Pro-Vice-Chancellor for Education to ensure the issue is taken seriously and more appropriate measures are put in place.
13.2 Recommendations regarding this year’s issues:
1. Rubric and coversheets. Examiners should give some thought to the rubric on the
coversheet of the Examination. Due to the use of a floating Assessor, it is desirable
for every question to be given its own coversheet, rather than binding multiple
questions with the same responsible Examiner together. The current rubric does
not make this completely clear. Each attempt having its own separate coversheet
simplifies the separation of, for example, P1Q11Z and P1Q18Z. Additionally, the
coversheet should make it clear that both the question number and Examiner letter
should be given; many candidates did not do so. Examiners should also check the content
of the coversheets relative to the rubric. For example, candidates should not be
asked to fill in boxes on the coversheet that have already been filled.
2. Rubric and Section A. Examiners should consider whether the current rubric and
instructions for Section A are sufficiently clear. For example, a large number of
candidates (unnecessarily) started each question in Section A on a new page.
3. Consistency between rubric and announcements by examination room staff. It is
recommended that Examiners get to see and comment on the instructions that will
be read out by invigilators to ensure consistency. Such instructions should be dated
and reviewed annually. The Senior Examiner should be sent a copy of the final
version of the instructions in advance of the examinations.
4. Formula booklets. The front sheet of the exam paper should say “Calculators and
formula booklets are not permitted in this examination.”
5. Section titles. Examiners suggest that, immediately following the ‘Section A’ and ‘Section B’ titles, the instructions specific to that section be repeated within the paper. It is important that these are consistent with and reinforce the statements on the coversheet. For example, for Section B: “Start every question in this Section on a separate page.” The aim is to help reduce the number of scripts where the rubric is not followed.
6. Spare coversheets. The supply of coversheets in the examination rooms should be sufficient not only for each candidate to be issued with six, but also to provide spares so that candidates who attempt more than the required number of questions, or who spoil their coversheets, can be given additional ones. It was necessary this year to make use of some photocopied coversheets.
7. Additional script paper. By the end of the examination, a considerable number of candidates had requested additional script paper, and receiving it sometimes took a number of minutes. The Examiners therefore recommend that all candidates are provided with a larger supply of script paper at the start of each examination.
8. Material distributed to candidates. Examiners should check what material is placed
on the desks in advance of candidates being allowed into the Sports Hall. From
experience this year, it takes around 10 minutes to collect in material that should
not have been distributed. If material is missing, it may take significantly longer
to distribute it unless it is already present in the examination room(s). The Senior
Examiner should attempt to determine the source of any such error and whether it
has been replicated across the colleges.
9. Verbal instructions (before the start of the exam). Verbal instructions at the start of
the examination should reinforce the need for each Section B question to be started
on a new page (some scripts had multiple Section B questions on the same page).
Verbal instructions at the start of the examination should note that candidates
requiring additional paper should make such a request before they have run out of
paper.
10. The start of the examination. Examiners recommend that there is a silent pause
between the invigilator giving announcements and announcing the start of the ex-
amination. This pause should be of at least 30 seconds. Such a pause will also
aid in having the examination start at a precise time on the official clock in the
examination room.
Additionally, consideration should be given to having settable clocks or a count-down
timer such that the ‘official’ time in the examination room is set to 9:00 exactly at
the start of the examination. For 2019, there was a delay between the official clock
reaching 9:00 and the start of the examination.
11. Verbal instructions (at the end of the exam). Verbal instructions given to candidates
at the end of each examination should reinforce the need to complete the desk
number as well as their candidate number, and to number each page of their script.
Candidates should also be reminded that they are required to fill in both sides of
the Master Coversheets.
12. Sorting and coversheets. It is important that a record is made of any Master Cover-
sheets that are filled in by those undertaking the sorting. A hand-written note was
provided on at least some of the Coversheets where this had been done. While a
checkbox was also provided on the Master Coversheet to indicate an issue identified
by the sorters, this was not reflected in the computer-generated report of anomalies.
(A significant number of the discrepancies identified were between scripts and Mas-
ter Coversheets completed by the sorters. It is important to note, however, that this
is not a criticism of those undertaking the sorting.)
13. Assessors (distribution of the marking load). As with last year, assigning two As-
sessors to two large Section A markings seems reasonable and works well.
This year, the third ‘floating’ Assessor marked only two Section B questions, but
these were amongst the questions with the largest number of attempts and assigning
any additional questions to that Assessor was likely to lead to overload. The Senior
Examiner made an assessment of the number of scripts likely to be received for each
question, and the distribution of marking between the two Papers in advance of each
of the Papers. On this basis, Examiners were alerted to the provisional deployment
of the floating Assessor prior to the scripts being sorted.
It is recommended that Assessors continue to be utilised and that the Senior Ex-
aminer should attempt to make a provisional estimate of the marking load (based
partly on the popularity of given topics from the previous years) prior to the exam.
14. Planning marking load. It is further recommended that Examiners should consider
the likely marking load and the split between the two papers at the point when
responsibilities for particular questions are being considered. For the current year,
one Examiner was responsible for three questions requiring only A Course material.
As two of these questions were in the first paper and have been relatively popular
historically, it was natural to assign one of these to the Assessor. Although some
Examiners had two Section B questions in Paper 2, the second question was always
one requiring B Course material, thus limiting its likely uptake.
15. Redistribution of marking between Examiners. Instead of overloading the third ‘float-
ing’ Assessor, Examiners S and W between them marked the 488 scripts of candidates
who attempted P2Q15Y. It is noted that, while not formalised, the possible need for this
strategy had been discussed in advance between Examiners S and W at the point
that questions were being assigned to Examiners. Communication between the two
Examiners who undertook the marking, and the Examiner who had set the question,
ensured the scripts were marked in an equitable manner.
With 8 Examiners and 3 Assessors, each would have to mark about 580 scripts if the
load were evenly distributed. This is, however, hard to achieve owing to the natural
clustering of questions by Examiner (e.g., linear algebra, probability, ODEs) and to
student preferences. Had Examiners marked exactly the Section B questions they had
set (without the help of an Assessor), the marking load would have varied from about
320 at the lower end to over 1,000 at the top end. With the strategy used this year, the
marking load ranged from 408 to 722 scripts marked by Examiners (and 889 marked
by Assessor AF), with the final distribution given in Table 6.
Examiner          R    S    T    V    W    X     Y    Z   A1   A2   AF
Mark own §B     722  322  957  476  463  408  1056  911  533  531    0
With Assessor   722  322  499  476  463  408  1056  480  533  531  889
Actually marked 722  700  499  476  573  408   568  480  533  531  889

Table 6: Marking load.
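The figures quoted above can be checked directly against the ‘Actually marked’ row of Table 6; the short sketch below (illustrative only, using the table's numbers) recovers the average of roughly 580 scripts per marker across the 8 Examiners and 3 Assessors:

```python
# "Actually marked" row of Table 6, keyed by Examiner/Assessor label
actually_marked = {
    "R": 722, "S": 700, "T": 499, "V": 476, "W": 573, "X": 408,
    "Y": 568, "Z": 480, "A1": 533, "A2": 531, "AF": 889,
}

total = sum(actually_marked.values())       # total question-marking load
average = total / len(actually_marked)      # 11 markers in all -> ~580 each
spread = (min(actually_marked.values()),    # lightest load (Examiner X)
          max(actually_marked.values()))    # heaviest load (Assessor AF)
```

Under an even distribution each marker would handle `average` ≈ 580 scripts, while the actual spread runs from 408 to 889.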
For the present year, some Examiners knew a priori that they would have to mark
only two Section B questions (whether because they had set a Section A question or
because they had only been responsible for setting two Section B questions). It is
recommended that, when assigning the questions, such Examiners should be made
aware that they may be called upon to undertake some marking of other questions
if necessary to balance the workload where this cannot be achieved by the efforts of
the floating Assessor alone.
16. Discrepancies and script identifiers. It has been recommended previously that,
shortly before the exams, the Senior Examiner should advise all Examiners and Assessors
that exam scripts carry three identifiers: the candidate number, a letter and a desk
number. Then, in the case of a poorly written candidate number (which appears with
the letter on the spreadsheet), the number can easily be checked, since candidate and
desk numbers increase simultaneously. This would help to prevent discrepancies.
It is noted that the Undergraduate Office is also provided with a list of candidates
who sat in college. This can be useful in the case of incorrect candidate numbers as
such scripts do not have a desk number that would otherwise provide a parity check.
17. Checkers (number). In 2018, shortly before the exams, it was found that there was an
insufficient number of checkers. Consequently, several checkers were appointed in
a rush and some did not receive proper instructions, which caused a number of
problems.
This year, an adequate number of checkers was confirmed in advance and no problems
were encountered (although there were some later changes for logistical reasons
associated with mark entry).
18. Checkers (instructions). The Senior Examiner should remind Examiners to ask their
Checkers to insert college scripts in the correct place in the main sequence of scripts.
(All Examiners did so in 2019 without prompting.)
19. Markbook (access for Assessors). Although the Faculty Board agreed that Assessors
could be granted permission to enter into the Markbook the marks for the questions
they had marked, an oversight meant that a suitable mechanism was not put in
place to permit an Assessor access to the Markbook for a given question. Thus the
responsibility for mark entry this year remained with the responsible Examiner (or,
in some cases the Senior Examiner, as they had access to all the Markbooks).
The Computer Officer is now aware of this ongoing need, but it will
be necessary for the Senior Examiner to make sure a suitable implementation is in
place for the next year.
20. Markbook (mark entry by others). Examiners discussed whether it would be desirable
to have others enter their marks for them. While it was agreed that this was not
an effective use of Examiners’ time and added to the workload, Examiners were
concerned that having to transfer the scripts to another party to enter the marks
could add another bottleneck to what is already a very tight schedule. (Examiners
noted that they felt it was important that the Checkers were not involved in entering
the marks.)
21. Markbook (minor improvements). Examiners note that there are a number of minor
changes to the online Markbook that may help reduce further the (small) number of
transcription errors and improve the speed of data entry. (i) Given that coversheet
details are likely to be known in advance of Examiners entering marks, the cells where
marks may reasonably be expected could be highlighted to indicate this, whilst still
allowing entry in cells where the coversheet information did not indicate an attempt.
(ii) The Logout link should be in a font at least as large as the Marks checked
and complete link. (iii) Examiners noted that a bold black font on the selected
candidate's number would be easier to read than the red font, particularly in bright
conditions (perhaps with the unselected numbers in dark blue). (iv) To decrease the
likelihood of accidentally entering marks in the wrong column when in ‘question,
next candidate’ mode, it would be useful to be able to lock mark entry to a particular
column (e.g., by having an ‘enter’ check box in the column header).
Examiners also noted that they would find a tally of running statistics at the top of
the page useful.
22. Postprocessing spreadsheet. Examiners noted that tools for postprocessing the marks
(e.g. the calculation of correlation coefficients and the production of scatter plots) are
valuable. The present situation is that individual Senior Examiners ‘hack’ together
an ad hoc solution each year whereas it would be more efficient for at least a basic
set of such tools to be provided or handed on from one year to the next.
23. Examiner eligibility. As one of the Examiners appointed this year had retired from
their previous role in the University, there was some confusion over their eligibility
to act as an Examiner, especially as there is a discrepancy between the guidance and
rules on eligibility stated in different places.
It is recommended that, when there is a history of experience and reliability in
the role, such people may continue to act as Examiners, where they are willing to
do so and with the agreement of the Faculty Board of Mathematics. It is noted,
however, that confirmation of their eligibility, agreement of their appointment and
confirmation that the University will permit it must all be completed before the
start of the process of setting examination questions.
24. Ability to fulfil the role. Departments should be reminded that they need to be con-
fident that those who are being nominated as Examiners are capable of completing
the task they are assigned. This year it proved necessary for a replacement Examiner
to step in part-way through the process. While this worked well in the end, it could
have had serious implications had action not been taken until the point of marking.
25. Application of scaling. Examiners discussed the manner in which the somewhat
arbitrary scaling of the marks is undertaken. They agreed that it would be more
desirable and lead to less confusion if the Computational Project (CP) mark was
simply mapped using a constant factor onto its nominal 7.7% of the final mark and
the rescaling was then applied only to the written papers. Specifically, the total
mark would be rescaled as is done currently, but then the scaled CP mark would be
set to 7.7/20 of the raw mark (out of 20). The scaled total less this scaled CP mark
would then represent the scaled written mark, which would be distributed pro rata
between the two written papers. In rare cases, the algorithm doing this may need
to transfer scaled marks in excess of the maximum for one paper to the other paper.
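As a sketch, the proposed procedure might look as follows. The existing rescaling of the total mark is not specified in this report, so `scale_total` below is a placeholder assumption, and the rare transfer of excess marks between papers is omitted:

```python
def split_scaled_marks(raw_p1, raw_p2, raw_cp, scale_total):
    """Sketch of the proposed scaling split (illustrative only).

    raw_p1, raw_p2: raw written-paper marks (each out of 120)
    raw_cp:         raw Computational Project mark (out of 20)
    scale_total:    the existing rescaling of the total raw mark (out of
                    260) -- assumed here, as the report does not define it
    """
    scaled_total = scale_total(raw_p1 + raw_p2 + raw_cp)
    # CP mapped by a constant factor onto its nominal 7.7% of the final mark
    scaled_cp = raw_cp * 7.7 / 20
    # the remainder is the scaled written mark, split pro rata between papers
    scaled_written = scaled_total - scaled_cp
    raw_written = raw_p1 + raw_p2
    s1 = scaled_written * raw_p1 / raw_written
    s2 = scaled_written * raw_p2 / raw_written
    # (transfer of marks in excess of one paper's maximum is omitted here)
    return s1, s2, scaled_cp

# illustrative run with an identity rescaling
s1, s2, cp = split_scaled_marks(80, 100, 16, lambda t: t)
```

By construction the three scaled components still sum to the scaled total, which is the property motivating the proposal.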
26. Distribution of material between papers. Examiners noted that while historically the
distribution of material between the two papers was considered each year with the
decision being made only after the questions had been set, in recent years there has
been a tendency for questions covering the same subset of material to end up
with the same number in the same paper. While some candidates may benefit
from this consistency, it was felt that the examination was perhaps starting
to become too formulaic. Examiners recommend that this approach be reconsidered,
although they note that a change in pattern may invoke some complaints from some
quarters.
27. Topical or cross-sectional questions. For many years the division of questions across
the courses has segmented the schedules into discrete packets and examined those
packets separately, thus allowing students to concentrate on half the material and still
have the potential to earn very high marks. Only Section A of Paper 2 (introduced
nearly 20 years ago) forces the students to have a broader view of the material
they should have learnt in the course.
Additionally, the packets include ‘basic’
aspects such as vectors, complex numbers and integration that end up appearing
both in their own basic questions and within other questions. For example, integration
occurred in its own basic question (P1Q17T) as well as in multiple integration (P1Q12S),
vector fields (P2Q13T), surface integrals (P2Q16V) and Fourier series (P2Q18S). In
contrast, the material
on probability could be safely ignored by candidates as avoiding the two Section B
questions (found in different papers) would not significantly reduce choice and only
two marks of Section A in Paper 2 dealt with probability. Examiners recommend
that the current strategies, of following the same pattern of questions each year and
of having topic-centred rather than cross-sectional questions, be reviewed.
They agree, however, that although they believe they have the authority (a) to
change the way material covered in the schedules is translated into examination
questions and (b) to introduce more cross-sectional questions where a single question
may draw upon multiple elements of the schedules, the recommendation is that any
such plan is reviewed in conjunction with the appropriate subcommittee of the NST
Management Committee.
13.3 Issues raised previously but resolved this year (need monitoring)
1. Computer project marks delivery. Following problems last year, it was established
early on that computing marks would not be included for CST candidates, and so the
processing of their marks would be split from that of the NST candidates at an early
stage.
Examiners note that, by making early contact with those responsible for the Scientific
Computing projects, there was no difficulty in receiving the marks this year for NST
candidates. Examiners note, however, that receiving the Computer project marks in
time has been a long-standing problem, and so it is necessary for Examiners to remain
vigilant.
2. New examiners. Generally, all examiners receive a guidelines pack on what they
are supposed to do. However, we recommend that the Senior Examiner also advise
examiners who are new to Cambridge (or to NST IA) of basic rules. These include:
(a) Questions and answers should not be communicated by email;
(b) The scripts should be returned in the order they were received (i.e., in the order
of the candidate numbers);
(c) Returning scripts to MR15 is bound by the same deadline as completing the
markbook;
(d) Examiners should know the rules of mark checking and advise their checkers
accordingly.
3. Quiet environment for those who sit exams in Colleges. Directors of Studies, or
other authorities, should ensure that students taking the NST IA Mathematics exams
are not placed in the same room as students from other subjects who use computers
to type their answers, since this causes considerable distraction. (In previous years
there have been complaints in this respect, but none were received this year.)
4. Examiner from Computer Lab. The introduction of an additional Examiner from the
Computer Laboratory had been an issue for several years; last year (2017–18) it was
resolved by the appointment of an Examiner from the Computer Laboratory (Churchill
College). The Examiners hope this can continue.
5. Checking the PA system. The PA system in the Sports Halls should be checked and
confirmed to be functioning before the examination period (there were malfunctions
in 2017).
6. Lettering on the coversheets. In 2017, and perhaps before, there were inconsistencies
and errors in the lettering of instructions on three different coversheets: the first page
of the exam papers, the Master Coversheet, and the coversheet for individual questions.
These were resolved for both this year (2019) and last (2018), but will need double-
checking in future.
7. Eliminating the possibility of copying. The following recommendations were made
in 2017, and were implemented in 2018 and 2019.
The Exam Office should ensure that single-sided paper is used as standard during
the examinations (i.e. paper lined on one side only), reducing the temptation
to write on the reverse side. Examiners note, however, that for Paper 2 in 2019
there was insufficient single-sided paper and so some candidates had to be given
double-sided paper.
As part of their instructions, Invigilators should instruct the students to cover pages
of completed work by turning them over to the blank reverse side, i.e. not leaving
them exposed to view by other students on their desks. As a matter of good practice,
students should be occasionally encouraged to turn over their work when they have
not followed this instruction.
Desks in the Sports Hall should be placed as far apart as possible.
Acknowledgements
The Examiners are very grateful to the staff of the Undergraduate Office for their
advice and assistance at all stages, to those who prepared the spreadsheets and
processed the data, and to the Assessors and Mark Checkers for their valuable
assistance.
APPENDIX A: Comments on individual questions
The following comments are provided by the Examiner or Assessor responsible for marking
each question.
Paper 1
Section AX (Marked by Assessor A1) [A-level]
Attempts: 533 (Quality: α = 379 ⇒ 71.1%, β = 139 ⇒ 26.1%), Average mark: 15.63
The students did well with an average mark around 70%. The most common sources
of mark deduction came from lack of attention to detail rather than lack of conceptual
knowledge. There were some questions for which a few students did not know a formula for
obtaining the answer, and consequently could not attempt the question at all: these were
the questions regarding integration by parts, and geometric and arithmetic progression
sums.
For comparison, the mean mark in 2018 was 14.4 and in 2017 was 14.8. Hence, the
conclusion is that Examiners succeeded in their aim to make Section A on Paper 1 more
accessible.
Question 11Z (Marked by Assessor AF) [Complex numbers]
Attempts: 431 (Quality: α = 142 ⇒ 33.0%, β = 172 ⇒ 39.9%), Average mark: 12.20
A popular question that was reasonably well answered, although a lot of students made
a meal of the algebra. In (a), those who applied De Moivre’s theorem on each of the
trigonometric terms separately completed the exercise with few issues, whereas those who
tried to use two applications of relevant double angle formula often got tied up.
In part (b), those who used the angle-addition formula for tan answered the problem
quickly, whereas those who reverted to sine and cosine formulae often got tied up in
algebra.
Part (c) was mostly well answered, although there were quite a few students who wrote
things akin to |a + ib| = 1 ⇒ a + ib = ±1, which was a little worrying. For the final part,
those who simply used log z^w = w log z did well and easily obtained correct expressions
for the real and imaginary parts of the function. A decent proportion of the students
thought that if r > 0 then r^i = r, which caused problems later. For the students who
made it this
far, the sketch was mostly well done. The most common error was to get the direction of
rotation of the spiral the wrong way round.
Question 12S [Multiple integration]
Attempts: 98 (Quality: α = 40 ⇒ 40.8%, β = 37 ⇒ 37.8%), Average mark: 12.58
General comments: This question was not very popular, attracting only 98 attempts
with a mean mark ≈ 12.6.
(a) An alarming number of candidates were not able to formulate or compute the volume
of an axisymmetric body. Of those who failed, a surprising number attempted an
integral of the form ∫₀^R πr̃² dz. Some attempted to use r̃ = −R ln(z/R), while a
larger number used a change of variables to rewrite this as an integral of r²e^(−r/R),
often getting incorrect factors of R in the process. In both cases, most such candidates
forgot that this was only valid for z ≥ Re^(−1). Indeed, only one candidate came close
to achieving the correct answer following this route (that candidate was let down by
a sign error). A surprising number of candidates, having correctly calculated the
volume V, failed to read the question properly and so did not give an expression for
R (this requirement should also have prompted many to realise their expression for
V was not correct, but no candidate made this observation).
(b) Most candidates were able to formulate the integral correctly, with many gaining
full marks. Those who did not were let down by poor algebra, either not carrying
through powers correctly, or evaluating [f(x) − g(x)]_a^b as f(b) − g(b) − f(a) − g(a).
(c) Perhaps predictably, this part tripped up quite a few candidates. Some who were
unable to compute (a) correctly nevertheless saw that the symmetry of the sliced
cylinder could be used and gained the marks, while the majority attempted a harder
integration, often using (unsuccessfully) Cartesian coordinates.
It is noted that the performance in 2018 was also poor, with an average mark of 8.63
from 164 attempts. On that basis, the number of attempts has declined, but the average
mark increased.
Question 13Y [ODEs]
Attempts: 477 (Quality: α = 331 ⇒ 69.4%, β = 99 ⇒ 20.8%), Average mark: 15.90
Part (a). Solving a differential equation, being told first to check that the differential form
(given) was exact. Most managed to follow the instructions OK. Some forgot to apply
the given boundary condition. Some showed the differential form was exact but failed
(forgot?) to then solve the equation.
Part (b). Two first-order linear inhomogeneous equations, requiring use of an inte-
grating factor. The first was done very well. The second had a (to me surprisingly large)
number of attempts where the candidate clearly had no idea how to respond to ∫ ln(x) dx.
Part (c). This used the Laplace equation in spherical polar coordinates, written out
in full, with a solution to demonstrate by substitution. This was very straightforward dif-
ferentiation. Mostly done very well, with careless mistakes rather than misunderstanding
being apparent.
Question 14R [Functions of several variables (curve sketching etc.)]
Attempts: 214 (Quality: α = 111 ⇒ 51.9%, β = 74 ⇒ 34.6%), Average mark: 14.42
Part (a): Elementary mistakes in differentiation; product rule often not used; log
taken to transform product to sum and then differentiate; no checking of dimensions of
the differential form, through which errors would easily have been spotted.
Part (b): Often, the expression for the energy was used directly to compare masses,
and the result of Part (a) was not used; depending on how the expression is manipulated,
there are numerical differences in the mass ratios, however this makes no change to the
sign of the differential of the masses.
Part (c): Bimodal answers, either very poorly or very well done; mistakes include not
understanding the chain rule correctly and not working out the algebra correctly.
Question 15V [Taylor series]
Attempts: 331 (Quality: α = 100 ⇒ 30.2%, β = 107 ⇒ 32.3%), Average mark: 11.39
Although the vast majority of candidates earned some marks in Part (a), it was poorly
attempted, particularly the remainder term. Most candidates provided the first n + 1
terms instead of the first n: this was not penalised (but the error term Rn had to be
consistent with the number of terms provided).
In (b)(i), candidates tried to substitute a series for e^x into a series for sin x, which does
not yield a closed-form solution.
Part (b)(ii) was well attempted with a healthy proportion of candidates spotting that
a substitution y = x + 1 would yield a quick solution. A great many other candidates
deduced a series by repeated differentiation and evaluation. Most unsuccessful attempts
tried to provide a series in increasing powers of x instead of increasing powers of (x + 1).
Candidates struggled to recall standard power series for ln(1 + x) but some were able
to derive it and use it correctly.
Overall, the average mark was 11.39, with marks spanning the full 0–20 range. With
331 attempts, this was a popular question.
Question 16W [Probability]
Attempts: 303 (Quality: α = 157 ⇒ 51.8%, β = 104 ⇒ 34.3%), Average mark: 14.08
General comments: A popular and straightforward question on discrete probabilities
attracted a lot of attempts (302) with a mean mark ≈ 14.1.
(a) Practically all the candidates obtained 8 easy marks. However, a few candidates
found the sample space only and did not calculate the probabilities for events.
(b) This part was answered correctly by the majority of candidates.
(c) No major problems, except some slips in manipulating fractions, e.g. a typical
mistake: 4/25 − 1/10 = −1/25.
(d) Not many candidates sketched the graph correctly. Even when they demonstrated
that P(R|B1 ∩ B2) ≤ r, the curve of P(R|B1 ∩ B2) versus r was drawn above the
bisector.
Question 17T [Integration]
Attempts: 402 (Quality: α = 194 ⇒ 48.3%, β = 145 ⇒ 36.1%), Average mark: 13.81
Part (a): Question (i) was done by most students without major problems (apart from
some mistakes in the integration by parts): half of them solved it by integrating by parts
and the other half by writing the sines in terms of exponentials. Question (ii) proved
tricky for about half of the students who attempted it. Some managed to split the
integrand into partial fractions but then tried several substitutions that complicated the
integration rather than simplifying it, and some abandoned it before completion.
Part (b): This question proved more difficult than anticipated. About a third of
the students who attempted it either did not perform the substitution t = ln(x), trying
other substitutions that did not lead anywhere, or were not able to carry out the
partial-fraction decomposition correctly.
Part (c): Most students were able to integrate by parts to write Iₙ in terms of Iₙ₋₁.
However, a significant minority either made a trivial mistake in the integration by parts
or were not able to identify Iₙ₋₁. In the second part, some students recomputed I₃ from
scratch rather than using the recursive formula they had just derived.
Question 18Z [Linear algebra]
Attempts: 274 (Quality: α = 76 ⇒ 27.7%, β = 85 ⇒ 31.0%), Average mark: 11.00
This question had two parts. The first part involved finding orthogonal 2×2 matrices
satisfying certain conditions and could be done by brute force algebra (though exploiting
the fact that orthogonal matrices describe rotations and reflections made things easier).
In general, students had few problems with this, though many failed to keep track of all
the stated conditions and many failed to check their answers by back substitution; doing
so would have prevented the loss of many easy marks.
In the second part, students were asked to find a best fit linear formula for a data
set using linear algebra methods. Some students had difficulty in assembling the data
into a matrix and many had difficulty in extracting the final expressions for the model
parameters.
Question 19V* [Continuity/diff and series]
Attempts: 33 (Quality: α = 3 ⇒ 9.1%, β = 7 ⇒ 21.2%), Average mark: 8.15
This starred question was attempted by 33 candidates. Many answers demonstrated
excellent understanding and ability, but many others were little more than a few lines
and consequently scored few marks – perhaps candidates ran out of time on a question
near the end of the paper? The average mark of 8.15, derived from only 33 attempts,
reflects that.
Question 20R* [Properties of integral]
Attempts: 98 (Quality: α = 47 ⇒ 48.0%, β = 33 ⇒ 33.7%), Average mark: 13.66
Part (a): A common mistake was to replace the summation of a constant by the constant
itself; a mistake in the summation often led to the wrong limit.
Part (b): The Leibniz rule was not understood (or memorised) correctly.
Part (c): Very few successes in evaluating the first integral; either no attempts or
incorrect substitutions; the second part was well answered.
Paper 2
Section AS (Marked by Assessor A2) [NST IA course]
Attempts: 531 (Quality: α = 116 ⇒ 21.9%, β = 237 ⇒ 44.6%), Average mark: 11.26
The 10 questions in section A covered a broad range of basic skills. Answers showed that
most candidates were familiar with most of the mathematical methods needed, but errors
in understanding the requirements of some questions were common. Candidates had the
most success with questions 1, 3, 5, 9. Full marks of 20/20 were very rare.
Q1: most candidates attempted this straightforward complex-variable question and
answered it correctly. Some did not spot the required factors of (z² + 1); some made
arithmetic errors with factors of i.
Q2: most candidates attempted this. Many did not use the (assumed known) Taylor
series expansion of exp x and spent substantial effort calculating derivatives of exp(x³)
to no good effect: a hint might have avoided this. Those who used the expansion of
exp x generally answered part (a) correctly. For part (b) quite a few candidates tried to
manipulate the integral rather than integrating the series from (a) term by term.
Q3: Nearly all candidates attempted this ODE question and answered correctly, finding
the appropriate integrating factor or change of variables. Although the question was not
trivial, this was likely the most successfully answered question. (In some cases the only
successfully answered question.)
Q4: For part (a) there were several plausible integration-by-parts options used. The
right option led to the correct answer easily, but a substantial number of candidates
unsuccessfully followed another option. Most candidates answered part (b) correctly, but
omitting the constant of integration was a common error.
Q5: Most candidates attempted this. They understood what was required, calcu-
lated the straightforward partial derivatives, and answered part (a) correctly apart from
occasional arithmetic errors. They were less certain about part (b): while the majority
recognised the conditions for a saddle point, quite a few incorrectly opted for maximum
or minimum.
Q6: This was very hit or miss. For part (a) many (usefully) used a Venn diagram to
help work out the answer. The term P(A ∩ B ∩ C) caused a lot of problems (Include it?
Plus or minus sign? Multiples?), and probably substantially fewer than half of the
attempts were correct. For part (b) some cited Bayes' theorem, but few of these went
on to deduce that the answer was 1. Quite a few simply wrote 1 (correctly) as the
answer, often without
explanation. Some did not understand the notation in the question.
Q7: Overall this generalised eigenvalue problem was handled well. Candidates used a
variety of methods (matrix inverse, algebraic equations in two variables (the most popular),
matrix equation in two variables), and usually proceeded toward the correct answer, though
quite a few arithmetic slips occurred in manipulating and solving a quadratic. Some
obtained one value for λ by trying x = y, but missed the second value. Several candidates
misunderstood the term non-trivial.
Q8: Although most candidates clearly understood the concept of Fourier series, this
Fourier series question caused a lot of problems. Many used the hint when trying to
calculate the coefficients by integration (usually formulating the integrals correctly), but
then spent a lot of effort on integrations by parts with no success. A minority used the
hint to express f correctly as a sum of two sin(nx) terms. Several of these did not then
proceed to list all the corresponding Fourier coefficients.
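The hint itself is not reproduced in this report, but the standard product-to-sum identity below illustrates the kind of step involved: it collapses a product directly into two sine terms, whose Fourier coefficients can then be read off without any integration by parts.

```latex
\sin A \,\cos B = \tfrac{1}{2}\bigl[\sin(A+B) + \sin(A-B)\bigr].
```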
Q9: Nearly all candidates attempted this multiple integral question. Most correctly
formulated the required integral, and most of these then obtained the correct answer.
Errors were mainly through lack of care: incorrect integral bounds or arithmetic slips.
Q10: This question on vector area had the fewest attempts and caused the most prob-
lems. Possibly the candidates were uncertain how to proceed. Of those who attempted
the question, for part (a) many started with a lengthy integration over the surface area of
the shell and did not reach the right answer. Only a few thought of the simple solution
of using the area of the disc at the base of the defined shell. It might have helped to
provide a definition of vector area in Cartesian co-ordinates, perhaps as the projection in
directions x, y, z. Perhaps ‘open surface’ could have been used instead of ‘outside of the
shell’ in the wording of the question.
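The simple solution referred to rests on a standard property: the vector area of an open surface depends only on its boundary curve, so it can be evaluated on any convenient surface spanning the same rim, here the disc at the base of the shell:

```latex
\mathbf{S} = \iint_S d\mathbf{S} = \tfrac{1}{2}\oint_{\partial S} \mathbf{r}\times d\mathbf{r},
\qquad\text{so }\ \mathbf{S} = A\,\hat{\mathbf{n}}
\ \text{ whenever } \partial S \text{ bounds a plane disc of area } A
\text{ with unit normal } \hat{\mathbf{n}}.
```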
Performance of candidates this year, with a mean of 11.3, should be compared with
that of 2018 (mean 11.6) and 2017 (mean 14.8). Although the questions aimed to provide
a good test that would be accessible to the majority of candidates, there is clearly a
systematic weakness when assessing the entire breadth of NST1A. In light of this
performance, it has been agreed with DAMTP that a more detailed (anonymised) analysis
of this performance will be undertaken. Interestingly, the correlation between the marks
scored on this question and the total raw mark achieved is the highest of all the questions
in the examination (0.80, compared with an average correlation of 0.66).
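The per-question correlations quoted here and in Appendix B are presumably ordinary Pearson coefficients between the mark on a question and the candidate's total raw mark; a minimal sketch is given below (the Examiners' actual tooling is not specified in the report).

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length mark lists.

    Illustrative helper only -- the report does not specify the tool
    actually used to produce its correlation figures.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance and variances about the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```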
Question 11X [Vectors]
Attempts: 408 (Quality: α = 267 ⇒ 65.4%, β = 105 ⇒ 25.7%), Average mark: 15.29
There were a lot of answers and, with a few exceptions, they scored high marks. Even so,
the simple questions at the start did contribute some discrimination: for example, maybe
one candidate in ten offered a left-handed set. The significance of e · f × g was very well
understood, though a significant minority worked with e × (f × g), with or without brackets.
Part (c), which concerned a line, was found harder than part (b), which concerned a plane:
it might have been better to transfer one or two marks from (a)(ii) to (c). The whole
question was set in the unit cube, and a very small minority were able to see the geometry
sufficiently clearly as not to need vector formulae.
Question 12R [Functions of several variables]
Attempts: 410 (Quality: α = 208 ⇒ 50.7%, β = 145 ⇒ 35.4%), Average mark: 13.98
Part (a): A common error was not evaluating the function at the stationary points; the
question was not read carefully enough. The pair of equations determining the stationary
points was often not handled correctly, with the more complex equation of the pair used
instead of the simpler one. Errors in determining the stationary points fed into the
remaining parts of the question.
Part (b): The Hessian test for the nature of a stationary point was not universally
understood; there was some use of single-variable terminology such as “inflection point”.
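For reference, the second-derivative (Hessian) test in two variables, with D = f_xx f_yy − f_xy² evaluated at the stationary point:

```latex
D > 0,\ f_{xx} > 0 \;\Rightarrow\; \text{minimum};\qquad
D > 0,\ f_{xx} < 0 \;\Rightarrow\; \text{maximum};\qquad
D < 0 \;\Rightarrow\; \text{saddle point}.
```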
Part (c): Well-answered, though some did not understand the notion of contour plot,
and attempted to plot the variation of f as one of the arguments was varied at fixed values
of the other argument.
Part (d): A common mistake was to plot the negative of the gradient, rather than the
gradient itself; possibly an automatic response from plotting forces for given potentials.
Question 13T (Marked by Assessor AF) [Vector fields]
Attempts: 458 (Quality: α = 380 ⇒ 83.0%, β = 40 ⇒ 8.7%), Average mark: 16.47
A very popular question that was well answered by the vast majority of students. The
computation of the two line integrals in (a) and (b) was done very well. Many students
lost marks in (c) when they assumed that a sufficient (rather than necessary) condition
for the vector field to be conservative was for the two integrals in (a) and (b) to be equal.
There were a few students who spotted this issue afterwards and computed the curl of the
vector field in the case β = 0, showing that it did indeed vanish. In part (d) most students
found the scalar potential by inspection and so finished the problem with little work.
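The check made by the stronger candidates is the standard one: on a simply connected domain a vector field is conservative precisely when its curl vanishes, whereas equality of two particular line integrals is only a necessary condition:

```latex
\mathbf{F} = \nabla\phi \iff \nabla\times\mathbf{F} = \mathbf{0}
\quad\text{(on a simply connected domain)}.
```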
Question 14W [Probability]
Attempts: 160 (Quality: α = 44 ⇒ 27.5%, β = 50 ⇒ 31.3%), Average mark: 10.91
General comments: This question on distributions of continuous random variables was
quite a challenge for candidates. The answers revealed a general misunderstanding of
the probability of finding a random variable in an infinitesimally small interval: e.g.
P (u ≤ U ≤ u + du) in many answers was proportional only to the pdf, with du omitted.
Not many candidates realised that ∫_x^{x+dx} f (y) dy = f (x) dx.
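Explicitly, for a continuous random variable X with probability density f, the point the answers missed is:

```latex
P(x \le X \le x + dx) = \int_x^{x+dx} f(y)\,dy = f(x)\,dx + o(dx).
```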
(a) A straightforward answer was given by practically all candidates for the probability
P (a ≤ X ≤ b).
(b)(i) Well answered by many candidates. However, some of the candidates thought that
P (a ≤ U = X + c ≤ b) = ∫_a^b (f (x) + c) dx.
(b)(ii) Big problems for many candidates, who stated that P (u ≤ U ≤ u + du) = f (u − c).
(c)(i) Similar difficulties for many as in (b)(ii).
(c)(ii) Only very few candidates (typically gaining 17-20 marks) managed to succeed here.
Many candidates offered a qualitative explanation of the formula and thus obtained
2 (out of 3) marks.
(d)(i) Quite a few candidates did not bother to specify the range of y in which g(y) = 1.
(d)(ii) A typical mistake in evaluating h(z) was the use of infinite limits of integration.
Several candidates correctly evaluated the integral but then stated that this is valid
only for z ∈ (−1/2, 1/2) and h(z) = 0 outside of this interval.
(d)(iii) Not many correct sketches for h(z) were presented. Instead, some of the candidates
sketched the pdf’s with negative parts, i.e. h(z) < 0. Also, discontinuous pdf’s were
shown in some of the answers.
Question 15Y [ODEs and PDEs]
Attempts: 488 (Quality: α = 277 ⇒ 56.8%, β = 100 ⇒ 20.5%), Average mark: 14.0
General comment: This very straightforward question on second-order ODEs attracted
a lot of attempts, with the majority of them resulting in solid 17–20 marks.
Typical mistakes:
(a, i) A few candidates did not find the roots of the quadratic auxiliary equation correctly.
Some candidates made a typical mistake in finding the constants for the general
solution by forgetting the factor 3 from cos(3x). Another typical mistake was in the
transformation of the general solution,
Ae^{(2+3i)x} + Be^{(2−3i)x} = e^{2} (Ae^{3ix} + Be^{−3ix}).
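The correct transformation keeps x in the common exponential factor:

```latex
Ae^{(2+3i)x} + Be^{(2-3i)x} = e^{2x}\bigl(Ae^{3ix} + Be^{-3ix}\bigr)
= e^{2x}\bigl(C\cos 3x + D\sin 3x\bigr).
```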
(a, ii) Here, many candidates first tried y_PI = Ax^2 e^{−x} for the particular integral, and
some of them then concluded that A = x, giving the answer y_PI = x^3 e^{−x}. There
were also many attempts at y_PI = (A + Bx + Cx^2 + Dx^3) e^{−x}, not realising that
(A + Bx) e^{−x} is the complementary function. Several candidates gave y_cf = Ae^{−x}
or y_cf = Axe^{−x} as the answer for the complementary function.
(b, i) Quite a few candidates did not know how to differentiate the original equations
w.r.t. time, producing e.g.
du/dt = −3u + v → d²u/dt² = −3 + dv/dt.
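Differentiating correctly, the derivative acts on every term on the right-hand side, so that

```latex
\frac{du}{dt} = -3u + v
\quad\Rightarrow\quad
\frac{d^2u}{dt^2} = -3\,\frac{du}{dt} + \frac{dv}{dt},
```

after which dv/dt is eliminated using the second equation of the pair.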
(b, ii) If they got the correct equation in (b, i) then mainly this part did not cause any
problems, except slips like the following one,
Ae^{(−1+i)x} + Be^{(−1−i)x} = e^{−1} (Ae^{ix} + Be^{−ix}).
(b, iii) Quite a lot of algebraic slips in finding the second constant, but otherwise this part
did not cause major problems.
Question 16V [Surface integrals]
Attempts: 112 (Quality: α = 57 ⇒ 50.9%, β = 26 ⇒ 23.2%), Average mark: 13.15
This question was well attempted, although some answers were little more than a few lines
of maths and a note of what they would do if more time were available.
Candidates were generally able to find the correct surface elements for Part (a) and
were able to complete the necessary integrals correctly. The most common mistake was
to integrate over the volume rather than the surface. Small algebraic slips were forgiven
as the question was exploring candidates’ understanding of surface integrals and the use
of appropriate co-ordinate systems.
Part (b) was answered well, and a healthy proportion of candidates offered a cross-
check of their answer using the Divergence Theorem (which yields the answer, e−1, in a
couple of lines).
Question 17Z [Linear algebra]
Attempts: 206 (Quality: α = 67 ⇒ 32.5%, β = 54 ⇒ 26.2%), Average mark: 11.06
The first part of this three-part question essentially required students to find the
eigenvalues and eigenvectors of a 3 × 3 matrix. But the eigenvalue equation was given in
transposed form, which already caused some students difficulty. Some did not even
recognise the problem as an eigenvalue problem, and attempted a brute-force algebraic
approach.
The second part involved stating and manipulating mostly straightforward relations
involving determinants of n × n matrices. Many silly mistakes were nevertheless made, of
which almost all would have been caught by checking the special case of a 1 × 1 matrix,
and all of which could have been caught by checking the special case of a 2 × 2 matrix.
Some students wrote expressions valid only for 3 × 3 matrices in the first place.
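The recommended sanity check (testing determinant identities on a small special case) can be sketched as follows for 2 × 2 matrices. The identities shown are the standard ones; the paper's specific relations are not reproduced here.

```python
def det2(M):
    # Determinant of a 2x2 matrix given as nested lists.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul2(A, B):
    # Product of two 2x2 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 5.0]]
B = [[2.0, 0.0], [1.0, 4.0]]
c = 3.0

# det(AB) = det(A) det(B)
assert abs(det2(mul2(A, B)) - det2(A) * det2(B)) < 1e-9
# det(A^T) = det(A)
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
assert abs(det2(At) - det2(A)) < 1e-9
# det(cA) = c^n det(A) with n = 2 here, not c det(A)
cA = [[c * x for x in row] for row in A]
assert abs(det2(cA) - c ** 2 * det2(A)) < 1e-9
```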
The third part involved diagonalisation of a square, but not necessarily orthogonal,
matrix. A great many students blithely assumed the matrix to be orthogonal, resorted to
stating known formulae accordingly, and paid the penalty.
Question 18S [Fourier methods]
Attempts: 224 (Quality: α = 93 ⇒ 41.9%, β = 60 ⇒ 27.0%), Average mark: 12.12
General comments: This question attracted answers from 222 candidates, approximately
half the cohort, although the mean mark was a slightly disappointing ≈ 12.2.
(a) This was generally reasonably well done, although the integration of
∫_{−1}^{1} cosh(x) cos(nπx) dx caused an issue for quite a few (and a lot of unnecessary
lines of incorrect algebra for many). While most spotted from the outset that cosh(x)
is even, and so b_n ≡ 0, there was a significant minority who only realised this after
attempting to compute b_n. Frustratingly, a significant number of candidates did
not write p = 2 and q = ±π (answers of q = π were accepted), but were not penalised
if it was clear they had made the correct transition from cosh(x) to cosh(1).
(b) Despite the hint, around half the candidates who attempted this part did so by
evaluating ∫_{−1}^{1} sinh(x) sin(nπx) dx rather than through integration or
differentiation. Some attempted other routes, such as trig identities, but all but one
failed.
(c) A relatively common error was incorrect factors in the statement of Parseval's
theorem (costing one mark). Beyond that point, many candidates came unstuck and
resorted to integration in an attempt to evaluate the integral of (f − g)², often
failing to notice a priori that cos(x) sinh(x) is odd and so integrates to zero,
despite the clear instruction to use Parseval's theorem. Some using Parseval's
theorem proceeded with a_n² − b_n² instead of the sum a_n² + b_n², even though they
had stated the theorem correctly. Nevertheless a reasonable number were able to
compute the required result correctly (or nearly so, if they had made an earlier
mistake).
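For reference, with the usual convention f(x) = a_0/2 + Σ (a_n cos nπx + b_n sin nπx) on [−1, 1], Parseval's theorem reads (note the plus sign, and the factors that cost a mark):

```latex
\int_{-1}^{1} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\bigl(a_n^2 + b_n^2\bigr).
```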
Question 19T* [Lagrange multipliers]
Attempts: 97 (Quality: α = 48 ⇒ 49.5%, β = 32 ⇒ 33.0%), Average mark: 13.73
Overall, the majority of students who attempted the question did quite well. Most of
the marks lost were due to algebraic mistakes in solving the systems of equations to
derive the minima in (a) or (b). About a quarter of the students did not derive the
correct condition for the cone to be inscribed in the sphere in part (b). Some students
did not write down the Lagrangian explicitly, or did not indicate, in part (c), the
steps proving that the inequality between the geometric and arithmetic means is a
direct consequence of the minimum value of f that they had just derived.
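The Lagrangian that should have been written down explicitly has the standard form, for an objective f subject to a constraint g = c:

```latex
\mathcal{L}(\mathbf{x},\lambda) = f(\mathbf{x}) - \lambda\bigl(g(\mathbf{x}) - c\bigr),
\qquad \nabla f = \lambda\,\nabla g \ \text{ at the constrained extremum.}
```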
Question 20Y* [PDEs & Separation variables]
Attempts: 91 (Quality: α = 53 ⇒ 58.2%, β = 13 ⇒ 14.3%), Average mark: 14.79
Part (a). Two first-order PDEs, to be solved by separation of variables. Given the clue,
most attempts were well done.
Part (b). A Gaussian solution to the diffusion equation. The question led candidates
through the method, with intermediate results specified. Most attempts were very well
done; most errors were sloppy calculation. The material seemed to be well understood.
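The Gaussian solution referred to has the standard form below (stated generically with diffusivity D; the paper's exact normalisation is not reproduced here):

```latex
u(x,t) = \frac{1}{\sqrt{4\pi D t}}\,\exp\!\left(-\frac{x^2}{4Dt}\right),
\qquad \frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2}.
```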
Appendix B
Scatter plots for NST1A Mathematics 2019
Paper 1
Quest.       Q00A   Q11Z   Q12S   Q13Y   Q14R   Q15V   Q16W   Q17T   Q18Z   Q19V   Q20R
Attempts      533    431     98    477    214    331    303    402    274     33     98
Mean        15.63  12.20  12.58  15.90  14.42  11.39  14.08  13.81  10.96   8.15  13.66
Median      16.00  13.00  14.00  17.00  15.00  11.00  15.00  14.00  11.00   8.00  14.00
StdDev       2.79   4.39   4.69   4.43   4.58   4.88   4.15   4.19   5.62   5.07   4.48
Correlation 0.524  0.682  0.515  0.667  0.690  0.712  0.512  0.634  0.608  0.797  0.734
Alpha %     71.11  32.95  40.82  69.39  51.87  30.21  51.82  48.26  27.74   9.09  47.96
Beta %      26.08  39.91  37.76  20.75  34.58  32.33  34.32  36.07  31.02  21.21  33.67
n Alpha       379    142     40    331    111    100    157    194     76      3     47
n Beta        139    172     37     99     74    107    104    145     85      7     33
Min             4      0      1      1      0      0      1      1      1      0      1
Max            20     20     20     20     20     20     20     20     20     20     20
[Scatter plots (excluding zeroes): mark on the selected question against total mark on the
written papers, with a linear fit, for each of P1Q00A, P1Q11Z, P1Q12S, P1Q13Y, P1Q14R,
P1Q15V, P1Q16W, P1Q17T, P1Q18Z, P1Q19V and P1Q20R.]
Paper 2
Quest.       Q00A   Q11X   Q12R   Q13T   Q14W   Q15Y   Q16V   Q17Z   Q18S   Q19T   Q20Y
Attempts      531    408    410    458    160    488    112    206    224     97     91
Mean        11.26  15.29  13.98  16.47  10.91  14.02  13.15  11.06  12.12  13.73  14.79
Median      11.00  16.00  15.00  18.00  11.00  15.00  15.00  11.00  13.00  14.00  20.00
StdDev       3.94   3.57   4.17   4.21   5.21   5.58   5.68   5.23   5.81   4.23   6.30
Correlation 0.801  0.671  0.591  0.525  0.675  0.674  0.719  0.699  0.704  0.742  0.630
Alpha %     21.85  65.44  50.73  82.97  27.50  56.76  50.89  32.52  41.52  49.48  58.24
Beta %      44.63  25.74  35.37   8.73  31.25  20.49  23.21  26.21  26.79  32.99  14.29
n Alpha       116    267    208    380     44    277     57     67     93     48     53
n Beta        237    105    145     40     50    100     26     54     60     32     13
Min             0      3      1      0      0      0      0      1      2      1      1
Max            20     20     20     20     20     20     20     20     20     20     20
[Scatter plots (excluding zeroes): mark on the selected question against total mark on the
written papers, with a linear fit, for each of P2Q00A, P2Q11X, P2Q12R, P2Q13T, P2Q14W,
P2Q15Y, P2Q16V, P2Q17Z, P2Q18S, P2Q19T and P2Q20Y.]
[Histogram: Section A mark distributions (marks 0–20) for P1QAX and P2QAS.]
[Histogram: Paper 1, Section B (excluding starred questions) mark distributions for
P1Q11Z, P1Q12S, P1Q13Y, P1Q14R, P1Q15V, P1Q16W, P1Q17T and P1Q18Z.]
[Histogram: Paper 2, Section B (excluding starred questions) mark distributions for
P2Q11X, P2Q12R, P2Q13T, P2Q14W, P2Q15Y, P2Q16V, P2Q17Z and P2Q18S.]
[Histogram: Section B starred question mark distributions for P1Q19V, P1Q20R, P2Q19T
and P2Q20Y.]
[Histogram: Paper 1 mark distribution.]
[Histogram: Paper 2 mark distribution.]
[Histogram: Overall mark distribution.]
1 The Examiners
The examiners were [names removed]. One examiner was on paternity leave
during the exam period, and additional assessors were appointed to ensure
marking could be done on time: [names removed].
2 The Examination
As in previous years, the examination consisted of two three-hour written
papers and a Scientific Computing project, which was assessed earlier in the
year. Each paper had a Section A, consisting of 10 short questions adding
up to 20 marks, and a Section B, consisting of 10 long questions each worth
20 marks. The number of marks available for each part of a question was
to be indicated. Section A of Paper 1 was to be based on the core A-level
Mathematics syllabus, while Section A of Paper 2 was based on the NST
Mathematics Course A. Two of the questions in Section B of each paper
required knowledge of Course B; they were indicated with asterisks and
placed at the end of the paper. Candidates were to attempt all questions
in Section A (2 marks each) and at most 5 questions from Section B (20
marks each). A total of 120 marks was therefore to be available on each
paper, and the Scientific Computing mark was to be scaled to a maximum
of 20, making a total raw mark of 260.
3 Warnings and AMAs
There was a list of warnings for candidates, which was taken into account in
the marking instructions but had no particular significance for this exam.
Adjusted Modes of Assessment (AMA) applications were approved for two
students, XXXXX XXXXX (X College) and XXXXX XXXXX (X College), and
the marks for these students will be held over to next year by the central
NST office.
4 Conduct of the Exam
The arrangements for this were mostly handled by the Moodle section of the
University IT department. The sorting of the PDF files, and making them
available on SharePoint to the examiners, was handled separately.
5 Checking
Checking was carried out in as close an approximation to the traditional
method as possible given the digital format; however, owing to issues with
editing PDFs, it was not possible for all checkers to mark scripts in green.
See the recommendations below.
6 Scaling
We carried out piecewise linear scaling, fixing the “mark-points” at which
students were awarded 70 percent and 50 percent. This was done in the usual
way, after careful discussion, by choosing the closest points to the 25th and
90th percentiles respectively at which there was a significant gap between
successively ranked candidates. (Although there is no classification this year,
this is the usual procedure, and it ensures, to the extent possible, a gap
between students awarded different classifications (1st/2nd class and 2nd/3rd
class).) After discussion it was agreed, in particular by those examiners with
many years' experience, that the level was very good this year, in spite of the
great difficulties under which students have been working, and so when
choosing between being generous or severe in locating the mark-points (i.e.,
going up or down to find a significant gap) we made the generous choice.
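A minimal sketch of piecewise linear scaling of the kind described, assuming the map is linear between fixed mark-points; the function name, knot placement and percentages below are illustrative, not the Examiners' exact procedure.

```python
def scale_mark(raw, m70, m50, raw_max=260.0):
    """Map a raw mark to a percentage by piecewise linear interpolation.

    The mark-point m50 maps to 50% and m70 to 70%, with linear segments
    joining (0, 0), (m50, 50), (m70, 70) and (raw_max, 100).
    Illustrative only: the Examiners fix m50 and m70 at gaps in the
    ranked list near chosen percentiles.
    """
    knots = [(0.0, 0.0), (m50, 50.0), (m70, 70.0), (raw_max, 100.0)]
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if raw <= x1:
            return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
    return 100.0
```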
7 Conclusions and Recommendations
There are lessons, some useful, from what happened this year and last.
(a) The use of SharePoint in the process of writing and agreeing questions
for the exam paper worked well. Secure computer connections have
been used for some years for the input and storage of marks, and so
are presumably known to be safe from hacking. Using SharePoint in the
process of writing and agreeing the exam questions saves a lot of paper,
and also time and effort from support staff, and could be used in future
whether or not physical meetings are allowed.
(b) A lot of candidates gave their CRSID and/or their name and college
on their uploaded scripts, so the anonymity of marking was compromised.
In general it is worthwhile to impress upon the students the importance
of following instructions throughout the process; the message about
using the blind candidate number should be reinforced and, in particular,
the importance of scans being clear and readable.
(c) It would help to have the candidate number (which is stamped onto the
script by the submission process) slightly larger, as it is hard to read
on a small screen. Also, its position in the left margin often coincides
with punch holes; it might be better at the top or on the right-hand
side. It might also help in script chasing if candidates noted in their
upload which questions they attempted and the number of pages scanned
for each.
(d) The online uploading of scripts should be devised so as to allow only 5
attempts at Section B questions to be uploaded, ideally into question-
specific folders, so as to clarify the process of ensuring that marks are
given to all questions uploaded.
(e) Editing PDFs can be problematic, and certain methods of annotating
them with scores can make the checking process very lengthy (when the
checker has to keep clicking a comment/note to view the partial credit).
To improve efficiency we recommend that the department purchase
licences for a good PDF editor for use by examiners/assessors and
checkers, and that guidelines on how to do this efficiently be provided
by the computer officers. In future it would be useful to have more
specific instructions/methodologies, as well as recommended apps for
checkers.
(f) In general examiners and assessors found the digital process preferable
in many ways; e.g. from one examiner: “I think that online marking
is a huge improvement on physical scripts. Much easier to check and
correct. Much easier to deal with anomalies. When picking up scripts,
checking and bringing back is factored in, online is quicker as well. I
made fewer errors in marking and adding up. I hope this method is
here to stay!” Further comments:
(i) A final folder for checked and corrected scripts would be helpful.
This should include the pdf from the checker with the ticks and
comments from the assessor, to show that all issues have been
covered. (This would make it possible to track each stage of the
process by looking in the appropriate folder, namely: (i) unmarked
scripts, (ii) marked scripts, (iii) scripts for checking, (iv) checked
scripts with the checker's pdf, (v) checked and corrected scripts,
with the checker's pdf and the assessor's colour-coded counter-checks.)
(ii) The checked folder should include a pdf from the checker showing
questions requiring action.
(iii) The split files were VERY helpful, because there was no way to
view or edit files online, so everything had to be downloaded first.
Some assessors suggested that splitting into even smaller files
would be helpful.
(g) Several examiners and assessors are interested in the possibility of view-
ing and editing files online.
(h) A glitch was identified in the program which splits large PDFs of
many scripts into smaller ones, causing a small number of scripts
to go missing. (This only affected examiners who used the split PDFs.)
This is one of several issues which makes clear that the department
should invest significant time and resources over the coming year to
test the various aspects of the exam process well in advance of next
year’s exam period.
(i) The exam was not invigilated, and so the examiners do not have a good
sense of the extent to which students did in fact follow the instruction
to treat it as a closed-book exam without external sources etc. In fact
one examiner noted: “Several scripts made huge leaps from incorrect
mathematics to the correct answer. This wasn't for ‘show-that’
questions but for ‘calculate’ questions, so one might take the view that
a candidate used Wolfram Alpha to get the answer and was unable to
back-fill the working.” It might be worth considering how to address
this issue: either by taking stronger steps to distinguish this exam
from invigilated ones (e.g. by not scaling the marks so they compare to
standard classifications), or alternatively by introducing invigilation
(if future exams are again online).
(j) This was the first year in which assessors were given examiner letters
and access to markbooks, and this greatly streamlined the process of
getting checked marks ready in time.
8 Thanks
We are very grateful to those who assisted with computer and IT issues, to
the undergraduate office, to the NST mathematics committee, and to the NST
management office. In addition we would like to thank the assessors for
working very efficiently in a time-pressured setting, and likewise the
checkers.
Appendix A: Comments on the questions
Section A
P1 Section A
The 10 short questions in section A covered a broad range of basic skills.
Most candidates attempted all of the questions, and answers showed that
candidates were familiar with most of the mathematical methods required.
Full marks of 20/20 were obtained in 1 in 8 scripts assessed. Less than 6
percent obtained marks less than 10/20. Simple arithmetic errors, such as
√2 × √3 = √5 and √(1/2) = 1/4, were common however.
The questions were deliberately intended to be easier this year, with the
aim of producing a higher average than that attained in recent years, and
this was successful:
Attempts: 518; Alpha: 385; Beta: 103; Average: 16.21
1. Nearly all candidates answered this straightforward differentiation ques-
tion correctly. Errors were mostly arithmetic.
2. Nearly all candidates factorised the simple quadratic and correctly de-
duced the required range for negative values.
3. Most found a correct expression for the sin or cos of the angle, but a
common error was not providing both angles in the required range.
4. Nearly all candidates correctly evaluated the sums of the two series,
making use of the formula for the sum of a geometric series.
5. Most candidates correctly found the turning points of the function, but
many either did not provide the corresponding function value or did
not use the range of x values stated in the question for the maximum
and minimum.
6. Most candidates were familiar with the formulae for finding the sin and
cos of angles between two vectors. A few incorrectly used the origin as
the common point.
7. In finding the number of real roots of the cubic polynomial, candidates
used a wide range of methods. Most located the turning points; several
deduced the right answer quickly by just evaluating the cubic at a few
points. A few tried Descartes' rule of signs; a few tried a time-consuming
approach involving complex numbers.
8. Nearly all candidates obtained the right expression for the integral eas-
ily, though some omitted the arbitrary constant.
9. While many candidates answered this quickly using a standard half-
angle formula, quite a few spent substantial effort deriving a formula,
often unsuccessfully.
10. Most candidates answered this well, quickly finding the correct gradient
for the tangent to the ellipse and thus obtaining the equation for the
tangent line.
Question P1 11W [Complex numbers]
General comment: A popular and relatively easy question which attracted
a lot of attempts. Many candidates scored marks in the 17–20 range. The
marking scheme was changed for part b(i) from 2 to 3 marks and for part b(ii)
from 4 to 3, in order to distribute the marks more evenly between b(i) and
b(ii). No significant differences were noticed between the online and
in-person (in previous years) modes of examination.
Typical mistakes:
(a) (i,ii) Easy and straightforward warm-up questions correctly answered
by almost all candidates with some minor arithmetic slips.
(b)
(i) Quite a few candidates showed linear trajectories instead of hy-
perbolic.
(ii) Well answered with a few mistakes in location of start- and end-
points.
(iii) This was the most challenging part of the question. The main
difficulty was in finding the coordinates of the crossing points with
the vertical axis. Many candidates calculated these crossing points
by a non-optimal method which involved first calculating the values
of the parameter t, and then the values of the imaginary part, often
obtained as cumbersome expressions involving irrational numbers.
(c) Mainly well answered by the majority of candidates.
Question P1 12R [Multi-dimensional Integration]
This was a question in two parts, the first of which was to calculate a volume
integral in spherical coordinates. Students were then asked to calculate the
limiting expressions for the function calculated and plot the function. In
general, most students were able to calculate the volume integral, and many
students were able to calculate the limits, but few found the form of those
limits. The plots of the function were typically very good, but with a
significant number of plots which did not represent the function calculated.
The second part of the question involved calculating the volume integral of
a shape in cylindrical coordinates. Students either found this part of the
question relatively straightforward, or struggled to construct the appropriate
bounds on the volume integral.
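As an illustration of setting up a volume integral in spherical coordinates (the actual solid set in the question is not reproduced here), the sketch below integrates the volume element r² sin θ dr dθ dφ over a ball of radius R and recovers the exact volume 4πR³/3:

```python
import math

def ball_volume(R=1.0, n=400):
    """Midpoint-rule integration of the spherical volume element
    r^2 sin(theta) dr dtheta over [0, R] x [0, pi]; the phi integral
    contributes a factor of 2*pi since the integrand is phi-independent."""
    dr, dth = R / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            total += r * r * math.sin(th) * dr * dth
    return 2.0 * math.pi * total

# Compare with the exact volume 4*pi*R^3/3.
```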
Question P1 14 [Multi-dimensional calculus]
There was a good take-up and candidates showed good understanding of
the material. The most common error came in calculating the directional
derivative: not normalising the direction vector (1, 1, 0). A good minority of
scripts found the 1 percent change in r easily without using the differential
(and scored full marks).
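The normalisation point can be made concrete with a small sketch; the field f below is illustrative only, not the one set in the exam:

```python
import math

def grad_f(x, y, z):
    # Illustrative scalar field f(x, y, z) = x**2 * y + z (NOT the exam's):
    # its gradient is (2*x*y, x**2, 1).
    return (2.0 * x * y, x * x, 1.0)

def directional_derivative(point, direction):
    """grad(f) . u, where u is the NORMALISED direction vector;
    forgetting to normalise (1, 1, 0) was the common error noted above."""
    norm = math.sqrt(sum(c * c for c in direction))
    u = tuple(c / norm for c in direction)
    return sum(gi * ui for gi, ui in zip(grad_f(*point), u))

# At (1, 2, 0) along (1, 1, 0): grad f = (4, 1, 1), u = (1, 1, 0)/sqrt(2),
# so the derivative is 5/sqrt(2); the unnormalised dot product would give 5.
```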
Question P1 15V [Taylor Series]
A surprising number of candidates struggled to provide the first n terms of a
Taylor series expansion (part a), instead giving n+1 terms or an incorrect
functional form; and a considerable number of candidates used their
own notation without relating their variables to those in the question. A sig-
nificant proportion also failed to give a correct remainder term, and failed to
appreciate that the point of interest (x, in the question's notation) might be
below, rather than strictly above, the expansion point ('a' in the question's
notation). Parts b, c, and d were tackled well with no signs of misunderstanding
the work required; and all three parts of the question worked well
to reveal candidates' relative strengths (statistics below). Part d revealed
that no more than a handful of candidates had a good understanding of the
radius of convergence. Despite this online exam being closed book, there was
clear evidence in multiple scripts pointing to the use of online mathematical
tools and calculators.
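For reference, "the first n terms plus a remainder" takes the following form (Lagrange remainder assumed here for illustration):

```latex
f(x) = \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!}\,(x-a)^k + R_n,
\qquad
R_n = \frac{f^{(n)}(\xi)}{n!}\,(x-a)^n,
```

where ξ lies strictly between a and x, so that ξ < a whenever the point of interest is below the expansion point.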
Question P1 16T [Probability]
In the first part of the question most students were able to calculate the
mean and the variance for the case of one die, and the mean for the case
of three dice; but there were many wrong answers when calculating the
variance of the latter. In part (b) students used standard probability
techniques and, in general, obtained the correct answers. The last part of
the question presented some difficulties for many students. It was, however,
a straightforward application of Bayes' theorem (which was expected to be
used), requiring only the relatively simple calculation of P(T|F).
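The single-die and three-dice statistics mentioned above can be checked exactly by enumeration (the variances of independent dice add):

```python
from itertools import product
from fractions import Fraction

def mean_var(outcomes):
    """Exact mean and variance of a uniform distribution over `outcomes`."""
    n = len(outcomes)
    mean = Fraction(sum(outcomes), n)
    var = sum((Fraction(x) - mean) ** 2 for x in outcomes) / n
    return mean, var

one_die = list(range(1, 7))
m1, v1 = mean_var(one_die)                    # 7/2 and 35/12
three_dice = [a + b + c for a, b, c in product(one_die, repeat=3)]
m3, v3 = mean_var(three_dice)                 # 21/2 and 3 * 35/12 = 35/4
```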
Question P1 17W [Integration]
General comment: A moderately popular and relatively easy question.
Quite a lot of candidates scored marks in the 17 − 20 range.
(a) The question on the fundamental theorem of calculus was mostly
answered correctly. However, several candidates gave the following
incorrect answers: dF/dx = f(x) − f(a) and dF/dx = f(u).
(b) (i,ii) A question on the integration of 1/x in the negative domain
required the answer in terms of a real-valued function, i.e. ln |x|
(as was emphasised in the formulation of the question).
However, many candidates missed the modulus sign in the answer.
Some candidates did not know how to plot the graph of ln(x).
(c)
(i) The integration of 1/√(x² − 1) in the positive domain of x (x > 1)
did not cause difficulties and many students gave the correct answer
either in terms of a logarithmic or an inverse hyperbolic function.
(ii) This was the most difficult part of the question and many candi-
dates gave an incorrect answer, R(x) = arccosh⁻¹|x|, or the equiv-
alent logarithmic form. In many cases, the graph of R(x) was
sketched correctly in the negative domain (for an incorrect expression
of R(x)) but, unfortunately, with no explanation given.
(d) This was an easy part of the question, answered correctly
by the majority of candidates.
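The point about ln |x| can be verified numerically: a central difference of ln |x| at a negative point recovers 1/x, which ln(x) alone cannot do, being undefined there. A minimal sketch:

```python
import math

def F(x):
    # Real-valued antiderivative of 1/x, valid on both x > 0 and x < 0.
    return math.log(abs(x))

def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# At x = -2 the derivative of ln|x| is 1/x = -0.5.
```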
Question P1 18Z [Matrices]
(a) Only about 25 percent of the students recognised this as an eigenvalue
problem. This topic needs more time and more worked examples in the
lectures and problem sets.
(b) The format AB = BC would have worked better if the matrix B had
been required to be non-singular. Most students found a singular B that
worked, by direct algebraic expansion of AB = BC. A handful spotted
that B = 0 works. As a result, the number of students who obtained 8 marks
for a suitable B matrix and the comment that it was singular, but who
really did not have much of a clue about the theory, was high. I expect
the marks to be on the high side for this question as a result.
(c) Many students falsely said Tr(AB) = Tr(A)Tr(B). This needs
to be covered in lectures.
(d) Some students thought that giving a simple example was enough to
show that something was true.
(e) General presentation of arguments was pretty poor and did cost marks.
It would be useful to give the students a one-hour session with a worked
example handout showing how to set out different types of solutions. I
expect the average mark for this question to be high. Had the matrix B
been required to be invertible, then I think the average mark would have
been low.
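A single counterexample suffices to refute Tr(AB) = Tr(A)Tr(B); the 2×2 identity matrix already does it:

```python
def matmul(A, B):
    """2x2 matrix product on plain nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

I = [[1, 0], [0, 1]]
# Tr(I I) = 2 but Tr(I)Tr(I) = 4, so Tr(AB) = Tr(A)Tr(B) is false in general.
# (By contrast, Tr(AB) = Tr(BA) does hold.)
```

Note the asymmetry with part (d): one example refutes a universal claim, but no number of examples proves one.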
Question P1 19* [Convergence and Limits.]
This question attracted a small number of attempts, but generally of good
quality. The desirability of going back to the definition of differentiability in
terms of existence of the limit of the difference quotient was not universally
understood.
Question P1 20* [Integration. Riemann sums.]
This question attracted a small number of attempts, but generally of good
quality.
P2 Section A
1. This question was attempted by everyone, though many candidates
had difficulties with the algebra and thus obtained wrong results.
2. This question was straightforward for the candidates who knew the
definition. A surprising number, though, gave a scalar as the vector
area.
3. Almost every candidate answered this question correctly.
4. The n = 1 case was largely answered correctly arguing based on sym-
metry, whereas the n = 0 case caused more difficulties.
5. This question was straightforward, and most candidates answered cor-
rectly.
6. Whilst some candidates made a small sign mistake that resulted in the
divergence vanishing, a surprisingly large percentage of candidates gave
the divergence as a vector, applying ∂x to the first component and so
on.
7. Almost every candidate knew that c had to be found by normalisa-
tion, yet many had algebraic difficulties. For the second part, some
candidates approximated the problem using integrals.
8. Apart from a couple of candidates, everyone misunderstood this ques-
tion and gave the n-th term of the series expansion rather than
using the definition of the Maclaurin series. That is, with (1 + x²)⁻¹ =
1 − x² + x⁴ − ..., they wrote the first term as either 1 or x². A minority of
candidates had difficulties with the expansion of the geometric series.
9. Almost every candidate answered this question correctly.
10. Even though this question was straightforward and required almost no
computation, many candidates wrote that e⁰ = e. A few obtained zero
flux and did not question their result.
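The distinction in question 8 can be illustrated numerically: by the Maclaurin definition cₙ = f⁽ⁿ⁾(0)/n!, the coefficient of x² in 1/(1 + x²) is c₂ = −1, matching the x² term of 1 − x² + x⁴ − ... A sketch using a central second difference (illustrative only):

```python
def f(x):
    return 1.0 / (1.0 + x * x)

def second_maclaurin_coeff(h=1e-3):
    """c2 = f''(0)/2! estimated via a central second difference;
    the Maclaurin definition gives c2 = -1 for f(x) = 1/(1 + x**2)."""
    f2 = (f(h) - 2.0 * f(0.0) + f(-h)) / (h * h)   # approximates f''(0) = -2
    return f2 / 2.0
```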
Question P2 11W [Vectors]
General comment: A very popular and relatively easy question which attracted
a lot of answers, many of which scored marks in the 17–20 range.
(a) A very straightforward part which did not cause any difficulties except
for occasional arithmetic mistakes.
(b) An easy part for many candidates. In some attempts, the vector n1 × n2
was not normalised. Some of the candidates approached the problem
by finding the values of the two parameters λ and t by requiring that the
dot products of r2 − r1 with both direction vectors are equal to zero,
which led to the correct answer in some but not all attempts.
(c)i,ii This part was answered correctly by many candidates. When finding
the parameter λ in the expression for d = |a2 + λn2|, a few candidates made
a sign error. Some candidates calculated the projection of a2 onto the
direction vector n2 and thus gave not a general formula for the distance
but a formula for the particular problem, i.e. d = |a2 − (1/3)n2| (full
marks were still awarded for such solutions). In several cases, the projection
of a2 onto n2 was confused with the shortest distance.
(d) This was the most challenging part of the question.
(i) Some candidates found the distance between two points as a function
of two parameters, then minimized the distance with respect to one
of them and obtained the correct formula without using the vector
product (as required by the formulation of the problem); 1 mark was
awarded in such cases. Some candidates evaluated (r2(t) − a1) ·
(n1 × n2)/|n1 × n2| in order to find the required distance, which
was based on the wrong assumption that the shortest distance is
measured along the direction perpendicular to both lines.
(ii) The main difficulty was related to finding the asymptotes. Several
candidates sketched the graph with a square-root behaviour
exhibiting a singularity near the minimum, despite the fact that
they had calculated the derivative at the minimum and demonstrated
that it equals zero.
(iii) This was an easy part, answered correctly by many candidates who
found correct answers for parts d(i) and d(ii).
Question P2 12
There was a good take-up including many excellent scripts.
There were
rather many low marks, below 50 percent: almost always these arose from
stopping after the first two parts of the question. Twenty or more scripts gave
the contour plot correctly without working through the preliminaries, and I
wondered if that might be evidence of widespread cheating: reassuringly,
a couple of the later scripts gave good explanations of how to do this, so
we should congratulate this sizeable minority of the class on their ability to
visualise the function.
Question P2 13S [Vector Calculus]
The question was in general well answered.
Most students were able to
calculate the curl of the given vector field and identify the single value for
which the field is conservative. The calculation of the line integral was meant
to be done directly, but some students opted to use Stokes' theorem, which
was allowed as long as a proper justification of the validity of the theorem
in this case was given. The determination of the scalar field φ in the last
part of the question (for the allowed value of the parameter a) could have
been done simply by inspection, but many students opted for a longer, more
rigorous derivation.
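The structure of such a question can be sketched with a hypothetical field (not the one set in the exam): F = (a x y, x², 0) has curl z-component 2x − ax, which vanishes identically only for a = 2, and then φ = x²y satisfies ∇φ = F:

```python
def F(x, y, z, a=2.0):
    # Hypothetical field, NOT the exam's: F = (a*x*y, x**2, 0).
    # curl F has z-component d(x**2)/dx - d(a*x*y)/dy = 2*x - a*x,
    # which vanishes for all x only when a = 2.
    return (a * x * y, x * x, 0.0)

def phi(x, y, z):
    return x * x * y        # candidate scalar potential for a = 2

def grad(f, p, h=1e-6):
    """Central-difference gradient of a scalar field at point p."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

# grad(phi) reproduces F at sample points, confirming F is conservative
# for the allowed value a = 2 (and the line integral can use phi directly).
```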
Question P2 14V [Probability]
Candidates interpreted parts (a)(i)-(a)(iv) well, provided excellent sketch
graphs, and correctly applied probability techniques with accompanying
explanations. Part (b) was very effective: better candidates indicated an
understanding of the quantity to be computed and were able to express it
mathematically, while weaker candidates acknowledged guessing in their answers
or simply attempted invalid applications of probability tools. No parts of the
question were misinterpreted. Despite this online exam being closed book,
there was clear evidence in a small number of scripts pointing to the use
of online mathematical tools and calculators, especially with part (b).
Question P2 16R
This question consisted of two parts, the first of which was to calculate
a vector field given the curl of a vector, to calculate the divergence of that
vector field, and to recalculate the vector field in cylindrical polar coordinates
having been provided with the expression for the curl in cylindrical polar
coordinates. In general, attempts at this part of the question were good, with
the majority of marks lost to algebraic errors and imprecise notation. The
second part of the question involved the calculation of the integral of the
divergence of a given function over a triangular pyramid. Attempts at this
part were either relatively good, with mainly algebraic errors, or tended to
use the divergence theorem to calculate a surface integral instead. This
second approach was not required and led to a significantly more involved
calculation; while there were some successful attempts, most were less
successful, particularly when calculating the surface integral on the sloping
face of the pyramid. A rubric of part marks for this part of the question
was developed which rewarded approaches using either the suggested volume
integral or the alternative surface integral, with comparable scores
independent of the method attempted.
Question P2 Q17
(ai) Reasonably well done. Most students knew to reverse the order for
transpose and inverse products. Most knew the definition of orthogo-
nal.
(aii) A very small number of students gave good solutions, one or two even
using Einstein summation convention. Most students gave very poor
solutions and clearly did not know how to use suffix notation. (Sometimes
it was hard to believe this had been covered in supervision.)
I recommend that suffix notation be given more attention on the problem
sets. Errors included triples of indices in individual terms, a single
index for a matrix, and not knowing how matrix multiplication matched
up with index position. The index structure in equations was often out of
balance.
(bi) Almost everyone did this correctly.
(bii) Everyone got λ = 2, −2 as the special cases but there were all sorts
of muddled interpretations. Too few students checked for the existence of
solutions in these two cases.
(biii) Most knew a suitable method, but there were plenty of arithmetic er-
rors.
(ci) Majority knew what to do.
(cii) Mostly correct.
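In suffix notation, (AB)ᵢⱼ = AᵢₖBₖⱼ with the repeated index k summed, and the identity ((AB)ᵀ)ᵢⱼ = (AB)ⱼᵢ = AⱼₖBₖᵢ = (Bᵀ)ᵢₖ(Aᵀ)ₖⱼ gives (AB)ᵀ = BᵀAᵀ. This can be checked directly:

```python
def matmul(A, B):
    """(AB)_{ij} = A_{ik} B_{kj}: the summed index k appears exactly twice,
    the discipline that suffix notation enforces."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(M):
    """(M^T)_{ij} = M_{ji}."""
    n = len(M)
    return [[M[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
# transpose(matmul(A, B)) equals matmul(transpose(B), transpose(A)).
```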
Question P2 18S [Fourier series]
This was well done, with the exception of the first part. Many candidates
did not even attempt to write down the first-order conditions for a minimum,
and of those who did, a surprisingly large proportion wrote the condition
in terms of a derivative with respect to x rather than the coefficients aj.
This section was marked sympathetically as a consequence. The remaining
sections were generally well done, although quite a few students were unable
to state Parseval's theorem properly.
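As a reminder of the statement (the exam's function is not reproduced here), Parseval's theorem for f(x) = x on [−π, π], whose Fourier sine coefficients are bₙ = 2(−1)ⁿ⁺¹/n, equates (1/π)∫f² dx and Σ bₙ², both 2π²/3:

```python
import math

# Left side of Parseval: (1/pi) * integral of x**2 over [-pi, pi] = 2*pi**2/3.
lhs = (1.0 / math.pi) * (2.0 * math.pi ** 3 / 3.0)

# Right side: sum of b_n**2 = sum of 4/n**2, truncated at a large N.
rhs = sum((2.0 * (-1) ** (n + 1) / n) ** 2 for n in range(1, 200000))
```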
Question P2 19* [Multivariable calculus, Hessians, Lagrange Mul-
tipliers]
Generally quite well done, except that very few candidates inserted the units
at the end, and a majority were very sloppy about the reasons for excluding
the case α = 0.
Question P2 20W* [PDEs]
General comment: This starred question attracted a moderate but typical
number of attempts. The question was challenging for many, and only about
10 students achieved full marks on this question.
(a) This part, on solving the one-dimensional diffusion equation, did not
require integration for finding the coefficients in the series expansion of the
solution and was relatively easy for many candidates. Several candidates
correctly calculated the x-dependence of the temperature but gave
the final answer in a form containing a dependence on n in the time-
dependent part of the temperature profile, i.e. θ(x, t) ∝ exp(−κ(πn/L)²t),
forgetting that n = 0 and 2. In several scripts, the solution of the equation
sin(αL) = 0 was given in the form α = (π/L)(n + 1/2).
(b) This part of the question was challenging for the majority of candidates.
Quite a few candidates incorrectly assumed the orthogonality of sine
and cosine functions on the interval [0, L]. Many candidates tried to find
the unknown coefficients in the sum of harmonic functions by integrating
over the interval x ∈ [−L, L] without making an odd extension of the
function to the negative domain, which led to an incorrect zero answer.
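For reference, with homogeneous boundary conditions θ(0, t) = θ(L, t) = 0 (assumed here for illustration), separation of variables gives

```latex
\theta(x,t) = \sum_{n=1}^{\infty} b_n \sin\!\left(\frac{n\pi x}{L}\right)
              e^{-\kappa (n\pi/L)^2 t},
\qquad
b_n = \frac{2}{L} \int_0^L \theta(x,0)\,\sin\!\left(\frac{n\pi x}{L}\right) dx,
```

the coefficient formula being equivalent to integrating the odd extension of θ(x, 0) over [−L, L], which is why the extension matters.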
NATURAL SCIENCES TRIPOS
SENIOR EXAMINER’S REPORT
SUBJECT: Mathematics
Senior Examiner:
Examiners:
Structure of the examination:
Written paper/s: 2
Practical components: 0
Number of candidates: 369 NST+116 CST = 485 total of which 11 candidates had
withdrawn from 1 or more papers and/or are non-HON
Number sitting the exam/s outside the main exam hall/s: 44 (Paper 1), 45 (Paper 2)
Conduct of the Examination:
Paper 1 was sat in the Sports Hall from 0900–1200 on Monday, June 13th, with four
examiners present throughout (and the Proctor). One question was raised during the
exam, and one from a student sitting in college in the hour after the exam concluded,
but no actions were required.
Paper 1
Question   A     11Z   12X   13Y   14W   15R   16V   17T   18S   19V*  20T*
Count      476   326   257   300   292   156   239   307   414   31    44
Average    14.9  9.4   14.9  14.3  11.4  9.6   12.7  14.3  13.8  14.4  16.8
Stdev      3.4   4.6   4.2   5.5   5.2   5.2   4.2   5.5   5.0   5.1   3.4
Paper 2 was also sat in the Sports Hall, from 0900–1200 on Wednesday, June 15th,
again with four examiners present throughout (and the Proctor). Two questions were
raised during the exam, but no actions were required.
Paper 2
Question   A     11Z   12W   13Z   14V   15Y   16R   17S   18W   19X*  20T*
Count      475   162   381   236   323   389   88    348   150   175   114
Average    13.6  9.8   11.2  11.2  14.0  16.5  8.9   11.8  9.0   16.4  15.3
Stdev      3.8   6.9   5.1   4.7   5.2   5.2   4.5   4.6   5.3   2.8   3.9
The timetabling of both exams was appropriate and left sufficient time for marking,
provided that the distribution of scripts for marking runs reasonably smoothly.
Marking/Scaling:
The NST 1A Mathematics exam involves a heavy marking load, with 369 candidates from NST and
116 candidates from CST for a total of 485 candidates. The initial round of scanning was
completed impressively quickly, with a turnaround of roughly 27 hours from the end of each
exam. These scanned scripts were uploaded to SharePoint folders for each of the questions in
Paper 1 and 2. Each of the examiners and assessors was therefore able to download and
mark the scripts electronically, marking in the usual fashion for physical scripts by
marking each page, indicating marks awarded throughout, and then totalling the
candidate's score (out of 20) on each script. The marked scripts were then uploaded
to a "Marked and Unchecked Scripts" folder.
Had all the scripts arrived in the first scan, the process would have been seamless.
Instead, following the initial scanning of the scripts, a significant number (~350) of
scripts were subsequently found and processed up to a week after the initial upload.
This caused difficulty in organising and keeping track of the scripts, and meant that
it became increasingly difficult to
ensure that all scripts were properly mark checked. All examiners were requested to have their
marks to the department by 1000 on Monday, 27 June, and at this point the mark book was first
constructed in full. Given the relatively straightforward rubric of the NST 1A exam - each
candidate should complete 6 units, the A section and 5 B questions, on each paper - a manual
check was undertaken for each candidate with less than 6 marked units on each paper. This
necessitated checking for missing questions on dozens of scripts in Paper 1, and over 60 scripts in
Paper 2. Several misplaced question scans were found in this process, including a number of
papers which had not been originally scanned.
Subject Examiners’ Meeting:
The final examiners meeting was held virtually on Thursday, 30 June with all examiners present. A
complete mark book had been compiled, with all questions accounted for (verified by manual
check for all candidates attempting less than 6 units per paper), and all remaining missing scripts
matched with the list of absent or withdrawn papers. The marks in the mark book were
reviewed by all examiners present, and particular attention was paid to the difference
in mark between adjacent candidates near the putative grade boundaries. For this
reason, the upper grade boundary (the First boundary) was set at 204.00/260, so that
25.3% of candidates achieved a mark between 70% and 100%. The lower grade boundary
(the III/Fail boundary) was set at 115.75/260, which resulted in 10.03% of candidates
falling below it.
Due to the number of marks being added to the mark book until very shortly before the final
subject examiners meeting, the examiners agreed that the entire exam should be mark checked
again. The Undergraduate Office kindly coordinated the assignment of additional mark checkers,
who conducted a thorough review of the exam. During this process, an additional 16 changes
were found necessary, and the mark book updated. This did not result in a material change in the
mark distribution so the scalings discussed during the examiners meeting were left unchanged.
Administration:
During and after the marking period a small number of scripts were found to have been
scanned improperly and so were rescanned. A much higher volume of scripts was found
missing during review of the mark book (as discussed above), necessitating a manual
recheck of many of the physical scripts (approximately 60, as discussed above).
Following the examiners meeting, all
scripts were thoroughly mark checked again, with a number of issues raised and dealt with. This
resulted in 16 marks changed in the mark book, but did not require additional rescaling beyond
what had been discussed in the final subject examiners meeting.
The examiners would like to note their very great thanks to the staff in the Maths
Undergraduate Office and to the Computer Officer for their expertise and unflappable
dedication, particularly during the post-exam period.
Conclusions and Recommendations:
The examiners would like to make the following generic recommendation for the examination
process in future years. It was uniformly felt that the advantages of scanning scripts, distributing
them electronically, and marking and mark checking electronically outweighed the disruption
caused this year by issues in the scanning process. However, it is recommended that the scanning
process include a series of manual checks to ensure consistency and accuracy of the scans, to aid
in the initial distribution of scripts for marking. A more detailed series of recommendations will be
sent (shortly) to the Maths Undergraduate Office, and copied to the NST Senior Examiner for
information.
Date: 4 July 2022
(Additional information may be required by Faculty Boards (e.g. question level data); this is not needed by the
Chairman of Examiners but can be included if it is easier to provide one report. Faculty Boards may publish certain
information and may therefore require content to be presented in a particular format.)
Examinersʼ Report
Natural Sciences Tripos, Part IA
Mathematics
2023
Examiners
*replacing
Assessors
Prior to the examination
The Examiners met remotely in late November to review the procedures for setting
questions and to agree the distribution of topics among the eight Examiners. As in
previous years, each of the two papers consisted of Section A (ten short questions to be
attempted by all candidates) and Section B (ten longer questions from which candidates
could choose five to attempt). Two of the Section B questions on each paper were starred
and were intended to be accessible only to students who had attended Course B.
Draft questions were prepared using templates provided by the Faculty of Mathematics
and were stored and exchanged securely using SharePoint. The questions were first
checked for accuracy, length and appropriateness by a paired Examiner. After editing,
they were assembled into draft examination papers and made available to all six
Lecturers and eight Examiners for feedback. Detailed comments on the accuracy and
suitability of the questions were taken into account and the entire draft papers were
carefully scrutinized at Examinersʼ meetings in early February and mid-March. This
process resulted in minor modifications being made to most questions and more
significant changes being made to others, until all Lecturers and Examiners approved the
questions. The final version of the papers was uploaded via Teams in mid-April.
The examination
The examination took place on Monday 12 June (Paper 1, 9:00 am–12:00 pm) and
Wednesday 14 June (Paper 2, 9:00 am–12:00 pm). Most candidates sat the examination
in the Sports Hall of the University Sports Centre. Whiteboards were provided by the
Faculty of Mathematics for the display of any corrections to the papers, but none was
required. Examiners R, S, T and X were present for Paper 1 and Examiners S, V, W, X, Z
for Paper 2. The examination was conducted in a smooth and efficient manner.
During Paper 1, two candidates asked whether printed equations were correct. The
questions were checked carefully and no corrections were required. During Paper 2, one
candidate raised multiple queries about the interpretation of one of the questions. The
responsible Examiner was present and was able to confirm that the question was correct
as set.
The examination had 493 candidates: 353 from Part IA Natural Sciences (NST0) and 140
from Part IA Computer Sciences (CST0). Of these, two candidates withdrew from the
examination papers and four others were recorded as absent from one or both papers.
After the examination
Scripts from the Sports Hall were collected and sent by courier to the Centre for
Mathematical Sciences for sorting and scanning, while those from other venues arrived
later. The main batch of scanned scripts was made available to Examiners and Assessors
within one or two days of each paper being sat. A second and final batch, consisting of
scripts from other venues and some from the Sports Hall, was made available to
Examiners and Assessors early the following week. Some of the questions were allocated
to the Assessors to spread the marking load. The scripts were marked electronically, with
a deadline two weeks after the sitting of Paper 1. Marks were entered into a secure
database prepared by the Faculty of Mathematics. The addition of marks and their entry
into the markbook was verified by mark checkers.
The Examiners found that the scanning process and electronic marking procedure
worked very well. Only a few anomalies were found with the scanned scripts and these
were easily resolved by referring to the raw scans. However, several Examiners were
surprised to find that the second batches of scripts were almost as large as the first
batches, as they included many scripts from the Sports Hall as well as from other venues.
Comparison of the markbook with data from the master cover sheets revealed fourteen
discrepancies, all of which were investigated by the Senior Examiner with the help of the
Computer Officer, the Undergraduate Office Manager and the relevant Examiners and
Assessors. Ten of the discrepancies were accounted for by the Examiner or Assessor
having entered the mark in the wrong row. One was a case of the student misidentifying
the question on the cover sheet. Another resulted from an error in scanning the master
cover sheet. In the remaining two cases it was deduced that the Examiner or Assessor
had entered a mark by mistake, because no corresponding script could be found and the
candidate already had a full set of marks; the spurious marks were removed from the
database. All the discrepancies were resolved satisfactorily. Three candidates had
attempted six questions from Section B on one of the two papers; all the questions were
marked as normal, but in each case the lowest-scoring mark was discarded from the
markbook.
Scripts were missing for two candidates who were known to have withdrawn from the
examination and for four candidates who were recorded as absent from one or both
papers. Apart from this, the final markbook was complete. Each paper produced a raw
mark out of a total of 120. The NST candidates also received a mark for the Scientific
Computing practicals (rescaled to be out of 20), which was included in the markbook.
The marking and assessment boycott did not have a direct impact on this examination.
Final meeting
The Final meeting of Examiners took place on Thursday 29 June and lasted
approximately two hours. All Examiners attended the meeting, either in person or
remotely, via Zoom. They discussed the conduct of the examination, the scanning and
delivery of scripts, the resolution of discrepancies between the markbook and the master
cover sheets, the processing of marks and the withdrawals and absences. They were
provided with a statistical analysis of the results and discussed the performance of the
candidates on individual questions as well as on the examination as a whole.
Owing to a change this year in the procedure for combining marks from different NST
subjects, the Examiners were not required to draw any boundaries or apply any scaling to
the marks. Instead, only the raw marks were provided. The Examiners noted that the
distribution of total raw marks for NST candidates differed from the rough guidelines
suggested by the NST administration; in particular, the percentage of NST candidates
scoring raw marks above 70% was 53.3%, whereas the suggested guideline is only 30%.
It was agreed that this discrepancy is, to some extent, an inevitable feature of a
Mathematics examination.
The Examiners formally approved the raw marks for transmission to the administration of
NST and CST. Owing to the marking and assessment boycott, it was likely that the
combined NST scaled marks and classes would not be available until later in the year.
Since the raw mark distribution in NST Mathematics differed from that expected by NST,
and might give a misleading impression of the class of the result, it was felt important to
provide a cautionary note on the interpretation of the raw marks.
[Figure: Histograms of raw marks (Papers 1 and 2 combined, out of 240), for NST (left) and CST (right) candidates]
[Figure: Normalized histogram of total raw marks (scaled, out of 100) for NST candidates]
Conclusions and recommendations
The Examiners are very grateful to the Mathematics Undergraduate Office, in particular
, and the Mathematics IT Technical Lead,
for their extremely valuable assistance and advice.
The improved system for scanning and electronically marking scripts was found to work
well and was greatly appreciated by the Examiners.
The Examiners
recommend that, after the main batch of scripts is delivered to the
Examiners after each paper, a count of these scripts is carried out, so that the Senior
Examiner can estimate how large the remaining batch of scripts for each paper will be.
This will help the Examiners and Assessors to allocate the time required to complete their
marking by the deadline.
The Examiners
recommend that the arrangements for remunerating non-UTO Examiners
and Assessors are clarified and improved. The current system, which requires Examiners
and Assessors to complete timesheets and to be paid by the hour at a rate significantly
below the rate for undergraduate supervising, was felt to be inappropriate. The
committee greatly valued the essential contributions and experience brought by
non-UTO Examiners and Assessors, some of whom continue in this role from year to
year, and recommends that they are properly rewarded for the intense and stressful
work that they undertake.
The Examiners noted that some candidates did not show their working clearly in answers
to Section B questions, which meant that they did not receive as much partial credit as
they might have done for solutions that led to incorrect answers. They
recommend that
the Lecturers, when presenting example solutions during the lecture courses,
demonstrate how to set out a good answer to an examination question.
The Examiners
recommend that the set of coversheets provided to each candidate in the
Sports Hall is held together by a treasury tag rather than being enclosed in a plastic
wallet. This arrangement is believed to be used in other subjects and to be preferred by
the Invigilators.
Appendix: Comments on questions
Paper 1, Section A [489 attempts, mean mark 16.3]
The 10 short questions in Section A tested a broad range of basic skills. Most candidates attempted all of
the questions, and answers showed that candidates were familiar with most of the mathematical methods
required. Just over 10% obtained full (20) marks, just over 4% obtained 10 marks or less.
Q1: using either combination formulae or Pascal’s triangle most candidates found the correct coefficients
in the polynomial.
Q2: the majority of candidates found a quadratic by factorisation and hence evaluated the remaining
two roots. Arithmetic errors in either phase were commonplace however.
Q3: nearly all candidates rearranged the simple quadratic correctly to find the circle centre and radius.
Q4: most found the two required points correctly by substituting for x or y in the quadratic: errors were
largely arithmetic.
Q5: most found correct expressions for the amplitude and angle, but many then incorrectly included the
angle 7π/6 as well as π/6.
Q6: candidates either used a summation formula or direct addition to sum the series. Common errors
were use of the wrong formula or arithmetic slips in evaluating the formula.
Q7: most found a correct expression for the function derivative. A substantial number then failed to
provide the requested function values as well as the locations of stationary points.
Q8: nearly all answered this straightforward implicit differentiation question correctly, with errors being
largely careless differentiation.
Q9: nearly all correctly evaluated the required integral.
Q10: by far the most difficult question: many candidates did not know the shape of x exp(−x) and hence
misidentified the area to be calculated. The evaluation of the area using integration by parts was tricky, and
only a minority of candidates obtained the correct answer.
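For reference, the integration by parts needed here runs along the following standard lines (a sketch of the antiderivative only; the limits set in the question are not reproduced here):

```latex
\int x e^{-x}\,dx = -x e^{-x} + \int e^{-x}\,dx = -(x+1)e^{-x} + C .
```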
Paper 1, Question 11Y [432 attempts, mean mark 12.7]
[Complex numbers] This was a popular question that proved well constructed, eliciting a
wide range of marks and so discriminating between attempts. Part a) was straightforward, with errors
essentially all arithmetic. Answers to part b) were varied. Some attempts failed because the imaginary
part of a complex number was not treated as real. Most fell down on identifying conic sections even after
finding a suitable form. Part c) was well done, again suffering mostly from arithmetic errors. Part d)
required a bit more work to obtain the marks but was not weighted highly enough to account for this.
Paper 1, Question 12V [94 attempts, mean mark 12.6]
[Multiple integration] This question generated a good distribution of marks around a well-placed average.
Students showed no problems comprehending the description of the solid object in
part (a), and many proceeded to calculate its volume correctly for 7 marks in part (a)(i).
Errors were
largely confined to integration mistakes; fewer than a quarter of attempts failed to formulate a correct
integral. Part (a)(ii) was also well attempted, and follow-on marks were awarded to candidates who correctly
manipulated an incorrect answer from (a)(i). Lost marks here were attributable to careless errors. Part
(b)(i) revealed that many candidates were unable to determine whether the curved faces (proportional to
a square root) are convex or concave and, as such, provided concrete evidence of a stronger or
weaker grasp of the subject. Otherwise, candidates translated the description of the object's shape into a
sketch with impressive fidelity. Many formulated a correct equation for part (ii) but rather fewer evaded
mathematical slips in the working to reach a correct final answer.
This suggests that more practice
during term, or revision, would serve them well.
Paper 1, Question 13Y [411 attempts, mean mark 12.0]
[ODEs] The bulk of this popular question was moderately well done. Common mistakes in part (a) were
forgetting the trivial solution and forgetting the modulus sign. Part (b) was mostly marred by integration
mistakes, but many students seemed to be very well versed in the integrating factor method. In part (c)
many students failed to use the general definition of arcsin and lost many otherwise easily obtainable
marks. The formal definition of arcsin should be more strongly emphasised in the course (especially Course A).
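As a reminder, the integrating factor method mentioned above follows the standard general scheme (not the specific equation set in the question):

```latex
\frac{dy}{dx} + p(x)\,y = q(x)
\;\Longrightarrow\;
\frac{d}{dx}\!\left( y\,e^{\int p\,dx} \right) = q(x)\,e^{\int p\,dx},
\qquad
y = e^{-\int p\,dx}\left( \int q\,e^{\int p\,dx}\,dx + C \right).
```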
Paper 1, Question 14R [218 attempts, mean mark 13.2]
[Differentials/partial derivatives] This question was reasonably well attempted. Part (a) was answered
mostly correctly, with the integrability condition being correctly stated. There were numerical errors
associated with obtaining the constant that ensures the exactness of the differential form. In part (b)
the geometrical figure was mostly correctly identified, but there were numerical errors in obtaining the
differential change in volume. In a small fraction of the cases, differentials were squared in an attempt
to identify the ‘error’. Part (c) was mostly correct, with occasional numerical errors in applying the
chain rule. Part (d) was the least well-answered, with errors in computing the second partial derivatives,
following the change of variables. In many cases, the mixed partials were omitted, leading to an incorrect
derivation of the invariance of the Laplacian. In some attempts, it was explicitly noted that the Laplacian
was invariant under orthogonal transformations.
Paper 1, Question 15S [333 attempts, mean mark 12.5]
[Taylor series] A popular question that looked simple on the page.
(a) Mostly done well by quoting and adapting the series for sinh x. Some candidates tried to evaluate the
Taylor coefficients by differentiating the function repeatedly, but this required too much accurate work
to produce the desired series.
(b) Many candidates were able to quote the series for ln(1 + x) and expand this recursively. Arithmetic
errors or omitted terms often led to mistakes in the third coefficient. (A frequent error was 1/3 + 1/3 +
1/2 = 5/6.) Common errors with quoting the ln(1 + x) series were (i) beginning with a 1, (ii) not having
alternating signs and (iii) having factorials in the denominator. Other candidates worked out the Taylor
coefficients by differentiating the function up to three times.
(c) Many candidates tried (mostly unsuccessfully) to use trigonometric identities or complex exponentials
to manipulate the expression, but the best method was to quote the sin series to three terms and expand
binomially, keeping terms to the required order. Some candidates did this without difficulty. Others tried
to evaluate the Taylor coefficients by differentiating the function repeatedly, but this required too much
accurate work to produce the desired series.
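For reference, the standard Maclaurin expansions quoted in the model approach are the following (the composite functions actually set in the question are not reproduced here); note that the ln(1 + x) series has no leading 1, has alternating signs, and has plain integers rather than factorials in the denominators:

```latex
\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots,
\qquad
\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots \quad (|x| < 1).
```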
Paper 1, Question 16X [257 attempts, mean mark 15.0]
[Probability distribution] The question was very well done (though there was a noticeable tail of weak attempts) and many
candidates achieved full marks. Instead of working from the probability density towards the cumulative
probability, this question was set the other way round: a minority of candidates were not able to manage
that. A very frequent issue was plotting the algebraic functions outside the zero to one range where the
probability lies. Candidates were spared the final calculation on the grounds that the arithmetic might
be heavy: many candidates showed that to be an unnecessary restriction and did indeed calculate the
final answer (16/25, very close indeed to the 68% value for the normal distribution), which would have
made a very good close to the question.
Paper 1, Question 17Z [198 attempts, mean mark 13.2]
[Fourier series] About 10 per cent of candidates did not understand what orthogonality means for functions,
giving answers like 'the product of their gradients is minus 1', or that the integral of their product over
the domain is 1, not zero. A sizeable minority included a possible weight function; this is taught in the
A course but not the B course, so there is no reason to suppose that this topic was neglected by the
Course A students (a risk, as it is the last topic covered by the A course).
Most recalled the equations for the coefficients, but some got the normalization prefactor wrong.
If
they implemented the wrong formula correctly they gained all subsequent marks and were not doubly
penalized.
The calculation for h = x² was well done for 6 marks, but for the g = x part a depressingly large number
of students did not pay attention to the paucity of marks and just mindlessly applied the same procedure
rather than spotting the differentiation trick; these attempts still got full marks but wasted their authors' time. Some tried
to take the square root of the expression for h, and got no marks.
The last part was not in general well answered with students mainly trying to get Parseval’s theorem
to give the result for both parts.
Those that did try to evaluate g at x = 1/2 did not explain clearly
what sin(nπ/2) is (i.e. not noting that it is zero for even n); some even tried to evaluate g(π), which gave
unusable expressions. Others using Parseval tried to set h(x) = 0 and integrate at the
same time.
Paper 1, Question 18T [379 attempts, mean mark 15.4]
[Matrices] In part (a) most of the students got the right linear system for the parameters α, β and γ.
Many students tried to solve the system by exhibiting a single solution. However, they did not show in a systematic
way, using the tools taught in this course, that the system has infinitely many solutions. Students
who did not show that the system has infinitely many solutions lost two marks.
Part (b) was solved by almost all the students. However, some made an arithmetic mistake in part (a)
which led them to the wrong matrix in part (b). They were not penalized in part (b) for any mistake
they had committed in part (a).
Part (c) was solved correctly by almost all the students. Here again they were not penalized for obtaining
the wrong matrix in part (b).
Almost all the students were able to solve part (d). Some who had the wrong matrix from part (b)
and hence the wrong determinant in part (c) struggled to show that the matrix is non-singular for all
values of the parameter α. In such cases students received partial credit for demonstrating the right
approach and idea for solving this part.
Some students realized that the determinant is the product of the eigenvalues and used parts (c) and
(d) to solve part (e). A common mistake was to assume that all the eigenvalues are
real; these students lost one mark for not taking into account the possibility of two complex conjugate eigenvalues.
Some students found the characteristic polynomial, realized that it is a cubic polynomial with real
coefficients and concluded that it must have a real root and hence that the matrix has a real eigenvalue; but
this does not answer the question correctly. Others investigated the cubic polynomial and showed
that it must have a positive root. For this they obtained full marks.
In my opinion the distribution of marks, giving 15 marks for parts (a), (b) and (c), the easy parts,
allowed many students to score high marks on this question. I think more marks should have been
given to part (e), which is slightly more challenging and requires deeper understanding and skill. It is
worth adding that some students skip details in their answers and leave it to the examiner to
second-guess their intention. Students need to be better instructed in how to answer questions in Section
B of this exam; in particular, it is to their own benefit to provide the full details of
their solutions. This is the difference between Section A and Section B, and it allows the examiner to
give adequate partial marks when there is a mistake in some step of the solution.
Paper 1, Question 19W* [89 attempts, mean mark 10.4]
[Series and limits] A moderately popular starred question. The question was split into two parts: convergence
of series, (a)-(b), and evaluation of limits, (c)-(d). Part (a) tested the theoretical aspects, i.e.
knowledge of the ratio test for convergence of series with positive terms, while part (b) tested practical
application of the convergence tests. Similarly, the two remaining parts of the question tested knowledge
of limits of functions.
Typical mistakes:
(a) Many candidates stated that the series converges when L = lim_{k→∞} (u_{k+1}/u_k) < 1, but did not say anything
about L ≥ 1. Some of the candidates did not analyse the case when L = 1.
(b)
(i) This part was answered mainly correctly, by comparison with 1 + Σ_{k=1}^{∞} 1/k² = 1 + π²/6.
However, a few candidates used the ratio test, which is inconclusive in this case.
(ii) This part was answered mainly correctly by applying the ratio or comparison test, but some
candidates were unsure about the behaviour of the series for a = 1, for which comparison with the
harmonic series was used by the majority of candidates.
(c) Not many candidates mentioned how to proceed when the limit of the ratio of derivatives is itself
indeterminate.
(d)
(i) Many candidates successfully applied l'Hôpital's rule to evaluate this limit, but some
students calculated the derivative of the product rather than of the fraction.
(ii) Again, l'Hôpital's rule was applied to find this limit by practically all candidates, but some
of them had difficulties in finding the derivative of x^x.
(iii) This was the most challenging part of the question, and only a few candidates successfully
coped with the analysis of the different ranges of the parameter a.
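For reference, the ratio test as expected in part (a), in its standard form for a series Σu_k with positive terms, including the case many candidates omitted:

```latex
L = \lim_{k\to\infty} \frac{u_{k+1}}{u_k}:
\qquad L < 1 \Rightarrow \text{convergent},
\qquad L > 1 \Rightarrow \text{divergent},
\qquad L = 1 \Rightarrow \text{inconclusive}.
```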
Paper 1, Question 20X* [26 attempts, mean mark 14.6]
[Lagrange multipliers] Fairly easy and straightforward question covering this topic.
Part (a): Answers varied from three or four lines of 'How it works' to up to two pages of 'How and
Why it works'. The students seemed unable to understand what the question wanted here, and I think the
lecturer needs to make clear what is required in a 'How' answer as distinct from a 'Why' answer. This is more a
literacy than a maths issue; the lecturer could help by explaining what the answer should look like for exam
purposes.
Part (b)(ii): Students got into a tangle with the arithmetic. Otherwise the question was quite easy and, as a
result, gave too much credit merely for being able to calculate.
Part (b)(iii): Some forfeited easy marks by ignoring the picture. In general, marks were lost by students ignoring
parts of the question. A tighter marking scheme visible on the exam paper might have helped.
Paper 2, Section A [488 attempts, mean mark 10.9]
Question 1 [set theory]
An easy warm-up question well answered by the majority.
(a) Typically, correct answers with only a few responses such as X = Ω, A.
(b) Similarly to (a), mainly correct answers with a few exceptions like Y = A, A ∪ A, ∅.
Question 2 [cylindrical and spherical polar coordinates]
This question was problematic for many and possibly was the worst answered one.
(a) Many candidates did not pay attention to the quadrant to which θ belongs, and just gave the
answer θ = arctan(4/3) or θ = − arctan(4/3). There were also answers with the fraction inverted, e.g.
θ = arctan(3/4) + π.
(b) The same difficulties arose in identifying the correct quadrants for φ and θ. There were answers with θ
outside the range [0, π], and inverse trigonometric functions of incorrect arguments were used
in some answers, e.g. θ = π − arctan(1/5). Some candidates used the same value of r, either 5 or
√26, in both parts (a) and (b), and a very few even gave negative values for r.
Question 3 [vector triple product]
Mainly answered correctly.
(a) Some candidates gave answers like (a · c)b + (a · b)c, or β = a × c and γ = a × b. In some
answers the correct formula for a × (b × c) was given but the minus sign was missed
in the expression for γ; full marks were awarded in such cases.
(b) Some of the candidates just stated that |a × b| is the length of the vector a × b, and some wrote
that it is the volume of a parallelepiped.
Question 4 [vector area]
The majority of candidates knew the definition of the vector product, but there were a lot of mistakes in
finding the direction and magnitude of the vector area. Some of the candidates gave the answer in the form
of a scalar, and some presented the answer as a 2-d vector. Many candidates used a very inefficient method
for evaluating the vector area, calculating first a unit normal, then the magnitude of the area and
then taking their product, instead of doing this in one go using the vector product. Several gave zero
as the answer because the contour formed a closed loop.
Question 5 [cumulative normal distribution]
Mainly answered correctly. There were several answers displaying a straight horizontal line and some
graphs crossing the origin and going to negative values.
Question 6 [Newton–Raphson]
Mainly answered correctly. Some of the candidates derived the formula for x1.
(a) Quite a few answers had the wrong sign in the expression for x1, i.e. x0 + f(x0)/f′(x0), or the fraction
inverted, x0 − f′(x0)/f(x0).
(b) There were mistakes in adding the numbers in an otherwise correct general expression for x1.
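The iteration being tested can be sketched as follows; this is a minimal illustration of the Newton-Raphson step, and the function and starting point are illustrative choices, not those set in the question.

```python
def newton(x0, f, fp, n=5):
    """Iterate x -> x - f(x)/f'(x) n times from the starting guess x0."""
    x = x0
    for _ in range(n):
        x = x - f(x) / fp(x)  # the Newton-Raphson step giving x1 from x0
    return x

# Illustrative example: the root of f(x) = x^2 - 2 near x = 1, i.e. sqrt(2)
root = newton(1.0, lambda x: x * x - 2.0, lambda x: 2.0 * x)
print(round(root, 6))  # 1.414214
```

Note that the two sign errors reported above (a plus instead of a minus, or the fraction inverted) both destroy the quadratic convergence that makes the method work.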
Question 7 [gradient and directional derivative]
(a) Quite a few candidates gave the correct expression for the gradient but did not evaluate it at the origin.
Some candidates used the definition of divergence and gave the answer for the gradient in scalar form.
(b) A typical mistake here was using a direction vector that was not of unit length. Some candidates did not know
the definition of the directional derivative and just calculated the gradient at the point (1, 1, 0).
Question 8 [double and surface integrals]
(a) Only a few candidates used the symmetry properties. The incorrect (non-zero) answers mainly
resulted from using wrong limits both in polar and Cartesian coordinates.
(b) Many but not all used ∇ × ∇Φ ≡ 0.
Question 9 [suffix notation]
This was a challenge for quite a few candidates, and some of them were unfamiliar with the notation for
the Kronecker delta.
(a) Some of the students did not know how the Kronecker delta works in summation and gave the
answer in matrix form.
(b) Not all candidates realised that δii is the trace of the identity matrix. Some of the students incorrectly
applied the summation over repeated indices.
Question 10 [linear algebra]
This question was answered mainly correctly.
(a) Some of the candidates did not know what to do with three equivalent equations and attempted to
give answers for some particular cases, such as x = 0, y = −z.
(b) Some candidates thought that the locus is a sheaf of planes or just a single point or the whole space.
Paper 2, Question 11T [408 attempts, mean mark 18.2]
[Vectors] Almost all the students solved part (a) correctly. However, a few students made
arithmetic mistakes and lost some marks for that. Other students tried to solve the problem for general
vectors and not for the specific vectors stated in the question; some made slight mistakes for
which they lost some marks. Students should be instructed not to attempt more than what is stated in the
question.
Many students solved part (b) using different approaches. However, some deficiencies were visible in the
mathematical arguments of some of the students. In particular, many students assumed without
justification that the vectors u and v are not collinear. Probably more marks should have been allocated to this
part, which would have allowed stricter marking.
Many students solved part (c) using part (b). However, some of these students used the position vectors
of the points instead of the difference vectors; for this mistake they lost some marks. Other students
solved part (c) by finding the equation of the plane through three points and then noting in (c)(i) that
the fourth point does not satisfy the equation of the underlying plane, while in part (c)(ii) the fourth point
does satisfy it. For this solution the students received full marks.
This problem was relatively easy, and the students seem to be well trained in solving similar problems. In
particular, had more marks been allocated to part (b), a stricter marking
of that part would have been possible. In addition, one part should have been more challenging, to allow
the better students to be distinguished from the rest.
Paper 2, Question 12R [254 attempts, mean mark 11.1]
[Stationary points] This question was well-attempted but not well-answered. The stationary points and
the value of the function at the stationary points were correctly identified in most cases. The plotting,
however, left much to be desired. In most instances, a generic local maximum or a generic saddle point
was plotted, without paying attention to the specific form of the function and its symmetries. Gradient
arrows were not always perpendicular to the contours. In some instances, instead of a contour plot, a
graph was plotted.
Paper 2, Question 13Z [374 attempts, mean mark 15.1]
[Line integrals/conservative vector fields] The majority of students did this very well, with most errors coming
from the last part. A depressingly large minority thought it was legitimate to check whether two different
paths gave the same result in order to decide whether the corresponding vector field was conservative; such
attempts got no marks.
A few students found the potential for F, and a couple even found the potential for G
together with the restriction that b = 0, but the majority took the intended easy path of testing if the curl
was zero everywhere. Two attempts tried to use double integrals for no discernible reason, and produced
pages of utterly irrelevant working, and about 10 students clearly had no idea how to parameterize a line,
nor did they know how to set up a line integral.
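Setting up a line integral of the kind this question required amounts to parameterizing the path as r(t) for t in [0, 1] and integrating F(r(t)) · r′(t) dt; the sketch below illustrates this numerically for an illustrative field and path, not those set in the question.

```python
def line_integral(F, r, rdot, n=10000):
    """Approximate the work integral of F along the path r(t), t in [0, 1],
    using the midpoint rule on the pulled-back integrand F(r(t)) . r'(t)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = r(t)
        fx, fy = F(x, y)
        dx, dy = rdot(t)
        total += (fx * dx + fy * dy) * h
    return total

# Illustrative conservative field F = (y, x) = grad(xy), integrated along the
# straight line from (0, 0) to (1, 1): the result is the potential xy at (1, 1).
F = lambda x, y: (y, x)
r = lambda t: (t, t)          # the parameterization of the path
rdot = lambda t: (1.0, 1.0)   # dr/dt
print(round(line_integral(F, r, rdot), 6))  # 1.0
```

For a conservative field the value is path-independent, which is why agreement along two particular paths (the flawed test reported above) proves nothing: a non-conservative field can also happen to agree on two chosen paths.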
Paper 2, Question 14X [216 attempts, mean mark 15.9]
[Permutations and combinations] Over the years there have not been many questions explicitly aimed at
Permutations and Combinations, and this question may have been too straightforward, leading to a high
average mark, though there was still appreciable discrimination and there were a few disappointing
scripts. Venn diagrams are ideal for explaining the later parts of part (a) but were offered by only a small
minority.
Paper 2, Question 15Y [453 attempts, mean mark 13.0]
[ODEs] An unsurprisingly popular question.
(a) A standard problem of a harmonically forced, undamped oscillator. Many candidates correctly obtained
the solution for k ≠ 1. Some noticed that the solution did not work for k = 1, but relatively few
went on to find the solution in this resonant case. Some candidates, noting the possibility of resonance,
tried only the resonant form of particular integral and then deduced incorrectly that k had to equal 1.
At a more basic level, quite a few candidates were unable to find the complementary function for this
standard equation.
(b) The solution of two coupled first-order ODEs was done well by many candidates. Arithmetic and
algebraic errors were the main source of difficulty.
Paper 2, Question 16V [121 attempts, mean mark 12.9]
[Surface integration] In part (a), the vast majority of students were able to sketch the cube or mathematically
formulate the six normal vectors to the faces. Many students made careless errors in
calculating dot products or substituting the limits of integration, and consequently lost marks. Part (b)
allowed stronger candidates to demonstrate their ability and earn all 7 marks, typically by drawing a de-
cent sketch and observing that the answer is obvious; meanwhile the weakest were unable to demonstrate
an understanding of the purpose of (b)(i) and made no attempt or no progress at (b)(ii). Part (c) was
well handled, often by sketch or observation and sometimes by explicit calculation. Very few candidates
attempted to use the Divergence Theorem (other than as a prudent cross-check on their answers), and the
explicit instruction to '[use] surface integration' was respected.
Paper 2, Question 17T [281 attempts, mean mark 13.5]
[Matrices] The bulk of this popular question was moderately well done.
Part (a) was the most problematic with very many attempts including mathematically wrong operations
and nonsensical statements – perhaps the course (especially A) should have a larger section dedicated to
proofs and more relevant exercises.
Part (b) was better done, with many students remembering that for an orthogonal matrix the three vectors
need to be mutually orthogonal and normalized accordingly. Many students tried to use components for
these two parts – a needless waste of time.
Part (c) was generally well done, with many successful examples of the use of induction in (iii).
Paper 2, Question 18R [228 attempts, mean mark 16.0]
[Integration] This question was too easy and as a result the marks were very polarised. Most students
could do it, a few students had no clue where to start.
This could be solved by a strong A-level student. Some students worked backwards from the solution.
Some students verified rather than derived, but there was nothing in the question to justify denying marks for
this. The model solution gave only one of three different possible equivalent forms for the answer. Overall,
the question was too long a calculation, which favoured those with the confidence to keep going.
Paper 2, Question 19Z* [12 attempts, mean mark 16.5]
[Cauchy-Schwarz inequality] The small number of attempts, with a high mean mark, is very probably a
reflection of the fact that this topic has not been examined since 2008, so only the students who
were very confident on the topic would be brave enough to try. The only slight issues were asserting
that equality required orthogonality of the functions as opposed to linear dependence, and incorrectly
asserting that ω must be equal to 1 rather than merely a constant.
Paper 2, Question 20V* [86 attempts, mean mark 16.2]
[PDEs] This was a reasonably popular question, especially in the context of its reliance on B-course
material.
Students handled all parts of this question competently, which is reflected in the relatively
high average mark. Further to their credit, the students largely applied the ideas of PDEs correctly to all
three parts, and where marks were lost, it was due to failures to manipulate equations or to integrate. A
common mathematical infidelity was the treatment of constants of integration, which most got away with
because the question asked for a (any) concrete solution, not the general form.
Had the general form
been required, it is not obvious from the scripts that some students would have scored as many
marks. More explicit lecture coverage of the treatment of these constants might prove beneficial to the
students.