How to Write a Source of Error in a Singapore Science Practical (Non-Trivial Answers Only)
14 Apr 2026, 00:00 Z
Want small-group support? Browse our O-Level Physics Tuition hub. Not sure which level to start with? Visit Physics Tuition Singapore.
Looking for the full lab practical series? Visit the O-Level Physics Practicals.
Practical course completion-record note
For practical, lab, and experiment courses, Eclat Institute maintains centre-held attendance records and may also issue an internal attendance or completion document based on participation and internal assessment.
- For SEAB private-candidate declarations, the key evidence is the centre's attendance or completion record, not a government-issued certificate.
- This is an internal centre-issued certificate, not an MOE/SEAB qualification or accreditation.
- Recognition (if any) is determined by the receiving school, institution, or employer.
- For SEAB private candidates taking science practical papers, SEAB states you should either have taken the subject before or attend a practical course and complete it before the practical paper date.
View our sample completion document (current sample layout; design may be refined over time)
> **TL;DR**\
> A valid source of error names a specific, plausible physical factor and states how it affects the measurement and in which direction.\
> "Human error" alone scores zero: it is a category label, not a description of a mechanism.\
> The five templates below -- reaction time, parallax, zero error, calibration drift, end-point uncertainty -- cover roughly 80% of O-Level practical evaluation questions.
Sources of error questions appear in the ACE (Analysis, Conclusion, Evaluation) strand of O-Level Physics (6091), Chemistry (6092), and Biology (6093) practicals. Before working through error technique, make sure you can identify your experiment's independent, dependent, and controlled variables, since the best source-of-error answers grow directly from the CVs you could not perfectly control -- read the companion post on [independent, dependent, and controlled variables](https://eclatinstitute.sg/blog/ip-combined-sciences-lower-sec-notes/Independent-Dependent-Controlled-Variables-Sec-1-Science) for that foundation. When you need a per-experiment reference rather than the underlying technique, jump to the subject-specific banks: [O-Level Physics](https://eclatinstitute.sg/blog/o-level-physics-experiments/Sources-of-Error-O-Level-Physics-Practicals), [O-Level Chemistry](https://eclatinstitute.sg/blog/o-level-chemistry-experiments/Sources-of-Error-O-Level-Chemistry-Practicals), and [O-Level Biology](https://eclatinstitute.sg/blog/o-level-biology-experiments/Sources-of-Error-O-Level-Biology-Practicals). For the full Paper 3 skill map, start at the [O-Level Physics Experiments hub](https://eclatinstitute.sg/blog/o-level-physics-experiments).
---
## 1 | What counts as a valid source of error
An examiner cannot award a mark for a phrase that tells them nothing actionable. The three required elements of a valid source-of-error answer are:
**Element 1: What varies or is uncertain.** Name the specific physical factor, not the category of human mistake. "The length of the pendulum string" is a named factor. "The way I measured it" is not.
**Element 2: How it affects the measurement.** Spell out the causal chain. If the string length was measured from the clamp to the centre of the bob but the student measured to the bottom of the bob instead, the string length is recorded as too long, the period is calculated as too long, and the derived value of $g$ is underestimated. That chain is what earns the mark.
**Element 3: Direction or scatter.** State whether the error makes readings consistently too high (or too low), or whether it causes unpredictable scatter around the true value. This connects directly to the systematic-versus-random taxonomy in the next section.
A fourth element -- a specific improvement -- is almost always asked for in a companion sub-question or as part of an extended ACE sentence. The improvement must be physically achievable in a school lab and must address the mechanism you named, not a generic "be more careful."
**Plausibility matters.** A source of error must be plausible for the specific experiment. Claiming that the mass of a pendulum bob changed during a 30-minute experiment is not plausible. Claiming that the string stretched is not plausible for nylon cord over a short session. Claiming that reaction time introduced scatter in the stopwatch readings is entirely plausible and will be credited.
---
## 2 | The random vs systematic taxonomy
Every source of error in O-Level Science falls into one of two categories. Naming the category is not enough on its own -- you still need elements 1 to 3 from the previous section -- but choosing the wrong category undermines your improvement suggestion and loses a mark.
### Random errors
A random error produces unpredictable scatter. Each reading may be too high or too low by a different amount. Over many trials, random errors tend to average out toward the true value, which is why the standard improvement is repetition and averaging.
| **Characteristic** | **Detail** |
| ----------------------- | ----------------------------------------------------------------- |
| Direction each trial | Sometimes too high, sometimes too low |
| Pattern | No consistent bias |
| Effect on mean | Reduced by more trials |
| Standard improvement | Repeat the measurement and take the mean |
**Examples of random errors:**
- Reaction time when starting and stopping a stopwatch (slightly early or slightly late each time, unpredictably)
- Parallax error when the eye position shifts between readings at a measuring cylinder
- Vibrations in the bench causing small fluctuations in a balance reading
- Difficulty judging the exact end-point of a colour change in titration (some judgements are a drop early, some a drop late)
### Systematic errors
A systematic error shifts every reading in the same direction by approximately the same amount. Because every reading is biased the same way, repeating the experiment does not help -- the mean is still wrong. The standard improvement is calibration, zeroing, or subtracting the known offset.
| **Characteristic** | **Detail** |
| ----------------------- | ----------------------------------------------------------------- |
| Direction each trial | Always the same -- always too high, or always too low |
| Pattern | Consistent bias |
| Effect on mean | Unchanged by more trials |
| Standard improvement | Calibrate against a known standard; record and subtract the offset |
**Examples of systematic errors:**
- A zero error on an ammeter that reads +0.05 A when no current flows (every current reading is 0.05 A too high)
- A thermometer that consistently reads $1.5\ ^\circ\text{C}$ higher than the true temperature
- Reading the meniscus of a burette from above rather than at eye level, consistently making the reading appear lower than the true volume
### Why the distinction matters for your improvement
If you misclassify a systematic error as random and suggest "repeat and average," the improvement mark is lost. Repeating a measurement with a miscalibrated thermometer just gives you more wrong readings clustered around the same wrong value. The correct improvement is to calibrate the thermometer against a known standard (melting ice at $0\ ^\circ\text{C}$, boiling water at $100\ ^\circ\text{C}$) and record the offset.
Conversely, if you misclassify a random error as systematic and suggest calibration, the suggestion does not make physical sense, because calibration cannot fix an error that is inherently unpredictable in direction.
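The averaging behaviour summarised in the two tables can be seen in a minimal Python simulation. All numbers here are illustrative assumptions (a hypothetical pendulum period, an assumed scatter, an assumed fixed offset), not syllabus values:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_PERIOD = 2.00   # s -- hypothetical true pendulum period (illustrative)
SCATTER = 0.05       # s -- assumed spread from reaction time (random error)
OFFSET = 0.10        # s -- assumed fixed bias, e.g. a miscalibrated timer (systematic error)

# Random error only: each trial is off by an unpredictable amount in either direction.
random_trials = [TRUE_PERIOD + random.gauss(0, SCATTER) for _ in range(1000)]

# Systematic error: the same offset biases every trial the same way, on top of the scatter.
systematic_trials = [TRUE_PERIOD + OFFSET + random.gauss(0, SCATTER) for _ in range(1000)]

mean_random = sum(random_trials) / len(random_trials)
mean_systematic = sum(systematic_trials) / len(systematic_trials)

# Averaging pulls the random-error mean toward 2.00 s, but the systematic
# mean stays stuck near 2.10 s no matter how many trials are added.
print(f"mean (random error only):   {mean_random:.3f} s")
print(f"mean (systematic + random): {mean_systematic:.3f} s")
```

Running this shows the first mean landing very close to the true value while the second stays offset by roughly the full bias, which is exactly why "repeat and average" fixes only the first kind of error.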
---
## 3 | Why "human error" alone loses marks
"Human error" is a category, not a description of a mechanism. Every error made by a human experimenter is, in that broad sense, a human error -- including zero errors on instruments, end-point uncertainty in titrations, and parallax at a burette. Saying "human error" tells the examiner nothing that connects to a specific experiment or a specific improvement.
The marking guide for SEAB O-Level practicals uses language such as "name the source specifically" and "state the effect on the measurement." An answer that says only "human error" satisfies neither criterion.
**Invalid:** "A source of error is human error when reading the stopwatch."
**Why invalid:** Does not name the physical mechanism (was it a reaction time lag? parallax on the stopwatch face? a misread digit?), does not state the direction of the error, cannot be linked to an improvement.
**Valid:** "Reaction time in starting and stopping the stopwatch introduces a random error in the measured period. Each individual timing may be slightly too long (if the student is slow to press stop) or slightly too short (if the student anticipates the return). Timing 20 oscillations and dividing by 20 reduces the fractional contribution of this error."
The valid version names the mechanism (reaction time), classifies it (random), explains the direction ambiguity, and links to a specific improvement (20 oscillations).
---
## 4 | The five-template approach
Roughly 80% of O-Level practical evaluation questions on sources of error can be answered using one of five templates. The templates are not answers to copy; they are structures to adapt to the specific experiment by inserting the relevant instrument, measurement, and direction.
### Template 1: Reaction time
**Applies to:** Any experiment that uses a stopwatch -- simple pendulum, simple harmonic motion, rate of reaction, enzyme activity.
**Mechanism:** The experimenter's nervous system and finger introduce a small unpredictable lag between the event and pressing the button.
**Type:** Random
**Direction:** Each reading may be slightly too long (late stop) or slightly too short (early stop). The sign changes unpredictably between trials.
**Model sentence:** "Reaction time when starting and stopping the stopwatch introduces a random error in the measured period. Individual timings may be slightly too long or too short, so the calculated value of $T$ has scatter around the true value."
**Mitigation:** "Time 20 oscillations and divide by 20. The same absolute reaction-time error is spread over a measurement 20 times longer, so the fractional error in $T$ falls to $\frac{1}{20}$ of its single-oscillation value."
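To make the $\frac{1}{20}$ claim concrete, here is a worked example; the $0.2\ \text{s}$ reaction-time uncertainty and $2.0\ \text{s}$ period are illustrative assumptions, not syllabus figures:

```latex
% Illustrative values: \Delta t \approx 0.2 s (reaction time), T = 2.0 s (true period)
\underbrace{\frac{\Delta t}{T} = \frac{0.2}{2.0} = 10\%}_{\text{timing one oscillation}}
\qquad
\underbrace{\frac{\Delta t}{20T} = \frac{0.2}{40} = 0.5\%}_{\text{timing 20 oscillations}}
```

The absolute error $\Delta t$ is unchanged; only the quantity it is compared against grows, which is why this mitigation works for any stopwatch-based timing.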
---
### Template 2: Parallax
**Applies to:** Any experiment reading a scale from a distance -- metre rule, burette, measuring cylinder, thermometer, ammeter, voltmeter.
**Mechanism:** The experimenter's eye is not exactly level with the scale graduation being read. The line of sight passes at an angle through the scale, so the reading appears offset from the true value.
**Type:** Random if the eye position shifts unpredictably; systematic if the student always reads from above or always from below.
**Direction:** Reading from above a concave meniscus (water in a burette) makes the reading appear lower than the true volume -- a negative systematic error. Reading from below makes it appear higher.
**Model sentence (random version):** "Parallax at the metre rule introduces a random error in the measured length because the eye position was not consistently at the level of the graduation being read."
**Model sentence (systematic version):** "Reading the burette from consistently above the meniscus means every volume reading is slightly lower than the true value, introducing a systematic error that makes the titre appear smaller than it actually is."
**Mitigation:** "Position the eye level with the bottom of the meniscus and place a white card behind the burette to improve contrast. This removes the consistent angular offset."
---
### Template 3: Zero error
**Applies to:** Any instrument that should read zero before measurement -- balance, ruler, ammeter, voltmeter, micrometer screw gauge.
**Mechanism:** The instrument has a non-zero reading when no quantity is applied. Every subsequent measurement includes this fixed offset.
**Type:** Systematic
**Direction:** A positive zero error (the instrument reads $+x$ at zero) makes every reading $x$ too high. A negative zero error makes every reading $x$ too low.
**Model sentence:** "A zero error of $+0.05\ \text{A}$ on the ammeter means every current reading is $0.05\ \text{A}$ higher than the true value. The error is systematic and does not reduce with repetition."
**Mitigation:** "Check the zero of the ammeter before connecting the circuit. If a zero error is present, record it and subtract it from every reading."
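Applying the subtraction from the model sentence (the $0.82\ \text{A}$ reading is an illustrative value, not from any marking scheme):

```latex
I_{\text{true}} = I_{\text{measured}} - I_{\text{zero}}
               = 0.82\ \text{A} - (+0.05\ \text{A})
               = 0.77\ \text{A}
```

Note the sign convention: a positive zero error is subtracted, so a negative zero error would be added back.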
---
### Template 4: Calibration drift
**Applies to:** Thermometers, electronic balances, pH meters, and other instruments that can drift over extended use or temperature change.
**Mechanism:** The instrument's internal reference changes over time or temperature, causing the reading to deviate from the true value by a growing offset.
**Type:** Systematic (constant bias at any given moment, but the magnitude may change slowly over the session)
**Direction:** Depends on the direction of drift -- could be consistently high or consistently low.
**Model sentence:** "If the thermometer was not calibrated against a reference before the heating curve experiment, any calibration offset causes every temperature reading to be consistently too high or too low. This is a systematic error that cannot be detected from the data alone."
**Mitigation:** "Calibrate the thermometer against the melting point of ice ($0\ ^\circ\text{C}$) and the boiling point of water ($100\ ^\circ\text{C}$, adjusted for local atmospheric pressure) before the experiment. Record the offset and correct all readings."
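If both reference points are checked, a two-point correction can be applied. This is a sketch under the assumption that the thermometer's response is linear between the two reference points:

```latex
% T_ice and T_steam are what the thermometer actually reads in
% melting ice and boiling water; T_read is the experimental reading
T_{\text{true}} = \frac{T_{\text{read}} - T_{\text{ice}}}
                       {T_{\text{steam}} - T_{\text{ice}}} \times 100\ ^\circ\text{C}
```

For a perfectly calibrated thermometer $T_{\text{ice}} = 0$ and $T_{\text{steam}} = 100$, and the formula collapses to $T_{\text{true}} = T_{\text{read}}$, as it should.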
---
### Template 5: End-point uncertainty
**Applies to:** Acid-base titration, iodine-starch reactions, DCPIP decolorisation, and any experiment where a colour change signals the completion of a process.
**Mechanism:** The colour change at the end-point is gradual, not instantaneous. The experimenter must judge when the colour is permanent. This judgment introduces a small random variation between concordant titrations.
**Type:** Random
**Direction:** Some titrations end one drop early (titre slightly too small), others one drop late (titre slightly too large). The error is typically within one drop ($0.05\ \text{cm}^3$) in each direction.
**Model sentence:** "End-point uncertainty in the acid-base titration introduces a random error of up to $0.05\ \text{cm}^3$ per titration. Some runs end one drop early and some one drop late, causing the individual titres to scatter around the true equivalence point."
**Mitigation:** "Approach the end-point dropwise and swirl after each drop. Use concordant titrations (within $0.10\ \text{cm}^3$ of each other) and take the mean of concordant results only."
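The concordant-results rule can be sketched in a few lines of Python. The titre values below are hypothetical, the $0.10\ \text{cm}^3$ window follows the convention in the mitigation above, and "agrees with at least one other titre" is a simplification of how concordance is judged in practice:

```python
# Hypothetical titres in cm^3; the first is a deliberately rough range-finding run.
titres = [25.80, 25.05, 25.10, 25.35, 25.15]

# Keep only titres that agree with at least one other titre to within 0.10 cm^3.
concordant = [
    t for i, t in enumerate(titres)
    if any(abs(t - titres[j]) <= 0.10 for j in range(len(titres)) if j != i)
]

mean_titre = sum(concordant) / len(concordant)
print(f"concordant titres: {concordant}")   # [25.05, 25.10, 25.15]
print(f"mean titre: {mean_titre:.2f} cm^3") # 25.10
```

The rough first run and the outlying $25.35\ \text{cm}^3$ are discarded before averaging, which is the whole point of the rule: the mean is taken over readings that already agree, not over everything recorded.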
---
## 5 | Non-trivial examples per experiment
For each of the five experiments below, one valid and one invalid answer are shown. The invalid answers represent the most common exam responses that score zero.
### Simple pendulum
**Valid:** Reaction time when starting and stopping the stopwatch introduces a random error in the measured period. Individual timings are sometimes slightly too long and sometimes too short. Timing 20 oscillations and dividing by 20 reduces the fractional error.
**Invalid:** "The string may have stretched." A nylon string in a 30-minute school practical does not measurably stretch. Claiming an implausible error loses credibility with the examiner and the mark.
---
### Acid-base titration
**Valid:** End-point uncertainty within $\pm 1$ drop ($\pm 0.05\ \text{cm}^3$) introduces a random error in the measured titre. Some runs end one drop early, others one drop late. Using concordant titrations (three results within $0.10\ \text{cm}^3$ of each other) and averaging reduces the random scatter.
**Invalid:** "The chemicals may have been impure." Possible in principle, but not measurable or verifiable in a school practical context. A good source-of-error answer must be specific enough to link to a quantifiable effect and an achievable improvement. Vague contamination claims do not qualify.
---
### Resistance of a wire (circuit)
**Valid:** Poor electrical contact at the crocodile clip connections introduces additional and variable resistance into the circuit. This makes the measured current slightly lower than the true value for the wire alone, causing the calculated resistance to be slightly higher than the true resistance of the wire. The error is partly systematic (the contact resistance is present each time) and partly random (it varies as the clip shifts).
**Invalid:** "The battery may have been weak." Battery voltage sag is real but typically negligible over the timescale of a school resistance experiment. Unless the question specifically prompts you to consider power supply variation, this is an unlikely and unverifiable source in context.
---
### Density by water displacement
**Valid:** Air bubbles trapped on the surface of the object add to the volume of water displaced, making the measured volume higher than the true volume of the object. This causes the calculated density to be lower than the true density. The effect is systematic if the object has a consistently rough or porous surface.
**Invalid:** "The object may have been wet." The mass of a water film on a small object is typically less than 0.01 g, negligible against the uncertainty in the balance reading. Citing negligible effects wastes words and signals that the student is guessing.
---
### Heating curve (naphthalene or stearic acid)
**Valid:** Heat loss from the beaker to the surrounding air is continuous throughout the experiment. This reduces the rate at which heat is added to the substance and makes the temperature rise more slowly than in an ideally insulated system. The effect is systematic and causes the plateau temperature (melting or freezing point) to appear lower than the thermodynamic value.
**Invalid:** "The heater might have been uneven." Without specifying which aspect of the heater -- power output, contact area, or temperature distribution -- this is too vague to link to a direction or an improvement. Expand it to name the mechanism, for example: "If the heater coil was not fully submerged in the liquid, the rate of heat transfer varied as the liquid level changed."
---
## 6 | How to write a one-sentence mitigation
The improvement sub-question follows the source-of-error question in almost every ACE marking scheme. A strong mitigation sentence has three parts:
**Structure:** "[Name of error] can be reduced by [specific action] because [mechanism by which it helps]."
**Worked example (reaction time):** "Reaction time can be reduced by timing 20 oscillations instead of 1 and dividing by 20, because the same absolute reaction-time error is spread over a measurement 20 times longer, so the fractional error in the measured period falls to $\frac{1}{20}$ of its single-oscillation value and the calculated period is far more precise."
**Worked example (parallax at a burette):** "Parallax can be eliminated by positioning the eye level with the bottom of the meniscus before each reading, because this removes the angular offset between the line of sight and the scale."
**Worked example (zero error on a balance):** "The zero error can be eliminated by pressing the tare button with the empty container on the balance before adding the substance, or by recording the zero error and subtracting it from every mass reading."
**What to avoid:**
- "Be more careful." This does not name an action or a mechanism.
- "Repeat the experiment." Valid for random errors only, and only if you also specify "and take the mean of the results." Repeating without averaging changes nothing.
- "Use better equipment." Too vague. Name the specific instrument property you need (higher resolution, calibrated against a standard, digital rather than analogue).
A useful test: could a student who has never done this experiment read your mitigation and know exactly what to do differently? If not, the improvement is not specific enough.
---
## 7 | The variables-to-error bridge
The fastest way to generate a non-trivial source of error for any experiment is to work from your list of controlled variables. Once you have identified the CVs -- see the [independent, dependent, and controlled variables companion post](https://eclatinstitute.sg/blog/ip-combined-sciences-lower-sec-notes/Independent-Dependent-Controlled-Variables-Sec-1-Science) -- ask which of them you could not perfectly hold constant. Temperature of the surroundings, for example, is listed as a CV in many experiments but in practice rises or falls slowly over a session. That imperfect control is the source of error. Naming it, stating the direction of its effect on the DV, and classifying it as systematic or random is the complete answer. This approach works across all three O-Level sciences because it is grounded in the structure of the experiment rather than in memorised lists.
---
For a per-experiment reference once you have this technique, use the subject-specific banks: [O-Level Physics sources of error](https://eclatinstitute.sg/blog/o-level-physics-experiments/Sources-of-Error-O-Level-Physics-Practicals), [O-Level Chemistry sources of error](https://eclatinstitute.sg/blog/o-level-chemistry-experiments/Sources-of-Error-O-Level-Chemistry-Practicals), and [O-Level Biology sources of error](https://eclatinstitute.sg/blog/o-level-biology-experiments/Sources-of-Error-O-Level-Biology-Practicals). To revisit the variables foundation, return to the [independent, dependent, and controlled variables post](https://eclatinstitute.sg/blog/ip-combined-sciences-lower-sec-notes/Independent-Dependent-Controlled-Variables-Sec-1-Science). For the full practical skill map and companion procedure posts, visit the [O-Level Physics Experiments hub](https://eclatinstitute.sg/blog/o-level-physics-experiments).




