Law enforcement officers have used field sobriety testing for decades to determine whether to charge a motorist with driving while intoxicated. However, the reliability of these tests is worth investigating because an officer’s decision significantly impacts a driver’s life.
Before accepting a police officer’s interpretation of field sobriety test results, drivers should understand how these tests work and how accurate they really are.
The origin of field sobriety tests
In 1975, the National Highway Traffic Safety Administration commissioned the Southern California Research Institute to determine the most accurate roadside field sobriety tests. The SCRI settled on three: the horizontal gaze nystagmus (HGN), the walk-and-turn and the one-leg stand.
However, the HGN test was accurate only three out of four times, and the other two could not crack 70%. Combining the tests raised accuracy to about 80%. Even so, that figure means roughly two out of every 10 drivers could be wrongly judged intoxicated and charged with DWI. These odds are hardly encouraging to a detained motorist.
Other questions about field sobriety tests
Others have raised credible concerns about these tests. For example, many of the test subjects had an abnormally high blood-alcohol content, so the researchers gathered little evidence about how a person with a lower level would perform.
The group did not even measure how people with no alcohol in their system performed, meaning the essential control group was missing. Later studies that did include sober subjects showed that officers had difficulty telling them apart from impaired drivers. Additionally, these studies have never received independent peer review.
Officers often use field sobriety tests to prove intoxication in borderline cases, even though the researchers never designed the tests for that purpose. This lack of reliability gives defendants several avenues for legal challenges at trial.