CRF Blog

The Myth of Fingerprints

by David De La Torre

In "The Myth of Fingerprints," Smithsonian Magazine looks at the rise (and apparent decline) of fingerprint evidence in court, along with the ascent of DNA evidence.

The idea of using fingerprints for identification gradually dawned on several different thinkers. One was Henry Faulds, a Scottish physician who was working as a missionary in Japan in the 1870s. One day while sifting through shards of 2,000-year-old pottery, he noticed that the ridge patterns of the potter’s ancient fingerprints were still visible. He began inking prints of his colleagues at the hospital, and noticed they seemed unique. Faulds even used prints to solve a small crime. An employee was stealing alcohol from the hospital and drinking it from a beaker. Faulds located a print left on the glass, matched it to a print he’d taken from a colleague, and, presto, identified the culprit.

How reliable were prints, though? Could a person’s fingerprints change? To find out, Faulds and some students scraped off their fingertip ridges and discovered they grew back in precisely the same patterns. When he tracked children’s development over two years, Faulds found their prints stayed the same. By 1880 he was convinced, and wrote a letter to the journal Nature arguing that prints could be a way for police to deduce identity.

“When bloody finger-marks or impressions on clay, glass, etc., exist,” Faulds wrote, “they may lead to the scientific identification of criminals.”

Other thinkers endorsed and explored the idea, and began trying to devise a way to categorize prints. Sure, fingerprints were great in theory, but they were truly useful only if you could quickly match them to a suspect. [more]