Foreword by John Daugman

The arrival of this Handbook in 2012 suitably marks a number of milestones and
anniversaries for iris recognition. The most breathtaking of these is the fact
that now on a daily basis more than 100 trillion (10^14) iris
comparisons are performed. This juggernaut (a Hindi word, appropriately) was
unleashed by the Indian Government to check for duplicate identities as the
Unique IDentification Authority of India, or UIDAI, enrolls the iris
patterns of all its 1.2 billion citizens within three years. This vastly
ambitious programme requires enrolling about 1 million persons every day,
across 36,000 stations operated by 83 agencies. Its purpose is to issue each
citizen a biometrically provable unique entitlement number (Aadhaar) by which
benefits may be claimed, and social inclusion enhanced; thus the slogan of
UIDAI is: “To give the poor an identity.” With about 200 million persons
enrolled so far, against whom the daily intake of another million must be
compared for de-duplication, the daily number of iris cross-comparisons is
about 10^14, and growing. Similar national projects are also
underway in Indonesia and in several smaller countries.

Also breathtaking (but perhaps mainly just for me personally) is the fact that
this year is only the 20-year anniversary of the first academic paper proposing
an actual method for iris recognition. In August 1992, having recently arrived
at Cambridge University as a Research Fellow, I submitted a paper about the
method to IEEE Transactions on Pattern Analysis and Machine Intelligence
(PAMI) entitled: “High confidence visual recognition of persons by a test of
statistical independence.” The core theoretical idea was that the failure of
a test of independence could be a very strong basis for pattern recognition, if
there is sufficiently high entropy (enough degrees-of-freedom of random
variation) among samples from different classes, as I was able to demonstrate
with a set of 592 iris images. The PAMI paper was published in 1993, shortly
before my corresponding US Patent 5,291,560 was also issued. That original
algorithm was widely licensed through a series of companies (IriScan, Iridian,
Sarnoff, Sensar, LG-Iris, Panasonic, Oki, BI2, IrisGuard, Unisys, Sagem,
Enschede, Securimetrics and L1 now owned by Safran/Morpho). With various
improvements over the years, this algorithm remains today the basis of all
significant public deployments of iris recognition. But academic research on
many aspects of this technology has exploded in recent years. To quote from
the excellent survey chapter by Bowyer, Hollingsworth and Flynn in this book:
during just the three-year period 2008–2010 there were more papers published
about iris recognition than during the entire 15-year period 1992–2007.

The conjecture that perhaps the iris could serve as a fingerprint has a much
longer history, and this year marks the 60-year anniversary of the following
statement in Adler’s classic clinical textbook “Physiology of the Eye”
(Chapter VI, page 143): “In fact, the markings of the iris are so distinctive
that it has been proposed to use photographs as a means of identification,
instead of fingerprints.” Apparently Adler referred to a proposal by the
British ophthalmologist Doggart. In the 1980s two American ophthalmologists,
Flom and Safir, managed to patent Adler’s and Doggart’s conjecture, but they had
no actual algorithm or implementation to perform it, and so the patent remained
conjecture. The roots of the conjecture stretch back even further: in 1892
Alphonse Bertillon documented nuances in “Tableau de l’iris humain”; and
divination of all sorts of things based on iris patterns goes back to ancient
Egypt, Babylonia, and Greece. Iris divination persists today, as “iridology.”

Optical systems for iris image acquisition have enjoyed impressive engineering
advances, generally enabling a more flexible user interface and a more
comfortable distance between camera and subject than the “in-your-face”
experience and the “stop-and-stare” interface of the first cameras. Pioneering
work by Jim Matey and his team at Sarnoff Labs led to the current generation of
systems capturing “iris-at-a-distance” and “iris-on-the-move,” in which capture
volume is nearly a cubic meter and on-the-move means walking at 1 meter/second,
enabling throughput rates of a person per second. There has been a
“long-distance race” to demonstrate the longest stand-off distance, with some
claims extending to the tens of meters. The camera is then essentially a
telescope, but the need to project enough radiant light safely onto the target
to overcome its inverse square-law dilution is a limitation. These
developments bring two wry thoughts to my mind: First, I recall that when I
originally began giving live demonstrations of iris recognition, the capture
volume was perhaps a cubic inch; the hardware was a wooden box containing a
video camera, a video display, a near-infrared light source, and a voice
interface that replayed the name of a person when visually identified. Second,
I read that the Hubble Space Telescope is to be decommissioned, and I wonder
whether we might convert it into the Hubble Iris Camera for the ultimate
“iris-at-a-distance” demonstration…
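
The inverse-square limitation is easy to make precise. As a back-of-envelope
statement in the point-source approximation (an idealisation, since real
illuminators are extended and beam-formed), the irradiance E reaching an eye at
stand-off distance d from an illuminator of radiant intensity I is

\[
E(d) = \frac{I}{d^{2}},
\qquad\text{so}\qquad
\frac{P(d_{2})}{P(d_{1})} = \left(\frac{d_{2}}{d_{1}}\right)^{2}
\]

for the source power P needed to hold E constant: a tenfold increase in
stand-off demands a hundredfold increase in safely projected power, while the
eye-safety exposure limits stay fixed.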

In the first dozen years after the 1993 PAMI paper, it was always very
difficult to persuade leaders of the established biometrics community to take
an interest in the claim that the iris algorithm had extraordinary resistance
against False Matches, as well as enormous matching speed. The encoding of an
iris pattern into a sequence of sign bits enables extremely fast XOR matching:
on a 32-bit machine, 32 parallel bits from each of two IrisCodes can be
compared simultaneously in a single machine instruction, in almost a single
clock cycle at, say, 3 GHz. But even more importantly, the
Bernoulli nature of random bit pair comparisons generates binomial
distributions for the (dis)similarity scores between different eyes. The
binomial distribution (for “imposter” comparisons) is dominated by
combinatorial terms with geometric tails that attenuate extremely rapidly. For
example, if you accept as a match any IrisCode pair for which no more than 32%
of the bits disagree, then the False Match likelihood is about 1 in a million;
but if your criterion is just slightly stricter, say that no more than 28% of
the bits may disagree, then the False Match likelihood is about 1 in a billion
(i.e., reduced by a further thousand-fold as a result of a mere
4-percentile-point [0.04] reduction in threshold). These claims became
contentious in the
year 2000 when the Director of the US “National Biometric Test Center” (NBTC)
in San Jose wrote that in their testing of an iris recognition prototype at
NBTC, many False Matches had been observed. I received copies of all the
images, ran all-against-all cross-comparisons, and sure enough, there were many
apparent False Matches. But when I inspected these putative False Match images
visually, it became clear that they were all in fact True Matches but with
changed identities. The Director of the NBTC later confirmed this and
generously acknowledged: “Clearly we were getting scammed by some of our
student volunteers (at $25 a head, they were changing names and coming through
multiple times).”
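
The two mechanisms just described, XOR matching and the binomial impostor
distribution, can be sketched in a few lines of Python. This is a minimal
illustration and not the deployed algorithm: the 2048-bit code length and the
effective number of degrees of freedom (n_dof below) are assumed parameters,
and the absolute FMR values depend on them (and on details such as the search
over relative rotations).

    import random
    from math import comb

    def fractional_hamming_distance(code_a: int, code_b: int, n_bits: int) -> float:
        # XOR sets exactly the disagreeing bits; bit_count() (Python 3.10+)
        # tallies them a whole machine word at a time underneath.
        return (code_a ^ code_b).bit_count() / n_bits

    def false_match_rate(threshold: float, n_dof: int) -> float:
        # Model an impostor comparison as n_dof fair Bernoulli trials (each
        # bit agrees or disagrees with probability 1/2), so the count of
        # disagreeing bits is binomially distributed; sum the lower tail.
        k_max = int(threshold * n_dof)
        return sum(comb(n_dof, k) for k in range(k_max + 1)) / 2**n_dof

    # Two random codes disagree in about half their bits, as an impostor
    # pair should (2048 bits is purely illustrative here).
    a, b = (random.getrandbits(2048) for _ in range(2))
    print(f"impostor-like HD: {fractional_hamming_distance(a, b, 2048):.3f}")

    # n_dof = 249 is an illustrative assumption.
    for criterion in (0.32, 0.30, 0.28):
        print(f"HD criterion {criterion:.2f}: FMR ~ {false_match_rate(criterion, 249):.1e}")

The absolute values printed track the assumed n_dof, but the geometric
attenuation does not: the FMR collapses by more than a further thousand-fold
between the 0.32 and 0.28 criteria, the pattern described above.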

Another obstacle to confirmation of the extreme resistance of this biometric to
False Matches was the decision in the first large-scale test (ICE 2006:
Iris Challenge Evaluation) to evaluate at a False Match Rate of 1 in a
thousand (FMR = 0.001). In this very non-demanding region of an ROC plot, most
biometrics will appear equally powerful. Indeed, since ROC curves converge into
the corners at either extreme, if one tested at, say, FMR = 0.01, then probably
the length of one’s big toe would seem as discriminating as the iris. The long
tradition of face recognition tests had typically used the FMR = 0.001
benchmark for obvious reasons: face recognition cannot perform at more
demanding FMR levels. Thus the ICE 2006 Report drew the extraordinary
conclusion that face and iris were equally powerful biometrics. Imagine how
well face recognition would hold up in the 100 trillion daily cross-comparisons
done by UIDAI. And if iris were operating at the FMR = 0.001 level, then every
day in UIDAI there would be 100 billion False Matches – a number equal to the
number of stars in our galaxy, or of neurons in the human brain.

A critical feature of iris recognition is that it produces very flat ROC or DET
curves. By threshold adjustment the FMR can be shifted over four or five
orders of magnitude while the FnMR hardly changes. Thus at FMR = 0.001 iris
may appear unremarkable, as in ICE 2006, and so Newton and Phillips (2007)
disputed “the conventional wisdom” that iris was a very powerful biometric.
But hardly any price is paid in iris FnMR when its FMR is shifted by several
log units, to 0.0000001 or smaller, as required for national-scale deployments.
Fortunately, NIST tests subsequent to ICE have taken on board this point about
the likelihood ratio (the slope of the ROC curve) and have pushed iris testing
into the billions of cross-comparisons (IREX-I) and indeed now 1,200 billion
cross-comparisons (IREX-III). IREX-I confirmed (7.3.2) that “there is little
variation in FnMR across the five decades of FMR”, and also confirmed exactly
the exponential decline in FMR with minuscule (percentile point) reductions in
threshold as I had tabulated in earlier papers. IREX-III results (presented by
Patrick Grother in London, October 2011) included a comparison of iris and face
performance using the best face algorithms from 2010 on a database of 1.6
million mugshot face images (compliant with a police mugshot standard), and
also 1.6 million DoD detainee iris images. These NIST tests showed that for
any plausible FnMR target, iris recognition makes 100,000 times fewer False
Matches than face.
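
To see why the curves are so flat, one can place an illustrative genuine-score
distribution alongside the impostor binomial from the sketch above and slide
the threshold. The genuine-side parameters here (a normal with mean 0.11 and
standard deviation 0.065) are assumptions made purely for this sketch, not
figures from the tests cited above.

    from math import comb, erf, sqrt

    def fmr(threshold: float, n_dof: int = 249) -> float:
        # Impostor tail: binomial over n_dof fair Bernoulli trials, as before.
        k_max = int(threshold * n_dof)
        return sum(comb(n_dof, k) for k in range(k_max + 1)) / 2**n_dof

    def fnmr(threshold: float, mu: float = 0.11, sigma: float = 0.065) -> float:
        # Genuine tail: P(score > threshold) under the assumed normal model.
        return 0.5 * (1.0 - erf((threshold - mu) / (sigma * sqrt(2.0))))

    # Sliding the threshold sweeps the FMR across about seven orders of
    # magnitude while the FnMR stays below roughly one percent.
    for t in (0.36, 0.33, 0.30, 0.27):
        print(f"threshold {t:.2f}: FMR ~ {fmr(t):.1e}, FnMR ~ {fnmr(t):.1e}")

Under these assumed parameters, an operating point of FMR = 0.001 sits far out
on the flat part of the curve, which is why a single-operating-point comparison
such as ICE 2006 could not distinguish iris from weaker biometrics.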

I am delighted to see the range of topics included in this Handbook, which
reflects in part the richness of our subject and all the connections it draws
among biology, photonics, optical engineering, security engineering,
mathematics, algorithms, and standardisation. Especially hot current topics
include iris image quality metrics, with the recent NIST report (IREX-II or
IQCE) on quality-performance covariates and their predictive powers across
matchers, and current development of an ISO/IEC Standard (29794-6) for quality.
One area that remains rather unexplored is the role of information theory,
which lies at the heart of our subject since it measures both the complexity of
random variation (the key to biometric collision avoidance), and discriminating
power.

Twenty years is a remarkably short time to get from 0 to 100 trillion iris
comparisons per day. But also, 20 years is perhaps a generation. It feels as
though the real potential of this technology is just beginning to be understood
(as can probably also be said about its limitations). This Handbook – the
first book to be devoted entirely to iris recognition – is full of excellent
contributions from a new generation of researchers. If I have been a
torch-bearer, I am all too happy to “pass the torch” to them while remaining, I
hope, still on the field amidst increasing numbers of colleagues captivated by
the entropy of the eye.

John Daugman
Cambridge, February 2012
