A couple of weeks ago, when the news broke that someone had claimed to have "hacked" iris biometrics by reverse engineering a template into an image of an iris that would be accepted by an iris recognition system, I said: It's not a real biometric modality until someone hacks it.
That's because a hacking claim can generate a lot of media publicity even if it doesn't constitute proof that a technology is fatally flawed. Where's the publicity value of hacking something that nobody uses, anyway? Claims like this can also be taken as a sign that a new technology, iris biometrics in this case, has crossed some sort of adoption and awareness threshold.
So what about the hack? Now that more information is available, and assuming Wired has things about right, "experiment" is a far better descriptor than "hack" for what actually went down. "Hack" would seem to indicate that a system can be manipulated into behaving unexpectedly, and with exploitable consequences, under its real-world operating conditions. Think of picking a lock. A doorknob with a keyhole can be manipulated by tools that aren't the proper key to open a locked door in its normal operating environment.
The method that the researchers relied upon to develop the fake iris from the real template bears no resemblance to the lock-picking example. What the researchers did is known as hill-climbing. In simple terms, it's like playing the children's game Cold-Warm-Hot, but with more detailed feedback. A hill-climbing experiment relies upon the system under test reporting back to the experimenter how close each attempt is. The experimenter presents a sample and the system returns a score (cold, warm, hot). The experimenter refines the sample and hopes the score will improve. Lather, rinse, repeat. A few hundred iterations later, the light turns green.
Technically, you don't even need a stolen sample (template) to start hill-climbing. You could just start feeding the system random data and refining it until you hit upon a combination that the matcher scores as a match.
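The process is simple enough to sketch in a few lines of toy Python. Everything here is an illustrative assumption: a real iris matcher compares encoded iris images rather than a 64-bit vector, and the matcher's "template", threshold, and function names below are made up. The one essential ingredient is the same, though: the system hands back a score the attacker can climb.

```python
import random

random.seed(1)  # for reproducibility of this toy run

# Toy stand-in for a biometric matcher. The enrolled "template" is a
# secret bit string; a real matcher would hold an encoded iris image.
SECRET_TEMPLATE = [random.randint(0, 1) for _ in range(64)]
THRESHOLD = 0.95  # fraction of bits that must agree to be accepted

def match_score(candidate):
    """The 'cold-warm-hot' feedback the experimenter relies on."""
    agree = sum(a == b for a, b in zip(candidate, SECRET_TEMPLATE))
    return agree / len(SECRET_TEMPLATE)

def hill_climb(max_iters=10_000):
    # Start from pure noise -- no stolen template needed.
    candidate = [random.randint(0, 1) for _ in range(64)]
    score = match_score(candidate)
    for _ in range(max_iters):
        if score >= THRESHOLD:
            break  # the light turns green
        trial = candidate[:]
        trial[random.randrange(64)] ^= 1  # tweak one bit of the sample
        trial_score = match_score(trial)
        if trial_score > score:  # keep the change only if it got "warmer"
            candidate, score = trial, trial_score
    return candidate, score

forged, final_score = hill_climb()
print(f"final score: {final_score:.2f}")
```

Note what makes this work: the attacker needs the system to return a graded score, not just an accept/reject decision. A deployment that reports only pass/fail gives the climber nothing to climb.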
This is one of those exercises that is academically interesting but doesn't provide much useful information to system engineers or organization managers. Scientific experiments deal with their subjects by isolating and manipulating one variable at a time. Real world security systems are deployed with careful consideration of the value of what is being protected and a dependence upon all sorts of environmental factors.
A person who wanted to bypass an iris scanner using this method in the real world would:
1. Hack into a biometric database to steal a template of an authorized user; pray templates aren't encrypted
2. Determine which biometric algorithm (which company's technology) generated the template
3. Buy (or steal) that company's software development kit
4. Build and successfully run the hill-climbing routine
5. Print the resulting image using a high quality printer
6. Go to the sensor
7. Place print-out in front of iris scanner
8. Cross fingers
Simple, right? Compared to what?
Once you're talking about hacking into unencrypted biometric template databases (and depending upon your CRUD privileges) almost anything is possible and little of it requires Xeroxing yourself a pair of contact lenses.
Why not just blow away the whole database of iris templates? Problem solved. The scanners, now just locks with no key, would have to be disabled at least temporarily.
If stealth is more your style, just hack into the database, create a credential for yourself by placing your very own iris template in there and dispense with the whole rigmarole of the hill-climbing business. Delete your template (and why not all the others) after the heist.
If your hacking skillz aren't up to the task, you could stalk someone who is already enrolled, armed with a Nikon D4 and a wildlife photography lens, and skip steps one through four (and eight) on the list above.
You could trick, threaten or bribe someone into letting you in.
Break the door or a window.
The elaborateness of the process undertaken by the researchers pretty much proves that the iris sensor isn't going to be the weak link in any real-world security deployment.