Monday, July 2, 2012

A Visionary's Perspective

The Chartered Institute for IT has published a wide-ranging interview, Getting a facial, with Professor Maja Pantic of Imperial College London.

Prof. Pantic has been working on automatic facial behaviour analysis. If successful, this type of research could revolutionize the way humans interact with technologies devoted to security, entertainment, health, and the control of local physical environments in homes and offices.

The interview is long and worth reading in its entirety.

I would, however, like to point out two passages that have great bearing on some of the themes we discuss regularly here.

Why computer science?
But with computers, it was something completely new; we just couldn’t predict where it would go. And we still don’t really know where it will go! At the time I started studying it was 1988 - it was the time before the internet - but I did like to play computer games and that was one of the reasons, for sure, that I looked into it. [ed. Emphasis added]

You never know where a new technology will lead, and those who fixate on a technology as a thing in itself are missing something important. Technology only has meaning in what people do with it. The people who created the internet weren't trying to kill the record labels, revolutionize the banking industry, globalize the world market for fraud, or destroy the Mom & Pop retail sector while passing the savings on to you. The internet, much less its creators, didn't do those things. The people it empowered did.


Technologies empower people. Successful technologies tend to empower people to improve things. If a technology doesn't lead to improvement, in the vast majority of cases it will fail to catch on or fall into disuse. In the slim minority of remaining cases (a successful "bad" technology), people tend to agree not to produce it, or to place extreme conditions on its production and use, e.g. chemical and biological weapons, or CFCs. There really aren't many "bad" technologies that people actually have to worry about.


It makes far more sense to worry about people using technologies that are, on balance, "good" to do bad things — a lesson the anti-biometrics crowd should internalize. Moreover, you don't need high technology to do terrible things. The most terrible things that people have ever done to other people didn't require a whole lot of technology. They just required people who wanted to do them.


The interview also contains this passage on the working relationship between people and IT...

The detection software allows us to try to predict how atypical the behaviour is of a particular person. This may be due to nervousness or it may be due to an attempt to cover something up.

It’s very pretentious to say we will have vision-based deception detection software, but what we can show are the first signs of atypical or nervous behaviour. The human observer who is monitoring a person can see their scores and review their case. It’s more of an aid to the human observer rather than a clear-cut deception detector. That’s the whole security part.

There’s a lot of human / computer interaction involved.

It's not the tech; it's the people.


Technology like biometrics or behavioural analysis isn't a robot overlord created to boss around people such as security staff. It's a tool designed to inform their trained human judgement. This bears on issues like planning for exceptions to the security rule: lost IDs, missing biometrics, etc. Technology can't be held responsible for anything. It can help people become more efficient and inform their judgement, but it can't do a job by itself.
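To make that division of labour concrete, here is a minimal sketch of how such a system might route cases to people rather than decide anything itself. It is only an illustration: the function names, queue names, and the 0.7 threshold are all assumptions for the example, not anything described in the interview.

```python
# A minimal sketch of "aid, not oracle": the software only scores
# behaviour and routes cases; every verdict is left to a trained
# human observer. All names and thresholds are hypothetical and
# are not taken from the interview.

REVIEW_THRESHOLD = 0.7  # assumed operator-chosen tuning parameter


def route_subject(subject_id: str,
                  atypicality_score: float,
                  has_biometric_record: bool) -> str:
    """Return a queue name for a human workflow, never an automated verdict."""
    if not has_biometric_record:
        # Exception to the security rule (lost ID, missing biometrics):
        # fall back to a human-run process instead of rejecting outright.
        return "manual_enrollment_desk"
    if atypicality_score >= REVIEW_THRESHOLD:
        # A high score only signals atypical or nervous behaviour, which
        # may be innocent; flag the case for review, don't call it deception.
        return "observer_review_queue"
    return "normal_flow"


if __name__ == "__main__":
    print(route_subject("A123", 0.82, True))   # observer_review_queue
    print(route_subject("B456", 0.10, False))  # manual_enrollment_desk
```

Note that nothing in the sketch issues a judgement; the most the software can do is put a case in front of a person, which is exactly the "aid to the human observer" role Prof. Pantic describes.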


Back to Three Sides of the Same Coin