The wood and trees of AI
Artificial Intelligence (AI) is the big thing in technology marketing in 2018. The internet giants are battling it out to become the kingpins in the field, and every new product announcement has to have AI in it somewhere, even if most people totally misunderstand what AI is and how it works.
If we take, say, pictures of letters of the alphabet, we can write a program which breaks the images of letters down into rudimentary features, such as the lengths and directions of the major lines and curves making up the image. We humans know that two long lines converging at the top and a horizontal line joining them represent the letter A, and early attempts at character recognition involved humans writing long series of rules to describe the key features of each letter. This approach is known as an Expert System.
A more sophisticated expert system might have rules which say that a symbol where the lines join at an angle at the top is likely to be an A, but it could also be a skewed E, F, or Z. Likewise a symbol with a crossbar connecting two upright lines is likely to be an A or an H. An expert in handwriting would apply weightings to each possibility to determine the most likely answer, which could then be implemented in a computer program. This is what we humans do in real life. When we struggle to read someone's handwriting, we work through the possibilities in our head and make an educated guess, but whilst you and I might make the same educated guess, our internal reasoning to arrive at that guess might be wildly different. We each extrapolate from our own unique experience.
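The weighting idea can be sketched in a few lines of code. This is a toy illustration only: the feature names, candidate letters, and weight values are all invented for the example, not taken from any real handwriting system.

```python
# Toy expert system: hand-written weights map observed stroke
# features to candidate letters. All the numbers are invented.
RULES = {
    "lines_meet_at_top": {"A": 0.8, "E": 0.1, "F": 0.1, "Z": 0.1},
    "horizontal_crossbar": {"A": 0.6, "H": 0.6, "E": 0.3},
    "two_uprights": {"H": 0.7, "A": 0.4, "N": 0.5},
}

def classify(features):
    """Sum the weights of every matching rule and return the
    letter with the highest total score."""
    scores = {}
    for feature in features:
        for letter, weight in RULES.get(feature, {}).items():
            scores[letter] = scores.get(letter, 0.0) + weight
    return max(scores, key=scores.get) if scores else None

# An angled top plus a crossbar scores highest for "A".
print(classify(["lines_meet_at_top", "horizontal_crossbar"]))  # A
```

The crucial point is that every weight in that table was typed in by a human expert, which is exactly the approach machine learning replaces.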
Machine Learning, the technique behind most of what is currently marketed as AI, is a development of the expert system concept. It uses a particular type of statistical analysis which was invented back in the 1980s, but it is only in the last few years that we have had sufficient hardware power to make it work at scale. With machine learning, instead of human experts trying to express their own knowledge as computer rules, the approach is to throw massive amounts of previously classified data at a program, such as all the different pictures of letters that we can find. We identify the letters for the AI, and allow it to identify all the lines and curves in each image, then use mathematics and computer power to work out the common correlations and calculate the optimal weightings of each potential rule.
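That "calculate the optimal weightings" step can be sketched with a tiny perceptron, one of the simplest learning algorithms of this family. The feature vectors and labels below are invented for illustration; a real system would learn from raw image pixels with far more data and a far bigger model.

```python
# Toy illustration of learning weights from labelled examples
# instead of writing them by hand. Each invented feature vector is
# [lines_meet_at_top, horizontal_crossbar, closed_loop], and the
# label is 1 for "this is an A", 0 otherwise.
training_data = [
    ([1, 1, 0], 1),  # an "A": angled top plus crossbar
    ([1, 1, 0], 1),
    ([0, 1, 0], 0),  # an "H": crossbar but a flat top
    ([0, 0, 1], 0),  # an "O": just a closed loop
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(x):
    """Score the features against the current weights."""
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

# Perceptron rule: nudge the weights whenever an example is wrong.
for _ in range(10):
    for x, label in training_data:
        error = label - predict(x)
        for i in range(len(weights)):
            weights[i] += 0.1 * error * x[i]
        bias += 0.1 * error

print(predict([1, 1, 0]))  # 1: classified as an "A"
print(predict([0, 0, 1]))  # 0: not an "A"
```

No human ever told the program which features matter for an A; it discovered the weightings itself from the labelled examples, which is the whole trick, and also the whole weakness, of the approach.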
It mimics what we do in real life, but does it in a more formal and defined fashion. In many cases, the results are undeniably good, but just as with people, it is not necessarily obvious to us how the program is arriving at its decisions, or whether the logic is sound. Ultimately, the program is only as good as the data used in its training phase.
It might be, for instance, that in our training data, we never thought to include a capital A written with a rounded top, and so our AI algorithms incorrectly identify such renderings as the letter Q or S. Or perhaps we had never included the @ sign in our data, so the AI interprets it as a letter G or a lower case D. We would, of course, very quickly spot such trivial problems and feed more training data back into the system, but that example illustrates the limitations of what we call AI. It is not true intelligence, it cannot "think", and it doesn't "understand" things in the way we do.
AI works well because we can now feed it significant volumes of data and it can be the obedient drudge. The AI techniques are being used to solve some amazingly complex problems, such as voice recognition, language translation, medical x-ray checking, and autonomous driving, but the ever-growing complexity of such applications also means we struggle more and more to know how the AI arrives at its results. Unfortunately, we also have a tendency to blindly accept that it must be correct, because it's a computer.
The social implications of using AI
This blind trust in computing has huge social implications, especially when you consider that governments are quick to use technology in ways that affect our daily lives. The EU, for example, is investing millions in developing an AI system for use at its borders to evaluate immigrant status.
Suppose you go to the bank for a loan, the clerk loads your file into an AI-based assessment program, and the program rejects you, yet you don't know why. Is it because you are a genuine credit risk, or because of a complex combination of skin colour, gender, religion, and type of pet, which has nothing to do with your creditworthiness, and is a result of poor and incomplete training data? If no-one understands exactly how the AI model is arriving at its decision, the bank clerk can only blink at you and say "Sorry, computer says No, and computers never make mistakes".
Checking the logic within the AI
Researchers at the University of Washington have been looking at tools to better understand the way AI classifiers reach the decisions they do, and one particularly revealing example has been using AI to distinguish between images of huskies and wolves. Whilst the AI was achieving good results, it was only when the internal logic of the classifier was deconstructed that it was discovered that a key feature used in distinguishing the two was not the shape of the muzzle or the size of the teeth, but the amount of snow visible in the background of the photo.
AI failures in the real world
AI failures are happening to real people too. In the Chinese city of Ningbo, police take jay-walking seriously. An AI system is used to analyse the feeds from the city's CCTV cameras and detect instances of people crossing roads when the traffic lights are still green. The AI captures the faces of offenders and posts the images to a shaming gallery in the city centre. It came as some surprise to officials to discover that one repeat offender was Dong Mingzhu, one of the most important and influential businesswomen in China. It transpired that Dong's photo was being used in an advert on the side of a bus, and the automated AI was detecting it as a person jaywalking in the midst of moving traffic.
AI in the surveillance society
The Chinese authorities are making extensive use of an AI system called Intellifusion Deep Eye and this video shows some of the positives that can come out of automated surveillance and facial recognition.
Most people would welcome systems which work so well in preventing and solving crimes, provided they are used as a supplement to intelligent human work, and not as an autonomous judge and jury where you must be guilty because the computer says so and computers never make mistakes. But we need a note of caution too.
This degree of technology, with its ability to process massive amounts of data and identify behaviour patterns, is worrying because it is a dangerous tool to put into the hands of oppressive regimes. There are far too many regimes which openly persecute people for being of the wrong ethnicity, religion, sexual orientation, or political persuasion, and tools such as this make it trivial to map out someone's movements and contacts, to gather evidence of deviancy or dissent, and to make others guilty by association.
30th November 2018
This article comes from the SKILLZONE email newsletter, published monthly since January 2008, and covering topics related to technology and the internet. All articles and artwork in the SKILLZONE newsletter are original content. If you would like to receive the newsletter direct to your inbox each month, please SUBSCRIBE here. It is free, and you don't get added to any other mailing lists. It uses best-practice confirmed opt-in only, and you may unsubscribe at any time.