Coursera Week 3 :: An Early Prediction

So, I’ve finished the work for the third week of the machine learning course I am doing at Coursera. It was much the same as last week: heavy duty numeric analysis with some stats. I won’t go into the grisly details of how logistic regression works for classification problems – you’ll have to enrol yourself for that.
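That said, the core idea is small enough to sketch. Here is a minimal Python/NumPy illustration of the shape of it (the course itself uses Octave, and these weights are made up for the example, not learned from anything):

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1) so it can be read as a probability
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, x):
    # Logistic regression's hypothesis: a weighted sum of features, squashed
    return sigmoid(np.dot(theta, x))

# Made-up weights and a single example; x[0] = 1.0 is the usual bias feature
theta = np.array([-1.0, 2.0, 0.5])
x = np.array([1.0, 0.8, 1.2])

p = predict(theta, x)
print(f"P(y=1 | x) = {p:.3f}")  # classify as 1 if p >= 0.5
```

The grisly part is how those weights get learned – and for that, you really will have to enrol.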

But after three weeks of this and a little extra reading about TensorFlow (from Google Brain, which Andrew Ng of Coursera fame helped to found) and other things, I am going to make a prediction. So far, machine learning is all about making predictions from training examples, so I think I’m OK doing this…

Current techniques for Machine Learning are not going to produce human intelligence

So, it’s only week three – and I could be wrong – but from what I have seen so far and also read in other areas, this is my intuition. These are my two reasons:

1. Too much Maths

Again, I could be wrong, but I don’t think my brain – or the brains of my small children – engage subconsciously in linear algebra, calculus or any other advanced mathematical techniques to make most decisions. Machine learning is currently focussed on heavy maths because computers can be programmed to do maths quickly and easily. Adding numbers is just what computers do and always have done. But trying to make maths do real intelligence is a classic square-peg-in-a-round-hole problem, if you ask me.

2. Too much Training

From what I have seen and heard, quality machine learning algorithms that can solve real-world problems with reasonable accuracy need to process gazillions of example cases in order to generalise and make predictions about data they have not seen before. Again, neither I nor my small kids needed to see anywhere near that much input before we were able to make accurate predictions about the real world. Some people have more intuition than others – which is another way of saying that they need less input information in order to make accurate predictions. It is probably also true that they have some kind of instinctual way of knowing which of their inputs are important to the matter at hand, and which are just fluff. Machines can only figure out what is irrelevant noise by yet more number crunching. The toy sketch below gives a flavour of just how much repetition is involved.
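This is a deliberately silly, made-up example (nothing to do with the course’s data): batch gradient descent grinding through ten thousand points, a hundred times over – a million example-presentations – just to rediscover a rule a child would state in one sentence: “the sum is positive”.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up data: the "true" rule is simply x1 + x2 > 0
rng = np.random.default_rng(0)
n = 10_000                                  # even a toy problem wants thousands of examples
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Batch gradient descent: churn through every example, a hundred times over
theta = np.zeros(2)
alpha = 0.1  # learning rate
for epoch in range(100):
    predictions = sigmoid(X @ theta)        # current guesses for all n examples
    gradient = X.T @ (predictions - y) / n  # average error, per weight
    theta -= alpha * gradient

print(theta)  # drifts towards the (1, 1) direction the rule implies
```

A human glancing at a scatter plot of those points would spot the dividing line in about a second.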

It is no surprise, I reckon, that we have seen advances in machine learning in step with the burgeoning processing power of computers (for the maths) and storage (to hold all those training examples). I am not saying that these techniques are not producing valuable results. I am just saying that they will not lead to real intelligence, the kind of intelligence that “powers” humanity. At best, these techniques can only lead to some kind of “simulated” intelligence, and my gut tells me that this just isn’t good enough. BTW, I plan to write another blog about the Turing test at a later date.

Implications

There are probably more reasons than these two – but they are the ones that come to mind right now. Whatever the full list, the implications carry significant potential for bad outcomes.

The first was articulated in another blog post whose source I can no longer remember. In essence, the author was saying that, since current techniques won’t ever produce true intelligence, we should not focus all of our AI efforts on them. It is a dead-end (albeit possibly quite profitable) street. We need to try some other streets.

The second relates to the picture above. It is of Macbeth and Banquo meeting the three witches. As you may recall, the witches make three predictions to Macbeth about his future. The first two come true very quickly, leading Macbeth to believe that the third must also be for real. In the end, he goes out of his way to fulfil that prediction himself. In the process his wife goes mad, and he himself is eventually killed.

The connection to machine learning?

Most folks (myself included at this stage) don’t really understand how machine learning and artificial intelligence work. And yet we can see that they make a number of pretty good predictions in a number of (currently limited) areas. The temptation is to place more trust in this “intelligence” than is actually warranted.

An episode of the Mozilla IRL podcast about facial recognition told the chilling story of Steven Talley, whose face was wrongly matched to that of a robber in a bank heist. He suffered a whole host of bad outcomes (including a year in prison) as a result. Although it seems that humans initially made the connection between the man in the grainy CCTV footage and Talley, when the FBI put facial recognition technology to work, it came up with a match. Talley thinks they were using it like fingerprints – in other words, if there’s a facial recognition match, it must be him. In the end, Talley was acquitted (twice) and is now suing for $10 million. It doesn’t seem like machine learning and facial recognition are totally to blame here; human intelligence also made a number of mistakes. It is hard to know from the coverage just how biased the FBI were against this man based on the facial recognition match. I guess it will come out in the trial.

The point is that putting too much trust in our current algorithms is likely to lead to “bad things”. Will those outweigh the “good” things? It’s hard to say… but any gains from, say, autonomous driving slashing the road toll would be instantly undone by a machine learning algorithm incorrectly identifying an incoming North Korean nuclear missile and triggering the wrong kind of military response.

The final poor outcome relates to ethics, an area I want to come back to in future blogs. My hunch is that since this sort of machine learning approach can’t lead to true human intelligence, it will also fail to produce genuinely human emotions, and it will be impossible to implement an ethical framework of behaviour on top of it. Compassion, justice, mercy and humour just cannot be encoded into endless matrices of probabilities. And that could also be a big problem for us…

So, those are my predictions at the start of 2018. Only time will tell how correct they were! Now I just need a few more training examples…
