Trust issues – and a pile of money

A few weeks ago I wrote a post about the trickiness of understanding exactly what is going on inside of a neural network.

By the time I hit “post”, I had edited out the part that said I understood the principle of how they worked, so it looked like I just didn’t get it at all – which wasn’t entirely true. The post attracted some helpful comments explaining the concept in different ways, which helped to a degree, but didn’t really address the underlying greyness for me (and, I suspect, for my course lecturer on Coursera) about what exactly the weights within the hidden layers of a neural network represent. When every number is connected to every other number in so many ways, even in a very simple network, it’s hard to see intuitively what any particular weight might “mean”, or how tweaking it would affect the final output – at least compared with other algorithms.
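To make that concrete, here’s a toy example (mine, not from the course). Even a network this small mixes every input into every hidden unit, so no individual weight carries a standalone meaning, and nudging one of them only changes the output through everything downstream of it.

```python
# A minimal sketch: a tiny fully-connected network in NumPy with made-up
# random weights, just to show why single weights are hard to interpret.
import numpy as np

rng = np.random.default_rng(0)

# 3 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(3, 4))   # 12 input-to-hidden weights
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1))   # 4 hidden-to-output weights
b2 = rng.normal(size=1)

def predict(x):
    hidden = np.tanh(x @ W1 + b1)      # each hidden unit mixes all inputs
    return (hidden @ W2 + b2).item()   # the output mixes all hidden units

x = np.array([0.2, -1.0, 0.5])
print("output:", predict(x))

# Nudge a single weight: the output shifts, but only via the whole chain of
# downstream connections - there is no one weight that "means" anything on
# its own.
W1[0, 0] += 0.1
print("output after tweaking one weight:", predict(x))
```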

It appears that I am not the only one struggling with this. I read this interesting article a couple of weeks back. The title was “Don’t believe the hype about AI in business”, and it talks about a number of areas in which AI is still far from ubiquitous in most businesses. Depending on how you measure ‘ubiquitous’, the article is clearly wrong: any business whose employees use Google to search for stuff on the internet is using AI. That said, I suspect that its main premise – that most businesses are a long way from implementing AI and machine learning techniques in their own core business processes – is largely correct.

The article goes on to give some of the reasons why this is so.

One of the reasons that caught my eye was this quote:

The bigger difficulty in working with this form of AI is that what it has learned remains a mystery — a set of indefinable responses to data. Once a neural network is trained, not even its designer knows exactly how it is doing what it does. As New York University professor Gary Marcus explains, deep learning systems have millions or even billions of parameters, identifiable to their developers only in terms of their geography within a complex neural network. They are a “black box,” researchers say.

This was my (less well articulated) point!

There has been some research done on how to interpret the weights inside neural networks. This paper (dating back to 1991) is pretty commonly referred to, and it seems others have tried to implement it and take it further. Although I haven’t read these papers, presumably they all fall short somewhere; otherwise Gary Marcus and friends wouldn’t be making their “black box” comments.

Before we trust a prediction from anything – human or machine – most of us like to hear an explanation of how it arrived at that prediction, especially when those predictions can have a serious impact on a company’s bottom line and shareholder returns. Right now, complex neural networks simply cannot give that kind of explanation, so we are somewhat hesitant to trust them – even if, empirically, we can see that they very often do make ‘intelligent’ predictions. It’s pretty hard to trust (or argue with) something that answers “just because” to any question you put to it.

So, here is the next killer app in machine learning: software that can scan the inputs, outputs and calculated weights of a complex neural network and give a human-understandable explanation of any given output.
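I’m not pretending to know how to build that. But just to make the idea concrete, here is about the crudest possible sketch of it, treating the network purely as a black box and asking which inputs a particular prediction is most sensitive to. The predict function and the feature names below are placeholders I’ve made up; real interpretability work goes far beyond this.

```python
# A very crude sketch of the idea, assuming nothing about the network's
# internals: nudge each input feature in turn and report which ones move
# a given prediction the most.
import numpy as np

def explain_prediction(predict, x, feature_names, eps=1e-2):
    """Rank features by how much a small nudge to each one changes the output."""
    base = predict(x)
    sensitivities = {}
    for i, name in enumerate(feature_names):
        nudged = x.copy()
        nudged[i] += eps
        # finite-difference estimate of d(output)/d(feature i)
        sensitivities[name] = (predict(nudged) - base) / eps
    ranked = sorted(sensitivities.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return base, ranked

# Toy stand-in for a trained network's prediction function (hypothetical)
def predict(x):
    return float(3.0 * x[0] - 0.5 * x[1] + 0.1 * x[2])

x = np.array([1.0, 2.0, 3.0])
output, ranked = explain_prediction(predict, x, ["income", "age", "tenure"])
print(f"prediction: {output:.2f}")
for name, sensitivity in ranked:
    print(f"  {name}: sensitivity {sensitivity:+.2f}")
```

Something like this can tell you which inputs mattered most for a single prediction, but not why – and closing that gap is exactly what the real killer app would have to do.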

The person who cracks that one will make themselves a large pot of money.

2 Replies to “Trust issues – and a pile of money”

  1. Good thoughts, Charles – I agree with your sentiments.

    A related but different angle on this is that, to a lesser degree, we also face this challenge within our own heads. Our minds seem to be composed of parts that are consciously introspectable, as well as parts that aren’t very introspectable. When we say we have a “gut feeling” that something may be one way or another, I interpret that to mean that the key parts of the analysis were performed by the part of our mind “behind the veil” – somewhat, or maybe even entirely, out of reach of our conscious mind’s ability to introspect.

    At that point, we are almost like the data scientist looking at a black box. We can observe the behavior of that part of the system, and over time we may be able to form a theory as to why it’s doing what it’s doing, but we can’t directly see it in action.

    Psychologists suggest that our conscious minds have a pretty amazing ability to cook up theories on the fly, very quickly, to try to explain mental phenomena. For instance, there were reports in some studies that severing the corpus callosum and showing a person one thing in one eye and another thing in the other eye would result in an experience that was confusing and inconsistent. The test participant would quickly explain away the mental discrepancy with an explanation that sounded not-crazy, but was actually untrue.

    I guess I’m curious whether our minds have some circuitry devoted to trying to smooth the rough edges that result from our own internal black box(es). Maybe, as you suggest, one of the big advances in AI at some point will be a system that can do likewise for ML models.

    1. Indeed, all of us use and trust things we don’t really understand pretty much all the time – our own brains being the prime example! I have done some reading on cognitive bias, which is also quite interesting. The basic premise is that there are dozens (if not hundreds) of documented ways in which human beings rationalise things away and convince themselves of stuff that just isn’t true. But of course, since it’s people doing it, and it’s a familiar behaviour, we have more trust in the process. Since we don’t “know” machines so well, our trust levels are lower.
