
Machine Learning is flat!


Okay, first things first: I want you to watch Johnny Harris explain the whole issue with the flat earth theory.

Are you done? Good.

This post has nothing and everything to do with the video above. However, I wanna touch on the debate around explainability and the regulations around this unexplainable black box. It's a conversation that gets heated up real quick across the globe. But we are in times where there is a dire need for a decent, balanced conversation around it. And by conversation, I don't mean talking math or algorithms. I mean just talking about the issues and the solutions. We have a problem. And the first step to solving a problem is accepting that we have one.

The biggest problem with machine learning is one of communication. Specifically, how oversimplification and reductionism morph an idea into an absurdity that is surprisingly easily digested by the masses and people with big chequebooks. And telling it verbatim feels like an attack on the intellect, with a jumbled web of abstractions and equations that gets played down by the obsessive desire to oversimplify things … which leads to absurd morphings, and the cycle of you wasting your life for the next however many minutes begins.

To understand it, let's understand communication itself. For effective communication to happen, there needs to be a shared interpretation of vocabulary between the two communicating parties. This implies that either the machine learning folks need to understand the vocabulary of non-machine learning folks and explain things in equivalent terms, or the non-machine learning folks need to understand the machine learning vocabulary and mathematics. When a conflict in understanding arises, it lies in the failure of the abstractions used by either party.

Now, this is where it gets tricky. More often than not, the biases kick in and our brain starts to fill in the gaps with them, which, in the flow of conversation, gets overlooked by either party. And then, as the abstractions start to build upon one another, a simple concept gets morphed along a spectrum from slight error to major absurdity. These gaps arise when we don't ask questions about something we don't understand, out of fear of humiliation or some other hesitation, or worse, when we just assume that what we think is the same as what's being communicated, despite subtle differences. And you'd be surprised just how frequently they do happen. Note that this communication error has almost nothing to do with the mathematics or algorithms of machine learning. Mathematics is a whole different ball game. And to be fair, this problem is not unique to machine learning; explaining any sufficiently abstract concept will often yield a similar situation in conversation.

If you pay close attention to this conundrum, you can see how abstractions lead to major communication errors. But that's the catch-22. The entirety of machine learning is one big jumbled forest of abstraction. It's not even a science after a point, and training any half-decent machine learning system is an art of tuning the statistical parameters in just the right way (talk to any experienced individual, and the answer is "it comes with experience, trial and error"). This, I believe, is at the root of the explainability problem. There is a mountain of mathematical abstractions that needs to be communicated, usually in a less than ideal setting (with a less than ideal audience), before a model's workings can be explained. And let's be honest, how many mathematicians and computer scientists are really prolific public speakers! And I am sorry, a Nanodegree on Udacity and following tutorials on YouTube do not make you a computer scientist, though they can be a good stepping stone! These two skills, in the universal set of all skills, verge on disjoint 🌚. It's also partly down to the famous line: "If you can't explain it to a six-year-old, you don't understand it yourself." If you find someone at the intersection of these three abilities, just pay whatever they ask for and shut the hell up! It's just that rare, and it could turn out to be a decent investment.
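To make the "trial and error" bit concrete, here's a minimal sketch of what that tuning ritual often looks like in practice. It's an illustration, not anyone's actual method: the model, the parameter ranges, and the toy data are all assumptions of mine, wrapped around scikit-learn's random search.

```python
# A minimal sketch of the "trial and error" ritual: random search over
# hyperparameters. Model, ranges, and data below are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# A toy dataset standing in for whatever you're actually modelling.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# The "art": nobody hands you these ranges; they come from experience.
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,  # 20 random guesses -- literally trial and error
    cv=5,       # 5-fold cross-validation scores each guess
    random_state=0,
)
search.fit(X, y)

print("Best parameters found:", search.best_params_)
print("Best cross-validated score:", search.best_score_)
```

Twenty guesses, five-fold validation on each, pick the winner. That's the unglamorous loop hiding behind "it comes with experience".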

Knowing something and being exceptionally clear about it are two different things. And the odds are that the person explaining to you how this black box works is not exceptional in all three (computer science, mathematics, and public speaking or communication). In my experience, it's like the infamous CAP theorem. But anyway, I think we went off on a tangent here.

Andrej Karpathy was one of the first researchers to actively work on visualizing neural networks, in his thesis. And later, the TensorFlow folks built a visualizer. But that's about it, to the best of my knowledge. The most accessible work is with visualizations. Everything else is pretty much an active research topic. But that doesn't mean it doesn't work. In most cases, these systems perform better and faster than their human counterparts.
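To give a taste of what "visualizing" a network even means, here's a minimal sketch that captures a convolutional layer's activations with a PyTorch forward hook. The tiny model and the layer I pick are my own illustrative assumptions, not any particular tool's approach:

```python
# A minimal sketch of peeking inside the black box: capture a conv layer's
# intermediate activations with a PyTorch forward hook. Model is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # stash the feature maps
    return hook

# Register a hook on the second conv layer (index 2 in the Sequential).
model[2].register_forward_hook(save_activation("conv2"))

x = torch.randn(1, 3, 32, 32)  # a fake RGB image
model(x)

# Each of the 16 channels is a feature map you could render as a heat map.
print(activations["conv2"].shape)  # torch.Size([1, 16, 32, 32])
```

Render each of those channels as an image and you get the familiar feature-map galleries; the polished visualization tools build far more sophistication on top of this same basic idea.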

I have witnessed firsthand the fear of being automated away by AI among people working in the energy and agriculture sectors. Not to say that fear is entirely unfounded, but I believe fear is a primary motivator for human beings, second only to belief. I don't see humans being completely automated away, but rather augmented by their silicon counterparts, helping them probe deeper towards the more fundamental nature of existence and its properties (but then again, we are verging on the philosophical and hypothetical, as with any debate on this matter).

It's hard for even the best and brightest of AI experts to keep up with the developments happening across the globe in this space. You can try to keep up to speed with http://www.arxiv-sanity.com/, but then again, objects in the mirror are closer than they appear. And if research is the lowest-latency system here, the highest-latency system is perhaps that of the regulatory frameworks. Especially when we have demonstrated, scientific evidence of this "unexplainable" black box, known only to mathematicians dwelling in abstractions, performing better than its human counterparts and being feared by many (for all the right reasons, clouded in a veil of absurd conspiracy theories and doctrines). How do we regulate it? I do not know. And to be honest, no one knows. We are all working on it, but we do not have a solution yet. There is obviously a clear need for better methods of education around these systems, and we should most certainly move away from the black-box approach. Tools like Picasso and ConvNet Playground are paving the way for that, but despite the amazing work being done, it's still a very nascent field in its own right.

But one thing's for sure: both parties need to accept that there is a problem. And the problem is not that it doesn't work, or that if it can't be explained it won't work. It's the gap in communication, and the problem of separating the philosophical from the regulatory. Regulatory mechanisms take time and far more due diligence than engineering these models. Keeping pace with the advancements is a challenge. And much like the flat earth conundrum, let's not lose sight of what the scientific community has to say about the matter at hand.

PS: I have addressed the problem specifically with communication here, which in my opinion and experience has been the major one. Yes, the biases arising from sensitivity to the data samples, and privacy, do pose a big challenge to standardizing mainstream adoption within the regulatory machinery. A ton of research is being done in this direction, specifically using GANs, some non-convex optimization techniques, and many novel approaches.

