“Why did you do that?”

Anand Jagadeesh
4 min read · May 16, 2020


Please read in full before jumping to conclusions! :)
Photo by Alex Knight from Pexels

This is about asking questions? Yes, asking questions! Suppose we have a very important responsibility on our shoulders. We are responsible for the decisions we take. We are answerable for what we do when we hold a responsible position, are in charge of some task, or someone entrusts us with something. Say we did something that, we thought, would yield positive results, but unfortunately something didn’t work correctly and the results were terrible. For example, let’s say you are the project lead building an operating system. You decided there was no need for a backdoor that lets engineers enter the system and shut it down during a mishap, or when a program turns into a catastrophe, because you felt such a backdoor would compromise the user’s privacy and might be misused. You might be correct in your judgement. But say a catastrophe happened, and the absence of such a backdoor resulted in heavy losses for a customer.

See, what you did here was right from your perspective. Since a catastrophe happened, people will turn to you and ask why you didn’t put a backdoor in the system. You have a perfect reason for what you did. If you present it well, others will accept it and they will also feel it was the right decision, right (and if they don’t, you are in deep trouble)? That’s what reasons and justifications are for. You might be wondering why I am talking about all this. Yes, it might sound like nonsense, but just read the rest of this article before you jump to conclusions. :) “What is all this about? Why were you talking about all that?” Yes, I am coming to that. Before proceeding further, take two minutes to think of some situation where you were asked to explain your decision. Were you able to give a convincing explanation? At least you tried to justify your part, right? You had a reason of your own. Reasoning ability is a part of most skill tests.

We are moving into an era where robots are being employed in almost all fields. The recent improvements in artificial intelligence and machine learning are quite promising. In the coming years we will walk down a street whose internetworked sensors and devices monitor us, our health, and a whole lot of other things; vehicles will behave autonomously, like a swarm, seamlessly connected to each other and navigating the roads intelligently, driven by AI that does it better than a human driver; and our homes and everything around us will become smarter and smarter. Well, I would love to see some of the really intelligent systems we watched in sci-fi movies as children. Yes, the future is promising. But with everything, you get some clauses. As a friend of mine, a software engineer, once joked, “nothing comes for free; even Santa’s got a ‘Claus(e)’”. Recent thoughts from a group of researchers point towards some of the side effects, or rather ill effects, of the excessive use of machine learning and deep learning to build high-end intelligent systems.

They say there have been cases where a driverless or autonomous car stopped at a green light, ran a red light, or suddenly changed lanes without any apparent reason, and the scientists were not able to identify or correctly determine why. The thing is, the car’s processes, microcode, and training sets arrived at the conclusion to stop even after seeing a green light, either due to some error or because the result of the computation for the current situation was simply “stop”. But what if there was some actual reason that we didn’t know about? This can only be determined by checking the system’s logs. Yet most modern systems use unsupervised deep learning techniques, where the programmers do not know how the system arrived at its current state even though they know the algorithm it runs. This is much like how individuals think. But if you ask a human, they will give you a reason or explanation for what they did. We need some mechanism to get explanations from such systems as well. Logs? I don’t know! Something. Well, this is a serious issue. Think about robots performing surgeries and other critical functions. Their mistakes can be fatal. We might need a proper explanation. Say a driverless car runs over someone or meets with an accident. How will we justify it?
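To make the “logs” idea a little more concrete, here is a minimal sketch, not taken from any real autonomous-driving stack, of what a structured decision-audit log could look like. The model name, field names, and values below are purely hypothetical.

```python
# A minimal, hypothetical sketch: every decision the system takes is written
# out as a structured record so it can be questioned later.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("decision_audit")

def log_decision(model_version, inputs, action, confidence):
    """Record what the system saw, what it did, and how sure it was."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "action": action,
        "confidence": confidence,
    }
    audit_log.info(json.dumps(record))

# Hypothetical example: the planner decided to stop even though the light was green.
log_decision(
    model_version="planner-v0.1",   # made-up version tag
    inputs={"traffic_light": "green", "obstacle_detected": True},
    action="stop",
    confidence=0.93,
)
```

A record like this does not explain the model’s internal reasoning, but it at least preserves what the system saw, what it decided, and how confident it was, so that there is something concrete to question afterwards.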

Recently, Facebook shut down one of its AI programs because the program, which was initially built to converse in English, found English inefficient and invented, or rather derived, a language of its own to converse in. There may be many reasons why it chose to have its own language. Similar things can be seen in various other sectors. Shall we add something to track these programs? Shall we add a mechanism to them so that we can ask why they arrived at a particular solution or state? I think there should be some such mechanism (a small illustration follows below). Anyway, I guess this is one area we need to focus on before the next-level cyborgs roam the planet. What are your thoughts? Please do share your ideas, suggestions, thoughts, or anything you would like to talk about with me by mailing mail@anand.xyz, through the contact form available at www.anandj.xyz, by tweeting to @anandj95, or by messaging me on Facebook.com/anandj95.
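Coming back to the question of asking a program why it arrived at a particular solution: as one hedged illustration (my own sketch, not a method from the article or from any production system), an inherently interpretable model can already answer that question. A scikit-learn decision tree, for example, exposes the exact rule path behind each prediction. The Iris dataset here is just a stand-in; for the deep learning systems discussed above, providing this kind of answer is precisely the hard, open problem.

```python
# A hedged sketch: an interpretable model that can answer "why did you do that?"
# by reporting the rule path it followed. Illustration only, using the Iris data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[0:1]                              # one observation we want explained
prediction = clf.predict(sample)[0]

node_indicator = clf.decision_path(sample)   # tree nodes the sample passed through
leaf_id = clf.apply(sample)[0]
feature = clf.tree_.feature
threshold = clf.tree_.threshold

print(f"Predicted class: {iris.target_names[prediction]}")
print("Because:")
for node_id in node_indicator.indices:
    if node_id == leaf_id:
        continue                             # the leaf itself has no test to report
    name = iris.feature_names[feature[node_id]]
    value = sample[0, feature[node_id]]
    sign = "<=" if value <= threshold[node_id] else ">"
    print(f"  {name} = {value:.2f} {sign} {threshold[node_id]:.2f}")
```

The output is a human-readable chain of conditions, which is the kind of answer we would like far more complex systems to be able to give.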

Footnote: I started writing this article a month ago and could only complete it now, so there might be slight discontinuities between parts of it. Please bear with me. Also, I do not intend to hurt anybody with this article or defame any organisation. Apologies if you felt so! :)

Originally published at https://anand-jagadeesh.blogspot.com on May 16, 2020.

