An Interview with Prof. Shalabh Bhatnagar, Winner of ACCS-CDAC Foundation Award 2017

Prashanth Hebbar, Managing Partner, Knobly Consulting

Prof. Shalabh Bhatnagar of the Department of Computer Science and Automation (CSA), Indian Institute of Science, Bangalore, is the winner of the prestigious ACCS-CDAC Foundation Award for 2017. The award recognizes his seminal contributions in the area of stochastic optimization and control. His research has filled many gaps in the body of knowledge on the control of stochastic dynamic systems.

The Award, instituted in 2004 by the Advanced Computing and Communications Society (ACCS) and the Center for Development of Advanced Computing (CDAC), fosters the development and dissemination of the theory and applications of the computing and communications sciences. The ACCS-CDAC Award is given to individuals whose outstanding contributions and accomplishments have had a significant and demonstrable effect on the practice of computing and communications. The ACCS-CDAC Foundation Award carries a cash prize of Rs. 100,000/-, along with a citation and a plaque.

Dr. Bhatnagar’s model-free, convergent random-search algorithms have found extensive engineering applications in communication networks, service systems, crowd-sourcing and semiconductor manufacturing. The award was presented at the inaugural session of the annual Advanced Computing and Communications Conference (ADCOM 2017) at the International Institute of Information Technology, Bangalore (IIITB) on 8th September 2017.


Prashanth Hebbar: One of the things Feynman talks about is that a photon travelling from A to B need not take a single definite path. It could have taken any number of random paths, but at the end we realize that the path it has taken is the optimal one. The question is, because you look at systems and processes and optimization: are systems inherently optimized, any system for that matter?

Shalabh Bhatnagar: Yes, that’s a very interesting point. When you say that photons travel the shortest path, essentially that is something that is taken care of by nature. Nature may have designed the optimal path for the photon, but on the other hand we look at man-made, engineered systems like the traffic system. We need to optimize such systems to ensure that delays are minimized and congestion levels are reduced. So, nature does the optimum anyway, and of course the time scales of nature are quite different; evolution happens, but on a very, very slow time scale. Given the fact that we are living in this world, I think the important thing is to do the optimization ourselves.


PH: That’s an interesting point you made, time scales. When we talk of time scales, does it depend on, say, the human who is on the other side, his gratification, how fast he requires gratification, or is it determined by something else?

SB: There are two ways to look at it. One way is that the system is the optimizer: the system tries to optimize things, it doesn’t care for individual users, but the system itself is optimizing. The other viewpoint is that the users are the optimizers, which means users want to optimize on their delays, the paths that they travel, and so on. These two objectives may or may not be in conflict. Thus, if the system is optimizing, then some user preferences may be compromised. Or to put it more aptly, it is not necessary that every user will be satisfied with the decisions that the system makes. On the other hand, if the users are optimizing, then of course they can decide what they want to do and how they want to do it.


PH: Well, that’s a great point. Diving a bit deeper into that point of view, do you think that when optimizing a system you need to look at the balance between how the system behaves at one end vis-à-vis how the results come out at the other?

SB: When I talk of optimization, I essentially mean formulating it as what we call an optimization problem and then trying to solve it. We define what we call an objective function. Then there are certain constraints. You need to define or describe all of that. Now, in the process, the optimal parameters that ultimately come out will pretty much depend on the objective function that you choose in the first place. You can decide what objective function you want: at the system level, what is the objective that you really care for? Once you have decided that, the parameters will be tuned in a manner that optimizes (minimizes or maximizes) that objective. In that sense, the system is optimized. Of course, there is no one definition of optimization; it pretty much depends again on what you have set the objective function to be. If it is to minimize delays across all the users, and likewise, then that is what it will be optimized for.
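[Editor's aside: the formulation described here can be sketched in a few lines of Python. The delay model and all numbers below are invented for illustration: a unit of service capacity is split between two roads, the chosen objective is total delay, and plain gradient descent tunes the split.]

```python
a1, a2 = 0.2, 0.4              # invented arrival rates on road 1 and road 2

def total_delay(t):
    # Capacity t goes to road 1, (1 - t) to road 2; each road sees an
    # M/M/1-style delay of 1 / (capacity - arrival rate).
    return 1.0 / (t - a1) + 1.0 / ((1.0 - t) - a2)

def grad(t):
    # Derivative of the chosen objective with respect to the split t.
    return -1.0 / (t - a1) ** 2 + 1.0 / ((1.0 - t) - a2) ** 2

t = 0.45                       # feasible start: t > a1 and 1 - t > a2
for _ in range(5000):
    t -= 1e-3 * grad(t)        # plain gradient descent on the chosen objective

print(round(t, 3))             # settles at the split the objective dictates
```

[With these rates the descent settles at t ≈ 0.4, the split that equalizes the slack capacity on the two roads. Choosing a different objective function would yield a different split, which is exactly the point made above.]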


PH: Let us take traffic management as a case in point. Are not all queues optimized by themselves, and is that not precisely why they organize themselves as a queue and not anything else?

SB: That’s exactly my point. We say that our objective function is to minimize queues, so that the level of congestion gets reduced. If the queue is constantly building up, then essentially you are experiencing longer and longer delays on the road. You don’t want that to happen. If there are no queues, then that’s perhaps the best situation: you really don’t have to worry too much about the amount of delay you experience, because as soon as the signal turns green, you are sure that you will cross the signal and go on to the next one.


PH: Since we are now talking about systems and optimization, how do you see this impacting the area of machine learning?

SB: Big data is something that everyone is talking of, working on and trying out, so that’s the real challenge now. Machine learning has been there for a while, but there are challenges remaining in the area that people are trying to address. One of the major challenges is big data: the question is how to come up with efficient methods that tackle the high dimensionality. There is an enormous amount of data available, and we work on schemes and techniques to come up with models, which could be for prediction or optimization of anything. Essentially, classical techniques sort of fail when you have high-dimensional problems, when the dimension and the amount of data are very large. The amount of computation that one usually encounters is so high that it becomes almost impossible to run through classical methods, and so one needs to do some approximation.

So basically, the goal is to develop techniques that can handle big data and high-dimensional data. Traffic is one such example of a very high-dimensional process with very large data, and tackling it effectively in real time is a challenge. It is not easy to design something that is very good and fast as well.


PH: Do you see AI as just another fad that the computing fraternity has picked up again, after its so-called classic failure from its mid-60s peak? What do you think of it?

SB: No, AI is coming back. It is coming back in a very big way. There is a lot of talk about machines replacing man and so on, and many people believe that is going to happen. We have been looking at drones, flying objects, and how to control their movement. One instance: if there are multiple drones travelling, how do they coordinate their movement among themselves by just talking to one another, without you controlling them? If you want to do surveillance of some area using drones, and you have sent the drones off to a zone that is not very friendly, they should be able to talk to one another and figure out that “I will cover this and you will cover that,” and so on.


PH: Yes, that is actually an instance of very large dimensionality right?

SB: Right. As far as the drones are concerned, yes, we are looking at those things. Robotics is something we have started looking into. Currently, I am working jointly with Ashitava Ghosal, a mechanical engineer. He is a robotics man. His team has a four-legged robot, and what we want to do is to learn its movements. We apply reinforcement learning algorithms to make the robot learn to move, encounter obstacles and then work around them. Ultimately, the goal is to make it move up and down a staircase, for example. If it is going down, say, we should ensure that it doesn’t fall and does not go too fast; rather, its two back legs should come down smoothly. Those are the challenges a robotic dog offers.


PH: Interesting. So, I think Google’s adoption of Bayesian filters, which they started applying everywhere, changed everything. Do you think that was a turning point?

SB: No, we don’t do Bayesian learning. We don’t assume a model as such, because by design we work with model-free algorithms.


PH: So, there is no a priori model.

SB: Nothing. We just assume that there is nothing; we encounter an event, take the data in, and the algorithm adapts by itself. It’s more learning by interaction with the environment.
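[Editor's aside: what "learning by interaction" means can be sketched with tabular Q-learning, a standard model-free algorithm. This is an illustrative toy, not one of Prof. Bhatnagar's algorithms; the two-state environment below is invented. The learner never reads the transition probabilities, it only observes sampled transitions and rewards.]

```python
import random
random.seed(0)

# Black-box environment: the learner never sees these dynamics; it only
# samples transitions.  (Invented 2-state, 2-action problem.)
def step(state, action):
    if action == 1:                      # action 1 tends to lead to the good state
        nxt = 1 if random.random() < 0.9 else 0
    else:
        nxt = 0 if random.random() < 0.9 else 1
    reward = 1.0 if nxt == 1 else 0.0    # being in state 1 pays off
    return nxt, reward

Q = [[0.0, 0.0], [0.0, 0.0]]             # Q[state][action], learned from data alone
alpha, gamma, eps = 0.1, 0.9, 0.2        # step size, discount, exploration rate
s = 0
for _ in range(20000):
    # Epsilon-greedy action choice: mostly exploit, sometimes explore.
    a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
    s2, r = step(s, a)
    # Model-free update: bootstrap from the observed sample, no model used.
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

policy = [max((0, 1), key=lambda x: Q[st][x]) for st in (0, 1)]
print(policy)                            # greedy policy after learning
```

[After enough interaction the greedy policy picks action 1 in both states, purely from observed data: the algorithm "answers itself" without ever being given a model.]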


PH: This may have a lot of industrial applications, for example, controlling HVAC systems. When people are moving in, a certain temperature should be maintained, and the system should predict the required temperature over the next one or two hours so that energy consumption is optimized.

SB: Correct. There are many interesting problems that arise. Take the micro-grid domain: if a certain micro-grid has an excess of energy, can it transfer that, at a certain price, to another micro-grid which is short of energy? Interesting optimization problems arise here which are model-free. A lot of prior work assumes information on the system model. Such algorithms and frameworks are not reliable, because the models cannot be 100% correct; there is a failure probability. Our algorithms, because they are model-free, really don’t assume that we have a model of the system saying whether it’s sunny or windy and so on. We really don’t care about that; as a result, we just look at whatever data is available and work with that, and that’s what is nice about the model-free approach.
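[Editor's aside: the micro-grid trade can be sketched as a back-of-the-envelope calculation; all prices and quantities below are assumed for illustration. The deficit grid buys as much as it can from the surplus grid at the cheaper peer price and covers any remainder from the utility.]

```python
# Invented numbers: micro-grid A has surplus energy, micro-grid B is short.
surplus_a = 40.0        # kWh available in A beyond its own demand
deficit_b = 25.0        # kWh that B is short of
p_peer = 4.0            # assumed price per kWh for a peer-to-peer transfer
p_utility = 6.5         # assumed price per kWh from the main utility

transfer = min(surplus_a, deficit_b)       # ship only what A spares and B needs
residual = deficit_b - transfer            # any remainder comes from the utility
cost_with_transfer = transfer * p_peer + residual * p_utility
cost_without = deficit_b * p_utility
print(cost_with_transfer, cost_without)    # 100.0 vs 162.5 with these numbers
```

[The real problem is harder, since surpluses, deficits and prices fluctuate with weather and demand, which is exactly why a model-free, data-driven approach is attractive here.]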


PH: Let us say you think about the probability that another cyclone is going to hit the Indian shores this week; a lot of things depend on the various environmental events that are taking place, right? How do you approach this kind of a problem? Where do you start, if you don’t start with the models?

SB: If you are asking the “when” question, that is, when is the next cyclone going to hit, I think you will need historical data to make that prediction. The “what” question, on the other hand, can apply generally to any situation, because there is nothing much you can do once the cyclone hits except make some precautionary arrangements, like evacuating people. The problem to solve then would be: given the fact that the cyclone may hit with a certain probability, should I make this decision, say, should I evacuate people in the coastal areas now or not? That is a control problem.
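[Editor's aside: in its simplest one-shot form, the evacuation decision described here reduces to an expected-cost comparison. A sketch with invented numbers:]

```python
# One-shot expected-cost comparison; all numbers are invented.
p_hit = 0.25                # assumed probability the cyclone makes landfall
cost_evacuate = 10.0        # cost of evacuating (paid whether or not it hits)
cost_hit_no_evac = 100.0    # damage cost if it hits and we did not evacuate

expected_cost = {
    "evacuate": cost_evacuate,
    "wait": p_hit * cost_hit_no_evac,    # expected damage if we do nothing
}
decision = min(expected_cost, key=expected_cost.get)
print(decision, expected_cost)           # evacuate: 10.0 beats waiting: 25.0
```

[The full control problem is sequential, deciding at each time step with updated probabilities, but the core trade-off is the one computed above.]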

PH: So, you focus on the control problem, and the trick is to ask the right question that brings the control problem into focus.

SB: Yes, we focus more on the control problem. I think the weather forecasters look at the problem of prediction, but if you do prediction, then you probably need historical data and some trigger: you should be noticing some changes in the behavior of the seas, and so on, that essentially tell you that a cyclone is likely to make landfall.


PH: Switching gears here, can you tell us what a young engineer or a researcher should do? What’s your advice? What are the challenging problems out there, and what can they start off with?

SB: I think this is an important question. Some people make the mistake of going after areas that are extremely hyped up. Now AI is coming back again, so everyone wants to work in AI or ML, and that’s what we also see in our domain. The number of people interviewing for the ML area, for instance, in research interviews is so high that we often have a challenge scheduling those interviews. See, ultimately the goal is that people should be excited about the area itself. Whether it is hyped or not, research fields will come and go.


PH: What shortfalls do you see in our young researchers? Which skills can they hone to be better at research?

SB: I will give you a classic case which is quite common these days. Our department, for instance, is extremely strong in theory: there is theoretical computational complexity, and all those algorithms on which people are working. I think one of the key things we find, particularly in our current BTech education countrywide, is the lack of mathematical background. People are really not focussed on acquiring mathematical skills, and that becomes kind of a dampener when it comes to research. Let me illustrate this with an example. Say you come up with an algorithm. Normally what happens is that people will do some experiments using that algorithm in some setting and show that, well, the algorithm is doing better than some other algorithms, but then the reviewers will come back saying, “That’s okay in this setting; what about other settings?” You end up trying 20 more settings and have to keep on convincing them. But if you also have a [mathematical] analysis to go with that, showing that your algorithm is doing the optimal thing, then essentially there are not many complaints. So it is much easier to publish papers if you have the mathematical analysis to back up your claims. I think mathematical rigour is required for doing good research.