Getting to know the limitations of deep learning models
Google CEO Sundar Pichai has said that AI will be more profound than the invention of electricity or fire, emphasizing the impact it will have on the human race. The deep learning algorithms behind the success of AI are deep neural networks, whose multiple hidden layers learn from exposure to millions of data points.
The current limitations of deep learning algorithms are:
- Huge data and computational power: Systems demand huge sets of training data to perform complicated tasks at a human level. Supervised learning requires labeled data, which, though not conceptually difficult, can be laborious to produce at scale. Unsupervised learning overcomes this problem by having systems learn on their own, for example through clustering. However, this takes multiple steps and requires huge amounts of CPU and GPU time to train the model.
- Prediction uncertainty: To counteract the high demand for computational power, transfer learning is often used: a model trained on one task is repurposed for a second, related task. This often breaks the model, as it does not understand the new problem well enough. It is not easy for a model to take the knowledge it learnt on one task and apply it to another. Moreover, the parameters of neural networks are interpretable only as mathematical weights; the models are black boxes whose outputs cannot be fully explained.
- Embodied experience: Models are programmed with little innate knowledge and possess no common sense about the world or human psychology. They are good at local generalization, adapting to new situations that are close to past data. Being heavily dependent on their input, they are likely to project the biases in the input data onto their results. They lack the human cognitive capability of extreme generalization needed to comprehend such situations.
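To make the unsupervised-learning idea mentioned above concrete, here is a minimal, hypothetical sketch of clustering with k-means on toy one-dimensional data (not code from the article): the algorithm groups unlabeled points on its own, with no human-provided labels.

```python
import random

random.seed(42)

# Unlabeled data: two obvious groups, centered around 0 and 10.
points = [random.gauss(0, 0.5) for _ in range(50)] + \
         [random.gauss(10, 0.5) for _ in range(50)]

def kmeans(points, k, iters=20):
    """Plain k-means on 1-D points; returns the sorted cluster centers."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans(points, 2))  # centers close to 0 and 10
```

Even this tiny example shows the trade-off described above: no labels are needed, but the algorithm iterates over the whole dataset many times, which is what makes unsupervised methods computationally expensive at real-world scale.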
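The transfer-learning pattern described above can be sketched as follows. This is a hypothetical toy example in pure Python, not any production API: a "pretrained" feature extractor is frozen, and only a small new output layer is trained on the target task.

```python
import math
import random

random.seed(0)

# Frozen "pretrained" layer: in real transfer learning these weights
# would come from training on a large, related source task.
W_frozen = [[1.0, -0.5], [0.3, 0.8]]

def extract(x):
    # Map a 2-D input through the frozen layer to 2 features.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

# Target task (toy): label is 1.0 when x0 + x1 > 1, else 0.0.
xs = [(random.random(), random.random()) for _ in range(200)]
data = [(x, 1.0 if x[0] + x[1] > 1 else 0.0) for x in xs]

# Train only the new output layer: logistic regression on frozen features.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(300):
    for x, y in data:
        f = extract(x)
        p = 1 / (1 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
        g = p - y  # gradient of the log loss w.r.t. the pre-activation
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g

def predict(x):
    f = extract(x)
    return 1.0 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0.0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"train accuracy: {accuracy:.2f}")
```

This works here only because the frozen features happen to carry the information the new task needs. When they do not, retraining just the last layer fails, which is the fragility the article points to: the reused model has no real understanding of the new problem.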
By: Jaya Kuppuswamy