“What Can Neural Networks Not Learn?”

Abstract

“Recent advances in computing power have driven the rise in popularity of deep neural networks (DNNs), and convolutional neural networks (CNNs) in particular have achieved very good performance on many tasks. However, CNNs have been observed to lack robustness: they tend to focus on low-level features and often fail spectacularly in situations that differ only slightly from those on which they were trained. This work investigates the failure modes of CNNs in a setting without distribution shift, on tasks that humans consider very easy and that can be solved without significant prior knowledge. By comparing the human visual system with CNNs, it becomes clearer where effort should be directed to reduce the gap between the two systems. These tasks test CNNs extensively from multiple perspectives, and the work shows that CNNs fail on three different types of tasks: abstract reasoning tasks, visual perception tasks, and a novel category called low-gradient examples. Consequently, even under the restrictions that the tasks must be very easy for humans and must be evaluated without distribution shift, CNNs are unable to learn certain tasks. These hard tasks are grouped into a novel benchmark, which is significant as the first dataset without distribution shift that collects different categories of very challenging tasks in a consistent way and tests CNNs from multiple perspectives.”
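Since the project stack is PyTorch (see the technologies listed below), the following is a minimal sketch of what an evaluation without distribution shift could look like: training and test samples are drawn from the same synthetic generator of a task that is trivially easy for humans, so any failure of the CNN cannot be attributed to a change of distribution. The counting task, the SmallCNN architecture, and all names in this snippet are illustrative assumptions, not the benchmark described in the abstract.

# Minimal sketch (not the thesis code): train and test data come from the
# *same* generator, so the evaluation involves no distribution shift.
# The task (counting 1 vs. 2 squares) is a hypothetical stand-in for the
# "easy for humans" tasks mentioned in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_batch(batch_size=64, size=32):
    """Images with one or two 4x4 white squares; label = count - 1 (overlaps ignored for simplicity)."""
    imgs = torch.zeros(batch_size, 1, size, size)
    labels = torch.randint(0, 2, (batch_size,))
    for i in range(batch_size):
        for _ in range(int(labels[i]) + 1):
            y, x = torch.randint(0, size - 4, (2,)).tolist()
            imgs[i, 0, y:y + 4, x:x + 4] = 1.0
    return imgs, labels


class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32 -> 16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 16 -> 8
        return self.fc(x.flatten(1))


model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training on batches sampled from the generator.
for step in range(500):
    imgs, labels = make_batch()
    loss = F.cross_entropy(model(imgs), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluation on fresh samples from the *same* generator (no distribution shift).
model.eval()
with torch.no_grad():
    imgs, labels = make_batch(batch_size=1024)
    acc = (model(imgs).argmax(dim=1) == labels).float().mean().item()
print(f"in-distribution accuracy: {acc:.3f}")

Under this protocol, any gap between training and test accuracy on such a task reflects what the network can or cannot learn, rather than a mismatch between training and test distributions.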

Technologies: Python, PyTorch, Matplotlib, NumPy