Efficiency of object recognition networks on an absolute scale
Richard Murray, Devin Kehoe, York University, Canada
Posters 1 (Poster)
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Deep neural networks have made rapid advances in object recognition, but progress has mostly come through experimentation, with little guidance from normative theories. Here we use ideal observer theory and associated methods to compare current network performance to theoretical limits on performance. We measure network performance and ideal observer performance on a modified ImageNet task, in which model observers view samples from a limited number of object categories at several levels of external white Gaussian noise. We find that although current networks achieve 90% accuracy or better on the standard ImageNet task, the ideal observer performs vastly better on the more limited task we consider here. The networks' "calculation efficiency", a measure of the extent to which they use all available information to perform a task, is on the order of 10^-5, an exceedingly small value. We consider reasons why efficiency may be so low, and outline further uses of ideal observers and noise methods for understanding network performance.
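The abstract does not spell out how calculation efficiency is computed. A common definition in the ideal observer literature (following Tanner and Birdsall) is the ratio of the ideal observer's threshold signal energy to the model observer's threshold signal energy, measured at the same performance level in the same external noise. A minimal sketch with purely hypothetical threshold values (not the study's data):

```python
def calculation_efficiency(e_ideal: float, e_observer: float) -> float:
    """Ratio of threshold signal energies at a matched performance level.

    e_ideal:    signal energy the ideal observer needs to reach criterion
                performance in a given external noise (arbitrary units).
    e_observer: signal energy the model observer (e.g. a network) needs
                to reach the same performance in the same noise.
    An efficiency of 1 means the observer uses all available information.
    """
    return e_ideal / e_observer


# Hypothetical threshold energies illustrating an efficiency of ~10^-5:
e_ideal = 2.0e-3    # ideal observer's threshold energy (made-up value)
e_network = 2.0e2   # network's threshold energy (made-up value)
print(calculation_efficiency(e_ideal, e_network))  # → 1e-05
```

With these illustrative numbers the network needs 10^5 times more signal energy than the ideal observer, which is the order of magnitude the abstract reports.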