Analysis of Transformer attention in EEG signal classification
Philipp Thölke, Karim Jerbi, University of Montreal, Canada
Session:
Posters 3 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Abstract:
While deep learning models achieve remarkable accuracy on electrophysiological time series such as EEG, their inner workings are inherently hard to interpret. The scientific investigation of brain function and dysfunction, however, relies strongly on the ability to characterize the properties and dynamics of neural changes across experimental states or groups. Here we present an approach to estimating feature importance in Transformers from attention weights and provide a proof of concept in the well-studied setting of resting-state EEG with eyes open versus eyes closed (n=109). In addition to feature importance, we visualize the information flow throughout the network, providing a means to distinguish the two conditions from the internal representation of the artificial neural network.
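The abstract gives no implementation details, so the sketch below is illustrative only and not the authors' method. It shows one common way to turn Transformer attention weights into a per-time-step importance score using PyTorch; the dimensions (embed_dim, num_heads, seq_len) and the single-attention-layer setup are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' implementation): deriving a crude
# per-time-step feature importance score from Transformer attention weights.
# All dimensions below are assumed for the example, not taken from the paper.
import torch
import torch.nn as nn

embed_dim, num_heads, seq_len = 64, 4, 128  # hypothetical EEG-epoch shape

attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
x = torch.randn(1, seq_len, embed_dim)  # one embedded EEG epoch: (batch, time, features)

# With average_attn_weights=True the returned weights are averaged over heads
# and have shape (batch, target_len, source_len).
_, attn_weights = attn(x, x, x, need_weights=True, average_attn_weights=True)

# Averaging the attention each input position receives over all query positions
# yields one importance value per time step.
importance = attn_weights.mean(dim=1).squeeze(0)  # shape: (seq_len,)
print(importance.topk(5).indices)  # the five most attended time steps
```

For deeper models, the per-layer attention matrices are often combined, for instance by attention rollout (Abnar & Zuidema, 2020), which multiplies them across layers to approximate the kind of network-wide information flow the abstract refers to.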