Authors: B. Schneider, J. Dambre, P. Bienstman
Title: Using digital masks to enhance the bandwidth tolerance and improve performance in on-chip reservoir computing systems
Format: International Journal
Publication date: 12/2016
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publisher: IEEE
Volume(Issue): 27(12), p. 2748-2753
DOI: 10.1109/TNNLS.2015.2498763
Citations: 22 (Dimensions.ai, last update 17/11/2024); 17 (OpenCitations, last update 27/6/2024)
Abstract
Reservoir Computing (RC) is a computing scheme related to recurrent neural network theory. As a model for neural activity in the brain, it attracts a lot of attention, especially because of its very simple training method. However, building a functional, on-chip, photonic implementation of RC remains a challenge. Scaling delay lines down from optical fibre scale to chip scale results in RC systems that compute faster, but at the same time requires that the input signals are also scaled up in speed, which might be impractical or expensive. In this paper, we show that this problem can be alleviated by a masked RC system in which the amplitude of the input signal is modulated by a binary-valued mask. For a speech recognition task, we demonstrate that the necessary input sample rate can be a factor of 40 smaller than in a conventional RC system. Additionally, we show that linear discriminant analysis combined with input matrix optimisation is a well-performing alternative to linear regression for reservoir training.
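To illustrate the masking idea described in the abstract, the following is a minimal NumPy sketch of how a slow input signal can be expanded by a fixed binary-valued mask before driving a time-multiplexed reservoir. All names and parameter values here (e.g. 40 virtual nodes) are illustrative assumptions, not taken from the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each slow input sample is held for N mask steps,
# so the reservoir sees a fast drive while the input source stays slow.
N = 40                                   # virtual nodes / mask length (assumed)
mask = rng.choice([-1.0, 1.0], size=N)   # fixed binary-valued mask

def mask_input(u):
    """Expand a slow input sequence u into a fast masked drive:
    each sample u[t] is repeated N times, multiplied by the mask."""
    u = np.asarray(u, dtype=float)
    return (u[:, None] * mask[None, :]).ravel()

u = rng.standard_normal(5)   # 5 slow input samples
j = mask_input(u)            # 5 * 40 = 200 fast masked samples
print(j.shape)               # -> (200,)
```

Because the mask is binary, the modulation only flips the sign of the held input sample, which keeps the required input electronics simple while still breaking the symmetry between virtual nodes.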