ANN vs SNN

The main difference between ANN and SNN operation is the notion of time. While ANN inputs are static, SNNs operate on dynamic binary spiking inputs that evolve as a function of time. In ANNs, information is transmitted across the artificial synapses as numerical values expressed in digital form. In SNNs, information is transmitted across the artificial synapses as impulses whose timing is roughly equivalent to the numerical values of the ANNs. Wolfgang Maass has shown that SNNs, also called third-generation neural networks, have greater computational power and can approximate any ANN, also called second-generation neural networks.
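The time-based nature of spiking inputs can be made concrete with a minimal leaky integrate-and-fire (LIF) neuron, the model most spiking chips approximate. This is an illustrative sketch with arbitrary constants, not any specific chip's neuron model:

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    input_current: one input value per time step.
    Returns the time steps at which the neuron spiked. Information is
    carried by *when* spikes occur, not by the values themselves.
    """
    v = v_reset                      # membrane potential
    spike_times = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spike_times.append(t)
            v = v_reset              # reset after spiking
    return spike_times

# A stronger constant input makes the neuron spike earlier and more often,
# which is how spike timing can encode an analog magnitude.
weak = lif_simulate([0.3] * 10)      # -> [3, 7]
strong = lif_simulate([0.6] * 10)    # -> [1, 3, 5, 7, 9]
```

A stronger input thus maps to a shorter time-to-first-spike and a higher spike rate, which is the timing-as-value equivalence mentioned above.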
In any case, ANNs are universal approximators, so from a practical and applicative point of view there is no compelling motivation for using SNNs. The main motivation that led to the development of SNN-based neural chips and related sensory devices is the greater biological plausibility of the SNN model compared to the ANN model. We believe that research in the field of SNNs is very important, and some applications are particularly promising in certain sectors. Nevertheless, we see two criticisms
both in the applicative field and in the field of biological plausibility.

Criticality in the application field: learning algorithms based on STDP (Spike-Timing-Dependent Plasticity) are more difficult to apply. Efficiency is intrinsically linked to a HW realization, and the realization of complex applications is always much more expensive than using ANNs. From a computational-power point of view, there is no problem that can be solved with an SNN that cannot be solved with an ANN.
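For reference, the classic pair-based STDP rule updates a synaptic weight from the relative timing of a pre- and a post-synaptic spike. The sketch below uses the standard exponential timing windows with illustrative constants; it is not tied to any particular chip:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike, depress otherwise. Times in ms; all constants are
    illustrative."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post -> long-term potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:     # post before pre -> long-term depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)   # clip to the allowed weight range

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)   # causal pair: weight grows
w = stdp_update(w, t_pre=15.0, t_post=11.0)   # anti-causal pair: weight shrinks
```

Note that the rule is purely local: there is no global error signal as in backpropagation, which is part of what makes STDP harder to apply to arbitrary supervised tasks.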
Chips based on communication with spikes typically use a protocol called AER (Address Event Representation), or in any case a proprietary protocol that allows communication via events (digital pulses in the time domain) between NPUs (Neural Processing Units). Multiple NPUs can be interconnected (configured) to simulate different neural-network architectures.
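An AER link essentially streams (neuron address, timestamp) pairs. The 32-bit word layout below is a hypothetical illustration of the idea; real AER implementations differ per vendor:

```python
def encode_aer(events, addr_bits=16):
    """Pack (neuron_address, timestamp) spike events into 32-bit words:
    high bits = source neuron address, low bits = coarse timestamp.
    A purely illustrative word layout, not a vendor format."""
    t_mask = (1 << (32 - addr_bits)) - 1
    words = []
    for addr, t in events:
        assert addr < (1 << addr_bits), "address exceeds the address field"
        words.append((addr << (32 - addr_bits)) | (t & t_mask))
    return words

def decode_aer(words, addr_bits=16):
    """Unpack 32-bit AER words back into (address, timestamp) pairs."""
    t_mask = (1 << (32 - addr_bits)) - 1
    return [(w >> (32 - addr_bits), w & t_mask) for w in words]

# Round trip: three spikes from two neurons at different times.
events = [(3, 100), (42, 101), (3, 250)]
assert decode_aer(encode_aer(events)) == events
```

Because only addresses of neurons that actually spiked are transmitted, the bus load scales with activity rather than with the number of neurons.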
From a practical point of view, Event-Based Processing and Event-Based Learning are efficient only if the sensors are also designed to communicate the input data directly with an event-based communication method. This factor restricts the number of sensors that can be used effectively. Almost all commercial sensors communicate data with different digital protocols, so the data must be converted to an event-based communication protocol. For example, a traditional digital camera needs a “pixel-to-event” conversion pre-processing stage; this stage is mandatory if the chip is to accept input from traditional sensors.
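Such a pixel-to-event stage can be sketched as simple delta modulation, similar in spirit to what a DVS camera does in hardware: a pixel emits an ON or OFF event when its intensity drifts beyond a threshold from the last value that triggered an event. The threshold and event format are illustrative assumptions:

```python
def frames_to_events(frames, threshold=10):
    """Convert a sequence of grayscale frames (2-D lists of ints) into
    events (t, x, y, polarity): +1 when a pixel brightens by more than
    `threshold` since its last event, -1 when it darkens. An illustrative
    delta-modulation scheme, not a specific camera's pipeline."""
    h, w = len(frames[0]), len(frames[0][0])
    ref = [row[:] for row in frames[0]]   # last value that produced an event
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y in range(h):
            for x in range(w):
                diff = frame[y][x] - ref[y][x]
                if abs(diff) > threshold:
                    events.append((t, x, y, 1 if diff > 0 else -1))
                    ref[y][x] = frame[y][x]   # update the reference level
    return events

# Two 2x2 frames: only the pixel that changes enough produces an event.
f0 = [[100, 100], [100, 100]]
f1 = [[130, 100], [100,  95]]
assert frames_to_events([f0, f1]) == [(1, 0, 0, 1)]
```

Static regions of the scene generate no events at all, which is exactly the sparsity that event-based processing relies on.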
Regarding the property of “one-shot learning”: it is strictly bound to the neural-network model that is configured. When one-shot learning on a CNN is mentioned, it means that only the final classification layer is able to learn new examples, while all the parameters of the preceding (feature-extraction) layers are configured through a transfer-learning procedure.
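This limited form of one-shot learning can be sketched as a nearest-prototype head on top of a frozen feature extractor; the feature vectors below stand in for the output of a pretrained CNN backbone (hypothetical values):

```python
import math

class OneShotHead:
    """Nearest-prototype classification layer on top of a *frozen* feature
    extractor. Learning a new class is just storing one feature vector:
    the earlier feature-extraction layers stay fixed (transfer learning)."""

    def __init__(self):
        self.prototypes = {}          # label -> stored feature vector

    def learn(self, label, features):
        self.prototypes[label] = features   # one example is enough

    def classify(self, features):
        def dist(proto):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(proto, features)))
        return min(self.prototypes, key=lambda lbl: dist(self.prototypes[lbl]))

head = OneShotHead()
head.learn("cat", [0.9, 0.1])   # feature vectors would come from the frozen
head.learn("dog", [0.1, 0.8])   # backbone; these values are hypothetical
assert head.classify([0.85, 0.2]) == "cat"
```

The one-shot capability is thus a property of the final layer only; a genuinely new domain still requires retraining or re-transferring the feature extractor.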
Scalability (the ability to connect many chips to obtain a bigger neural network), when present, is bound to the configured neural-network architecture. Chips based on spiking neurons are very energy-efficient. However, energy-efficiency comparisons are almost always made against GPUs and, in that context, there is a clear advantage. If you compare the power consumption of a spiking neural chip with that of a non-spiking digital neural chip operating with a low clock on byte-type data, the differences are almost irrelevant. Despite these criticisms, we are
carefully observing and evaluating the new generations of spiking chips and
intend to integrate them into devices for aerospace applications in the short
term.

Criticality in the field of biological plausibility: although SNNs are much closer to the biological model than ANNs, we must point out that there is an impassable threshold between an implementation on silicon and the countless variables involved in the chemical processes of a real biological neuron. Beyond all the opinions related to the
research aspect, we have chosen to use ANNs also as an on-chip HW implementation. The key technology of our HW devices is, in fact, the Neuromem® chip, which offers maximum scalability, an extremely low clock frequency, and ease of use. This chip implements a fixed neural architecture of the RBF (Radial Basis Function) type with the RCE (Restricted Coulomb Energy) learning algorithm and is therefore a classifier. Scalability is assured and virtually unlimited. One-shot learning is inherently
guaranteed by RCE. The possibility of obtaining degrees of confidence on multiple classifications makes it possible to transform the classifier into a GRNN (General Regression Neural Network) through weighted interpolations on LUTs (Look-Up Tables). All anomaly-detection problems can be solved in an extremely simple way.
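The RCE behavior described above — one-shot commitment of prototype neurons with shrinking influence fields, and "no neuron fires" as an anomaly signal — can be sketched as follows. This is a simplified illustration of the RCE idea, not the chip's actual silicon logic:

```python
def l1_distance(a, b):
    """Manhattan distance between two pattern vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

class RCEClassifier:
    """Simplified sketch of an RCE-trained RBF classifier: each committed
    neuron stores a prototype, an influence-field radius, and a label."""

    def __init__(self, max_radius=100):
        self.max_radius = max_radius
        self.neurons = []                     # (prototype, radius, label)

    def learn(self, pattern, label):
        recognized = False
        for i, (proto, radius, lbl) in enumerate(self.neurons):
            d = l1_distance(proto, pattern)
            if d < radius:
                if lbl != label:
                    # Wrong-label neuron covers the pattern: shrink its field.
                    self.neurons[i] = (proto, d, lbl)
                else:
                    recognized = True
        if not recognized:
            # One-shot learning: commit a new neuron for the novel example.
            self.neurons.append((pattern, self.max_radius, label))

    def classify(self, pattern):
        """Label of the closest firing neuron, or None (anomaly: no neuron
        fires, i.e. the pattern lies outside every influence field)."""
        firing = [(l1_distance(p, pattern), lbl)
                  for p, r, lbl in self.neurons if l1_distance(p, pattern) < r]
        return min(firing)[1] if firing else None

clf = RCEClassifier(max_radius=50)
clf.learn([10, 10], "A")        # each novel example commits a neuron at once
clf.learn([40, 40], "B")
assert clf.classify([12, 11]) == "A"
assert clf.classify([200, 200]) is None   # nothing fires: flagged as anomaly
```

Replacing the hard firing decision with distance-weighted averaging of the firing neurons' stored output values is, in essence, the GRNN transformation mentioned above.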
©2024 Luca Marchese, All Rights Reserved | Aerospace & Defence Machine Learning Company | VAT: IT0267070992 | NATO CAGE CODE: AK845 | Email: luca.marchese@synaptics.org