1. Introduction
Neurocomputing [4] is a technological discipline concerned with information processing systems (for example neural networks) that autonomously develop operational capabilities in adaptive response to an information environment. Neurocomputing is a fundamentally new and different approach to information processing. It is the first alternative to programmed computing, which has dominated information processing for the last 50 years.
An artificial neural network is a data processing structure (real or simulated) that bears some resemblance to natural neural tissue. More precisely, it is a set of interconnected basic processing elements called neurons. For an input (called a stimulus) this set automatically produces an output (a response).
2. Theoretical issues
The main difference between holographic and conventional neural networks is that a holographic neuron is more powerful than a conventional one, so that it is functionally equivalent to a whole conventional network. Consequently, a holographic network usually requires a very simple topology consisting of only a few neurons. Another characteristic of the holographic technology is that it represents information by complex numbers operating within two degrees of freedom (value and confidence). A further important property is that holographic training is accomplished by direct (almost non-iterative) algorithms, while conventional training is based on relatively slow "back-propagation" (gradient) algorithms.
A holographic neuron is sketched in Figure 1. As we can see, it is equipped
with only one input channel and one output channel. However, both channels carry
whole vectors of complex numbers. An input vector S is called a stimulus and it
has the form
$$S = [\lambda_1 e^{i\theta_1}, \lambda_2 e^{i\theta_2}, \ldots, \lambda_n e^{i\theta_n}].$$
An output vector R is called a response, and its form is

$$R = [\gamma_1 e^{i\phi_1}, \gamma_2 e^{i\phi_2}, \ldots, \gamma_m e^{i\phi_m}].$$
All complex numbers above are written in polar notation, so that moduli (magni-
tudes) are interpreted as confidence levels of data, and arguments (phase angles)
serve as actual values of data. The neuron internally holds a complex $n \times m$ matrix $X = [x_{jk}]$, which serves as a memory for recording associations.
[Figure 1: a holographic neuron: the stimulus vector S enters the memory matrix X, which produces the response vector R.]
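To make this representation concrete, here is a minimal Python sketch (ours, not taken from the paper) that builds a stimulus and a response in the above form; the sigmoid mapping of raw values to phase angles is an assumed preprocessing step, since the paper leaves the choice of preprocessing open.

```python
import numpy as np

def encode(values, confidence=1.0):
    """Map real-valued data onto complex phasors.

    The sigmoid that squashes raw values into phase angles is an
    assumed preprocessing step; the paper does not fix this choice.
    Magnitudes carry the confidence levels of the data.
    """
    values = np.asarray(values, dtype=float)
    phases = 2.0 * np.pi / (1.0 + np.exp(-values))   # sigmoid -> (0, 2*pi)
    return confidence * np.exp(1j * phases)

# A stimulus with n = 4 elements and a response with m = 2 elements.
S = encode([0.3, -1.2, 0.8, 2.5])
R = encode([1.0, -0.5])

# The neuron's internal memory: a complex n x m matrix, initially empty.
X = np.zeros((S.size, R.size), dtype=complex)
```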
Now we will explain the basic learning process. Learning one association between
a stimulus S and a desired response R requires that the correlation between the
j-th stimulus element and the k-th response element is accumulated in the (j, k)-th
entry of the memory matrix. More precisely:
$$X \mathrel{+}= \bar{S}^{\,\tau} R. \qquad (1)$$
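In vector terms, (1) is just the outer product of the conjugated stimulus with the response, accumulated into the memory. A minimal numpy rendering of this step (again our sketch) could read:

```python
import numpy as np

def learn(X, S, R):
    """Basic learning step (1): X += conj(S)^T R.

    S is a length-n complex stimulus and R a length-m complex response;
    their outer product records all pairwise correlations in X.
    """
    X += np.outer(np.conj(S), R)
    return X
```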
Once the memory matrix has been formed, a response $R^*$ to a new stimulus $S^*$ is generated by the decoding formula

$$R^* = \frac{1}{c}\, S^* X, \qquad (2)$$

where $c = \sum_{j=1}^{n} \lambda_j^*$ is a normalizing constant (the total confidence of the stimulus).

Now there follows an analysis of the computed response. Suppose that the associations

$$(S^{(t)}, R^{(t)}), \qquad t = 1, 2, \ldots, p,$$
have previously been learned. Let us consider the k-th response element, 1 ≤ k ≤ m.
According to (1) and (2) we have:
$$\gamma_k^* e^{i\phi_k^*} = \frac{1}{c} \sum_{j=1}^{n} \lambda_j^* e^{i\theta_j^*} \sum_{t=1}^{p} \lambda_j^{(t)} \gamma_k^{(t)} e^{i(\phi_k^{(t)} - \theta_j^{(t)})} = \frac{1}{c} \sum_{t=1}^{p} \gamma_k^{(t)} e^{i\phi_k^{(t)}} \sum_{j=1}^{n} \lambda_j^* \lambda_j^{(t)} e^{i(\theta_j^* - \theta_j^{(t)})} = \sum_{t=1}^{p} \Lambda^{(t)} e^{i\Psi^{(t)}},$$

where

$$\Lambda^{(t)} = \frac{\gamma_k^{(t)}}{c} \left\{ \left[ \sum_{j=1}^{n} \lambda_j^* \lambda_j^{(t)} \cos\left(\theta_j^* - \theta_j^{(t)}\right) \right]^2 + \left[ \sum_{j=1}^{n} \lambda_j^* \lambda_j^{(t)} \sin\left(\theta_j^* - \theta_j^{(t)}\right) \right]^2 \right\}^{1/2},$$

$$\Psi^{(t)} = \tan^{-1} \frac{\sum_{j=1}^{n} \lambda_j^* \lambda_j^{(t)} \sin\left(\theta_j^* - \theta_j^{(t)} + \phi_k^{(t)}\right)}{\sum_{j=1}^{n} \lambda_j^* \lambda_j^{(t)} \cos\left(\theta_j^* - \theta_j^{(t)} + \phi_k^{(t)}\right)}.$$
Thus the chosen response element is a sum of many components. Each component
corresponds to one of the learned associations.
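This analysis is easy to check numerically. The following self-contained sketch (our illustration, with all confidence levels set to 1) encodes several random associations with (1), decodes one of the learned stimuli with (2), and shows that the recalled phases stay close to the learned ones while the remaining components contribute only a small error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 256, 4, 5   # stimulus size, response size, association count

# p random associations with all confidence levels equal to 1.
stimuli = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(p, n)))
responses = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(p, m)))

# Encode every association with the basic learning rule (1).
X = np.zeros((n, m), dtype=complex)
for S, R in zip(stimuli, responses):
    X += np.outer(np.conj(S), R)

# Decode with (2), taking c as the sum of the stimulus magnitudes.
S_star = stimuli[2]               # a previously learned stimulus
c = np.sum(np.abs(S_star))
R_star = (S_star @ X) / c

# The dominant component reproduces response no. 2: magnitudes are
# close to 1 and the phase differences are small.
print(np.abs(R_star))
print(np.angle(R_star * np.conj(responses[2])))
```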
Now let us consider the case where the new stimulus S ∗ is approximately equal
to one of the previously learned stimuli. Suppose that for some l, 1 ≤ l ≤ p,
S ∗ ≈ S (l) .
Then the above expressions for $\Lambda^{(t)}$ and $\Psi^{(t)}$ indicate that the $l$-th component of
the response has a relatively big confidence level and an explicit direction:
$$\Lambda^{(l)} \approx 1, \qquad \Psi^{(l)} \approx \phi_k^{(l)}.$$
The other components usually have smaller confidence levels and different directions. It means that, for $t \neq l$, $\Lambda^{(t)}$ is relatively small, while $\Psi^{(t)}$ points in a more or less random direction. The sum of these components therefore behaves as a small error superimposed on the dominant $l$-th component, as illustrated in Figure 2.

[Figure 2: the computed response element visualized as a vector sum of components: one dominant component together with an accumulated error.]

The error just described can be reduced by an enhanced learning procedure. To learn an association between a stimulus $S$ and a desired response $R$, first generate the response $R'$ that the neuron with its current memory $X$ produces for $S$, according to (2). Then compute the difference between the desired and the generated response:

$$R_{\mathrm{dif}} = R - R'.$$
Finally, learn the association between the stimulus and the above difference, by
using the old formula (1):
$$X \mathrel{+}= \bar{S}^{\,\tau} R_{\mathrm{dif}}.$$
The resulting formula, which can replace (1), is

$$X \mathrel{+}= \bar{S}^{\,\tau} \left( R - \frac{1}{c}\, S X \right). \qquad (3)$$

In fact, this formula is used as the default, since it assures better performance than (1).
As before, to accomplish training on a set of stimulus-response associations,
the enhanced learning step (3) has to be repeated for each association in the set.
Note that the order of steps now becomes important, namely the first association
is more distorted by subsequent encodings than the last one. Therefore, the whole
learning cycle should be repeated several times in order to stabilize. So we end up
with a form of iterative training. Still, the number of needed iterations (so-called epochs) is considerably smaller than in traditional "back-propagation" algorithms; according to [2] it is never greater than 20.
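A compact sketch of this iterative regime (ours; the fixed epoch count is only an illustrative choice) could look as follows:

```python
import numpy as np

def train(X, associations, epochs=20):
    """Repeat the enhanced learning step (3) over all associations.

    Each step encodes only the difference between the desired response
    and the response currently generated by (2), so a few passes over
    the training set are enough to stabilize all associations.
    """
    for _ in range(epochs):
        for S, R in associations:
            c = np.sum(np.abs(S))
            R_now = (S @ X) / c                    # current response, (2)
            X += np.outer(np.conj(S), R - R_now)   # enhanced step (3)
    return X
```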
Holographic networks also allow a special regime called training with a reduced
memory profile. When this regime is applied, the previously learned stimulus-
response associations are gradually forgotten as training progresses. Consequently,
more recently learned associations have a stronger influence on a response than older ones. The memory profile is expressed in percentages (100% corresponds to permanent memory, less than 100% to reduced memory), and it is controlled through periodical re-scaling (reduction) of the entries in the memory matrix $X$.
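The paper does not spell out the re-scaling rule, so the following sketch shows just one plausible realization, in which the memory matrix is multiplied by a constant decay factor before every new association is encoded:

```python
import numpy as np

def learn_reduced(X, S, R, profile=0.95):
    """Learning step (1) under an assumed reduced memory profile.

    Scaling X down first makes older associations fade geometrically;
    profile = 1.0 would correspond to permanent (100%) memory.
    """
    X *= profile                    # periodical reduction of the memory
    X += np.outer(np.conj(S), R)    # then the usual encoding step (1)
    return X
```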
Finally, let us note that holographic networks allow incremental training. It means that an already trained neuron can subsequently learn an additional stimulus-response association. This is not true for traditional networks, where adding a new training example usually means starting the whole training procedure from scratch. Incremental training of a holographic neuron is possible for both learning formulas (1) and (3), and for any memory profile. Again, the additional learning step will slightly distort the prior knowledge. However, this distortion is not noticeable if a reduced memory profile is used.
3. Applications

[...] and evaluates both methods on the iris flowers benchmark classification problem.
Other examples of classification with holographic networks have been described in
[10, 5]; the concrete problems considered there comprise credit scoring and neuro-
logical diagnosis.
It has been suggested in [1] that holographic networks can be applied to data
compression. Namely, a holographic neuron can be used to memorize the set of
values from a file (stimulus: the value identifier, response: the value itself). After
training, the neuron should be able to approximately reproduce any value. If the
memory matrix inside the neuron happens to be smaller than the original file, we can
speak about (lossy) data compression. This idea has been explored experimentally
in [8]. The obtained results indicate that the considered holographic compression
method works well only for very regular (smooth, redundant) files.
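As a rough, self-contained illustration of this idea (our sketch, not the method evaluated in [8]), each sample index can serve as the stimulus (the value identifier) and the sample value, mapped to a phase, as a one-element response; both the base-b digit encoding of identifiers and the value-to-phase mapping are assumptions made for the example:

```python
import numpy as np

def index_stimulus(i, n, base=7):
    """Assumed identifier encoding: write index i with n digits in the
    given base and turn every digit into a phase angle."""
    digits = np.array([(i // base**j) % base for j in range(n)])
    return np.exp(2j * np.pi * digits / base)

def compress(samples, n=16):
    """Memorize all (index, value) pairs in a single n x 1 matrix.

    Values are assumed to lie in [0, 1) so they can serve as phases.
    A single pass of the enhanced step (3) is used for simplicity.
    """
    X = np.zeros((n, 1), dtype=complex)
    for i, v in enumerate(samples):
        S = index_stimulus(i, n)
        R = np.exp(2j * np.pi * np.array([v]))
        X += np.outer(np.conj(S), R - (S @ X) / np.sum(np.abs(S)))
    return X

def lookup(X, i, n=16):
    """Approximately reproduce the i-th value from the memory matrix."""
    S = index_stimulus(i, n)
    r = (S @ X) / np.sum(np.abs(S))
    return (np.angle(r[0]) / (2 * np.pi)) % 1.0
```

A file of N values is compressed whenever the n complex matrix entries take less space than the N original values; consistent with the remark above, acceptable reproduction can be expected only for very regular files.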
We believe that holographic networks are very suitable for prediction (forecasting) problems, especially if the considered system is dominated by short-term trends.
Namely, a natural training regime for such problems is incremental training with a
low memory profile (knowledge should be constantly revised). The recommended
regime can easily be realized with holographic networks, but it is much harder to achieve with traditional network types. To illustrate this, we now present original results dealing
with currency exchange rate prediction.
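To make the recommended regime concrete, here is a schematic sketch (ours, not the code behind the experiments described below) of one-step-ahead forecasting with incremental training and a reduced memory profile; the window length, the profile value, and the linear rate-to-phase scaling are illustrative assumptions:

```python
import numpy as np

def to_phasor(values, lo, hi):
    """Assumed preprocessing: scale rates linearly into phase angles."""
    phases = 2 * np.pi * (np.asarray(values, dtype=float) - lo) / (hi - lo)
    return np.exp(1j * phases)

def forecast(rates, n=5, profile=0.9):
    """Predict each next rate from the last n rates, then immediately
    learn the actual outcome with a reduced memory profile."""
    lo, hi = min(rates), max(rates) + 1e-9   # range assumed known (a simplification)
    X = np.zeros((n, 1), dtype=complex)
    preds = []
    for t in range(n, len(rates) - 1):
        S = to_phasor(rates[t - n:t], lo, hi)      # recent history
        c = np.sum(np.abs(S))
        r = (S @ X) / c                            # predicted phasor, (2)
        preds.append(lo + (hi - lo) * ((np.angle(r[0]) / (2 * np.pi)) % 1.0))
        R = to_phasor([rates[t + 1]], lo, hi)      # actual next rate
        X *= profile                               # forget older trends
        X += np.outer(np.conj(S), R - r)           # incremental step (3)
    return preds
```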
In our experiments we used authentic data from “Zagrebačka banka” (Bank of
Zagreb) comprising the exchange rates of seven currencies (ATS, CHF, DEM, FRF,
GBP, ITL, USD) for each working day between 1st October 1992 and 1st October
1993. To eliminate unwanted effects of the domestic inflation, we chose ATS as the
reference currency and expressed the other six currencies in terms of ATS. Some
basic statistical parameters are shown in Table 1.
4. Conclusion
Holographic neural networks are in some aspects superior to traditional network
types. For instance, they are more suitable for prediction problems, thanks to the technical feasibility of incremental training with a reduced memory profile. Also,
holographic networks assure quicker convergence during training, and are easier to
use.
The most important phase in designing a holographic application is choosing
adequate data preprocessing. Therefore, it is important that a diversity of preprocessing procedures is available, so that the conflicting requirements of various applications can always be accommodated. Many procedures have already been proposed
and experimentally tested. At this moment, a more reliable mathematical analysis
of the existing methods is needed.
Holographic neural networks are still an obscure technology. There are not
many papers or books that treat or even mention this type of networks. One of the
reasons may be the reluctance of the traditional “connectionist” neurocomputing
community. However, the situation could change very soon, thanks to the software
support that is now available for holographic networks.
References
[1] AND Corporation, HNeT Neural Development System, Version 1.0, Hamilton, Ontario, 1990.
[2] AND America Ltd., HNeT Discovery Package - Version 1.3 for Windows,
Oakville, Ontario, 1993.
[3] AND America Ltd., Using the HNeT Professional Development System,
Oakville, Ontario, 1993.
[4] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, Reading, Massachusetts, 1990.
[5] R. Ho, J. G. Sutherland, I. Bruha, Neurological fuzzy diagnoses: holo-
graphic versus statistical versus neural methods, in: Frontier Decision Support
Concepts (V. L. Plantamura, B. Souček and G. Visaggio, Eds.), John Wiley and
Sons, New York, 1994, 155–169.
[7] R. Manger, M. Mauher, Using holographic neural networks for currency ex-
change rates prediction, in: Proceedings of the 16th International Conference
on Information Technology Interfaces - ITI’94 (V. Čerić and V. Hljuz-Dobrić,
Eds.), University Computing Centre, Zagreb, 1994, 143–150.
[8] R. Manger, Holographic neural networks and data compression, Informatica,
21(1997), 667–676.
[9] A. F. Siegel, Statistics and Data Analysis, John Wiley, New York, 1988.
[10] B. Souček, J. G. Sutherland, G. Visaggio, Holographic decision support
system: credit scoring based on quality metrics, in: Frontier Decision Support
Concepts (V. L. Plantamura, B. Souček and G. Visaggio, Eds.), John Wiley and
Sons, New York, 1994, 171–182.
[11] J. G. Sutherland, Holographic model of memory, learning and expression, International Journal of Neural Systems 1(1990), 256–267.
[12] J. G. Sutherland, The holographic neural method, in: Fuzzy, Holographic
and Parallel Intelligence (B. Souček, Ed.), John Wiley and Sons, New York,
1992, 30–63.