The Neural-Network Analysis & Its Applications. Data Filters
Saint-Petersburg State University, JASS 2006
About me
Name: Alexey Minin
Studying at: Saint-Petersburg State University
Current semester: 7th
Fields of interest: neural nets, data filters for optics (holography), computational physics, econophysics
Contents:
What a neural net is & its applications
Neural-net analysis
Self-organizing Kohonen maps
Data filters
Obtained results
Applications:
Image recognition
Processing of noisy signals
Completion of images (pattern completion)
Associative search
Classification
Scheduling
Optimization
Forecasting
Diagnostics
Risk prediction
PARADIGMS of neurocomputing
Connectionism
Localness and parallelism of calculations
Training based on data (instead of programming)
Universality of training algorithms
The formal neuron with inputs $x_1, \dots, x_n$:
$$y = f(u), \qquad u = w_0 + \sum_i w_i x_i$$
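A minimal sketch of a formal neuron in Python (NumPy assumed; the logistic function is one common choice of the activation f):

```python
import numpy as np

def formal_neuron(x, w, w0):
    """Formal neuron: y = f(u), u = w0 + sum_i w_i x_i."""
    u = w0 + np.dot(w, x)             # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-u))   # logistic activation f(u)

# Example with three inputs
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(formal_neuron(x, w, w0=0.05))
```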
Global communications
Formal neurons
Layers
Parallelism of calculations
Comparison of ANN & BNN

Brain (BNN): switching frequency ~100 Hz; signal propagation speed ~100 m/s; N ~ 10^11 neurons
PC (IBM, ANN): clock frequency ~10^9 Hz; signal propagation speed ~3*10^8 m/s; N ~ 10^9 elements
Training of a network

By architecture:
RECURRENT, with feedback (Elman-Jordan)
LAYERED, without feedback

By type of training:
with a teacher (supervised): $E(w) = E\{x, y, y(x, w)\}$
without a teacher (unsupervised): $E(w) = E\{x, y(x, w)\}$

In the unsupervised case the network itself must discover the hidden regularities in the data set. Redundancy of the data makes compression of the information possible, and the network can learn to find the most compact representation of such data, i.e. to perform optimal coding of the given kind of input information.
Hebb, 1949

The weight change is proportional to the product of the input and the output activity. Vector representation:
$$\Delta \mathbf{w} = \eta\, y\, \mathbf{x}$$
This is gradient descent, $\Delta \mathbf{w} = -\eta\, \partial E / \partial \mathbf{w}$, on the energy
$$E(\mathbf{w}, \mathbf{x}) = -\tfrac{1}{2}\,(\mathbf{w} \cdot \mathbf{x})^2 = -\tfrac{1}{2}\, y^2$$
The modified (Oja) rule adds weight decay, which keeps the weights normalized, $\|\mathbf{w}\| \to 1$:
$$\Delta \mathbf{w} = \eta\, y\, (\mathbf{x} - y\, \mathbf{w})$$
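A minimal sketch of Hebbian learning with Oja's normalization in Python (NumPy assumed); with enough samples w converges toward the first principal component of the inputs, with unit norm:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 2-D inputs stretched along the first axis
X = rng.normal(size=(2000, 2)) * np.array([2.0, 0.5])

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = np.dot(w, x)             # neuron output y = w . x
    w += eta * y * (x - y * w)   # Oja's rule: Hebb term + decay

print(w, np.linalg.norm(w))      # ||w|| -> 1, w along the principal axis
```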
Basis algorithm (competitive learning):
$$\Delta \mathbf{w}_i = \eta\, y_i \Big(\mathbf{x} - \sum_k y_k \mathbf{w}_k\Big)$$
Winner among the neurons for inputs $x_1, \dots, x_d$:
$$i^*:\ \mathbf{w}_{i^*} \cdot \mathbf{x} \ge \mathbf{w}_i \cdot \mathbf{x} \quad \forall i$$
i.e. the winner is the neuron giving the greatest response to the given input stimulus:
$$y_{i^*} = 1, \qquad y_i = 0 \ \ \forall i \ne i^*$$
Training of the winner only:
$$\Delta \mathbf{w}_{i^*} = \eta\, (\mathbf{x} - \mathbf{w}_{i^*})$$
If $\|\mathbf{w}_i\| = 1$ for all $i$, the winner is equivalently the neuron whose weight vector is closest to the input:
$$i^*:\ \|\mathbf{w}_{i^*} - \mathbf{x}\| = \min_i \|\mathbf{w}_i - \mathbf{x}\|$$
Kohonen's modified training rule also updates the neighborhood of the winner:
$$\mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \eta\, \Lambda(|i - i^*|, t)\, \big(\mathbf{x}(t) - \mathbf{w}_i(t)\big)$$
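A minimal 1-D Kohonen map sketch in Python (NumPy assumed), with a Gaussian neighborhood whose width and learning rate decay over time:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(2000, 2))   # training inputs
n = 20                                  # neurons on a 1-D chain
W = rng.uniform(0, 1, size=(n, 2))      # weight vectors w_i
idx = np.arange(n)

T = len(X)
for t, x in enumerate(X):
    winner = np.argmin(np.linalg.norm(W - x, axis=1))    # i*: closest to x
    eta = 0.5 * (1 - t / T)                              # decaying rate
    sigma = 1 + (n / 2) * (1 - t / T)                    # shrinking radius
    L = np.exp(-((idx - winner) ** 2) / (2 * sigma**2))  # neighborhood
    W += eta * L[:, None] * (x - W)                      # Kohonen update

print(W[:5])   # neighboring neurons end up with similar weights
```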
[Figures: time series Ln Y(t), 1993-2023, with a TEST segment, and the PREDICTION extended to 2028-2038]
DATA FILTERS
Custom filters (e.g. Fourier filter)
Adaptive filters (e.g. Kalman filter)
Empirical mode decomposition
Hölder exponent
$$y(n) = b(1)\,x(n) + b(2)\,x(n-1) + \dots + b(nb{+}1)\,x(n-nb) - a(2)\,y(n-1) - \dots - a(na{+}1)\,y(n-na)$$
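This is the difference equation implemented by SciPy's signal.lfilter (MATLAB's filter uses the same b, a convention); a small sketch with an assumed low-pass design:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Assumed test signal: a 5 Hz tone plus noise, sampled at 100 Hz
t = np.arange(0, 2, 0.01)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(len(t))

b, a = butter(4, 0.2)   # 4th-order low-pass, cutoff at 0.2 * Nyquist
y = lfilter(b, a, x)    # one-pass filtering: smooth, but with phase lag
```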
Adaptive filters
In what follows, keep in mind that we are going to make forecasts; that is why we need filters that do not change the phase of the signal.
[Block diagram: direct-form IIR filter — delay elements Z^-1; feed-forward taps b(2), b(3), ..., b(nb+1) acting on x(n-1), ..., x(n-nb); feedback taps -a(2), -a(3), ..., -a(na+1) acting on y(n-1), y(n-2), ..., y(n-na); all summed into y(n)]
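One standard way to obtain a zero-phase result is to run the filter forward and then backward over the data; a sketch using SciPy's filtfilt (same assumed signal and design as above):

```python
import numpy as np
from scipy.signal import butter, filtfilt

t = np.arange(0, 2, 0.01)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(len(t))

b, a = butter(4, 0.2)
# Forward-backward filtering: the phase shift of the forward pass is
# cancelled by the backward pass, so the output is not delayed
y = filtfilt(b, a, x)
```

Note that forward-backward filtering is non-causal; that is acceptable here because we filter the historical data before feeding it to the predictor.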
Adaptive filters
Let's try to predict the next value using the zero-phase filter, given the historical prices.
I used a perceptron with 3 hidden layers, a logistic activation function, and the rotation training algorithm; training took about 20 minutes.
Adaptive filters: Kalman filter
[Block diagram: scalar Kalman filter — gain K(n), delay Z^-1, prediction a_c x̂(n-1) corrected by the new measurement x(n)]
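A minimal scalar Kalman filter sketch in Python matching the diagram's predictor-corrector structure (a_c is the state-transition coefficient; q and r are assumed process- and measurement-noise variances):

```python
import numpy as np

def kalman_1d(z, a_c=1.0, q=1e-4, r=1e-2):
    """x_hat(n) = a_c*x_hat(n-1) + K(n)*(z(n) - a_c*x_hat(n-1))."""
    x_hat, p = z[0], 1.0
    out = np.empty_like(z)
    for n, zn in enumerate(z):
        x_pred = a_c * x_hat                 # predict from the previous state
        p_pred = a_c * p * a_c + q           # predicted error variance
        k = p_pred / (p_pred + r)            # Kalman gain K(n)
        x_hat = x_pred + k * (zn - x_pred)   # correct with the measurement
        p = (1 - k) * p_pred
        out[n] = x_hat
    return out

z = np.cumsum(0.05 * np.random.randn(200)) + 0.3 * np.random.randn(200)
print(kalman_1d(z)[:5])
```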
Adaptive filters
Let's use the Kalman filter as an error estimator for the forecast of the zero-phase-filtered data.
Empirical mode decomposition
Test signal: tone + chirp.
[Figures: EMD sifting of the tone + chirp signal — IMF 1, iterations 0-8; IMF 2, iterations 0-5; final decomposition into imf1, ..., imf6 and the residue]
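A minimal sketch of the sifting procedure behind these figures, in Python (SciPy's CubicSpline assumed for the envelopes; real implementations add boundary handling and a proper stopping criterion):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting iteration: subtract the mean of the two envelopes."""
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    upper = CubicSpline(t[maxima], x[maxima])(t)   # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)   # lower envelope
    return x - (upper + lower) / 2                 # candidate IMF

# Tone + chirp test signal, as in the figures above
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 8 * t) + np.sin(2 * np.pi * (20 * t + 30 * t**2))

h = x.copy()
for _ in range(8):       # repeat sifting until h is (nearly) an IMF
    h = sift_once(h, t)
residue = x - h          # subtract the IMF and repeat on the residue
```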
Hölder exponent
The main idea is as follows. Consider a function f(t) defined on a domain D_f. Hölder showed that
$$|f(t + \Delta t) - f(t)| \le \mathrm{const} \cdot (\Delta t)^{\alpha(t)}, \qquad \alpha(t) \in [0, 1]$$
The local exponent α(t) characterizes the regularity of the signal at the point t: the smaller α, the rougher the signal.
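A rough numerical sketch (an assumption, not from the slides) of estimating a global Hölder exponent by regressing the log of the mean increments against log Δt:

```python
import numpy as np

def holder_exponent(x):
    """Fit mean |x(t+dt) - x(t)| ~ const * dt**alpha on a log-log scale."""
    dts = np.array([1, 2, 4, 8, 16, 32])
    incs = [np.mean(np.abs(x[dt:] - x[:-dt])) for dt in dts]
    alpha, _ = np.polyfit(np.log(dts), np.log(incs), 1)
    return alpha

# Brownian-like path: the estimate should be close to 0.5
x = np.cumsum(np.random.randn(10000))
print(holder_exponent(x))
```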
Results
Thank You!
Any QUESTIONS?
SUGGESTIONS?
IDEAS?
Software I'm using:
1) MatLab
2) NeuroShell
3) FracLab
4) Statistica
5) C++ Builder