IEEE Transactions on Neural Networks, Volume 5
Volume 5, Number 1, January 1994
- David B. Fogel: An introduction to simulated evolutionary optimization. 3-14
- Bruce A. Whitehead, Timothy D. Choate: Evolving space-filling curves to distribute radial basis functions over an input space. 15-23
- John Robert McDonnell, Donald E. Waagen: Evolving recurrent perceptrons for time-series modeling. 24-38
- Vittorio Maniezzo: Genetic evolution of the topology and weight distribution of neural networks. 39-53
- Peter J. Angeline, Gregory M. Saunders, Jordan B. Pollack: An evolutionary algorithm that constructs recurrent neural networks. 54-65
- Volker Nissen: Solving the quadratic assignment problem with clues from nature. 66-72
- Anthony V. Sebald, Jennifer Schlenzig: Minimax design of neural net controllers for highly uncertain plants. 73-82
- Anoop K. Bhattacharjya, Badrinath Roysam: Joint solution of low, intermediate, and high-level vision tasks by evolutionary optimization: Application to computer vision at low SNR. 83-95
- Günter Rudolph: Convergence analysis of canonical genetic algorithms. 96-101
- Xiaofeng Qi, Francesco Palmieri: Theoretical analysis of evolutionary algorithms with an infinite population size in continuous space. Part I: Basic properties of selection and mutation. 102-119
- Xiaofeng Qi, Francesco Palmieri: Theoretical analysis of evolutionary algorithms with an infinite population size in continuous space. Part II: Analysis of the diversification role of crossover. 120-129
- Wirt Atmar: Notes on the simulation of evolution. 130-147
Volume 5, Number 2, March 1994
- C. Lee Giles, Gary M. Kuhn, Ronald J. Williams: Dynamic recurrent neural networks: Theory and applications. 153-156
- Yoshua Bengio, Patrice Y. Simard, Paolo Frasconi: Learning long-term dependencies with gradient descent is difficult. 157-166
- Monica Bianchini, Marco Gori, Marco Maggini: On the problem of local minima in recurrent neural networks. 167-177
- Olivier Nerrand, Pierre Roussel-Ragot, Dominique Urbani, Léon Personnaz, Gérard Dreyfus: Training recurrent neural networks: why and how? An illustration in dynamical process modeling. 178-184
- Oluseyi Olurotimi: Recurrent neural network training with feedforward complexity. 185-197
- Steve W. Piche: Steepest descent algorithms for neural network controllers and filters. 198-212
- B. Srinivasan, U. R. Prasad, N. J. Rao: Back propagation through adjoints for the identification of nonlinear dynamic systems using recurrent neural models. 213-228
- Ah Chung Tsoi, Andrew D. Back: Locally recurrent globally feedforward networks: a critical review of architectures. 229-239
- Jerome T. Connor, R. Douglas Martin, Les E. Atlas: Recurrent neural networks and robust time series prediction. 240-254
- Alexander G. Parlos, Kil To Chong, Amir F. Atiya: Application of the recurrent multilayer perceptron in modeling complex process dynamics. 255-266
- George Kechriotis, Evangelos Zervas, Elias S. Manolakos: Using recurrent neural networks for adaptive communication channel equalization. 267-278
- Gintaras V. Puskorius, Lee A. Feldkamp: Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks. 279-297
- Anthony J. Robinson: An application of recurrent nets to phone probability estimation. 298-305
- P. S. Sastry, G. Santharam, K. P. Unnikrishnan: Memory neuron networks for identification and control of dynamical systems. 306-319
- Zheng Zeng, Rodney M. Goodman, Padhraic Smyth: Discrete recurrent neural networks for grammatical inference. 320-330
- José C. Príncipe, Jyh-Ming Kuo, Samel Çelebi: An analysis of the gamma memory in dynamic neural networks. 331-337
Volume 5, Number 3, May 1994
- Jenq-Neng Hwang, Shyh-Rong Lay, Martin Mächler, R. Douglas Martin, Jim Schimert: Regression modeling in back-propagation and projection pursuit learning. 342-353
- H. Schiff, P. C. Boarino, Dante Del Corso, Enrica Filippi: A hardware implementation of a biological neural system for target localization. 354-362
- Andrew R. Webb: Functional approximation by feed-forward networks: a least-squares approach to generalization. 363-371
- George E. Fulcher, Donald E. Brown: A polynomial network for predicting temperature distributions. 372-379
- Konstantinos Koutroumbas, Nicholas Kalouptsidis: Qualitative analysis of the parallel and asynchronous modes of the Hamming network. 380-391
- Anastasios Delopoulos, Andreas Tirakis, Stefanos D. Kollias: Invariant image classification using triple-correlation-based neural networks. 392-408
- Sukhan Lee, Rhee Man Kil: Inverse mapping of continuous functions using local and global information. 409-423
- Coe F. Miles, C. David Rogers: The microcircuit associative memory: a biologically motivated memory architecture. 424-435
- Thomas Parisini, Riccardo Zoppoli: Neural networks for feedback feedforward nonlinear control systems. 436-449
- Patrick Thiran, Vincent Peiris, Pascal Heim, Bertrand Hochet: Quantization effects in digitally behaving circuit implementations of Kohonen networks. 450-458
- Alessandro Mortara, Eric A. Vittoz: A communication architecture tailored for analog VLSI artificial neural networks: intrinsic performance and limitations. 459-466
- David S. Chen, Ramesh C. Jain: A robust backpropagation learning algorithm for function approximation. 467-479
- Wray L. Buntine, Andreas S. Weigend: Computing second derivatives in feed-forward networks: a review. 480-488
- M. Erkan Savran, Ömer Morgül: On the design of dynamic associative neural memories. 489-492
- Alexander G. Parlos, Benito Fernández, Amir F. Atiya, Jayakumar Muthusami, Wei Kang Tsai: An accelerated learning algorithm for multilayer perceptron networks. 493-497
- Marc M. Van Hulle, Dominique Martinez: On a novel unsupervised competitive learning algorithm for scalar quantization. 498-501
- Granger G. Sutton III, James A. Reggia: Effects of normalization constraints on competitive learning. 502-504
- Vijay V. Phansalkar, P. S. Sastry: Analysis of the back-propagation algorithm with momentum. 505-506
- Edward M. Corwin, Antonette M. Logar, William J. B. Oldham: An iterative method for training multilayer networks with threshold functions. 507-508
- Sang-Hoon Oh, Youngjik Lee: Effect of nonlinear transformations on correlation between weighted sums in multilayer perceptrons. 508-510
- Mark W. Goudreau, C. Lee Giles, Srimat T. Chakradhar, Dong Chen: First-order versus second-order single-layer recurrent neural networks. 511-513
- Dimitris C. Psichogios, Lyle H. Ungar: SVD-NET: an algorithm that automatically selects network structure. 513-515
- Cheng-Chin Chiang, Hsin-Chia Fu: Using multithreshold quadratic sigmoidal neurons to improve classification capability of multilayer perceptrons. 516-519
- Dharmendra S. Modha, Yeshaiahu Fainman: A learning law for density estimation. 519-523
- C. Y. Lee, Jia-Shung Wang, Richard C. T. Lee: Characteristics of the Hopfield associative memory utilizing isomorphism relations. 523-526
Volume 5, Number 4, July 1994
- Roberto Battiti: Using mutual information for selecting features in supervised neural net learning. 537-550
- Natan Peterfreund, Yoram Baram: Second-order bounds on the domain of attraction and the rate of convergence of nonlinear dynamical systems and neural networks. 551-560
- Doo-Il Choi, Sang-Hui Park: Self-creating and organizing neural networks. 561-575
- Heekuck Oh, Suresh C. Kothari: Adaptation of the relaxation method for learning in bidirectional associative memory. 576-583
- Simon R. Jones, Karl M. Sammut, Jamie Hunter: Learning in linear systolic neural network engines: analysis and implementation. 584-593
- V. T. Sunil Elanayar, Yung C. Shin: Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems. 594-603
- Ron Sun: A neural network model of causality. 604-611
- Pierre Baldi, Amir F. Atiya: How delays affect neural dynamics and learning. 612-621
- Pei-Yih Ting, Ronald A. Iltis: Diffusion network architectures for implementation of Gibbs samplers with applications to assignment problems. 622-638
- Robert J. T. Morris, Behrokh Samadi: Neural network control of communications systems. 639-650
- Chularat Khunasaraphan, Kanonkluk Vanapipat, Chidchanok Lursinsap: Weight shifting techniques for self-recovery neural networks. 651-658
- Mark D. Hanes, Stanley C. Ahalt, Ashok K. Krishnamurthy: Acoustic-to-phonetic mapping using recurrent neural networks. 659-662
- C. Song, K. P. Roenker: Novel heterostructure device for electronic pulse-mode neural circuits. 663-665
- Alexander G. Parlos: Fuzzy logic and neural networks: clips from the field. 666-667
- Rogelio Palomera-Garcia: Information processing with fuzzy logic - Piero Bonissone. 667-668
Volume 5, Number 5, September 1994
- Weiyong Yan, Uwe Helmke, John B. Moore: Global analysis of Oja's flow for neural networks. 674-683
- Konstantinos I. Diamantaras, Sun-Yuan Kung: Multilayer neural networks for reduced-rank approximation. 684-697
- Adam Kowalczyk, Herman L. Ferrá: Developing higher-order networks with empirically selected units. 698-711
- Thomas L. Hemminger, Yoh-Han Pao: Detection and classification of underwater acoustic transients using neural networks. 712-718
- Hua Yang, Tharam S. Dillon: Exponential stability and oscillation of Hopfield graded response neural network. 719-729
- Walter E. Lillo, David C. Miller, Stefen Hui, Stanislaw H. Zak: Synthesis of Brain-State-in-a-Box (BSB) based associative memories. 730-737
- Nico Weymaere, Jean-Pierre Martens: On the initialization and optimization of multilayer perceptrons. 738-751
- Jack S. N. Jean, Jin Wang: Weight smoothing to improve network generalization. 752-763
- Roy L. Streit, Tod Luginbuhl: Maximum likelihood training of probabilistic neural networks. 764-783
- Paul W. Hollis, John J. Paulos: A neural network learning algorithm tailored for VLSI implementation. 784-791
- Alan F. Murray, Peter J. Edwards: Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. 792-802
- James Ting-Ho Lo: Synthetic approach to optimal filtering. 803-811
- Hoon Kang: Multilayer associative neural networks (MANN's): storage capacity versus perfect recall. 812-822
- Takanori Shibata, Toshio Fukuda: Hierarchical intelligent control for robotic motion. 823-832
- Hyuck M. Kwon, Lawrence T. Schaefer: Neural network applications for jamming state information generator. 833-837
- Jui-Cheng Yen, Fu-Juay Chang, Shyang Chang: A new winners-take-all architecture in artificial neural networks. 838-843
- Leonard G. C. Hamey: Comments on "Can backpropagation error surface not have local minima?". 844-845
- Binfan Liu, Jennie Si: The best approximation to C2 functions and its error bounds using regular-center Gaussian networks. 845-847
- C. Lee Giles, Christian W. Omlin: Pruning recurrent neural networks for improved generalization performance. 848-851
Volume 5, Number 6, November 1994
- Noboru Murata, Shuji Yoshizawa, Shun-ichi Amari: Network information criterion - determining the number of hidden units for an artificial neural network model. 865-872
- Gregory L. Heileman, Michael Georgiopoulos, Chaouki T. Abdallah: A dynamical adaptive resonance architecture. 873-889
- Arvind Srinivasan, Celal Batur: Hopfield/ART-1 neural network-based fault detection and isolation. 890-899
- Shih-Lin Hung, Hojjat Adeli: A parallel genetic/neural network learning algorithm for MIMD shared memory machines. 900-909
- Andrzej Cichocki, Rolf Unbehauen: Simplified neural networks for solving linear least squares and total least squares problems in real time. 910-923
- Stefen Hui, Stanislaw H. Zak: The Widrow-Hoff algorithm for McCulloch-Pitts type neurons. 924-929
- Dhananjay S. Phatak, Israel Koren: Connectivity and performance tradeoffs in the cascade correlation learning architecture. 930-935
- Chong-Ho Choi, Jin-Young Choi: Constructive neural networks with piecewise interpolation capabilities for function approximations. 936-944
- Hugues Bersini, Marco Saerens, Luis Gonzalez Sotelino: Hopfield net generation, encoding and classification of temporal trajectories. 945-953
- Liang Jin, Peter N. Nikiforuk, Madan M. Gupta: Absolute stability conditions for discrete-time recurrent neural networks. 954-964
- Terence D. Sanger: Optimal unsupervised motor learning for dimensionality reduction of nonlinear control systems. 965-973
- Yoram Baram: Memorizing binary vector sequences by a sparsely encoded network. 974-981
- Khalid A. Al-Mashouq, Irving S. Reed: The use of neural nets to combine equalization with decoding for severe intersymbol interference channels. 982-988
- Martin T. Hagan, Mohammad B. Menhaj: Training feedforward networks with the Marquardt algorithm. 989-993
- Etienne Barnard: A model for nonpolynomial decrease in error rate with increasing sample size. 994-997
- Kondalsamy Gopalsamy, Xue-Zhong He: Delay-independent stability in bidirectional associative memory networks. 998-1002
- Hans Christian Andersen, Fong Chwee Teng, Ah Chung Tsoi: Single net indirect learning architecture. 1003-1005