Art. 07 – Vol. 21 – No. 2 – 2011

Linear Separability in Artificial Neural Networks

Nicoleta Liviana Tudor
Department of Informatics, Petroleum-Gas University of Ploieşti, Romania

Abstract: The power and usefulness of artificial neural networks have been demonstrated in several applications, including medical diagnosis, finance, robotic control, signal and image processing, and other pattern recognition problems.

A first wave of interest in neural networks emerged after McCulloch and Pitts introduced their model of the biological neuron. These neurons were presented as conceptual components of circuits that could perform computational tasks. Rosenblatt proposed the perceptron, a more general computational model than the McCulloch–Pitts unit. The essential innovation was the introduction of numerical weights and a special interconnection pattern. The classical perceptron is in fact a neural network for the solution of certain pattern recognition problems, and it can only compute linearly separable functions.
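This limitation can be illustrated with a short sketch (my illustration, not taken from the article): the classical perceptron learning rule converges on the linearly separable AND function but can never reach zero errors on XOR, which is not linearly separable.

```python
# Sketch of the classical perceptron learning rule on Boolean functions.
# AND is linearly separable and is learned; XOR is not, so training
# never reaches zero classification errors.

def train_perceptron(samples, epochs=20, lr=1.0):
    """Train weights w and bias b with the perceptron rule.

    samples: list of ((x1, x2), target) pairs, targets in {0, 1}.
    Returns (w, b, converged).
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if y != t:
                errors += 1
                # Perceptron rule: move the hyperplane toward the
                # misclassified point.
                w[0] += lr * (t - y) * x[0]
                w[1] += lr * (t - y) * x[1]
                b += lr * (t - y)
        if errors == 0:
            return w, b, True
    return w, b, False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, _, and_ok = train_perceptron(AND)
_, _, xor_ok = train_perceptron(XOR)
print(and_ok, xor_ok)  # AND converges, XOR does not
```

By the perceptron convergence theorem, training is guaranteed to terminate on any linearly separable sample, which is why a fixed epoch budget suffices here for AND but no budget would help for XOR.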

This article presents some of the methods for testing linear separability. A single-layer perceptron neural network can be used to create a classification model when the functions are linearly separable. The complexity of linearly separating points in an input space is determined by the complexity of solving the corresponding linear optimization problem.
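The connection to linear optimization can be sketched as follows (a hedged illustration under my own formulation, not necessarily the article's method): two labelled point sets are linearly separable if and only if the linear feasibility problem y_i (w · x_i + b) ≥ 1 for all i has a solution, which a linear programming solver can decide.

```python
# Testing linear separability as a linear feasibility problem, solved
# here with scipy.optimize.linprog (zero objective, only constraints).

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Return True iff the labelled points admit a separating hyperplane.

    X: (n, d) array of points; y: (n,) array of labels in {+1, -1}.
    Decision variables are (w_1, ..., w_d, b); the objective is zero,
    so the LP succeeds exactly when the constraints are feasible.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    # y_i (w . x_i + b) >= 1  rewritten as  -y_i (x_i, 1) . (w, b) <= -1
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_labels = [-1, -1, -1, 1]   # AND: separable
xor_labels = [-1, 1, 1, -1]    # XOR: not separable
print(linearly_separable(points, and_labels))  # True
print(linearly_separable(points, xor_labels))  # False
```

The margin 1 on the right-hand side is harmless: any strictly separating hyperplane can be rescaled to satisfy it, so feasibility of this LP coincides with linear separability.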

Keywords: linear separability, neural network, perceptron, classification model, linear optimization, input space



  1. ANDERSON, JAMES. Neural models with cognitive implications. In LaBerge & Samuels, Basic Processes in Reading Perception and Comprehension Models, Hillsdale, Erlbaum, 1977, pp. 27-90.
  2. DENNING, PETER. The science of computing: Neural networks. American Scientist, no. 80, 1992, pp. 426-429.
  3. DUMITRESCU, DUMITRU; HARITON COSTIN. Reţele neuronale. Teorie şi aplicaţii [Neural Networks: Theory and Applications]. Editura Teora, Bucureşti, 1996, 460 p.
  4. FELDMAN, JEROME; DANA BALLARD. Connectionist models and their properties. Cognitive Science, no. 6, 1982, pp. 205-214.
  5. KRÖSE, BEN; PATRICK VAN DER SMAGT. An introduction to Neural Networks. University of Amsterdam, Netherlands, 1996, 135 p.
  6. MINSKY, MARVIN; SEYMOUR PAPERT. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, Mass., 1969.
  7. MOISE, ADRIAN. Reţele neuronale pentru recunoaşterea formelor [Neural Networks for Pattern Recognition]. Editura MATRIX ROM, Bucureşti, 2005, 309 p.
  8. NILSSON, NILS. Learning Machines. Morgan Kaufmann, San Mateo, CA, New Edition, 1990, pp. 37-42.
  9. ROJAS, RAÚL. Neural Networks: A Systematic Introduction. Springer-Verlag, Berlin, 1996.
  10. SCHALKOFF, ROBERT. Pattern Recognition: Statistical, Structural and Neural Approaches. John Wiley & Sons, New York, 1992.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.