Romanian Journal of Information Technology and Automatic Control / Vol. 21, No. 2, 2011
Linear Separability in Artificial Neural Networks
Nicoleta Liviana TUDOR
The power and usefulness of artificial neural networks have been demonstrated in applications such as medical diagnosis, finance, robotic control, signal and image processing, and other pattern recognition problems. A first wave of interest in neural networks emerged after McCulloch and Pitts introduced their simplified model of the biological neuron. These model neurons were presented as conceptual components of circuits that could perform computational tasks. Rosenblatt proposed the perceptron, a more general computational model than the McCulloch–Pitts unit; its essential innovation was the introduction of numerical weights and a special interconnection pattern. The classical perceptron is in fact a neural network for solving certain pattern recognition problems, and it can compute only linearly separable functions. This article presents some of the methods for testing linear separability. A single-layer perceptron network can be used to create a classification model when the target function is linearly separable. The complexity of linearly separating points in an input space is determined by the complexity of solving the associated linear optimization problem.
Keywords: linear separability, neural network, perceptron, classification model, linear optimization, input space
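As a rough illustration of the idea outlined in the abstract, the sketch below trains a single-layer perceptron on a linearly separable toy problem (the logical AND function). The learning rate, iteration limit, and data set are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, max_epochs=100):
    """Train a single-layer perceptron; y must contain labels in {0, 1}."""
    # One weight per input component, plus a separate bias term.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, target in zip(X, y):
            # Threshold activation: output 1 if the weighted sum is non-negative.
            output = 1 if np.dot(w, xi) + b >= 0 else 0
            update = lr * (target - output)
            if update != 0:
                w += update * xi
                b += update
                errors += 1
        # No misclassifications in a full pass: the data have been separated.
        if errors == 0:
            return w, b, True
    return w, b, False  # no convergence: the data may not be linearly separable

# Linearly separable example: logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b, separable = train_perceptron(X, y)
print("weights:", w, "bias:", b, "converged:", separable)
```

The perceptron convergence theorem guarantees termination only when the classes are linearly separable, so the same question can also be posed as a linear program, which is the optimization view mentioned in the abstract.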
CITE THIS PAPER AS:
Nicoleta Liviana TUDOR, "Linear Separability in Artificial Neural Networks", Romanian Journal of Information Technology and Automatic Control, ISSN 1220-1758, vol. 21(2), pp. 71-80, 2011.