Stereo vision is a passive method for recovering the depth information of a scene, which is lost when a point in the 3D scene is projected onto the 2D image plane. In stereo vision, two or more views of a scene are used, and depth can be reconstructed from the different image positions onto which a physical point in the 3D scene is projected. The displacement between the corresponding positions in the image planes is called disparity.

The central problem in stereo vision, known as the correspondence problem, is to find corresponding points or features in the images. This task can be ambiguous when the images contain several similar structures or periodic elements. Furthermore, there may be occluded regions in the scene that are visible to only one camera; in these regions the correspondence problem has no solution. Interocular differences such as perspective distortions, differences in illumination, and camera noise make the correspondence problem even harder to solve.

The main focus of this work is a new stereo matching algorithm in which the matching of occluded areas is suppressed by a self-organizing process. In the first step, the images are filtered by a set of oriented Gabor filters. In the second step, a complex-valued correlation-based similarity measure, applied to the responses of the Gabor filters, is used to initialize a self-organizing process. In this self-organizing network, which is described by coupled, non-linear evolution equations, the continuity and uniqueness constraints are established. Occlusions are detected implicitly, without a computationally expensive bidirectional matching strategy.
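To make the disparity-to-depth relation concrete, the following is a minimal sketch for a rectified stereo pair with the standard pinhole model, where depth is Z = f·B/d. The focal length and baseline values are illustrative assumptions, not parameters from this work.

```python
import numpy as np

# Hypothetical pinhole-camera parameters for a rectified stereo pair
# (the names and values are illustrative only).
focal_length_px = 700.0   # focal length in pixels
baseline_m = 0.12         # distance between the two camera centres in metres

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Recover depth Z = f * B / d for each pixel of a disparity map.

    Pixels with zero (i.e. unknown or infinitely distant) disparity are
    returned as np.inf.
    """
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_length_px * baseline_m / d, np.inf)

# With these parameters, a 10-pixel disparity corresponds to a depth of
# 700 * 0.12 / 10 = 8.4 m; larger disparities mean closer points.
print(depth_from_disparity(np.array([10.0, 20.0, 0.0])))
```

The inverse relation between disparity and depth is why matching errors of a single pixel matter most for distant scene points.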
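A complex-valued correlation of Gabor responses can be sketched as follows. This is a simplified 1-D illustration of the general idea, not the paper's exact filter bank or similarity measure: a Gaussian-windowed complex exponential is convolved with each view, and the normalised magnitude of the complex inner product between response patches scores candidate disparities.

```python
import numpy as np

def gabor_kernel_1d(freq: float, sigma: float, size: int = 21) -> np.ndarray:
    """Complex 1-D Gabor kernel: a Gaussian window times a complex exponential."""
    x = np.arange(size) - size // 2
    return np.exp(-(x ** 2) / (2 * sigma ** 2)) * np.exp(1j * 2 * np.pi * freq * x)

def complex_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised correlation magnitude of two complex response vectors.

    It equals 1 when the responses differ only by a constant phase factor,
    which makes the score robust to uniform brightness changes.
    """
    den = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.abs(np.vdot(a, b)) / den) if den > 0 else 0.0

# Toy example: the "right" signal is the "left" signal shifted by 5 samples.
rng = np.random.default_rng(0)
left = rng.standard_normal(128)
true_disp = 5
right = np.roll(left, -true_disp)

k = gabor_kernel_1d(freq=0.1, sigma=3.0)
resp_l = np.convolve(left, k, mode="same")
resp_r = np.convolve(right, k, mode="same")

x = 64  # pixel in the left view, away from the borders
patch_l = resp_l[x - 5 : x + 6]
scores = [complex_correlation(patch_l, resp_r[x - d - 5 : x - d + 6])
          for d in range(0, 11)]
best = int(np.argmax(scores))
print(best)  # recovers the true disparity of 5 on this toy signal
```

Scores of this kind, computed for every pixel and candidate disparity, are the natural initial state for the subsequent self-organizing stage.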
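The role of the continuity and uniqueness constraints in a cooperative network can be illustrated with a sketch in the spirit of Marr-Poggio style relaxation; this is an assumed stand-in, not the coupled evolution equations of this work. Support at neighbouring pixels with the same disparity is excitatory (continuity), while rival disparities at the same pixel inhibit each other (uniqueness), so pixels with weak support everywhere, as in occluded regions, decay towards zero.

```python
import numpy as np

def cooperative_update(C: np.ndarray, iters: int = 10,
                       excite: float = 0.6, inhibit: float = 1.0) -> np.ndarray:
    """Cooperative stereo relaxation on a 1-D match-support array.

    C has shape (width, n_disparities): C[x, d] is the support for
    disparity d at pixel x.  Each iteration adds excitation from the two
    spatial neighbours at the same disparity (continuity) and subtracts
    the summed support of competing disparities at the same pixel
    (uniqueness), then clips to [0, 1].
    """
    C = C.copy()
    for _ in range(iters):
        neigh = 0.5 * (np.roll(C, 1, axis=0) + np.roll(C, -1, axis=0))
        rivals = C.sum(axis=1, keepdims=True) - C
        C = np.clip(C + excite * neigh - inhibit * rivals, 0.0, 1.0)
    return C

# Toy example: disparity 1 starts with slightly stronger support everywhere
# and wins at every pixel after a few iterations.
C0 = np.full((8, 3), 0.3)
C0[:, 1] = 0.5
C = cooperative_update(C0)
print(np.argmax(C, axis=1))  # disparity 1 selected at every pixel
```

Winner-take-all dynamics of this kind enforce at most one disparity per pixel without an explicit left-right consistency check, which is the intuition behind detecting occlusions implicitly rather than through bidirectional matching.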