Overtraining Archives

Discovered by accident

Some discoveries are made by accident. The wrong road brought a beautiful view. A random book from the library gave a great new insight. A procedure was suddenly understood in a discussion with colleagues during a poster session. In a physical experiment, a failure to control the circumstances revealed a surprising phenomenon. Children playing with…

If we want to learn a new concept we may ask for a definition. That may work well in mathematics, but in real life it is often better to start from examples. Let us, for instance, try to understand what despair means. The dictionary tells us that it means ‘loss of hope’. This is just…

Aristotle and the ugly duckling theorem

We have already discussed several times the significance of understanding the Platonic and Aristotelian ways of gaining knowledge. This understanding can be of great help to researchers in the field of pattern recognition in appreciating the contributions of others, in discussions with colleagues and in supervising students. This may hold for science in general, but it…

Regularization and invariants

Regularization is frequently used in statistics and machine learning to stabilize sensitive procedures when the available data are insufficient. It will be argued here that it is of particular interest in pattern recognition applications when it can be related to invariants of the specific problem at hand. It is thereby a means to incorporate prior knowledge…
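
To make this concrete, consider the simplest situation in which regularization is needed: with fewer training objects than features the sample covariance matrix is singular, and every procedure that inverts it breaks down. The sketch below is our illustration, not taken from the post; the toy sizes and the shrinkage weight lam are assumed values. It applies the standard ridge-style remedy of shrinking the estimate towards the identity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy sizes: 20 training objects in a 25-dimensional
    # feature space, so the sample covariance matrix is singular.
    n, d = 20, 25
    X = rng.normal(size=(n, d))
    S = np.cov(X, rowvar=False)

    lam = 0.1                                  # assumed shrinkage weight
    S_reg = (1 - lam) * S + lam * np.eye(d)    # shrink towards the identity

    print(np.linalg.cond(S))      # astronomical: inverting S is hopeless
    print(np.linalg.cond(S_reg))  # modest: inversion is now stable

In a regularized discriminant classifier S_reg would take the place of S. The choice of lam, and of what to shrink towards, is exactly where prior knowledge such as problem invariants can enter.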

Recognition, belief or knowledge

Recognition systems have to be trained, so an expert is needed to act as a teacher. He has to know what is what. But … does he really know, or does he just believe that he knows? Or does he know that he just believes? And does he know how good his belief is? Nils Nilsson,…

Is the neural network model good for pattern recognition? Or is it too complex, too vague and too clumsy to be of any use in building applications or in developing understanding? The relation between the pattern recognition community and these questions has always been very sensitive. Its history is also interesting for observing how science may…

Peaking summarized

Pattern recognition learns from examples, and for that generalization is needed. Generalization is only possible if the objects, or at least the differences between the pattern classes, have a finite complexity. That is what peaking teaches us. We will go once more through the steps. (See also our previous discussions on peaking, dimensionality problems and Hughes’ phenomenon.)…

Trunk’s example of the peaking phenomenon

In 1979, G.V. Trunk published a very clear and simple example of the peaking phenomenon. It has been cited many times to explain the existence of peaking. Here we summarize and discuss it for those who want a better grip on the peaking problem. The paper presents an extreme example. Its value…
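
For readers who want to reproduce the effect, the following Monte-Carlo sketch follows Trunk's construction: two unit-variance Gaussian classes with means +mu and -mu, where mu_i = 1/sqrt(i), classified by the nearest-mean rule with the means estimated from the training set. The sample sizes and repetition counts below are our assumptions, not Trunk's.

    import numpy as np

    rng = np.random.default_rng(1)

    def trunk_error(d, n_train=20, n_test=1000, reps=25):
        # Average test error of the nearest-mean classifier with
        # estimated means on Trunk's problem in d dimensions.
        mu = 1.0 / np.sqrt(np.arange(1, d + 1))   # class means at +mu, -mu
        err = 0.0
        for _ in range(reps):
            X1 = rng.normal(size=(n_train, d)) + mu   # training, class 1
            X2 = rng.normal(size=(n_train, d)) - mu   # training, class 2
            m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
            w = m1 - m2                     # estimated mean difference
            b = -0.5 * w @ (m1 + m2)
            # Fresh test objects from class 1; the problem is symmetric.
            T = rng.normal(size=(n_test, d)) + mu
            err += np.mean(T @ w + b < 0)
        return err / reps

    for d in (1, 2, 5, 10, 25, 100, 500):
        print(d, round(float(trunk_error(d)), 3))

With the true means the error decreases monotonically as dimensions are added; with means estimated from a fixed training set it first decreases and then climbs back towards 0.5. That turning point is the peak.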

The curse of dimensionality

Imagine a two-class problem represented by 100 training objects in a 100-dimensional feature (vector) space. If the objects are in general position (not lying, by accident, in a low-dimensional subspace), they still fit perfectly in a 99-dimensional subspace: a ‘plane’, formally a hyperplane, in the 100-dimensional feature space. We will argue that this…
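
The geometric claim is easy to check numerically. In the sketch below (our illustration, on assumed random data) the 100 centered points have rank 99, and since the 100 x 100 data matrix is almost surely invertible, any two-class labelling can be fitted exactly by a hyperplane, giving zero apparent error whatever the labels are.

    import numpy as np

    rng = np.random.default_rng(2)

    # 100 objects in a 100-dimensional feature space, in general position.
    n, d = 100, 100
    X = rng.normal(size=(n, d))

    # n points always lie in an affine subspace of dimension n-1 = 99.
    print(np.linalg.matrix_rank(X - X.mean(axis=0)))   # 99

    # Any labelling is linearly separable: solve X w = y exactly.
    y = np.where(rng.random(n) < 0.5, -1.0, 1.0)       # arbitrary labels
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.all(np.sign(X @ w) == y))                 # True: zero apparent error

Zero error on the training set evidently says nothing about how such a hyperplane generalizes, which is the trap hinted at here.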

Hughes phenomenon

The peaking paradox was heavily discussed in pattern recognition after Hughes published a general mathematical analysis of the phenomenon in 1968. It puzzled researchers for at least a decade. Peaking is a real-world phenomenon and has been observed many times. Although the explanation by Hughes seemed general and convincing,…
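
Hughes analysed the mean recognition accuracy of a plug-in classifier on a discrete measurement with a finite number of cells, averaged over all possible ‘environments’. The Monte-Carlo sketch below imitates that setting; the uniform Dirichlet prior over environments, the sample size m and the trial count are our assumptions. Accuracy first rises with the number of cells and then falls back towards the chance level of 0.5.

    import numpy as np

    rng = np.random.default_rng(3)

    def mean_accuracy(n_cells, m, trials=3000):
        # Mean accuracy of a plug-in classifier on a discrete feature
        # with n_cells values and m training samples per class,
        # averaged over random environments (equal class priors).
        total = 0.0
        for _ in range(trials):
            p1 = rng.dirichlet(np.ones(n_cells))   # true cell probabilities
            p2 = rng.dirichlet(np.ones(n_cells))
            c1 = rng.multinomial(m, p1)            # training counts per cell
            c2 = rng.multinomial(m, p2)
            pick1 = c1 > c2                        # cells decided for class 1
            pick2 = c2 > c1                        # cells decided for class 2
            tie = c1 == c2                         # ties: a fair coin flip
            total += (0.5 * (p1[pick1].sum() + p2[pick2].sum())
                      + 0.25 * (p1[tie].sum() + p2[tie].sum()))
        return total / trials

    for n_cells in (2, 3, 5, 10, 25, 50, 100):
        print(n_cells, round(float(mean_accuracy(n_cells, m=10)), 3))

For larger sample sizes the optimum shifts to more cells, which is the measurement-complexity trade-off Hughes made precise.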
