DisTools introductory 2D example, pseudo-Euclidean embedding

Get rid of old figures, generate a training set and a test set, and give them proper names.

delfigs
AT = setname(gendatb,'TrainSet')
AS = setname(gendatb,'TestSet')

Define a set of untrained classifiers, all based on Fisher's Linear Discriminant: in the original 2D feature space, in the PE space embedded from Minkowski-2 (Euclidean, L2) distances, in a PE space embedded from squared Euclidean distances, and in a PE space embedded from Minkowski-1 (L1) distances. Give them proper names and train them on AT.

alf = []; % Parameter for PE embedding: reduction in variance: default: none
U1 = fisherc;                    % Fisher in 2D feature space
U2 = proxm([],'m',2)*pe_em([],alf)*fisherc;
U2 = setname(U2,'PE-SpaceL2-1'); % PE embedding from L2
U3 = proxm([],'m',2)*mapm('power',2)*pe_em([],alf)*fisherc;
U3 = setname(U3,'PE-SpaceL2-2'); % PE embedding from L2^2
U4 = proxm([],'m',1)*pe_em([],alf)*fisherc;
U4 = setname(U4,'PE-SpaceL1');   % PE embedding from L1
W = AT*{U1,U2,U3,U4};            % train
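
To inspect the embedded data itself, the sequential mapping can also be trained and applied separately, e.g. for the L2-based embedding. A minimal sketch; the variable names W_emb and X are just for illustration:

W_emb = AT*(proxm([],'m',2)*pe_em([],alf)); % train distance computation + PE embedding on AT
X = AT*W_emb;                               % training set represented in the PE space
size(X)                                     % number of objects x embedded dimensionality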

Show results in a scatter plot

figure;
scatterd(AT)

plotc(W)

Note how noisy the classifiers are due to digitization noise in the embedding, resulting from directions with very small eigenvalues.
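
This can be made visible from the eigenvalue spectrum behind the embedding. A minimal sketch, here for the L1 distances used by U4; it relies only on the proxm call from above and plain MATLAB, and the variable names are just for illustration:

D = +(AT*proxm(AT,'m',1));         % L1 distance matrix between all training objects
n = size(D,1);
J = eye(n) - ones(n)/n;            % centering matrix
B = -0.5*J*(D.^2)*J;               % (pseudo-)Gram matrix underlying the embedding
e = sort(eig((B+B')/2),'descend'); % eigenvalues; negative ones indicate the non-Euclidean part
figure; plot(e,'.')                % note the many eigenvalues close to zero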

Classify trainset and testset and show results.

testc({AT,AS},W)

Repeat the above for different values of alf, e.g. alf = 0.2. How do you judge the result? What about larger values for alf (< 1)?
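
One possible way to run this systematically is a simple loop over alf; the values below are arbitrary examples:

for alf = [0.2 0.5 0.9 0.95]
  U2 = setname(proxm([],'m',2)*pe_em([],alf)*fisherc,'PE-SpaceL2-1');
  U3 = setname(proxm([],'m',2)*mapm('power',2)*pe_em([],alf)*fisherc,'PE-SpaceL2-2');
  U4 = setname(proxm([],'m',1)*pe_em([],alf)*fisherc,'PE-SpaceL1');
  W = AT*{fisherc,U2,U3,U4};       % retrain all classifiers for this value of alf
  disp(['alf = ' num2str(alf)])
  testc({AT,AS},W)
end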
