artificialNeutralNet.mws
- Introduction
- Artificial Neural Networks
- Symbols and equations
- Data for this application
- Using basic math to build the Perceptron
- 1a. Apply a1 to find the value of i1, where i1= w0 x a1.
- 1b. Use the Perceptron Algorithm to find the new weight vector w1.
- 1c. Find delta vector d1 to verify that it approximates the value of the new weight vector w1. Plot w0, w1, a1 and d1, then scale d1.
- 1d. Plot: a1, w0, w1, and d1. Next plot a vector from the origin to a1 and compare it to d1.
- 2a. Apply b1 to find the value of i2, where i2= w1 x b1.
- 2b. Use the Perceptron Algorithm to find the new weight vector w2.
- 2c. Find delta vector d2 to verify that it approximates the value of the new weight vector w2. Add to the plot w2, b1 and d2, then scale d2.
- 2d. Plot: b1, w1, w2, and d2. Next plot a vector from the origin to b1 and compare it to d2.
- 3a. Apply a2 to find the value of i3, where i3= w2 x a2.
- 3b. Use the Perceptron Algorithm to find the new weight vector w3.
- 3c. Find delta vector d3 to verify that it approximates the value of the new weight vector w3. Add to the plot w3, a2 and d3, then scale d3.
- 3d. Plot: a2, w2, w3, and d3. Next plot a vector from the origin to a2 and compare it to d3.
- 4a. Apply b2 to find the value of i4, where i4= w3 x b2.
- 4b. Use the Perceptron Algorithm to find the new weight vector w4.
- 4c. Find delta vector d4 to verify that it approximates the value of the new weight vector w4. Add to the plot w4, b2 and d4, then scale d4.
- 4d. Plot: b2, w3, w4, and d4. Next plot a vector from the origin to b2 and compare it to d4.
- Plot of: w0, w1, w2, w3, w4, a1, a2, b1, b2, d1, d2, d3, d4.
- 5a. Recheck a1 with w4 to see if the perceptron knows that a1 belongs to data pattern A.
- 6a. Recheck b1 with w4 to see if the perceptron knows that b1 belongs to data pattern B.
- 7a. Recheck a2 with w4 to see if the perceptron knows that a2 belongs to data pattern A.
- Using Linear algebra to build the Perceptron
- Bring up the linear algebra package of Maple.
- LA 1a. Apply a1 to find the value of i1, where i1= w0 x a1.
- LA 1b. Use the Perceptron Algorithm to find the new weight vector w1.
- LA 1c. Find delta vector d1 to verify that it approximates the value of the new weight vector w1. Plot w0, w1, a1 and d1, then scale d1.
- LA 1d. Plot of a1, w0, w1 and d1. Next plot a vector from the origin to a1 and compare it to d1.
- LA 2a. Apply b1 to find the value of i2, where i2= w1 x b1.
- LA 2b. Use the Perceptron Algorithm to find the new weight vector w2.
- LA 2c. Find delta vector d2 to verify that it approximates the value of the new weight vector w2. Add to the plot w2, b1 and d2, then scale d2.
- LA 2d. Plot: b1, w1, w2 and d2. Next plot a vector from the origin to b1 and compare it to d2.
- LA 3a. Apply a2 to find the value of i3, where i3= w2 x a2.
- LA 3b. Use the Perceptron Algorithm to find the new weight vector w3.
- LA 3c. Find delta vector d3 to verify that it approximates the value of the new weight vector w3. Add to the plot w3, a2 and d3, then scale d3.
- LA 3d. Plot: a2, w2, w3 and d3. Next plot a vector from the origin to a2 and compare it to d3.
- LA 4a. Apply b2 to find the value of i4, where i4= w3 x b2.
- LA 4b. Use the Perceptron Algorithm to find the new weight vector w4.
- LA 4c. Find delta vector d4 to verify that it approximates the value of the new weight vector w4. Add to the plot w4, b2 and d4, then scale d4.
- LA 4d. Plot: b2, w3, w4 and d4. Next plot a vector from the origin to b2 and compare it to d4.
- LA Plot of: w0, w1, w2, w3, w4, a1, a2, b1, b2, d1, d2, d3, d4.
- LA 5a. Recheck a1 with w4 to see if the perceptron knows that a1 belongs to data pattern A.
- LA 6a. Recheck b1 with w4 to see if the perceptron knows that b1 belongs to data pattern B.
- LA 7a. Recheck a2 with w4 to see if the perceptron knows that a2 belongs to data pattern A.
- References
THE PERCEPTRON AND MAPLE
by Jake Trexel
jtrexel@ix.netcom.com
Introduction
The purpose of this application is to use Maple as a mathematical foundation for the development of an Artificial Neural Network (ANN). I recommend that you follow each of the sections, even though they are repetitious, because they show the process by which an ANN is built. I have chosen the Perceptron for this application because it was the first ANN to be developed (Caudill & Butler, 1992; Caudill & Butler, 1993; Lau, 1991).
The first part of the application focuses on using basic mathematics. The second part focuses on linear algebra. Thus no matter what your level of mathematical expertise, you will be able to follow this application.
Artificial Neural Networks
In order to fully appreciate this application, a basic knowledge of artificial neural networks is needed. An artificial neural network is an attempt to model how the mind works, i.e., how it takes in information and how that information is processed. For example, as you are sitting at home, you hear a loud noise and wonder what it is. If it is a fire alarm, your body will respond in a certain manner. However, if it is the doorbell, you will not respond in the same manner. How will your mind determine what type of sound it is? Your ears send the sound to the neurons in your brain, and through the training you have received in your life, your mind responds in the correct manner. Just as your brain has received training to make correct decisions, an artificial neural network must be trained.
One of the first models of this process was developed by Warren S. McCulloch and Walter Pitts in 1943 (Caudill & Butler, 1992; Caudill & Butler, 1993; Lau, 1991). This model was called "The McCulloch-Pitts Neurode". It was literally a pattern classifier: it was able to distinguish between two separate patterns that were separated by a hyperplane. In 1943 this model was considered a major breakthrough because it could determine whether a given point belonged to either set A (fire alarm) or set B (doorbell).
The McCulloch-Pitts neurode was further refined in 1958 through the development of an algorithm by Frank Rosenblatt (Caudill & Butler, 1992; Caudill & Butler, 1993; Lau, 1991). This became known as the Perceptron algorithm (PA), which is explained below.
The Perceptron Algorithm
Apply the rules below to each output (positive or negative one) that the Perceptron computes for each input training data point.
Rule 1. a1. If the perceptron's answer is correct and the answer was positive one, then the new weight vector equals the old weight vector plus the input pattern vector.
a2. If the perceptron's answer is correct and the answer was negative one, then the new weight vector equals the old weight vector minus the input pattern vector.
Rule 2. b1. If the perceptron's answer is incorrect and the perceptron's answer was positive one, then the new weight vector equals the old weight vector minus the input pattern vector.
b2. If the perceptron's answer is incorrect and the perceptron's answer was negative one, then the new weight vector equals the old weight vector plus the input pattern vector.
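In every step of this worksheet, the net effect of these rules is the same: the input pattern is added when the point's data set carries a positive one, and subtracted when it carries a negative one. The following is a minimal Python sketch of one training step (Python and the name perceptron_step are my own, used only to illustrate the rules alongside the Maple worksheet):

```python
def perceptron_step(w, x, target, threshold=0.0):
    """One Perceptron training step for 2-D vectors, per Rules 1-2 above."""
    net = w[0] * x[0] + w[1] * x[1]        # neuron input i = w . x
    answer = 1 if net > threshold else -1  # the perceptron's +1/-1 output
    # In all four cases of Rules 1-2 the update adds the pattern for a
    # positive-one data set and subtracts it for a negative-one data set.
    new_w = (w[0] + target * x[0], w[1] + target * x[1])
    return answer, new_w

# First step of this worksheet: w0 = (-0.6, 0.8), a1 = (0.3, 0.7), set A (+1)
answer, w1 = perceptron_step((-0.6, 0.8), (0.3, 0.7), 1)
print(answer, [round(v, 1) for v in w1])   # 1 [-0.3, 1.5], matching section 1b
```

Running the same function over all four training points (a1, b1, a2, b2) reproduces the weight sequence w1 through w4 computed step by step below.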
Symbols and equations
i1, i2, i3, i4 = neuron inputs
A & B = data sets A and B
a1, a2, a3, a4 & b1, b2, b3, b4 = input points (input pattern vectors)
w0, w1, w2, w3, w4 = weight vectors
d1, d2, d3, d4 = delta vectors (the change in the weight vector, e.g., from w0 to w1)
i1, i2, i3, i4 = (input pattern vector) x (weight vector)
vector a1, a2, b1, b2 = a vector from the origin to the respective point
Data for this application
T = threshold value = 0
w0 = initial weight vector = (-0.6, 0.8)
A = data pattern set A with a positive one value.
Data points in set A: a1 = (0.3, 0.7), a2 = (0.7, 0.3)
B = data pattern set B with a negative one value.
Data points in set B: b1 = (-0.6, 0.3), b2 = (-0.2, -0.8)
Using basic math to build the Perceptron
1a. Apply a1 to find the value of i1, where i1= w0 x a1.
>
restart:
>
a1[1,x] := 0.3; a1[1,y] := 0.7;
>
w0[0,x] := -0.6; w0[0,y] := 0.8;
>
i1 := (w0[0,x] * a1[1,x]) + (w0[0,y] * a1[1,y]);
Input 1 = 0.38
1b. Use the Perceptron Algorithm to find the new weight vector w1.
>
w1[1,x] := (w0[0,x] + a1[1,x]);
>
w1[1,y] := (w0[0,y] + a1[1,y]);
The new weight vector w1 = (-0.3, 1.5)
1c. Find delta vector d1 to verify that it approximates the value of the new weight vector w1. Plot w0, w1, a1 and d1, then scale d1.
>
d1 := sqrt((-0.3 - (-0.6))^2 + (1.5 - 0.8)^2);
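Note that d1 = w1 - w0 = a1, so the length of the delta vector should equal the length of the input vector a1; both are sqrt(0.58), approximately 0.76. A quick cross-check in Python (an aside of mine, not part of the Maple worksheet):

```python
import math

d1_len = math.hypot(-0.3 - (-0.6), 1.5 - 0.8)  # |d1| = |w1 - w0|
a1_len = math.hypot(0.3, 0.7)                  # |a1|
print(round(d1_len, 4), round(a1_len, 4))      # both are sqrt(0.58) ~ 0.7616
```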
1d. Plot: a1, w0, w1, and d1. Next plot a vector from the origin to a1 and compare it to d1.
>
with(plots);
Warning, the name changecoords has been redefined
>
with(plottools);
Warning, the name arrow has been redefined
>
a[1]:= pointplot({[0.3,0.7]},symbol=diamond,symbolsize=40,color=black):
>
w[0]:= display(arrow([0,0],[-0.6,0.8],0.01,0.05,0.1),color=blue):
>
d[1]:= display(arrow([-0.6,0.8],[-0.3,1.5],0.01,0.05,0.1),color=black):
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
v[1]:= display(arrow([0,0],[0.3,0.7],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[0.4,0.7, "a1"],[-0.55,0.6, "w0"],[-0.175,1.2, "w1"],[-0.45,1.3, "d1"],[0.25,0.3, "a1 vector"]}):
>
display([a[1],w[0]],d[1],w[1],v[1],t[1],axes=normal,scaling=unconstrained);
2a. Apply b1 to find the value of i2, where i2= w1 x b1.
>
b1[1,x] := -0.6; b1[1,y] := 0.3;
>
w1[1,x] := -0.3; w1[1,y] := 1.5;
>
i2 := (w1[1,x] * b1[1,x]) + (w1[1,y] * b1[1,y]);
Input 2 = 0.63
2b. Use the Perceptron Algorithm to find the new weight vector w2.
>
w2[1,x] := (w1[1,x] - b1[1,x]);
>
w2[1,y] := (w1[1,y] - b1[1,y]);
The new weight vector w2 = (0.3, 1.2)
2c. Find delta vector d2 to verify that it approximates the value of the new weight vector w2. Add to the plot w2, b1 and d2, then scale d2.
>
d2 := sqrt((0.3 - (-0.3))^2 + (1.2 - 1.5)^2);
2d. Plot: b1, w1, w2, and d2. Next plot a vector from the origin to b1 and compare it to d2.
>
with(plots);
>
with(plottools);
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
b[1]:= pointplot({[-0.6,0.3]},symbol=circle,symbolsize=40,color=black):
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
d[2]:= display(arrow([-0.3,1.5],[0.3,1.2],0.01,0.05,0.1),color=black):
>
v[2]:= display(arrow([0,0],[-0.6,0.3],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[-0.65,0.3, "b1"],[0.3,1.0, "w2"],[-0.175,1.2, "w1"],[0.18,1.4, "d2"],[-0.4,0.3, "b1 vector"]}):
>
display(w[1],b[1],w[2],d[2],v[2],t[1],axes=normal,scaling=unconstrained);
3a. Apply a2 to find the value of i3, where i3= w2 x a2.
>
a2[2,x] := 0.7; a2[2,y] := 0.3;
>
w2[2,x] := 0.3; w2[2,y] := 1.2;
>
i3 := (w2[2,x] * a2[2,x]) + (w2[2,y] * a2[2,y]);
Input 3 = 0.57
3b. Use the Perceptron Algorithm to find the new weight vector w3.
>
w3[3,x] := (w2[2,x] + a2[2,x]);
>
w3[3,y] := (w2[2,y] + a2[2,y]);
The new weight vector w3 = (1.0, 1.5)
3c. Find delta vector d3 to verify that it approximates the value of the new weight vector w3. Add to the plot w3, a2 and d3, then scale d3.
>
d3 := sqrt((1.0 - 0.3)^2 + (1.5 - 1.2)^2);
3d. Plot: a2, w2, w3, and d3. Next plot a vector from the origin to a2 and compare it to d3.
>
with(plots);
>
with(plottools);
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
a[2]:= pointplot({[0.7,0.3]},symbol=box,symbolsize=40,color=black):
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
d[3]:= display(arrow([0.3,1.2],[1.0,1.5],0.01,0.05,0.1),color=black):
>
v[3]:= display(arrow([0,0],[0.7,0.3],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[0.8,0.3, "a2"],[0.18,1.0, "w2"],[0.9,1.2, "w3"],[0.6,1.4, "d3"],[0.4,0.3, "a2 vector"]}):
>
display(w[2],a[2],w[3],d[3],v[3],t[1],axes=normal,scaling=unconstrained);
4a. Apply b2 to find the value of i4, where i4= w3 x b2.
>
b2[2,x] := -0.2; b2[2,y] := -0.8;
>
w3[3,x] := 1.0; w3[3,y] := 1.5;
>
i4 := (w3[3,x] * b2[2,x]) + (w3[3,y] * b2[2,y]);
Input 4 = -1.40
4b. Use the Perceptron Algorithm to find the new weight vector w4.
>
w4[4,x] := (w3[3,x] - b2[2,x]);
>
w4[4,y] := (w3[3,y] - b2[2,y]);
The new weight vector w4 = (1.2, 2.3)
4c. Find delta vector d4 to verify that it approximates the value of the new weight vector w4. Add to the plot w4, b2 and d4, then scale d4.
>
d4 := sqrt((1.2 - 1.0)^2 + (2.3 - 1.5)^2);
4d. Plot: b2, w3, w4, and d4. Next plot a vector from the origin to b2 and compare it to d4.
>
with(plots);
>
with(plottools);
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
b[2]:= pointplot({[-0.2,-0.8]},symbol=cross,symbolsize=40,color=black):
>
w[4]:= display(arrow([0,0],[1.2,2.3],0.01,0.05,0.1),color=plum):
>
d[4]:= display(arrow([1.0,1.5],[1.2,2.3],0.01,0.05,0.1),color=black):
>
v[4]:= display(arrow([0,0],[-0.2,-0.8],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[-0.3,-0.82, "b2"],[0.9,2.0, "w4"],[0.9,1.2, "w3"],[1.2,1.9, "d4"],[-0.38,-0.45, "b2 vector"]}):
>
display(w[3],w[4],d[4],b[2],v[4],t[1],axes=normal,scaling=unconstrained);
Plot of: w0, w1, w2, w3, w4, a1, a2, b1, b2, d1, d2, d3, d4.
>
with(plots);
>
with(plottools);
>
a[1]:= pointplot({[0.3,0.7]},symbol=diamond,symbolsize=30,color=black):
>
w[0]:= display(arrow([0,0],[-0.6,0.8],0.01,0.05,0.1),color=blue):
>
d[1]:= display(arrow([-0.6,0.8],[-0.3,1.5],0.01,0.05,0.1),color=black):
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
b[1]:= pointplot({[-0.6,0.3]},symbol=circle,symbolsize=30,color=black):
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
d[2]:= display(arrow([-0.3,1.5],[0.3,1.2],0.01,0.05,0.1),color=black):
>
a[2]:= pointplot({[0.7,0.3]},symbol=box,symbolsize=30,color=black):
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
d[3]:= display(arrow([0.3,1.2],[1.0,1.5],0.01,0.05,0.1),color=black):
>
b[2]:= pointplot({[-0.2,-0.8]},symbol=cross,symbolsize=30,color=black):
>
w[4]:= display(arrow([0,0],[1.2,2.3],0.01,0.05,0.1),color=plum):
>
d[4]:= display(arrow([1.0,1.5],[1.2,2.3],0.01,0.05,0.1),color=black):
>
display([a[1],w[0]],d[1],w[1],b[1],w[2],d[2],a[2],w[3],d[3],w[4],d[4],b[2],axes=normal,scaling=unconstrained);
5a. Recheck a1 with w4 to see if the perceptron knows that a1 belongs to data pattern A.
>
a1[1,x] := 0.3; a1[1,y] := 0.7;
>
w4[4,x] := 1.2; w4[4,y] := 2.3;
>
i5 := (w4[4,x] * a1[1,x]) + (w4[4,y] * a1[1,y]);
Input 5 = 1.97 which matches a +1 input which goes with data set A, thus the ANN knows this point.
6a. Recheck b1 with w4 to see if the perceptron knows that b1 belongs to data pattern B.
>
b1[1,x] := -0.6; b1[1,y] := 0.3;
>
w4[4,x] := 1.2; w4[4,y] := 2.3;
>
i6 := (w4[4,x] * b1[1,x]) + (w4[4,y] * b1[1,y]);
Input 6 = -0.03 which matches a -1 input which goes with data set B, thus the ANN knows this point.
7a. Recheck a2 with w4 to see if the perceptron knows that a2 belongs to data pattern A.
>
a2[2,x] := 0.7; a2[2,y] := 0.3;
>
w4[4,x] := 1.2; w4[4,y] := 2.3;
>
i7 := (w4[4,x] * a2[2,x]) + (w4[4,y] * a2[2,y]);
Input 7 = 1.53 which matches a +1 input which goes with data set A, thus the ANN knows this point.
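The three rechecks in 5a-7a can be reproduced in a few lines outside Maple; here is a Python equivalent (the dictionary and variable names are my own):

```python
w4 = (1.2, 2.3)                      # final weight vector from step 4b
points = {
    "a1": ((0.3, 0.7), "A"),         # data set A, +1
    "b1": ((-0.6, 0.3), "B"),        # data set B, -1
    "a2": ((0.7, 0.3), "A"),
}
for name, ((x, y), expected) in points.items():
    net = w4[0] * x + w4[1] * y      # neuron input i = w4 . point
    label = "A" if net > 0 else "B"  # compare against the threshold T = 0
    # nets come out as 1.97, -0.03 and 1.53, matching inputs 5-7 above
    print(name, round(net, 2), label, label == expected)
```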
Using Linear algebra to build the Perceptron
Bring up the linear algebra package of Maple.
>
with(linalg):
LA 1a. Apply a1 to find the value of i1, where i1= w0 x a1.
>
a1 := vector([0.3,0.7]);
>
w0 := vector([-0.6,0.8]);
>
i1 := dotprod(a1,w0);
LA 1b. Use the Perceptron Algorithm to find the new weight vector w1.
>
w1 := evalm(a1+w0);
The new weight vector w1 = (-0.3, 1.5)
LA 1c. Find delta vector d1 to verify that it approximates the value of the new weight vector w1. Plot w0, w1, a1 and d1, then scale d1.
>
d1 := sqrt((-0.3 - (-0.6))^2 + (1.5 - 0.8)^2);
LA 1d. Plot of a1, w0, w1 and d1. Next plot a vector from the origin to a1 and compare it to d1.
>
with(plots);
>
with(plottools);
>
a[1]:= pointplot({[0.3,0.7]},symbol=diamond,symbolsize=40,color=black):
>
w[0]:= display(arrow([0,0],[-0.6,0.8],0.01,0.05,0.1),color=blue):
>
d[1]:= display(arrow([-0.6,0.8],[-0.3,1.5],0.01,0.05,0.1),color=black):
>
v[1]:= display(arrow([0,0],[0.3,0.7],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[0.4,0.7, "a1"],[-0.55,0.6, "w0"],[-0.175,1.2, "w1"],[-0.45,1.3, "d1"],[0.25,0.3, "a1 vector"]}):
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
display([a[1],w[0]],d[1],w[1],v[1],t[1],axes=normal,scaling=unconstrained);
LA 2a. Apply b1 to find the value of i2, where i2= w1 x b1.
>
b1 := vector([-0.6,0.3]);
>
w1 := vector([-0.3,1.5]);
>
i2 := dotprod(b1,w1);
Input 2 = 0.63
LA 2b. Use the Perceptron Algorithm to find the new weight vector w2.
>
w2 := evalm(w1-b1);
The new weight vector w2 = (0.3, 1.2)
LA 2c. Find delta vector d2 to verify that it approximates the value of the new weight vector w2. Add to the plot w2, b1 and d2, then scale d2.
>
d2 := sqrt((0.3 - (-0.3))^2 + (1.2 - 1.5)^2);
LA 2d. Plot: b1, w1, w2 and d2. Next plot a vector from the origin to b1 and compare it to d2.
>
with(plots);
>
with(plottools);
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
b[1]:= pointplot({[-0.6,0.3]},symbol=circle,symbolsize=30,color=black):
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
d[2]:= display(arrow([-0.3,1.5],[0.3,1.2],0.01,0.05,0.1),color=black):
>
v[2]:= display(arrow([0,0],[-0.6,0.3],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[-0.65,0.3, "b1"],[0.3,1.0, "w2"],[-0.175,1.2, "w1"],[0.18,1.4, "d2"],[-0.4,0.3, "b1 vector"]}):
>
display(w[1],b[1],w[2],d[2],v[2],t[1],axes=normal,scaling=unconstrained);
LA 3a. Apply a2 to find the value of i3, where i3= w2 x a2.
>
a2 := vector([0.7,0.3]);
>
w2 := vector([0.3,1.2]);
>
i3 := dotprod(a2,w2);
Input 3 = 0.57
LA 3b. Use the Perceptron Algorithm to find the new weight vector w3.
>
w3 := evalm(w2+a2);
The new weight vector w3 = (1.0, 1.5)
LA 3c. Find delta vector d3 to verify that it approximates the value of the new weight vector w3. Add to the plot w3, a2 and d3, then scale d3.
>
d3 := sqrt((1.0 - 0.3)^2 + (1.5 - 1.2)^2);
LA 3d. Plot: a2, w2, w3 and d3. Next plot a vector from the origin to a2 and compare it to d3.
>
with(plots);
>
with(plottools);
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
a[2]:= pointplot({[0.7,0.3]},symbol=box,symbolsize=30,color=black):
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
d[3]:= display(arrow([0.3,1.2],[1.0,1.5],0.01,0.05,0.1),color=black):
>
v[3]:= display(arrow([0,0],[0.7,0.3],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[0.8,0.3, "a2"],[0.18,1.0, "w2"],[0.9,1.2, "w3"],[0.6,1.4, "d3"],[0.4,0.3, "a2 vector"]}):
>
display(w[2],a[2],w[3],d[3],v[3],t[1],axes=normal,scaling=unconstrained);
LA 4a. Apply b2 to find the value of i4, where i4= w3 x b2.
>
b2 := vector([-0.2,-0.8]);
>
w3 := vector([1.0,1.5]);
>
i4 := dotprod(b2,w3);
Input 4 = -1.40
LA 4b. Use the Perceptron Algorithm to find the new weight vector w4.
>
w4 := evalm(w3-b2);
The new weight vector w4 = (1.2, 2.3)
LA 4c. Find delta vector d4 to verify that it approximates the value of the new weight vector w4. Add to the plot w4, b2 and d4, then scale d4.
>
d4 := sqrt((1.2 - 1.0)^2 + (2.3 - 1.5)^2);
LA 4d. Plot: b2, w3, w4 and d4. Next plot a vector from the origin to b2 and compare it to d4.
>
with(plots);
>
with(plottools);
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
b[2]:= pointplot({[-0.2,-0.8]},symbol=cross,symbolsize=30,color=black):
>
w[4]:= display(arrow([0,0],[1.2,2.3],0.01,0.05,0.1),color=plum):
>
d[4]:= display(arrow([1.0,1.5],[1.2,2.3],0.01,0.05,0.1),color=black):
>
v[4]:= display(arrow([0,0],[-0.2,-0.8],0.01,0.05,0.1),color=black):
>
t[1]:= textplot ({[-0.3,-0.82, "b2"],[0.9,2.0, "w4"],[0.9,1.2, "w3"],[1.2,1.9, "d4"],[-0.38,-0.45, "b2 vector"]}):
>
display(w[3],w[4],d[4],b[2],v[4],t[1],axes=normal,scaling=unconstrained);
LA Plot of: w0, w1, w2, w3, w4, a1, a2, b1, b2, d1, d2, d3, d4.
>
with(plots);
>
with(plottools);
>
a[1]:= pointplot({[0.3,0.7]},symbol=diamond,symbolsize=30,color=black):
>
w[0]:= display(arrow([0,0],[-0.6,0.8],0.01,0.05,0.1),color=blue):
>
d[1]:= display(arrow([-0.6,0.8],[-0.3,1.5],0.01,0.05,0.1),color=black):
>
w[1]:= display(arrow([0,0],[-0.3,1.5],0.01,0.05,0.1),color=red):
>
b[1]:= pointplot({[-0.6,0.3]},symbol=circle,symbolsize=30,color=black):
>
w[2]:= display(arrow([0,0],[0.3,1.2],0.01,0.05,0.1),color=green):
>
d[2]:= display(arrow([-0.3,1.5],[0.3,1.2],0.01,0.05,0.1),color=black):
>
a[2]:= pointplot({[0.7,0.3]},symbol=box,symbolsize=30,color=black):
>
w[3]:= display(arrow([0,0],[1.0,1.5],0.01,0.05,0.1),color=gold):
>
d[3]:= display(arrow([0.3,1.2],[1.0,1.5],0.01,0.05,0.1),color=black):
>
b[2]:= pointplot({[-0.2,-0.8]},symbol=cross,symbolsize=30,color=black):
>
w[4]:= display(arrow([0,0],[1.2,2.3],0.01,0.05,0.1),color=plum):
>
d[4]:= display(arrow([1.0,1.5],[1.2,2.3],0.01,0.05,0.1),color=black):
>
display([a[1],w[0]],d[1],w[1],b[1],w[2],d[2],a[2],w[3],d[3],w[4],d[4],b[2],axes=normal,scaling=unconstrained);
LA 5a. Recheck a1 with w4 to see if the perceptron knows that a1 belongs to data pattern A.
>
i5 := dotprod(a1,w4);
Input 5 = 1.97 which matches a +1 input which goes with data set A, thus the ANN knows this point.
LA 6a. Recheck b1 with w4 to see if the perceptron knows that b1 belongs to data pattern B.
>
i6 := dotprod(b1,w4);
Input 6 = -0.03 which matches a -1 input which goes with data set B, thus the ANN knows this point.
LA 7a. Recheck a2 with w4 to see if the perceptron knows that a2 belongs to data pattern A.
>
i7 := dotprod(a2,w4);
>
Input 7 = 1.53 which matches a +1 input which goes with data set A, thus the ANN knows this point.
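For readers without Maple, the whole linear-algebra run (LA 1a through LA 4b) can be sketched with NumPy; this assumes NumPy is installed, and the variable names are mine:

```python
import numpy as np

w = np.array([-0.6, 0.8])            # initial weight vector w0
training = [
    (np.array([0.3, 0.7]), 1),       # a1, data set A (+1)
    (np.array([-0.6, 0.3]), -1),     # b1, data set B (-1)
    (np.array([0.7, 0.3]), 1),       # a2, data set A (+1)
    (np.array([-0.2, -0.8]), -1),    # b2, data set B (-1)
]
for x, target in training:
    i = np.dot(w, x)                 # neuron input, compared against T = 0
    w = w + target * x               # Perceptron update per Rules 1-2
print(np.round(w, 1))                # final weight vector w4 = [1.2 2.3]
```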
References
Caudill, M., & Butler, C. (1992). Understanding neural networks: Computer explorations (Vol. 1). Cambridge: MIT Press.
Caudill, M., & Butler, C. (1993). Naturally intelligent systems. Cambridge: MIT Press.
Lau, C. (Ed.). (1991). Neural networks: Theoretical foundations and analysis. New York: IEEE Press.