What's the perceptron algorithm invented by Frank Rosenblatt?
The perceptron algorithm is a supervised learning algorithm for binary classification, i.e. assigning input data to one of two possible categories. It was invented by Frank Rosenblatt in 1957 and is one of the oldest and simplest artificial neural network models.
The perceptron algorithm works by taking a set of input values and computing a weighted sum of those inputs. This sum is then compared to a threshold value, and the perceptron outputs 1 if the sum is greater than the threshold and 0 otherwise. The weights used in the weighted sum are adjusted during training to minimize the error between the predicted output and the true output.
The training process of the perceptron algorithm involves iterating through the input data and adjusting the weights for each input until the algorithm converges to a set of weights that produces accurate predictions. During each iteration, the perceptron makes a prediction for each input and updates the weights based on whether the prediction is correct or not. The weights are updated using the following formula:
new_weight = old_weight + learning_rate * (true_output - predicted_output) * input
where learning_rate is a hyperparameter that controls the rate at which the weights are updated, true_output is the true label of the input, predicted_output is the output predicted by the perceptron, and input is the input data.
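The following is a minimal sketch of this training loop in NumPy. The AND dataset, the learning rate of 0.1, and the 10 training epochs are illustrative choices for the example, not part of the algorithm itself:

import numpy as np

# Toy dataset: logical AND, with labels in {0, 1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(X.shape[1])
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    for x_i, true_output in zip(X, y):
        # Weighted sum plus bias, thresholded at zero
        predicted_output = 1 if np.dot(weights, x_i) + bias > 0 else 0
        # new_weight = old_weight + learning_rate * (true_output - predicted_output) * input
        weights += learning_rate * (true_output - predicted_output) * x_i
        # The bias is updated the same way, without the input factor
        bias += learning_rate * (true_output - predicted_output)

print(weights, bias)  # roughly [0.2 0.1] and -0.2 with these settings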
Explain each part of the formula y = w1x1 + w2x2 + b, which is used for the perceptron
The formula y = w1x1 + w2x2 + b represents the basic computation of a perceptron, where:
- y is the output of the perceptron. It represents the predicted class label for the input data.
- x1 and x2 are the input features for the perceptron. They represent the values of the features for a specific input data point.
- w1 and w2 are the weights associated with the input features. They represent the importance of each input feature in making the prediction.
- b is the bias term. It shifts the decision boundary; the weighted sum must exceed -b for the perceptron to activate.
The perceptron algorithm uses this formula to compute the weighted sum of the input features plus the bias. If w1x1 + w2x2 + b is greater than zero, i.e. if the weighted sum exceeds the threshold -b, the perceptron predicts the positive class; otherwise it predicts the negative class.
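As a small illustration, this decision rule can be written directly in Python; the weight and bias values below are made up for the example:

def predict(x1, x2, w1, w2, b):
    # Positive class (1) if the weighted sum plus the bias exceeds zero
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

print(predict(1.0, 1.0, w1=0.5, w2=0.5, b=-0.7))  # 1: 0.5 + 0.5 - 0.7 > 0
print(predict(1.0, 0.0, w1=0.5, w2=0.5, b=-0.7))  # 0: 0.5 - 0.7 <= 0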
During the training process of the perceptron algorithm, the weights and bias term are adjusted to minimize the error between the predicted output and the true output. The weights are adjusted by adding a fraction of the difference between the true output and the predicted output multiplied by the input feature value. The bias term is adjusted by adding a fraction of the difference between the true output and the predicted output.
In summary, the perceptron algorithm uses the formula y = w1x1 + w2x2 + b to compute the weighted sum of the input features and make a prediction about the class label of the input data. The weights and bias term are adjusted during training to minimize the error between the predicted output and the true output.
Is there a term for w1x1+w2x2?
Yes, the term "weighted sum" is often used to refer to the expression w1x1 + w2x2. It represents the linear combination of the input features x1 and x2 with their corresponding weights w1 and w2. The weighted sum is a fundamental building block in many machine learning algorithms, including the perceptron and linear regression.
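In code, the weighted sum is usually computed as a dot product. A NumPy example with arbitrary values:

import numpy as np

w = np.array([0.5, 0.25])    # weights w1, w2
x = np.array([2.0, 4.0])     # input features x1, x2
weighted_sum = np.dot(w, x)  # w1*x1 + w2*x2
print(weighted_sum)          # 0.5*2.0 + 0.25*4.0 = 2.0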
What's the conceptual difference between A*B and A.dot(B)?
In NumPy, the * operator performs element-wise multiplication between two arrays, while the dot() method performs matrix multiplication between two arrays.
When two arrays A and B are multiplied using *, the corresponding elements of the two arrays are multiplied element-wise to produce a new array of the same shape. For example, if:

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

then A * B would result in:
array([[ 5, 12],
[21, 32]])
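The dot() method, by contrast, performs matrix multiplication: each entry of the result is the dot product of a row of A with a column of B. With the same A and B, A.dot(B) would result in:

array([[19, 22],
       [43, 50]])

since, for example, the top-left entry is 1*5 + 2*7 = 19. In modern NumPy, A @ B is an equivalent way to write the same matrix product.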