August 17, 2020

K-Nearest Neighbor Regressor

  1. KNN is typically used for classification, but it can also perform regression.

  2. The process is almost identical to classification, except for the final step.

    1. Instead of tallying class votes (counting 1s vs. 0s), the regressor takes the average of the neighbors' values, such as an average of IMDB ratings.
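A minimal sketch of that final step, assuming the values of the k nearest neighbors (hypothetical IMDB ratings here) have already been found:

```python
# Values of the k = 3 nearest neighbors (made-up IMDB ratings)
neighbor_ratings = [7.2, 8.0, 6.4]

# Regression prediction: the plain average of the neighbors' values
prediction = sum(neighbor_ratings) / len(neighbor_ratings)
# prediction is roughly 7.2
```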

  3. We can also compute a weighted average based on how close each neighbor is, so that the closest neighbors have more influence on the prediction.
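A sketch of the distance-weighted version, where each neighbor is weighted by the inverse of its distance to the query point (the ratings and distances are made up for illustration):

```python
# Each neighbor's value paired with its distance to the query point
# (hypothetical ratings and distances)
neighbors = [(7.2, 0.5), (8.0, 1.0), (6.4, 2.0)]

# Weight each neighbor by 1 / distance, so closer neighbors
# contribute more to the prediction
weights = [1 / d for _, d in neighbors]
prediction = sum(w * v for w, (v, _) in zip(weights, neighbors)) / sum(weights)
# the closest neighbor (rating 7.2, distance 0.5) dominates the result
```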

  4. Scikit-learn has a built-in K-Nearest Neighbor Regression model.

    1. KNeighborsRegressor is very similar to KNeighborsClassifier. We can choose whether to use a weighted average with the "weights" parameter. If weights = "uniform", all neighbors are weighted equally; if weights = "distance", a distance-weighted average is used.

  5. Create the regressor:

    1. regressor = KNeighborsRegressor(n_neighbors = 3, weights = "distance")

  6. Fit the model to the training data:

    1. regressor.fit(training_data, training_labels)

  7. Make predictions on new data points:

    1. predictions = regressor.predict(unknown_points)
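Steps 5 through 7 can be put together into one runnable sketch; the training points and target values below are made up for illustration:

```python
from sklearn.neighbors import KNeighborsRegressor

# Toy training set: two features per point, with a numeric target
# (made-up data for illustration)
training_data = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
training_labels = [5.0, 6.0, 7.0, 9.0]

# weights="distance" uses the inverse-distance weighted average;
# weights="uniform" (the default) treats all k neighbors equally
regressor = KNeighborsRegressor(n_neighbors=3, weights="distance")
regressor.fit(training_data, training_labels)

# Predict the target value for a new, unlabeled point
unknown_points = [[0.5, 0.5]]
predictions = regressor.predict(unknown_points)
```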
