
## What is machine learning?

Machine Learning is a branch of Information Technology in which a computer gains insight from data and experience much as a human would. Programmers teach the computer to use its past experience with different entities to perform better in future scenarios.

Machine Learning uses mathematical models to help us understand data. Once a model has been fitted to previously seen data, it can be used to predict newly observed data.

Our goal is not simply to create models, but to create high-quality models with promising predictive power, because a model is only as useful as the quality of its predictions. Strategies for evaluating the quality of models will now be examined.

## Evaluating the predictions.

Accuracy is a performance metric that can be used to tell a weak classification model from a strong one. Accuracy is the total proportion of observations that have been predicted correctly. The formula for calculating Accuracy consists of four main components, and those components also give us the ability to explore other metrics:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

• TP is the number of True Positives: observations that belong to the positive class and were predicted correctly.
• TN is the number of True Negatives: observations that belong to the negative class and were predicted correctly.
• FP is the number of False Positives, also known as Type 1 errors: observations predicted to belong to the positive class that actually belong to the negative class.
• FN is the number of False Negatives, also known as Type 2 errors: observations predicted to belong to the negative class that actually belong to the positive class.

The Accuracy Evaluation Metric is popular for its ease of use: both the formula and its interpretation are simple, since it is just the total proportion of observations predicted correctly. Where Accuracy performs poorly is in the presence of imbalanced classes. A model that lacks predictive power can still achieve a high Accuracy value simply by predicting the majority class for almost every observation.
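The formula and the imbalanced-class pitfall above can be sketched in a few lines of Python (the function and the counts are illustrative examples, not taken from a real dataset):

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Accuracy = correctly predicted observations / all observations."""
    return (tp + tn) / (tp + tn + fp + fn)

# Balanced example: the classifier is right 90 times out of 100.
print(accuracy(tp=45, tn=45, fp=5, fn=5))   # 0.9

# Imbalanced example: predicting "negative" for everything still scores
# 0.95, because 95% of the observations belong to the negative class.
print(accuracy(tp=0, tn=95, fp=0, fn=5))    # 0.95
```

The second call shows the problem: the Accuracy value is high even though the model never identifies a single positive observation.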

When we can't use the Accuracy Evaluation Metric, we are forced to use other evaluation metrics. These include, but are not limited to, the following.

#### Precision.

Precision is the proportion of observations predicted to belong to the positive class that actually belong to the positive class. The formula for the Precision Evaluation Metric is as follows:

Precision = TP / (TP + FP)

#### Recall.

Recall is the proportion of observations actually belonging to the positive class that the model correctly identified as positive. The formula for the Recall Evaluation Metric is as follows:

Recall = TP / (TP + FN)
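Both formulas can be written directly from their components (a minimal sketch; the function names and counts are illustrative):

```python
def precision(tp: int, fp: int) -> float:
    """Of the observations predicted positive, how many really are."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of the observations that really are positive, how many were found."""
    return tp / (tp + fn)

# 30 true positives, 10 false positives, 20 false negatives.
print(precision(tp=30, fp=10))  # 0.75
print(recall(tp=30, fn=20))     # 0.6
```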

#### F1 score.

The F1 Score is the Harmonic Mean of the Precision and Recall Evaluation Metrics. It summarizes our model's performance on the positive class in a single ratio, balancing how many positive predictions were correct (Precision) against how many positive observations were found (Recall). The formula for the F1 Score Evaluation Metric is as follows:

F1 = 2 × (Precision × Recall) / (Precision + Recall)
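The harmonic-mean formula translates directly into code (an illustrative sketch, reusing the Precision and Recall values from the example above):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With precision 0.75 and recall 0.6, F1 sits between the two,
# pulled toward the lower value, as a harmonic mean always is.
print(f1_score(0.75, 0.6))  # ~0.667
```

Because the harmonic mean is dominated by the smaller of the two inputs, a model cannot achieve a high F1 Score by being strong on only one of Precision or Recall.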

## Evaluating class predictions.

The issue of imbalanced classes arises because input data is not always balanced, and it removes the Accuracy Evaluation Metric from our options. In Python, we instead aggregate the per-class evaluation values by averaging them, and three main options are available to us.

1. Macro: the mean of the metric scores for each class, with every class weighted equally.
2. Weighted: the mean of the metric scores for each class, with each class weighted by its size (support).
3. Micro: the metric computed over every individual observation pooled together.
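The three options above can be computed by hand for per-class Recall (a minimal sketch; the helper function and the tiny dataset are hypothetical examples — in scikit-learn these options correspond to the `average='macro'`, `'weighted'`, and `'micro'` settings):

```python
from collections import Counter

def per_class_recall(y_true: list, y_pred: list) -> dict:
    """Recall for each class: TP_c / (number of true observations of c)."""
    recalls = {}
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        recalls[c] = tp / support
    return recalls

# Imbalanced toy labels: 4 of class "a", 2 of class "b".
y_true = ["a", "a", "a", "a", "b", "b"]
y_pred = ["a", "a", "a", "b", "b", "a"]

recalls = per_class_recall(y_true, y_pred)      # {'a': 0.75, 'b': 0.5}

# 1. Macro: every class weighted equally.
macro = sum(recalls.values()) / len(recalls)    # (0.75 + 0.5) / 2 = 0.625

# 2. Weighted: each class weighted by its support.
support = Counter(y_true)
weighted = sum(recalls[c] * support[c] for c in recalls) / len(y_true)

# 3. Micro: pool all observations, then compute the metric once.
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Note how the majority class "a" pulls the weighted and micro averages above the macro average — exactly the effect that makes the choice of averaging option matter for imbalanced data.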

## Visualizing performance.

The most popular way to visualize a classification model's performance is a Confusion Matrix, also referred to as an Error Matrix. A Confusion Matrix is highly interpretable: it is a tabular format, usually visualized as a heatmap, in which each column represents a predicted class and each row represents a true class.

There are three important facts about the Confusion Matrix.

1. A perfect model has all of its values along the main diagonal of the Confusion Matrix and zeroes everywhere else.
2. A Confusion Matrix shows us not just how often the Machine Learning Model was wrong, but which classes it confused with which.
3. A Confusion Matrix works with any number of classes; its interpretability is not affected by a dataset containing 50 classes.
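The row/column convention described above can be sketched as a small function (an illustrative example; the labels and predictions are made up):

```python
def confusion_matrix(y_true: list, y_pred: list, labels: list) -> list:
    """Build a confusion matrix: rows = true classes, columns = predicted."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

y_true = ["cat", "cat", "dog", "dog", "dog"]
y_pred = ["cat", "dog", "dog", "dog", "cat"]
for row in confusion_matrix(y_true, y_pred, labels=["cat", "dog"]):
    print(row)
# [1, 1]   <- true cats: 1 predicted cat, 1 predicted dog
# [1, 2]   <- true dogs: 1 predicted cat, 2 predicted dog
```

The off-diagonal cells are where the model gets "confused": here we can see at a glance that one cat was mistaken for a dog and one dog for a cat.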

## Evaluating a regression model.

For a Regressor, one of the most used and well-known Evaluation Metrics is MSE, which stands for Mean Squared Error. MSE is calculated with the following mathematical representation:

MSE = (1/n) × Σ (yᵢ − ŷᵢ)²

• n is the number of observations.
• yᵢ is the true value of the target we are trying to predict.
• ŷᵢ is the model's predicted value for yᵢ.

MSE is the average of the squared distances between the predicted and true values: the higher the output value, the worse the model's predictions are. Squaring the error margins gives the metric two advantages.

• Squaring makes all error values positive, so errors in opposite directions cannot cancel each other out.
• Squaring penalizes large error values more heavily than small ones.
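The formula above translates into a short Python function (a minimal sketch; the true and predicted values are made-up examples):

```python
def mse(y_true: list, y_pred: list) -> float:
    """Mean Squared Error: the average squared difference between
    true values y_i and predicted values y_hat_i."""
    n = len(y_true)
    return sum((y - y_hat) ** 2 for y, y_hat in zip(y_true, y_pred)) / n

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(mse(y_true, y_pred))  # (0.25 + 0 + 4) / 3 ≈ 1.417
```

Notice how the single error of 2.0 contributes 4.0 to the sum while the error of 0.5 contributes only 0.25 — the quadratic penalty on large errors in action.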

This is the end of my article on machine learning model evaluation.

I would like to thank you for your time.