

Sunday, August 10, 2025

Class 10 AI Chapter - Evaluating Model Topic - Confusion Matrix

 


Class 10 Artificial Intelligence Code 417 Solutions

Session 2025-26

This article provides complete solutions, notes, and book references for Class 10 AI (Artificial Intelligence) Code 417 students according to the new syllabus 2025–26. Start by going through the AI Code 417 syllabus, then follow the chapter-wise notes.

Classification Evaluation Metrics

In classification problems, your model predicts categories (e.g., spam or not spam, disease or no disease).

Evaluation metrics tell us how good those predictions are by comparing them to the actual, correct labels.

What is Classification?

Classification is a supervised machine learning task where a model learns to assign input data to one of several predefined categories or classes.

Classification is when a machine learning model learns from labeled examples and predicts the category of new data.

In simple words:

You give the model labeled examples, it learns patterns, and then it predicts the class of new, unseen data.


How It Works

  1. Training Phase

    • You feed the model labeled data (features + known class labels).
    • Example:
      • Features → Email text content
      • Label → “Spam” or “Not Spam”
    • The algorithm finds patterns that distinguish the classes.
  2. Prediction Phase

    • You give the model new data without labels.
    • It predicts the class based on what it learned.
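
To make the two phases concrete, here is a minimal sketch using scikit-learn (the tiny email dataset, the CountVectorizer and MultinomialNB choices, and the test message are all assumptions made for illustration, not part of the chapter):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# 1. Training phase: labeled data (features = email text, label = "Spam" / "Not Spam")
emails = ["win a free prize now",
          "meeting at 10 am tomorrow",
          "free lottery ticket, claim now",
          "please review the attached report"]
labels = ["Spam", "Not Spam", "Spam", "Not Spam"]

vectorizer = CountVectorizer()                 # turn each email into word-count features
X_train = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X_train, labels)   # learn patterns that separate the two classes

# 2. Prediction phase: new, unseen email without a label
new_email = ["claim your free prize"]
X_new = vectorizer.transform(new_email)
print(model.predict(X_new))                    # expected: ['Spam']
```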

Types of Classification

(a) Binary Classification

  • Only two classes.
  • Examples:
    • Spam vs. Not Spam
    • Has Disease vs. No Disease

(b) Multi-Class Classification

  • More than two classes, but each item belongs to only one.
  • Examples:
    • Classifying fruit as Apple, Banana, or Mango
    • Handwritten digit recognition (0–9)

(c) Multi-Label Classification

  • Each item can belong to multiple classes at once.
  • Example:
    • A news article tagged as Politics and Economy simultaneously.

Common Algorithms for Classification

  • Logistic Regression
  • Decision Trees
  • Random Forest
  • Support Vector Machines (SVM)
  • Naïve Bayes
  • K-Nearest Neighbors (KNN)
  • Neural Networks

Real-World Examples

  • Email filtering: Spam or not spam
  • Medical diagnosis: Disease present or absent
  • Sentiment analysis: Positive, Negative, Neutral
  • Image recognition: Identifying objects in pictures


Real-Life Example – Fruit Classification

Imagine you want a machine to identify whether a fruit is an Apple, Banana, or Orange.


Figure: Evaluation Metrics for Classification

Step 1 – Training Data

You feed the model examples with labels:

Fruit Image   Color    Shape   Weight   Label
🍎            Red      Round   150g     Apple
🍌            Yellow   Long    120g     Banana
🍊            Orange   Round   130g     Orange


The model learns patterns:

  1. Apples → Red, round, ~150g
  2. Bananas → Yellow, long, ~120g
  3. Oranges → Orange, round, ~130g

Step 2 – Prediction

Now, you show the model a new fruit with:

  1. Color: Red
  2. Shape: Round
  3. Weight: 155g

The model predicts: Apple 🍎
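
A minimal sketch of this fruit example with a decision tree in scikit-learn (encoding colour and shape as numbers, and the exact training rows, are assumptions made for illustration):

```python
from sklearn.tree import DecisionTreeClassifier

# Training data: [color, shape, weight]
# color: 0 = Red, 1 = Yellow, 2 = Orange; shape: 0 = Round, 1 = Long
X_train = [[0, 0, 150],    # Apple
           [1, 1, 120],    # Banana
           [2, 0, 130]]    # Orange
y_train = ["Apple", "Banana", "Orange"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# New fruit: Red, Round, 155 g
print(model.predict([[0, 0, 155]]))   # expected: ['Apple']
```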


Confusion Matrix

A confusion matrix is a table that helps you evaluate the performance of a classification model by comparing predicted labels with actual labels.

It’s called "confusion" because it shows you where the model is getting confused — i.e., which classes are being mistaken for others.

Figure: Confusion Matrix

Structure of a Confusion Matrix

For a binary classification (two classes: Positive and Negative), the confusion matrix looks like this:


                    Predicted Positive      Predicted Negative
Actual Positive     True Positive (TP)      False Negative (FN)
Actual Negative     False Positive (FP)     True Negative (TN)

Components

  1. True Positive (TP)

    • Model predicts Positive, and it is actually Positive.
    • Example: A spam filter correctly detects a spam email as spam.
  2. True Negative (TN)

    • Model predicts Negative, and it is actually Negative.
    • Example: A spam filter correctly detects a normal email as not spam.
  3. False Positive (FP), also called Type I Error

    • Model predicts Positive, but it is actually Negative.
    • Example: A normal email is wrongly marked as spam.
  4. False Negative (FN), also called Type II Error

    • Model predicts Negative, but it is actually Positive.
    • Example: A spam email is wrongly marked as normal.
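
To see how these four outcomes are counted in practice, here is a small sketch in plain Python (the actual and predicted labels are toy values invented for illustration, with 1 = spam and 0 = not spam):

```python
# Toy labels: 1 = spam (Positive), 0 = not spam (Negative)
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # spam caught as spam
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # normal mail kept as normal
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # normal mail marked as spam
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # spam that slipped through

print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)   # TP: 3 TN: 3 FP: 1 FN: 1
```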


Why It’s Important

The confusion matrix allows you to calculate important evaluation metrics:

1. Accuracy

Definition:
The proportion of total predictions that were correct.

\text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{FP} + \text{FN} + \text{TN}}

Real-life example:
If a spam email detector checks 100 emails and correctly classifies 90 of them (spam or not spam), the accuracy is 90%.

2. Precision (Positive Predictive Value)

Definition:
Out of all the items predicted as positive, how many were actually positive?

\text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}}

Real-life example:
In spam detection, if your system marks 30 emails as spam but only 25 are actually spam, precision is 25/30 ≈ 83.3%.
High precision = few false alarms.

3. Recall (Sensitivity / True Positive Rate)

Definition:
Out of all the actual positives, how many were correctly predicted as positive?

\text{Recall} = \frac{\text{TP}}{\text{TP} + \text{FN}}

Real-life example:
If there are 40 spam emails and your system correctly catches 35 of them, recall is 35/40 = 87.5%.
High recall = few missed positives.

4. Specificity (True Negative Rate)

Definition:
Out of all the actual negatives, how many were correctly predicted as negative?

\text{Specificity} = \frac{\text{TN}}{\text{TN} + \text{FP}}

Real-life example:
If you have 60 non-spam emails and your system correctly marks 55 as not spam, specificity is 55/60 ≈ 91.7%.
High specificity = few false positives.

5. F1 Score

Definition:
The harmonic mean of precision and recall. It balances both.

\text{F1 Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}

Real-life example:
If your spam filter’s precision is 83.3% and recall is 87.5%, the F1 score is about 85.3%.

High F1 means the model is good at both catching positives and avoiding false alarms.
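
Putting the five definitions together, here is a minimal sketch in plain Python that turns the four confusion-matrix counts into all of the metrics above (the counts passed in at the end are illustrative, not taken from any one example in this section):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute common evaluation metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)           # sensitivity / true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    f1          = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Illustrative counts
acc, prec, rec, spec, f1 = classification_metrics(tp=35, tn=55, fp=5, fn=5)
print(f"Accuracy={acc:.1%}  Precision={prec:.1%}  Recall={rec:.1%}  "
      f"Specificity={spec:.1%}  F1={f1:.1%}")
```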


Case Study - Spam Email

Scenario

We’ll take a spam email detection system example.

You tested it on 100 emails:

  1. Actual spam emails: 40
  2. Actual non-spam emails: 60

The system predicted:

  1. Correctly caught 35 spam emails (True Positives = TP = 35)
  2. Missed 5 spam emails (False Negatives = FN = 5)
  3. Wrongly marked 4 non-spam emails as spam (False Positives = FP = 4)
  4. Correctly marked 56 non-spam emails as not spam (True Negatives = TN = 56)

Confusion Matrix


                   Predicted Spam    Predicted Not Spam
Actual Spam        TP = 35           FN = 5
Actual Not Spam    FP = 4            TN = 56

Metrics Calculation

  1. Accuracy

\frac{TP + TN}{TP + FP + FN + TN} = \frac{35 + 56}{35 + 4 + 5 + 56} = \frac{91}{100} = 91\%

  2. Precision

\frac{TP}{TP + FP} = \frac{35}{35 + 4} = \frac{35}{39} \approx 89.7\%

  3. Recall (Sensitivity)

\frac{TP}{TP + FN} = \frac{35}{35 + 5} = \frac{35}{40} = 87.5\%

  4. Specificity

\frac{TN}{TN + FP} = \frac{56}{56 + 4} = \frac{56}{60} \approx 93.3\%

  5. F1 Score

F1 = 2 \times \frac{Precision \times Recall}{Precision + Recall} = 2 \times \frac{0.897 \times 0.875}{0.897 + 0.875} \approx 88.6\%
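
As a cross-check, the same figures can be reproduced with scikit-learn’s metric functions; this sketch assumes scikit-learn is installed and simply rebuilds the 100 labels from the counts above:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Rebuild the 100 emails from the case-study counts (1 = spam, 0 = not spam)
y_true = np.array([1] * 40 + [0] * 60)            # 40 spam, 60 not spam
y_pred = np.array([1] * 35 + [0] * 5 +            # 35 spam caught, 5 missed
                  [1] * 4  + [0] * 56)            # 4 false alarms, 56 correct "not spam"

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP:", tp, "FN:", fn, "FP:", fp, "TN:", tn)        # TP: 35 FN: 5 FP: 4 TN: 56

print("Accuracy   :", accuracy_score(y_true, y_pred))    # 0.91
print("Precision  :", precision_score(y_true, y_pred))   # ≈ 0.897
print("Recall     :", recall_score(y_true, y_pred))      # 0.875
print("Specificity:", tn / (tn + fp))                     # ≈ 0.933
print("F1 Score   :", f1_score(y_true, y_pred))           # ≈ 0.886
```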


