
Thursday, August 7, 2025

Class 10 AI Chapter - Evaluating Model Topic - Evaluation Techniques

 


Class 10 Artificial Intelligence Code 417 Solutions

Session 2025-26

This article provides complete solutions for Class 10 AI (Artificial Intelligence) Code 417 students according to the new 2025–26 syllabus. First go through the AI Code 417 syllabus, and then follow the chapter-wise notes.


📊 Evaluation in the AI Project Cycle

Evaluation is a critical stage in the AI Project Cycle where the performance of the trained AI model is assessed to determine how well it solves the problem it was designed for.


Definition:

Evaluation is the process of testing the AI model using new, unseen data (called test data) to measure its accuracy, effectiveness, and reliability in making predictions or decisions.


🔄 Position in the AI Project Cycle:

The AI project cycle typically consists of:

  1. Problem Scoping

  2. Data Acquisition

  3. Data Exploration

  4. Modeling

  5. Evaluation

  6. Deployment

After building a model in the modeling stage, you move to evaluation to check:

  1. Is the model accurate?
  2. Is it fair and unbiased?
  3. Does it generalize well to new data?


🎯 Objectives of Evaluation:

  1. Measure model performance
  2. Detect overfitting or underfitting
  3. Compare different models
  4. Ensure the model meets the goal defined in problem scoping


📋 Key Evaluation Metrics (for Classification Models):

Metric               | Description
---------------------+------------------------------------------------------
Accuracy             | % of correct predictions made by the model
Precision            | Of all positive predictions, how many were correct?
Recall (Sensitivity) | Of all actual positives, how many did the model find?
F1-Score             | Harmonic mean of precision and recall
Confusion Matrix     | Table showing TP, TN, FP, FN values
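
Below is a minimal sketch (not from the textbook) of how these metrics can be computed with the scikit-learn library, assuming it is installed; the labels and predictions are made-up values for a spam (1) / not-spam (0) classifier.

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, confusion_matrix)

    # Hypothetical test labels and model predictions (1 = spam, 0 = not spam)
    y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

    print("Accuracy :", accuracy_score(y_true, y_pred))    # share of correct predictions
    print("Precision:", precision_score(y_true, y_pred))   # correct out of predicted positives
    print("Recall   :", recall_score(y_true, y_pred))      # correct out of actual positives
    print("F1-Score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
    print(confusion_matrix(y_true, y_pred))                # rows = actual, columns = predicted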

🔍 Common Evaluation Methods:

1. Train-Test Split

  • The dataset is split into training and testing sets.
  • Model is trained on the training set and evaluated on the test set.

2. Cross-Validation

  • The data is divided into k parts.
  • The model is trained and tested k times, each time using a different part as the test set.
  • Gives a more reliable evaluation.
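
As a rough illustration (not part of the syllabus material), the sketch below runs 5-fold cross-validation with scikit-learn; the built-in iris dataset and the decision tree model are only stand-ins for your own data and model.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)               # small built-in dataset used as a stand-in
    model = DecisionTreeClassifier(random_state=0)

    # k = 5: the data is divided into 5 parts; each part acts as the test set once
    scores = cross_val_score(model, X, y, cv=5)
    print("Accuracy of each fold:", scores)
    print("Average accuracy     :", scores.mean())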

🧠 Why Evaluation Matters:

Without evaluation:

  1. You cannot be sure if your model is actually solving the problem.
  2. A model may seem accurate on training data but fail in real-world situations.

🚦 Real-World Example:

Imagine building an AI model to detect spam emails.

  1. In modeling, you train it to recognize patterns in spam emails.
  2. In evaluation, you test it on new emails to check:
    • How many real spam emails it correctly identifies (Recall)
    • How many non-spam emails it wrongly marks as spam (False Positives)
  3. Only after good evaluation can it be deployed in a real email app.

🧩 Common Issues Identified During Evaluation:

Issue        | Meaning
-------------+-----------------------------------------------------------
Overfitting  | Model works well on training data but poorly on test data
Underfitting | Model performs poorly on both training and test data
Bias         | Model consistently favors or harms a group

📊 Train-Test Split in AI Evaluation

Train-Test Split is the process of dividing a dataset into two parts:

  1. Training Set: Used to train the AI model.
  2. Test Set: Used to evaluate the model’s performance on unseen data.

🎯 Purpose

To assess the model’s ability to generalize to new, real-world data and to detect problems like overfitting and bias.

📋 Common Split Ratios

Dataset Portion | Percentage | Purpose
----------------+------------+--------------------------------------
Training Set    | 70% – 80%  | For learning and building the model
Test Set        | 20% – 30%  | For evaluating model accuracy

💡 Example

If you have 10,000 labeled emails for spam detection:

  1. Training Set (80%): 8,000 emails
  2. Test Set (20%): 2,000 emails

You train the model using the training set and evaluate its performance on the test set.
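
A minimal sketch of such an 80% / 20% split using scikit-learn's train_test_split (assuming it is installed). The ten toy messages below stand in for the 10,000 labelled emails, and the stratify option keeps the spam/not-spam ratio the same in both parts.

    from sklearn.model_selection import train_test_split

    # Toy stand-ins for a real labelled email dataset
    emails = [f"email {i}" for i in range(10)]
    labels = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]         # 1 = spam, 0 = not spam

    # Keep 80% for training and 20% for testing; the split is done BEFORE any training
    X_train, X_test, y_train, y_test = train_test_split(
        emails, labels, test_size=0.2, stratify=labels, random_state=42
    )
    print(len(X_train), "training emails,", len(X_test), "test emails")   # 8 and 2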

🛡️ Best Practices

  ✅ Always split the data before training
  ❌ Never use test data during training
  🔁 Use stratified sampling if your data is imbalanced
  🔬 Use cross-validation for more robust evaluation

🔍 Evaluation Techniques

In the AI Project Cycle, the Evaluation stage plays a crucial role in measuring how well your AI model performs before deployment. It ensures that the model generalizes well to unseen data and meets the business or user expectations.

How well a model fits the data falls into one of three cases:

1. Underfitting

  1. The model is too simple (e.g., straight line).
  2. It cannot capture the pattern in the data.
  3. High bias, low variance.
  4. Poor performance on both training and test data.

2. Good Fit (Perfect Fit)

  1. The model fits the data well, capturing the actual trend without noise.
  2. Balanced bias and variance.
  3. Performs well on both training and unseen test data.
  4. This is the ideal model.

3. Overfitting

  1. The model is too complex (e.g., high-degree polynomial).
  2. It captures noise and fluctuations in training data.
  3. Low bias, high variance.
  4. Performs well on training data but poorly on test data.
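
The sketch below (made-up noisy data, assuming NumPy is installed) fits polynomials of degree 1, 3 and 10 to the same points, so you can compare training and test error for an underfit, well-fit and overfit model.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)   # noisy curve (made-up data)

    idx = rng.permutation(len(x))          # shuffle, then keep 20 points for training
    train, test = idx[:20], idx[20:]

    for degree in (1, 3, 10):              # too simple, about right, too complex
        coeffs = np.polyfit(x[train], y[train], degree)
        train_err = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
        test_err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
        print(f"degree {degree:2d}: training error {train_err:.3f}, test error {test_err:.3f}")

Typically the degree-1 line shows high error on both sets (underfitting), while the degree-10 polynomial shows very low training error but a noticeably higher test error (overfitting).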


Accuracy in Evaluation

Accuracy measures how many predictions your model got right out of the total number of predictions.

🔹 Formula:

Accuracy = (Correct Predictions / Total Predictions) × 100


Error in Evaluation

Error is the opposite of accuracy — it measures how many predictions were wrong.

🔹 Formula:

Error Rate = (Incorrect Predictions / Total Predictions) × 100
or
Error Rate = 100 − Accuracy
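
Putting both formulas into a few lines of Python, with made-up numbers just to show the arithmetic:

    correct = 45        # made-up: predictions the model got right
    total = 50          # made-up: total number of predictions

    accuracy = correct / total * 100
    error_rate = (total - correct) / total * 100    # same value as 100 - accuracy

    print(f"Accuracy  : {accuracy:.1f}%")           # 90.0%
    print(f"Error rate: {error_rate:.1f}%")         # 10.0%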


📦 Example: Email Spam Classifier

Imagine you're testing an AI model that classifies emails as either Spam or Not Spam. You tested it on 100 emails, and here’s what happened:

  1. Correct Predictions: 88 emails (classified correctly)
  2. Incorrect Predictions: 12 emails (misclassified)


    📊 Model Evaluation Chart (Accuracy vs Error)

    +------------------------+----------------------+
    | Correct Predictions    | Incorrect Predictions|
    |        (88%)           |        (12%)         |
    |  ✅ Spam/Not Spam       |  ❌ Misclassified  |
    +------------------------+----------------------+
    

    Or as a bar chart representation:

    Accuracy (Correct)       : ████████████████████████████████ 88%
    Error (Incorrect)        : ████                             12%
    


🔍 Accuracy Calculation:

Accuracy = (88 / 100) × 100 = 88%

🔍 Error Calculation:

Error = (12 / 100) × 100 = 12%, or Error = 100 − 88 = 12%

✅ Interpretation:

  1. The model is 88% accurate: It correctly classified most emails.
  2. The 12% error rate means it made 12 wrong predictions.
  3. If this were a medical diagnosis system, a 12% error rate might be too risky, but for email spam filtering it may be acceptable.



