Class 10 Artificial Intelligence Code 417 Solutions
Session 2025-26
Artificial Intelligence Code 417 syllabus PDF, Class 10 solutions. Class 10 AI Book. Class 10 AI Notes. This article provides complete solutions for Class 10 AI (Artificial Intelligence) Code 417 students according to the new 2025–26 syllabus. First go through the AI Code 417 syllabus, then follow the chapter-wise notes.
Classification Evaluation Metrics
In classification problems, your model predicts categories (e.g., spam or not spam, disease or no disease).
Evaluation metrics tell us how good those predictions are by comparing them to the actual, correct labels.
What is Classification?
Classification is a supervised machine learning task in which a model learns from labeled examples and assigns input data to one of several predefined categories or classes.
In simple words:
You give the model labeled examples, it learns patterns, and then it predicts the class of new, unseen data.
How It Works
Training Phase
- You feed the model labeled data (features + known class labels).
- Example:
- Features → Email text content
- Label → “Spam” or “Not Spam”
- The algorithm finds patterns that distinguish the classes.
Prediction Phase
- You give the model new data without labels.
- It predicts the class based on what it learned.
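Here is a minimal sketch of the two phases in Python, using scikit-learn (an assumption; any ML library would do). The tiny email dataset and its labels are invented purely for illustration:

```python
# A minimal sketch of the training and prediction phases, assuming
# scikit-learn is installed. The emails below are made-up toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training phase: labeled data (features + known class labels)
emails = [
    "win a free prize now",       # spam
    "meeting at 10 am tomorrow",  # not spam
    "claim your free reward",     # spam
    "project report attached",    # not spam
]
labels = ["Spam", "Not Spam", "Spam", "Not Spam"]

vectorizer = CountVectorizer()   # turn email text into word counts
X = vectorizer.fit_transform(emails)
model = MultinomialNB()          # Naive Bayes, a common text classifier
model.fit(X, labels)             # the model learns patterns that distinguish the classes

# Prediction phase: new data without labels
new_email = ["free prize waiting for you"]
X_new = vectorizer.transform(new_email)
print(model.predict(X_new))      # e.g. ['Spam']
```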
Types of Classification
(a) Binary Classification
- Only two classes.
- Examples:
- Spam vs. Not Spam
- Has Disease vs. No Disease
(b) Multi-Class Classification
- More than two classes, but each item belongs to only one.
- Examples:
- Classifying fruit as Apple, Banana, or Mango
- Handwritten digit recognition (0–9)
(c) Multi-Label Classification
- Each item can belong to multiple classes at once.
- Example:
- A news article tagged as Politics and Economy simultaneously.
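In code, the three types differ mainly in how the labels are stored. A small illustrative sketch (all labels invented):

```python
# How labels differ across the three types (toy values, for illustration only).

# Binary: each item gets one of exactly two classes
binary_labels = ["Spam", "Not Spam", "Spam"]

# Multi-class: each item gets exactly one of several classes
multiclass_labels = ["Apple", "Banana", "Mango"]

# Multi-label: each item can get several classes at once,
# often stored as one 0/1 indicator per class
#                    Politics, Economy, Sports
multilabel_matrix = [[1,        1,       0],   # article tagged Politics + Economy
                     [0,        0,       1]]   # article tagged Sports only
```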
Common Algorithms for Classification
- Logistic Regression
- Decision Trees
- Random Forest
- Support Vector Machines (SVM)
- Naïve Bayes
- K-Nearest Neighbors (KNN)
- Neural Networks
Real-World Examples
- Email filtering: Spam or not spam
- Medical diagnosis: Disease present or absent
- Sentiment analysis: Positive, Negative, Neutral
- Image recognition: Identifying objects in pictures
Real-Life Example – Fruit Classification
Imagine you want a machine to identify whether a fruit is an Apple, Banana, or Orange.
[Image: Evaluation Metrics for Classification]
Step 1 – Training Data
You feed the model examples with labels:
| Fruit Image | Color | Shape | Weight | Label |
|---|---|---|---|---|
| 🍎 | Red | Round | 150g | Apple |
| 🍌 | Yellow | Long | 120g | Banana |
| 🍊 | Orange | Round | 130g | Orange |
The model learns patterns:
- Apples → Red, round, ~150g
- Bananas → Yellow, long, ~120g
- Oranges → Orange, round, ~130g
Step 2 – Prediction
Now, you show the model a new fruit with:
- Color: Red
- Shape: Round
- Weight: 155g
The model predicts: Apple 🍎
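A minimal sketch of this fruit example in Python, assuming scikit-learn; the numeric encodings for color and shape are choices made just for illustration:

```python
# Fruit classification sketch with a decision tree (scikit-learn assumed).
from sklearn.tree import DecisionTreeClassifier

# Encode features as numbers: color (0=Red, 1=Yellow, 2=Orange),
# shape (0=Round, 1=Long), weight in grams
X_train = [
    [0, 0, 150],  # Apple:  Red, Round, 150g
    [1, 1, 120],  # Banana: Yellow, Long, 120g
    [2, 0, 130],  # Orange: Orange, Round, 130g
]
y_train = ["Apple", "Banana", "Orange"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# New fruit: Red, Round, 155g
print(model.predict([[0, 0, 155]]))  # expected: ['Apple']
```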
Confusion Matrix
A confusion matrix is a table that helps you evaluate the performance of a classification model by comparing predicted labels with actual labels.
It’s called "confusion" because it shows you where the model is getting confused — i.e., which classes are being mistaken for others.
[Image: Confusion Matrix]
Structure of a Confusion Matrix
For a binary classification (two classes: Positive and Negative), the confusion matrix looks like this:
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN) |
Components
True Positive (TP)
- Model predicts Positive, and it is actually Positive.
- Example: A spam filter correctly detects a spam email as spam.
True Negative (TN)
- Model predicts Negative, and it is actually Negative.
- Example: A spam filter correctly detects a normal email as not spam.
False Positive (FP) – Type I Error
- Model predicts Positive, but it is actually Negative.
- Example: A normal email is wrongly marked as spam.
False Negative (FN) – Type II Error
- Model predicts Negative, but it is actually Positive.
- Example: A spam email is wrongly marked as normal.
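A short sketch of building a confusion matrix with scikit-learn's confusion_matrix function; the six actual/predicted labels below are invented toy values:

```python
# Building a confusion matrix from actual vs. predicted labels
# (scikit-learn assumed; the six labels are toy values).
from sklearn.metrics import confusion_matrix

y_actual    = ["Spam", "Spam", "Not Spam", "Not Spam", "Spam", "Not Spam"]
y_predicted = ["Spam", "Not Spam", "Not Spam", "Spam", "Spam", "Not Spam"]

# labels=... fixes the row/column order: [Spam, Not Spam]
cm = confusion_matrix(y_actual, y_predicted, labels=["Spam", "Not Spam"])
print(cm)
# [[2 1]    row 1: actual Spam     -> TP=2, FN=1
#  [1 2]]   row 2: actual Not Spam -> FP=1, TN=2
```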
Why It’s Important
The confusion matrix allows you to calculate important evaluation metrics:
1. Accuracy
Definition:
The proportion of total predictions that were correct.
Formula: Accuracy = (TP + TN) / (TP + TN + FP + FN)
Real-life example:
If a spam email detector checks 100 emails and correctly classifies 90 of them (spam or not spam), the accuracy is 90 / 100 = 90%.
2. Precision (Positive Predictive Value)
Definition:
Out of all the items predicted as positive, how many were actually positive?
Formula: Precision = TP / (TP + FP)
Real-life example:
In spam detection, if your system marks 30 emails as spam but only 25 are actually spam, precision is 25 / 30 ≈ 83.3%.
High precision = few false alarms.
3. Recall (Sensitivity / True Positive Rate)
Definition:
Out of all the actual positives, how many were correctly predicted as positive?
Formula: Recall = TP / (TP + FN)
Real-life example:
If there are 40 spam emails and your system correctly catches 35 of them, recall is 35 / 40 = 87.5%.
High recall = few missed positives.
4. Specificity (True Negative Rate)
Definition:
Out of all the actual negatives, how many were correctly predicted as negative?
Formula: Specificity = TN / (TN + FP)
Real-life example:
If you have 60 non-spam emails and your system correctly marks 55 as not spam, specificity is 55 / 60 ≈ 91.7%.
High specificity = few false positives.
5. F1 Score
Definition:
The harmonic mean of precision and recall. It balances both.
Formula: F1 = 2 × (Precision × Recall) / (Precision + Recall)
Real-life example:
If your spam filter's precision is 83.3% and recall is 87.5%, F1 = 2 × (0.833 × 0.875) / (0.833 + 0.875) ≈ 0.854, i.e. about 85.4%.
High F1 means the model is good at both catching positives and avoiding false alarms.
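All five metrics are simple formulas over the four confusion-matrix counts. A small sketch that writes them as plain Python functions and plugs in the toy numbers from the examples above:

```python
# The five metrics as plain functions of the confusion-matrix counts.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def f1_score(p, r):
    return 2 * p * r / (p + r)

# Toy numbers from the examples above:
p = precision(tp=25, fp=5)   # 25 of 30 flagged emails were spam -> 0.833
r = recall(tp=35, fn=5)      # 35 of 40 spam emails caught       -> 0.875
s = specificity(tn=55, fp=5) # 55 of 60 normal emails kept       -> 0.917
print(round(p, 3), round(r, 3), round(s, 3), round(f1_score(p, r), 3))
```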
Case Study - Spam Email
Scenario
We’ll take a spam email detection system example.
You tested it on 100 emails:
- Actual spam emails: 40
- Actual non-spam emails: 60
The system predicted:
- Correctly caught 35 spam emails (True Positives = TP = 35)
- Missed 5 spam emails (False Negatives = FN = 5)
- Wrongly marked 4 non-spam emails as spam (False Positives = FP = 4)
- Correctly marked 56 non-spam emails as not spam (True Negatives = TN = 56)
Confusion Matrix
| | Predicted Spam | Predicted Not Spam |
|---|---|---|
| Actual Spam | TP = 35 | FN = 5 |
| Actual Not Spam | FP = 4 | TN = 56 |
Metrics Calculation
- Accuracy = (TP + TN) / Total = (35 + 56) / 100 = 91%
- Precision = TP / (TP + FP) = 35 / 39 ≈ 89.7%
- Recall (Sensitivity) = TP / (TP + FN) = 35 / 40 = 87.5%
- Specificity = TN / (TN + FP) = 56 / 60 ≈ 93.3%
- F1 Score = 2 × (Precision × Recall) / (Precision + Recall) = 2 × (0.897 × 0.875) / (0.897 + 0.875) ≈ 88.6%
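As a check on the hand calculations, a short sketch that rebuilds the same 100 emails as label lists and lets scikit-learn compute the metrics (the ordering of the emails is arbitrary):

```python
# Rebuild the case study as label lists and verify with scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# 35 TP, 5 FN, 4 FP, 56 TN; 1 = spam, 0 = not spam
y_true = [1] * 35 + [1] * 5 + [0] * 4 + [0] * 56
y_pred = [1] * 35 + [0] * 5 + [1] * 4 + [0] * 56

print(accuracy_score(y_true, y_pred))   # 0.91
print(precision_score(y_true, y_pred))  # ~0.897
print(recall_score(y_true, y_pred))     # 0.875
print(f1_score(y_true, y_pred))         # ~0.886
```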