Class 9 Artificial Intelligence Code 417 Solutions
Session 2025-26
Artificial Intelligence (Code 417) Class 9 solutions, question answers, and notes are provided here, along with the Class 9 AI syllabus and book. First go through the Artificial Intelligence Code 417 Class 9 notes with solutions, and then solve the MCQs and Sample Papers. Artificial Intelligence (Code 417) is a vocational subject.
--------------------------------------------------
Chapter - AI Project Cycle
Modeling in AI

Modeling in AI is the process of creating a system that represents or mimics human decision-making, reasoning, or learning. This model uses input data to make predictions or solve problems.

In simple words: it is like teaching a computer how to think, act, or make decisions by building a digital version (a model) of a real-world process.
[Image: AI Model]
Rule-Based Approach
Definition:
A rule-based approach uses a set of predefined "if-then" rules written by humans to make decisions.
Features:
- Based on human knowledge and logic.
- Rules are manually coded.
- Used for simple, well-defined problems.
Example:

    temperature = 35
    if temperature > 30:
        print("It's hot")

The AI does not learn here; it only follows the rules it was given.
Advantages:
- Easy to understand and explain.
- Good for fixed or small problem domains.
Disadvantages:
- Not suitable for complex or changing environments.
- Difficult to scale.
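The if-then idea above can be grown into a small set of rules; here is a minimal sketch (the extra thresholds and messages are illustrative additions, not from the text):

```python
def weather_advice(temperature):
    """Rule-based system: every decision is a hand-written if-then rule."""
    if temperature > 30:
        return "It's hot"
    elif temperature > 15:
        return "It's pleasant"
    else:
        return "It's cold"

print(weather_advice(35))  # It's hot
print(weather_advice(10))  # It's cold
```

Note that the program cannot handle any situation the rule author did not anticipate, which is exactly the scaling problem listed above.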
Learning-Based Approach
Definition:
A learning-based approach allows the AI to learn patterns and make decisions by analyzing large amounts of data.
Features:
- Based on data and statistical models.
- AI learns from experience.
- Uses algorithms like neural networks, decision trees, etc.
Example:
A spam filter learns to recognize spam emails by analyzing thousands of labeled examples.
The model improves its performance over time by training on more data.
Advantages:
- Can handle complex and dynamic problems.
- Adapts and improves with more data.
Disadvantages:
- Requires large datasets and computing power.
- Harder to explain how decisions are made (black-box).
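The contrast with the rule-based approach can be shown with a toy example: instead of hard-coding `temperature > 30`, the program learns a threshold from labeled data. The dataset and the midpoint rule below are illustrative assumptions, not from the text:

```python
# Learning-based sketch: the decision boundary is derived from
# labeled examples rather than written by hand.
data = [(20, "not hot"), (25, "not hot"), (28, "not hot"),
        (31, "hot"), (35, "hot"), (40, "hot")]

def learn_threshold(examples):
    """Pick the midpoint between the warmest 'not hot' and coolest 'hot'."""
    warmest_not_hot = max(t for t, label in examples if label == "not hot")
    coolest_hot = min(t for t, label in examples if label == "hot")
    return (warmest_not_hot + coolest_hot) / 2

threshold = learn_threshold(data)   # 29.5 for this data
predict = lambda t: "hot" if t > threshold else "not hot"
print(threshold, predict(33))       # 29.5 hot
```

With more or different data the learned threshold changes automatically, which is the "adapts and improves with more data" advantage listed above.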
Evaluation in AI
Evaluation in the AI Project Cycle is the fifth stage, where we test and measure how well the AI model performs. After the model is trained in the Modeling stage, evaluation helps us determine whether the model is reliable, accurate, and suitable for solving the real-world problem we started with.
Evaluation means checking the performance of the AI model using testing data (data that was not used during training). It tells us:
- How accurate the model is
- Whether it can make good predictions
- What its strengths and weaknesses are
📊 Why is Evaluation Important?
- To identify errors in predictions
- To avoid using a model that works only on training data (overfitting)
- To compare different models and choose the best one
- To ensure the model meets the requirements of the problem
⚙️ How Evaluation is Done:
1. Split the dataset:
   - Training set (used to build the model)
   - Testing set (used to evaluate the model)
2. Run the model on the testing data:
   - The AI model makes predictions.
   - These predictions are compared to the actual correct values.
3. Use evaluation metrics:
   - Evaluation metrics are mathematical tools to measure performance.
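The split step can be sketched in a few lines; the 70/30 proportion and the synthetic records below are illustrative assumptions:

```python
import random

# Minimal train/test split sketch over a list of (features, label)
# records. The 500 synthetic records are placeholders for real data.
random.seed(42)
records = [({"attendance": i % 100}, i % 2) for i in range(500)]
random.shuffle(records)  # shuffle so the split is not ordered

split = int(0.7 * len(records))
train_set, test_set = records[:split], records[split:]
print(len(train_set), len(test_set))  # 350 150
```

The model is then fitted only on `train_set`, and the metrics are computed only on `test_set`, so the score reflects performance on unseen data.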
📏 Common Evaluation Metrics:
🔹 For Classification Problems:
(Where the model chooses categories like "Spam" or "Not Spam")
- Accuracy = (Correct predictions / Total predictions) × 100
- Precision = (True Positives) / (True Positives + False Positives)
- Recall = (True Positives) / (True Positives + False Negatives)
- F1 Score = 2 × (Precision × Recall) / (Precision + Recall), the harmonic mean of Precision and Recall
Example: In a disease detection model, high recall ensures that most sick people are found.
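The four classification metrics can be written directly from their formulas; the counts used in the example call are made up for illustration:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Of everything predicted positive, how much really was positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything actually positive, how much the model found."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Illustrative counts: 90 TP, 50 TN, 10 FP, 20 FN.
print(round(accuracy(90, 50, 10, 20), 3))  # 0.824
print(round(precision(90, 10), 3))         # 0.9
print(round(recall(90, 20), 3))            # 0.818
print(round(f1_score(90, 10, 20), 3))      # 0.857
```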
🔹 For Regression Problems:
(Where the model predicts numbers like prices, temperatures, etc.)
- Mean Absolute Error (MAE)
- Mean Squared Error (MSE)
- Root Mean Squared Error (RMSE)
- R² Score (Coefficient of Determination)
Example: A model predicting house prices might be evaluated using MSE, where lower values are better.
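The four regression metrics can likewise be computed from their definitions; the sample house prices below are illustrative:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average size of the mistakes."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error: penalizes large mistakes more heavily."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: MSE back in the original units."""
    return math.sqrt(mse(actual, predicted))

def r2_score(actual, predicted):
    """Coefficient of determination: 1.0 means perfect predictions."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Illustrative house prices: actual vs model predictions.
actual = [50, 60, 70, 80]
predicted = [48, 63, 69, 85]
print(mae(actual, predicted))   # 2.75
print(mse(actual, predicted))   # 9.75
```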
❌ Common Problems Discovered During Evaluation:
- Overfitting: The model works well on training data but poorly on new data.
- Underfitting: The model is too simple to capture patterns in data.
- Bias: The model gives wrong predictions for certain groups.
- Data leakage: Information from test data accidentally used in training.
🔄 After Evaluation:
Based on the results, you may:
- Tune model parameters
- Use more data or better features
- Try a different algorithm
- Go back to the Modeling or Data Exploration stage
📚 Case Study: Predicting Student Performance
🎯 Problem Statement:
A school wants to predict if a student will pass or fail the final exam based on:
- Attendance
- Homework submission
- Internal exam marks
🛠️ AI Project Cycle Stages:
1. Problem Scoping
- Goal: Predict “Pass” or “Fail”
- Output: Binary Classification (Pass = 1, Fail = 0)
2. Data Acquisition
- Collected data from past 500 students
- Features: attendance %, assignment completion %, internal exam score
3. Data Exploration
- Found that students with <50% attendance usually fail
- Internal scores have strong correlation with final result
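A correlation like this can be measured with the Pearson coefficient; the six sample records below are illustrative, not from the school's actual data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

internal_scores = [20, 35, 45, 60, 75, 90]
final_result    = [0,  0,  0,  1,  1,  1]   # 0 = Fail, 1 = Pass
print(round(pearson(internal_scores, final_result), 2))  # 0.88
```

A value close to +1 confirms the "strong correlation" observation from the exploration stage.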
4. Modeling
- Used a Decision Tree Classifier
- Trained on 70% of the data
✅ 5. Evaluation Stage (Detailed)
The remaining 30% of the data (150 students) is used to evaluate the model.
🔍 Predictions vs Actual Values
Let’s say:
- 120 students actually passed
- 30 students actually failed
The model predicted:
- 110 students passed
- 40 students failed
Confusion Matrix:

| | Predicted Pass | Predicted Fail |
|---|---|---|
| Actual Pass | 105 | 15 |
| Actual Fail | 5 | 25 |
📐 Evaluation Metrics Calculated:
- Accuracy = (105 + 25) / 150 = 86.67%
- Precision (Pass) = 105 / (105 + 5) = 95.45%
- Recall (Pass) = 105 / (105 + 15) = 87.5%
- F1 Score = 2 × (0.9545 × 0.875) / (0.9545 + 0.875) ≈ 91.30%
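These figures can be recomputed directly from the confusion matrix (note that the F1 score works out to about 91.30%):

```python
# Counts from the case-study confusion matrix, with "Pass" as the
# positive class: 105 TP, 25 TN, 5 FP, 15 FN.
tp, tn, fp, fn = 105, 25, 5, 15

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"Accuracy:  {accuracy:.2%}")   # 86.67%
print(f"Precision: {precision:.2%}")  # 95.45%
print(f"Recall:    {recall:.2%}")     # 87.50%
print(f"F1 Score:  {f1:.2%}")         # 91.30%
```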
🔁 What’s Next After Evaluation?
Based on results:
- The model performs well overall
- But it misses some failing students: 5 actual failures were predicted to pass, and 15 passing students were wrongly flagged as failing
- Decision: Improve the model by including features like parent involvement or health data
✅ Summary
| Aspect | Explanation |
|---|---|
| Stage | 5th stage of the AI Project Cycle |
| Purpose | Test the model on unseen data |
| Key Metrics | Accuracy, Precision, Recall, F1 Score |
| Outcome | Decide if the model is ready or needs improvement |
| In Case Study | The student performance model gave 86.67% accuracy |
🚀 Deployment Stage
Deployment is the final stage of the AI Project Cycle. It means putting the trained and tested AI model into real-world use so it can start solving actual problems and making predictions for users.
Deployment is the process of integrating the AI model into a software application, website, mobile app, or system where people or other systems can interact with it and use its predictions or outputs.
⚙️ Key Steps in Deployment:
1. Model Export
   - Save the trained model in a suitable format (like .pkl, .h5, or .onnx).
2. Integration
   - Connect the model to a real-world application using tools like Python, Java, REST APIs, or cloud platforms.
3. Testing in Production
   - Run the model in the real environment to check that it works correctly with live data.
4. Monitoring
   - Watch the model's performance over time to detect issues like errors, data drift, or decreasing accuracy.
5. Updates and Maintenance
   - Retrain or update the model regularly as new data comes in or conditions change.
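The Model Export step can be sketched with Python's built-in pickle module (the .pkl format mentioned above); the dictionary standing in for a trained model is an illustrative assumption:

```python
import pickle

# A stand-in for a trained model; a real project would pickle the
# trained model object produced by the Modeling stage.
model = {"feature": "temperature", "threshold": 30}

# Model Export: save the model to a .pkl file.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Deployment side: load the saved model back for serving predictions.
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded == model)  # True
```

In a real deployment the loaded model would sit behind an application or API, as described in the Integration step.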
🌐 Where Can AI Models Be Deployed?
- Web applications (e.g., product recommendation systems)
- Mobile apps (e.g., voice assistants, health tracking)
- Embedded systems (e.g., self-driving cars, IoT devices)
- Cloud platforms (e.g., AWS, Google Cloud, Azure)
- APIs (e.g., chatbots, fraud detection systems)
🧾 Example: Deployment Case Study
🎯 Problem:
A school wants to use an AI model that predicts whether a student will pass or fail based on their attendance and internal marks.
✅ Deployment Steps:
1. The model is trained and evaluated.
2. It's saved and uploaded to a cloud server.
3. A web-based dashboard is built where teachers can enter student data.
4. The model instantly predicts Pass or Fail.
5. The predictions are saved in the student database.
6. An administrator monitors prediction accuracy monthly.
📦 Summary Table:
| Feature | Description |
|---|---|
| Stage | Final step in the AI Project Cycle |
| Purpose | Make the model available for use |
| Tools | APIs, web servers, mobile apps, cloud platforms |
| Involves | Integration, monitoring, updates |
| Example | Student performance prediction system |