๐Ÿ‹๏ธ
AI Sprouts โ€ข Beginnerโฑ๏ธ 15 min read

Training AI Models

Training AI Models

You now know that neural networks learn by adjusting weights and biases. But how does the full training process actually work? How do you know when a model has learned enough - or too much? In this lesson, we will walk through the complete training journey.

The Training Loop

Training an AI model follows a cycle that repeats over and over:

  1. Predict - Feed data through the model and get a prediction.
  2. Compare - Check how far the prediction is from the correct answer.
  3. Adjust - Update the weights to reduce the error.
  4. Repeat - Do it again with the next batch of data.

This loop runs thousands or even millions of times. Each repetition nudges the model slightly closer to the right answers.

[Figure: a circular diagram of the training loop - Predict, Compare, Adjust, Repeat - with arrows connecting each step in a cycle.]
The training loop is the heartbeat of AI learning - predict, compare, adjust, and repeat.
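The four steps above can be sketched in a few lines of plain Python. This is a toy example, not any particular library's API: a single weight `w` learns to match the rule "output = 2 × input".

```python
# Toy training loop: learn a weight w so that prediction = w * x matches y = 2 * x.
# All names and numbers here are illustrative, not from a real framework.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer) pairs
w = 0.0              # start with an uninformed weight
learning_rate = 0.05

for step in range(200):                    # the loop runs many times
    for x, y in data:
        prediction = w * x                 # 1. Predict
        error = prediction - y             # 2. Compare: how far off?
        w -= learning_rate * error * x     # 3. Adjust the weight to reduce error
                                           # 4. Repeat with the next example

print(round(w, 3))  # prints 2.0 - the model has learned the rule
```

Each pass nudges `w` a little closer to 2.0; no single step fixes it, but thousands of small corrections do.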
๐Ÿคฏ

Training GPT-4 reportedly cost over $100 million in computing power alone. The training loop ran across thousands of specialised chips for months.

Loss Functions: How Wrong Is the Model?

After each prediction, we need a way to measure how wrong the model was. This measurement is called the loss (or cost), and the formula that calculates it is the loss function.

  • Low loss = the prediction was close to the correct answer.
  • High loss = the prediction was far off.

Think of it like a dartboard. The bullseye is the correct answer. The loss is the distance from where your dart landed to the bullseye. The goal of training is to minimise that distance over time.

Common loss functions include:

  • Mean Squared Error (MSE) - measures the average squared distance between predictions and actual values. Used for predicting numbers.
  • Cross-Entropy Loss - measures how well predicted probabilities match the true categories. Used for classification tasks.
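Both loss functions fit in a few lines of plain Python. This is a minimal sketch of the formulas themselves; the function names are ours, for illustration only.

```python
import math

def mse(predictions, targets):
    """Mean Squared Error: the average squared distance from the truth."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def cross_entropy(predicted_probs, true_index):
    """Cross-entropy for one example: penalise low probability on the true class."""
    return -math.log(predicted_probs[true_index])

print(mse([2.5, 0.0], [3.0, -0.5]))       # 0.25 - both darts landed half a unit off
print(cross_entropy([0.7, 0.2, 0.1], 0))  # ~0.357 - confident and correct, so low loss
```

Notice that cross-entropy would explode if the model put near-zero probability on the correct class - being confidently wrong is punished hardest.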
๐Ÿง Quick Check

What does a loss function measure in AI training?


Epochs: How Many Times Through the Data?

One complete pass through the entire training dataset is called an epoch. Training typically involves many epochs - the model sees the same data multiple times, getting slightly better each round.

  • Epoch 1: The model makes many mistakes; loss is high.
  • Epoch 10: The model has improved significantly; loss is dropping.
  • Epoch 50: Improvements slow down; the model is nearing its best.
  • Epoch 200: The model might start memorising - which brings us to our next topic.
๐Ÿค”
Think about it:

Revising for an exam is like running epochs. The first read-through is confusing, but each review builds understanding. However, if you re-read the same notes a hundred times, you might memorise the exact wording without truly understanding the concepts. AI has the same problem.

Overfitting: The Student Who Memorises

Overfitting is one of the most common problems in AI training. It happens when the model learns the training data too well - including its noise and quirks - and fails to perform on new, unseen data.

Imagine a student who memorises every past exam paper word for word. They score perfectly on old papers but struggle when the questions change even slightly. The student has not learned the subject - they have memorised the answers.

Signs of overfitting:

  • Training accuracy is very high (e.g., 99%).
  • Performance on new data is much worse (e.g., 75%).
  • The model has essentially memorised the training examples.
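One rough, purely illustrative way to spot that pattern in code is to compare the two accuracy numbers directly (the function name and threshold are ours, not a standard diagnostic):

```python
def looks_overfit(train_accuracy, val_accuracy, gap_threshold=0.1):
    """Flag overfitting when training accuracy far outstrips validation accuracy."""
    return train_accuracy - val_accuracy > gap_threshold

print(looks_overfit(0.99, 0.75))  # True  - memorising, not learning
print(looks_overfit(0.85, 0.82))  # False - the small gap is normal
```

In practice you would watch this gap across epochs rather than at a single point, but the idea is the same: a large, growing gap is the classic overfitting signature.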
๐Ÿ’ก

The goal of training is not to score perfectly on data the model has already seen. It is to perform well on data it has never seen before. That is the true test of learning.

Underfitting: The Student Who Doesn't Study

The opposite problem is underfitting. This happens when the model has not learned enough from the data. It performs poorly on both training data and new data.

Causes of underfitting include:

  • The model is too simple for the complexity of the problem.
  • Training stopped too early (not enough epochs).
  • The features in the data are not informative enough.

If overfitting is like memorising past papers, underfitting is like walking into the exam having barely opened the textbook.

๐Ÿง Quick Check

A model scores 98% accuracy on training data but only 60% on new data. What is the most likely problem?

Validation and Test Sets

To detect overfitting and underfitting, we split our data into three parts:

| Set | Purpose | When Used |
|-----|---------|-----------|
| Training set | The model learns from this data | During training |
| Validation set | Used to check progress and tune settings | During training |
| Test set | Final evaluation on completely unseen data | After training |

A common split is 70% training, 15% validation, and 15% test. The model never sees the test set until the very end - it is the final exam.
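A 70/15/15 split can be sketched in a few lines of Python. The shuffle matters: without it, any ordering in the data (say, all examples of one class first) would leak into the split.

```python
import random

def split_data(examples, seed=42):
    """Shuffle, then split into 70% training / 15% validation / 15% test.
    Illustrative only - real libraries offer ready-made versions of this."""
    examples = examples[:]                 # copy so we don't mutate the caller's list
    random.Random(seed).shuffle(examples)  # fixed seed makes the split reproducible
    n = len(examples)
    train_end = int(n * 0.70)
    val_end = int(n * 0.85)
    return examples[:train_end], examples[train_end:val_end], examples[val_end:]

train, val, test = split_data(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

Every example lands in exactly one of the three sets - the model must never train on anything from the test set.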

๐Ÿค”
Think about it:

The validation set is like a practice test you take between study sessions. It tells you how well you are learning without spoiling the real exam. If your practice-test scores start dropping while your scores on the notes you memorised keep rising, you know something is wrong.

When to Stop Training

Knowing when to stop is crucial. Train too little and the model underfits. Train too much and it overfits. The sweet spot is where validation loss stops improving.

A technique called early stopping automates this:

  1. Monitor the validation loss after each epoch.
  2. If it has not improved for a set number of epochs (called patience), stop training.
  3. Roll back to the weights from the best epoch.

This prevents the model from going past the point of useful learning and slipping into memorisation.
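The patience logic above can be sketched like this. Here `val_losses` is a pre-recorded list standing in for a real training run, and the function returns the epoch whose weights we would roll back to:

```python
def best_epoch_with_early_stopping(val_losses, patience=3):
    """Stop once validation loss hasn't improved for `patience` epochs in a row.
    Illustrative sketch - real frameworks also save and restore the best weights."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch   # remember the best epoch so far
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                             # patience exhausted: stop training
    return best_epoch                             # roll back to these weights

# Loss improves until epoch 3, then plateaus - so we stop and keep epoch 3.
print(best_epoch_with_early_stopping([1.0, 0.8, 0.6, 0.5, 0.55, 0.56, 0.6]))  # 3
```

The `patience` setting is a trade-off: too small and a brief plateau stops training prematurely; too large and the model drifts into overfitting before you notice.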

๐Ÿง Quick Check

What is 'early stopping' in AI training?

๐Ÿคฏ

Some modern training runs use a technique called learning rate scheduling, which gradually reduces how much the weights change with each step - like taking smaller and more careful steps as you approach the summit of a mountain.
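As a sketch, one common schedule - step decay, not tied to any particular framework - halves the learning rate at fixed intervals:

```python
def decayed_learning_rate(initial_lr, epoch, decay=0.5, every=10):
    """Step decay: multiply the learning rate by `decay` once every `every` epochs."""
    return initial_lr * (decay ** (epoch // every))

# Epochs 0-9 use 0.1, epochs 10-19 use 0.05, epochs 20-29 use 0.025, and so on.
print(decayed_learning_rate(0.1, 0))   # 0.1
print(decayed_learning_rate(0.1, 25))
```

Early on, big steps help the model improve quickly; near the end, smaller steps stop it from overshooting the best weights - the careful footsteps near the summit.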

Key Takeaways

  • The training loop repeats: predict โ†’ compare โ†’ adjust โ†’ repeat.
  • A loss function measures how far predictions are from the truth.
  • An epoch is one full pass through the training data.
  • Overfitting means memorising data; underfitting means not learning enough.
  • Data is split into training, validation, and test sets.
  • Early stopping prevents training from going too far.

In the final lesson, we will explore the ethical dimensions of AI - bias, fairness, privacy, and what responsible AI looks like.