Chapter 28: Example 1 Training

What is “Example 1 Training”?

Example 1 Training = the model.fit() call in the classic first TensorFlow.js example.

It is the moment when:

  • The single-neuron model (1 weight + 1 bias) starts with random values
  • It sees the 10 training examples (xs and ys) over and over
  • It makes predictions → compares them to real ys → calculates how wrong it is (loss)
  • It gently adjusts the weight & bias a tiny bit (using gradient descent)
  • Repeats this many times (epochs) until the predictions become very close to the true pattern

After training finishes, the model has learned:

weight ≈ 2.0 (slope)
bias ≈ 0.0–0.2 (small offset)

So when you give it x = 15 → it predicts ≈ 30.
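
In code, that whole story is a handful of lines. Here's a minimal sketch of the classic Example 1 setup being described (it assumes @tensorflow/tfjs is already loaded; the variable names and 300-epoch choice are illustrative, not a canonical listing):

```js
// Minimal sketch of Example 1 (assumes the tf global from @tensorflow/tfjs).
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] })); // 1 weight + 1 bias
model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

// The 10 training examples: y = 2x
const xs = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 1]);
const ys = tf.tensor2d([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], [10, 1]);

model.fit(xs, ys, { epochs: 300 }).then(() => {        // "Example 1 Training"
  model.predict(tf.tensor2d([[15]], [1, 1])).print();  // ≈ [[30]]
});
```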

Why This Training Step Feels Magical

  • Before training → the model guesses randomly → predictions are terrible (loss maybe 100–300)
  • During training → you watch the loss drop dramatically (from hundreds → tens → single digits → decimals)
  • After training → predictions are accurate even for numbers it never saw → that's generalization

Full Code with Detailed Training Focus

This is the same Example 1, but with extra logging so you really see what’s happening during training.

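Here's a self-contained sketch of such a page. The CDN build, element IDs, SGD learning rate of 0.01, and logging cadence are my assumptions for illustration, not the exact original listing:

```html
<!DOCTYPE html>
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
</head>
<body>
  <button id="train">Start Training</button>
  <pre id="log"></pre>
  <script>
    const logEl = document.getElementById('log');
    const log = (msg) => { logEl.textContent += msg + '\n'; };

    document.getElementById('train').onclick = async () => {
      // The single neuron: 1 weight + 1 bias, starting at random values
      const model = tf.sequential();
      model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
      model.compile({ optimizer: tf.train.sgd(0.01), loss: 'meanSquaredError' });

      // 10 noisy examples of y ≈ 2x
      const xsData = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
      const ysData = xsData.map((x) => 2 * x + (Math.random() - 0.5));
      const xs = tf.tensor2d(xsData, [10, 1]);
      const ys = tf.tensor2d(ysData, [10, 1]);

      // Log the loss at epoch 1 and every 50th epoch so you can watch it fall
      await model.fit(xs, ys, {
        epochs: 300,
        callbacks: {
          onEpochEnd: (epoch, logs) => {
            if (epoch === 0 || (epoch + 1) % 50 === 0) {
              log(`Epoch ${epoch + 1}: loss = ${logs.loss.toFixed(4)}`);
            }
          },
        },
      });

      // Inspect what the model actually learned
      const [w, b] = model.getWeights();
      log(`weight ≈ ${w.dataSync()[0].toFixed(3)}, bias ≈ ${b.dataSync()[0].toFixed(3)}`);
      const pred = model.predict(tf.tensor2d([[15]], [1, 1]));
      log(`prediction for x = 15: ${pred.dataSync()[0].toFixed(2)}`);
    };
  </script>
</body>
</html>
```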

What You Should See When You Click “Start Training”

Typical console / log output:

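Something along these lines, using the sketch above (the numbers here are illustrative; exact values change every run because the starting weights and the noise are random):

```text
Epoch 1: loss = 183.2051
Epoch 50: loss = 0.8671
Epoch 100: loss = 0.2143
Epoch 150: loss = 0.1208
Epoch 200: loss = 0.0986
Epoch 250: loss = 0.0912
Epoch 300: loss = 0.0890
weight ≈ 1.987, bias ≈ 0.094
prediction for x = 15: 29.90
```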

Step-by-Step: What Happens During Training (Teacher Explanation)

  1. Before any epoch
    • Weights are random (e.g., weight = -0.34, bias = 0.57)
    • Model predicts nonsense → huge errors → high loss (100–300)
  2. Each epoch
    • Model sees all 10 examples once
    • For each example: predicts ŷ → computes the squared error (ŷ – y)²
    • Averages the squared errors → that average is the loss (mean squared error)
    • Computes gradients (how much to nudge weight & bias)
    • Updates: new_weight = old_weight – learning_rate × gradient
    • Loss drops, because each update moves the predictions closer to the targets (a hand-rolled version of one such update follows this list)
  3. Why loss drops fast at first, then slowly
    • Early: big mistakes → big gradients → big updates → fast improvement
    • Later: already quite good → small mistakes → tiny gradients → slow fine-tuning
  4. After 300 epochs
    • Weight very close to 2.0
    • Bias close to 0 (or small offset matching the noise average)
    • Model can now predict correctly even for x = 15 (never seen)
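
To make step 2 concrete, here is a tiny plain-JavaScript sketch of a single hand-rolled gradient-descent update, assuming MSE loss and a smaller toy dataset (no model.fit() involved; the starting values and learning rate are illustrative):

```js
// One hand-rolled gradient-descent step for ŷ = w·x + b with MSE loss.
const xs = [1, 2, 3, 4, 5];
const ys = xs.map((x) => 2 * x);       // target pattern y = 2x
let w = -0.34, b = 0.57;               // "random" start, as in step 1
const lr = 0.01;                       // learning rate

let loss = 0, dw = 0, db = 0;
for (let i = 0; i < xs.length; i++) {
  const yHat = w * xs[i] + b;          // predict
  const err = yHat - ys[i];            // how wrong
  loss += err * err;                   // squared error
  dw += 2 * err * xs[i];               // d(err²)/dw
  db += 2 * err;                       // d(err²)/db
}
loss /= xs.length; dw /= xs.length; db /= xs.length;

w -= lr * dw;                          // new_weight = old_weight − lr × gradient
b -= lr * db;
console.log({ loss, w, b });           // big early loss, w nudged toward 2.0
```

Run this loop a few hundred times and you are doing by hand exactly what model.fit() does for you.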

Final Teacher Summary

Example 1 Training = the model.fit() step in the classic TensorFlow.js Hello World.

  • Starts with random weight & bias
  • Sees the 10 noisy examples many times (epochs)
  • Repeatedly: predict → measure loss → adjust weights via gradient descent
  • Ends with weight ≈ 2.0, bias ≈ 0 → learned y ≈ 2x

This tiny training loop teaches you everything fundamental:

  • Data → model → compile → fit → evaluate/predict
  • Loss decreasing = learning happening
  • Generalization (predicting unseen x=15 correctly)

In Hyderabad 2026, this exact training pattern is still the first thing every beginner runs — because once you watch this loss drop live, you believe in neural networks.

Understood completely? 🌟

Want next?

  • Add live graph of loss using tfjs-vis?
  • Show manual weight updates (no fit(), just for understanding)?
  • Train this model on Hyderabad flat prices instead?

Just say the word — next class is ready! 🚀
