Chapter 28: Example 1 Training
What is “Example 1 Training”?
Example 1 Training = the model.fit() call in the classic first TensorFlow.js example.
It is the moment when:
- The single-neuron model (1 weight + 1 bias) starts with random values
- It sees the 10 training examples (xs and ys) over and over
- It makes predictions → compares them to real ys → calculates how wrong it is (loss)
- It nudges the weight & bias a tiny bit in the direction that reduces the loss (gradient descent)
- Repeats this many times (epochs) until the predictions become very close to the true pattern
After training finishes, the model has learned:
- weight ≈ 2.0 (slope)
- bias ≈ 0.0–0.3 (small offset from the noise)
So when you give it x = 15 → it predicts ≈ 30.
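Stripped of the page and the logging, the whole example is only a few lines. Here is a minimal sketch (it uses clean y = 2x data for brevity; the full listing below uses the noisy version):

```js
// Minimal sketch of Example 1: learn y ≈ 2x, then predict an unseen x.
async function example1() {
  const xs = tf.tensor2d([[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]], [10, 1]);
  const ys = tf.tensor2d([[2],[4],[6],[8],[10],[12],[14],[16],[18],[20]], [10, 1]);

  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] })); // 1 weight + 1 bias
  model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

  await model.fit(xs, ys, { epochs: 300 });                  // "Example 1 Training"

  model.predict(tf.tensor2d([[15]], [1, 1])).print();        // ≈ 30
}
```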
Why This Training Step Feels Magical
- Before training → the model guesses randomly → predictions are terrible (loss maybe 100–300)
- During training → you watch the loss drop dramatically (hundreds → tens → single digits → decimals)
- After training → predictions are accurate even for numbers it never saw → that's generalization
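That loss number is nothing mysterious: it is just the mean squared error, which you can compute by hand. A tiny sketch, with made-up illustrative numbers:

```js
// Mean squared error: average of (prediction − target)² over all examples.
function meanSquaredError(preds, targets) {
  let sum = 0;
  for (let i = 0; i < preds.length; i++) {
    const err = preds[i] - targets[i];
    sum += err * err;
  }
  return sum / preds.length;
}

// An untrained model (weight ≈ -0.34) predicts badly → big loss:
console.log(meanSquaredError([-0.34, -0.68, -1.02], [2.1, 4.3, 5.9])); // ≈ 26.2
// A trained model (weight ≈ 2.0) predicts well → tiny loss:
console.log(meanSquaredError([2.0, 4.0, 6.0], [2.1, 4.3, 5.9]));       // ≈ 0.037
```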
Full Code with Detailed Training Focus
This is the same Example 1, but with extra logging so you really see what’s happening during training.
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>TensorFlow.js – Example 1 Training Explained</title>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
  <style>
    body { font-family: Arial, sans-serif; text-align: center; padding: 40px; background: #f8f9fa; }
    h1 { color: #4285f4; }
    button { padding: 14px 32px; font-size: 1.3em; background: #4285f4; color: white;
             border: none; border-radius: 8px; cursor: pointer; }
    button:hover { background: #3367d6; }
    #status { font-size: 1.5em; margin: 30px; color: #444; min-height: 80px; }
    #training-log { background: #1e1e1e; color: #d4d4d4; padding: 20px; border-radius: 10px;
                    max-width: 800px; margin: 20px auto; text-align: left;
                    font-family: monospace; max-height: 400px; overflow-y: auto;
                    white-space: pre-wrap; }
  </style>
</head>
<body>
  <h1>Example 1 Training: Watch the Model Learn y ≈ 2x</h1>
  <p style="max-width:700px; margin:20px auto; font-size:1.1em;">
    Click below → watch loss drop live in the box and console (F12).<br>
    The model starts dumb → ends up predicting almost perfectly.
  </p>
  <button onclick="runTraining()">Start Training – Watch Live!</button>
  <div id="status">Waiting...</div>
  <div id="training-log">Training log will appear here...</div>

  <script>
    function log(msg) {
      console.log(msg);
      const logDiv = document.getElementById('training-log');
      logDiv.textContent += '\n' + msg;   // pre-wrap CSS turns '\n' into real line breaks
      logDiv.scrollTop = logDiv.scrollHeight;
    }

    async function runTraining() {
      const status = document.getElementById('status');
      status.innerHTML = 'Preparing data...';

      // ── Data ────────────────────────────────────────────────────────
      const xs = tf.tensor2d([[1],[2],[3],[4],[5],[6],[7],[8],[9],[10]], [10, 1]);
      const ys = tf.tensor2d(
        [[2.1],[4.3],[5.9],[8.2],[10.1],[12.4],[14.0],[16.3],[18.2],[20.1]], [10, 1]);
      log("Training data ready (10 examples, y ≈ 2x + small noise)");

      // ── Model ───────────────────────────────────────────────────────
      const model = tf.sequential();
      model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
      model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });
      log("Model ready → 1 neuron (2 parameters: weight + bias)");
      model.summary(); // prints in console

      // Show initial weights (the weight is random; the bias starts at 0)
      const initialWeights = model.layers[0].getWeights();
      const initW = (await initialWeights[0].data())[0]; // data() returns a flat array
      const initB = (await initialWeights[1].data())[0];
      log(`Before training → weight = ${initW.toFixed(4)}, bias = ${initB.toFixed(4)}`);

      status.innerHTML = 'Training... (loss should drop quickly)';

      // ── This is the heart: TRAINING ─────────────────────────────────
      await model.fit(xs, ys, {
        epochs: 300,
        callbacks: {
          onEpochEnd: (epoch, logs) => {
            // Print every 30 epochs + the final epoch
            if (epoch % 30 === 0 || epoch === 299) {
              log(`Epoch ${epoch.toString().padStart(3)}: loss = ${logs.loss.toFixed(6)}`);
            }
          }
        }
      });

      status.innerHTML = 'Training finished! 🎉';

      // Show final learned weights
      const finalWeights = model.layers[0].getWeights();
      const finalW = (await finalWeights[0].data())[0];
      const finalB = (await finalWeights[1].data())[0];
      log(`After training → weight = ${finalW.toFixed(4)} (should be ≈ 2.0)`);
      log(`                 bias   = ${finalB.toFixed(4)} (should be ≈ 0.0–0.3)`);

      // Predict on a value the model never saw
      const testX = tf.tensor2d([[15]], [1, 1]);
      const pred = model.predict(testX);
      const predVal = (await pred.data())[0];
      log(`\nPrediction for x = 15: ${predVal.toFixed(4)}`);
      log('Expected ≈ 30');

      // Cleanup
      xs.dispose(); ys.dispose(); testX.dispose(); pred.dispose();
    }
  </script>
</body>
</html>
```
What You Should See When You Click “Start Training”
Typical console / log output:
```
Training data ready (10 examples, y ≈ 2x + small noise)
Model ready → 1 neuron (2 parameters: weight + bias)
_________________________________________________________________
Layer (type)                 Output shape              Param #
=================================================================
dense_Dense1 (Dense)         [null,1]                  2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
Before training → weight = -0.3421, bias = 0.0000
Epoch   0: loss = 198.765432
Epoch  30: loss = 12.345678
Epoch  60: loss = 1.234567
Epoch  90: loss = 0.345678
...
Epoch 270: loss = 0.012345
Epoch 299: loss = 0.008901
After training → weight = 1.9987 (should be ≈ 2.0)
                 bias   = 0.0214 (should be ≈ 0.0–0.3)

Prediction for x = 15: 29.9923
Expected ≈ 30
```
The exact numbers vary from run to run because the starting weight is random; the shape of the curve (big loss → tiny loss) is what matters.
Step-by-Step: What Happens During Training (Teacher Explanation)
- Before any epoch
  - The weight starts random (e.g., weight = -0.34); the bias starts at 0 (the TF.js default initializer)
  - The model predicts nonsense → huge errors → high loss (100–300)
- Each epoch
  - The model sees all 10 examples once
  - For each example: predicts ŷ → computes error = (ŷ – y)²
  - Averages the errors → loss
  - Computes gradients (how much, and in which direction, to nudge weight & bias)
  - Updates: new_weight = old_weight – learning_rate × gradient (written out by hand in the sketch after this list)
- Why loss drops fast at first, then slowly
  - Early: big mistakes → big gradients → big updates → fast improvement
  - Later: already quite good → small mistakes → tiny gradients → slow fine-tuning
- After 300 epochs
  - Weight very close to 2.0
  - Bias close to 0 (or a small offset matching the average of the noise)
  - The model can now predict correctly even for x = 15 (which it never saw)
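To demystify what fit() is doing in that loop, here is the same update rule written out by hand, with no TensorFlow at all. This is a simplified sketch (plain full-batch gradient descent with the MSE gradient derived manually), not the literal code fit() executes:

```js
// Manual gradient descent for ŷ = w·x + b with mean-squared-error loss.
const xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const ys = [2.1, 4.3, 5.9, 8.2, 10.1, 12.4, 14.0, 16.3, 18.2, 20.1];

let w = Math.random() * 2 - 1;   // random starting weight, like the model's init
let b = 0;                       // TF.js initializes the bias to 0
const lr = 0.01;                 // learning rate (a typical small value for SGD)

for (let epoch = 0; epoch < 300; epoch++) {
  let gradW = 0, gradB = 0, loss = 0;
  for (let i = 0; i < xs.length; i++) {
    const pred = w * xs[i] + b;          // predict ŷ
    const err = pred - ys[i];            // how wrong?
    loss  += err * err;                  // squared error
    gradW += 2 * err * xs[i];            // d(err²)/dw
    gradB += 2 * err;                    // d(err²)/db
  }
  const n = xs.length;
  w -= lr * (gradW / n);                 // new_weight = old_weight − lr × gradient
  b -= lr * (gradB / n);
  if (epoch % 30 === 0) console.log(`Epoch ${epoch}: loss = ${(loss / n).toFixed(6)}`);
}
console.log(`w = ${w.toFixed(4)} (≈ 2.0), b = ${b.toFixed(4)} (small offset)`);
```

Paste it into any browser console: the loss trace closely mirrors the fit() log above.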
Final Teacher Summary
Example 1 Training = the model.fit() step in the classic TensorFlow.js Hello World.
- Starts with random weight & bias
- Sees the 10 noisy examples many times (epochs)
- Repeatedly: predict → measure loss → adjust weights via gradient descent
- Ends with weight ≈ 2.0, bias ≈ 0 → learned y ≈ 2x
This tiny training loop teaches you everything fundamental:
- Data → model → compile → fit → evaluate/predict
- Loss decreasing = learning happening
- Generalization (predicting unseen x=15 correctly)
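You can verify the generalization claim directly. A small sketch, assuming `model` is the trained model and is still in scope (e.g., run at the end of runTraining()):

```js
// Predict on xs the model never saw during training.
const unseen = tf.tensor2d([[15], [25], [100]], [3, 1]);
const predTensor = model.predict(unseen);
const preds = await predTensor.data();
console.log(Array.from(preds));  // ≈ [30, 50, 200] if weight ≈ 2 and bias ≈ 0
unseen.dispose(); predTensor.dispose();
```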
In Hyderabad in 2026, this exact training pattern is still the first thing every beginner runs, because once you watch that loss drop live, you believe in neural networks.
Understood completely? 🌟
Want to go further?
- Add a live graph of the loss using tfjs-vis?
- Expand the manual weight-update sketch above into a full lesson (no fit() at all)?
- Train this same model on Hyderabad flat prices instead?
Just say the word: the next class is ready! 🚀
