Chapter 21: TensorFlow Operations
This chapter is about TensorFlow Operations (often called tf ops, or simply operations).
This is a very important concept because TensorFlow is basically a huge library of these operations — everything you do in a model (adding numbers, multiplying matrices, convolutions for images, activations like ReLU, softmax for probabilities) is built from these ops. I’ll explain it like your favorite teacher: slowly, with intuition first, lots of real examples (code + output), analogies from everyday life, and why ops matter in 2026.
No heavy theory at first — we’ll build it like a story.
Step 1: What Exactly are “TensorFlow Operations”?
TensorFlow Operations (tf ops) are the basic building blocks — pre-written, highly optimized mathematical functions that take tensors as input, do some computation, and produce tensors as output.
Think of them like kitchen tools:
- Knife = tf.math.add (addition)
- Blender = tf.nn.conv2d (convolution for images)
- Oven = tf.nn.softmax (turns raw scores into probabilities)
You don’t write the low-level math yourself (like loops in C++); you call these ready-made ops, and TensorFlow runs them super-fast on CPU/GPU/TPU.
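For example, here is a single op call on tiny made-up tensors (the values are chosen purely for illustration):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([10.0, 20.0, 30.0])

# One op call: TensorFlow runs its optimized kernel for you
z = tf.math.add(x, y)
print(z)  # tf.Tensor([11. 22. 33.], shape=(3,), dtype=float32)
```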
Key points:
- Ops don't mutate their inputs (tensors are immutable); they create new output tensors.
- Ops are differentiable (most of them) → TensorFlow can automatically compute gradients (backpropagation magic!).
- Ops are grouped into modules like:
- tf.math → basic math (add, mul, sin, exp…)
- tf.nn → neural network specific (conv, relu, softmax, dropout…)
- tf.linalg → linear algebra (matmul, inv, eig…)
- tf.reduce_* → reductions (sum, mean, max…); these live directly under tf / tf.math, not in a separate tf.reduce module
- tf.random → random numbers
- And many more (tf.image, tf.signal, tf.strings…)
In TensorFlow 2.x (2026 standard): ops run in eager mode by default — you see results immediately like normal Python.
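Here is a tiny sketch, with made-up values, that demonstrates all three points at once: ops return new tensors, gradients come almost for free via tf.GradientTape, and results print immediately in eager mode.

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])

# Immutable: the op returns a NEW tensor; x itself is unchanged
y = tf.math.square(x)
print(x.numpy())  # [1. 2. 3.]
print(y.numpy())  # [1. 4. 9.]

# Differentiable: TensorFlow records ops and computes gradients
w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss = tf.math.square(w) + 3.0 * w   # loss = w^2 + 3w
grad = tape.gradient(loss, w)
print(grad.numpy())  # 7.0, since d(loss)/dw = 2w + 3 at w = 2

# Eager mode: every print above shows a real value right away, like normal Python
```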
Step 2: Simple Everyday Analogy – Cooking Biryani
Imagine making Hyderabadi biryani:
- Add spices (tf.math.add)
- Multiply flavors (tf.math.multiply — element-wise)
- Mix layers (tf.linalg.matmul — matrix multiply for attention)
- Apply heat non-linearly (tf.nn.relu — “only keep positive flavor”)
- Normalize portions (tf.nn.softmax — “make probabilities sum to 1”)
- Reduce to taste (tf.reduce_mean — average flavor check)
Each step is a TensorFlow op — you chain them → final dish (model output).
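A quick sketch of such a chain, with made-up numbers purely to show how ops compose (not a real recipe or model):

```python
import tensorflow as tf

x = tf.constant([[1.0, -2.0], [3.0, 4.0]])   # "ingredients"
w = tf.constant([[0.5, 0.5], [0.5, 0.5]])    # "layers" to mix in

mixed  = tf.linalg.matmul(x, w)   # mix layers
spiced = tf.math.add(mixed, 1.0)  # add spices (broadcast a scalar)
heated = tf.nn.relu(spiced)       # keep only the positive "flavor"
served = tf.nn.softmax(heated)    # portions that sum to 1 per row
taste  = tf.reduce_mean(served)   # average flavor check

print(taste.numpy())  # 0.5 (each softmax row sums to 1, so the mean of 4 entries is 0.5)
```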
Step 3: Basic Categories with Real Code Examples
Let’s open Python (or Colab) and see ops live.
Import first (always):
```python
import tensorflow as tf
print(tf.__version__)  # e.g., 2.16 or 2.17 in 2026
```
1. Basic Arithmetic (tf.math / operator overloads)
```python
a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])

print("a + b =\n", a + b)   # operator overload for tf.math.add
# [[ 6.  8.]
#  [10. 12.]]

print("a * b (element-wise) =\n", a * b)   # tf.math.multiply
# [[ 5. 12.]
#  [21. 32.]]

print("5 * a =\n", 5 * a)   # broadcasting
# [[ 5. 10.]
#  [15. 20.]]

print("a @ b (matrix multiply) =\n", a @ b)   # tf.linalg.matmul
# [[19. 22.]
#  [43. 50.]]
```
2. Element-wise Math (tf.math)
```python
x = tf.constant([-2., -1., 0., 1., 2.])

print("Absolute value:", tf.math.abs(x))
# [2. 1. 0. 1. 2.]

print("Square:", tf.math.square(x))
# [4. 1. 0. 1. 4.]

print("Exponential:", tf.math.exp(x))   # e^x
# [0.13533528 0.36787945 1. 2.7182817 7.389056 ]

print("Log (natural):", tf.math.log(tf.math.abs(x) + 1e-10))   # add a tiny epsilon to avoid log(0)
```
3. Reductions (tf.reduce_*)
These “collapse” tensors — very common in loss & metrics.
```python
scores = tf.constant([[90., 85., 95.],
                      [78., 92., 88.]])

print("Mean score:", tf.reduce_mean(scores))   # overall average
# tf.Tensor(88.0, shape=(), dtype=float32)

print("Mean per student:", tf.reduce_mean(scores, axis=1))
# [90. 86.]

print("Max score:", tf.reduce_max(scores))
# 95.0

print("Sum of all:", tf.reduce_sum(scores))
# 528.0
```
4. Neural Network Ops (tf.nn – Super Important!)
```python
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 3.0, -1.0]])

# Softmax → probabilities (sum to 1 per row)
probs = tf.nn.softmax(logits)
print("Softmax probabilities:\n", probs)
# Roughly [[0.659, 0.242, 0.099],
#          [0.075, 0.909, 0.017]]

# ReLU activation (most common in hidden layers)
activ = tf.nn.relu(logits)
print("ReLU:\n", activ)
# [[2.  1.  0.1]
#  [0.5 3.  0. ]]

# Dropout (used during training to prevent overfitting)
dropped = tf.nn.dropout(activ, rate=0.3)   # each element zeroed with probability 0.3, survivors scaled by 1/0.7
```
5. Convolution (tf.nn.conv2d – Images/Vision)
```python
# Fake batch: 1 image, 28x28 pixels, 1 channel
image = tf.random.normal([1, 28, 28, 1])

# 32 filters, each 3x3, operating on 1 input channel
filters = tf.random.normal([3, 3, 1, 32])

conv = tf.nn.conv2d(image, filters, strides=1, padding='SAME')
print("After conv shape:", conv.shape)   # (1, 28, 28, 32)
```
Step 4: Quick Summary Table (Keep This in Notes!)
| Category | Module/Example Ops | What It Does | Common Use Case |
|---|---|---|---|
| Arithmetic | tf.math.add, *, @, tf.math.square | Add, multiply, matmul, square | Everywhere — forward pass |
| Element-wise | tf.math.abs, exp, log, sigmoid | Apply func to every element | Activations, preprocessing |
| Reductions | tf.reduce_sum, mean, max, min | Collapse tensor (sum/avg over axis) | Loss calculation, metrics |
| Neural Net | tf.nn.relu, softmax, conv2d, dropout | Activation, prob, convolution, regularization | Layers in models |
| Linear Algebra | tf.linalg.matmul, inv, eig | Matrix ops | Attention, PCA |
| Random | tf.random.normal, uniform | Generate random tensors | Weight init, dropout, augmentation |
Step 5: Teacher’s Final Words (2026 Perspective)
TensorFlow Operations = the vocabulary of TensorFlow — every model you build is just a chain of these ops flowing data from input to output.
- In Keras → high-level (layers hide ops)
- In low-level TF → you call ops directly (custom models, research)
In 2026: Most people use Keras (tf.keras.layers.Dense calls tf.linalg.matmul + tf.nn.bias_add + activation internally) — but understanding ops helps debug, optimize, or write custom layers.
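For example, here is a minimal sketch (not the actual Keras source) of the raw-op pattern a Dense-style layer follows; the shapes and variable names are chosen just for illustration:

```python
import tensorflow as tf

# Toy "Dense layer" built from raw ops: y = relu(x @ W + b)
batch = tf.random.normal([4, 8])             # 4 samples, 8 features each
W = tf.Variable(tf.random.normal([8, 16]))   # weights: 8 features -> 16 units
b = tf.Variable(tf.zeros([16]))              # one bias per unit

z = tf.linalg.matmul(batch, W)   # weighted sum
z = tf.nn.bias_add(z, b)         # add the bias vector
y = tf.nn.relu(z)                # activation
print(y.shape)                   # (4, 16)
```

A Keras Dense(16, activation="relu") layer wraps essentially this computation, plus weight creation and tracking for you.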
Got the idea now? 🌟
Questions?
- Want full MNIST code using only ops (no Keras)?
- How ops work in tf.function / graph mode?
- Difference tf.math vs tf.nn?
Just say — next class ready! 🚀
