Chapter 30: AWS Provisioning


When people say “AWS provisioning” or “provision resources in AWS”, they are usually talking about the act of creating and configuring compute, storage, networking, or other cloud resources so your application can actually run.

But the word “provisioning” means very different things depending on which AWS service or model you are using, and this is where most beginners get confused.

Let me explain it like we are sitting together with a whiteboard in a Gachibowli café — slow, step-by-step, real analogies, Hyderabad startup examples, and clear comparison between different provisioning styles in 2026.

1. What Does “Provisioning” Actually Mean in AWS? (Plain Language First)

Provisioning = the process of requesting, creating, and setting up the cloud resources your application needs to run.

Resources = EC2 instances, S3 buckets, RDS databases, Lambda functions, VPC subnets, IAM roles, EKS clusters, containers on Fargate, etc.

Different AWS compute models have very different provisioning experiences:

| Model | Provisioning Style | How Long to Get Resource? | Who Manages Servers? | You Pay For… | Typical Hyderabad Use Case |
|---|---|---|---|---|---|
| EC2 (classic) | Manual / semi-automatic | Seconds to minutes | You | Instance running 24/7 | Long-running backend, custom software |
| ECS / EKS on EC2 | Manual cluster + auto-scaling | Minutes | You | EC2 instances | Containerized microservices (older style) |
| Fargate (ECS or EKS) | Almost instant (serverless) | Seconds | AWS | vCPU + memory per second | Modern container workloads |
| Lambda | Fully automatic (serverless) | Milliseconds (cold start) | AWS | Per request + duration (ms) | APIs, event triggers, background jobs |
| Aurora Serverless v2 | Automatic capacity units | Seconds | AWS | Per Aurora Capacity Unit-second | Variable database load |
| Redshift Serverless | Automatic Redshift Processing Units | Seconds | AWS | Per RPU-second | Ad-hoc analytics |

2. Three Main “Flavors” of Provisioning in AWS (2026 View)

Flavor 1: Traditional / Manual Provisioning (EC2 style – You control everything)

You explicitly say: “I want 4 servers with this CPU, this RAM, this OS, in this AZ”.

  • Console → EC2 → Launch instance → choose type → choose AMI → configure storage → launch
  • Or use CLI / CDK / Terraform: aws ec2 run-instances …

Hyderabad example: a small college project or legacy app. You provision 2 t4g.medium instances in ap-south-2a, install Node.js, run your API, and keep them running 24/7.

Pros: full control, predictable performance.
Cons: you pay even when idle, you patch the OS, and you scale manually or with Auto Scaling Groups.
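The CLI route mentioned above can be sketched like this. Note the AMI ID, key pair name, subnet ID, and Name tag are all placeholder values, not real resources; substitute your own before running:

```shell
# Sketch only: launch two t4g.medium instances with the AWS CLI.
# ami-xxxxxxxx, my-key, and subnet-xxxxxxxx are placeholders you
# must replace with a real AMI ID, key pair, and subnet in ap-south-2.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t4g.medium \
  --count 2 \
  --key-name my-key \
  --subnet-id subnet-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=college-api}]'
```

After this returns, the instances take a minute or two to reach the running state; that wait is the "provisioning" in the traditional sense.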

Flavor 2: Semi-Automatic / Managed Provisioning (ECS on EC2, EKS Managed Node Groups)

You provision a cluster → define how many nodes → AWS helps with scaling, patching, updates.

  • ECS cluster → add capacity providers (EC2 Auto Scaling Group)
  • EKS → create managed node group → AWS patches nodes, replaces unhealthy ones

Example: a mid-size fintech startup in the Financial District. You provision an ECS cluster with EC2 capacity, define an Auto Scaling Group (min 2, max 20 m7g.large instances), and ECS services scale tasks across those instances.

You still manage the EC2 fleet (but less than raw EC2).
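For the EKS flavor, adding a managed node group from the CLI looks roughly like this. The cluster name, subnet IDs, and node role ARN are illustrative placeholders, and the sketch assumes the EKS cluster and IAM role already exist:

```shell
# Sketch only: attach a managed node group to an existing EKS cluster.
# fintech-cluster, the subnets, and the role ARN are placeholders.
aws eks create-nodegroup \
  --cluster-name fintech-cluster \
  --nodegroup-name app-nodes \
  --scaling-config minSize=2,maxSize=20,desiredSize=2 \
  --instance-types m7g.large \
  --subnets subnet-aaaa subnet-bbbb \
  --node-role arn:aws:iam::123456789012:role/eksNodeRole
```

AWS then provisions the EC2 nodes, patches them, and replaces unhealthy ones, which is exactly the "semi-automatic" part.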

Flavor 3: Fully Automatic / Serverless Provisioning (Fargate, Lambda, Serverless DBs)

You never provision servers — you just say “run this container / function / database with X capacity” — AWS creates the compute behind the scenes instantly.

  • Fargate: You provision a task / pod → AWS spins up isolated VM/container environment → runs your Docker image → tears it down when done
  • Lambda: You provision a function → AWS runs it on demand (cold start or warm)
  • Aurora Serverless v2 / Redshift Serverless: You provision a minimum capacity → AWS adds/removes compute units automatically

Real Hyderabad example (2026 favorite): Your short-video app startup in Madhapur

  • You provision an ECS service on Fargate → task definition with 0.5 vCPU + 1 GB
  • During Sankranti festival → 10× upload traffic → Fargate automatically provisions additional task execution environments → scales to 50 tasks → scales back to 2 at night
  • You never saw, never patched, never paid rent for any EC2 instance
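The Fargate setup above can be sketched with two CLI calls: register a 0.5 vCPU / 1 GB task definition, then create the service. The cluster name, service name, container image, subnet, and security group are all placeholder assumptions:

```shell
# Sketch only: a 0.5 vCPU (512 CPU units) / 1 GB Fargate task definition.
# Image, cluster, subnet, and security group values are placeholders.
aws ecs register-task-definition \
  --family video-upload \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 512 --memory 1024 \
  --container-definitions '[{"name":"app","image":"myrepo/upload-api:latest","essential":true}]'

# Run 2 copies as a service; Fargate provisions the compute per task.
aws ecs create-service \
  --cluster madhapur-cluster \
  --service-name upload-api \
  --task-definition video-upload \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa],securityGroups=[sg-bbbb],assignPublicIp=ENABLED}'
```

Scaling from 2 to 50 tasks during the Sankranti spike is then just an auto-scaling policy (or a change to `--desired-count`); no instance launch is ever involved.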

Monthly bill: Only for seconds the tasks actually ran → often ₹3,000–10,000 instead of ₹20,000+ on EC2.
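To see where a number in that range could come from, here is a rough back-of-envelope calculation. The per-hour rates and the ₹84/USD exchange rate are illustrative assumptions, not current AWS list prices; always check the Fargate pricing page for real numbers:

```shell
# Rough Fargate cost estimate. The rates below are illustrative
# assumptions, NOT current AWS prices.
# Task size: 0.5 vCPU + 1 GB; average of 4 tasks running around the clock.
monthly_usd=$(awk 'BEGIN {
  vcpu_per_hr = 0.04048   # assumed USD per vCPU-hour
  gb_per_hr   = 0.004445  # assumed USD per GB-hour
  hours       = 24 * 30   # one month
  tasks       = 4         # average concurrent tasks
  printf "%.2f", tasks * hours * (0.5 * vcpu_per_hr + 1 * gb_per_hr)
}')
echo "Estimated monthly Fargate cost: \$${monthly_usd}"
# Roughly $71, i.e. about Rs 6,000 assuming ~Rs 84 per USD.
```

The key point is that the bill tracks average usage, not peak capacity: the festival spike to 50 tasks only costs for the hours it actually runs.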

3. Provisioning in Practice – Quick Comparison Table

| Question / Scenario | Traditional Provisioning (EC2) | Managed Provisioning (ECS/EKS on EC2) | Serverless Provisioning (Fargate, Lambda) |
|---|---|---|---|
| How do I ask for compute? | “Launch 4 t4g.medium” | “Create cluster + ASG min 2” | “Run this task with 0.5 vCPU” |
| Time to get resource | Seconds–minutes | Minutes (cluster warm-up) | Milliseconds–seconds |
| Idle cost | Full price 24/7 | Full price when nodes running | ₹0 when no tasks/functions |
| You manage OS/patching? | Yes | Yes (nodes) | No |
| Best for Hyderabad startup in 2026? | Legacy / long-running | Transition phase | New apps, variable traffic, small teams |

4. Quick Hands-On Feel (What Provisioning Looks Like)

Traditional (EC2): Console → EC2 → Launch instance → t4g.micro → launch → wait 1–2 min → SSH in

Semi-automatic (ECS on EC2): Console → ECS → Create cluster → EC2 Linux + Networking → create → wait 5–10 min for nodes → launch tasks

Serverless (Fargate): Console → ECS → Create cluster → Fargate → Create task definition → Create service → wait ~30 seconds → task running

Lambda (pure serverless): Console → Lambda → Create function → write code → create → invoke → runs in milliseconds
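The Lambda flow above can also be done entirely from the CLI. The function name, handler, role ARN are placeholders, and hello.zip is assumed to already contain a Python file `app.py` with a `handler` function:

```shell
# Sketch only: create and invoke a small Python Lambda function.
# The execution role ARN and hello.zip are placeholders you supply.
aws lambda create-function \
  --function-name hello-api \
  --runtime python3.12 \
  --handler app.handler \
  --role arn:aws:iam::123456789012:role/lambdaExecRole \
  --zip-file fileb://hello.zip

# Invoke it; the response body is written to response.json.
aws lambda invoke \
  --function-name hello-api \
  --payload '{}' \
  response.json
```

Notice there is no capacity question anywhere in these commands; that is what "fully automatic provisioning" means in practice.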

Summary Table – AWS Provisioning Cheat Sheet (2026)

| Question | Answer (Beginner-Friendly) |
|---|---|
| What is provisioning? | Creating & configuring cloud resources (instances, containers, functions, DBs) |
| Main styles? | Traditional (EC2), Managed (ECS/EKS on EC2), Serverless (Fargate/Lambda) |
| Fastest provisioning? | Serverless (Fargate, Lambda) — seconds or less |
| Zero idle cost? | Serverless models (Fargate, Lambda, Aurora Serverless) |
| Best for new Hyderabad startup? | Start with serverless provisioning (Fargate or Lambda) |

Teacher’s final note: Provisioning is the moment you go from “I have an idea” to “my app is actually running in the cloud”. In 2026, most smart Hyderabad teams start with serverless provisioning styles (Fargate, Lambda, Aurora Serverless) because they are fastest to launch, cheapest at low traffic, and scale automatically during festival or IPL spikes.

Got it? This is the “how do I actually get compute power?” lesson.

Next?

  • Deep dive on Fargate provisioning vs EC2 provisioning?
  • How to provision a full serverless app (API Gateway + Lambda + DynamoDB)?
  • Or provisioning with IaC (Terraform/CDK examples)?

Tell me — next whiteboard ready! 🚀⚙️
