This is the heart of AWS networking: once you understand CIDR and subnets, everything from ECS to EC2 to RDS setups will make total sense. Here’s the slice of the CloudFormation template we’ll be unpacking:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Full ECS + ECR + VPC Infra for Document Portal with secrets and logging

Parameters:
  VpcCIDR:
    Type: String
    Default: 10.0.0.0/16
  Subnet1CIDR:
    Type: String
    Default: 10.0.1.0/24
  Subnet2CIDR:
    Type: String
    Default: 10.0.2.0/24
  ImageUrl:
    Type: String
    Description: ECR Image URI to use for container
```
Let’s go step by step in very simple, real-world language.
Think of it like building a small city (your AWS VPC) with districts (subnets) where your houses (servers and containers) live.
A VPC (Virtual Private Cloud) is your private network inside AWS.
It’s like your own isolated, fenced-off area of the AWS cloud where your resources (servers, containers, databases, etc.) live securely.
When you create a VPC, you give it a range of IP addresses — that’s your CIDR block.
CIDR = Classless Inter-Domain Routing
It defines the range of IP addresses your network (or subnet) will have.
🧠 Example:
10.0.0.0/16
Breakdown:
10.0.0.0 → starting IP address
/16 → the prefix length (the “subnet mask”): the first 16 bits are fixed as the network, and the remaining bits determine how many IPs are available
👉 /16 = 65,536 IP addresses (from 10.0.0.0 → 10.0.255.255)
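Where does 65,536 come from? An IPv4 address has 32 bits, and /16 fixes the first 16 of them, so the remaining 16 bits are free to vary: 2^16 = 65,536 addresses. The same math gives a /24 its 256 addresses (2^8 = 256).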
So your VPC CIDR defines the total size of your virtual network.
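To make that concrete, here’s a minimal sketch of how the VpcCIDR parameter gets consumed later in the Resources section. The logical name DocumentPortalVPC and the tag value are illustrative choices, not names from the actual template:

```yaml
Resources:
  DocumentPortalVPC:            # hypothetical logical name
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR   # pulls 10.0.0.0/16 from the Parameters section
      EnableDnsSupport: true    # let resources inside the VPC resolve DNS
      EnableDnsHostnames: true  # give public instances DNS hostnames
      Tags:
        - Key: Name
          Value: document-portal-vpc
```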
A Subnet (sub-network) divides your big VPC into smaller sections.
Each subnet has its own CIDR range within the main VPC range.
Think of your VPC as a city, and subnets as districts inside it.
Example:
VpcCIDR: 10.0.0.0/16
Subnet1CIDR: 10.0.1.0/24
Subnet2CIDR: 10.0.2.0/24
Here’s what happens:
The VPC can hold all IPs from 10.0.0.0 → 10.0.255.255
Subnet 1 takes 10.0.1.0 → 10.0.1.255
Subnet 2 takes 10.0.2.0 → 10.0.2.255
Each /24 subnet gets 256 IP addresses (AWS reserves 5 of them per subnet, so 251 are usable), which is enough for a few servers or ECS tasks.
Why two subnets instead of one? It’s for high availability and fault tolerance.
In AWS:
Each region (like ap-south-1) has multiple Availability Zones (AZs).
Example: ap-south-1a, ap-south-1b, ap-south-1c.
If one zone goes down (power issue, maintenance, etc.), the other one still runs your app.
So we create:
Subnet1 → in Zone A (ap-south-1a)
Subnet2 → in Zone B (ap-south-1b)
Your ECS tasks or EC2 instances can be deployed across both.
This ensures:
✅ Better uptime
✅ Load balancing
✅ Disaster recovery
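Here’s a sketch of how those two subnets might be declared, continuing the Resources section and reusing the hypothetical DocumentPortalVPC name from the earlier snippet. Instead of hard-coding ap-south-1a and ap-south-1b, a common pattern is !GetAZs plus !Select, so the template works in any region:

```yaml
  Subnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DocumentPortalVPC
      CidrBlock: !Ref Subnet1CIDR                # 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']  # first AZ in the region, e.g. ap-south-1a

  Subnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref DocumentPortalVPC
      CidrBlock: !Ref Subnet2CIDR                # 10.0.2.0/24
      AvailabilityZone: !Select [1, !GetAZs '']  # second AZ, e.g. ap-south-1b
```

By the way, this two-AZ layout isn’t just good practice: an Application Load Balancer won’t even let you create it unless you give it subnets in at least two AZs.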
In a full architecture:
Public Subnet → for resources that must be reachable from the internet (e.g., a Load Balancer)
Private Subnet → for backend containers, databases, or internal services (no direct inbound access from the internet)
For example:
| Subnet | CIDR | Type | Used For |
|---|---|---|---|
| Subnet 1 | 10.0.1.0/24 | Public | Load Balancer |
| Subnet 2 | 10.0.2.0/24 | Private | ECS Tasks / App Containers |
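One thing worth knowing: a subnet isn’t public because of some flag on it. What makes it public is a route table with a default route to an Internet Gateway (plus, usually, auto-assigned public IPs). A sketch, again using the hypothetical names from above:

```yaml
  InternetGateway:
    Type: AWS::EC2::InternetGateway

  GatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref DocumentPortalVPC
      InternetGatewayId: !Ref InternetGateway

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref DocumentPortalVPC

  DefaultRoute:
    Type: AWS::EC2::Route
    DependsOn: GatewayAttachment       # the IGW must be attached before routing to it
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0  # all non-local traffic goes to the internet
      GatewayId: !Ref InternetGateway

  Subnet1RouteAssociation:             # only Subnet1 gets this route, so only it is public
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet1
      RouteTableId: !Ref PublicRouteTable
```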
```
VPC: 10.0.0.0/16
│
├── Subnet 1 (10.0.1.0/24) - Public - ap-south-1a
│   └── Load Balancer, NAT Gateway
│
└── Subnet 2 (10.0.2.0/24) - Private - ap-south-1b
    └── ECS Service (your app containers)
```
Those parameters:
VpcCIDR: 10.0.0.0/16
Subnet1CIDR: 10.0.1.0/24
Subnet2CIDR: 10.0.2.0/24
will later be used in the Resources section to:
Create a VPC
Create two subnets in different availability zones
Attach an Internet Gateway and Routing Tables
Deploy ECS cluster and services inside those subnets
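The last item on that list, sketched as well: an ECS service that places its tasks in the private subnet. The Cluster, TaskDefinition, and AppSecurityGroup references are assumed to be defined elsewhere in the template (the task definition is where ImageUrl would be used):

```yaml
  AppService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster                # assumed AWS::ECS::Cluster elsewhere
      TaskDefinition: !Ref TaskDefinition  # assumed task definition using ImageUrl
      LaunchType: FARGATE
      DesiredCount: 2                      # run two copies of the task
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: DISABLED         # private subnet: tasks get no public IPs
          Subnets:
            - !Ref Subnet2                 # the private 10.0.2.0/24 subnet
            # for true multi-AZ you'd list one private subnet per AZ here
          SecurityGroups:
            - !Ref AppSecurityGroup        # assumed security group elsewhere
```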
| Concept | What It Means | Example |
|---|---|---|
| VPC | Your private AWS network | 10.0.0.0/16 |
| CIDR | Defines IP address range | /16 = 65k IPs |
| Subnet | Smaller part of your VPC | /24 = 256 IPs |
| 2 Subnets | Deployed across 2 zones for high availability | 10.0.1.0/24, 10.0.2.0/24 |
| Usage | One public (LB) and one private (ECS tasks) | Load balancer + app separation |