Before any HTTP request hits your server, before a database connection is opened, before microservices talk to each other—there’s a quiet little negotiation happening underneath it all.
That negotiation is the TCP three-way handshake, and without it, reliable communication over the internet simply wouldn’t exist.
Let’s start with what problem TCP is solving
Unlike UDP, TCP is designed for reliable, ordered, and error-checked delivery. But reliability doesn't just happen—you need both sides to agree on things like:
- Are you ready to communicate?
- What sequence numbers should we start with?
- Can we trust the connection?
This is exactly what the TCP connection establishment process handles.
The three-way handshake (step by step)
The name sounds simple, but each step carries specific meaning. Let’s walk through it like an actual exchange between a client and a server.
Step 1: SYN (Client → Server)
The client initiates the connection by sending a SYN (synchronize) packet.
This packet includes an initial sequence number (ISN), for example:
Client → Server: SYN, Seq = 1000
What it really means:
- “I want to start a connection”
- “Here’s the sequence number I’ll begin with”
Step 2: SYN-ACK (Server → Client)
The server responds with a SYN-ACK packet:
Server → Client: SYN-ACK, Seq = 5000, Ack = 1001
This message does two things:
- Acknowledges the client’s SYN (Ack = client_seq + 1)
- Sends its own sequence number
In plain terms:
- “I received your request”
- “I’m ready too—here’s my starting number”
Step 3: ACK (Client → Server)
The client sends the final acknowledgment:
Client → Server: ACK, Ack = 5001
This confirms the server’s sequence number.
At this point, the connection is officially established.
A quick visual summary
| Step | Sender | Message | Purpose |
|---|---|---|---|
| 1 | Client | SYN | Initiate connection |
| 2 | Server | SYN-ACK | Acknowledge + respond |
| 3 | Client | ACK | Finalize connection |
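The numbers in the table can be sketched as a toy model. This is purely illustrative: real operating systems randomize the initial sequence numbers, and the hypothetical `three_way_handshake` function below only tracks the seq/ack bookkeeping, not real packets.

```python
# Toy model of the three-way handshake, tracking only the
# sequence/acknowledgment numbers exchanged at each step.

def three_way_handshake(client_isn, server_isn):
    """Return the three messages as (label, seq, ack) tuples."""
    syn     = ("SYN",     client_isn,     None)            # Step 1: client -> server
    syn_ack = ("SYN-ACK", server_isn,     client_isn + 1)  # Step 2: server -> client
    ack     = ("ACK",     client_isn + 1, server_isn + 1)  # Step 3: client -> server
    return [syn, syn_ack, ack]

for label, seq, ack in three_way_handshake(1000, 5000):
    print(label, "seq:", seq, "ack:", ack)
```

Note how each side acknowledges the other's number plus one, exactly mirroring the Seq = 1000 / Ack = 1001 exchange above.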
Why three steps? Why not two?
This is where things get interesting.
You might wonder: why not just SYN → ACK and be done?
The third step ensures that both sides are synchronized and ready. Without it, the server wouldn’t know if the client actually received its response.
This prevents issues like:
- Half-open connections
- Old, delayed SYNs from earlier connections being mistaken for new requests
- Resource allocation without confirmation
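You normally never see these three steps in application code: the kernel runs the whole exchange inside `connect()` and `accept()`. A minimal loopback sketch in Python (the structure is illustrative, not a template for production servers):

```python
import socket
import threading

# The kernel performs the handshake inside connect()/accept():
# by the time accept() returns, SYN, SYN-ACK, and ACK are all done.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()   # returns only after the handshake completes
    conn.close()

t = threading.Thread(target=serve)
t.start()

# connect() blocks until the client has sent the final ACK (step 3)
client = socket.create_connection(("127.0.0.1", port))
ok = client.getpeername() == ("127.0.0.1", port)
print("connection established:", ok)

client.close()
t.join()
server.close()
```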
What developers often overlook
1. Every TCP connection has a cost
Each handshake introduces latency. In high-performance systems, this matters.
Example:
- HTTP/1.1 without keep-alive → new handshake per request
- HTTP/2 → multiplexing reduces handshake overhead
This is why connection reuse and pooling are critical in backend systems.
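A minimal sketch of why pooling helps (`TinyPool` is a hypothetical class, not a real library): reusing an idle socket skips the handshake entirely, so three "requests" cost one connection setup.

```python
import socket
import threading
from collections import deque

class TinyPool:
    """Minimal sketch of connection pooling. Real pools also handle
    health checks, size limits, and thread safety."""

    def __init__(self, host, port):
        self.addr = (host, port)
        self.idle = deque()
        self.handshakes = 0             # counts actual connections opened

    def acquire(self):
        if self.idle:
            return self.idle.popleft()  # reuse: no new handshake
        self.handshakes += 1
        return socket.create_connection(self.addr)

    def release(self, conn):
        self.idle.append(conn)

# Demo against a loopback listener that keeps accepted connections open.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(5)

def serve():
    conns = []
    try:
        while True:
            conn, _ = server.accept()
            conns.append(conn)          # hold connections open for reuse
    except OSError:
        pass                            # listener closed, stop accepting

threading.Thread(target=serve, daemon=True).start()

pool = TinyPool(*server.getsockname())
for _ in range(3):                      # three "requests"...
    conn = pool.acquire()
    pool.release(conn)
print("handshakes performed:", pool.handshakes)  # ...one handshake
server.close()
```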
2. SYN flood attacks
The handshake can be abused.
In a SYN flood attack, an attacker sends many SYN requests but never completes the handshake. The server allocates resources for each half-open connection.
Mitigation techniques include:
- SYN cookies
- Connection timeouts
- Rate limiting
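The idea behind SYN cookies fits in a few lines: instead of storing state for every half-open connection, the server derives its "ISN" from the connection details and can later verify the final ACK by recomputing it. This is a simplified model (real SYN cookies, such as the Linux implementation, also encode the client's MSS and use a different bit layout):

```python
import hashlib
import time

SECRET = b"per-server-secret"  # hypothetical secret; rotated in practice

def syn_cookie(src, sport, dst, dport, client_isn, slot):
    """Derive a 32-bit 'ISN' from the 4-tuple, the client's ISN, and a
    coarse time slot, so nothing is stored per half-open connection."""
    data = f"{src}:{sport}:{dst}:{dport}:{client_isn}:{slot}".encode()
    digest = hashlib.sha256(SECRET + data).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(src, sport, dst, dport, client_isn, ack, slot):
    # A legitimate final ACK acknowledges cookie + 1; the server just
    # recomputes the cookie instead of looking up stored state.
    expected = (syn_cookie(src, sport, dst, dport, client_isn, slot) + 1) % 2**32
    return ack == expected

slot = int(time.time()) >> 6  # coarse time slot (~64-second window)
cookie = syn_cookie("203.0.113.7", 54321, "198.51.100.1", 443, 1000, slot)
good_ack = (cookie + 1) % 2**32
print(ack_is_valid("203.0.113.7", 54321, "198.51.100.1", 443, 1000, good_ack, slot))      # True
print(ack_is_valid("203.0.113.7", 54321, "198.51.100.1", 443, 1000, good_ack + 2, slot))  # False
```

The attacker now gains nothing by abandoning the handshake: until a valid ACK arrives, the server has allocated nothing.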
3. Debugging connection issues
When a service "isn't responding," it’s not always the app—it might be the handshake failing.
Tools that help:
- tcpdump
- Wireshark
- netstat / ss
Example tcpdump command:
```
tcpdump -i eth0 tcp port 443
```
You'll literally see the SYN, SYN-ACK, and ACK packets in sequence.
Sequence numbers: the subtle backbone
The handshake isn’t just about saying “hello.” It sets up sequence numbers, which are critical for:
- Ordering packets
- Detecting lost data
- Ensuring reliable delivery
Every byte transmitted afterward depends on this initial agreement.
Where this shows up in real systems
You’ll encounter TCP handshake behavior in:
- API latency debugging
- Load balancer configuration
- Kubernetes service networking
- Database connection pooling
- CDN edge connections
For example, if your service is slow on the first request but fast afterward, you’re probably seeing handshake overhead plus TLS negotiation.
Handshake + TLS = more round trips
Important nuance: TCP handshake happens before TLS.
So a typical HTTPS connection involves:
- TCP three-way handshake
- TLS handshake
- Actual data transfer
This is why optimizations that trim startup latency exist:
- TCP Fast Open, which lets data ride along with the SYN
- TLS 1.3, which cuts the TLS handshake to a single round trip
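Back-of-the-envelope arithmetic makes the savings concrete. The 50 ms round-trip time below is an assumption for intuition, not a measurement:

```python
RTT_MS = 50  # assumed round-trip time; adjust for your network

def time_before_first_byte(tcp_round_trips, tls_round_trips, rtt_ms=RTT_MS):
    """Startup latency spent on handshakes alone, before any app data."""
    return (tcp_round_trips + tls_round_trips) * rtt_ms

print(time_before_first_byte(1, 2))  # TCP + TLS 1.2: 150 ms
print(time_before_first_byte(1, 1))  # TCP + TLS 1.3: 100 ms
```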
A minimal mental model
If you had to compress everything into one idea:
TCP’s three-way handshake is a synchronization protocol that ensures both sides agree on starting conditions before exchanging real data.
Once you see it that way, a lot of networking behavior starts to make sense.
Wrapping it up
The TCP three-way handshake might feel like low-level detail, but it shows up everywhere—from slow APIs to scaling bottlenecks to security concerns.
Understanding it gives you a sharper instinct when something “feels off” in a distributed system.
And next time you hit an endpoint and it responds instantly, remember—there was a tiny three-step conversation that made it possible.