CloudMyLab Blog | Network Lab Guides, Tutorials & Automation Tips

EtherChannel on EVE-NG: Cisco LACP & PAgP Lab Walkthrough

Written by Shibi Vasudevan | May 7, 2026 12:45:00 PM

EtherChannel is Cisco's link aggregation technology, negotiated either with the IEEE 802.3ad standard LACP or with Cisco-proprietary PAgP, that bundles 2-8 physical links into a single logical interface. It supports up to 8 active members per channel (16 on platforms with hot-standby), negotiates membership through five mode keywords (LACP active/passive, PAgP desirable/auto, or static "on"), distributes traffic with hash-based load balancing, and recovers from member failures in well under a second.

The first time most engineers really understand EtherChannel is when a "working" port-channel turns out to be carrying all of its traffic over a single member. The bundle shows up cleanly in show etherchannel summary: Po1 listed as in-use, member ports happily forwarding — but only one of the four supposedly bundled links is doing any work. The others sit there, link-up, line-protocol-up, doing absolutely nothing.

The cause is usually painfully simple. One end is configured with channel-group 1 mode on (static, no negotiation). The other is set to channel-group 1 mode active (LACP). Both ends think they have a port-channel. Neither is actually negotiating it. The "bundle" is one link making decisions while the other three sit in limbo, waiting for an LACP partner that will never speak up.

That is the sneaky thing about EtherChannel. When it works, it is beautiful — link aggregation, load balancing across member ports, automatic failover when a member drops. When it is misconfigured, it can fail in ways that don't immediately trip alarms. You think you have 40 Gbps of bandwidth and you really have 10 Gbps with three confused ghosts attached.

Spanning tree disables redundant links to prevent loops. EtherChannel does the opposite: it bundles redundant links into a single logical pipe that uses every member simultaneously, with hash-based load balancing across them and sub-second failover when one drops. Configure it right and you get four times the bandwidth. Configure it wrong and you get exactly the single link you started with.

 

EVE-NG solves half of that validation problem by letting you build enterprise-grade network labs in software. CloudMyLab solves the other half by hosting those labs so you don't have to hear your cooling fans scream.

CloudMyLab is EVE-NG's official and only cloud partner.

Here is how to validate EtherChannel — Layer 2, Layer 3, LACP, PAgP, and the mismatched-mode failures — using a ready-made network topology from EVE-NG's Lab Library, hosted where it belongs: in the cloud.


What this lab demonstrates

Lab 1-4 from EVE-NG's Switching series walks you through the full EtherChannel matrix: Layer 2 bundles using both PAgP and LACP, plus Layer 3 EtherChannels where the port-channel becomes a routed interface with its own IP. You configure each variant, verify with the right show commands, and then deliberately break things to see how EtherChannel actually behaves when humans get involved.

Here is what you are actually testing:

  • PAgP modes (auto / desirable): Cisco's proprietary aggregation protocol. Auto won't initiate; desirable will. Two desirable ends form a bundle. One desirable plus one auto also forms a bundle. Two autos sit there forever waiting for someone else to start the conversation.
  • LACP modes (active / passive): The IEEE 802.3ad standard. Active initiates; passive responds but won't start. Same general behavior as PAgP, different keywords, multi-vendor compatible — which matters the moment you connect to anything that isn't Cisco.
  • Static "on" mode: No negotiation at all. Both sides force the bundle. Fast, simple, and unforgiving. If the other side is not also set to "on," you get the silent half-failure described above.
  • Layer 3 EtherChannel: The port-channel becomes a routed interface (no switchport) with an IP address. Useful for distribution-layer interconnects where you want bandwidth aggregation without spanning tree blocking your uplinks.
  • Load balancing behavior: How traffic gets distributed across member links based on hash algorithms (src-mac, dst-mac, src-dst-ip, and so on).

The lab forces you to confront the most common production gotcha: the bundle forms successfully, but it is not balancing the way you expected. If your load-balancing hash uses src-mac and only one source MAC is sending traffic, you'll send 100% of frames over a single member link no matter how many you've bundled. You can have a 4x10 Gb EtherChannel that performs exactly like 1x10 Gb, and show etherchannel summary will tell you everything is fine.
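The arithmetic behind that gotcha is easy to demonstrate. Here is a toy Python sketch (an illustrative stand-in, not Cisco's actual hash) showing why a src-mac policy pins a single talker to one member link while many talkers spread out:

```python
def pick_member(src_mac: str, n_members: int = 4) -> int:
    """Toy src-mac hash: XOR the MAC's octets, then modulo the member count."""
    h = 0
    for octet in src_mac.split(":"):
        h ^= int(octet, 16)
    return h % n_members

# One source MAC: every frame lands on the same member, no matter the volume.
assert len({pick_member("aa:bb:cc:dd:ee:01") for _ in range(1000)}) == 1

# Many source MACs: frames spread across all four members.
assert len({pick_member(f"aa:bb:cc:dd:ee:{i:02x}") for i in range(256)}) == 4
```

The same logic explains why src-dst-ip helps when one source talks to many destinations: more varying input bits means more hash diversity across members.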

EtherChannel modes at a glance

Before diving into configuration, here is how the five EtherChannel modes compare on the dimensions that matter operationally:

Protocol              Mode        Initiates negotiation   Combines with     Best for
LACP (IEEE 802.3ad)   active      Yes                     active, passive   Multi-vendor, default choice
LACP (IEEE 802.3ad)   passive     No                      active only       Edge ports waiting on a partner
PAgP (Cisco)          desirable   Yes                     desirable, auto   All-Cisco environments
PAgP (Cisco)          auto        No                      desirable only    Cisco edge ports waiting on a partner
Static                on          No (forces bundle)      on only           Tightly controlled, both ends managed

If you are designing distribution-to-access uplinks, LACP active/active is the safe default — it is the IEEE standard, multi-vendor, and supports up to 8 active members per bundle (16 on platforms with hot-standby). CloudMyLab's hosted EVE-NG gives teams a pre-built distribution/access topology to validate the bundle — including hash-based load balancing and sub-second failover — before touching production, no hardware required.

Topology overview

The Lab 1-4 topology continues the distribution/access pattern from earlier labs in the series, with explicit redundant links between switches that practically beg to be bundled.

Core components:

  • DS-1 and DS-2: Distribution switches running Cisco IOSvL2
  • AS-1 and AS-2: Access switches, also IOSvL2
  • Multiple inter-switch links: Two or more physical links between each pair of switches that you'll aggregate into port-channels

Key links that get bundled:

  • DS-1 ↔ DS-2: typically two or three links combined into a Layer 2 or Layer 3 EtherChannel
  • DS-1 ↔ AS-1: redundant uplinks that become a Layer 2 EtherChannel
  • DS-2 ↔ AS-2: same pattern

Topology adapted from the EVE-NG Lab Library.

This mirrors a real campus design where you have dual-homed access switches with LAG uplinks to your distribution layer. The whole point of running it in a lab is to confirm that the EtherChannel behavior you think will happen actually happens — before you order the cables and schedule the maintenance window.

Configuration workflow

The progression matters here. Don't try to configure all three EtherChannel variants simultaneously. Build them one at a time, verify each, then move to the next. Otherwise you'll be debugging four issues at once and you won't know which knob caused which symptom. If you are using CloudMyLab's hosted EVE-NG, the topology is pre-loaded and typically takes 60-90 minutes to configure and test fully.

Layer 2 EtherChannel using PAgP

Start on AS-1 with the simpler one. Pick two physical interfaces that connect to DS-1 (typically Gi0/1 and Gi0/2). Set them as trunks first — switchport trunk encapsulation dot1q followed by switchport mode trunk. Make sure the allowed VLAN list, native VLAN, and DTP settings match on both ends. Mismatched trunk parameters are the second most common reason a bundle won't form (just behind mode mismatches), and they fail in confusing ways because the physical links look healthy.

Now configure the channel group:

! On AS-1
interface range gi0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode desirable

! On DS-1, mirror with desirable or auto
interface range gi0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode desirable

Verify with show etherchannel summary. You're looking for Po1(SU): S means Layer 2, U means in-use. Member ports should show (P) for "bundled in port-channel." If you see (I) for "stand-alone" or (s) for "suspended," something is wrong with negotiation or trunk parameters.
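For reference, a healthy PAgP bundle looks roughly like this (representative output, trimmed to the relevant lines; exact formatting varies by IOS version):

Group  Port-channel  Protocol    Ports
------+-------------+-----------+------------------------
1      Po1(SU)         PAgP      Gi0/1(P)    Gi0/2(P)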

Layer 2 EtherChannel using LACP

On AS-2, do the same thing but with LACP. Configure channel-group 2 mode active on one side and either active or passive on the other. Same verification pattern, same gotchas around trunk consistency.
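A minimal sketch mirroring the PAgP block, assuming the same gi0/1 - 2 interface numbering on the AS-2/DS-2 pair:

! On AS-2
interface range gi0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode active

! On DS-2, mirror with active or passive
interface range gi0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode active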

This is where it pays to deliberately break things. Configure one side as mode active (LACP) and the other side as mode desirable (PAgP). Watch the bundle fail to form. The interfaces will look up at Layer 1 but won't aggregate. show etherchannel summary will show empty or suspended states. Now you've experienced a mode mismatch in a controlled environment, which is a much better venue than a 2 AM cutover.
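To stage that mismatch deliberately (interface and group numbers assumed), point LACP at PAgP and watch nothing aggregate:

! On one side: LACP
interface range gi0/1 - 2
 channel-group 2 mode active

! On the other side: PAgP -- a different protocol, so no bundle forms
interface range gi0/1 - 2
 channel-group 2 mode desirable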

Layer 3 EtherChannel between distribution switches

This is where the syntax shifts. On DS-1, create the port-channel interface explicitly first, then drop into the physical interface range:

! On DS-1
interface port-channel 3
 no switchport
 ip address 10.1.1.1 255.255.255.252

interface range gi0/3 - 4
 no switchport
 channel-group 3 mode active

! Mirror on DS-2 with 10.1.1.2

The no switchport on the physical interfaces is critical — a switchport member can't join a routed port-channel. Verify with show ip interface brief (the port-channel should be up/up with its assigned IP), show etherchannel summary (Po3 should show R for routed and U for in-use), and a quick ping 10.1.1.2 from DS-1 to confirm reachability.

Test load balancing

Run show etherchannel load-balance to see your hash algorithm (typically src-mac by default on older platforms, src-dst-ip on newer ones). Generate test traffic from multiple sources to multiple destinations. Use show interfaces port-channel 1 etherchannel to see the per-member traffic distribution.

The first time you see one member link carrying 90% of the traffic while the others sit nearly idle, you'll understand why default load-balancing settings deserve more scrutiny than they usually get. Change the algorithm with port-channel load-balance src-dst-ip (or whichever hash matches your traffic pattern) and watch the distribution rebalance.
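The knob is global configuration, not per-interface — it applies to every port-channel on the switch. A minimal sketch of the change-and-verify cycle:

! Global configuration mode
port-channel load-balance src-dst-ip

! Confirm the new hash, then re-check per-member distribution
show etherchannel load-balance
show interfaces port-channel 1 etherchannel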

Verify member failure behavior

Shut one of the member ports in your bundle. Watch what happens — total bandwidth drops, but the bundle stays up and traffic redistributes across remaining members in well under a second. Restore the link and confirm it rejoins the bundle automatically. This sub-second failover behavior is the actual reason you're building EtherChannels in the first place, so it is worth confirming it works the way the documentation promises.
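A quick break-fix sequence, assuming Gi0/1 is a member of Po1 on AS-1 (exec-mode show commands interleaved as comments):

! Fail one member
interface gi0/1
 shutdown
! Verify: Po1 stays (SU), Gi0/1 drops to (D), traffic rehashes to Gi0/2
! -> show etherchannel summary

! Restore the link and confirm Gi0/1 returns to (P)
interface gi0/1
 no shutdown
! -> show etherchannel summary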

Why hosted matters for EtherChannel testing

Running EtherChannel labs locally is technically possible. You install EVE-NG Community on a workstation, find compatible IOSvL2 images, allocate enough RAM for four switches, and hope your laptop's thermals can sustain a 90-minute lab session without throttling. Then you reload everything and start over for Lab 1-5.

CloudMyLab's hosted EVE-NG skips that whole arc. The topology loads pre-configured with valid IOSvL2 images, and you're typing commands within minutes of connecting instead of hours. (See Lab 1-1 for the full breakdown of what hosted EVE-NG changes.)

For EtherChannel specifically, hosted environments are valuable because:

Snapshot-and-revert workflow: Save a working baseline, deliberately misconfigure to study failure modes, then revert in seconds. EtherChannel rewards this kind of structured break-fix testing because the failure modes — mode mismatches, suspended members, half-formed bundles — are subtle and worth experiencing on purpose so you can recognize them in production at a glance.

Multi-user sessions: Have your network architect in one city and your operations lead in another connect to the same topology. When someone shuts a member link, you both watch the bundle reconverge in real time. Beats screen-sharing every time.

Consistent performance: Local labs choke when you spin up four IOSvL2 instances. Hosted infrastructure runs them on resources designed for the workload, so you're testing protocol behavior rather than your laptop's thermal limits.

Who needs this lab

  • Network engineering teams preparing for distribution-layer upgrades or access switch refreshes use this lab to validate uplink designs before ordering cables. If you are moving from a single-uplink design to bundled uplinks, run Lab 1-4 first to confirm your channel-group strategy actually delivers the bandwidth you expect — not just on paper, but with hash algorithms that match your traffic patterns.
  • MSPs validate customer designs by reproducing client topologies in the lab. When a customer asks "do we really need LACP, or can we just hard-code 'on' mode?" you can demonstrate exactly what happens to the bundle when one end auto-negotiates and the other doesn't. Showing beats explaining, and a 30-second demo settles a debate that could otherwise drag through three meetings.
  • Engineers preparing for CCNP ENCOR practice the full EtherChannel matrix — PAgP, LACP, static, Layer 2, Layer 3, load balancing — without having to source the right physical hardware. The lab covers every variant the exam tests, plus the troubleshooting scenarios that show up in real interview questions.
  • Educational institutions teach link aggregation concepts using a consistent environment for every student. Every student gets identical lab performance regardless of their hardware, which means the lecture covers protocol behavior instead of debugging individual laptops.

See it running

EtherChannel is one of those technologies that looks deceptively simple in the configuration guide and reveals its complexity the first time you try to build a bundle that actually balances traffic the way you expected. Running Lab 1-4 from the EVE-NG Lab Library gives you the controlled environment to work through PAgP, LACP, and Layer 3 EtherChannel without staging hardware or risking a production link.

We'll provision Lab 1-4 with your specific bundle requirements. You'll see exactly how mode negotiation works (and how easy it is to misconfigure by one wrong keyword), test load-balancing across multiple members, and verify sub-second failover when a link drops. Most importantly, you'll catch configuration errors in the lab instead of during the maintenance window.

No hardware required. No local resource constraints. Just working EtherChannel behavior you can verify, document, and use as a reference when you deploy the real thing.

Schedule a demo →

 

Frequently asked questions

What is EtherChannel?

EtherChannel is Cisco's link aggregation technology that bundles 2-8 physical Ethernet links into a single logical interface. It supports the IEEE 802.3ad standard via LACP, the Cisco-proprietary PAgP protocol, and a static "on" mode. Bundles distribute traffic across members using hash-based load balancing and recover from member failures in under a second.

What is the difference between LACP and PAgP?

LACP (Link Aggregation Control Protocol) is the IEEE 802.3ad standard and is multi-vendor compatible — it works between Cisco, Arista, Juniper, and most other switches. PAgP (Port Aggregation Protocol) is Cisco-proprietary and only works between Cisco devices. Both negotiate bundle membership and detect mismatched configurations; LACP is the default choice in any environment that is not 100% Cisco.

How does EtherChannel load balancing work?

EtherChannel selects a member link for each frame using a hash algorithm against fields like source MAC, destination MAC, source IP, destination IP, or a combination. View the current algorithm with show etherchannel load-balance and change it with port-channel load-balance <method>. The default (often src-mac on older platforms, src-dst-ip on newer ones) does not always match real traffic patterns, which is why a 4-link bundle can end up sending most traffic over one member.

What is the difference between LACP active and passive modes?

Active initiates LACP negotiation by sending LACPDUs. Passive responds to LACPDUs but won't start the conversation. Two active ends form a bundle. One active and one passive form a bundle. Two passive ends sit there forever waiting for someone to initiate. The same pattern applies to PAgP desirable (initiates) and auto (responds).

How many links can you bundle in an EtherChannel?

The IEEE 802.3ad standard supports up to 8 active member links per channel. On Cisco platforms with hot-standby support, you can configure up to 16 members, with 8 active and 8 in standby ready to take over if an active member fails. Most production deployments use 2-4 members per bundle — enough for redundancy and bandwidth aggregation without diminishing returns from hash distribution.

Why should I avoid using "on" mode for EtherChannel?

"On" mode forces a bundle without negotiation. If both ends are configured "on" with matching parameters, it works. If one end is "on" and the other is LACP active or PAgP desirable, the link comes up at Layer 1 but the bundle never forms cleanly — you can end up with one member carrying traffic and the others suspended or sending frames into a black hole. Use LACP active/active by default and reserve "on" for tightly controlled scenarios where both ends are managed together.

How does EtherChannel handle link failures?

When a member link goes down, EtherChannel detects the failure within milliseconds and redistributes traffic across the remaining active members using the same hash algorithm. Total bundle bandwidth drops by the failed member's share, but the logical port-channel interface stays up and forwarding. When the link is restored, it rejoins the bundle automatically without operator intervention.

Is EtherChannel the same as link aggregation?

EtherChannel is Cisco's name for link aggregation. The IEEE generic term is "Link Aggregation Group" (LAG) defined by 802.3ad. Other vendors use names like Multi-Chassis LAG (MLAG), Bond (Linux), Trunk (HP/Aruba), or Port Aggregation. The underlying concept — bundle multiple physical links into one logical interface with shared load balancing and failover — is the same across all of them.