Slow Network Speeds

Common Causes and Diagnostic Approach

  • Physical layer issues - Cable degradation, interference, or duplex mismatches cause immediate speed reduction
  • Bandwidth saturation - Traffic exceeding link capacity (e.g., 100Mbps link handling 120Mbps demand)
  • Network congestion - Multiple devices competing for shared bandwidth on collision domains
  • Protocol overhead - TCP windowing, retransmissions, or inefficient application protocols
  • Device limitations - CPU/memory constraints on switches, routers, or end devices

Speed vs Throughput Distinction

  • Link speed = Physical capability (100Mbps, 1Gbps interface)
  • Throughput = Actual data transfer rate (often 60-80% of link speed due to overhead)
  • Latency affects perceived performance even when bandwidth is adequate
  • For example, geostationary satellite links can offer high bandwidth, but 500ms+ round-trip latency makes web browsing feel slow
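A quick way to see why latency dominates here: a single TCP flow's throughput is bounded by window size divided by round-trip time. A rough sketch with assumed values (64 KB window, 20 ms terrestrial RTT vs 500 ms satellite RTT):

```python
def tcp_throughput_limit_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-flow TCP throughput: window / RTT, in Mbps."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

# Same 64 KB window, very different RTTs (illustrative numbers)
terrestrial = tcp_throughput_limit_mbps(65_535, 0.020)   # ~26 Mbps
satellite = tcp_throughput_limit_mbps(65_535, 0.500)     # ~1 Mbps

print(f"terrestrial: {terrestrial:.1f} Mbps, satellite: {satellite:.1f} Mbps")
```

Even on a high-capacity satellite link, the flow cannot exceed about 1 Mbps until the window scales up, which is why the link "feels" slow regardless of its bandwidth.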

Diagnostic Commands and Tools

  Tool                 Purpose                      Key Metrics
  show interface       Interface statistics         Utilization %, errors, duplex
  ping                 Basic connectivity/latency   RTT, packet loss
  traceroute           Path analysis                Hop-by-hop latency
  show processes cpu   Device performance           CPU utilization
  show ip route        Routing efficiency           Path selection

Layer-by-Layer Troubleshooting

Physical Layer (Layer 1)

  • Check cable integrity and specifications (Cat5e for Gigabit; Cat6a for 10GBASE-T over a full 100m run, plain Cat6 only reaches about 55m)
  • Verify duplex settings match on both ends (auto-negotiation failures common)
  • Look for excessive collisions on half-duplex links
  • Monitor error counters: CRC errors indicate physical problems
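To judge whether the error counters above are "excessive", express CRC errors as a fraction of input packets; on a healthy wired link this should be vanishingly small. A sketch with hypothetical counter values (the numbers are illustrative, not from real output):

```python
def crc_error_rate(crc_errors: int, input_packets: int) -> float:
    """CRC errors as a percentage of packets received on the interface."""
    if input_packets == 0:
        return 0.0
    return crc_errors / input_packets * 100

# Counters as read from the interface statistics (made-up values)
rate = crc_error_rate(crc_errors=1_523, input_packets=9_800_000)
print(f"CRC error rate: {rate:.4f}%")
```

A rate around 0.015% may sound tiny, but on a wired link any sustained, growing CRC count points to a Layer 1 problem (bad cable, connector, or interference).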

Data Link Layer (Layer 2)

  • Identify switching loops causing broadcast storms
  • Check for VLAN misconfigurations limiting available bandwidth
  • Verify STP convergence isn’t causing temporary outages
  • Monitor switch buffer utilization during peak traffic

Network Layer (Layer 3)

  • Analyze routing table for suboptimal paths
  • Check for routing loops or frequent reconvergence
  • Verify load balancing across multiple paths (ECMP)
  • Monitor fragmentation rates (MTU mismatches)
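The fragmentation math behind MTU mismatches can be sketched directly (assuming a 20-byte IPv4 header with no options): each fragment except the last carries a payload that is a multiple of 8 bytes, so the fragment count follows from the path MTU.

```python
import math

def fragment_count(datagram_payload: int, mtu: int, ip_header: int = 20) -> int:
    """Number of IPv4 fragments needed to carry the payload across a link with this MTU."""
    per_fragment = (mtu - ip_header) // 8 * 8   # largest 8-byte-aligned payload per fragment
    return math.ceil(datagram_payload / per_fragment)

# A 4000-byte payload crossing a standard 1500-byte Ethernet MTU
print(fragment_count(4000, 1500))   # 3 fragments
```

Each fragment adds header overhead and a reassembly cost at the receiver, and losing any one fragment forces the whole datagram to be retransmitted, which is why high fragmentation rates show up as slowness.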

Bandwidth Calculation and Analysis

  • Utilization formula: (Current traffic / Link capacity) × 100
  • Sustained utilization >70% typically causes noticeable slowdowns
  • Burst traffic can exceed link capacity temporarily (buffering compensates)
  • Calculate effective throughput: Account for protocol overhead (TCP ~10%, Ethernet ~6%)

Common Speed Bottlenecks

Duplex Mismatches

  • One side full-duplex, the other half-duplex, typically cuts effective throughput by 50% or more
  • Symptoms: Late collisions, excessive retransmissions
  • Use show interface to verify both sides match

Buffer Overruns

  • Occur when input rate exceeds output rate consistently
  • Check show interface for output drops
  • Implement QoS or upgrade link capacity
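A back-of-envelope check for sustained overruns (assumed figures): if traffic arrives faster than the egress link drains it, the output buffer fills in buffer_size / (input rate - output rate) seconds, after which drops begin.

```python
def seconds_until_drops(buffer_bytes: int, in_mbps: float, out_mbps: float) -> float:
    """Time until a buffer of this size fills, given sustained rates in Mbps."""
    excess_bytes_per_sec = (in_mbps - out_mbps) * 1_000_000 / 8
    if excess_bytes_per_sec <= 0:
        return float("inf")   # egress keeps up; the buffer never fills
    return buffer_bytes / excess_bytes_per_sec

# 1 MB egress buffer absorbing 120 Mbps of demand on a 100 Mbps link
print(f"{seconds_until_drops(1_000_000, 120, 100):.2f} s")   # 0.40 s
```

A buffer absorbs only momentary bursts; with a sustained 20 Mbps excess, even a 1 MB buffer buys less than half a second before output drops start, which is why QoS or a capacity upgrade is the real fix.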

CPU Limitations

  • Process switching instead of hardware switching reduces performance
  • Monitor with show processes cpu history
  • Upgrade hardware or optimize routing/switching tables

Vocabulary

  • Collision Domain - Network segment where data collisions can occur (shared Ethernet)
  • Broadcast Domain - Area where broadcast frames are propagated (typically one VLAN)
  • Duplex Mismatch - Inconsistent duplex settings between connected devices
  • Bufferbloat - Excessive buffering causing increased latency
  • Microburst - Very short duration traffic spike that can cause drops

Performance Optimization Strategies

  Strategy                 Implementation                   Expected Improvement
  Link Aggregation         Bundle multiple physical links   2x-8x aggregate bandwidth (a single flow still uses one member link)
  QoS Implementation       Prioritize critical traffic      Reduced latency for priority flows
  VLAN Segmentation        Separate broadcast domains       Reduced broadcast overhead
  Upgrade Infrastructure   Higher capacity links/devices    Direct bandwidth multiplication

Notes

  • Always establish baseline performance before troubleshooting - document normal operating speeds and utilization
  • Network slowdowns often have multiple contributing factors - systematic layer-by-layer approach prevents missing root causes
  • End-to-end testing is more valuable than segment testing - use tools like iperf between actual endpoints
  • Consider time-of-day patterns - many “slow network” complaints occur during peak usage hours
  • Don’t forget about wireless factors: interference, distance from the AP, and client device limitations all significantly impact perceived speed
  • Modern networks rarely have single points of failure - slowdowns often indicate capacity planning issues rather than equipment failures