You've probably heard the terms 'bandwidth' and 'speed' used interchangeably - even by ISPs and government agencies. While this sort of makes sense to the consumer (“more bits means more speed”), the two terms aren't interchangeable from a technical perspective. So let's find out more...
Speaking of speed
Bandwidth is best thought of as the capacity of a network connection. For an example everyone can understand, think of it as the number of lanes on a motorway. Having four empty lanes won't make one car go any faster, but make it 100 cars and having four lanes is suddenly much more of an advantage.
For a true measure of 'speed', we need to understand the word latency. This measures the delay between the transmission of an Internet packet and its arrival at the destination. The metric engineers use is called “round trip time” (RTT), and you've probably heard of the tool used to measure it - 'ping'. You'll find it baked into most operating systems. A ping tells us the time taken for a packet to leave its source, arrive at its destination, be “echoed” back by the destination, and be received again at the source.
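If you want to work with those RTT figures programmatically, a first step is pulling the time out of ping's output. Here's a minimal sketch, assuming the common Linux/macOS output format (the exact wording varies by platform, so treat the parsing as illustrative rather than robust):

```python
import re
from typing import Optional

def parse_rtt(line: str) -> Optional[float]:
    """Extract the RTT in milliseconds from one line of typical ping output.

    Returns None for lines that carry no time= field (headers, timeouts).
    """
    match = re.search(r"time=([\d.]+)\s*ms", line)
    return float(match.group(1)) if match else None

# Example line in the usual Linux/macOS format (the IP is a placeholder):
line = "64 bytes from 203.0.113.1: icmp_seq=0 ttl=56 time=23.4 ms"
print(parse_rtt(line))  # 23.4
```

Collect a handful of these values and you have the raw material for the latency and jitter calculations discussed later on.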
The history of performance
To understand why 'bandwidth' is mistaken for a line's speed, we need to look back at how things were some years ago, when the Internet was just catching on. In those early days, the state of technology meant it was easy to saturate the last-mile link between a home or office and the local exchange just by loading a large web page or watching a small video. This was enough to congest the connection: so many packets were trying to cross the link at the same time that they sat in buffers, or were dropped and had to be re-sent. Hence this low bandwidth meant a slow page-load time. Increasing the capacity of the connection eliminated the congestion, removing the bottleneck between the home/office and its ISP. This effectively increased performance, and so higher bandwidth appeared to give greater speed to the end users. And so the association was made.
Nowadays, Internet access speeds(Note 1) are hitting double figures in megabits per second (Mbps), and so latency is becoming the important factor. Different types of connection, such as Ethernet or a leased line broadband connection, are prone to higher or lower latency depending on a variety of things, including the amount of encapsulation and transition that happens through the network layers. Here are some examples:
Local Ethernet --- Up to 1 ms
Leased Line --- 2-4 ms, depending on length
VDSL/Cable --- 15-25 ms
ISDN --- 25-45 ms
56k Modem(Note 2) --- 150-300 ms
3G/4G GSM --- 300-900 ms
Satellite --- >700 ms, mostly due to the distance from Earth to space
Damn you, Einstein!
Reducing latency isn't always an easy challenge, as it typically comes down to the laws of physics. If you're lounging around London and want to contact a server in Sydney, your packets have to travel around 17,000 km there and 17,000 km back. In reality, the distance is likely to be much greater, as the undersea optical cables don't run from your home to Sydney as the crow flies. When we add in other factors like protocol and routing overheads, each request and response typically takes around a third of a second to complete. Web content in 2016 has hundreds of elements on a page, meaning high latency has a noticeable (negative) impact on your page-loading times.
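We can sanity-check that third-of-a-second figure with a quick back-of-the-envelope calculation. Light in glass travels at roughly two-thirds of its vacuum speed; the refractive index of ~1.47 used below is an assumed typical value for optical fibre:

```python
C_VACUUM_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBRE_INDEX = 1.47        # assumed refractive index of optical fibre
ONE_WAY_KM = 17_000       # rough great-circle distance, London -> Sydney

c_fibre = C_VACUUM_KM_S / FIBRE_INDEX     # ~204,000 km/s through glass
rtt_seconds = (2 * ONE_WAY_KM) / c_fibre  # there and back again

print(f"Theoretical minimum RTT: {rtt_seconds * 1000:.0f} ms")  # ~167 ms
```

That's the floor set by physics alone. Longer-than-ideal cable routes, routers, and protocol overheads are what push the real-world figure up towards the third of a second we actually observe.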
Bearing everything we've just learnt in mind, it's easy to see how low bandwidth is an easier obstacle to overcome than high latency. Dedicated fibre providers like our humble selves can provide high bandwidth, low latency connections to pretty much anywhere on the globe. These top-of-the-line Internet connections absolutely provide jaw-dropping figures compared to DSL services, but they can still only transmit data at the speed of light. This means pages from the other side of the world can still take seconds to load. Happily, the wonder of content delivery networks (CDNs) means that in practice content is often served locally.
There's another technical term I want to cover in this blog: jitter. To put it simply, jitter is how much the latency varies within a flow of traffic. If a connection has high jitter, packets arrive at their destination out of order. The effect of this varies wildly, depending on the application. For example: jitter alone doesn't affect web browsing or e-mail too much, but streaming protocols such as H.323 and VoIP are much more susceptible, with noticeable results. These jitter-sensitive applications buffer packets to make sure they can be processed in the correct order, at the cost of noticeable delay. In the case of VoIP, without these jitter buffers the call would degrade very quickly, and possibly drop out. VoIP protocols are typically tuned to cope with jitter, and getting the right balance of quality and delay can be a fine art. Whilst some protocols might cope with the latency of a 56k dial-up connection, the jitter would make the service pretty much unusable.
We measure jitter by taking consecutive measurements of latency, often using the RTT mentioned above, calculating the difference between successive samples, then dividing the total by the number of samples (minus 1).
Here's an example:
macbook:~ hmerrett$ ping 184.108.40.206
PING 184.108.40.206 (184.108.40.206): 56 data bytes
64 bytes from 184.108.40.206: icmp_seq=0 ttl=56 time=976 ms
64 bytes from 184.108.40.206: icmp_seq=1 ttl=56 time=475 ms
64 bytes from 184.108.40.206: icmp_seq=2 ttl=56 time=361 ms
64 bytes from 184.108.40.206: icmp_seq=3 ttl=56 time=473 ms
64 bytes from 184.108.40.206: icmp_seq=4 ttl=56 time=391 ms
Here you can see 5 samples from a 3G connection. The average latency is 535.2 ms (add them up, divide by 5). The jitter is calculated by taking the difference between successive samples.
976 to 475, diff = 501
475 to 361, diff = 114
361 to 473, diff = 112
473 to 391, diff = 82
The total difference is 809, and so the jitter is 809 / 4, or 202.25 ms. This is fairly typical for a 3G mobile connection. ADSL or VDSL are usually better, with jitter around the 2-3 ms mark, and cable services sit in the middle at 4-5 ms, depending on the time of day. The winners once again are leased lines, which are your best option for jitter-sensitive applications. The worst contenders are dial-up and satellite - jitter varies hugely, but is always high.
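The arithmetic above is simple enough to sketch in a few lines of code, using the five RTT samples from the ping run:

```python
samples_ms = [976, 475, 361, 473, 391]  # the five RTT samples above

# Average latency: sum the samples and divide by their count.
mean_latency = sum(samples_ms) / len(samples_ms)

# Jitter: mean absolute difference between consecutive samples.
diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
jitter = sum(diffs) / len(diffs)

print(f"Average latency: {mean_latency} ms")  # 535.2 ms
print(f"Jitter: {jitter} ms")                 # 202.25 ms
```

Note that this is the simple mean-of-differences method described above; protocols like RTP use a smoothed running estimate instead, but the idea is the same.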
There's been much research over the years looking at the best ways to measure the performance of Internet connections. This blog has just touched on the subject, which is an interesting field of research for those who're interested in such things. Defining what's 'normal' can vary surprisingly from place to place: compare the (relatively) small distances between content and users in the UK, to the large distances found in, say, Australia. The experience of the end user also varies according to the type and volume of content that is served locally, and how much must be fetched from a remote server, be it domestic or international. As one of the top leased line providers in the UK, Ai Networks is always working to develop the fastest connectivity services. Learn more about us here.
Despite how this blog might sound, IP networks are still very, very impressive and resilient constructions. So next time someone complains of the Internet being slow, remember the little packet that took a trip to Australia and back in a third of a second...
Note 1 I of course mean capacity here. See how easy it is to be confused? Speed means different things depending on whether you're talking technically, or just using everyday language.
Note 2 Remember those?