The internet was born with two major flaws, says one of its ‘fathers’, Vint Cerf
- When the internet started, its inventors had no idea how big it would become
- As a result there were not enough IP addresses and secure transmissions weren’t possible
The internet was born flawed. But if it hadn’t been, it might not have grown into the worldwide phenomenon it’s become.
That’s the take of Vint Cerf, and if anyone would know, it’s him. He’s widely considered to be one of the fathers of the international network and helped officially launch it in 1983.
When the internet debuted, Cerf, who is now a vice-president at Google and its chief internet evangelist, simply hadn't set aside enough room to handle all the devices that would eventually be connected to it. Perhaps even more troubling, he and his collaborators didn't build into the network a way of securing the data transmitted over it.
You might chalk up the lack of room on the internet, which was later corrected with a system-wide upgrade, to a lack of vision. When Cerf was helping to set up the internet, it was a simple experiment, and he couldn’t really imagine it getting as large as it became.
The security flaw, on the other hand, can be chalked up, at least in part, to simple expediency, Cerf says.
“I had been working on this for five years,” he says. “I wanted to get it built and tested to see if we could make it work.”
The lack of room on the internet has to do with the addressing system Cerf created for it. Every device connected directly to the network must have a unique numerical address.
When Cerf launched it, the internet had a 32-bit addressing system, meaning that it could support up to about 4.3 billion (2³²) devices. And that seemed plenty when he was designing the system in the 1970s.
That number “was larger than the population of the planet at the time, the human population of the planet”, he says.
But after the internet took off in the 1990s and early 2000s, and more and more computers and other devices were connecting to the network, it became clear that 4.3 billion addresses weren’t going to be nearly enough. Cerf and other internet experts realised relatively early that they needed to update the internet protocols to make room for the flood of new devices connecting to the network.
So, in the mid-1990s, the Internet Engineering Task Force started to develop Internet Protocol version 6, or IPv6, as an update to the software underlying the network. A key feature of IPv6 is its 128-bit addressing system, supporting 2¹²⁸ addresses, which replaces the old 32-bit system.
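The scale of the jump from 32-bit to 128-bit addresses can be illustrated with Python's standard `ipaddress` module. This is a sketch for illustration only; the addresses used are reserved documentation examples, not real hosts.

```python
import ipaddress

# IPv4 uses 32-bit addresses: roughly 4.3 billion in total.
ipv4_space = 2 ** 32
print(ipv4_space)        # 4294967296

# IPv6 uses 128-bit addresses: about 3.4 x 10^38 in total.
ipv6_space = 2 ** 128
print(ipv6_space)

# Parsing one address of each kind (both from reserved documentation ranges).
v4 = ipaddress.ip_address("192.0.2.1")      # TEST-NET-1 example range
v6 = ipaddress.ip_address("2001:db8::1")    # IPv6 documentation prefix
print(v4.version, v6.version)               # 4 6
```

Even if every one of IPv4's 4.3 billion addresses were handed to a person, it would no longer cover the world's population, let alone the many devices each person now connects.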
But it’s taken years for companies and other organisations to buy into, test, and roll out IPv6. The standard didn’t officially launch until 2012. And even today, Google estimates that only a little more than a quarter of users accessing its sites from around the world have an IPv6 address. The US has only about a 35 per cent adoption rate, according to Google.
“Now that we see the need for 128-bit addresses in IPv6, I wish I had understood that earlier, if only to avoid the slow pain of getting IPv6 implemented,” Cerf says.
But hindsight is 20-20, and he acknowledges that it’s highly unlikely that he could have pushed through a 128-bit addressing system at the time, because it would have seemed like overkill.
“I don’t think … it would have passed the red-face test,” Cerf says. “To assert that you need 2¹²⁸ addresses to do this network experiment would have been laughable.”
Security was also something Cerf skipped for his experiment. Transmissions were generally sent “in the clear”, meaning they could potentially be read by anyone who intercepted them. And the network didn’t have built-in ways of verifying that a user or device was who or what it attested to be.
Even today, some data is still transmitted in the clear, a vulnerability that has been exploited by hackers. And authentication of users remains a big problem. The passwords that consumers use to log into various websites and services have been widely compromised, giving access to plenty of sensitive data.
One of the most widely used security methods on the internet was actually developed around the time that Cerf was putting together the protocols underlying the network.
The concept for what’s called public-key encryption technology was described publicly in a paper in 1976. The RSA algorithm – one of the first public-key cryptographic systems – was developed the following year.
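The idea behind public-key systems such as RSA can be sketched with textbook-sized numbers in Python. This is a toy illustration only: real RSA keys are hundreds of digits long and use padding schemes, all of which is deliberately omitted here.

```python
# Toy RSA with tiny primes: p = 61, q = 53 (illustration only, not secure).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120, used to derive the private key
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e, 2753

message = 65                             # any number smaller than n
ciphertext = pow(message, e, n)          # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)        # decrypt with the private key (n, d)
print(ciphertext, recovered)             # 2790 65
```

The point of the scheme is that the public key (n, e) can be published openly: encrypting is easy for anyone, but decrypting requires d, which is hard to derive without knowing the factors of n.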
But at the time, Cerf was busy trying to finalise the internet protocols so that after years of development, he could launch the system. He needed to get them ported to multiple operating systems and needed to be able to set a deadline for operators of the internet’s predecessor networks to switch over to the new protocols.
“It would not have aided my sense of urgency to have to … have to stop for a minute and integrate the public-key crypto into the system,” he says. “And so we didn’t.”
The lack of security may have helped boost usage.
Even with the benefit of hindsight, Cerf doesn’t think it would have been a good idea to build security into the internet when it launched. Most of the early users of the network were college students, and they weren’t likely to be very “disciplined” when it came to remembering and maintaining their password keys, he says. Many could easily have found themselves locked out of it.
“Looking back on it, I don’t know whether it would have worked out to try to incorporate … this key-distribution system. We might not have been able to get much traction to adopt and use the network, because it would have been too difficult.”
The security situation on the internet ended up being somewhat easier to address than its lack of space, Cerf says. It was relatively easy to layer public-key cryptography onto the internet later through various services and features, and several are now widely used.
For example, the protocol that websites rely on to secure the transmission of webpages – HyperText Transfer Protocol Secure, or HTTPS – relies on a public-key cryptographic system.
Other types of security features have also been bolted on after the fact, he notes, such as two-factor authentication systems, which typically require users to enter a randomly generated code in addition to their password when logging into certain sites.
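Those randomly generated codes are typically produced by a counter- or time-based one-time-password algorithm (HOTP and TOTP, specified in RFCs 4226 and 6238). A minimal HOTP sketch in Python, using only the standard library and the shared test secret published in RFC 4226:

```python
import hashlib
import hmac

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # MAC the 8-byte big-endian counter with the shared secret.
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 4226 reference secret; real deployments share a per-user secret.
secret = b"12345678901234567890"
print(hotp(secret, 0))   # 755224, matching the RFC 4226 test vector
```

Time-based codes (TOTP) work the same way, except the counter is derived from the current time, typically in 30-second steps, so both sides compute the same code without transmitting it.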
Security “is retrofittable into the internet”, he says.