Posts Tagged ‘Internet’

Beyond Web 2.0

November 7, 2011

It has been an awfully long time since my last blog posting.

For those who don’t follow me on Twitter, I’ve been writing a book. It’s called Web 2.0 and beyond: principles and technologies and it’s going to be published in May by CRC Press, the computer science imprint of Taylor & Francis.

I should say that it’s not your usual comp. sci. textbook. My brief was to ‘reinvent the textbook format’ and while that’s quite an exciting thing to do, it’s been a huge undertaking. The underlying premise is that understanding the Web is too big a job for computer scientists alone, and the book looks at where understanding the technical infrastructure behind Web 2.0 intersects a range of other subject areas such as business studies, economics, information science, law, media studies, psychology, social informatics and sociology.

This was not my idea. It was first put forward by Tim Berners-Lee and Nigel Shadbolt in an article for Scientific American in 2008. Since then Web Science, a new, interdisciplinary research area, has emerged. However, using this as a template for a textbook has been hard work: as well as linking to aspects of many different subject areas I’ve had to write the book so that non-engineers can not only understand it, but also find it interesting. So I’ve included some of the history of the Web, both for colour and context, and on the basis that a picture paints a thousand words I’ve developed and refined my ‘iceberg’ model of Web 2.0 (read the original description of the iceberg model in a 2007 JISC TSW report).

Finally, of course, there’s a section on the future (the beyond bit) – or rather, potential futures. By the time the reader gets to this part of the book they should have learned enough to be able to form their own ideas about Web 2.0 and to have an informed opinion on what might come next.

So, a huge undertaking. I’m still a bit dazed – can’t quite get used to the idea that when I get up I have a choice of what to do – but I have it on the highest authority that there is life beyond Web 2.0. All I can say is that there’d better be some pretty good lunches.

Vint Cerf in London

September 25, 2008

Vint Cerf, often described as the ‘father’ of the Internet, was the keynote speaker at the Visions of Computer Science conference, which I attended yesterday. Although he doesn’t like this moniker (it implies he did it single-handedly, and he’s always keen to stress that he was part of a team), he earned it by co-inventing the basic protocol of the Net (TCP/IP) and being there in the early days of the ARPANET, the forerunner of today’s Internet. He is now employed as Google’s Chief Internet Evangelist.

Vint pointed out the enormous growth of the Internet, remarking that there are now half a million computer servers (i.e. hosts that provide some kind of service such as Web or email routing) on the system and a couple of billion ‘terminators’ at the ‘edge’ – the end user devices such as a home PC or a mobile phone.

This enormous growth presents huge challenges and he argued that the next few months are likely to be “dramatic” in the world of the Internet. He then went on to elaborate some of the issues that are coming to the fore, including the problem of network addressing.

Network addressing uses something called IPv4. This is the coded address that is given to every single device on the network (even your home PC). When he was helping to create the original designs for the Internet he designed this address to make use of 32 bits of data. This limits the number of devices that can be on the Net to around 4 billion (2 to the power 32). He admits that at the time he didn’t think that this would ever be reached, but we are fast approaching that limit. Vint speculated that we would hit the limit by mid-2010, if not before. The answer is a new addressing system called IPv6, which uses 128-bit addresses and so offers a vastly larger address space. Internet Service Providers (ISPs), network operators and the rest of us need to start moving to IPv6, and he mentioned Google’s efforts in this regard. (See ipv6.google.com).
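To get a feel for the scale of the difference between the two address spaces, here’s a quick back-of-the-envelope sketch in Python (my own illustration, not anything from Vint’s talk), using the standard library’s ipaddress module:

```python
import ipaddress

# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32    # roughly 4.3 billion addresses
ipv6_total = 2 ** 128   # roughly 3.4 x 10^38 addresses

print(f"IPv4: {ipv4_total:,} addresses")
print(f"IPv6: {ipv6_total:,} addresses")
print(f"IPv6 space is {ipv6_total // ipv4_total:,} times larger")

# The ipaddress module parses both formats and reports the version:
v4 = ipaddress.ip_address("192.0.2.1")           # an example (documentation) IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")         # an example (documentation) IPv6 address
print(v4.version, v6.version)  # 4 6
```

Even after allowing for the large reserved and unroutable ranges in both protocols, the gap is enormous: every one of the 4 billion IPv4 addresses could be replaced by an entire 96-bit address space of its own.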

During the questions and answers section I asked him about the capacity of the existing Internet to cope with heavy data uses like video. There have been many recent reports in the UK press about the Net being close to capacity. Vint agreed that this was an issue, but said he was not overly concerned. The main backbone of the Internet will be fine, since the fibre optics involved have plenty of spare capacity. The problem arises nearer the end user – the so-called ‘last mile’ – where capacity is much more constrained. Vint argued that researchers and Internet companies need to rethink the process of distributing video over the Net: rely less on streaming and more on storing and caching content locally, nearer the actual users. He called this process ‘edge storage’. As I know that Google and Microsoft have been rolling out plans to distribute their data centres nearer to users, I suspect we will hear a lot more about this in coming months.