Digital convergence = mashups

[Image: David Brent from the BBC comedy The Office]

This article is the third in a series of three on the impact of web 2.0 on mobility and digital convergence. The first part was published in December 2005 and the second in January 2006.

In this article, we shall discuss:

a) What is digital convergence

b) What is the impact of web 2.0 on digital convergence

WHAT IS DIGITAL CONVERGENCE

Digital convergence is a much-maligned concept. Mention digital convergence, and it conjures up images of the intelligent fridge: a concept most people think they have no need for!

But Digital convergence is an idea whose dawn is near.

There is a lot of confusion about what exactly is meant by digital convergence. When people talk of Digital convergence, they could actually mean different things:

a) Co-mingled bits: The original definition of digital convergence, as outlined in Nicholas Negroponte's 1995 book Being Digital

b) Device convergence: One device to rule them all! Think of the iPhone (a combination of the iPod and the mobile phone), the Nokia N-Gage, etc.

c) Fixed to mobile convergence: A relatively new, telecoms-specific area which is part of a much broader concept called 'seamless mobility'

d) Devices being able to speak to each other and share intelligence, leading to a new service – aka the 'intelligent fridge'.

Besides these definitions, there is also the question of "If all you have is a hammer, everything looks like a nail" – as Mike Langberg so aptly put it in his article soon after CES.

By that, we mean: your tools (focus) determine your viewpoint of the world. The ‘nail’ in this case, is ‘Digital convergence’. The ‘hammer’ is the viewpoint (strengths) from which each player is approaching Digital convergence.

For example (as per the article):

For Microsoft, convergence is a software problem, to be solved using an upgrade of the Windows operating system (Microsoft's strength).

Intel sees convergence as a ‘microprocessor problem’, to be solved with a vague new branding program called “Viiv” (A new version of ‘Intel Inside’?)

Cisco sees convergence as a home networking problem, to be solved with .. guess what .. networking!

Yahoo and Google see convergence as an online services problem. To them, the solution lies through the web browser – a common element in all devices.

Sony sees convergence as a consumer hardware problem, to be solved with consumer devices and new standards built around its own strengths, like the PlayStation.

No wonder there is confusion!

As expected, I am also wielding a hammer (i.e. I am biased by my own experience) and hence see the 'nail' in light of that hammer.

I shall discuss my viewpoint in this document, but it's important to note that the only things common to all these definitions are:

a) Digitization and

b) Communication

In other words, information must be digitised and it must flow freely. This leads to new services which are greater than the sum of their parts, i.e. greater than what the devices could provide on their own.

That’s all there is to it.

Let’s first discuss the definitions above in a little more detail

Co-mingled bits: The first definition, 'co-mingled bits', was proposed by Nicholas Negroponte in his 1995 book 'Being Digital'.

Negroponte's definition of digital convergence is: "Bits co-mingle effortlessly. They start to get mixed up and can be used and re-used separately or together. The mixing of audio, video, and data is called multimedia. It sounds complicated, but it's nothing more than co-mingled bits."

Another way to put it is: to a computer, there is no difference between a symphony, a voice call, a book, a song, a TV program, a shopping list etc as long as they are all digitised

The factors driving digital convergence/co mingled bits include the rapid digitisation of content, greater bandwidth, increased processing power and the Internet.

Digital convergence brings four (previously) distinct industry sectors into collaboration/competition with each other. Thus, we have the media/entertainment, PC/computing, consumer electronics and telecommunications industries all interacting more closely with each other than before. This version of digital convergence is happening all around us. Terms like triple play or quadruple play are part of this scenario: triple play bundles voice, broadband and digital TV from one provider, and quadruple play adds mobile to that mix (Richard Branson, in his own unique style, prefers the term 'fourplay' to quadruple play :) ).

Whatever name you call it by, here are co-mingled bits in action! If everything has become digital, the boundaries between providers fade away. The same trend was seen in the utilities market (gas and electricity being sold by the same entity).

Device convergence: Addresses the age-old question .. 'Will we carry one general-purpose device or will we carry many specialised devices?'. Boundaries between devices are fading fast, and devices are now capable of performing more than one function.

It is unclear if customers would really want a single device. Most people have a view on this – and so do the device manufacturers.

In March 2006, Microsoft confirmed that it was interested in a device combining the features of an iPod and a cellphone, and rumours of an iPhone launch are perpetually present.

Fixed to mobile convergence: Fixed to mobile convergence is a relatively new area. It has emerged because fixed-line telecoms operators and mobile telecoms operators are each vying for customers in the other's traditional domain. Telecoms access networks are converging due to the emergence of new technologies, so mobile network providers can provide fixed network services and vice versa. Services could also be converged: a user could access the same service from either a fixed or a mobile network. Fixed to mobile convergence could be seen as part of a larger concept called 'seamless mobility' – the overall idea being that a customer should be able to 'roam' seamlessly between different network types (fixed, mobile, WiFi etc). Bodies like UMA (Unlicensed Mobile Access) are driving the standards for seamless mobility.

Device communications: The capacity for a range of devices to share information between each other. We discuss this definition in greater detail below

THE INFORMATION SUPERHIGHWAY – A ROAD TO NOWHERE

Let us now come back to the two elements common to all these definitions. Firstly, information must be digitised. Secondly, information must be capable of ‘flowing freely’.

The first part, digitisation, is a no-brainer! It's happening all around us. However, the second part, 'information flowing freely', is the real bottleneck.

For information to indeed flow freely, there must be a common ‘lingua franca’ – a common standard. Some means for all the participants to communicate.

The big (and sadly predictable) battles are raging to control this communications medium (read the 'hammers' paragraphs above to get an idea of who is trying to control what!).

These battles have a feeling of déjà vu from the early days of the Internet, when there used to be a term called 'The Information Superhighway'.

Notice that it’s no longer being used .. Did you wonder why?

The term was popular with governments, politicians and people who wanted to exercise control – because highways mean toll booths and choke points!

A few years down the road, we know that the Information superhighway is a road to nowhere!

[Image: 'Information Superhighway' illustration – source: http://en.wikipedia.org/wiki/Image:PopularMechanics_InformationSuperhighway.jpg]

AN INFORMATION SUPERHIGHWAY … BY ANOTHER NAME?

In spite of the failure of the Information Superhighway concept, there have been other attempts to create (and control?) a common standard .. with mixed results.

Consider the cases of South Korea and Japan. In both, communications technology is far more advanced, and in many cases we see convergence that we can only dream about in the west!

Apart from other factors, like a cultural affinity for new technology, the biggest factor by far is 'managed collaboration' – for lack of a better word.

For example, in Japan there has been a dominant player for mobile devices in the form of NTT DoCoMo, leading to market cohesion. In South Korea, the government has actively managed standardisation, with spectacular results.

While the results so obtained are commendable, they cannot (by definition) be global. That explains why Toyota can be the dominant car manufacturer but i-mode is not the world's preferred mobile platform, i.e. Japan can export cars (physical goods) but not information-based products, which require adherence to open standards.

The only other attempt I can think of is Jini.

According to the original definition of Jini

Jini is the name for a distributed computing environment that can offer "network plug and play". A device or a software service can be connected to a network and announce its presence, and clients that wish to use such a service can then locate it and call it to perform tasks.
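
To make that 'announce and discover' idea concrete, here is a minimal sketch in Python – not Jini's actual (Java-based) API, just the shape of the pattern, with made-up service names:

```python
# A toy service registry illustrating the "announce and discover" pattern.
# Jini itself does this over the network with leases and multicast discovery;
# this sketch only shows the shape of the idea.

registry = {}  # service name -> callable

def announce(name, service):
    """A device or software service joins the network and announces itself."""
    registry[name] = service

def lookup(name):
    """A client locates a service by name and gets back something it can call."""
    return registry.get(name)

# A 'printer' device announces a service...
announce("print", lambda doc: print(f"printing: {doc}"))

# ...and a client that has never seen the printer before discovers and uses it.
printer = lookup("print")
if printer:
    printer("shopping list")
```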

Considering that one of my previous posts, why mobile AJAX will replace both J2ME and XHTML as the preferred platform for mobile applications development, could be perceived to be 'anti-Java' (for the record, it was never intended to be – but I don't consider Java ME to be a preferred mobile solution either), I wanted to recheck the current status of Jini. As per this post on jini.org, A new dawn, there seems to be a lot of change around the status (and potentially the future) of Jini in its current incarnation.

However, whichever way you look at it, Jini has not been the lingua franca which many hoped would spur digital convergence.

So, where does that leave us?

THE BASIS OF A LINGUA FRANCA

So far, we have seen that

a) Digitization is happening all around us

b) The communications mechanism facilitating the flow of digital content is unclear

c) Top down approaches (either from governments or from corporations) – do not work on a global scale.

Here is my view. It has been inspired by Irving Wladawsky-Berger (Vice President of Technical Strategy and Innovation, IBM), whose thinking I follow with great interest.

Specifically, this article by Irving

where he says ..

Digital convergence can be viewed from different points of view, so let me share my own perspective. The standardization of technology components and interfaces at one level, opens up enormous opportunities for innovation in the application of the technologies for new products and services. Nowhere is this more apparent than in the innovation unleashed in the IT industry in the last 10 years by the move to standards and standard components and infrastructures, especially the Internet, coupled with the availability of increasingly powerful and affordable technologies. Going “up the stack,” I am very excited about the opportunities for innovation in the world of business, as software standards like SOA and standard business components help us better integrate and transform companies and industries.

There is no question in my mind that convergence is now coming to digital entertainment and consumer electronics. Consumer electronics products are being built using common hardware components from the computer industry, for example, microprocessors, memory, storage, and so on, and most of their capabilities are now being designed as software. The drive toward open standards to link all the components in the home parallels what has been going on in IT for the last 10 to 15 years, and without a doubt, broadband Internet is emerging as the major communications and content distribution platform into the home.

The viewpoint of 'going up the stack' offers a potential road to digital convergence. At the lower levels of the stack, the common element is IP (Internet Protocol). At the higher levels of the stack, the one common element to many new devices is HTTP.

The web (by which I mean IP and HTTP) is the common element across almost all new devices.

Consider that the following five devices are all running a browser, in spite of their obvious differences in form and functionality:

[Image: five devices, all running a browser – source: http://www.opera.com/products/devices/markets/gallery/]

Thus, the presence of a browser could offer a means to facilitate digital convergence.
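
As a rough illustration of that point, here is a minimal sketch (Python; the content, the port and the device-detection rule are invented): the same digitised bits are served over HTTP to whichever browser asks, with only the presentation adapted to the device.

```python
# Minimal sketch: one HTTP service, many converged devices.
# The content and the user-agent check are made up for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

CONTENT = "Tonight's TV listings: ..."  # the same digitised bits for every device

class ConvergedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Adapt the presentation, not the content, to the requesting browser.
        if "Mobile" in ua or "Opera Mini" in ua:
            body = f"<small>{CONTENT}</small>"
        else:
            body = f"<h1>Listings</h1><p>{CONTENT}</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8080), ConvergedHandler).serve_forever()
```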

DIGITAL CONVERGENCE = MASHUPS

Irving’s article hints at this by referring to SOA but my money is on a much lighter incarnation of SOA i.e. mashups.

Mashups are a core element of web 2.0. According to wikipedia, a mashup is a website or web application that seamlessly combines content from more than one source into an integrated experience.

Visual mashups are getting all the kudos at the moment .. for example housingmaps, which combines craigslist and Google Maps.
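
In code terms, a mashup is often little more than two HTTP calls and a join. A minimal sketch (the two endpoints below are hypothetical stand-ins for a listings feed and a geocoding service, not the real craigslist or Google Maps APIs):

```python
# Minimal mashup sketch: combine rental listings with map coordinates.
# Both URLs are hypothetical placeholders, not real APIs.
import json
from urllib.parse import quote
from urllib.request import urlopen

def fetch_json(url):
    with urlopen(url) as resp:
        return json.load(resp)

def housing_mashup():
    listings = fetch_json("http://example.com/listings.json")        # source 1: content
    for listing in listings:
        geo = fetch_json(                                             # source 2: geocoding
            "http://example.com/geocode?q=" + quote(listing["address"])
        )
        # The 'mashup' is simply the joined record, ready to plot on a map.
        yield {**listing, "lat": geo["lat"], "lng": geo["lng"]}

if __name__ == "__main__":
    for point in housing_mashup():
        print(point)
```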

But, mashups need not be visual …

Consider the Yahoo Music Engine API

As per http://plugins.yme.music.yahoo.com/

[Image: Linksys Music Bridge – source: http://plugins.yme.music.yahoo.com/plugins/download/2005/0026/Linksys_Full.jpg]

Hmmm… I’ve got all of this music on my PC, now how do I get it to my living room? Yahoo! has teamed with Linksys to answer this age old question. You can use the Linksys Music Bridge to wirelessly play all of your music directly to your home stereo. Already have a Music Bridge? Download the plug-in to select devices and control play output from within Yahoo! Music Engine.

While the Yahoo Music Engine API is relatively obscure, it could point to a future trend where device manufacturers enable other devices to mash up with them. As hardware becomes a commodity, ease of connectivity and popularity (number of mashups) could be a key differentiating factor for hardware manufacturers.
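
A hypothetical sketch of what such a device mashup could look like – the music-bridge address and its 'play' parameter are invented for illustration, not the actual Yahoo/Linksys plug-in API:

```python
# Hypothetical sketch: a PC music library 'mashed up' with a networked
# music bridge in the living room. The device URL and parameters are invented.
from urllib.parse import quote
from urllib.request import urlopen

MUSIC_BRIDGE = "http://192.168.1.50:8000"   # assumed address of the device

def play_on_stereo(track_url):
    """Ask the (hypothetical) bridge to stream a track to the home stereo."""
    urlopen(f"{MUSIC_BRIDGE}/play?src={quote(track_url)}")

# The PC-side library picks the track; the device simply plays whatever URL it is given.
play_on_stereo("http://mypc.local/music/favourite_song.mp3")
```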

There is already a goldrush of sorts from makers of APIs to get their own API as ‘mashed up’ as possible. Even Microsoft is at it!

CONCLUSION

So, there you have it .. In my view, Digital convergence = Mashups.

I like this approach because it’s organic and it’s inclusive.

Of course, I am also wielding a ‘hammer’ here .. and my views are only as good as my understanding(or lack thereof!). So, all comments welcome at ajit.jaokar at futuretext.com

This concludes the three part series on web 2.0. Many thanks to all who have contacted me from all over the world with your feedback and thanks.

IMAGE ATTRIBUTE

The image is of David Brent from the popular BBC comedy The Office.

What has it got to do with mashups? I don't really know! I searched for 'mashup' in Google Images and this image came up! If anyone can figure out the connection, please let me know. Perhaps it's because David Brent considers himself to be a renaissance man .. much as one hopes a new wave of renaissance and innovation is upon us. That's my best guess!

Anyway, I am a huge fan of The office .. and if you have never seen it .. worth having a look!

Why should bloggers not be the stars of a conference?


Hello all,

I am involved with a new concept and seek your feedback.

The basic idea is simple. Today, bloggers are the main influencers in the marketspace. Maybe much more than analysts, researchers etc ..

So, why should bloggers not be the stars of a conference?

I am on the advisory board of a conference which is based on this simple concept. The conference covers wireless and mobile media (mobile / web 2.0 / media 2.0 etc)

We are running with a simple idea : Conference attendees, companies etc would like to meet and know top bloggers. Bloggers meanwhile also want to interact with the community ..

So, why not bring them together?

Initially, it will have a European focus – but I have already had interest from as far away as South Africa and New Zealand! We define bloggers simply as people who have been running blogs that get a lot of traffic, links etc. and who have clear views on the industry (not merely people who post for the sake of it!)

I have two questions

a) I seek your thoughts and feedback on this

b) Can you recommend any bloggers who we should invite?

Is there a long term synergy between advertisers and YouTube?


No doubt, YouTube has gained some traction and the numbers speak for themselves

According to numbers provided by traffic-tracking company ComScore Networks, YouTube received 4.2 million unique visitors in February. Those numbers are good enough to outpace Apple Computer’s iTunes (3.5 million) and put it within spitting distance of eBaumsworld.com (4.4 million) and AOL Video (4.7 million), both of which have been in business longer.

Impressive as this trend is .. it hides some important observations

YouTube has gained its success by judiciously mixing professionally made clips, including music videos and movie trailers, with homemade content. This has seen the number of viewings jump from 3 million a day to 30 million since the web site's December launch. No doubt some advertisers are gaining traction – like Nike with their Ronaldinho clip.

But YouTube cannot continue this indefinitely, else it would disrupt the user experience – as YouTube (rightly) fears – and therefore this model is not scalable.

So, is there a long term synergy between advertisers and YouTube?

The answer lies in advertisers providing something to users which they do not get currently. That ‘missing link’ is subsidising the mobile component of YouTube!

A mobile version of YouTube (as YouTube stands currently) could be viewed as an A2P(application to person) application. Users could simply download a clip on to their 3G phones. Unlike P2P (directly sending clips person to person), A2P is relatively simple.

(Note: This is because P2P applications need us to know the capabilities of the phones at both ends, and that can be tricky – not just for the devices themselves, but for the support provided by the intervening infrastructure players such as the mobile operators, because the sending and the receiving mobile operator may not support video clips uniformly. In fact, the success of A2P content has been demonstrated by A2P MMS (picture messaging), where people download a simple picture from a site, as opposed to P2P MMS, which involves sending pictures directly to each other. Video is no different conceptually, except for being a richer medium.)

Thus, YouTube can be extended to mobile devices as it stands.

The critical gap is ‘bandwidth costs’. The moment we send a clip ‘over the air’ – someone incurs a cost. That someone is the ‘user’.

If advertisers could fund this gap (both for their own clips and for users' homemade clips), we have the seeds of a viable and scalable model which is user-centric.
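
As a rough back-of-the-envelope sketch of that subsidy (every number below is an assumption for illustration only – real clip sizes, data tariffs and ad rates will vary):

```python
# Back-of-the-envelope: what would an advertiser pay to subsidise one mobile clip view?
# Every number here is an assumption for illustration only.
clip_size_mb = 3.0          # assumed size of a short mobile video clip
data_cost_per_mb = 0.50     # assumed retail cost (in currency units) of 1 MB over the air

bandwidth_cost_per_view = clip_size_mb * data_cost_per_mb
cost_per_thousand_views = bandwidth_cost_per_view * 1000

print(f"Cost to subsidise one view: {bandwidth_cost_per_view:.2f}")
print(f"Effective 'CPM' the advertiser must fund: {cost_per_thousand_views:.2f}")
```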

The users are now getting something they never got before in return for viewing an advertisement. The content they are getting is ‘their’ content – hence valuable.

This could be a win win situation for all!

Jonathan Schwartz becomes Sun CEO ..

One of the web’s highest profile tech bloggers becomes a CEO

congratulations are in order ..

Should be interesting times for Sun ..

The three characteristics of mobile web 2.0


I see web 2.0 as the Intelligent web or ‘harnessing collective intelligence’

Mobile web 2.0 extends the principle of ‘harnessing collective intelligence’ to restricted devices

The seemingly simple idea of extending web 2.0 to mobile web 2.0 has many facets – for instance :

a) What is a restricted device?

b) What are the implications of extending the web to restricted devices?

c) As devices become creators and not mere consumers of information – what categories of intelligence can be captured/harnessed from restricted devices?

d) What is the impact for services as devices start using the web as a massive information repository and the PC as a local cache where services can be configured?

Restricted devices: A broad definition of a 'restricted device' is not easy. The only thing they all have in common is that they are battery driven. But then, watches have batteries too!

A better definition of restricted devices can be formulated by incorporating Barbara Ballard’s carry principle.

Thus, a restricted device could now be deemed as

a) Carried by the user

b) battery driven

c) Small(by definition)

d) Probably multifunctional but with a primary focus

e) A device with limited input mechanisms(small keyboard)

f) Personal and personalised BUT

g) Not wearable (that rules out the watch!). But there is a caveat: a mobile device in the future could be wearable, and its capacities may well be beyond what we imagine today. The input mechanism on such devices will not be a keystroke, but a movement or sound. So, this is an evolving definition.

Finally, there is a difference between a ‘carried’ device and a ‘mobile device which is in a vehicle’.

For example, in a car a GPS navigator is a 'mobile device', and in a plane the in-flight entertainment screen is also 'mobile'. However, neither of these devices is 'carried by a person', and they do not have the same screen/power restrictions as devices that are carried by people.

However, whichever way you look at it, it's clear that the mobile phone is an example of a restricted device. From now on, we use 'mobile devices' interchangeably with 'restricted devices'; the meaning will be clear from the context.
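
Purely as a thought experiment, the working definition above can be written down as a small data structure (the field names below are my own, not any standard):

```python
# A sketch of the 'restricted device' checklist as a data structure.
# Field names are invented; they simply mirror the criteria listed above.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    carried_by_user: bool      # carried, not merely 'mobile' (rules out the GPS in a car)
    battery_driven: bool
    small_form_factor: bool
    limited_input: bool        # e.g. a small keypad rather than a full keyboard
    personal: bool
    wearable: bool             # rules out the watch (for now – an evolving definition)

    def is_restricted(self):
        return (self.carried_by_user and self.battery_driven and
                self.small_form_factor and self.limited_input and
                self.personal and not self.wearable)

phone = DeviceProfile(True, True, True, True, True, False)
print(phone.is_restricted())   # True – the mobile phone is the canonical example
```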

Extending the web to restricted devices: It may seem obvious, but web 2.0 is all about the 'web', because web 2.0 could not have been possible without the web. Thus, in a 'pure' definition, web 2.0 is about 'harnessing collective intelligence via the web'. When we extend this definition to 'mobile web 2.0', there are two implications:

a) The web does not necessarily extend to mobile devices

b) Even though the web does not extend to mobile devices, intelligence can still be captured from mobile devices.

The seven principles of web 2.0 speak of this accurately when they discuss the example of the iPod/iTunes. The iPod uses the web as a back end and the PC as a local cache. In this sense, the service is 'driven by the web and configured at the PC', but it is not strictly a 'web' application, because it is not driven by web protocols end to end (iPod protocols are proprietary to Apple).
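
A minimal sketch of that 'web as back end, PC as local cache' pattern (Python; the feed URL and the device 'sync' step are hypothetical – the real iPod/iTunes protocols are proprietary):

```python
# Sketch of the pattern: service driven by the web, configured at the PC,
# consumed on a restricted device. The URL and the device step are invented.
import json
from urllib.request import urlopen

def fetch_catalogue():
    """PC pulls the latest catalogue from the web back end (hypothetical URL)."""
    with urlopen("http://example.com/podcasts.json") as resp:
        return json.load(resp)

def configure_on_pc(catalogue, subscriptions):
    """The PC acts as the local cache where the service is selected/configured."""
    return [item for item in catalogue if item["show"] in subscriptions]

def sync_to_device(items):
    """Hand the cached selection to the device over a (proprietary, non-web) link."""
    for item in items:
        print("copying to device:", item["title"])

if __name__ == "__main__":
    cache = configure_on_pc(fetch_catalogue(), subscriptions={"Daily News"})
    sync_to_device(cache)
```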

Tim O Reilly puts it succinctly in his response to my post on the O Reilly radar when he says ..

So writes Ajit Jaokar, arguing that “Harnessing Collective Intelligence” is the root principle of Web 2.0, and the others make sense to the extent that you understand how they feed into (and draw from) this one. He’s absolutely right: the web is mechanism only. And it’s “web” only by naming convenience, because much as the internet was originally defined as “a network of networks,” the web is becoming “a web of webs,” as various mechanisms for harnessing and aggregating collective intelligence start to interconnect. In particular, Ajit’s focus is on the mobile web, which doesn’t have much in common technically with the http-based web, but everything in common with Web 2.0.

Thus, the characteristics(distinguishing principles) of mobile web 2.0 are:

a) Harnessing collective intelligence through restricted devices i.e. a two way flow where people carrying devices become reporters rather than mere consumers

b) Driven by the web backbone – but not necessarily based on the web protocols end to end

c) Use of the PC as a local cache/configuration mechanism where the service will be selected and configured

As usual, I seek your thoughts and feedback on this concept.

I make it to the O’Reilly radar ..

It’s been a great day for me!

It's flattering to get good feedback from two very clued-on guys on the web ..

First Alex Barnett said about my post Tim O’ Reilly’s seven principles of web 2.0 make a lot more sense if you change the order

In my view Ajit has nailed it. What he’s done, brilliantly and simply, is made one of the seven principles as the higher-level ‘collective application’, making the remaining six principles components of the collective application. The ‘collective application’ is the Intelligent Web.

and

Tim O Reilly himself mentioned the same post on the O Reilly radar ..

Principles of Web 2.0 Make More Sense if You Change the Order

So writes Ajit Jaokar, arguing that “Harnessing Collective Intelligence” is the root principle of Web 2.0, and the others make sense to the extent that you understand how they feed into (and draw from) this one. He’s absolutely right:

Many thanks Alex and Tim!

A web 2.0 FAQ


Based on some initial feedback, I have republished this entry now as a FAQ for web 2.0

What is web 2.0?

Because of my work with mobile web 2.0, I am often asked the question 'what is web 2.0?'. This is often a genuine question, since there is a lot of confusion out there and many bandwagon seekers.

In an attempt to explain web 2.0, this blog gives a simple FAQ

Web 2.0 is a ‘Soft concept’ – it’s not a standard, or a formula or a definition – which would have been a lot easier to explain.

Thus, a conventional FAQ would be too long(and would probably become out of date soon)

To me, web 2.0 is the collective application of the seven principles of web 2.0 as outlined by Tim O’ Reilly

Without deviating from the core concepts (i.e. the seven principles), and without adding to the existing confusion surrounding web 2.0, a web 2.0 FAQ would be as follows ..

What is web 2.0? : It’s the intelligent web.

What makes it intelligent? We (the people) do.

How does it happen? : By harnessing collective intelligence

What do you need to harness collective intelligence? : The other six principles of web 2.0 (principle two itself being 'harnessing collective intelligence')!

To me, web 2.0 makes perfect sense if you observe that – of the seven principles – all the other principles feed into the second principle(harnessing collective intelligence)

Let me explain ..

web 2.0 can be described as the ‘Intelligent web’ or ‘Harnessing collective intelligence’(which is the second principle of web 2.0)

The capacity to acquire and apply knowledge is intelligence. Knowledge is the sum or range of what has been perceived, discovered, or learned.

What kind of intelligence can be attributed to the web? How is it different from web 1.0?

IMHO, web 1.0 was hijacked by the marketers, advertisers and people who wanted to stuff canned content down our throats! Take away all that after the dot-com bubble, and what's left is the web as it was originally meant to be – a global means of communication.

The intelligence attributed to the web(web 2.0) arises from us as we begin to communicate.

Thus, when we talk of the ‘Intelligent web’ or ‘harnessing collective intelligence’ – we are talking of the familiar principle of wisdom of crowds

Merely managing a community is not web 2.0 – as many web 2.0 masqueraders will no doubt find out soon.

In order to harness collective intelligence

a) Information must flow freely

b) It must be harnessed/processed in some way – else it remains a collection of opinions and not knowledge

c) From a commercial standpoint, there must be a way to monetise the ‘long tail’ – but that’s the topic of another blog!

My essential argument is – if we consider web 2.0 as ‘Intelligent web’ or ‘Harnessing collective intelligence’(Principle two) – and then look at the other six principles feeding into it – it’s all a lot clearer

Since the wisdom of crowds is so important, let's consider it in a bit more detail, drawing from the Wikipedia entry on the wisdom of crowds.

Are all crowds wise?

No.

The four elements required to form a ‘wise’ crowd are

a) Diversity of opinion

b) Independence: People’s opinions aren’t determined by the opinions of those around them.

c) Decentralization: People are able to specialize and draw on local knowledge.

d) Aggregation: Some mechanism exists for turning private judgments into a collective decision.

Conversely, the wisdom of crowds fails when

a) Decision making is too centralized: The Columbia shuttle disaster occurred because the hierarchical management at NASA was closed to the wisdom of low-level engineers.

b) Decision making is too divided: The U.S. Intelligence community failed to prevent the September 11, 2001 attacks partly because information held by one subdivision was not accessible by another.

c) Decision making is imitative – choices are visible and there are a few strong decision makers who in effect, influence the crowd
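
To make the 'aggregation' element above concrete, here is a minimal sketch (with invented numbers) of turning independent private judgments into a collective decision – and of how herding breaks it:

```python
# Sketch of the aggregation mechanism behind the wisdom of crowds:
# many independent private guesses, one collective estimate. Numbers are invented.
guesses = [510, 430, 475, 520, 460, 495, 505, 440]   # independent judgments

collective_estimate = sum(guesses) / len(guesses)
print(f"Collective estimate: {collective_estimate:.1f}")

# If the guesses are NOT independent (everyone copies a loud early guesser),
# the aggregate simply reproduces that single opinion - one of the failure
# modes listed above.
herded = [510] * len(guesses)
print(f"Herded 'estimate': {sum(herded) / len(herded):.1f}")
```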

Now .. let’s look at the seven principles again ..

1. The Web As Platform

The web is the only true link that unites us all, whoever we are and wherever we are in the world. Hence, to harness collective intelligence and to create the intelligent web, we need to include as many people as we can. The only way we can do this is to treat the web as a platform and use open standards. You can't harness collective intelligence using the ESA/390 – however powerful it is!

2. Harnessing Collective Intelligence

Now becomes the ‘main’ principle or the first principle

3. Data is the Next Intel Inside

By definition, to harness collective intelligence – we must have the capacity to process massive amounts of data. Hence, data is the ‘intelligence’ (Intel)

4. End of the Software Release Cycle

This pertains to 'software as a service'. Software as a 'product' can never keep up to date with all the changing information.

Of course, in the web 2.0 sense we are dealing with code as well as data – so the service concept keeps the data relevant (and the harnessed decision accurate) by accessing as many sources as possible.

5. Lightweight Programming Models

Heavyweight programming models catered for the few. In contrast, using lightweight programming models we can reach many more people (hence more sources of information – enabling data collection and a more intelligent web).

For example: from the seven principles

Amazon.com’s web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.
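
To see why the lightweight approach wins, it helps to look at what a 'REST-style' call amounts to in practice: a plain HTTP GET returning XML, parseable in a few lines. A sketch (the URL below is a hypothetical placeholder, not Amazon's real endpoint):

```python
# The lightweight model in practice: plain XML over HTTP, with no SOAP envelope,
# no WSDL and no toolkit. The URL is a hypothetical placeholder.
from urllib.request import urlopen
import xml.etree.ElementTree as ET

with urlopen("http://example.com/products?search=web+2.0") as resp:
    tree = ET.parse(resp)

for item in tree.findall(".//item"):
    print(item.findtext("title"), item.findtext("price"))
```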

6. Software Above the Level of a Single Device

More devices to capture information, and a better flow of information between these devices, lead to a higher degree of collective intelligence

7. Rich User Experiences

A rich user experience is necessary to enable better web applications, leading to more web usage and better information flow on the web – leading, of course, to a more 'intelligent' web.

And you need look no further than this blog .. itself a collaborative exercise and hopefully adding to the intelligence of the web itself

Thoughts/comments welcome at ajit.jaokar at futuretext.com

Note: I first heard of the phrase ‘Intelligent web’ from Michiel de Lange’s comment on another blog which referred to one of my older posts.

His entry using the phrase ‘Intelligent web’ is HERE

Tim O’ Reilly’s seven principles of web 2.0 make a lot more sense if you change the order


What is web 2.0? Because of my work with mobile web 2.0, I am often asked 'what is web 2.0?'

This is often a genuine question – since there is a lot of confusion out there and many bandwagon seekers. Further, web 2.0 is a ‘Soft concept’ – it’s not a standard, or a formula or a definition – which would have been a lot easier to explain.

I must be one of the few people who actually understand web 2.0!

To me, it’s explained by the collective application of the seven principles of web 2.0 as outlined by Tim O’ Reilly

So, my standard response to this question was to ask people for their email address and then send them the O'Reilly link.

If they had an interest in mobility or digital convergence, I would send them my own work on mobile web 2.0 – which is based on the seven principles of web 2.0

Last week, I was discussing the seven principles yet again .. when suddenly it struck me – perhaps they should be in a different order!

I understand the rationale behind them – but not quite why they are in that specific order.

To me, it all makes perfect sense if the first and the second principles are switched over, because all the other principles feed into the second principle!

Let me explain ..

web 2.0 can be described as the ‘Intelligent web’ or ‘Harnessing collective intelligence’(which is the second principle of web 2.0)

The capacity to acquire and apply knowledge is intelligence. Knowledge is the sum or range of what has been perceived, discovered, or learned.

What kind of intelligence can be attributed to the web? How is it different from web 1.0?

IMHO, web 1.0 was hijacked by the marketers, advertisers and people who wanted to stuff canned content down our throats! Take away all that after the dot-com bubble, and what's left is the web as it was originally meant to be – a global means of communication.

The intelligence attributed to the web(web 2.0) arises from us as we begin to communicate.

Thus, when we talk of the ‘Intelligent web’ or ‘harnessing collective intelligence’ – we are talking of the familiar principle of wisdom of crowds

Merely managing a community is not web 2.0 – as many web 2.0 masqueraders will no doubt find out soon.

In order to harness collective intelligence

a) Information must flow freely

b) It must be harnessed/processed in some way – else it remains a collection of opinions and not knowledge

c) From a commercial standpoint, there must be a way to monetise the ‘long tail’ – but that’s the topic of another blog!

My essential argument is – if we consider web 2.0 as ‘Intelligent web’ or ‘Harnessing collective intelligence’(Principle two) – and then look at the other six principles feeding into it – it’s all a lot clearer

Since the wisdom of crowds is so important, let's consider it in a bit more detail, drawing from the Wikipedia entry on the wisdom of crowds.

Are all crowds wise?

No.

The four elements required to form a ‘wise’ crowd are

a) Diversity of opinion

b) Independence: People’s opinions aren’t determined by the opinions of those around them.

c) Decentralization: People are able to specialize and draw on local knowledge.

d) Aggregation: Some mechanism exists for turning private judgments into a collective decision.

Conversely, the wisdom of crowds fails when

a) Decision making is too centralized: The Columbia shuttle disaster occurred because the hierarchical management at NASA was closed to the wisdom of low-level engineers.

b) Decision making is too divided: The U.S. Intelligence community failed to prevent the September 11, 2001 attacks partly because information held by one subdivision was not accessible by another.

c) Decision making is imitative – choices are visible and there are a few strong decision makers who in effect, influence the crowd

Now .. let’s look at the seven principles again ..

1. The Web As Platform

The web is the only true link that unites us all, whoever we are and wherever we are in the world. Hence, to harness collective intelligence and to create the intelligent web, we need to include as many people as we can. The only way we can do this is to treat the web as a platform and use open standards. You can't harness collective intelligence using the ESA/390 – however powerful it is!

2. Harnessing Collective Intelligence

Now becomes the ‘main’ principle or the first principle

3. Data is the Next Intel Inside

By definition, to harness collective intelligence – we must have the capacity to process massive amounts of data. Hence, data is the ‘intelligence’ (Intel)

4. End of the Software Release Cycle

This pertains to 'software as a service'. Software as a 'product' can never keep up to date with all the changing information.

Of course, in the web 2.0 sense we are dealing with code as well as data – so the service concept keeps the data relevant (and the harnessed decision accurate) by accessing as many sources as possible.

5. Lightweight Programming Models

Heavyweight programming models catered for the few. In contrast, using lightweight programming models we can reach many more people (hence more sources of information – enabling data collection and a more intelligent web).

For example: from the seven principles

Amazon.com’s web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.

6. Software Above the Level of a Single Device

More devices to capture information, and a better flow of information between these devices, lead to a higher degree of collective intelligence

7. Rich User Experiences

A rich user experience is necessary to enable better web applications, leading to more web usage and better information flow on the web – leading, of course, to a more 'intelligent' web.

And you need look no further than this blog .. itself a collaborative exercise and hopefully adding to the intelligence of the web itself

To recap – Here is a WEB 2.0 FAQ

What is web 2.0? : It’s the intelligent web.

What makes it intelligent? We do.

How does it happen? : By harnessing collective intelligence

What do you need to harness collective intelligence? : The other six principles!

Thoughts/comments welcome at ajit.jaokar at futuretext.com

Note: I first heard of the phrase ‘Intelligent web’ from Michiel de Lange’s comment on another blog which referred to one of my older posts.

His entry using the phrase ‘Intelligent web’ is HERE

Google calendar – A disruptive application


Google launched its calendar application today. What are the implications for the web office? Is it part of an upcoming Google online suite – or is it just another perpetual beta product from Google? Will it put the likes of CalendarHub out of business?

I present my insights below and I believe that this announcement reveals a much deeper strategic game plan than previous announcements from Google.

Firstly, consider that of all Google's products, only AdWords/AdSense make any money for the company. Of course, they make a LOT of money (according to Google's SEC filings, 99% of its revenues).

Thus, Google can afford to experiment with a raft of products – most of which make little or no money. But I would bet that any company with 99% of its revenue coming from one source will want to change that situation.

After all, that revenue was built in a matter of five or six years – and could go down the tube in less than that!

So, we can safely assume that the strategists at Google are looking for alternative revenue sources. And they don't have to look far: in Microsoft's home territory, one revenue source beckons – the office suite. The web office is a natural challenger to Microsoft Office.

For some time now, rumours have persisted about Google's foray into Microsoft's turf. These peaked around October last year, when Sun and Google announced a joint partnership around OpenOffice.

In that announcement, many rightly perceived Google to be the stronger partner. Both partners denied that they were going to take Microsoft head on – a wise choice, considering the fate of many who have attempted it before.

Thus, if we combine these two announcements (the SEC filing and the OpenOffice partnership), we see that Google has a majority of its revenue coming from one source, coupled with a desire to at least explore the revenue streams from the corporate web office scenario.

Thus, a picture formed late last year which suggests that Google may persist in being the dominant player in the consumer sector and partner with others to address the corporate sector.

This made sense.

After all, Google is a company more familiar with the consumer front; the corporate sector is not familiar territory for it. In keeping with that trend of Google being a predominantly consumer-focussed company, many analysts have seen the Calendar announcement as just another 'Yahoo 2.0' move.

On the consumer front, Google’s strategy has been predictable in some ways .. it could (cynically) be classed as build yahoo 2.0.

In other words, choose a popular Yahoo offering, strip it of excess baggage and give it the trademark simple Google interface. Then add one killer feature, such as Gmail's storage limit or Google Talk's use of Jabber.

However, I believe that viewing the calendar announcement in terms of a consumer strategy misses a critical point. That's because, if you combine the calendar announcement with Google's acquisition of Writely, a different pattern emerges.

There are some immediate observations

a) Combined with the Writely announcement, it points to a trend of acquiring/building the best-of-breed applications which comprise the web office

b) The rationale and future of the Google/OpenOffice announcement becomes less clear

c) The calendar announcement should be viewed in the context of the Writely acquisition and not as just another 'Yahoo 2.0' move. Although Yahoo has a calendar, it does not have a Writely-type product. Combine the two and you see that the target is different in this case: it's Microsoft – but not via the Sun partnership!

d) In the crossfire, smaller products like CalendarHub are in trouble – but that's not as interesting as the real significance, i.e. the impact on Microsoft.

As one has come to expect of Google, the calendar application itself is a classic web 2.0 application with one extra sexy feature – in this case, natural language processing, where you can enter an event through a plain text phrase like 'Meeting with Jane at 3 tomorrow'.
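
As a toy illustration of how that kind of quick-entry feature can work (this is not how Google Calendar actually parses text – just a sketch of the idea):

```python
# Toy sketch of natural-language event entry, e.g. "Meeting with Jane at 3 tomorrow".
# Illustrative only; not Google Calendar's actual parser.
import re
from datetime import datetime, timedelta

def parse_event(text, now=None):
    now = now or datetime.now()
    day = now + timedelta(days=1) if "tomorrow" in text.lower() else now
    match = re.search(r"\bat (\d{1,2})(?::(\d{2}))?\b", text.lower())
    hour = int(match.group(1)) if match else 9          # default to 9:00
    minute = int(match.group(2)) if match and match.group(2) else 0
    title = re.split(r"\bat \d", text)[0].strip() or text
    return title, day.replace(hour=hour, minute=minute, second=0, microsecond=0)

print(parse_event("Meeting with Jane at 3 tomorrow"))
# -> ('Meeting with Jane', <datetime for 3:00 tomorrow>)
```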

But a single sexy feature alone is not enough to make an impact on the CIOs in the corporate world.

Let's explore this subject further.

a) Are companies interested in exploring options or switching from a Microsoft suite/operating system? YES INDEED!

In 2001, Microsoft revamped its software licensing policy – and enriched its coffers considerably as a direct result!

According to http://news.com.com/2100-1001-908779.html

The old program: Previously, companies bought software licenses for each desktop and then picked up upgrades on an as-needed basis. Software upgrades cost about 59 percent to 72 percent of the original license. Typically, customers upgraded operating systems or applications every three to four years.

Software Assurance: Under the new plan, which is available to participants of Open and Select volume licensing programs, companies pay an annual fee that gives them the right to upgrade each desktop for a specified number of years, usually two to three. The fee is 29 percent of the initial license for desktop software and 25 percent for server software.

The rub: Software Assurance is not cheap. Assuming Windows XP costs $100, companies will pay $87 per desktop after three years for the right to upgrade ($29 x three years). By contrast, companies may have paid nothing under the old program if no upgrades were released or if companies decided not to upgrade. Because many companies upgrade only every other version, the no-cost scenario would be common.
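
Running the article's own numbers makes the point (a quick sketch; the $100 licence price is the article's illustrative assumption):

```python
# The Software Assurance arithmetic from the article, as a quick sanity check.
license_price = 100.00        # illustrative price for Windows XP, per the article
assurance_rate = 0.29         # annual fee: 29% of the initial desktop license
years = 3

assurance_cost = license_price * assurance_rate * years
print(f"Per desktop over {years} years: ${assurance_cost:.0f}")   # $87

# Old model: a company that skipped the upgrade cycle paid $0 over the same period,
# which is why the new plan felt like a price rise to many customers.
```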

Thus, customers WILL switch if they can.

b) The Microsoft operating system release history definitely has a pattern. Leaving aside slipped deadlines, releases have happened roughly every three years and, generally, in the second half of the year. Predictably, the latest version (Vista) has slipped: on 21 March 2006, Microsoft announced it had delayed the consumer release of Windows Vista until January 2007.

c) Yet analysts like Gartner say that this delay is just a blip. They expect a large-scale upgrade to Vista in 2008. We can safely expect companies to follow this advice!

Thus, it would appear that, while the web office is cheaper and Google is headed in the right direction with Writely, Calendar etc., it won't affect the next upgrade cycle in the corporate world.

So, is the web office a write off in the corporate world?

To answer that question, we have to look at the web office for what it really is – a disruptive technology! Further, the critical element that distinguishes the web office is its ability to share information and support collaboration.

Will this make a difference? And if so how?

Disruptive technologies were first discussed by Dr Clayton Christensen in his seminal book – The Innovator’s dilemma.

In The Innovator's Dilemma, Dr Christensen talks of new-market disruption, which is driven by:

a) Customers who could not previously be served profitably by the incumbent, and/or

b) A new product whose features appeal to a niche sector, so that the product can get a foothold in this sector and subsequently move up the value chain.

This is elaborated further by Wikipedia:

“New market disruption” occurs when a product that is inferior by most measures of performance fits a new or emerging market segment. In the disk drive industry, for example, new generations of smaller-sized disk drives were both more expensive and had less capacity than existing, larger-sized drives. Since size was not an important factor for the early computer market, these new drives seemed worse in every way. With the development of the minicomputer (or afterwards, the desktop computer, the notebook, and the personal music player), size became an important dimension, and these new drives quickly dominated the market.

We are then faced with the question: which sector/segment could be the early-adopter customers for this disruptive technology (the web office)?

The distinctive feature of web office (sharing and collaboration) offers a clue

I believe that outsourcing companies (offshore or otherwise) could be the key driver of the web office. They provide a critical incentive for corporates of all sizes to consider the web office as a direct means to collaborate with their partners, onshore or offshore, through virtual teams. This means the existing value chain will not be disturbed (for now!), but once the web office has taken a foothold through outsourcing/virtual teams, it will move up the value chain – directly challenging Microsoft's dominance.

To conclude..

I believe

a) The Google Calendar announcement, coupled with the Writely announcement, reveals a key trend: a foray into the desktop domain through the web office

b) Other players/partnerships will be negatively impacted (companies like CalendarHub, and the OpenOffice agreement)

c) In spite of some gains for Google and slippage from Vista, Vista will still dominate the 2008 upgrade cycle in the corporate world

d) The web office will benefit from the outsourcing trend, where its collaborative features and low cost structure offer it a critical advantage

e) Once having gained that foothold, web office will move up the value chain.

Carnival of the mobilists – No 22

Is at the Technokitten (Helen Keegan's blog). The carnival is fast becoming must-read material on the latest mobile thinking. Enjoy!