Mobile web 2.0: A service blueprint

[Image: USS Voyager blueprint]

Following my previous blog about mobile web 2.0, I wanted to find a blueprint/case study of a mobile web 2.0 service.

This blog is a bit of a gedankenexperiment, but I have drawn on the excellent work being done by Dr Marc Davis and his team at Garage Cinema Research at the University of California, Berkeley.

The service I am considering here is a ‘mobile’ version of a combination of del.icio.us and flickr.

As you probably know, both del.icio.us and flickr are based on tags. However, note that in a mobile context a ‘tag’ has a different meaning from the term on the web. People do not like to enter a lot of information on a mobile device. Thus, a tag in the mobile sense would include explicit information entered by the user (i.e. a ‘web’ tag) but, more importantly, information captured implicitly when the image was taken (for example, the user’s location).
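To make this distinction concrete, here is a minimal sketch (in Python, with field names invented purely for illustration – this is not code from any particular service) of a plain ‘web’ tag versus a ‘mobile’ tag:

```python
# A 'web' tag is just a keyword the user typed in.
web_tag = "big-ben"

# A 'mobile' tag bundles any explicitly typed keywords with context that the
# handset can capture implicitly, without the user typing anything extra.
mobile_tag = {
    "explicit": ["big-ben"],                    # what the user chose to type (often empty)
    "implicit": {
        "captured_at": "2006-01-15T14:32:00Z",  # when the image was taken
        "cell_id": "234-15-1234-5678",          # where (cell id or GPS fix)
        "user": "ajit",                         # who took it
    },
}
```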

The service would enable you to

a) Search for related images and get more information about a ‘camera phone image’ using historical analysis of metadata (including tags) from other users. This part works like del.icio.us, i.e. searching via tags, BUT with a mobile element because the ‘tag’ could include many data elements that are unique to mobility (such as location)

b) ‘Share’ your images with others (either nominated friends or the general public), similar to flickr but as a mobile service

From the user’s perspective, she would be able to

a) Capture an image using a camera phone along with metadata related to that image

b) Gain more information about that image from an analysis of historical data (either identifying a missing element in the image or identifying the image itself)

c) Search related images based on tags

d) Share her image with others – either nominated friends or the general public

Let’s break down the components further. We need -

a) A mobile ‘tagging’ system at the point of image capture

b) A server side processing component which receives data elements from each user. It then adds insights based on historical analysis of data gleaned from other users.

c) An ability to deliver the results to the user (these could be a list of related images based on the tag or ‘missing’ information about the image)

d) A means to capture the user’s feedback on the results

e) A means to share images with others.

Tagging an image

It’s not easy to ‘tag’ a mobile image at the point of capture. In fact, in a mobile context, implicit tagging is more important than explicit tagging (an explicit tag being a tag which the user enters themselves).

At the point the image is taken on a camera phone, there are three classes of data elements we could potentially capture:

a) Temporal – for example, the time that the image was captured

b) Spatial – the GPS location or cell id

c) Personal/Social – username (and other personal profile information which the user chooses to share), presence, any tags that the user has entered, other people in the vicinity (perhaps identified by Bluetooth), other places of interest recently visited, etc.

The client component captures all these data elements and sends them to the server. It also displays the results from the server. (Garage Cinema Research uses a system called Mobile Media Metadata (MMM) which performs this function.)
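Purely as an illustration (this is not the MMM code – the function names, fields and upload URL below are all hypothetical), the client-side capture step might look something like this:

```python
import json
import time
import urllib.request

def capture_metadata(explicit_tags, cell_id, username, nearby_bluetooth_ids):
    """Bundle the explicit and implicit data elements captured with an image."""
    return {
        # Temporal: when the image was taken
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        # Spatial: cell id (or a GPS fix, if the handset has one)
        "cell_id": cell_id,
        # Personal/social: who took it, any tags they typed, who was nearby
        "user": username,
        "explicit_tags": explicit_tags,
        "nearby_devices": nearby_bluetooth_ids,
    }

def upload(image_bytes, metadata, server_url="http://example.org/upload"):
    """Send the image's metadata to a (hypothetical) server endpoint and
    return whatever results the server sends back."""
    payload = json.dumps({"metadata": metadata,
                          "image_size": len(image_bytes)}).encode()
    request = urllib.request.Request(server_url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g. related images, suggested tags
```

The point is that everything except explicit_tags is filled in as a side effect of taking the picture – the user types nothing extra.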

Server side processing

The server aggregates metadata from all users and applies some algorithms to the data. The data could also be ‘enriched’ by external data sources such as land registry data, mapping data, etc.

It then sends the results back to the user who can browse the results.
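Again as a sketch only (the ‘algorithm’ here is just a frequency count over other users’ tags at the same location, and every name is my own invention – a real service would do something far richer), the server-side step might look like:

```python
from collections import Counter

# In-memory stand-in for the aggregated metadata store:
# one record per uploaded image, in the shape produced by the client sketch above.
metadata_store = []

def ingest(record):
    """Accept a metadata record from a client and add it to the pool."""
    metadata_store.append(record)

def related_tags(cell_id, exclude_user=None, top_n=5):
    """Suggest tags by counting what other users tagged at the same location.
    Enrichment from mapping or land registry data is omitted here."""
    counts = Counter()
    for record in metadata_store:
        if record["cell_id"] != cell_id:
            continue
        if exclude_user is not None and record["user"] == exclude_user:
            continue
        counts.update(record["explicit_tags"])
    return [tag for tag, _ in counts.most_common(top_n)]

# e.g. after other users have uploaded their images:
#   ingest({"cell_id": "234-15-1234-5678", "user": "sue",
#           "explicit_tags": ["big-ben", "westminster"]})
#   related_tags("234-15-1234-5678", exclude_user="ajit")
#   -> ['big-ben', 'westminster']
```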

Finding ‘missing elements’ of your image

In many cases, it’s not easy to identify elements of the image (or even the image itself).

Consider the three images of Big Ben shown below.

The third image is not very clear. It also includes two neighbouring ‘points of interest’, i.e. the river Thames and the Houses of Parliament.

[Image: three camera phone images of Big Ben]

Based on metadata from other users, the ‘river Thames’ and ‘Houses of Parliament’ could be identified for the person capturing the third image. This is because other users would potentially have captured separate images of the three points of interest and tagged them.

Thus, if the third user wanted to know ‘the river in the image’ or ‘the building in the image’, they would be presented with a likely set of related points of interest which could include the river Thames and the Houses of Parliament. (Laughably trivial, I know, but it illustrates the point!)
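To show the kind of lookup involved (a deliberately toy sketch – the point-of-interest list, coordinates and distance threshold are invented for the example), the suggestion step could be as simple as a query for points of interest, built up from other users’ tagged images, near where the unclear image was captured:

```python
import math

# Points of interest built up from other users' tagged images
# (coordinates are approximate and purely illustrative).
poi_index = [
    {"name": "Big Ben",              "lat": 51.5007, "lon": -0.1246},
    {"name": "River Thames",         "lat": 51.5010, "lon": -0.1200},
    {"name": "Houses of Parliament", "lat": 51.4995, "lon": -0.1248},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def likely_points_of_interest(lat, lon, radius_km=0.5):
    """The 'missing elements' suggestion: points of interest, tagged by
    other users, near where the unclear image was captured."""
    return [poi["name"] for poi in poi_index
            if distance_km(lat, lon, poi["lat"], poi["lon"]) <= radius_km]

# e.g. likely_points_of_interest(51.5005, -0.1235)
# -> ['Big Ben', 'River Thames', 'Houses of Parliament']
```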

Sharing your images

This is the ‘flickr’ component. However, ‘sharing’ in a mobile context also includes location. This is very similar to the ‘air graffiti’ system I described in my previous article.

To recap from my previous blog, the air graffiti system is the ability to ‘pin’ digital ‘post-it notes’ at any physical point. Suppose you were at a holiday destination and you took a picture or a video of that location. You then ‘posted’ that note digitally with your comments and made it accessible to your ‘friends’. Years later, one of your friends happens to come to that same place and, as she walks towards the venue, a message pops up on her device with your notes, picture and comments.

As with flickr, ‘friends’ may be members of the general public with similar interests or a closed group.
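Mechanically there is not much to it – the ‘pop up when a friend walks past’ behaviour is essentially a proximity check against stored notes. The sketch below is a toy version with invented names; the hard problems are privacy and deciding who counts as a ‘friend’.

```python
import math

# Digital 'post-it notes' pinned to physical locations, each visible either
# to a set of nominated friends or (if the set is empty) to everyone.
pinned_notes = [
    {"author": "ajit", "lat": 51.5007, "lon": -0.1246,
     "text": "Took this on our trip - great view of Big Ben from here!",
     "visible_to": {"sue", "tom"}},
]

def roughly_within(lat1, lon1, lat2, lon2, radius_km):
    """Crude planar distance check - good enough over a few hundred metres."""
    dy = (lat2 - lat1) * 111.0                                 # km per degree of latitude
    dx = (lon2 - lon1) * 111.0 * math.cos(math.radians(lat1))  # km per degree of longitude
    return math.hypot(dx, dy) <= radius_km

def notes_for(user, lat, lon, radius_km=0.1):
    """Notes pinned near the user's current position that she is allowed to see."""
    return [note for note in pinned_notes
            if (not note["visible_to"] or user in note["visible_to"])
            and roughly_within(lat, lon, note["lat"], note["lon"], radius_km)]

# e.g. notes_for("sue", 51.5006, -0.1244) would surface ajit's note;
#      notes_for("bob", 51.5006, -0.1244) would return nothing.
```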

So, is this a mobile web 2.0 service?

Let’s consider some of the principles here (for a detailed explanation, please read my article Mobile web 2.0: Web 2.0 and its impact on the mobility and digital convergence (Part one of three)).

• It’s a service and not packaged software.

• It’s scalable.

• It utilises the ‘long tail’ i.e. input from many users as opposed to a core few.

• The service is managing a data source (it’s not just software).

• The data source gets richer as more people use the service.

• Users are trusted as ‘co-developers’ i.e. users contribute significantly.

• The service clearly harnesses ‘collective intelligence’ and by definition is ‘above the level of a single device’.

• User data is captured implicitly, by default.

• Data is ‘some rights reserved’ – people are sharing their images with others.

The two aspects not covered above are

• A rich user experience and

• A lightweight programming model

These are implementation issues and could easily be included. So, IMHO, this is indeed an example of a mobile web 2.0 service!

Notes:

a) The example may sound trivial since Big Ben is a well-known location, but the same principle could apply to images of other, lesser-known sites.

b) Of course, other types of data could be captured from the mobile phone, for example video and sound.

c) There are no major technical bottlenecks as far as I can see (there are some commercial/privacy issues though).

d) From the above, you can see that moblogging, in itself, is not an example of a web 2.0 service.

e) There is a whole raft of problems when it comes to the network effect and mobility. I have not discussed these here.

As usual, I seek comments. You can email me at ajit.jaokar at futuretext.com

References:

Garage Cinema Research

Mobile Media Metadata for Mobile Imaging: Marc Davis (University of California at Berkeley) and Risto Sarvas (Helsinki Institute for Information Technology)

From Context to Content: Leveraging Context to Infer Media Metadata: Marc Davis, Simon King, Nathan Good, and Risto Sarvas (University of California at Berkeley)

Image One

Image Two

Image Three

USS Voyager Blueprint image: http://www.startrek.com


