Thursday 1 November 2012

The Future is Software.


Does a younger, more tech-savvy generation care more about personalizing their devices, and about the data they can access on them, than about expensive branded hardware?

I was recently teaching some third-year undergraduates and noticed that one of them was working on an Android tablet, which turned out to be a Google Nexus 7. It looked rather neat as he showed me the various widgets on his home screen, along with his email and chat feeds. One comment he made struck me – ‘I’ve had enough of iOS’ he said. He and his friends too. What he meant was that they were all a bit underwhelmed, and actually frustrated, by the lack of adaptability in the now rather basic iOS interface. Android was ‘the way to go’.

Then a few other things made me think. First, there was the launch of the new iPads, along with the departure from Apple of Scott Forstall, the man in charge of iOS, as described by Adam Lashinsky at CNN Money. Then there was this article in the FT by the editor-in-chief of the MIT Technology Review, who was bold enough to say: “Microsoft knows it is slowly dying but declines to accept its fate. Apple, flush with cash, does not yet have to admit that with the death of its tutelary genius, it has lost its way.” Then there was the fact that Apple missed its targets for the second time.

The new iPad is basically the same as the old one, only quicker and with different cameras, and the iPad mini is a smaller version. Not much new there. Whether it's true or not, Forstall was held responsible for the things that went wrong for Apple recently – Siri and Apple Maps. Both were examples of software development that really wasn’t in line with the usual Apple perfectionist ethos.

Now there are daily stories predicting the rise of this mobile player and the demise of that one, be it Google, Blackberry, Microsoft or anyone else. What I am wondering here is whether there is a larger trend emerging out of a generational shift that might explain Android’s increasing popularity. I wonder if Apple’s lack of attention to, and reluctance to change, the operating software of these devices isn’t flying in the face of what the younger generation – the consumers of the near future – wants. All of a sudden, the walled-garden approach their well-off parents have favoured with Apple doesn’t look that appealing anymore. The more flexible, adaptable Android appeals to them, as does a commitment to software that solves the bigger problems, works reliably and has a genuine impact on lifestyles. Is the sleek hardware-based approach beginning to look like the kind of thing well-off parents own rather than the kids? That would put it perilously close to being uncool.

Amazon and Google look like they are taking a long view in which hardware becomes cheaper, maybe even free, and it’s all about the software and, crucially, the data available to them and how they can access it – Maps, for example. It seems a younger generation might agree with them.

Thursday 18 October 2012

New Scientist takes note of our prototype for User Generated Content

Our colleagues at CWI, who were part of the development team on our prototype system for automatic compilation of videos from user-generated content, chatted recently with New Scientist magazine to tell them a little more about it. Here's the article - http://www.newscientist.com/article/mg21528835.800-video-mashups-give-you-personalised-memories.html


The narrative engine developed by the Narrative and Interactive Media (NIM) group at Goldsmiths works with the help of an authored symbolic representation of the story surrounding a social event – in the case of our prototype, a school concert. Via a web-based app, users were able to interact with a personalised movie created from the video footage shot and uploaded by the friends and family who witnessed and captured media at the concert.

I am about to present a paper – 'Interactive Video Stories from User Generated Content: a School Concert Use Case' – about building the algorithm for the system at the upcoming International Conference on Interactive Digital Storytelling (ICIDS) in San Sebastian, Nov 12-15. In the paper I go into more detail about how the notion of a 'sequence' was borrowed from existing TV editing practice and used to create a flexible computer-based story model built on a previous NIM development, Narrative Structure Language (NSL). The last link takes you through to a publication in the ACM Multimedia Systems Journal about NSL.
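To give a flavour of what a sequence-based story model can look like, here is a tiny illustrative sketch in JavaScript. To be clear, this is not the NSL schema or our actual algorithm – the names and structure are invented for illustration:

    // Purely illustrative: a minimal sequence-based story model.
    var concertStory = {
      sequences: [
        { id: 'arrival',   clips: [] },  // setting the scene
        { id: 'firstSong', clips: [] },  // the performance itself
        { id: 'applause',  clips: [] }   // audience reaction
      ]
    };

    // File a user-uploaded clip under the sequence it belongs to.
    function addClip(story, sequenceId, clip) {
      story.sequences.forEach(function (seq) {
        if (seq.id === sequenceId) { seq.clips.push(clip); }
      });
    }

    // Assemble a personalised edit: one clip per sequence, preferring
    // clips that feature the people this particular viewer cares about.
    function assemble(story, favouritePeople) {
      return story.sequences.map(function (seq) {
        var preferred = seq.clips.filter(function (clip) {
          return clip.people.some(function (person) {
            return favouritePeople.indexOf(person) !== -1;
          });
        });
        return preferred[0] || seq.clips[0];
      }).filter(Boolean);
    }

    // Example: a parent's upload, then Alice's grandmother asks for her cut.
    addClip(concertStory, 'firstSong', { url: 'mum-cam.mp4', people: ['Alice'] });
    assemble(concertStory, ['Alice']); // -> [{ url: 'mum-cam.mp4', ... }]

The point of the authored sequence structure is that the story order is fixed by the author while the material filling each slot can vary per viewer – which is what makes the movie personalisable.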

Wednesday 10 October 2012

HTML5 shows what it can do with the help of Microsoft

HTML5 has long been sold as the promised land where web apps will replace all those downloads we make to our devices. However, it always seems to be arriving at some point in the future, with the odd exciting demo on a web site somewhere to whet our appetites.

In our own work on interactive narrative at the Narrative and Interactive Media group at Goldsmiths we have started experimenting with HTML5/JavaScript to assemble our complex interactive video stories in a way that was hard to achieve with previous web technologies, and it is proving to be powerful.
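As a minimal sketch of why this appeals – not our production code, and with invented clip names – HTML5 lets you chain video clips through a single video element in just a few lines:

    // Play an ordered list of clips back-to-back through one <video> element.
    var player = document.querySelector('video');
    var playlist = ['opening.mp4', 'scene-two.mp4', 'finale.mp4']; // invented names
    var index = 0;

    function playNext() {
      if (index < playlist.length) {
        player.src = playlist[index++];
        player.play();
      }
    }

    // The 'ended' event fires when the current clip finishes, so the next
    // one can be cued up – the basis of a dynamically assembled story.
    player.addEventListener('ended', playNext);
    playNext();

Because the playlist is just data, an interactive story engine can rewrite it on the fly in response to the viewer – something that was awkward with plug-in-based video.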

And, interestingly, it seems that Microsoft are beginning to agree that there's a future in it, enough to push it within their own ecosystem.


This video shows a Microsoft-sponsored version of Contre Jour running in the browser. The interesting thing to see is that all aspects of the touch interface are present and correct, and functioning smoothly.
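For anyone curious how that kind of thing is done, here is a hedged sketch of touch handling in the browser. I'm showing the standard touch events here – IE10 itself used its vendor-prefixed MSPointer events at the time – and moveControlPoint is a hypothetical stand-in for the game logic, not a real API:

    // Follow the user's finger on a <canvas> using standard touch events.
    var canvas = document.querySelector('canvas');

    canvas.addEventListener('touchmove', function (e) {
      e.preventDefault(); // stop the page scrolling while the user drags
      var touch = e.touches[0]; // first active finger
      // Hypothetical game hook: drag the nearest control point to the finger.
      moveControlPoint(touch.clientX, touch.clientY);
    });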

OK, I agree it's a shame that it only runs properly in the soon-to-be-released IE10. However, fair play to Microsoft for developing interactive media that works so well through the browser. You could be forgiven for thinking that this is a locally downloaded application.

It used to be a common thread of thinking that the download-driven app economy would soon be sacrificed to the onslaught of web apps. Well, with Microsoft on board, who knows, it might just happen. I for one am looking forward to the release of the Surface.

Thursday 27 September 2012

Exploring orchestrated performance @ University College Falmouth


We recently went down to visit the excellent facilities at University College Falmouth (UCF) as part of our ongoing work on the VConect project. The University and its research wing are just one of the institutions benefiting from the roll-out of superfast broadband in Cornwall, currently being achieved with the help of BT. Such an infrastructure of course lends itself well to the requirements of the VConect project and its vision of incorporating high-quality, intelligent video into social networking experiences.

You may be familiar with some of our work on orchestration in TA2. The VConect system builds on the research done in that previous project on video conferencing for the home, with the benefit of two different use cases – a socialisation use case based on computers in the home, and a performance use case based on creating linked performance spaces using video-mediated communication. UCF are exploring the requirements of such a platform using their existing infrastructure with the help of dance and acting students from their courses.

When we visited we were treated to a demonstration of work in progress. This particular experiment was looking at the possibility of creating the impression of being ‘side by side’ in the same space. The future role of orchestration will be deciding how best to use single or multiple screens and projectors with the multiple camera feeds available. The challenge will become even greater towards the end of the project as the two paths – performance and home – are merged. How best to represent a mediated performance such as this to an audience of many people sitting at their computers at home?

Wednesday 5 September 2012

NIM’s work in the recently completed TA2 project was featured in The Register

In TA2, we hypothesised that the use of multiple cameras in each location could facilitate such communication experiences. However, this generated a new problem: how should content from multiple cameras be shown on each particular screen? One answer is to have an automatic decision-making process that is aware of the conversation flow and is able to represent it by controlling the cameras and mixing their content onto the screen of each location. This is similar to what a TV director would do to cover a live event, but more complex, as each room effectively needs its own dedicated director choosing from the shots available in the other rooms. We called this process orchestration. The overriding assumption was that, should we be able to build an intelligent system capable of orchestration, a vibrant and engaging representation of the shared activity would be achieved on each screen, ultimately leading to the participants being immersed in the communication experience, unaware of the technology and devices through which it takes place.
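To make the per-room director idea concrete, here is a toy sketch in JavaScript. This is emphatically not the TA2 system – the names are invented and the selection rule is the simplest one imaginable – but it shows the shape of the decision each room's director makes:

    // Toy orchestrator: each room's "director" picks, from the shots
    // available in the *other* rooms, the one most relevant right now.
    function chooseShot(room, allRooms) {
      var candidates = allRooms.filter(function (r) { return r.id !== room.id; });
      // Simplest possible rule: show a room where someone is speaking.
      // A real orchestrator would also weigh conversation flow, shot
      // history, cut frequency and so on.
      var speaking = candidates.filter(function (r) { return r.isSpeaking; });
      var chosen = speaking[0] || candidates[0];
      return chosen.cameras[0]; // e.g. that room's wide shot
    }

    // Example: three linked living rooms (invented data).
    var rooms = [
      { id: 'london', isSpeaking: false, cameras: ['london-wide', 'london-closeup'] },
      { id: 'paris',  isSpeaking: true,  cameras: ['paris-wide', 'paris-closeup'] },
      { id: 'berlin', isSpeaking: false, cameras: ['berlin-wide'] }
    ];
    console.log(chooseShot(rooms[0], rooms)); // -> 'paris-wide'

Running one such director per room, continuously and in sync with the audio analysis, is what turns a grid of static camera feeds into something closer to live television coverage.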

The Register seemed to agree, describing our system as being a bit like Minority Report.
To find out more about our research into orchestration, check out our latest public deliverable on the subject.