IBM Virtual Worlds 1Q 2008 roundup

A brief summary of what’s been happening with IBM in virtual worlds in the first quarter of this year. It’s an impressive list.

Mike Rhodin, General Manager of IBM Lotus software, recently made five predictions about the future of collaborative working. They included open standards and an increase in IM and other real-time tools. The number one prediction was:

The Virtual Workplace will become the rule. No need to leave the office. Just bring it along. Desk phones and desktop computers will gradually disappear, replaced by mobile devices, including laptops, that take on traditional office capabilities. Social networking tools and virtual world meeting experiences will simulate the feeling of being there in person. Work models will be changed by expanded globalisation and green business initiatives that reduce travel and encourage work at home.

“The definition of “meetings” will radically transform and become increasingly ad hoc and instantaneous based on context and need. 3-D virtual world and gaming technologies will significantly influence online corporate meeting experiences to deliver more life-like experiences demanded by the next-generation workers who will operate more efficiently in this familiar environment.”
http://www.techradar.com/news/computing/ibm-sounds-death-knell-for-desktop-pc-270127

Bruce Morse (IBM VP of Unified Communications and Collaboration) and Steve Mills (IBM Senior VP, Software Group) are both quoted in a recent eWeek article, which discusses a major investment in UCC, as well as an announcement about a partnership with virtual worlds company Forterra Systems.

Sametime development manager Konrad Lagarde gave a demo during Lotusphere this year. He demonstrated some early integration between IBM’s internal Metaverse and Sametime.

During the presentation, Lagarde text chatted with a participant, also a 3-D avatar, who showed his enthusiasm by jumping up and down. Lagarde also showed a conference call feature for the Sametime client with pictures of invited attendees arranged around a two-dimensional drawing of a conference table. Those who are already present are shown around the table, while at the bottom of the screen are shaded photos of those who are invited but have not yet arrived.
http://www.networkworld.com/news/2008/012308-lotusphere-sametime-virtual.html

Dan Pelino, General Manager of IBM Global Healthcare & Life Sciences Industry, announced the IBM Virtual Healthcare Island in Second Life in February.

“We believe that the use of our new virtual world provides an important, next-generation Internet-based resource to show how standards; business planning; the use of a secured, extensible and expandable architecture; HIE interoperability; and data use for healthcare analytics, quality, wellness and disease management are all helping to transform our industry.” IBM’s Healthcare & Life Sciences (HCLS) Industry will continue to develop the new island in months to come. The island can perform as a virtually “always on” demonstration tool for IBM’s sales personnel.
http://www-03.ibm.com/press/us/en/pressrelease/23580.wss

Michael Osias of IBM Research is quoted in an announcement about a 3D visualisation of a data centre, which was implemented using OpenSim.

Implenia, a Swiss construction, building services and real estate company, used the IBM virtual data center solution to extend its existing virtual operations center, which was previously used mainly for facilities management processes. Adding data from data center equipment gave Implenia finer control of its HVAC and security systems. The virtual data center is a tailored 3-D replica of servers, racks, networking, power and cooling equipment that allows data center managers to experience real-time enhanced awareness of their dispersed resources.

“Viewing information about your data center in 2-D text — even in real time — only tells a data center manager part of the story, because our brains are wired for sight and sound,” said IBM Researcher Michael Osias, who architected the 3-D data center service. “By actually seeing the operations of your data center in 3-D, even down to flames showing hotspots and visualizations of the utilization of servers allows for a clearer understanding of the enterprise resources, better informed decision-making and a higher level of interaction and collaboration.”
http://www.tradingmarkets.com/.site/news/Stock%20News/1129343/
see also http://www.virtualworldsnews.com/2008/02/ibm-launches-3.html

PowerUp (powerupthegame.org) is an educational game created by IBM, using the Torque engine. It teaches teenagers about engineering as well as environmental issues. PowerUp is

a free, online, multiplayer game that allows students to experience the excitement and the diversity of modern engineering. Playing the game, students work together in teams to investigate the rich, 3D game environment and learn about the environmental disasters that threaten the game world and its inhabitants.
http://www.powerupthegame.org/
see also http://annieok.com/tangent/?p=505

Emotiv (emotiv.com) and IBM announced a partnership in February around a headset which “interprets the interaction of neurons in the brain” and is due to go on sale later in 2008.

“It picks up electrical activity from the brain and sends wireless signals to a computer,” said Tan Le, president of US/Australian firm Emotiv.

Emotiv is working with IBM to develop the technology for uses in “strategic enterprise business markets and virtual worlds”. Paul Ledak, vice president of IBM Digital Convergence, said brain-computer interfaces like the Epoc headset were an important component of the future 3D Internet and the future of virtual communication.
http://news.bbc.co.uk/1/hi/technology/7254078.stm

Bluegrass was discussed in January 2008 in the Virtual Worlds News blog

IBM Research is working to solve the digital divide in the workforce with Project Bluegrass, a project that integrates three key factors in motivating Millennials — collaboration, communication and visualization. Project Bluegrass takes the IBM Jazz technology and creates a virtual-world environment where software developers can work, chat and brainstorm around a virtual water cooler while “seeing” their teammates alongside interactive visual representations of ideas, data from the Web and from Jazz-based sources.
http://www.virtualworldsnews.com/2008/01/ibm-launches-pr.html

Long live the infocenter!

I’ve always been a bit scared of infocenters – even though, deep down, I know they’re “just HTML”, they never quite seem that way. JavaScript and to-the-pixel object placement are just getting too good these days. You could almost mistake an infocenter for a Java applet, or at least some kind of fancy AJAX application.

But no, it’s just a set of good old framesets, frames, HTML content, hyperlinks and images, bound together with some JavaScript egg white and stirred vigorously for a few minutes to make the infocenters we know and (some, I hear) love.

However, to make it seem like it’s “alive”, there is a Java servlet lurking back at the server, generating parts of the Infocenter dynamically, including rendering the Table of Contents from a behind-the-scenes XML description, and running search and bookmarks and things like that.
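(For the curious, that behind-the-scenes XML is just a nested list of topic labels and links. Something roughly along these lines – the labels and hrefs here are invented for illustration, not taken from a real infocenter:)

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- illustrative toc.xml: labels and hrefs are made up -->
    <toc label="Example product documentation" topic="html/overview.html">
      <topic label="Getting started" href="html/getting_started.html">
        <topic label="Installing" href="html/installing.html"/>
        <topic label="Configuring" href="html/configuring.html"/>
      </topic>
      <topic label="Administering" href="html/administering.html"/>
    </toc>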

What I became curious about, then, were two things:

  • Could we extract a sub-set of an infocenter and just display that, rather than having to wade through everything we were given? For example, I might only be interested in the administration section of a product, or might only need to know about one component of a toolkit of many components. Having a more navigable and less intimidating sub-set would greatly improve productivity.
  • Rather than having to install an Eclipse infocenter run time on a server to host a set of documentation, is there a way to run it on any plain old HTTPd (e.g. Apache)? I accept that search, bookmarks, and other dynamic features won’t work, but the real information – the useful stuff in the right-hand window, which we use to do our jobs with the products we’re trying to understand; and the all-important navigational Table of Contents structure in the left-hand window – would be available to us “anywhere” we can put an HTTPd.

With a ThinkFriday afternoon ahead of me, I thought I’d see what could be done. And the outcome (to save you having to read the rest of this!) is rather pleasing: Lotus Expeditor micro broker infocenter.

This is a subset of the Lotus Expeditor infocenter containing just the microbroker component, being served as static pages from an Apache web server.

First, the information content. The challenge I set was to extract the sections of the Lotus Expeditor documentation which relate to the microbroker component. It has always been a bit of a struggle to find these sections hidden amongst all the other information, as they are in rather non-obvious places, and somewhat spread around. This means creating a new navigation tree for the left-hand pane of the Infocenter. When you click on a link in the navigation tree, that particular topic of information is loaded into the right-hand window.

However, it quickly became apparent that just picking the microbroker references from the existing nav tree would yield an unsatisfactory result: the topics need to be arranged into a sensible structure so that someone looking for information on how to perform a particular task would be guided to the right information topic. Just picking leaf nodes from the Lotus Expeditor navigation tree would leave us with some oddly dangling information topics.

Fortunately Laura Cowen, a colleague in the Hursley User Technologies department for messaging products, does this for a living, and so was able to separate out the microbroker wheat from the rest of the Expeditor documentation and reorganise the topics into a structure that makes sense outside the context of the bigger Expeditor Toolkit, and also, to be honest, into a much more meaningful and sensible shape for microbroker users.

First we needed to recreate the XML which the infocenter runtime server uses to serve up the HTML of the navigation tree. Laura gave me a sample of the XML, which contains the title and URL topic link. From the HTML source of the full Expeditor navigation tree, using a few lines of Perl, I was able to re-create XML stanzas for the entries in the navigation tree. Laura then restructured these into the shape we wanted, throwing out the ones we didn’t want, and adding in extra non-leaf nodes in the tree to achieve the information architecture she wanted to create.
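The scraping itself was nothing grand. A rough sketch of the sort of Perl involved – the input file name, the HTML pattern and the output format here are illustrative guesses rather than the actual script:

    #!/usr/bin/perl
    # Sketch: pull topic titles and links out of the infocenter navigation
    # tree HTML and emit toc-style XML stanzas ready for restructuring.
    # The input file name and the HTML pattern are assumptions.
    use strict;
    use warnings;

    open my $in, '<', 'expeditor_nav.html' or die "cannot open nav HTML: $!";
    while (my $line = <$in>) {
        # assume each nav entry looks roughly like <a href="topic/foo.html">Title</a>
        while ($line =~ m{<a[^>]+href="([^"]+)"[^>]*>([^<]+)</a>}g) {
            my ($href, $title) = ($1, $2);
            print qq{<topic label="$title" href="$href"/>\n};
        }
    }
    close $in;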

Wave a magic wand, and that XML file becomes a plug-in zip file that can be offered up to an infocenter run time, and the resulting HTML content viewed. After some iterative reviews with potential future users of the microbroker infocenter, we finalised a navigation tree that balanced usability with not having to create new information topics, apart from a few placeholders for non-leaf nodes in the new navigation tree.
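The “magic wand” is mostly just the standard Eclipse documentation plug-in packaging: the navigation XML plus a plugin.xml that registers it via the org.eclipse.help.toc extension point, zipped up so the infocenter run time can pick it up. Roughly like this (the file names are illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- plugin.xml for a documentation plug-in; file names are illustrative -->
    <plugin>
       <extension point="org.eclipse.help.toc">
          <toc file="toc.xml" primary="true"/>
       </extension>
    </plugin>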

So far so good – we had an infocenter for just the microbroker component of Expeditor, and it was nicely restructured into a useful information architecture.

Now for phase two of the cunning plan: can we host that on a plain old HTTPd without the infocenter run time behind it? The information topics (the pages that appear in the right-hand window) are static already, and didn’t need to be rehosted – the existing server for the Lotus Expeditor product documentation does a perfectly good job of serving up those HTML pages. It’s the rest of the Infocenter – the multiple nested framesets which make up the Infocenter “app”, and the all-important navigation tree – which is dynamically served from a set of JavaServer Pages (JSPs).

A quick peek at the HTML source revealed that several JSPs were being used with different parameter sets to create different parts of the displayed HTML. These would have to be “flattened” to something that a regular web server could host. A few wgets against the infocenter server produced most of the static HTML we would need, but quite a few URLs needed changing to make them unique when converted to flat filenames. A bit of Perl and a bit of hand editing sorted that lot out.

Then it transpired there is a “basic” and an “advanced” mode which the back-end servlet makes use of to (presumably) support lesser browsers (like wget 😐 ). Having realised what was going on, a bit of tweaking of the wget parameters to make it pretend to be Firefox brought the “advanced” content through from the server.
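For reference, the fetching was just wget driven from a small script, with the user agent overridden. A sketch of the idea – the base URL, query strings, output names and user-agent string below are placeholders, not the exact ones used:

    #!/usr/bin/perl
    # Sketch: fetch the JSP-generated frames as flat, uniquely named files,
    # pretending to be Firefox so the servlet serves the "advanced" pages.
    # The URL, parameters and output names are placeholders.
    use strict;
    use warnings;

    my $base = 'http://example.com/help';
    my $ua   = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB) Gecko/20061010 Firefox/2.0';

    # each JSP + query string combination becomes one flat file
    my %pages = (
        'advanced/index.jsp'                                 => 'index.html',
        'advanced/tocView.jsp?toc=/com.example.doc/toc.xml'  => 'tocView_example.html',
    );

    while ( my ($path, $out) = each %pages ) {
        system('wget', "--user-agent=$ua", '-O', $out, "$base/$path") == 0
            or warn "wget failed for $path\n";
    }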

Then we had to bulk get the images – there are lots of little icons for pages, twisties, and various bits of window dressing for the infocenter window structure. All of this was assembled into a directory structure and made visible to an Apache HTTPd.

Et voila! It worked! Very cool! An infocenter for the microbroker running on a straight HTTPd. Flushed with success, we moved it over to MQTT.org (the friendly fan-zine web site for the MQ Telemetry Transport and related products like microbroker). Tried it there…

Didn’t work. Lots of broken links, empty windows and “error loading page” stuff. Seems the HTTPd on MQTT.org isn’t quite as forgiving as mine: files with a .jsp extension were being served back with the MIME type text/plain rather than text/html, which may not look like much, but makes all the difference. So a set of symlinks of .jsp files to .html files, and another quick wave of a Perl script over the HTML files, put everything right.
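The fix was mechanical enough to script. Something along these lines – the directory layout and the substitution are assumptions, not the exact script:

    #!/usr/bin/perl
    # Sketch: give every flattened .jsp file a .html symlink so Apache serves
    # it as text/html, then rewrite .jsp references inside those files to
    # point at the .html names instead.
    use strict;
    use warnings;
    use File::Find;

    # pass 1: create foo.html -> foo.jsp symlinks
    find( sub {
        return unless /\.jsp$/ && -f;
        ( my $html = $_ ) =~ s/\.jsp$/.html/;
        symlink( $_, $html ) unless -e $html;
    }, '.' );

    # pass 2: rewrite links inside the fetched pages
    find( sub {
        return unless /\.jsp$/ && -f;
        open my $fh, '<', $_ or die "read $_: $!";
        my $content = do { local $/; <$fh> };
        close $fh;
        $content =~ s/\.jsp\b/.html/g;
        open $fh, '>', $_ or die "write $_: $!";
        print {$fh} $content;
        close $fh;
    }, '.' );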

So with an afternoon’s work, we were able to demonstrate to our considerable satisfaction, that we could excise a sub-set of an Infocenter from a larger book, restructure it into a new shape, and take the resulting Infocenter content and flatten it to a set of HTML pages which can be served from a regular HTTP server.

Building cities by generation – Introversion

A recent conversation reminded me that I had read something in Edge magazine about city generation for games. The premise is that whilst real places, or soon-to-be-real places, may need to be hand crafted, sprawling believable cities that are backdrops or scenery, like forests or mountains, should just be able to be generated. Of course ‘just’ hides the complexity of what needs to be done. The guys at Introversion seem to be on the case though.
This video shows some of the toolkit in action as it decides how and where to lay out a sprawling city.

There is more from the developers on their forum.
I had seen this in a number of places on the web too, like Kotaku and Digital Urban. It is certainly of interest in the gaming community, but procedural generation has its place in all sorts of content and simulation arenas.

InstantAction in-browser 3D multiplayer

As many of you know, our IQ internal metaverse is based on the GarageGames Torque engine, so we are always interested to see what is happening with it out there in the world. InstantAction.com teamed up with GarageGames to create this new multiplayer game experience in the browser. The aim is to go way past the Flash games approach and have a more detailed engine running that has grown up from game development.
So, being beta-style Web 2.0 people, Roo, Rob and I dived in to see what it was like as the beta opened up some new games.
They have Marble Blast, Screwdriver and ThinkTanks all available as online or single-player games.


Now the company is focused on all sorts of gaming experiences, a little past casual and puzzle games, aiming to exceed Xbox Live and PlayStation Network.

It will be interesting to see how the dev kits help us in the future with corporate-style intraverses. At the moment we still have a client install for our metaverse, though one that runs on Mac, PC and Linux. Also, as InstantAction is still in beta, we have not explored what happens server side with interactions and persistence of worlds. I have no doubt someone is building something out there in GarageGames land, and I know we would love to see it.
For now, let’s enjoy the fact that the games look pretty good and see where this develops.

Augmented Reality head on a platter

ARTag has come up again recently in a few posts. I was just getting an older webcam working on my laptop, so I used ARTag to see if it was working OK and did this little render test, in this case with a head model of me generated by CyberExtruder from a 2D photo. It loses registration a bit, but as I said, it was an old camera.

I was also intrigued to see, whilst looking for a newer webcam, that Logitech have some avatar webcam software, i.e. it responds to your talking movements but overlays or replaces you with a rendered object. I must try this. It would let me use Seesmic more, as I have not got the hang of small clips to camera yet.

Metaverse Evangelist in a top ten of jobs to have

I was trying not to post too much today and let the discussion run on Roo’s points about the TV ad. However, this was both cool and funny, as well as shameless self-publicity.
http://www.fastcompany.com/articles/2008/01/ten-jobs.html
The metaverse evangelist role is listed with nine others 🙂 : Flavourist, Brewmaster, Sensory Brander, Carbon Coach, Sleep Instructor, Interaction Designer, Roller Coaster Engineer, Animator and Travel Writer.
It was interesting that Metaverse ended up in “enhancing life and the bottom line”, especially given the recent discussion on money and that TV ad.

Reality Augmented, Virtually

Wagner James Au has a brilliant post on New World Notes about a project at Georgia Tech called AR Second Life, which integrates Augmented Reality features into the open source Second Life client.

Last summer, Ian blogged here on Eightbar about an experiment with running the ARTag system alongside Second Life, and augmenting SL with additional 3D content, like this…

The Georgia Tech project goes the other way, augmenting the real world with live content from SL. Like this…

It hints at a future in which the lines between virtual worlds and the real world will be crossed by more than just the use of a keyboard, mouse and monitor combination. The ability to see and interact with other people in virtual worlds is one of the things that has allowed interest in 3D environments to expand far beyond what we ever saw back in the days of (largely single-user) ‘virtual reality’. Being able to go beyond clunky user interfaces and blend those interactions naturally and intuitively with the real world is something I expect we’ll see a lot more of this year.

Data portability – 08 is the year?

Last year I kept using a “catchy” phrase, “Web 2 is Web Do”, which still stands. This year, with all the talk of OpenID, Scoble and Facebook, Plaxo and so on, combined with the conversations in October about virtual world data interchange, it would seem that the phrase will have to be about migration and interoperability across social media of all sorts: being able to have the stuff you own and need, where you are, when you need it.
“08 is about how to migrate” or migr8te or even…..
we may have to become Migr8tebar 🙂
I suspect it will not happen to quite the level anyone expects, though. Technically, moving data around and providing services is not overly tricky; the complexity lies in the social fabric too. If I move to a place – 2D, 3D, 4D etc. – how do I determine what I need or would like with me? How does what I take with me alter those around me? For example, friend links in one arena may not be appropriate in another. Equally, I think we might surprise ourselves by what happens when/if things do become more portable.
This ability to access things we already have will need to spread to the virtual worlds, in particular when used in a business or consumer context. Being able to re-use existing web applications and sites within a virtual world, adding the benefit of avatar interactions, will make a huge difference in a mass market.
That is less about data portability, but it does introduce data access. It does happen in some of the virtual worlds, but we tend not to see embedded web browsing in the massive public worlds. There are some security implications, obviously, but to use the Second Life parlance, “web on a prim” would make a massive difference if it can be solved safely.

If you thought that was good look at this

The last post showed some of the work Johnny Lee is doing with Wiimotes; this one is with projectors: calibrating to a surface regardless of angle and, in some cases, shape (watch it until the end).

It seems that the various URLs, including http://www.johnnylee.net, are currently dead, probably due to the insane amount of traffic generated by being in the top 30 on YouTube!
The other thing to note is that IBM has its Everywhere Display, a commercial way of targeting projection onto surfaces and dealing with scale etc.
The original research site is here, the brochure is here, and a recent article in Fortune, around retail, is here.
I have always wanted to be able to project things; it would be great to have a small device like a PDA or smartphone be able to project onto a table or wall to show someone something.
I know we have an Everywhere Display installation in Zurich; it is there with the other project I have a soft spot for, BlueEyes. This is a camera and sensors that sense your mood: happy, sad etc. Back in 2000 we were trying to get that installed in our UK innovation centres as part of our time and emotion project, using clever sensors to help instrument buildings and capture the atmosphere in order to be able to represent it elsewhere. An ongoing theme, as you will have noticed, with the metaverse work and things like Wimbledon 🙂
If I can find some Everywhere Display footage I will try and post it on YouTube, or see if IBMtv have any.

Using the Wiimote for tracking

For a little while I have been trying to find the time to give the Wiimote a try as a PC interface. Roo had some things running last year. Today, the first day back at work, two people, Gareth and Tim, both pointed me to some YouTube videos by Johnny Chung Lee.
His website has all the code and instructions you need, but I am putting this video on here because I think it is stunning.
1. It’s a great idea; 2. it’s very well presented yet homebrew; 3. it gets the effect across so well.

So now, if we combine ARTag and the other camera-based thing, what on earth will we come up with?