Crossing worlds – Video Avatars

I recently tried a little experiment using the excellent Live! Cam Avatar application I have, the one I used to recreate Daz.
In this I tried taking a live avatar from one place, namely my machine, and injecting it into Second Life as a video feed. It’s a simple but effective demonstration of pushing live content around.
The key to this is that the video avatar was a live puppet, in this case singing a song from an audio stream. The mapping is just a sphere texture, but done with more care I know it would work on a sculpty.
So I can have an avatar from one system injected into another in a simple but effective way.
We can of course take this further with better data interchange, but this shows what we can do now. I know that this video avatar on a bubble can be moved around by external stimuli too, i.e. the mapped prim can be moved because of an event external to Second Life. So technically (as I own the parcel) I could walk around and act as a normal avatar despite being rendered on another system entirely.
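For anyone curious about the plumbing of that last part, here is a minimal sketch of one way to drive a prim from outside the world, using Second Life’s XML-RPC gateway. It assumes an in-world LSL script has opened a remote data channel (via llOpenRemoteDataChannel) and moves the prim when a message arrives; the channel key and message format below are placeholders.
```python
# Minimal sketch: poke a Second Life prim from outside the world via
# SL's XML-RPC gateway. Assumes an in-world script has opened a remote
# data channel and moves the prim on receipt of a "move" message.
import xmlrpc.client

SL_GATEWAY = "http://xmlrpc.secondlife.com/cgi-bin/xmlrpc.cgi"
CHANNEL_KEY = "00000000-0000-0000-0000-000000000000"  # placeholder channel key

def nudge_prim(dx, dy, dz):
    """Send a movement event to the in-world script."""
    server = xmlrpc.client.ServerProxy(SL_GATEWAY)
    # The gateway takes a single struct: Channel, IntValue and StringValue.
    return server.llRemoteData({
        "Channel": CHANNEL_KEY,
        "IntValue": 0,
        "StringValue": "move %.2f %.2f %.2f" % (dx, dy, dz),
    })

nudge_prim(1.0, 0.0, 0.0)  # step the video-avatar bubble 1m along x
```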

Compare and contrast – SXSW

Roo is off at the fantastic SXSW conference in Texas. You can follow his fun and frolics over on his blog. I found it quite amusing that we each posted photos up on Flickr of our newly customized laptops, which I thought worth a comparison.
Mine was the arrival of my epredator Moo stickers with QR codes for epredator.com and eightbar, which I then stuck on my work Lenovo T61 Thinkpad.
Moo Stickers
Roo, on the other hand, had got his new personal MacBook Pro custom laser-etched in Texas with an Autobot logo.
autobot macbook
Just to keep the linked flow of things Roo also twittered he was just off to the Moo party 🙂
Note also how this was not about us hooking up and following one another via one single social network or virtual world. Twitter, Flickr and various blogs all feature in keeping us apprised of what one another is up to. Even though in this case Roo is having the lion’s share of the fun and a little bit of Metaverse Evangelist PR 🙂
*Update: I just noticed this post said comments were turned off. That was unintentional, despite the spam we get; normal service is now resumed. It must have been the mad UK weather ATM.

Spimes, Motes and Data Centres

A few of you may have noticed recent coverage on the blogs about Michael Osias’s 3D datacentres. Ugotrade has, as usual, a great write-up and analysis. You may also have seen the work that our friend David Orban has been doing with OpenSpime. What’s that all about, I hear some of you ask?
Spime is a word that Bruce Sterling created, along with Spime Wranglers (the people who control and gather information from Spimes). A Spime is a small self-contained device that broadcasts all sorts of information about its surroundings. Again, Ugotrade has covered this in some depth in Tish’s most recent post.
For a while here in Hursley Andy Stanford-Clark has been using the term “mote”, as in remote, and we have shown his instrumented house replicated in Second Life. Dave Conway-Jones has also been busy with various forms of sensors and actuators. And in some of the public research going on here into sensor arrays of the future, spime-like devices are being simulated in game environments to help understand what would happen if they were applied to a large area.
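To give a flavour of the plumbing behind that sort of instrumented house or mote array, here is a minimal sketch of a sensor publishing readings over MQTT (the MQ Telemetry Transport you will find on mqtt.org). The broker address, topic and reading are placeholders for illustration, not the actual setup of Andy’s house.
```python
# Minimal sketch: a "mote" publishing a reading a minute over MQTT.
# Broker address, topic and the fake reading are all placeholders.
import time
import random
import paho.mqtt.publish as publish

while True:
    reading = 20.0 + random.uniform(-0.5, 0.5)  # stand-in for a real sensor
    publish.single("house/lounge/temperature", "%.2f" % reading,
                   hostname="broker.example.com")  # hypothetical broker
    time.sleep(60)
```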
So it would appear that Hursley and many members of eightbar are in fact Spime Wranglers already.
This ability to instrument the world fits into the principles of mirror worlds rather than the pure escapist virtual worlds and metaverses. Being able to augment reality, or augment virtual reality requires masses of live information, so a Spime or a Mote array is fairly crucial to the whole thing. In many ways instrumenting a data centre makes the data centre an entire spime in its own right, so you can see these things are linked very closely.
The balance between nanotech smart dust gathering generic information, with organic patterns forming from it, and specific devices monitoring specific pieces of information will make for interesting wrangling decisions too.
My key interest in this at the moment is using this approach to not just monitor and report on the real world, but on multiple virtual environments. A spime can be virtual too.
Finally, as I twittered to David, I can’t get the song out of my head: Spimerman, Spimerman, does whatever a Spime can. So maybe the Spime Wranglers are going to be the information superheroes of the next generation.

Interaction the way we want it as Humans

I noticed a great post by Christian over on the Cisco virtual worlds blog about rising expectations of new interfaces. In nearly every pitch I do, towards the end I remind people that we seem to have tied ourselves to keyboards (laid out, supposedly, to stop typewriters jamming), mice for navigating 2D windows, and a few other metaphors for interaction that we are now in a position to break away from. I wrote some of this last year in a mini predictions post.
One of the things people always seem to say on entering a virtual world (those who are not metarati or gamers) is that it is hard to move around. That may not be the case in reality, but just as people struggled with a mouse, menus and windows 15 years ago, they are doing the same with arrow keys, mouselook and the various other convoluted ways we seek to interact with the computer.
Clearly people’s expectations of display devices will be changed by the multitouch iPhone, or simple gesture interaction as we see with the Wii controller. All that is well-trodden technology in some respects now. It has become commercially robust and is now in all our hands to push things forward.
Another exciting development, and one I am sure we will cover in a lot more depth in the near future, is Emotiv. There is a great BBC article on it here, and you will notice a certain company mentioned alongside it; those of you at GDC may well have seen it. A soon-to-be-available commercial device to detect brain patterns and allow us to interact with machines in yet another way.
Combine all these with the augmented reality, projection and headset approaches and we have a very rich set of tools to work with, to see how we as humans are able to free ourselves from some of the self-imposed shackles we have for interaction. Another article here, on Kurzweil’s keynote at GDC, hints at an even deeper future.
Of course, that’s not to throw away any of the old ways: we still use command lines where needed, we still use books and print where needed. But having more and richer options, better suited to an individual’s neuro-linguistic programming stack, or adding in accessibility for all so that we can all interact however and wherever regardless of particular limitations, can only be a good thing?

The Eightbar brand – another angle

Well, you have seen eightbar represent itself all over the place: a meta-guild across multiple virtual worlds, a very large Second Life group, an Eve corporation, even a Halo 3 clan. We have t-shirts virtual and real, custom Ferraris on Xbox, and we have even been in a book. However, our very own Graham spotted that we were missing what can best be described as a gang sign.
Last night in Winchester, on a night out paid for by Daz (thank you again Daz, who also pays for the hosting of eightbar.co.uk), this sign was thrown for the first time. It’s fairly self-explanatory 🙂
Eightbar street sign

Holly on the BBC, It’s not all Roo and I on virtual worlds you know

At the VW Forum Europe in October, Holly Stewart was thrown in front of the camera for a BBC Click interview. Holly/Ada Alfa has been involved in this since very early on and is very much core to eightbar. Not only that, but she is currently the jointly elected guildmaster for our Virtual Universe Community.
It has taken a while, but the BBC just ran it on BBC Click as part of a virtual worlds piece. The page is here and UK people can watch the video.
A few small things: it’s Holly Stewart, not Stewarts, and the other IBMer is Paul Ledak, not Paul Ladek, but you can’t have everything, can you.
**Update: as Holly just pointed out on Twitter, the IBM SL machinima used in the piece was Rob Smart’s work, so credit where credit is due 🙂
The key angle is also about interoperability, which, as you will notice, has been a bit of a subject lately.
Anyway, well done to our Holly for a great piece to camera. At last I can take Click off series link on Sky+.
Also great performances from Valerie from ESC, the ubiquitous Justin from RRR, and a great advert for Corey’s Multiverse in the middle of it.
Just for the record, I think we (eightbar) have appeared in virtual worlds pieces on Click three times now, one way or another. Now where is the royalty cheque?
Holly on tv

Exploring communication options in that metaverse middleground

As part of a bit of forward thinking I have been doing more experimenting with some levels of visualization, working on the assumption that all-video or all-avatar is not the only way forward.
One way and another, I ended up using Daz’s very amusing photo from Flickr to illustrate the point here.
This is a mix of a static photo (an insane one) blended into an emotive 3D-ish representation, but with a synthesized voice from text.

Daz crazy talked up from his 2d http://www.flickr.com/photos/shawdm/2… photo they will not read my mind
and speech synth lyrics by kanye west
I did one of these the other day on my epredator.com blog; that one was a little more just to see if the technology worked, but it also let me see whether this approach would help expressiveness and enhance a pure video conversation (which I think it does).

Here comes another wave of ideas for metaverses?

Metaverse technology, and approaches to how people can interact in an MMO-type way, are appearing thick and fast. It always opens debates around one world versus many, and it starts technical arguments around platforms. However, that diversity is both a rich source of ideas and approaches and a restrictive and confusing situation in social media circles.
Eric “Spin Martin” Rice comments on some of the problems of this in a recent post: just where and why are people choosing to gather in 3D spaces?
Recently Roo and I have been discussing the evolution of all these fragmented spaces. I don’t think it is enough that we just tell people what is out there at the moment. It is by no means solved, and may not even be solvable, but it is worth considering some things here. Often interoperability reduces to a pure technical discussion when in fact it’s a social and organizational problem too. As virtual world companies and communities attempt to own their customers/members in a traditional sense, they clearly want you to come to them to experience their wares and their way of doing things. This is a wider Web 2.0 conversation around who owns me and my stuff.
We are starting to see some words appear in up-and-coming virtual environments that hint at some different metaphors. “Widgetized” is a forced word, but if you read the press around RocketOn (props to Xantherus for twittering this company the other day) you start to see that we do not have to stick with the real-world analogies that we have today. I am second-guessing what RocketOn is doing, but having a thing you take around with you from world to world appears to be their approach.
So I made a little picture, not so much a roadmap as a suggestion of where we are today and the ? as to where we need to evolve to in our understanding tomorrow. It is fairly self explanatory I hope.
We have gone from not knowing about anything going on around us, to our friends being online and sharing their thoughts/pictures/videos asynchronously, to a set of single worlds where our avatar presence is part of the experience for us and those around us, with a nominal amount of the awareness from the previous steps pulled into that environment too.
The trick is to think about the evolution from that, not to just replace real world metaphors but to extend them.
We already see this adoption as people start thinking about metaverses. They start with the replicas of themselves and of their offices and of their existing assets. They very quickly start to evolve their thinking and challenge why we need to stay on the floor in an office, do powerpoint, market with billboards etc. The non-real world representations start to flow as ideas.
My suggestion here is that the very container of those ideas, the world itself may also need to have this sort of evolutionary thought applied to it.
Single worlds, single avatars and a single live presence may be too restrictive, though they are a comfortable metaphor to help people adopt metaverses and feel some benefit from them.
evolution
This idea I think flows across each of the quadrants we see in the metaverse roadmap, with the distinction being made between the types of virtual worlds and metaverses: Mirror Worlds, Virtual Worlds, Augmented Reality and Lifelogging.
Any thoughts?

Long Live the infocenter!

I’ve always been a bit scared of infocenters – even though, deep down, I know they’re “just HTML”; they never quite seem that way. Javascript and to-the-pixel object placement is just getting too good these days. You could almost mistake it for a java applet or at least some kind of fancy AJAX application.

But no, it’s just a set of good-old framesets, frames, HTML content, hyperlinks and images, bound together with some javascript eggwhite and stirred vigorously for a few minutes to make the infocenters we know and (some, I hear) love.

However, to make it seem like it’s “alive”, there is a Java servlet lurking back at the server, generating parts of the Infocenter dynamically, including rendering the Table of Contents from a behind-the-scenes XML description, and running search and bookmarks and things like that.

What I became curious about, then, were two things:

  • Could we extract a sub-set of an infocenter and just display that, rather than having to wade through everything we were given? For example, I might only be interested in the administration section of a product, or might only need to know about one component of a toolkit of many components. Having a more navigable and less intimidating sub-set would greatly improve productivity.
  • Rather than having to install an Eclipse infocenter run time on a server to host a set of documentation, is there a way to run it on any plain old HTTPd (e.g. Apache)? I accept that search, bookmarks, and other dynamic features won’t work, but the real information – the useful stuff in the right-hand window, which we use to do our jobs with the products we’re trying to understand; and the all-important navigational Table of Contents structure in the left-hand window – would be available to us “anywhere” we can put an HTTPd.

With a ThinkFriday afternoon ahead of me, I thought I’d see what could be done. And the outcome (to save you having to read the rest of this!) is rather pleasing: Lotus Expeditor micro broker infocenter.

This is a subset of the Lotus Expeditor infocenter containing just the microbroker component, being served as static pages from an Apache web server.

First the information content. The challenge I set was to extract the sections of the Lotus Expeditor documentation which relate to the microbroker component. It has always been a bit of a struggle to find these sections hidden amongst all the other information, as it’s in rather non-obvious places, and somewhat spread around. This means creating a new navigation tree for the left-hand pane of the Infocenter. When you click on a link in the navigation tree, that particular topic of information is loaded into the right-hand window.

However, it quickly became apparent that just picking the microbroker references from the existing nav tree would yield an unsatisfactory result: the topics need to be arranged into a sensible structure so that someone looking for information on how to perform a particular task would be guided to the right information topic. Just picking leaf nodes from the Lotus Expeditor navigation tree would leave us with some oddly dangling information topics.

Fortunately Laura Cowen, a colleague in the Hursley User Technologies department for messaging products, does this for a living, and so was able to separate out the microbroker wheat from the rest of the Expeditor documentation and reorganise the topics into a structure that makes sense outside the context of the bigger Expeditor Toolkit, and also, to be honest, into a much more meaningful and sensible shape for microbroker users.

First we needed to recreate the XML which the infocenter runtime server uses to serve up the HTML of the navigation tree. Laura gave me a sample of the XML, which contains the title and URL topic link. From the HTML source of the full Expeditor navigation tree, using a few lines of Perl, I was able to re-create XML stanzas for the entries in the navigation tree. Laura then restructured these into the shape we wanted, throwing out the ones we didn’t want, and adding in extra non-leaf nodes in the tree to achieve the information architecture she wanted to create.
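
Purely to illustrate the kind of transformation involved (the real thing was a few lines of Perl, not this Python), here is a sketch that pulls the anchors out of a saved copy of the navigation-tree HTML and emits Eclipse-style topic stanzas. The file name and the exact anchor markup being matched are assumptions.

```python
# Illustrative sketch (the original used Perl): turn the anchors in a
# saved navigation-tree HTML file into infocenter <topic> stanzas.
# File name and anchor pattern are assumptions.
import re
from xml.sax.saxutils import quoteattr

with open("navtree.html", encoding="utf-8") as f:
    html = f.read()

# Match links of the form <a href="topic/page.html">Some title</a>
for href, title in re.findall(r'<a[^>]+href="([^"]+)"[^>]*>([^<]+)</a>', html):
    print("<topic label=%s href=%s/>" % (quoteattr(title.strip()), quoteattr(href)))
```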

Wave a magic wand, and that XML file becomes a plug-in zip file that can be offered up to an infocenter run time, and the resulting HTML content viewed. After some iterative reviews with potential future users of the microbroker infocenter, we finalised a navigation tree that balanced usability with not having to create new information topics, apart from a few placeholders for non-leaf nodes in the new navigation tree.
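
For the curious, the magic wand is roughly this: an Eclipse documentation plug-in is essentially a zip containing the toc.xml plus a plugin.xml that declares it via the org.eclipse.help.toc extension point. A sketch of packaging one up; the plug-in id and file names are placeholders, and the manifest details (id, version) that a real plug-in carries are glossed over.

```python
# Sketch of packaging a toc.xml as an Eclipse doc plug-in zip.
# Plug-in id and file names are placeholders.
import zipfile

PLUGIN_XML = """<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <extension point="org.eclipse.help.toc">
      <toc file="toc.xml" primary="true"/>
   </extension>
</plugin>
"""

with zipfile.ZipFile("com.example.microbroker.doc.zip", "w") as z:
    z.writestr("com.example.microbroker.doc/plugin.xml", PLUGIN_XML)
    z.write("toc.xml", "com.example.microbroker.doc/toc.xml")
```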

So far so good – we had an infocenter for just the microbroker component of Expeditor, and it was nicely restructured into a useful information architecture.

Now for phase two of the cunning plan: can we host that on a plain-old HTTPd without the infocenter run time behind it? The information topics (the pages that appear in the right-hand window) are static already, and didn’t need to be rehosted – the existing server for the Lotus Expeditor product documentation does a perfectly good job of serving up those HTML pages. It’s the rest of the Infocenter, the multiple nested framesets which make up the Infocenter “app”, and the all-important navigation tree, which are dynamically served, from a set of Java Server Pages (JSPs).

A quick peek at the HTML source revealed that several JSPs were being used with different parameter sets to create different parts of the displayed HTML. These would have to be “flattened” to something that a regular web server could host. A few wgets against the infocenter server produced most of the static HTML we would need, but quite a few URLs needed changing to make them unique when converted to flat filenames. A bit of Perl and a bit of hand editing sorted that lot out.
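
Just to illustrate the flattening step (the real work was wget plus that bit of Perl and hand editing), the rewrites were of this sort: a JSP URL with a query string becomes a unique flat file name. The patterns and file names here are made up.

```python
# Illustration of the flattening: rewrite JSP-plus-query-string URLs
# into unique flat file names. Patterns and file names are made up.
import re

def flatten(match):
    page, query = match.group(1), match.group(2)
    # e.g. "tocView.jsp?toc=/com.example.doc/toc.xml"
    #   -> "tocView_toc_com.example.doc_toc.xml.html"
    safe = re.sub(r"[^A-Za-z0-9.]+", "_", query)
    return '"%s_%s.html"' % (page, safe)

with open("index.html", encoding="utf-8") as f:
    html = f.read()

html = re.sub(r'"(\w+)\.jsp\?([^"]+)"', flatten, html)

with open("index.flat.html", "w", encoding="utf-8") as f:
    f.write(html)
```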

Then it transpired there is a “basic” and an “advanced” mode which the back-end servlet uses to (presumably) support lesser browsers (like wget 😐). Once we realised what was going on, a bit of tweaking of the wget parameters to make it pretend to be Firefox brought the “advanced” content through from the server.
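
For consistency with the other snippets, here is the same trick sketched in Python rather than as a wget parameter: fetch with a Firefox-ish User-Agent header so the servlet serves its “advanced” content. The URL and the user-agent string are placeholders.

```python
# Fetch a page while claiming to be Firefox, so the back-end servlet
# serves its "advanced" mode markup. URL and UA string are placeholders.
import urllib.request

req = urllib.request.Request(
    "http://infocenter.example.com/help/index.jsp",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:10.0) "
                           "Gecko/20100101 Firefox/10.0"},
)
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")
print(html[:200])  # peek at what came back
```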

Then we had to bulk get the images – there are lots of little icons for pages, twisties, and various bits of window dressing for the infocenter window structure. All of this was assembled into a directory structure and made visible to an Apache HTTPd.

Et voila! It worked! Very cool! An infocenter for the microbroker running on a straight HTTPd. Flushed with success, we moved it over to MQTT.org (the friendly fan-zine web site for the MQ Telemetry Transport and related products like microbroker). Tried it there…

Didn’t work. Lots of broken links, empty windows and “error loading page” stuff. It seems the HTTPd on MQTT.org isn’t quite as forgiving as mine: files with a .jsp extension were being served back with the MIME type text/plain rather than text/html, which may not sound like much, but makes all the difference. So a set of symlinks of .jsp files to .html files, and another quick wave of a Perl script over the HTML files, put everything right.
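
Sketched in Python for illustration (the real fix was shell symlinks and a Perl one-liner; the paths here are placeholders), the shape of that fix was:

```python
# Sketch of the MIME-type fix: give every flattened .jsp file a .html
# symlink and point the links at the .html names. Paths are placeholders.
import glob
import os
import re

for jsp in glob.glob("infocenter/*.jsp"):
    twin = jsp[:-len(".jsp")] + ".html"
    if not os.path.exists(twin):
        os.symlink(os.path.basename(jsp), twin)  # foo.html -> foo.jsp

# Rewrite links like href="foo.jsp" to href="foo.html" in every page.
for page in glob.glob("infocenter/*.jsp"):
    with open(page, encoding="utf-8") as f:
        text = f.read()
    text = re.sub(r'"([^"]+)\.jsp"', r'"\1.html"', text)
    with open(page, "w", encoding="utf-8") as f:
        f.write(text)
```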

So with an afternoon’s work, we were able to demonstrate to our considerable satisfaction, that we could excise a sub-set of an Infocenter from a larger book, restructure it into a new shape, and take the resulting Infocenter content and flatten it to a set of HTML pages which can be served from a regular HTTP server.

Geek Rockets 2.0

We have just had a note to the emerging tech group here in Hursley inviting us to the 2nd rocket day. (I nearly said annual but it was back in September 2005 that we had the last one).
It reminded me that it was so long ago that I had not even YouTubed the video I cut of the event. Unlike most video I do now, which is small mobile snippets, I had used a DV camera and then spent hours editing it and cutting a soundtrack into it. Daz did a great one too, with stills and a Kanye West track.
Making video like this is a very rewarding experience. So here is the 5 minutes of madness in a field in Hampshire.

Who knows, if I get to go to this, maybe it will be a live webcast on epredator.tv into multiple virtual worlds (batteries and people willing).