Archive for the ‘Machine Vision’ Category
First, thank you to the awesome people, especially Sean White of Columbia University, who helped make it possible for me to be there.
Right now I'm just going to give you the beginning of my takeaway.
The paper that resonated most with my basic desire to see the big platform problems handled first was “Global Pose Estimation using Multi-Sensor Fusion for Outdoor Augmented Reality” by Gerhard Schall, Daniel Wagner, Gerhard Reitmayr, Elise Taichmann, Manfred Wieser, Dieter Schmalstieg, and Bernhard Hofmann-Wellenhof, all out of TU Graz, Austria, with the exception of Mr. Reitmayr, who is at Oxford. This is the kind of fusion work that I’ve been talking about since my first post, and it was really exciting to see people actually doing it seriously on the hardware side. The two XSens MTi OEM boards headed to the new lab for a non-AR project should have cleared customs by now. I’ll find out if they’re there on Tuesday. :-) I only mention it because it’s more-or-less the same device that was used for the inertial portion of this project, and I can’t wait to build them into something.
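The paper’s actual pipeline fuses GPS, inertial, and vision sensors in a far more sophisticated way, but for readers new to the idea, the simplest possible illustration of inertial sensor fusion is a one-axis complementary filter: blend the fast-but-drifting integrated gyro rate with the noisy-but-absolute accelerometer tilt angle. This toy sketch (my own, not anything from the paper) shows the basic shape:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One-axis sensor fusion: trust the integrated gyro short-term,
    the accelerometer-derived angle long-term. alpha sets the blend."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulate holding the sensor steady at 10 degrees: the gyro reports a
# small drift (0.5 deg/s of bias), the accelerometer reads ~10 degrees.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=10.0, dt=0.01)
# The estimate converges near 10 degrees despite starting at 0 and
# despite the gyro bias, which the accelerometer term keeps in check.
```

Real devices like the XSens MTi run full Kalman-style estimators over all three axes plus magnetometer, but the trade-off being balanced is the same one.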
I also loved reading Mark Livingston’s paper on stereoscopy.
Incidentally, all of the papers, and video of all the sessions, should be getting posted soon to ISMARSociety.org. Serious props to the student volunteers who appeared to really keep things running smoothly, and who performed the awesome task of capturing all of the content on video. This, the first year of AR as a popular buzzword, is the time to share with the rest of the world just how much scientific effort is going into making real progress.
I’ve got lots to say about the HMDs, including Nokia Research Center’s cool eye-tracking see-through display sunglasses prototype, but I’m going to save it for tomorrow, or perhaps for another forum. For the moment, just enjoy this photograph of Dr. Feiner stylishly rockin’ the Nokia prototype.
Hell yeah, dude.
Though we were still notably lacking Tish Shute and Rouli, this pic has a pretty stacked roster of AR blogosphere heavy-hitters in it. And speaking of Tish, I think she may be onto something with the AR Wave initiative. The diagram in her most recent post makes a great deal of sense.
And sorry to flake on the daily updates. I did end up demoing some glove stuff, and I was just generally pretty wiped out by the time I got back to my hotel each evening. ISMAR was terribly exciting for me, and I have a ton more to recount.
Italian company Seac02 just released LinceoVR 3.0, which is a marker-based AR visualization package featuring advanced shadow and ambient lighting effects. Cool. I’d love to try it out with some of my girlfriend’s architectural models from Rhino, and I’ll post something about it as soon as we get ourselves a student license and give it a spin.
This short, by Sorin Voicu (of Sapienza University of Rome), may rival Iron Man for best personal HUD in special effects… though there is another Terminator movie coming up. Hmmm… actually, it isn’t so much the HUD. It’s the totality of the vision depicted. The universality of the applicability of the technology! [sic]
Easily one of the most superb speculative depictions of visual AR that I’ve seen. Still not much physical interaction beyond button pressing. I think that a lot of my reaction to it is from the quality of his production. Also, follow the link to the sfx reel accompanying this for a great montage, with successive layers of filtering and compositing being added for each of the shots from the short.
Oh. Facial scanning. It’s a technology, and people will use it. People use technologies. I know this one feels icky to a lot of people. Access to comprehensive databases should probably be secured… but no, it’ll be pretty tough to stop. Even just using something to crawl cached Facebook profile pictures would give you a nice chunk of people these days. How far do we want to take the sharing, or now that we’ve shared with the system, is it out of our hands?
I guess I’m not staying away from this stuff like I said I would.
So assuming it is, in fact, out of our hands, how will it be done? One scenario I can imagine would be local batch facial detection, followed by upload of cropped frames (or scanned point sets, if the rig has the horsepower for it) for pattern analysis in cloud resources. With stereoscopic cameras, one might even be able to generate a decent set of data from just one pair of frames. I don’t know enough about depth-mapping to say. Regardless, with a stereoscopic imaging system it shouldn’t be hard (to conceive of in detail, that is, not to execute; I’m not claiming for a second that I’d know how to code it). I like this scenario because we generate the system… if it’s gonna be a surveillance society, let’s make it our surveillance society. Hey, it would keep people honest. (disclaimer: this society is only designed to work under idealized circumstances and universal distribution of technology. GIGO)
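The local-batch scenario above can be sketched in a few lines. Everything here is hypothetical structure of my own devising: the detector is a stub (on a real rig it would be something like an OpenCV Haar cascade), and “upload” is just a queue of crops, which is the whole point of the scheme: ship only the face regions to the cloud, not the full frames.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class Frame:
    pixels: List[List[int]]  # grayscale image as rows of intensities

def detect_faces(frame: Frame) -> List[Box]:
    """Stub detector: pretend one face was found. A real system would
    run a trained detector (e.g. a Haar cascade) here, on-device."""
    return [(2, 1, 3, 2)]

def crop(frame: Frame, box: Box) -> List[List[int]]:
    """Cut the face region out of the frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame.pixels[y:y + h]]

def batch_for_upload(frames: List[Frame]) -> List[List[List[int]]]:
    """Local batch stage: detect, crop, and queue only the crops
    for cloud-side pattern analysis."""
    queue = []
    for f in frames:
        for box in detect_faces(f):
            queue.append(crop(f, box))
    return queue
```

The design choice worth noting is bandwidth: a queue of small crops is cheap to upload continuously, which is exactly what makes the scenario plausible, and a little unsettling.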
Oh wow… run everyone’s feeds into a Microsoft Photosynth system in the cloud…
Okay. I lied about what my next post would be. I’ll get to that. In the meantime, this is pretty.
Very pretty. Beats the hell out of Processing. OpenFrameworks is a visually-oriented C++ framework for interactive artists and researchers; its target user base is pretty much the same as Processing’s. I use Processing at the moment, and it compiles to Java. Quick to start, but not nearly so quick to run as a natively compiled C++ app. OpenFrameworks has much better support for OpenCV, the de facto open-source image-processing library for motion detection and analysis. I’m using JMyron in Processing, and it crawls.
I’ve been waiting to see somebody’s example code, and for GL in OpenFrameworks, Andrew’s looks like one worth reading. Very nice, and very gracious to make the code available. Another plus of using a C++ framework is Vuzix stereo support in the full Mac SDK, though nobody’s exploited that yet, that I’ve seen or heard of. VR920 tracker support is already there, and is present in a Processing library. Just saying. Drop me a line, BTW, if you find this stuff compelling and have experience with OF and GL.