Immersive technologies have, in general, been in “beta” for the last few years. Don’t get me wrong: there are some very cool technologies out there that demonstrate endless potential and opportunity, but most, if not all, lack practical applications for most consumers. This is rapidly changing.
Removing barriers to developing immersive content is a big reason behind this change. The best analogy I can come up with to describe what I mean comes from my experience making brownies. I have no idea how to make brownies from scratch; even with a recipe, it would likely take me forever, and the final product probably wouldn’t taste that great. That said, give me a box of brownie mix, and I’ll make some of the best brownies you’ve ever had in less than five minutes.
This past month, ARKit (an API) was announced at WWDC (Apple’s developer conference). ARKit is essentially brownie mix for developers: it lets them create AR (augmented reality) content more easily and quickly. Within 24 hours of its release, developers were posting demonstrations of what they’d already come up with (see below). With ARKit, Apple has made it possible for most developers to create Pokémon- or Snapchat-filter-style experiences with ease. In addition to lowering the barrier to developing new AR experiences, Apple’s existing hardware base (hundreds of millions of iOS devices) increases the likelihood that we start seeing a rise in practical consumer AR applications soon.
The lead Apple has taken (over other tech giants like Google and Microsoft) with ARKit can’t be overstated. Look at these demos and compare them to anything Microsoft or Google has demoed over the last two years, and remember: these developers have had access to the software for only two to three weeks, and the experiences are made with off-the-shelf, single-camera iOS devices…
developer brownie mix…
Minecraft
Space X Rocket Landing in your Backyard
Within’s storybook
Watch a sneak peek of our Goldilocks storybook that comes to life on any surface. We can’t wait to release this app update later this year. pic.twitter.com/BNEOUbuGhV
Based on what I’ve seen from these demos and read about the API, I’m convinced the software will be there for rich AR experiences, but the looming question is whether wearable hardware (i.e., glasses) will become a consumer product (enabling new interaction models and driving down production costs) or whether we’ll only see it in limited, expensive use cases (e.g., computer labs). Not to throw cold water on all this hype (I think there is huge potential for AR applications), but I really don’t see a future where everyone wears expensive, battery-powered glasses all day…
“VR’s power is not in simulating reality but giving new ability to reason, communicate, and reflect” eleVR
I talk a lot about the technologies behind the emerging immersive tech field, but equally important are the creatives and thinkers who develop new paradigms around the technology. For a technology to become part of life, or of an industry like education, hardware and software need to be developed, but people also need to develop uses for the tools. An interesting group of thinkers and technologists I’ve been covering for the last year is eleVR (el-uh-V-R).
eleVR “studies and experiments with immersive media, particularly virtual and augmented reality, as a tool to reveal new ways to understand our world and express ourselves within it.”
They recently released a video reviewing their first year of research, and I was blown away by the insights they’ve gained and moved by the kind of perspective they bring to immersive technologies.
We don’t think VR’s power is in simulating reality. We’re interested in using it to create wholly new kinds of experiences that give us new abilities of reason, of communication, of self-expression and self-reflection, that last through the rest of our lives. What the headset shows us isn’t reality but the experience is real and it changes how we feel and how we think.
Gardner Treneman from Randolph Macon visiting campus in Google Earth VR
I enjoyed visiting with many of you at our Oculus Rift and Vive demo last week. We were fortunate to be joined by colleagues from area universities Randolph Macon and VCU. They are grappling with many of the same things we are: juggling the hype and potential with practical, meaningful applications. Lots of fun, idea sharing, and stimulating conversation were had. A major thank-you to Joedy Felts from Communications for letting us use his Oculus Rift! If you couldn’t make it last week, we’ll have more later this summer.
With immersive computing around us and woven into our environment, information will be richer, more relevant, and more helpful to us.
Ever since I started this community of practice / newsletter, I’ve struggled with how to frame the conversation and how to name the group. I wanted it to be focused but wide-ranging in scope. After reading a particularly insightful article from Google’s VR/AR lead, Clay Bavor, I’ve settled on a name: Immersive Technologies at UR. Clay makes a strong case that VR and AR are not competing technologies but rather occupy two distinct points on an ‘immersive technology’ spectrum.
He also does a good job of summarizing where these immersive technologies are now and provides insight into the ways they could change computing in the future.
As value goes up and costs come down, immersive computing will make sense for more and more people. And this isn’t a question of if — it’s a question of when.
More cool things coming from Google I/O, Google’s annual developer event.
Say your team wants to workshop an idea. Today you grab a dry-erase marker, find a room with a whiteboard, and start working. Would it be possible to replicate this process in virtual reality? If so, what benefits or problems would VR bring? This article talks about the challenges and insights gleaned from developing such an app. I found the development process they describe fascinating, and I can’t wait to see what final product they end up with.
Some ideas explored:
How would it be if it was effortless to move whiteboards, Post-Its, and work sessions from the physical space into VR?
Do the physical constraints of these objects affect the way we think while ideating?
There’s something that gets lost when trying to have meetings on digital platforms like Slack or Hangouts / Skype; is it the physical presence, our facial expressions, or the limited tools that make these platforms so inefficient and awkward when it comes to ideation?
What if all of these physical nuances could be moved to a digital space and still remain tangible for the user?
Virtual and Augmented Reality: Stepping into the New Frontier of Learning Webinar – Presented by Emory Craig and Maya Georgieva: May 1 1:00-2:30 pm, Boatwright 322. We’d love for you to join us to hear from Emory and Maya then discuss how their vision for the future fits into the University’s new strategic plan. More information.
Oculus Rift Technology Demo – in collaboration with Joedy Felts from Communications. May 11th, 9:00 AM – 4:00 PM, Boatwright 322. As many of you know, we’ve invested in the HTC Vive for our initial explorations into immersive VR technology. This will be an opportunity for the VR community to try the new Oculus Rift and its Touch controllers.
Mobile technology analyst Benedict Evans (of a16z) analyzes the current state of augmented reality technology and speculates on how recent developments compare to mobile phone technology in the 2000s. I always find Benedict’s analysis thoughtful and nuanced, and this article is no different.
I believe the multitouch -> iPhone, AR -> ?!? analogy is spot on. Demos of AR technology are getting cooler and cooler (HoloLens, Magic Leap, etc.), but we are still waiting for a breakthrough product that truly changes the way people think and interact.
I also agree with his assertion that ‘real’ augmented reality will arrive when a device can see and interpret the world around us. This is something I am constantly reminded of when I hear people talk about Pokémon Go. A “dumb” heads-up display (HUD) wouldn’t be compelling enough to become a breakthrough consumer product: a cool fad, perhaps, but not a lasting computing revolution.
Evans writes that an AR device with “an ambient, intangible, AI-led UI would change everything.” I agree, and would add that education in particular will be revolutionized by these advances.
A VR/AR Sandbox
Stéphan Faudeux’s VR talk at the French Film Festival last month was a terrific survey of VR’s past, present, and future. Among many insights, he mentioned that some French movie theaters are installing VR arcades. It turns out the concept of a VR arcade isn’t new: IMAX opened its first VR arcade in the States this year, and similar projects are popping up all over the country. Norm Laviolette, founder of Asylum Gaming and eSports in New England, says:
“Ultimately, we are creating an experience for people, and really there are few things out there that can elicit such an amazing physical, emotional, psychological reaction like VR,” he said. “We plan to have a dedicated wing just for VR, and keep it flexible to evolve as VR evolves and becomes more and more sophisticated.”
The more I talk with people on campus about implementing VR technologies, the more I believe that, in addition to faculty-driven academic and research developments, we should also be student-focused. What types of experiences will our students expect in three to five years when they are on campus? We should be giving students access to these new technologies, and I think a VR sandbox/arcade concept like Mk2 VR might work at an institution our size.
A very special event is happening on campus this month. The French Film Festival will be featuring a lecture by Stéphan Faudeux, titled: Virtual Reality and Cinema: Complementary or Competitors? The talk happens March 28th, 10 am to 11:30 am and will “cover the progress of virtual reality in various areas, with a pragmatic, practical and fun approach.” For more information check out the French Film Festival site.
Two Events Coming in April: VR Student Research Project Pizza & Pedagogy and Organon VR Anatomy Demo
Alyssa Ross and Dr. Kristin Bezio will discuss using the HTC Vive in the classroom and research lab for our April Pizza & Pedagogy lunch – free pizza! Register here
We will be hosting a demo of the Organon VR Anatomy app for the HTC Vive on April 11th from 1:00 to 4:00 PM. This app will change the way you think about the human body (I’m not exaggerating). http://www.3dorganon.com/site/
[Tim] Cook (Apple’s CEO) has likened AR’s game-changing potential to that of the smartphone. At some point, he said last year, we will all “have AR experiences every day, almost like eating three meals a day. It will become that much a part of you.”
Mark Gurman at Bloomberg has some new, interesting details on Apple’s AR efforts. However, it’s still uncertain how exactly Apple is going to define “AR.” Some argue Pokémon on a large slab of glass (i.e., an iPhone) is AR; others believe that, to count as a new platform, glasses or other new hardware have to be involved. I think Google Glass taught us that not everyone is keen on wearing technology glasses, or on being seen through them. I tend to think Apple’s strategy in the near term will focus on extending AR functionality in the iPhone. From an ed-tech standpoint, that would be great, given the popularity of iPhones on campus.
Emory Craig argues that new AR apps from Shazam, Blippar, and others have the “potential to pull augmented reality out of gaming and into our everyday lives.”
The apps certainly look like a lot of fun. Emory argues the critical drawback is having to hold up a heavy phone to enjoy the experience; he clearly believes glasses are the future.
This article articulates exactly where I think we are in terms of VR education technology. Looking around in 360° is great but not revolutionary; collectively interacting in virtual or augmented worlds is. Paul at W&L is doing some great things with VR and has a great vision for how success will be defined in the future.
Paul Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education. Early VR programs were about showing students places, say the Louvre or ancient Rome, in low-cost headsets like the Google Cardboard viewer. But these latest iterations go further, creating entire environments—from the subatomic level to the solar system—that students can manipulate. Low and his colleagues at other campuses are trying to shepherd VR educational content from being something “that students look at” to something they can interact with.
… Low is excited about creating virtual environments where students and faculty can inhabit the same space and interact, like in the earthquake study. That experience is easier for learning designers to create, he says, because they don’t have to program all of the sequences of events that could potentially happen during a solo activity. Instead, the instructor handles the interactive components on the spot.
If you are planning on attending Stéphan’s lecture next week, I highly recommend this article from MIT as a primer.
VR will never become the new cinema. Instead, it will be a different thing. But what is that thing? And will audiences trained in passive linear narrative—where scene follows scene like beads on a string, and the string always pulls us forward—appreciate what the thing might be? Or will we only recognize it when the new medium has reached a certain maturity, the way audiences in 1903 sat up at The Great Train Robbery and recognized that, finally, here was a movie?
Visualization can reveal the knowledge hidden in data, but traditional 2-D and 3-D data visualizations are inadequate for large and complex data sets. Our solution is to visualize as many as 10 dimensions in VR/AR all via a Shared Virtual Office, which allows even untrained users to spot patterns in data that can give companies a competitive edge.
We are just at the early stages of understanding what kinds of tools will prove useful in VR, but this looks very promising. It feels super nerdy to say, but being able to walk around a 3D scatter plot sounds exhilarating.
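To make the idea of packing many data dimensions into one scatter plot concrete, here is a minimal Python sketch: three dimensions map to spatial position, and further dimensions map to other visual channels like color, size, and category. The data set, column choices, and mappings are my own assumptions for illustration, not anything from the article or its Shared Virtual Office product.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data set: 100 observations, 6 dimensions each, values in [0, 1).
data = rng.random((100, 6))

# Dimensions 1-3 become spatial position (x, y, z) ...
xyz = data[:, :3]
# ... and the remaining dimensions become other visual channels.
color = data[:, 3]           # e.g., a colormap value per point
size = 20 + 80 * data[:, 4]  # marker size scaled into [20, 100)
group = data[:, 5] > 0.5     # marker shape: split into two categories

# With matplotlib this would render as a 3D scatter plot, e.g.:
#   ax = plt.figure().add_subplot(projection="3d")
#   ax.scatter(*xyz.T, c=color, s=size)
```

In a VR version of the same idea you would walk around the point cloud instead of rotating it with a mouse; the dimension-to-channel mapping stays the same.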
The “get a big-screen TV when you strap a small screen on your face” concept has gained some traction recently, but I really wonder if it is a fad like the 3D TVs of the last few years. Same with 360° live video cameras. Both have some utility and “oh, this is cool” moments, but neither seems like virtual reality to me.
I’m more interested in the new recording technology that uses light fields (depth + photographs) to recreate a place or event for you in VR. Imagine walking on the sideline at the Super Bowl, as opposed to these pseudo-“VR” experiences for Super Bowl LI.
The new Lithodomos VR App will turn archaeological sites into completed visualizations of how they once appeared. Apps like it will not only impact tourism, but transform how we teach history and archaeology. The days of hand drawn renderings are coming to an end; students will explore deeply immersive environments depicting the past.
I think claiming that this app and others like it will transform how we teach history and archaeology is a bit of a stretch right now, but the technology looks more and more promising. I hope this app comes out on the Vive soon!
This experience is segmented into 3 “rounds” based on 3 different themes: (1) VR Experiences in Art, Museums and Cultural Sites, (2) VR and AR Experiences with the Human Self, and (3) VR Experiences in Storytelling, Journalism and Social Science. I highly recommend trying out this experience – if you need a Google Cardboard viewer, come by the CTLT in Boatwright library.
Unity is, and has been, the go-to tool for VR game and environment creation, but until now, development occurred on a basic PC and monitor. It makes sense that building in VR would be a natural evolution of the tool, but it’s exciting regardless. I wonder whether it will reduce the learning curve for new developers. As an inexperienced developer, I hope so!