At the close of the 2016-2017 academic year, we recognized that in order to grow the Immersive Technology Initiative and impact more faculty and students, we needed a permanent space for our equipment and workshops. Working in collaboration with the A&S Dean’s office, the Library, and Information Services, we finally got our space on Gottwald Science Center’s third floor halfway through the spring 2018 semester. Since then, we’ve hosted weekly open houses for students and faculty and I’ve held 3 in-class workshops (see below). Our community of interested faculty, staff, and students continues to grow and we are really looking forward to new projects and collaborations in the 2018-2019 academic year!
Looking forward to this summer and next academic year, we are actively researching new software and hardware and seeking new collaborations in an effort to best support the University of Richmond student experience. If you are interested in using VR/AR technologies in your classroom and research program, please reach out to me at email@example.com.
PSYCH 333: Cognitive Science (Dr. Beth Crawford)
FYS 101: The Search for the Self (Prof. Marcia Whitehead)
FYS 101: The Neuroscience of Photography (Dr. Andrew Bell)
FYS 100: Games, Game Theory, and Leadership Studies (Dr. Kristin Bezio; community activity)
Independent Student Research
Erin Bonilla, Exploring Science Education through Google Daydream (Advisor: Scott Bray)
Gottwald Science Reading Room (HTC Vive Pro)
Gottwald Liaison Office (HTC Vive, as needed)
Mobile Immersive Set (HTC Vive, can be set up anywhere in under an hour)
Weekly Wednesday Open House in Gottwald Science Reading Room
Another semester is in the books here at the University of Richmond. I’m looking forward to big developments next semester for the Immersive Technologies community. The Gottwald Reading Room will open with new VR and AR capabilities, giving students, faculty, and staff access to on-demand VR technologies for research, study, and scholarship.
Prior to the creation of Photoshop in 1988, creating digital graphics and images was extremely challenging. Only a few engineers had the skills to create even the most rudimentary images. I believe 2017 will be the turning point for VR/AR content creation. The learning curve and development environment have started to look more and more manageable, with developments from Unity, Google, and Apple in particular.
Google is putting together an impressive suite of development tools for aspiring VR artists and developers. Google Blocks is a great way to create 3D models within VR, Google Tilt Brush is a great way to create environments within VR, and Google Poly is proving to be a compelling site to store and share one’s creations. I have some reservations about Google controlling these tools, but the developments are certainly exciting.
The Future of Memory
If you’ve met me, you probably know how obsessed I am with the impact technology, particularly imagery, has on memory and recognition. In a general sense, medicine has gotten remarkably better at diagnosing and treating cardiovascular disease and cancer, so much so that life expectancy in the US is now approaching 80. But while doctors have become more capable of extending our lives, they have made far less progress in reducing dementia and the toll time takes on our brain function.
Can technology prolong our ability to transport ourselves into the past and relive our lives? Research and time will tell, but the developments out of 8i are exciting and point toward a VR/AR use case that could drive massive consumer adoption: holographic memories of our loved ones.
“Everybody wants holograms of their kids,” Nicole St. Jean, 8i’s vice president of content and a former Twitter executive, told me. St. Jean held up her iPhone and showed me an Instagram video of her son Lowell as an example. Only, it wasn’t just one Lowell in the clip: there was a one-year-old Lowell, juxtaposed with an almost two-year-old Lowell. One of these toddlers was a hologram.
We recently let an Education student borrow our Google Daydream to review. Here are her thoughts:
The Google Daydream itself was a really neat piece of equipment with some glaring disadvantages. Besides the cost, the Daydream is very heavy, and once you add your phone it becomes even heavier. The strap was difficult to adjust and never got quite small enough for my head. The device itself fit my head very well, but it did not fit my husband when he tried it out; it really smashed his nose and hit his forehead in a weird spot. One highlight of the Daydream is that no outside light can enter the device; I found light leakage to be a drawback of the Cardboard. And the remote is unbelievably useful. You can even see it virtually, which is a big plus when you need to orient yourself.
The app for the Daydream was clunky and definitely has some kinks to work out. The tutorial is very helpful, but downloading through the VR app store rarely works, and there is a lot of taking your device out and putting it back in. It’s obvious that this technology is still in its infancy.
This technology would still be fun to implement in my future classrooms, but I have concerns about accountability: there’s no way for me to tell whether students are doing what they should be doing.
VR and Pain Management
Using VR as a therapy is gaining popularity, and as the technology matures I imagine the use cases will only increase. Jeremy Bailenson is doing some great work using VR in pain management. Learn more here.
Chemistry and VR
One of the educational advantages of virtual reality content is that you are not bound by the laws of physics. You can visualize microscopic mechanisms and slow down millisecond reactions. For this reason, I believe VR has the potential to change the way we teach the sciences, particularly chemistry, biology, and physics. The first step for chemistry and biology is to get existing 3D models into a virtual environment. The ChimeraX team at UCSF has successfully done this. Exciting times!
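That first step, getting a model’s atoms out of a standard file and into a 3D scene, is largely a parsing problem. As a rough illustration (a minimal sketch of the general idea, not the ChimeraX team’s actual approach), here is how one could pull atomic coordinates out of the fixed-column ATOM records of a PDB file, the standard format for molecular structures:

```python
# Minimal sketch: extract atomic coordinates from PDB-format ATOM/HETATM
# records so they could be handed to a 3D/VR engine as point positions.
# Assumes the standard fixed-column PDB layout: x in columns 31-38,
# y in 39-46, z in 47-54, element symbol in 77-78.

def parse_atoms(pdb_text):
    """Return a list of (element, (x, y, z)) tuples from PDB ATOM/HETATM lines."""
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            # Prefer the element column; fall back to the atom-name column.
            element = line[76:78].strip() or line[12:16].strip()
            atoms.append((element, (x, y, z)))
    return atoms

# One well-formed ATOM record as a toy input.
sample = "ATOM      1  N   MET A   1      38.735  13.551   8.421  1.00 29.00           N"
print(parse_atoms(sample))
```

Each (element, position) pair could then be instantiated as a sphere in a 3D engine such as Unity.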
I apologize for the hiatus in our newsletter. I was on parental leave welcoming my son, Jack, into this world. It’s good to be back!
ARKit now available on iOS 11!
If you follow this newsletter closely, you know I’m very excited about the lo-fi augmented reality experiences enabled by Apple’s ARKit and Google’s ARCore. Both are available to the public and the initial apps do not disappoint. Check out Atlas (an anatomy app) and Sky Guide (star atlas) for some of the best educational apps so far.
These AR experiences work great with the CTLT’s iPads. If you are faculty and want to use these experiences in your classroom, connect with me!
Virtual Reality Casting Call?
Have you ever wondered what it’s like to act on stage?
Acting takes time, courage and a lifelong dedication that very few possess, and even less can do well… The creators behind Theatre VR say the program will allow you to experience what it is like to be an actor. Giving you the choice of which play and role to engage in, you’re fed your lines and given the opportunity to interact with either AI-driven or real players. (roadtoVR.com)
In theory, this sounds thrilling but I wonder how well the development comes together. I’ll be sure to download their demo if anyone on campus wants to try it out.
The educational power of immersive technologies comes, in part, from the sense of presence and scale that is simply not possible with traditional media. Timelooper is a mobile VR application that takes viewers to famous places during important moments in history (e.g., Trafalgar Square during the 1940 Blitz). For me, the app is a bit cartoonish, but it is an interesting start to what I expect will become much more exciting and immersive experiences, particularly once AR merges with this: imagine standing in Trafalgar Square and experiencing the square at different moments in history. This type of AR will likely require an ‘AR cloud,’ a topic I’ll talk more about next month.
Immersive technologies, in general, have been in “beta” for the last few years. Don’t get me wrong, there are some very cool technologies out there that demonstrate endless potential and opportunity but most, if not all, lack practical application for most consumers. This is rapidly changing.
Removing barriers for developing immersive content is a big reason behind this change. The best analogy I can come up with to help describe what I mean is from my experience making brownies. I have no idea how to make brownies from scratch, even with a recipe it would likely take me forever and the final product probably wouldn’t taste that great. That said, give me a box of brownie mix, and I’ll make some of the best brownies you’ve ever had in less than 5 minutes.
This past month, ARKit (an API) was announced at WWDC (Apple’s developer conference). ARKit is basically brownie mix for developers: it lets them create AR (augmented reality) content far more easily and quickly. Within 24 hours of its release, developers were posting demonstrations of what they’d already come up with (see below). With ARKit, Apple has made it possible for most developers to create Pokémon- or Snapchat-filter-style experiences with ease. In addition to lowering the barrier to developing new AR experiences, Apple’s existing hardware base (hundreds of millions of iOS devices) increases the likelihood that we start seeing a rise in practical consumer AR applications soon.
The lead Apple has taken over other tech giants like Google and Microsoft with ARKit can’t be overstated. Look at these demos and compare them to anything Microsoft or Google has demoed over the last two years, and remember: these developers have had access to the software for only two to three weeks, and the experiences are made with off-the-shelf, single-camera iOS devices…
developer brownie mix…
SpaceX Rocket Landing in Your Backyard
Watch a sneak peek of our Goldilocks storybook that comes to life on any surface. We can’t wait to release this app update later this year. pic.twitter.com/BNEOUbuGhV
Based on what I’ve seen from these demos and read about the API, I’m convinced the software will be there for rich AR experiences. The looming question is whether wearable hardware (i.e., glasses) will become a consumer product, enabling new interaction models and driving down production costs, or whether we’ll only see it in limited, expensive use cases (computer labs, etc.). Not to throw cold water on all this hype: I think there is huge potential for AR applications, but I really don’t see a future where everyone wears expensive, battery-powered glasses all day…
“VR’s power is not in simulating reality but giving new ability to reason, communicate, and reflect” eleVR
I talk a lot about the technologies behind the emerging immersive tech field, but equally important are the creatives and thinkers who develop new paradigms around the technology. For a technology to become part of life, or of an industry like education, hardware and software need to be built, but people also need to develop uses for the tools. An interesting group of thinkers and technologists I’ve been following for the last year is eleVR (el-uh-V-R).
eleVR “studies and experiments with immersive media, particularly virtual and augmented reality, as a tool to reveal new ways to understand our world and express ourselves within it.”
They recently released a one-year research review video of their work, and I was blown away by the insights they’ve gained and moved by the perspective they bring to immersive technologies.
We don’t think VR’s power is in simulating reality. We’re interested in using it to create wholly new kinds of experiences that give us new abilities of reason, of communication, of self-expression and self-reflection, that last through the rest of our lives. What the headset shows us isn’t reality but the experience is real and it changes how we feel and how we think.
I enjoyed visiting with many of you at our Oculus Rift and Vive demo last week. We were fortunate to be joined by colleagues from area universities Randolph-Macon and VCU, who are grappling with many of the same questions we are: balancing hype and potential against practical, meaningful applications. Lots of fun, idea sharing, and stimulating conversation was had. A major thank-you to Joedy Felts from Communications for letting us use his Oculus Rift! If you couldn’t make it last week, we’ll have more demos later this summer.
With immersive computing around us and woven into our environment, information will be richer, more relevant, and more helpful to us.
Ever since I started this community of practice and newsletter, I’ve struggled with how to frame the conversation and what to name the group. I wanted it to be focused yet wide-ranging in scope. After reading a particularly insightful article from Google’s VR/AR lead, Clay Bavor, I’ve settled on a name: Immersive Technologies at UR. Clay makes a strong case that VR and AR are not competing technologies but rather occupy two distinct points on an ‘immersive technology’ spectrum.
He also does a good job of summarizing where these immersive technologies stand today and provides insight into the ways they could change computing in the future.
As value goes up and costs come down, immersive computing will make sense for more and more people. And this isn’t a question of if — it’s a question of when.
More cool things coming from Google I/O, Google’s annual developer conference.
Say your team wants to workshop an idea. Today you grab a dry-erase marker, find a room with a whiteboard, and start working. Would it be possible to replicate this process in virtual reality? If so, what benefits and problems would VR introduce? This article discusses the challenges and insights gleaned from developing such an app. I found the development process they describe fascinating, and I can’t wait to see what final product they end up with.
Some ideas explored:
How would it be if it was effortless to move whiteboards, Post-Its, and work sessions from the physical space into VR?
Do the physical constraints of these objects affect the way we think while ideating?
There’s something that gets lost when trying to have meetings on digital platforms like Slack or Hangouts / Skype; is it the physical presence, our facial expressions, or the limited tools that make these platforms so inefficient and awkward when it comes to ideation?
What if all of these physical nuances could be moved to a digital space and still remain tangible for the user?
Virtual and Augmented Reality: Stepping into the New Frontier of Learning Webinar – Presented by Emory Craig and Maya Georgieva: May 1 1:00-2:30 pm, Boatwright 322. We’d love for you to join us to hear from Emory and Maya then discuss how their vision for the future fits into the University’s new strategic plan. More information.
Oculus Rift Technology Demo – in collaboration with Joedy Felts of Communications. May 11th, 9:00 AM – 4:00 PM, Boatwright 322. As many of you know, we’ve invested in the HTC Vive for our initial explorations into immersive VR technology. This will be an opportunity for the VR community to try the new Oculus Rift and its Touch controllers.
Mobile technology analyst Benedict Evans (of a16z) analyzes the current state of augmented reality technology and speculates on how recent developments compare to mobile phone technology in the 2000s. I always find Benedict’s analysis thoughtful and nuanced, and this article is no different.
I believe the multitouch -> iPhone, AR -> ?!? analogy is spot on. Demos of AR technology are getting cooler and cooler (Hololens, Magic Leap, etc) but we are still waiting for a breakthrough product that truly changes the way people think/interact.
I also agree with his assertion that ‘real’ augmented reality will arrive when a device can see and interpret the world around us. This is something I am constantly reminded of when I hear people talk about Pokémon Go. A “dumb” heads-up display (HUD) wouldn’t be compelling enough to be a breakthrough consumer product: a cool fad, perhaps, but not a lasting computing revolution.
Evans writes that an AR device with “an ambient, intangible, AI-led UI would change everything.” I agree, and would add that education in particular will be revolutionized by these advances.
A VR/AR Sandbox
Stéphan Faudeux’s VR talk at the French Film Festival last month was a terrific survey of VR’s past, present, and future. Among many insights, he mentioned that some French movie theaters are installing VR arcades. It turns out the concept of a VR arcade isn’t new: IMAX opened its first US VR arcade this year, and similar projects are popping up all over the country. Norm Laviolette, founder of Asylum Gaming and eSports in New England, says:
“Ultimately, we are creating an experience for people, and really there are few things out there that can elicit such an amazing physical, emotional, psychological reaction like VR,” he said. “We plan to have a dedicated wing just for VR, and keep it flexible to evolve as VR evolves and becomes more and more sophisticated.”
The more I talk with people on campus about implementing VR technologies, the more I believe that, in addition to faculty-driven academic and research developments, we should also be student focused. What types of experiences will our students expect in three to five years when they are on campus? We should be giving students access to these new technologies, and I think a VR sandbox/arcade concept like Mk2 VR might work at an institution our size.
A very special event is happening on campus this month. The French Film Festival will be featuring a lecture by Stéphan Faudeux, titled: Virtual Reality and Cinema: Complementary or Competitors? The talk happens March 28th, 10 am to 11:30 am and will “cover the progress of virtual reality in various areas, with a pragmatic, practical and fun approach.” For more information check out the French Film Festival site.
Two Events Coming in April: VR Student Research Project Pizza & Pedagogy and Organon VR Anatomy Demo
Alyssa Ross and Dr. Kristin Bezio will discuss using the HTC Vive in the classroom and research lab for our April Pizza & Pedagogy lunch – free pizza! Register here
We will be hosting a demo of the Organon VR Anatomy app for the HTC Vive on April 11th from 1 to 4pm. This app will change the way you think about the human body (I’m not exaggerating). http://www.3dorganon.com/site/
[Tim] Cook (Apple’s CEO) has likened AR’s game-changing potential to that of the smartphone. At some point, he said last year, we will all “have AR experiences every day, almost like eating three meals a day. It will become that much a part of you.”
Mark Gurman at Bloomberg has some new, interesting details on Apple’s AR efforts. However, it’s still uncertain how exactly Apple is going to define “AR.” Some argue Pokémon on a large slab of glass (i.e., an iPhone) is AR; others believe that, to be a new platform, glasses or other new hardware have to be involved. I think Google Glass taught us that not everyone is keen on wearing technology glasses, or on being watched by them. I tend to think Apple’s near-term strategy will focus on extending the AR functionality of the iPhone. From an ed-tech standpoint, that would be great, given the popularity of iPhones on campus.
This article articulates exactly where I think we are in terms of VR education technology. Looking in 360 is great but not revolutionary, collectively interacting in virtual or augmented worlds is. Paul at W&L is doing some great things with VR and has great vision for how success will be defined in the future.
Paul Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education. Early VR programs were about showing students places, say the Louvre or ancient Rome, in low-cost headsets like the Google Cardboard viewer. But these latest iterations go further, creating entire environments—from the subatomic level to the solar system—that students can manipulate. Low and his colleagues at other campuses are trying to shepherd VR educational content from being something “that students look at” to something they can interact with.
… Low is excited about creating virtual environments where students and faculty can inhabit the same space and interact, like in the earthquake study. That experience is easier for learning designers to create, he says, because they don’t have to program all of the sequences of events that could potentially happen during a solo activity. Instead, the instructor handles the interactive components on the spot.
If you are planning on attending Stéphan’s lecture next week, I highly recommend this article from MIT as a primer.
VR will never become the new cinema. Instead, it will be a different thing. But what is that thing? And will audiences trained in passive linear narrative—where scene follows scene like beads on a string, and the string always pulls us forward—appreciate what the thing might be? Or will we only recognize it when the new medium has reached a certain maturity, the way audiences in 1903 sat up at The Great Train Robbery and recognized that, finally, here was a movie?
Visualization can reveal the knowledge hidden in data, but traditional 2-D and 3-D data visualizations are inadequate for large and complex data sets. Our solution is to visualize as many as 10 dimensions in VR/AR all via a Shared Virtual Office, which allows even untrained users to spot patterns in data that can give companies a competitive edge.
We are just at the early stages in understanding what kind of tools will prove useful in VR but this looks very promising. It feels super nerdy to say but being able to walk around a 3D scatter plot sounds exhilarating.
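The underlying idea of packing extra dimensions into a 3D plot is simple to sketch: beyond x, y, and z position, additional columns of a data set get mapped to visual channels such as color and size. A minimal NumPy illustration (the column-to-channel assignments here are my own toy example, not the product’s actual scheme):

```python
import numpy as np

# Toy data set: 200 points, 5 dimensions each.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))

# Dimensions 0-2 become spatial position in the virtual room;
# dimension 3 is normalized to [0, 1] and mapped to a color scale;
# dimension 4 is mapped to marker size.
position = data[:, :3]
d3 = data[:, 3]
color = (d3 - d3.min()) / (d3.max() - d3.min())   # 0..1 for a colormap
size = 5.0 + 20.0 * np.abs(data[:, 4])            # base size plus magnitude

print(position.shape, color.min(), color.max())
```

In a VR scatter plot, each row would become one marker you can walk around; higher-dimensional schemes extend the same mapping to channels like shape, motion, or sound.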
The “strap a small screen on your face to get a big-screen TV” concept has gained some traction recently, but I really wonder if it is a fad like the 3D TVs of the last few years. Same with 360° live video cameras. Both have some utility and “oh, this is cool” moments, but neither seems like virtual reality to me.
I’m more interested in the new recording technology that uses light fields (depth plus photographs) to recreate a place or event for you in VR. Imagine walking the sideline at the Super Bowl, as opposed to these pseudo-“VR” experiences for Super Bowl LI.
The new Lithodomos VR App will turn archaeological sites into completed visualizations of how they once appeared. Apps like it will not only impact tourism, but transform how we teach history and archaeology. The days of hand drawn renderings are coming to an end; students will explore deeply immersive environments depicting the past.
I think stating this app and others like it will transform how we teach history and archaeology is a bit of a stretch right now but the technology is looking more and more promising. Hope this app comes out on the Vive soon!
This experience is segmented into 3 “rounds” based on 3 different themes: (1) VR Experiences in Art, Museums and Cultural Sites, (2) VR and AR Experiences with the Human Self, and (3) VR Experiences in Storytelling, Journalism and Social Science. I highly recommend trying out this experience – if you need a Google Cardboard viewer, come by the CTLT in Boatwright library.
Unity is, and has been, the go-to tool for VR game and environment creation, but until now development has occurred on a basic PC and monitor. It makes sense that building in VR would be a natural evolution of the tool, and this is exciting regardless. I wonder whether it will reduce the learning curve for new developers. As an inexperienced developer, I hope so!
I highly recommend experiencing both “Pearl” and “Allumette,” a short animated VR experience (it’s more than a movie), with the Vive. Both give you a glimpse of how this new technology can be a beautiful storytelling tool.
Designing for virtual reality presents new challenges to a UX designer because good VR prioritizes presence over simplicity and function. How can we design for presence?
The devices that formerly relied on more external cues now rely heavily on how our minds are built and wired. Although user-experience designers have traditionally accounted for cognitive science in how they design mobile and desktop interfaces, the user-experience of virtual reality is different because it does not prioritize function but instead prioritizes displacement.
As a neuroscientist, I don’t know if I 100% agree with the author’s conclusions about the brain on virtual reality but I think they are onto something with how impactful sublime experiences are in VR (i.e. Google Earth).
An immersive experience in a virtual reality classroom, however, would be a fundamentally different proposition. The study of anatomy could go beyond frogs to embrace large mammals and even humans, whose computer-imaged insides could be examined in detail. (emphasis is mine)
I’m really looking forward to the day when we can start writing and reading articles that address actual VR education software/hardware instead of hypotheticals.