CHI 2019: What is Mixed Reality?

Five days ago, on a train traveling home for Christmas, I was thinking about my personal highlights of 2019. While a lot of good things happened in the past 12 months (and I’m not going to talk about private matters here), from a professional point of view, there’s a clear winner: Giving a talk about mixed reality at the ACM Conference on Human Factors in Computing Systems (a.k.a. CHI) in Glasgow.

Myself giving a very professional talk.

The talk was based on research I conducted together with friends from the University of Michigan (where I was a post-doc from 2017‒18), Michael Nebeling and Brian Hall. We had noticed that a lot of people we talked to had differing and partly competing understandings of what mixed reality (or MR) is. For instance, some relied on the original definition by Milgram and Kishino from 1994, which defines MR as a continuum (see below), while others adhered to a newer notion pushed by Microsoft, which also applies to experiences that are clearly VR.

Milgram et al.’s continuum: one of six notions of mixed reality.

Hence, we concluded that it would be worthwhile to discover and investigate all the different notions of mixed reality that are out there, even though it might seem the question What is Mixed Reality? should have a relatively simple answer. And we were right: the situation wasn’t as simple as you’d think.

What did we find?

As we hypothesized, there is indeed not a single, “best” definition of mixed reality. Instead, we found six distinct and widely used working definitions:

  1. MR according to Milgram et al.’s continuum (see above)
  2. MR as a synonym for AR
  3. MR as a type of collaboration (interaction between AR and VR users that are potentially physically separated)
  4. MR as a combination of AR & VR (a system combining distinct AR and VR parts)
  5. MR as an alignment of environments (e.g., synchronization between a physical and virtual environment)
  6. MR as a “stronger” version of AR (e.g., HoloLens)

These can be classified based on a conceptual framework (some would call it a taxonomy) with seven dimensions (a small code sketch follows the list):

  1. number of environments
  2. number of users
  3. level of immersion (e.g., not immersive ‒ partly immersive ‒ fully immersive)
  4. level of virtuality (e.g., not virtual ‒ partly virtual ‒ fully virtual)
  5. degree of interaction (e.g., implicit ‒ explicit)
  6. input (e.g., motion, location)
  7. output (e.g., visual, audio)
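
To make the framework concrete, here is a minimal sketch of how a classification along these seven dimensions could be encoded in TypeScript. The type and field names are my own shorthand for this blog post, not terminology from our paper:

```typescript
// My own shorthand encoding of the seven dimensions; dimensions 3‒5 are
// expressed as ranges/sets because a system can span several levels.
type Level = "none" | "partial" | "full";

interface MRClassification {
  numEnvironments: number;                  // 1. number of environments
  numUsers: number;                         // 2. number of users
  immersion: [Level, Level];                // 3. level of immersion (range)
  virtuality: [Level, Level];               // 4. level of virtuality (range)
  interaction: ("implicit" | "explicit")[]; // 5. degree of interaction
  input: string[];                          // 6. input (e.g., motion, location)
  output: string[];                         // 7. output (e.g., visual, audio)
}
```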

I have also distilled our findings into an infographic:

A nice little infographic. (Also available as PDF.)

How did we do it?

To discover the six working definitions as well as the seven dimensions of the conceptual framework, we conducted expert interviews that were augmented (clever wordplay, huh?) by an extensive literature review. First, we interviewed a total of ten experts working on augmented and/or virtual reality, from both academia and industry (occupations ranged from professor to R&D executive to CEO of an AR company). These interviews yielded a preliminary set of four working definitions. Subsequently, we reviewed a total of 68 sources, mainly from the CHI, CHI PLAY, UIST, and ISMAR conferences from 2014‒18 (inclusive). These confirmed the four preliminary notions and revealed two more, which we added to the set.

Ultimately, we derived the conceptual framework by identifying the minimum number of dimensions that still allowed us to classify all of the working definitions unambiguously.
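
The paper does not prescribe an algorithm for this step, but as a toy illustration of the underlying idea, a brute-force search could look like this: try ever-larger subsets of dimensions until one assigns a distinct profile to every working definition.

```typescript
// Toy brute-force sketch (my own illustration): find a smallest subset of
// dimensions under which all working definitions are still distinguishable.
function minimalDimensions(
  classifications: Record<string, string>[], // one record per working definition
  dimensions: string[]
): string[] | null {
  for (let size = 1; size <= dimensions.length; size++) {
    for (const subset of subsetsOfSize(dimensions, size)) {
      // Project each classification onto the candidate subset of dimensions.
      const profiles = classifications.map((c) =>
        subset.map((d) => c[d]).join("|")
      );
      // If all profiles are distinct, the subset classifies unambiguously.
      if (new Set(profiles).size === classifications.length) return subset;
    }
  }
  return null; // even all dimensions together cannot separate the definitions
}

// Yields all subsets of `items` of size `k`, starting at index `start`.
function* subsetsOfSize(items: string[], k: number, start = 0): Generator<string[]> {
  if (k === 0) { yield []; return; }
  for (let i = start; i <= items.length - k; i++) {
    for (const rest of subsetsOfSize(items, k - 1, i + 1)) {
      yield [items[i], ...rest];
    }
  }
}
```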

Example: Pokémon GO

To give just one example (from our paper), let’s have a look at how Pokémon GO would fit into the conceptual framework. First of all, the viral game constitutes MR according to notion № 4: a combination of AR and VR in a single system. Along the seven dimensions (see the code sketch after the list):

  • It comprises one environment since everything happens on the same device.
  • It can be played by one user on that device.
  • The level of immersion lies between not immersive and partly immersive.
  • The level of virtuality lies between partly virtual (the game’s AR view) and fully virtual (the game’s map view).
  • Interaction is implicit (the player moves in the real world; all explicit interaction happens via a HUD).
  • It uses the user’s geolocation as input and provides visual and auditory output.
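
Using the type sketched earlier, this classification could be written down as follows (again my own encoding, not code from the paper):

```typescript
// Pokémon GO classified along the seven dimensions described above.
const pokemonGo: MRClassification = {
  numEnvironments: 1,              // everything happens on the same device
  numUsers: 1,                     // played by one user on that device
  immersion: ["none", "partial"],  // between not and partly immersive
  virtuality: ["partial", "full"], // AR view vs. fully virtual map view
  interaction: ["implicit"],       // the player moves in the real world
  input: ["location"],             // the user's geolocation
  output: ["visual", "audio"],
};
```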

Conclusion

Now, why is this important? Mixed reality is a trending topic. Many people are talking about it nowadays, and the number of papers, research artifacts, hardware devices, and apps is steadily increasing. MR has the potential to become omnipresent in our everyday lives. Therefore, it is important to put one’s words into context. With our research, we hope to provide researchers, students, and professionals with a tool that lets them better communicate what they mean when talking about MR, and to reduce misunderstandings in a rapidly evolving field. We are also proud that our paper received an 🏅 Honorable Mention Award, which underscores the importance of the question at hand.

As further reading, I recommend Milgram and Kishino’s original article about the Reality‒Virtuality Continuum, our own paper from CHI 2019 (of course), as well as my article What is augmented reality, anyway?

S.O.S. Receives Best Paper Honorable Mention Award at CHI ’15

Our paper “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?”—co-authors are Dr. Andreas Both (Unister) and Prof. Martin Gaedke (TU Chemnitz)—has been awarded a Best Paper Honorable Mention Award by ACM SIGCHI, the Special Interest Group on Computer–Human Interaction of the Association for Computing Machinery. According to Wikipedia, ACM SIGCHI is “the world’s leading organization in Human–Computer Interaction (HCI), and essentially created and defined the field.”1 Our paper is to be presented at the 2015 edition of the CHI Conference on Human Factors in Computing Systems2, which is the premier conference in the field of HCI and takes place in Seoul, South Korea.

S.O.S., which is short for “SERP Optimization Suite”, is a tool for determining the usability of a SERP in terms of quantitative scores by analyzing user feedback and interactions. If suboptimal scores are detected for a certain factor of usability (e.g., readability), adjustments are automatically proposed based on a catalog of best practices (e.g., adjust font size, among others). The catalog contains sets of potential causes for suboptimal scores and maps them to sets of corresponding countermeasures. Determining usability scores is based on WaPPU.
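
To illustrate the architecture, here is a minimal sketch of what such a catalog and the score check could look like. The entries and names are hypothetical placeholders, not S.O.S.’s actual code (see the GitHub repository below for that):

```typescript
// Hypothetical sketch of a best-practices catalog: potential causes for
// suboptimal scores mapped to corresponding countermeasures.
interface CatalogEntry {
  factor: string;             // usability factor, e.g., readability
  potentialCauses: string[];  // why the score might be suboptimal
  countermeasures: string[];  // proposed adjustments
}

const catalog: CatalogEntry[] = [
  {
    factor: "readability",
    potentialCauses: ["font too small", "low contrast"],
    countermeasures: ["adjust font size", "increase contrast"],
  },
  // ... further entries for other usability factors
];

// Propose countermeasures for every factor whose score falls below a threshold.
function proposeAdjustments(
  scores: Record<string, number>, // e.g., determined via WaPPU
  threshold: number
): string[] {
  return catalog
    .filter((entry) => (scores[entry.factor] ?? Infinity) < threshold)
    .flatMap((entry) => entry.countermeasures);
}
```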

S.O.S.’s GitHub repository can be found at https://github.com/maxspeicher/sos. It’s free for non-commercial use. Resources and results of the evaluation we describe in our paper are available at https://github.com/maxspeicher/sos-resources.

(CC BY trophy icon by icomoon.io.)

1 http://www.wikiwand.com/en/SIGCHI
2 http://chi2015.acm.org/program/best-of-chi/#honorable-mentions

#papershizzle

Phew! It was a rather busy (which is why I haven’t been posting in a while), but also very successful start to the new year. A total of three full papers have been accepted at various conferences and journals. So basically, I’ve been revising and resubmitting papers since Christmas.

First, our paper about Inuit has been accepted at the 4th International Conference on Design, User Experience and Usability (DUXU), which will be held as a part of HCI International 2015 in Los Angeles, California. Inuit is a new usability instrument for interfaces that has been specifically designed for our concept of Usability-based Split Testing. An instrument of this kind contains a set of observable items that are used to predict a latent (i.e., non-observable) variable—in our case, usability. For instance, a person’s intelligence is a latent variable that can only be assessed with a number of specific questions (or items). Therefore, IQ tests are instruments.
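
As a toy illustration of the general principle (the items and the simple equal weighting below are hypothetical, not Inuit’s actual instrument):

```typescript
// Hypothetical item responses on a 1‒5 scale; a real instrument's items are
// derived and validated statistically.
const responses: Record<string, number> = {
  readability: 4,
  understandability: 5,
  informationDensity: 3,
};

// Estimate the latent variable (here: usability) from the observable items.
// Equal weighting is the simplest possible model.
function latentScore(items: Record<string, number>): number {
  const values = Object.values(items);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

console.log(latentScore(responses)); // → 4
```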

Second, an article that is an extended version of our ICWE 2014 paper about SMR has been conditionally accepted by the Journal of Web Engineering (JWE). SMR is a streaming-based system that allows for the prediction of search result relevance from user interactions. In the extended version, we further elaborate on specifics of SMR’s data processing algorithm and complexity. Also, we describe the integration of our system into a real-world industry setting.

Image taken from http://chi2015.acm.org/.

Finally—and probably most importantly—our paper titled “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?” has been accepted at CHI 2015, which is the premier conference on human–computer interaction and will take place in Seoul! What a great success 🙂. S.O.S. is the abbreviation for SERP Optimization Suite, which comprises two components: (1) WaPPU, a tool for inferring usability scores from users’ interactions with an interface, which was already presented at ICWE 2014; and (2) a catalog of best practices that contains potential causes and corresponding countermeasures for suboptimal usability scores. An extension to WaPPU now automatically detects such suboptimal scores and proposes optimizations based on the catalog.

I am very excited about these accepted papers and definitely looking forward to presenting them to an audience of world-renowned experts. As a side note, a revised and extended version of my post titled What is ›Usability‹? has been published as a technical report in the series “Chemnitzer Informatik-Berichte” (roughly translated: “Computer Science Reports of Chemnitz University of Technology”).

So after this very successful start of the year, let’s see what else 2015 will bring. Stay tuned! 🙂