Even though I’m not working in academia anymore, I still try to find the time to contribute to some quality scientific research now and then. Two such contributions I co-authored have recently been accepted to prestigious international conferences. The first—MRAT—stems from a project I originally started together with Prof. Michael Nebeling in 2017, when I was still a post-doc at the University of Michigan. The second is a collaboration with a friend from Novosibirsk State Technical University as well as my alma mater, Chemnitz University of Technology.
MRAT: The Mixed Reality Analytics Toolkit
MRAT is a general toolkit for instrumenting AR/VR apps built in Unity. It tracks users’ interactions in the app, which can then be visualized in a 2D dashboard as well as in situ in mixed reality, thus supporting the evaluation phase of AR/VR interaction design. Instrumentation and configuration require basic knowledge of Unity, but no programming skills. MRAT was successfully applied to analyze and optimize a crisis informatics simulation for HoloLens.
This paper has been accepted to this year’s ACM Conference on Human Factors in Computing Systems, the premier scientific conference on Human-Computer Interaction and UX, and has received a 🏆 Best Paper Award. MRAT grew into a huge project with, in the end, 14 authors. Thank you all very, very much for your contributions, especially Michael, the first author and main driver behind the project, and the rest of the Michigan Information Interaction Lab.
I Don’t Have That Much Data! Reusing User Behavior Models for Websites from Different Domains
This project revisited one of the main topics of my Ph.D. thesis: using machine-learning models to automatically assess the usability or user experience of a web interface. Building on this as well as Maxim Bakaev’s previous work, we investigated whether it’s possible to apply models that have been trained on a particular type of website (say, a news website) to websites from other domains as well. Based on a study involving 137 participants and more than 3,000 websites from 7 different domains, we found that, to a certain degree, it is possible to do so, which could make the automatic assessment of web interfaces much more efficient.
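To make the idea of cross-domain model reuse concrete, here is a minimal, purely illustrative sketch in Python. It trains a simple ridge-regression model on synthetic "news-site" data and evaluates it on a slightly shifted "shop-site" domain. All feature dimensions, weights, and data are made up for illustration; the actual study used real websites, human ratings, and different models.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_domain(n, w_true, noise=0.3):
    """Synthetic (features, rating) pairs for one website domain.
    Features could stand in for, e.g., visual-complexity metrics."""
    X = rng.normal(size=(n, len(w_true)))
    y = X @ w_true + rng.normal(scale=noise, size=n)
    return X, y

# Source domain ("news") and a target domain ("shop") whose true
# weights are slightly different -- transfer works only "to a degree".
w_news = np.array([0.8, -0.5, 0.3, 0.0, 0.2])
w_shop = np.array([0.7, -0.4, 0.35, 0.1, 0.2])
X_news, y_news = make_domain(200, w_news)
X_shop, y_shop = make_domain(100, w_shop)

# Ridge regression fitted on the news domain only.
lam = 1.0
w = np.linalg.solve(X_news.T @ X_news + lam * np.eye(5), X_news.T @ y_news)

def r2(X, y, w):
    """Coefficient of determination of the model on (X, y)."""
    resid = y - X @ w
    return 1 - resid.var() / y.var()

in_domain = r2(X_news, y_news, w)
cross_domain = r2(X_shop, y_shop, w)
print(f"in-domain R^2:    {in_domain:.2f}")
print(f"cross-domain R^2: {cross_domain:.2f}")
```

Because the two synthetic domains share most of their structure, the model trained on one still explains much of the variance in the other; the more the domains diverge, the more that cross-domain score degrades.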
This paper has been accepted to this year’s International Conference on Web Engineering. Many thanks go out to Maxim, the first author and main driver behind this project, as well as Sebastian Heil and Martin Gaedke.