REFOCUS: Current & Future Search Interface Requirements for German-speaking Users

When looking at current research, there is plenty of existing work inquiring into how users use search engines1 and what future search interfaces could look like2. Yet, an investigation of users’ perceptions of and expectations towards current and future search interfaces is still missing.

Therefore, at this year’s International Conference on WWW/Internet (ICWI ’16) my co-author Martin Gaedke presented our paper “REFOCUS: Current & Future Search Interface Requirements for German-speaking Users”, which we wrote together with Andreas Both. To give you an idea of what our work aims at, I’m going to provide a step-by-step explanation of the research paper’s title.

REFOCUS. An acronym for Requirements for Current & Future Search Interfaces.

Search Interface Requirements. From an exploratory study with both qualitative and quantitative questions we have derived a set comprising 11 requirements for search interfaces. The initial set of requirements was validated by 12 dedicated experts.

Current. The requirements shall be valid for current search interfaces. According to the experts’ reviews, this applies to eight of the requirements.

Future. Also, the set of requirements shall inform the design and development of future search interfaces. According to the experts’ reviews, this applies to ten of the requirements. Supporting the design of future search interfaces is particularly important with the wide variety of Internet-capable novel devices, like cutting-edge video game consoles, in mind.

German-speaking Users. Due to the demographics of our participants, the set of requirements can be considered to be valid for German-speaking Internet users. 87.3% of the participants were German while 96.6% lived in a German-speaking country at the time of the survey.

If this sounds interesting to you, please go check out our research paper at ResearchGate or arXiv. The original publication will be available via the IADIS Digital Library.

1 For instance, http://www.pewinternet.org/2012/03/09/search-engine-use-2012/ (accessed November 8, 2016).
2 For instance, Hearst, M. A. ‘Natural’ Search User Interfaces. In Commun. ACM 54(11), 2011.


Enabling Industry 4.0 with HoloBuilder

At this year’s INFORMATIK conference held by the GI in Cottbus, I had the chance to present a research paper (full text here) about HoloBuilder—officially titled “Enabling Industry 4.0 with holobuilder”1—that I wrote together with my colleagues Kristina Tenhaft, Simon Heinen and Harry Handorf. In our paper, we examine HoloBuilder from a research rather than a marketing perspective by explaining and demonstrating how it acts as an enabler for Industry 4.0.

The paper was presented in the session named “Industry 4.0: Computer Science Forms New Production Systems”, which featured a selection of renowned experts for Industry 4.0—including Prof. Dr.-Ing. Peter Liggesmeyer of TU Kaiserslautern, Prof. Dr. Jürgen Jasperneite of OWL University and Prof. Dr.-Ing. Jörg Wollert of Aachen University of Applied Sciences, among others. The presenters set a particular focus on topics such as Internet of Things, smart factories, wireless communication and OPC UA, with which our presentation fitted in seamlessly—as will be explained in the following. The feedback we received was consistently positive.

Industry 4.0

Industry 4.0 was the original use case of our platform, i.e., the use case for which the first prototypes were created; from those, the current form of HoloBuilder evolved. The term Industry 4.0 was first coined in the context of the High-Tech Strategy 2020 of the German government. Basically, the smart factory, in which people, machines and products are ubiquitously interconnected, is at the center of Industry 4.0.2 Particular focus is moreover on cyber-physical systems, which merge the virtual and the real world.

HoloBuilder & Industry 4.0

From the technical perspective, implementing Industry 4.0 to a high degree means realizing the smart factory including cyber-physical systems. For this, two prime concepts to consider are Augmented Reality and machine-to-machine communication. Augmented Reality (AR) adds virtual objects to the real world in a see-through scenario, e.g., with smart glasses or a tablet PC. On the one hand, AR provides a “fusion of the physical and the virtual world”3 and thus forms a framework for cyber-physical systems while on the other hand it facilitates efficient human–machine interfaces. Yet, AR alone cannot realize a smart factory, because it only caters for displaying objects, which is a form of one-way communication. Hence, AR needs to be complemented with capabilities for machine-to-machine communication (M2M).

Current temperature of a machine displayed in AR.

To enable the implementation of Industry 4.0, HoloBuilder has been designed as a platform that makes it possible for everyone concerned to create and consume arbitrary AR content. This is a particular advantage over other AR solutions, which require specific skills for creating the desired content, among other things. In contrast, HoloBuilder facilitates end-user design, which enables, e.g., engineers and mechanics without programming skills to create AR applications in the context of Industry 4.0. To also cater for M2M, the platform incorporates capabilities for OPC UA, a standardized communication protocol. In this way, information provided by a machine (e.g., its current temperature) can be presented in terms of virtual objects in an AR scenario. Moreover, by manipulating such virtual objects, the user can also give commands to the machine via OPC UA. This makes it possible to, e.g., display a virtual button that can switch a machine on or off.
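To make the bidirectional idea concrete, here is a minimal, purely illustrative sketch of the read/command loop described above. All class and method names are hypothetical stand-ins—this is not HoloBuilder’s actual API, and a real deployment would talk to the machine through an OPC UA client library rather than an in-memory object.

```python
# Hypothetical sketch: bridging a machine's OPC UA-exposed state to an AR overlay.
# MachineNode stands in for an OPC UA node; names are illustrative only.

class MachineNode:
    """Stands in for an OPC UA node exposing a variable and a method."""
    def __init__(self, temperature=72.5, running=True):
        self.temperature = temperature
        self.running = running

    def read(self, attribute):
        # analogous to an OPC UA read of a variable node
        return getattr(self, attribute)

    def call(self, command):
        # analogous to an OPC UA method call issued by the client
        if command == "switch_off":
            self.running = False
        elif command == "switch_on":
            self.running = True

class ARLabel:
    """A virtual object anchored to the machine, showing a live reading."""
    def __init__(self, node, attribute, unit):
        self.node, self.attribute, self.unit = node, attribute, unit

    def render(self):
        return f"{self.attribute}: {self.node.read(self.attribute)} {self.unit}"

class ARButton:
    """A virtual button that issues a command to the machine when tapped."""
    def __init__(self, node, command):
        self.node, self.command = node, command

    def tap(self):
        self.node.call(self.command)

machine = MachineNode()
label = ARLabel(machine, "temperature", "°C")
button = ARButton(machine, "switch_off")

print(label.render())   # e.g. "temperature: 72.5 °C"
button.tap()
print(machine.running)  # False
```

The point of the sketch is the symmetry: the same node abstraction serves both the one-way display (the label) and the two-way control (the button), which is exactly what distinguishes M2M-enabled AR from pure visualization.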

Design Principles

Hermann et al.4 define six design principles for Industry 4.0, upon which we build to show HoloBuilder’s potential for being an enabler of Industry 4.0:

  • Interoperability,
  • Virtualization,
  • Decentralization,
  • Real-Time Capability,
  • Service Orientation and
  • Modularity.

Conclusion

To summarize the above, Augmented Reality and machine-to-machine communication are two core principles to be considered when implementing Industry 4.0 in terms of a smart factory with cyber-physical systems. HoloBuilder, a platform for end-user design of arbitrary AR content, provides support for both. Our platform moreover fulfills all of the six design principles for Industry 4.0, which underpins HoloBuilder’s potential as an enabler.

Our paper has been published in the proceedings of the 2015 INFORMATIK conference and is also available via ResearchGate (including full text).

1 At the time the paper was accepted, we still had the company-internal convention to write HoloBuilder in lowercase letters, which has changed by now.
2 http://www.plattform-i40.de/
3 Kagermann, Henning: Chancen von Industrie 4.0 nutzen [Taking the Chances of Industry 4.0]. In (Bauernhansl, Thomas; ten Hompel, Michael; Vogel-Heuser, Birgit, eds): Industrie 4.0 in Produktion, Automatisierung und Logistik [Industry 4.0 in Production, Automation and Logistics], pp. 603–614. Springer, 2014.
4 Hermann, Mario; Pentek, Tobias; Otto, Boris: Design Principles for Industrie 4.0 Scenarios: A Literature Review. 2015. Working Paper No. 01/2015, Audi Stiftungslehrstuhl Supply Net Order Management, TU Dortmund.

INUIT: The Interface Usability Instrument

As one of the building blocks of my PhD thesis, I have developed a novel instrument for measuring the usability of web interfaces, which is simply called Inuit—the Interface Usability Instrument1. This was necessary because a usability instrument that is suited for the automatic methods for Search Interaction Optimization I have developed in my PhD project must fulfill three particular requirements, which are not met by any existing instrument:

(R1) A minimal number of items.
(R2) Items with the right level of abstraction for meaningful correlations with user interactions recorded on the client.
(R3) Items that can be applied to a web interface in terms of a stand-alone webpage.

The Instrument

Inuit has been designed and developed in a two-step process: First, over 250 rules for good usability from established guidelines and checklists were reviewed to identify a set of common underlying factors (or items) according to R2. From these underlying factors, a “structure” of usability based on ISO 9241-11 was created, which was then shown to 9 dedicated usability experts in the second step. The experts—all of whom were working in the e-commerce industry—reviewed the given “structure” and proposed changes according to their perception of web interface usability. In the end, seven items were identified:

  1. Informativeness
  2. Understandability
  3. Confusion
  4. Distraction
  5. Readability
  6. Information Density
  7. Reachability

These items can be translated to, e.g., the following yes/no questions for use in a questionnaire for determining the usability of a webpage:

  1. Did you find the content you were looking for?
  2. Could you easily understand the provided content?
  3. Were you confused while using the webpage?
  4. Were you distracted by elements of the webpage?
  5. Did typography & layout add to readability?
  6. Was there too much information presented in too little space?
  7. Was your desired content easily and quickly reachable (concerning time & distance)?
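To show how answers to these seven questions could be aggregated into a single score, here is a small sketch. Note the assumption: items 3 (confusion), 4 (distraction) and 6 (information density) are phrased negatively, so a “no” counts positively there—whether Inuit actually aggregates item responses this way is not specified here and the scoring function is purely illustrative.

```python
# Hypothetical scoring sketch for the seven yes/no Inuit items.
# Items 3, 4 and 6 are phrased negatively, so "no" is the positive answer.
# Aggregating to a simple share of positive items is an assumption.

NEGATIVE_ITEMS = {3, 4, 6}

def usability_score(answers):
    """answers: dict mapping item number (1-7) to True ('yes') / False ('no').
    Returns the share of positively answered items in [0, 1]."""
    points = 0
    for item, yes in answers.items():
        positive = not yes if item in NEGATIVE_ITEMS else yes
        if positive:
            points += 1
    return points / len(answers)

# Example: everything good except too-high information density (item 6).
answers = {1: True, 2: True, 3: False, 4: False, 5: True, 6: True, 7: True}
print(usability_score(answers))  # 6 of 7 items positive -> ~0.857
```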

Conclusions

A confirmatory factor analysis based on a user study with 81 participants showed that our instrument reflects real-world perceptions of web interface usability reasonably well. Inuit was first introduced at the workshop “Methodological Approaches to Human–Machine Interaction”, which was held as part of the 2013 Mensch & Computer conference. The corresponding paper is titled Towards Metric-based Usability Evaluation of Online Web Interfaces (full-text here). The final version of the instrument was presented at this year’s International Conference on Design, User Experience and Usability (DUXU), held in Los Angeles. The full research paper is titled Inuit: The Interface Usability Instrument and available via Springer (full-text here).

Future Work

In the future, I intend to transfer Inuit into the context of my current work. That is, I intend to use it for evaluating the web interface of HoloBuilder, which enables users to create 3D content for the web, in contrast to the usual 2D content that is consumed nowadays. It will be particularly interesting to see whether both 2D and 3D web interfaces can be meaningfully evaluated using the same minimal instrument. Furthermore, Inuit will be applied in the context of the research on evidence-based computing that is happening at the VSR research group at Technische Universität Chemnitz.

P.S.: Thanks a lot to Viet Nguyen for the awesome Inuit logo! 🙂

1 Please note the small caps!

S.O.S. Receives Best Paper Honorable Mention Award at CHI ’15

Our paper “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?”—co-authors are Dr. Andreas Both (Unister) and Prof. Martin Gaedke (TU Chemnitz)—has been awarded a Best Paper Honorable Mention Award by ACM SIGCHI, the Special Interest Group on Computer–Human Interaction of the Association for Computing Machinery. According to Wikipedia, ACM SIGCHI is “the world’s leading organization in Human–Computer Interaction (HCI), and essentially created and defined the field.”1 Our paper is to be presented at the 2015 edition of the CHI Conference on Human Factors in Computing Systems2, which is the premier conference in the field of HCI and takes place in Seoul, South Korea.

S.O.S., which is short for “SERP Optimization Suite”, is a tool for determining the usability of a SERP in terms of quantitative scores by analyzing user feedback and interactions. If suboptimal scores are detected for a certain factor of usability (e.g., readability), adjustments are automatically proposed based on a catalog of best practices (e.g., adjust font size, among others). The catalog contains sets of potential causes for suboptimal scores and maps them to sets of corresponding countermeasures. Determining usability scores is based on WaPPU.
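The catalog described above can be pictured as a simple lookup from usability factors to causes and countermeasures. The following sketch is illustrative only: the factor names, the example threshold and the catalog entries are made up for this post and are not S.O.S.’s actual data.

```python
# Hypothetical sketch of a best-practices catalog: suboptimal usability
# factors map to potential causes and corresponding countermeasures.
# Entries and the threshold are illustrative, not S.O.S.'s actual catalog.

CATALOG = {
    "readability": {
        "causes": ["font too small", "low contrast"],
        "countermeasures": ["adjust font size", "increase contrast"],
    },
    "confusion": {
        "causes": ["inconsistent layout of result snippets"],
        "countermeasures": ["align result snippets uniformly"],
    },
}

def propose_adjustments(scores, threshold=0.7):
    """scores: dict of usability factor -> quantitative score in [0, 1].
    Returns proposed countermeasures for every factor below the threshold."""
    proposals = {}
    for factor, score in scores.items():
        if score < threshold and factor in CATALOG:
            proposals[factor] = CATALOG[factor]["countermeasures"]
    return proposals

print(propose_adjustments({"readability": 0.55, "confusion": 0.9}))
# -> {'readability': ['adjust font size', 'increase contrast']}
```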

S.O.S.’s GitHub repository can be found at https://github.com/maxspeicher/sos. It’s free for non-commercial use. Resources and results of the evaluation we describe in our paper are available at https://github.com/maxspeicher/sos-resources.

(CC BY trophy icon by icomoon.io.)

1 http://www.wikiwand.com/en/SIGCHI
2 http://chi2015.acm.org/program/best-of-chi/#honorable-mentions

#papershizzle

Phew! It was a rather busy (that’s why I haven’t been posting in a while), but also very successful start to the new year. A total of three full papers have been accepted at various conferences and journals. So basically, I’ve been revising and resubmitting papers since Christmas.

First, our paper about Inuit has been accepted at the 4th International Conference on Design, User Experience and Usability (DUXU), which will be held as a part of HCI International 2015 in Los Angeles, California. Inuit is a new usability instrument for interfaces that has been specifically designed for our concept of Usability-based Split Testing. An instrument of this kind contains a set of observable items that are used to predict a latent (i.e., non-observable) variable—in our case, usability. For instance, a person’s intelligence is a latent variable that can only be assessed with a number of specific questions (or items). Therefore, IQ tests are instruments.

Second, an article that is an extended version of our ICWE 2014 paper about SMR has been conditionally accepted by the Journal of Web Engineering (JWE). SMR is a streaming-based system that allows for the prediction of search result relevance from user interactions. In the extended version, we further elaborate on specifics of SMR’s data processing algorithm and complexity. Also, we describe the integration of our system into a real-world industry setting.


Finally—and probably most importantly—our paper titled “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?” has been accepted at CHI 2015, which is the premier conference on human–computer interaction and will take place in Seoul! What a great success 🙂 . S.O.S. is the abbreviation for SERP Optimization Suite, which comprises two components: (1) WaPPU, a tool for inferring usability scores from users’ interactions with an interface, which was already presented at ICWE 2014; and (2) a catalog of best practices containing potential causes and corresponding countermeasures for suboptimal usability scores. An extension to WaPPU now automatically detects such suboptimal scores and proposes optimizations based on the catalog.

I am very excited about these accepted papers and definitely looking forward to presenting them to an audience of world-renowned experts. As a side note, a revised and extended version of my post titled What is ›Usability‹? has been published as a technical report in the series “Chemnitzer Informatik-Berichte” (roughly translated: “Computer Science Reports of Chemnitz University of Technology”).

So after this very successful start of the year, let’s see what else 2015 will bring. Stay tuned! 🙂

ICYMI: This is a motherfucking website and I wrote a motherfucking conference article about it

(Disclaimer: motherfuckingwebsite.com was not made by me!)

The post in which I review motherfuckingwebsite.com and propose some changes to make it even more perfect (see here) is the most widely read post on my blog by far. To date, it has received 8,715 views, with the front page of my blog in second place at 884 views. Most of the organic traffic I receive from search engines ends up on that very article.

What you probably haven’t noticed yet: Based on the blog post, I also wrote a short paper about my review and the proposed changes, which was accepted at the 2014 International Conference on Web Engineering (ICWE) and presented during its poster session. My conference article has now been published in the Lecture Notes in Computer Science series by Springer and is also available via http://link.springer.com/chapter/10.1007/978-3-319-08245-5_44. I think this is a nice paper to cite in your work, just for the sake of citing it. I mean … who wouldn’t want to cite a paper about motherfuckingwebsite.com? 😉

Speicher, Maximilian. “Paving the Path to Content-Centric and Device-Agnostic Web Design.” In Web Engineering, pp. 532-535. Springer International Publishing, 2014.

P.S.: In case you wonder why I had the more or less stupid idea to actually write a conference article about the motherfucking website: I was slightly inspired by the following conversation: https://twitter.com/michinebeling/status/440527478009122816.

Usability-based Split Testing or How to infer web interface usability from user interactions

The continuous evaluation of an e-commerce company’s web applications is crucial for ensuring customer satisfaction and loyalty. Such evaluations are usually performed as split tests, i.e., the comparison of two slightly different versions of the same webpage with respect to a target metric. Usually, metrics that stakeholders are interested in include completed checkout processes, submitted registration forms or visited landing pages. To give just one example, a dating website could present 50% of their users with a blonde woman on the cover page while the other half see a dark-haired one. It is then possible to choose the “better” front page based on the number of registrations it generated—if you pay attention to the underlying statistics1.
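The caveat about the underlying statistics can be made concrete: before declaring one variant the winner, the difference in registration rates should be checked for significance, e.g., with a two-proportion z-test. The following sketch uses made-up numbers and the standard pooled-variance formulation; it is an illustration of the kind of check meant above, not part of our paper.

```python
# Minimal sketch of the statistics behind a split test: a two-sided
# two-proportion z-test on the registration rates of variants A and B.
# The conversion counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Returns (z, two-sided p-value) for the difference of two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value via the standard normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 12% vs. 9% registration rate, 1000 visitors each
z, p = two_proportion_z(120, 1000, 90, 1000)
print(z, p)
```

With these invented numbers the p-value lands below the usual 0.05 level, so the blonde-vs-dark-haired difference would count as significant; with smaller samples the same rates often would not.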

While metrics of this type very well reflect how much money you make, you can’t make well-founded statements about usability based on such numbers (“you don’t know why you get the measured results”)2. Thus, in the long-term a way better solution is to provide your customers with a site they love to use instead of confusing them in such a way that they accidentally buy your products, isn’t it? This calls for the introduction of usability as a target metric in split tests.

We have developed WaPPU, the prototype of a usability-based split testing service. The underlying principle is to track interactions (mouse, scrolling etc.) in both versions of the tested interface. One version additionally asks for an explicit rating of its usability by using a previously developed questionnaire3. WaPPU then takes all of these data and automatically trains models (based on existing machine learning techniques4) that are instantly used to predict the usability of the other interface from user interactions alone. This makes it possible to compare the interfaces based on their usability as perceived by users, e.g., “interface A has a usability of 85%, interface B of only 57%”.
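The core train-on-A, predict-on-B loop can be sketched as follows. Everything here is a simplifying stand-in: the two features (dwell time, scrolled distance), the binary good/bad labels and the nearest-centroid classifier replace the actual Weka-based models our paper uses, purely to make the principle tangible.

```python
# Toy sketch of the WaPPU principle: interactions from version A (labeled
# via the questionnaire) train a model that then predicts a usability score
# for version B from interactions alone. Features and the nearest-centroid
# classifier are illustrative stand-ins for the actual Weka-based models.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(labeled):
    """labeled: list of (features, is_good_usability) from version A."""
    good = [f for f, y in labeled if y]
    bad = [f for f, y in labeled if not y]
    return centroid(good), centroid(bad)

def predict_usability(model, sessions_b):
    """Share of version-B sessions whose interactions look 'good'."""
    good_c, bad_c = model
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hits = sum(1 for f in sessions_b if sq_dist(f, good_c) < sq_dist(f, bad_c))
    return hits / len(sessions_b)

# features per session: (page dwell time in s, scrolled distance in px)
version_a = [((12, 300), True), ((15, 250), True),
             ((60, 2400), False), ((55, 2000), False)]
version_b = [(14, 280), (50, 2100), (13, 310)]

model = train(version_a)
print(f"usability of B: {predict_usability(model, version_b):.0%}")  # -> usability of B: 67%
```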

The feasibility of our approach has been evaluated in a split test involving a real-world search engine results page. We were able to train the above mentioned models, from which we also derived general heuristics for search results pages, such as “better readability is indicated by a lower page dwell time” or “less confusion is indicated by less scrolling”.

We have described our novel approach and the corresponding evaluation in a full research paper5 and an accompanying demo paper6. Both will be presented at the 2014 International Conference on Web Engineering (ICWE). The conference proceedings will be published by Springer and the final versions of our papers will be available at link.springer.com.

1 http://www.sitepoint.com/winning-ab-test-results-misleading/
2 http://www.nngroup.com/articles/putting-ab-testing-in-its-place/
3 Maximilian Speicher, Andreas Both and Martin Gaedke (2013). “Towards Metric-based Usability Evaluation of Online Web Interfaces”. In Mensch & Computer Workshopband.
4 http://www.cs.waikato.ac.nz/ml/weka/
5 Maximilian Speicher, Andreas Both and Martin Gaedke (2014). “Ensuring Web Interface Quality through Usability-based Split Testing”. In Proc. ICWE.
6 Maximilian Speicher, Andreas Both and Martin Gaedke (2014). “WaPPU: Usability-based A/B Testing”. In Proc. ICWE (Demos).

StreamMyRelevance! Predicting Search Result Relevance from Streams of Interactions

Guessing the relevance of delivered search results is one of the biggest issues for today’s search engines. The particular problem is that it’s difficult to obtain explicit statements from users about whether they found what they were searching for. Clicks are commonly used to guess relevance (using so-called “click models”), but they are far from being a perfect indicator. In particular, a user might click a search result, but then return to the results page because the visited webpage was useless. Also, it’s possible that no clicks happen at all if the desired piece of information is already shown on the results page (e.g., in terms of an info box).

To tackle the above shortcomings, we have investigated the suitability of implicit feedback in terms of mouse cursor interactions for predicting the relevance of search results. For this, we developed StreamMyRelevance!—a system that receives streams of interactions and relevance judgments and trains statistical models from these in near real-time. The models can then be used to infer relevance from interactions in the future. The relevance judgments we use to train our models can either be implicit (e.g., a completed booking process in the case of hotel search) or explicit (e.g., statements by paid quality raters/crowdworkers).
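What “streaming” buys you can be illustrated with a tiny sketch: per-result statistics are updated incrementally as events arrive, so a relevance estimate is available at any moment without reprocessing the whole interaction log. The single feature (cursor hover time), the 1.5-second threshold and the smoothed counter are assumptions for illustration, not SMR’s actual model or data processing algorithm.

```python
# Illustrative sketch of the streaming idea behind StreamMyRelevance!:
# per-result counters are updated event by event, so the current relevance
# estimate is always available in near real-time. Feature and threshold
# are invented for this sketch.
from collections import defaultdict

class StreamingRelevance:
    def __init__(self):
        # per result: [relevant observations, total observations],
        # initialized with a Laplace-smoothed prior of 1 out of 2
        self.counts = defaultdict(lambda: [1, 2])

    def observe(self, result_id, hover_ms, judged_relevant=None):
        """Consume one event from the stream; explicit judgments are optional."""
        if judged_relevant is None:
            # implicit signal: a long cursor hover is treated as weak relevance
            judged_relevant = hover_ms > 1500
        stats = self.counts[result_id]
        stats[0] += int(judged_relevant)
        stats[1] += 1

    def relevance(self, result_id):
        relevant, total = self.counts[result_id]
        return relevant / total

model = StreamingRelevance()
for hover in (2000, 1800, 300, 2500):   # four streamed interaction events
    model.observe("hotel-42", hover)
print(model.relevance("hotel-42"))      # 3 implicit hits in 4 events, plus the prior
```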

Analysis of a large amount of real-world interaction data from two e-commerce portals showed that StreamMyRelevance! is able to train good models that tend to perform better than a state-of-the-art click model solution1 that is successfully used in industry. Our results particularly underpin the benefit of using interaction data other than clicks for guessing the relevance of search results.

We have summarized the design and evaluation of our system in a full research paper2 that will be presented at the 2014 International Conference on Web Engineering (ICWE). The conference proceedings will be published by Springer and the final version of our paper will be available at link.springer.com. Special thanks go to Sebastian Nuck, who helped with development and evaluation of StreamMyRelevance! in the context of his Master’s Thesis at Leipzig University of Applied Sciences.

1 Chao Liu, Fan Guo, and Christos Faloutsos (2009). “BBM: Bayesian Browsing Model from Petabyte-Scale Data”. In Proc. KDD.
2 Maximilian Speicher, Sebastian Nuck, Andreas Both and Martin Gaedke (2014). “StreamMyRelevance! Prediction of Result Relevance from Real-Time Interactions and its Application to Hotel Search”. In Proc. ICWE.