Five days ago, on a train traveling home for Christmas, I was thinking about my personal highlights of 2019. While a lot of good things happened in the past 12 months (and I’m not going to talk about private matters here), from a professional point of view, there’s a clear winner: Giving a talk about mixed reality at the ACM Conference on Human Factors in Computing Systems (a.k.a. CHI) in Glasgow.
The talk was based on research I conducted together with friends from the University of Michigan (where I was a post-doc from 2017‒18), Michael Nebeling and Brian Hall. We had noticed that a lot of people we talked to had differing and partly competing understandings of what mixed reality (or MR) is. For instance, some relied on the original definition by Milgram and Kishino from 1994, which defines MR as a continuum (see below), while others adhered to a newer notion pushed by Microsoft, which also applies to experiences that are clearly VR.
Hence, we concluded that—even though it might seem the question What is Mixed Reality? should have a relatively simple answer—it would be worthwhile to discover and investigate all the different notions of mixed reality that are out there. And we were right: the situation wasn’t as simple as you’d think.
What did we find?
As we hypothesized, there is indeed not a single, “best” definition of mixed reality. Instead, we found six distinct and widely used working definitions:
MR according to Milgram et al.’s continuum (see above)
MR as a synonym for AR
MR as a type of collaboration (interaction between AR and VR users that are potentially physically separated)
MR as a combination of AR & VR (a system combining distinct AR and VR parts)
MR as an alignment of environments (e.g., synchronization between a physical and virtual environment)
MR as a “stronger” version of AR (e.g., HoloLens)
These can be classified based on a conceptual framework (some would call it a taxonomy) with seven dimensions:
number of environments
number of users
level of immersion (e.g., not immersive ‒ partly immersive ‒ fully immersive)
level of virtuality (e.g., not virtual ‒ partly virtual ‒ fully virtual)
degree of interaction (e.g., implicit ‒ explicit)
input (e.g., motion, location)
output (e.g., visual, audio)
I have also distilled our findings into an infographic.
How did we do it?
To discover the six working definitions as well as the seven dimensions of the conceptual framework, we conducted expert interviews that were augmented (clever wordplay, huh?) by an extensive literature review. First, we interviewed a total of ten experts working on augmented and/or virtual reality, from both academia and industry (occupations ranged from professor to R&D executive to CEO of an AR company). These interviews yielded a preliminary set of four working definitions. Subsequently, we reviewed a total of 68 sources, mainly from the CHI, CHI PLAY, UIST, and ISMAR conferences from 2014‒18 (inclusive). These confirmed the four preliminary notions, and we also discovered two more that were added to the set.
Ultimately, we derived the conceptual framework by identifying the minimum number of dimensions that still allowed us to classify all of the working definitions unambiguously.
Example: Pokémon GO
To give just one example (from our paper), let’s have a look at how Pokémon GO would fit into the conceptual framework. First of all, the viral game constitutes MR according to notion № 4: a combination of AR and VR in a single system.
It comprises one environment since everything happens on the same device.
It can be played by one user on that device.
The level of immersion lies between not immersive and partly immersive.
The level of virtuality lies between partly virtual (the game’s AR view) and fully virtual (the game’s map view).
Interaction is implicit (the player moves in the real world, all explicit interaction happens via a HUD).
It uses the user’s geolocation as input and provides visual and auditory output.
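The classification above can be sketched as a simple data structure (a hypothetical encoding; the dimension names follow our framework, but the concrete value vocabulary is mine, not the paper’s):

```python
# Hypothetical encoding of the seven-dimension framework.
# Dimension names follow the paper; the values are illustrative only.
pokemon_go = {
    "environments": 1,                    # everything happens on one device
    "users": 1,                           # one user per device
    "immersion": ("not", "partly"),       # between not and partly immersive
    "virtuality": ("partly", "fully"),    # AR view vs. fully virtual map view
    "interaction": "implicit",            # the player moves in the real world
    "input": ["motion", "location"],      # geolocation drives the game
    "output": ["visual", "audio"],
}

def summarize(classification):
    """Render a one-line summary of a framework classification."""
    return ", ".join(f"{dim}={val}" for dim, val in classification.items())
```

Classifying further systems the same way makes it easy to compare them dimension by dimension.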
Now, why is this important? Mixed reality is a trending topic. Many people are talking about it nowadays and the number of papers, research artifacts, hardware, and apps is steadily increasing. MR has the potential to become omnipresent in our everyday lives. Therefore, it is important to put one’s words into context. With our research, we hope to provide researchers, students, and professionals with a tool that lets them better communicate what they mean when talking about MR, and to reduce misunderstandings in a rapidly evolving field. We are also proud that our paper received an 🏅 Honorable Mention Award, which stresses the importance of the question at hand.
Abstract: In the first year, a clear majority of the investigated companies secured more than $0.395m of seed or angel funding; in the second year, a clear majority secured more than $4.366m of Series A or venture funding; and in the third year, a clear majority secured more than $11.131m of Series B funding.
At bitstars GmbH / HoloBuilder Inc. I’ve been a part of “start-up grad school” for almost two years now. Most of the time the early years of a start-up are a constant fight for a good valuation and big investments. The first steps towards a real, innovative product, the hunt for customers who (are going to) pay actual money to use it and pitching nice figures to potential investors are at the core of this process. Recently, I’ve repeatedly asked myself what the early rounds of funding of the most successful start-ups looked like and whether they might all have something in common. So I’ve done some number crunching on the topic.
In the next step I consulted Crunchbase and for every company in my list—as far as the data was available—looked up the money raised in the first three major rounds of funding and in which month/year it happened.1 Using the Consumer Price Index (CPI), all numbers have been inflation-adjusted, i.e., they are the equivalent amount of money that would (have to) be raised in October 2016. There was no data available for 5 companies from Group A, which left me with a total of 55 data sets—39 in Group A (71%) and 16 in Group B (29%).
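The inflation adjustment itself boils down to a simple ratio of CPI values (a minimal sketch; the CPI figures in the example are made up, not actual BLS data):

```python
def adjust_for_inflation(amount, cpi_then, cpi_reference):
    """Convert a historical dollar amount into reference-period dollars
    (here: October 2016) by scaling with the ratio of CPI values."""
    return amount * cpi_reference / cpi_then

# Example with hypothetical CPI values:
# $1m raised when CPI was 200.0, expressed in dollars of a period with CPI 240.0.
adjusted = adjust_for_inflation(1_000_000, cpi_then=200.0, cpi_reference=240.0)
```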
In the following I report on my findings for groups A and B as well as both groups combined. They particularly focus on how much money was raised in the different rounds of funding and how many months after the founding of a company it happened.2 Before the analysis, outliers were removed from both data series (the amounts of money raised and months after founding) separately using Tukey’s test for outliers based on groups A and B combined.
1 Multiple investments of the same type (e.g., “Series A”) that happened in a relatively short time span were aggregated considering the month/year of the latest investment as the effective date.
2 The date of the first investment was assumed as the founding date of a company if it was earlier than the founding date given by Crunchbase. In case only the founding year of a company was given, I assumed June of that year as the founding date; or January if the first investment already happened in June or earlier.
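Tukey’s test flags values outside the fences Q1 − k·IQR and Q3 + k·IQR. A minimal sketch of the outlier removal (with the conventional k = 1.5; function names are my own):

```python
import statistics

def tukey_fences(values, k=1.5):
    """Return (lower, upper) Tukey fences: Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def remove_outliers(values, k=1.5):
    """Drop values lying outside Tukey's fences."""
    lo, hi = tukey_fences(values, k)
    return [v for v in values if lo <= v <= hi]
```

Applied separately to the money and time series, as described above, this keeps the bulk of the investments while discarding extreme one-offs.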
1st Round of Funding
The seed or first angel investment (in case there was no dedicated seed round) of a start-up was considered as the first major round of funding. For this, Crunchbase provided data on 16 companies from Group A and 8 from Group B.
When looking at Group A—the most valuable start-ups—on average they raised roughly $0.932m (σ ≈ $0.692m) and reached this milestone an average 7 months (σ ≈ 5) after the company was founded. Interestingly, the Group B start-ups on average raised more money in this first round, i.e., $1.374m (σ ≈ $0.976m), which happened an average of 6.5 months after founding (σ ≈ 6.5).
Combining the two groups gives us an average of $1.080m (σ ≈ $0.804m). This money was raised roughly 7 months after founding the start-up (avg. ≈ 6.8, σ ≈ 5.5). Neglecting outlier investments1, 75% of all considered companies raised more than $0.395m and managed to do so within the first 10.5 months after founding. For Group A only, these numbers are $0.395m and 11.25 months.
1 That is, an investment that is an outlier in either the money or the time dimension.
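Statements of the form “75% of all considered companies raised more than $0.395m” correspond to the first quartile of the (outlier-free) distribution, which can be computed like this (a sketch; the example numbers are made up):

```python
import statistics

def first_quartile(values):
    """Return Q1 of the distribution: roughly 75% of the values
    lie at or above this threshold."""
    q1, _, _ = statistics.quantiles(values, n=4)
    return q1

# Hypothetical example: funding amounts in $m.
threshold = first_quartile([1, 2, 3, 4, 5, 6, 7])
```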
2nd Round of Funding
The Series A or first VC investment (in case there was no dedicated Series A) of a company was considered as the second major round of funding. Crunchbase provided data on 37 companies from Group A and 16 from Group B for this.
Group A secured an average $11.372m (σ ≈ $7.573m) in this round, roughly 16 months after founding (avg. = 16.25, σ ≈ 10.12). Group B falls a little short, with “only” $7.032m raised on average (σ ≈ $5.361m), but in a similar timeframe (avg. = 16.8, σ ≈ 10.73). The average amount raised by Group B is about 62% of that raised by Group A.
Combining the two groups yields an average of roughly $9.925m (σ ≈ $7.160m) raised after approximately 16.5 months (avg. ≈ 16.43, σ ≈ 10.20). Not considering outlier investments, 75% of the start-ups in both groups achieved a Series A/Venture funding of over $4.366m within the first 23.25 months of their existence. When looking at Group A only, these numbers change to $5.471m and 23 months.
3rd Round of Funding
The Series B investment was considered as the third major round of funding. For this, Crunchbase provided data on 31 companies from Group A and 13 from Group B.
Group A companies raised an average investment of roughly $25.619m (σ ≈ $14.093m) in this round, at an average 27.54 months (σ ≈ 12.35) after having been founded. The relative difference to Group B stays almost constant compared to the second round of funding, with an average investment of roughly $15.614m (σ ≈ $14.093m). These are 61% of the funding secured by Group A. Group B companies secured their investments an average 32.92 months (σ ≈ 17.22) after having been founded.
When looking at both groups combined, the average investment is roughly $22.196m (σ ≈ $12.998m), raised an average of 29.24 months (σ ≈ 14.09) after the founding of the company. 75% of all considered start-ups secured an investment of at least $11.131m within the first 35 months (without outlier investments). For Group A only, these numbers are $13.167m and 33 months.
The question posed in the title of this article is “What Do Highly Successful Start-ups Have in Common?”. So let’s see what we’ve learned. First off, Seed/Angel funding seems to be usually secured within the first, Series A/Venture funding within the second, and Series B funding within the third year of existence.
When looking at the amounts of money raised, it becomes evident that the difference between the most successful (Group A) and the slightly less famous (Group B) companies is negligible in the first round of funding, but becomes more considerable in the following two rounds. This is most probably due to a mutual effect of “If you raise more money you become more famous” and “If you are more famous you can raise more money”. Still, the first huge investment usually comes before the fame. Therefore, the numbers given in this article can (cautiously) be considered a common trait of highly successful start-ups.
Hence, to answer the initial question: Based on the first quartiles determined earlier we can state that in the first year, a clear majority of the investigated companies secured more than $0.395m of seed or angel funding; in the second year, a clear majority secured more than $4.366m of Series A or venture funding; and in the third year, a clear majority secured more than $11.131m of Series B funding.
Additionally, the following scatter plot maps all investigated investments (without outliers) from all three rounds. As can be seen, the rounds overlap and the variance in both money and time becomes bigger with each round of funding.
Finally, it is important to note that my analysis—as originally intended—only investigates the commonalities in the funding of the considered companies. What I haven’t done is look at what explicitly distinguishes these (highly) successful start-ups from start-ups that failed. That being said, although proper funding is most of the time a key factor in success, you can raise just as much money as the companies described above in the same amount of time and still fail if you don’t make proper use of your investments. The other way round, it’s of course also possible to fall short of the figures above and still build a highly successful start-up. Always bear in mind that it takes more than money to transform a start-up idea—no matter how awesome it is—into a successful company!
From 2013 to 2016, I did an industrial Ph.D. at Unister GmbH in Leipzig in cooperation with Chemnitz University of Technology. This means that I worked in Unister’s R&D department, which at that time developed a novel semantic search engine, and wrote my thesis about the part I contributed to the project. A few days ago—in an old notebook—I found a list of blog posts I wanted to write with the entry “Pros & Cons of an Industrial Ph.D. Program” not yet crossed out. I remember that I added this idea to the list after talking to Jürgen Cito, a Ph.D. student at the University of Zurich, at the 2014 International Conference on Web Engineering (ICWE). After chatting about the topic for a bit, Jürgen said something like “The pros and cons of an industrial Ph.D. program would really make a good blog post”. So here it is, in terms of an infographic made with Adioma, which I had wanted to play around with for a pretty long time.
I have to note that the con “less contact to your professor” wasn’t too bad for me because my second advisor Dr. Andreas Both, who was the Head of R&D at Unister, attached great importance to high-quality scientific work and to publishing scientific results. This, however, isn’t something you can expect at every company. Overall, I list one more pro than con because I had a really good experience with my industrial Ph.D. program and would definitely do it again!
When looking at current research, there is plenty of existing work inquiring into how users use search engines1 and what future search interfaces could look like2. Yet, an investigation of users’ perceptions of and expectations towards current and future search interfaces is still missing.
Therefore, at this year’s International Conference on WWW/Internet (ICWI ’16) my co-author Martin Gaedke presented our paper “REFOCUS: Current & Future Search Interface Requirements for German-speaking Users”, which we wrote together with Andreas Both. To give you an idea of what our work aims at, I’m going to provide a step-by-step explanation of the research paper’s title.
REFOCUS. An acronym for Requirements for Current & Future Search Interfaces.
Search Interface Requirements. From an exploratory study with both qualitative and quantitative questions we have derived a set comprising 11 requirements for search interfaces. The initial set of requirements was validated by 12 dedicated experts.
Current. The requirements shall be valid for current search interfaces. According to the experts’ reviews, this applies to eight of the requirements.
Future. Also, the set of requirements shall inform the design and development of future search interfaces. According to the experts’ reviews, this applies to ten of the requirements. Supporting the design of future search interfaces is particularly important with the wide variety of Internet-capable novel devices, like cutting-edge video game consoles, in mind.
German-speaking Users. Due to the demographics of our participants, the set of requirements can be considered to be valid for German-speaking Internet users. 87.3% of the participants were German while 96.6% lived in a German-speaking country at the time of the survey.
According to Patnaik (2009), Design Thinking is “any process that applies the methods of industrial designers to problems beyond how a product should look.” The term was already used as early as 1987 by Rowe in his eponymous book in an architectural context and has lately become popular through research done at Stanford University and the Hasso Plattner Institute in Potsdam, Germany (Schmalzried, 2013). In his introductory article about Design Thinking, Brown (2008), the CEO and president of IDEO, uses the example of Thomas Edison to illustrate the underlying methodology. While Edison invented the lightbulb—which undoubtedly was a significant innovation in itself from a pure engineering perspective—he did not stop at that point. Rather, he understood that the lightbulb alone would be of no use to people, so he also created “a system of electric power generation and transmission to make it truly useful” (Brown, 2008). This means that “Edison’s genius lay in his ability to conceive a fully developed marketplace, not simply a discrete device” (Brown, 2008), which underpins that one of the prime principles of Design Thinking is to consider a broader context with the user at its center.
Three Requirements of Design Thinking
Brown (2008) characterizes Design Thinking as “a discipline that uses the designer’s sensibility and methods to match people’s need with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity”. Therefore, Design Thinking does not solely concentrate on users, but also takes into account the company perspective. This is necessary since without profitable companies, no human-centered products could be realized, which clearly do not come at no cost. Hence, Design Thinking focuses on people and industry to ultimately yield a methodology that serves both sides—if applied correctly; i.e., companies should not see designers as pure means to make existing products more beautiful, but to “create [new] ideas that better meet consumers’ needs and interests” (Brown, 2008). From all this, we can derive three requirements that have to be met if one wants to successfully apply Design Thinking for the creation of a new product. A product that is based on Design Thinking
matches people’s needs (Brown, 2008),
is based on feasible technological requirements (Brown, 2008), and
creates customer value and market opportunity based on a viable business strategy (Brown, 2008).
According to Schmalzried (2013), these are similar to the “R-W-W” method by Day (2007), which can be summarized as: “Is it real? Can we win? Is it worth doing?”
To give an example, consider the Search Interaction Optimization methodology and toolkit that are at the heart of my PhD thesis. As for requirement (1) above, in the context of my thesis I had to consider two target groups. First, I developed means for human-centered design and development that are both effective and efficient from a company’s point of view. Therefore, my primary target group were the stakeholders, designers and developers applying the new Search Interaction Optimization methodology and toolkit. Then, the secondary target group were the users who are ultimately provided with more usable products by the companies applying the approach. Hence, matching people’s needs corresponded to matching companies’ and users’ needs in that specific case.
What is a Design Thinker?
When it comes to the characterization of a person who applies Design Thinking, Brown (2008) specifies that Design Thinkers are empathic, which supports the successful application of a “people first” approach. Furthermore, they exert integrative thinking (Martin, 2009), i.e., thinking beyond the scope of purely analytical approaches in order to create “solutions that […] dramatically improve on existing alternatives” (Brown, 2008). Third, a Design Thinker is optimistic, which means they believe that there exists at least one solution that is better than the status quo (Brown, 2008). In addition, a certain amount of experimentalism is required, i.e., a Design Thinker must be happy to try out new (and potentially radical) things instead of just doing “incremental tweaks” (Brown, 2008). Finally, and probably most importantly, Design Thinkers collaborate, particularly in an interdisciplinary manner, and also have experience in multiple fields (Brown, 2008).
Three Spaces of Design Thinking
Contrary to existing processes and methodologies that are established and predominantly used in today’s IT industry—i.e., “linear, milestone-based processes” (Brown, 2008)—Design Thinking does not happen sequentially. Rather, Brown (2008) states that it “is best described metaphorically as a system of spaces rather than a pre-defined series of orderly steps.” These spaces are given as follows:
Inspiration relates to actions such as investigating the status quo, defining potential target audiences, exploring the context the new product will be embedded in, going beyond that context to obtain a broader view, observing people, observing the current market situation etc. All of these are actions that “motivate the search for solutions” (Brown, 2008).
Ideation. In the ideation space, scenarios and user stories are created, prototypes are built and tested (both informally and formally), outcomes are communicated etc., all in multiple iterations. That is, the “generati[on], develop[ment], and testing [of] ideas that may lead to solutions” (Brown, 2008).
Implementation. The implementation space does not correspond to the technical implementation alone, i.e., programming tasks carried out by developers. Rather, it again involves a huge amount of interdisciplinary communication to pave the path to a usable product that is put into a broader context. This particularly includes business solutions and marketing, i.e., the implementation space “chart[s] […] a path to market” (Brown, 2008).
While a Design Thinking process usually starts in the inspiration space, transition between any two of the spaces is possible at any time (Brown, 2008), which clearly distinguishes it from established business processes. One example could be that while working in the implementation space, a Design Thinker notices that the new product cannot be well communicated to customers, which might make it necessary to enter the inspiration space (again) and perform a new analysis of the current market situation and customers’ needs. If it turns out that a different marketing strategy would be sufficient, they can then return to the implementation space. A second example would be a series of several paper prototypes that all indicate the previously developed user stories to be irrelevant. This could result in a return to the inspiration space for defining new target audiences or paying closer attention to specific groups of users.
Design Thinking as a Process
A more concrete implementation of the Design Thinking methodology has been realized in the Human-Centered Design Toolkit by IDEO.org (2011). That is, although the underlying principles remain the same, they build on a more defined process. The toolkit guides designers (which, according to the Design Thinking methodology, can be project managers, developers etc.) through a three-step process, i.e., “Hear”, “Create” and “Deliver”. This happens in order to provide solutions that are “desirable, feasible and viable” (IDEO.org, 2011) from a human-centered point of view.
Complementary to this, David Kelley, the founder of IDEO and d.school, defines five elements for the Design Thinking process: Empathize, Define, Ideate, Prototype and Test (Kliever, 2015). By empathizing, you understand “the beliefs, values, and needs that make your audience tick” (Kliever, 2015). In the next step, the collected information is analyzed and translated into insights about the audience and the challenge to be faced (Kliever, 2015). Once the challenge has been defined, in the Ideation phase (which is also one of Brown’s Design Thinking spaces), everything is about finding possible solutions, i.e., it “is a brain dump of ideas, and nothing is off limits” (Kliever, 2015). Finally, in the last two stages, multiple ideas are translated into prototypes and tested with the audience (Kliever, 2015). Depending on whether or not a prototyped and tested solution proves suitable, it might be necessary to iterate one or more of the previous steps (Kliever, 2015).
It might seem counterintuitive to talk about Design Thinking processes after having introduced Design Thinking spaces earlier (“Design Thinking does not happen sequentially”). However, Kelley’s process is perfectly in line with the concept of Design Thinking spaces, where you do not follow a predefined path. The spaces and processes of Design Thinking go hand in hand. While you follow the above process, you necessarily move through the three spaces of Inspiration, Ideation and Implementation in no particular order, iterating previous steps if required.
To conclude, I would like to quote Steve Jobs, who said:
Design is a funny word. Some people think design is how it looks. But of course, if you dig deeper, it’s really how it works.
Moreover, I would like to refer to Google’s principle
Focus on the user and all else will follow.
To give just one example for this, if you write a blog post, it will not become popular because you are using some fancy SEO tools. It will become popular if and only if you have created a piece of great content that fulfills the needs of your audience. As a Design Thinker, be bold, be unpredictable, be creative! You do not (necessarily) have to be a Photoshop artist for this. Hypothesize, but make your hypotheses testable—and test them. But still, do not rely on data alone as a starting point, which might prevent radical and potentially better solutions. Finally—and most importantly—do not let legacy processes restrict you. Yet, at the same time make sure that Design Thinking remains too unpredictable to become a legacy process itself.
Usability testing is often perceived as cumbersome and time-consuming and therefore not thoroughly applied. This was one of the key observations leading to the topic of my PhD thesis. Particularly conducting tests with actual users is often omitted, which results in the release of suboptimal products and websites. In my thesis, I tackle this problem through more automatic evaluation and optimization, however, in the specific context of search engines. Yet, every type of website—no matter if private or professional—should undergo at least one usability test before its release. Therefore, we need to redesign usability testing itself:
It must be quicker.
It must be cheaper.
It must be easier to understand.
Still, the result must be as precise as possible.
The U Score is a more general derivative of the findings of my PhD project that provides quick and precise usability evaluation for everyone based on actual research. Any designer or developer who isn’t able to conduct a regular usability test can answer a minimal but exhaustive set of yes/no questions and receives a single usability score for their website or web app. The questions have been designed to be as objective as possible and are based on established research findings. Also, to save time, I try to minimize the need to involve other people, though it cannot be completely eliminated (still, you can receive a complete U Score with the help of only three friends who have a look at your site).
In this way, the U Score provides an approach to usability testing that is as precise as possible given the minimal effort it requires. It’s intended for situations in which designers/developers don’t have the chance to conduct a traditional usability test. Also, it addresses everyone who needs a quick assessment, has never tested the usability of a website before or is new to usability testing. However, please note that the U Score can only be an approximation and is not a complete substitute for established usability testing methods. Still, it gives you a very good baseline according to the motto: Any usability test is better than no usability test!
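Conceptually, the scoring boils down to aggregating yes/no answers into a single percentage (a purely illustrative sketch; the real U Score questions and any weighting may differ):

```python
def u_score(answers):
    """Aggregate yes/no answers into a 0-100 usability score.

    `answers` maps a question id to True ("yes", the site passes the check)
    or False. Equal weighting is a simplifying assumption here; the actual
    U Score may treat questions differently.
    """
    if not answers:
        raise ValueError("no answers given")
    passed = sum(1 for ok in answers.values() if ok)
    return round(100 * passed / len(answers))
```

A site passing half of the checks would thus land at a score of 50.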
The current version of the U Score is still in beta development status. Therefore, I highly appreciate your feedback, which you can add to this public Trello board.
For implementation, I’ve relied on a number of well-known technologies and frameworks in combination with some that were new to me (the ones marked with an asterisk):
At this year’s INFORMATIK conference held by the GI in Cottbus, I had the chance to present a research paper (full text here) about HoloBuilder—officially titled “Enabling Industry 4.0 with holobuilder”1—that I wrote together with my colleagues Kristina Tenhaft, Simon Heinen and Harry Handorf. In our paper, we examine HoloBuilder from a research rather than a marketing perspective by explaining and demonstrating how it acts as an enabler for Industry 4.0.
The paper was presented in the session named “Industry 4.0: Computer Science Forms New Production Systems”, which featured a selection of renowned experts for Industry 4.0—including Prof. Dr.-Ing. Peter Liggesmeyer of TU Kaiserslautern, Prof. Dr. Jürgen Jasperneite of OWL University and Prof. Dr.-Ing. Jörg Wollert of Aachen University of Applied Sciences, among others. The presenters set a particular focus on topics such as Internet of Things, smart factories, wireless communication and OPC UA, with which our presentation fitted in seamlessly—as will be explained in the following. The feedback we received was consistently positive.
Industry 4.0 was the original use case of our platform, i.e., the use case based on which the first prototypes had been created. From those, the current form of HoloBuilder evolved. The term Industry 4.0 was first coined in the context of the High-Tech Strategy 2020 of the German government. Basically, the smart factory, in which people, machines and products are ubiquitously interconnected, is at the center of Industry 4.0.2 Particular focus is moreover on cyber-physical systems, which merge the virtual and the real world.
HoloBuilder & Industry 4.0
From the technical perspective, implementing Industry 4.0 to a high degree means realizing the smart factory including cyber-physical systems. For this, two prime concepts to consider are Augmented Reality and machine-to-machine communication. Augmented Reality (AR) adds virtual objects to the real world in a see-through scenario, e.g., with smart glasses or a tablet PC. On the one hand, AR provides a “fusion of the physical and the virtual world”3 and thus forms a framework for cyber-physical systems while on the other hand it facilitates efficient human–machine interfaces. Yet, AR alone cannot realize a smart factory, because it only caters for displaying objects, which is a form of one-way communication. Hence, AR needs to be complemented with capabilities for machine-to-machine communication (M2M).
To enable the implementation of Industry 4.0, HoloBuilder has been designed as a platform that makes it possible for everyone concerned to create and consume arbitrary AR content. This is a particular advantage over other AR solutions, which require specific skills for creating the desired content, among other things. In contrast, HoloBuilder facilitates end-user design, which enables, e.g., engineers and mechanics without programming skills to create AR applications in the context of Industry 4.0. To also cater for M2M, the platform additionally incorporates OPC UA, a standardized communication protocol. In this way, information provided by a machine (e.g., its current temperature) can be presented in terms of virtual objects in an AR scenario. Moreover, by manipulating such virtual objects, the user can also give commands to the machine via OPC UA. This makes it possible to, e.g., display a virtual button that can switch a machine on or off.
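The virtual-button idea can be illustrated with a small sketch (purely conceptual: this is not HoloBuilder’s actual implementation, the node ids are made up, and a plain dict stands in for the machine’s OPC UA address space, which a real system would access through an OPC UA client library):

```python
# Conceptual sketch: an AR overlay element bound to an OPC UA node.
# A dict stands in for the machine's address space.
machine_nodes = {
    "ns=2;s=Machine1.Power": False,       # hypothetical boolean node
    "ns=2;s=Machine1.Temperature": 71.5,  # hypothetical sensor node
}

class VirtualButton:
    """An AR button that toggles a boolean node when tapped."""

    def __init__(self, node_id, address_space):
        self.node_id = node_id
        self.space = address_space

    def tap(self):
        """Flip the bound node's value and return the new state."""
        self.space[self.node_id] = not self.space[self.node_id]
        return self.space[self.node_id]

power = VirtualButton("ns=2;s=Machine1.Power", machine_nodes)
```

Tapping the virtual button once would switch the (simulated) machine on; tapping again would switch it off.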
Hermann et al.4 define six design principles for Industry 4.0, upon which we build to show HoloBuilder’s potential for being an enabler of Industry 4.0:
Interoperability,
Virtualization,
Decentralization,
Real-Time Capability,
Service Orientation, and
Modularity.
To summarize the above, Augmented Reality and machine-to-machine communication are two core principles to be considered when implementing Industry 4.0 in terms of a smart factory with cyber-physical systems. HoloBuilder, a platform for end-user design of arbitrary AR content, provides support for both. Our platform moreover fulfills all of the six design principles for Industry 4.0, which underpins HoloBuilder’s potential as an enabler.
Our paper has been published in the proceedings of the 2015 INFORMATIK conference and is also available via ResearchGate (including full text).
1 At the time the paper was accepted, we still had the company-internal convention to write HoloBuilder in lowercase letters, which has changed by now.
2 http://www.plattform-i40.de/
3 Kagermann, Henning: Chancen von Industrie 4.0 nutzen [Taking the Chances of Industry 4.0]. In (Bauernhansl, Thomas; ten Hompel, Michael; Vogel-Heuser, Birgit, eds.): Industrie 4.0 in Produktion, Automatisierung und Logistik [Industry 4.0 in Production, Automation and Logistics], pp. 603–614. Springer, 2014.
4 Hermann, Mario; Pentek, Tobias; Otto, Boris: Design Principles for Industrie 4.0 Scenarios: A Literature Review. Working Paper No. 01/2015, Audi Stiftungslehrstuhl Supply Net Order Management, TU Dortmund, 2015.
My PhD thesis introduces a novel methodology that is named Search Interaction Optimization (SIO) and is used for designing, evaluating and optimizing search engine results pages (so-called SERPs). As a proof-of-concept of this new methodology, I’ve developed a corresponding SIO toolkit, which comprises a total of seven components1 (most of which have already been introduced in previous posts):
Inuit, a new instrument for usability evaluation;
Describing the design and development of the above components and evaluating their effectiveness and feasibility makes up a major part of my thesis. Now, I’ve finally managed to organize all of them into GitHub repos2, which I make available through a new website I have created specifically for my PhD project: http://www.maxspeicher.com/phdthesis/. In particular, on that site you can filter the components depending on whether you want to design, evaluate, and/or optimize a SERP. It also lists all of the related publications, including links to the corresponding full texts (via ResearchGate). In case you are actually interested in all that fancy research stuff3—have fun browsing, reading & playing around! 🙂
1 The logo of the SIO toolkit features only six tiles because S.O.S. and the catalog of best practices are treated as one component there. 2 Because my PhD project was carried out in cooperation with Unister GmbH (Leipzig), it is unfortunately not possible for me to provide the source code of all components via GitHub, as some contain company secrets. 3 Which I doubt. 😉
As one of the building blocks of my PhD thesis, I have developed a novel instrument for measuring the usability of web interfaces, which is simply called Inuit—the Interface Usability Instrument1. This was necessary because a usability instrument suited for the automatic methods for Search Interaction Optimization developed in my PhD project must fulfill three particular requirements that no existing instrument meets:
(R1) A minimal number of items.
(R2) Items with the right level of abstraction for meaningful correlations with user interactions recorded on the client.
(R3) Items that can be applied to a web interface in terms of a stand-alone webpage.
Inuit was designed and developed in a two-step process: First, over 250 rules for good usability from established guidelines and checklists were reviewed to identify a set of common underlying factors (or items) according to R2. From these underlying factors, a “structure” of usability based on ISO 9241-11 was created, which was then shown to nine dedicated usability experts in the second step. The experts—all of whom were working in the e-commerce industry—reviewed the given “structure” and proposed changes according to their perception of web interface usability. Finally, seven items were identified:
These items can be translated to, e.g., the following yes/no questions for use in a questionnaire for determining the usability of a webpage:
Did you find the content you were looking for?
Could you easily understand the provided content?
Were you confused while using the webpage?
Were you distracted by elements of the webpage?
Did typography & layout add to readability?
Was there too much information presented on too little space?
Was your desired content easily and quickly reachable (concerning time & distance)?
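A simple way to aggregate answers to the seven questions above into a single score can be sketched as follows. Note that this exact scoring scheme is my illustrative assumption, not something Inuit prescribes; the only part taken from the questionnaire itself is that questions 3, 4, and 6 are negatively phrased, so “no” is the favorable answer there:

```python
# Hypothetical aggregation of the seven yes/no questions into a
# score in [0, 1]. Inuit does not prescribe this exact scheme.
# Questions 3, 4, and 6 are negatively phrased (confusion,
# distraction, too much information), so "no" counts as good.

NEGATIVE = {3, 4, 6}

def usability_score(answers):
    """answers: dict mapping question number (1-7) to True (yes) / False (no)."""
    good = 0
    for q, yes in answers.items():
        # A positive question answered "yes" or a negative question
        # answered "no" counts as a good answer.
        if (q in NEGATIVE) != yes:
            good += 1
    return good / len(answers)

# Example: a user answered favorably everywhere except Q4 (distraction).
answers = {1: True, 2: True, 3: False, 4: True, 5: True, 6: False, 7: True}
print(usability_score(answers))  # 6 good answers out of 7
```

Per-item answers (rather than a single aggregate) are what allows correlating individual usability factors with client-side interactions, as required by R2.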
A confirmatory factor analysis based on a user study with 81 participants showed that our instrument reflects real-world perceptions of web interface usability reasonably well. Inuit was first introduced at the workshop “Methodological Approaches to Human–Machine Interaction”, which was held as part of the 2013 Mensch & Computer conference. The corresponding paper is titled Towards Metric-based Usability Evaluation of Online Web Interfaces (full-text here). The final version of the instrument was presented at this year’s International Conference on Design, User Experience and Usability (DUXU), held in Los Angeles. The full research paper is titled Inuit: The Interface Usability Instrument and is available via Springer (full-text here).
In the future, I intend to transfer Inuit into the context of my current work. That is, I intend to use it for evaluating the web interface of HoloBuilder, which enables users to create 3D content for the web, in contrast to the usual 2D content that is consumed nowadays. It will be particularly interesting to see whether both 2D and 3D web interfaces can be meaningfully evaluated using the same minimal instrument. Furthermore, Inuit will be applied in the context of the research on evidence-based computing that is happening at the VSR research group at Technische Universität Chemnitz.
P.S.: Thanks a lot to Viet Nguyen for the awesome Inuit logo! 🙂
Our paper “S.O.S.: Does Your Search Engine Results Page (SERP) Need Help?”—co-authors are Dr. Andreas Both (Unister) and Prof. Martin Gaedke (TU Chemnitz)—has been awarded a Best Paper Honorable Mention Award by ACM SIGCHI, the Special Interest Group on Computer–Human Interaction of the Association for Computing Machinery. According to Wikipedia, ACM SIGCHI is “the world’s leading organization in Human–Computer Interaction (HCI), and essentially created and defined the field.”1 Our paper is to be presented at the 2015 edition of the CHI Conference on Human Factors in Computing Systems2, which is the premier conference in the field of HCI and takes place in Seoul, South Korea.
S.O.S., which is short for “SERP Optimization Suite”, is a tool for determining the usability of a SERP in terms of quantitative scores by analyzing user feedback and interactions. If a suboptimal score is detected for a certain usability factor (e.g., readability), adjustments are automatically proposed based on a catalog of best practices (e.g., adjusting the font size, among others). The catalog maps sets of potential causes for suboptimal scores to sets of corresponding countermeasures. Determining the usability scores is based on WaPPU.
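The catalog lookup described above can be sketched as a simple mapping from usability factors to countermeasures. The factor names, the threshold, and the countermeasures below are illustrative placeholders, not the actual S.O.S. catalog:

```python
# Sketch of the best-practice catalog: usability factors with
# suboptimal scores are mapped to proposed countermeasures.
# Factor names, threshold, and countermeasures are hypothetical.

CATALOG = {
    "readability": ["adjust font size", "increase line spacing"],
    "informativeness": ["show richer result snippets"],
    "distraction": ["remove or tone down ads"],
}

def propose_adjustments(scores, threshold=0.5):
    """Return countermeasures for every factor scoring below the threshold."""
    proposals = {}
    for factor, score in scores.items():
        if score < threshold and factor in CATALOG:
            proposals[factor] = CATALOG[factor]
    return proposals

# Example: readability and distraction score suboptimally,
# so both receive proposed adjustments; informativeness does not.
scores = {"readability": 0.3, "informativeness": 0.9, "distraction": 0.4}
print(propose_adjustments(scores))
```

In S.O.S. itself, the scores fed into such a lookup come from WaPPU’s analysis of user feedback and interactions rather than being fixed values.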