The Design Philosophy Behind My New Website

TL;DR: After analyzing my old website, I decided to put more emphasis on (1) identifying and highlighting the pieces of information that are actually useful and (2) a two-dimensional approach to displaying my CV along traditional categories and skills/topics. Moreover, I set myself design constraints that forced me to keep my new website as clean and simple as possible, following the design philosophies of brutalism and Mies van der Rohe.

My new website has been up and running for a few weeks now, and I suppose it might seem unusual to some of you. Therefore, in this post, I want to explain how I ended up with what I did. What motivated me to create a new website was the fact that OpenShift v2 was shut down by Red Hat, and the new version of their hosting service was so unsatisfying that I decided to move to GitHub Pages instead. This, however, also meant that I could no longer rely on a Node server, so I took the opportunity to start over with a blank slate and create something completely different.

Core Questions

As a first step, I analyzed my old website, and I noticed two things. First, it was more or less structured exactly like my CV, which is, to be honest, not the most creative way to lay out an online presence. Second, my publications were listed in a format that I would also use in the reference section of an actual research paper. However, that format was only really understandable and readable for scientists, which—as I could safely assume—excluded a certain share of my visitors. After all, the only necessary pieces of information are the title (plus a link for those who want to have a closer look), the list of authors, and which topics the paper is actually about. From a title like “Ensuring Web Interface Quality through Usability-based Split Testing” one may conclude that I’ve done work on usability, but how would you know that this publication also addresses analytics and machine learning without reading it? The acronym of the conference where it was published or the CORE rating of that conference certainly don’t help the average user.

MaxSpeicher.com v1

Therefore, the two central questions that informed the design of my new website were:

  • How can I present my work and skills in a better, more useful, and more memorable way than the standard paper CV structure—projects, work, education, etc.?
  • How can I more effectively communicate to people the topics I actually worked on and which skills I acquired rather than just telling them that I was an “Intern R&D at Unister”?

Design Constraints

Additionally, I set myself the following design constraints to ensure I had to come up with something completely different and create a novel, unusual, and more memorable experience. The main drivers for these constraints were my love for minimalism and a desire to prevent unnecessary overhead as much as possible.

  • Make it brutalist. Originally, brutalism was an architectural movement known for “its ruggedness and lack of concern to look comfortable or easy”. Accordingly, brutalist websites have a rough, simple, and unfinished look. They are almost nihilistic towards user experience.
  • Make it as simple as possible. I chose this constraint in accordance with Mies van der Rohe’s architectural philosophy of “extreme clarity and simplicity”. He made his buildings purely functional and refrained from using anything ornamental.
  • Don’t use icons or images. Standalone icons are a bad idea in most cases anyway (as this nice summary explains). This led me to using smaller text with a solid, edgy border in the places where I had used icons for social media sites and the different sections on my old website.
  • Use only one primary color. The rest must be kept in black, white, and gray.
  • Work with typography as much as possible. Apart from the one primary color and white space, I tried to use only different font sizes and weights for structuring and highlighting information.

Finally, I came up with and realized a concept that is largely based on hashtags to communicate my skills and the topics I’ve worked on. Every CV entry on my website—be it a university degree, a job, or a publication—is annotated with a set of such tags. The entry about “Ensuring Web Interface Quality through Usability-based Split Testing” now tells the visitor that the paper is about #analytics and #machine learning, among other things. At the top of the page, I feature a list of skills that enables users to filter the page and hide everything that’s not related to a specific skill or topic they’re interested in. Moreover, I chose a two-dimensional approach to presenting my CV. That is, the visitor has the chance to display it either according to the traditional structure of a CV, or grouped by skill/topic. In the latter case, all CV entries that feature a certain tag are displayed in the corresponding section, so that they can be viewed at a glance.
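The filtering and grouping described above can be sketched in a few lines of plain JavaScript. Note that this is my own minimal illustration, not the site’s actual code; the entry shape ({ title, tags }) is a hypothetical simplification of its data model.

```javascript
// Hypothetical CV entry shape: { title: '...', tags: ['analytics', ...] }.

// The filter view: keep only entries annotated with the given tag.
function filterByTag(entries, tag) {
  return entries.filter(function (entry) {
    return entry.tags.indexOf(tag) !== -1;
  });
}

// The skill/topic view: one section per tag, holding every entry
// that carries that tag (an entry may appear in several sections).
function groupByTag(entries) {
  var groups = {};
  entries.forEach(function (entry) {
    entry.tags.forEach(function (tag) {
      (groups[tag] = groups[tag] || []).push(entry);
    });
  });
  return groups;
}
```

In the browser, the same idea would toggle the visibility of DOM nodes instead of returning arrays, but the grouping logic stays the same.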

Technology-wise, my new website is based on standard web technologies, Less, and gulp.

MaxSpeicher.com v2

Why I Don’t User Test My New Website

I asked some friends to have a look at what I created and they immediately came up with the idea of having a sticky menu—as they are used to from other websites—so that they wouldn’t have to go back to the top of the page (using the #top button in the bottom right corner) when they wanted to change the current filter or the way the CV is displayed. I started implementing this supposed improvement yesterday, but became more and more dissatisfied the more I progressed. While it would have made the website slightly more usable (sparing users a click from time to time), a sticky menu would violate the constraints that define my design and would make my website less unique in my opinion. After all, the average user prefers being confronted with and using things they already know. Therefore, I abandoned the idea and did not deploy the changes.

In this sense, the design of my new website is clearly not 100% user-centered, but rather an experimental piece of art.

Did I manage to intrigue you? Feel free to have a look at http://www.maxspeicher.com/.


The Arrival of the Web 3.0

What is Web 3.0? That’s a good question! And I’m pretty sure I won’t be able to answer it in this essay. Yet, I’ll try my very best to get closer to the answer. There exist several definitions of Web 3.0, none of which can be considered definite. A very general one describes it as “an extension of Web 2.0,” which is of limited helpfulness. Also, I’ve heard some call the Semantic Web “Web 3.0,” while Nova Spivack as well as Tim Berners-Lee see it only as one part of Web 3.0. Interestingly, what has been neglected in most discussions about Web 3.0 so far are augmented (AR) and virtual reality (VR), or 3D in general. It seems like this could be worth a closer look. Although both AR and VR have been connected to Web 3.0 separately, they should rather be seen as an integral part of the overall concept, in addition to the Semantic Web. In the following, I describe why 3D — and AR/VR in particular — goes beyond Web 2.0, why current trends in web technology show that we are entering the Web 3.0 at high speed right now, and what will change for us — the designers, developers, architects, etc.

Where are we coming from?

To be able to put Web 3.0 in relation to what we’ve seen so far, let’s have a brief look at the beginnings first.

Web 1.0

What is now called “Web 1.0” in retrospect is what we programmed 15 or 20 years ago, mostly using nothing more than plain HTML, CSS, and some JavaScript (or Microsoft FrontPage). There was no Ajax, no Facebook, and there were no comment sections. Instead, websites had dedicated guestbooks, which were programmed in PHP or Perl. Due to a lack of sophisticated templates, creating a great website was hard work that eventually earned you a web award or two. Essentially, Web 1.0 is what @wayback_exe presents to us, and it was very much defined by the underlying, basic technologies used. Websites were flat and presented flat text and images.

Web 2.0

As web technologies evolved, websites became less static, looking more and more like desktop applications. Soon, users could connect via social networks (Facebook’s like button is ubiquitous nowadays) and watch videos online. YouTube videos, tweets, and the like became discrete content entities (i.e., detached from a particular webpage) that could now be easily embedded anywhere. For instance, WordPress by default features specific shortcodes for these two. Data, rather than the underlying technology, became the center of the web (cf. “What Is Web 2.0” by Tim O’Reilly), which in particular led to an increasing number of mash-ups. Through templating, e.g., by using WordPress, it became increasingly easy for everyone to create a sophisticated website. Also, the proliferation of mobile and small-screen devices with touch screens caused the advent of responsive and adaptive websites as well as completely new kinds of interaction and corresponding user interfaces. Rather than by technologies, the Web 2.0 was and is defined by social interactions, new types of (mashable) content, and a stronger focus on user experience, among other things (cf. “Social Web” by Anja Ebersbach et al.). Yet, contents were as flat as before. That’s the web today’s average user knows.

Web 3.0

Now that we’ve seen where we come from, let’s elaborate on why 3D is a major part of Web 3.0.

Virtual and augmented reality

Neither VR nor AR is the Web 3.0 (as has been stated by some). Still, they are an important part of the bigger picture. Since Google introduced their Cardboard at I/O 2014, consuming VR has become affordable and feasible for average users. Another similar device being heavily pushed right now is the Gear VR. Yet, despite the introduction of 360° video support by YouTube and Facebook, as of today, corresponding content is still rather limited compared to the overall number of websites. This will change with the growing popularity of devices such as 360° cameras, which allow you to capture 360° videos and photospheres (like in Google Street View) with just one click. Such 360° images can then be combined into, e.g., virtual tours using dedicated web platforms such as Roundme, YouVisit, and HoloBuilder. In this way, the average user can also create their own VR content that can be consumed by anyone, in particular through their Cardboards or other head-mounted displays (HMDs). Hence, the amount of available VR content will grow rapidly in the near future.

I personally like to refer to the type of VR content created from 360° images and consumed through HMDs as “Holos,” so let’s stick to that naming convention for now. Just like YouTube videos and tweets, Holos are discrete content entities. That is, technically speaking, all of them are simply iframes, but they denote completely different kinds of content on a higher level of abstraction. Particularly, unlike plain YouTube videos and tweets, Holos add a third spatial dimension to the web content that is consumed by the user. That is, they move the web from 2D to 3D, the enabling technologies being WebGL, Three.js, and Unity. Another example of this evolution is Sketchfab, which brings high-end 3D models to the web and has been described as “the YouTube for 3D content.” Contrary to VR, AR has not yet reached the same status regarding affordability and feasibility for average users. This is due to the fact that AR can’t simply be created and consumed in a web browser. Currently, AR applications are of more interest in Industry 4.0 contexts. However, I’m sure that once VR has hit the mainstream, the complexity of AR will decrease and develop in the same direction. Already now, platforms like HoloBuilder offer the possibility to also create AR content in the browser, which can then be consumed using a dedicated Android or iOS app.

[Figure: One example of Web 3.0 content: a Holo depicting an underwater scene with sharks, ready to be consumed through Google Cardboard (viewed with Chrome on a Nexus 5).]

[Figure: Another example of Web 3.0 content: a 3D model hosted on Sketchfab (viewed with Chrome on a Nexus 5).]

Interactions

With the introduction of the third dimension in web content, the necessary interactions also change significantly. So far, we’ve had traditional interaction using mouse and keyboard and the touch interaction we know from smartphones and tablet PCs. Now, when consuming Web 3.0 content through our Cardboard, we face a novel, hands-free kind of interaction, since we cannot touch the screen of the inserted phone. Instead, “clickable” objects need to be activated using, e.g., some kind of crosshair that is controlled via head movements (notice the little dot right below the sharks in the picture above). Another scenario (of the seemingly thousands that can be thought of) could be content consumed through a Gear VR while controlling it with a smart watch. Also, smart glasses and voice recognition — and more natural user interfaces in general — will become a thing. This calls for completely new and probably radical approaches towards usability, UX, interface, and interaction design that move further and further away from what average users were used to 15 or even only five years ago. All of this will aim at providing an experience that’s as immersive as possible for the user.

Material Design

Finally, what I also consider to already be a part of Web 3.0 is Google’s Material Design language. This is because, just like AR and VR, it aims at extending Web 2.0 beyond the second dimension. Although the outcome is clearly not 3D content in the sense of AR and VR as described above, Material Design puts a strong focus on layers and shadows. Hence, it introduces what I like to call 2½D.

Where are we going?

To summarize, the specific properties of and differences between Web 1.0, 2.0, and 3.0 are given in the following rough overview¹:

                      | Web 1.0                          | Web 2.0                                                | Web 3.0
Device(s)             | PC                               | Smartphone, tablet PC                                  | Smart glasses, Google Cardboard, Gear VR
Interaction           | Mouse, keyboard                  | Touch, gestures                                        | Hands-free, head movement, voice, smart watch
Technologies          | HTML, CSS, JavaScript, PHP, Perl | HTML5, CSS3, Ajax, jQuery, Node.js                     | WebGL, Three.js, Unity, Material Design
Entities              | Webpages, text, images           | YouTube videos, tweets, (blog) posts, etc.             | Photospheres, 360° videos, 3D models, Holos
Defined by / focus on | Technology                       | Data, social interaction, mash-ups, UX, responsiveness | Immersion
Dimensions            | 2                                | 2                                                      | >2

AR and VR—or 3D in general—will become the predominant kind of content created and consumed by users, taking the place of the plain content we’ve been used to so far. For instance, think of the personal portfolio of a painter. In the Web 1.0, it was a hand-crafted website created with Microsoft FrontPage. In the Web 2.0, it’s a WordPress page featuring a premium theme specifically designed as a showcase for paintings. Also, the painter has a dedicated Facebook page to connect with their fans. In the Web 3.0, the personal portfolio will be a walkthrough of a virtual 3D arts gallery, with the paintings virtually hanging on the walls. That walkthrough can be consumed using a web browser, either on a PC, on a tablet, on a smartphone, or through Google Cardboard. Therefore, everyone involved in creating websites and web applications will face new challenges: from presenting information in 3D to designing completely novel kinds of interactions to having to consider a wide variety of VR devices, and so on. The very underlying look and feel of the web—for both creators and consumers—will change drastically.

In analogy to the two-dimensional Web 2.0, “Web 3.0” is the perfect metaphor for the three-dimensional web that is currently evolving. Besides the development towards interconnectedness, IoT, linked data, and the Semantic Web, the fact that we are moving away from the webpage paradigm (cf. “Atomic Design” by Brad Frost) and into the third dimension is one of the major indicators that we are on the verge of experiencing the Web 3.0. And I for my part find it really exciting.

¹ This table makes no claim to completeness. In particular, for the sake of simplicity, I omit the properties of Web 3.0 not connected to AR and VR.

How to Not Have a Bad Day

Some time ago, when I was still more active on Google+, I used to share a funny little animated GIF from time to time. Not just any semi-funny GIF I came across, but only those that made me laugh really hard and at which I could look a thousand times without becoming bored of them.

Then, when I was sitting over a boring research paper and started to become demotivated, I would usually scroll through my Google+ timeline, check out one of the GIFs, and laugh for five minutes. After that, I was in a good enough mood to finally finish that paper (or whatever other shitty task I had to do). Yet, it’s obviously not convenient to regularly scroll back through one’s Google+ timeline to find one’s favorite GIFs, or to organize them as bookmarks in the browser (point I).

Not so long ago, I stumbled upon Material Design Lite, which I had been wanting to play around with ever since; but I was lacking a nice use case (point II). Points I & II then finally led to the creation of ‘Good Mood’ as a part of my personal website MaxSpeicher.com. ‘Good Mood’ shall serve as a curated collection of my favorite GIFs, which I also intend to extend in the future. Whenever you’re having a bad day, you can go there, laugh a bit, and then go on with a (hopefully) better mood than before 🙂

Now comes the geeky part: As a part of my website, the back end of ‘Good Mood’ is based on Node.js in combination with Express. The front end HTML is generated from Jade templates and—obviously—uses the Material Design Lite framework. However, during the creation of the site, there were some little obstacles to overcome.

I started with the front page of ‘Good Mood’, on which I show a random GIF from my collection. That one was pretty easy. But I thought one might also want to check out a specific GIF from time to time. So I decided to provide a second page on which the whole collection is featured.

How to Load Images Asynchronously as Soon as They Enter the Viewport

Problem 1: Animated GIFs are usually pretty heavyweight, so it’s not optimal to load the whole page with all GIFs at once, particularly if it’s accessed on the go via smartphone or the like.

The solution to this one was pretty straightforward: a GIF should be loaded only if the user has scrolled to the respective position, i.e., the image enters the viewport. For this, I register a waypoint for each material card displaying a GIF:

// `waypoints` is a module-level array holding all registered waypoints.
var waypoints = [];

$.registerWaypoint = function($element, func) {
  waypoints.push({
    t: $element.offset().top,                          // top edge
    b: $element.offset().top + $element.outerHeight(), // bottom edge
    func: func                                         // callback to run once visible
  });
};

In the above code, func is the callback function for asynchronously loading the GIF once the viewport reaches the corresponding waypoint:

$.loadCardImgAsync = function(cardCssClass, imgSrc) {
  var asyncImg = new Image();
  asyncImg.onload = function() {
    $('.' + cardCssClass + ' > .mdl-card__title').addClass('bg-img');
  };
  asyncImg.src = imgSrc;
};

In this case, I simply add a predefined CSS class bg-img to the respective card, which displays the GIF as a background-image once it has been loaded as a new Image object.

Finally, we need a function that checks the waypoints against the current scrolling offset and viewport height. That function is bound to the window’s scroll event. Once a waypoint is reached by the viewport (entering from either the top or the bottom), its callback function is executed and the waypoint is removed from the array of waypoints. In order not to mess up the array indexes after removing an element, I iterate backwards using a while loop.

// currentOffset (the scroll position) and windowHeight are assumed to be
// updated in the window's scroll handler before this function runs.
var checkWaypoints = function() {
  var i = waypoints.length;
  while (i--) {
    var waypoint = waypoints[i];
    if ((waypoint.t < currentOffset + windowHeight && waypoint.t > currentOffset)
        || (waypoint.b > currentOffset && waypoint.b < currentOffset + windowHeight)) {
      waypoint.func();
      waypoints.splice(i, 1); // drop the waypoint so it fires only once
    }
  }
};
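The visibility condition inside that loop can be factored into a small pure function, which makes the overlap logic easier to reason about and to test in isolation. This refactoring is my own sketch, not the original source:

```javascript
// Returns true if a waypoint (top edge t, bottom edge b, in pixels) is at
// least partially inside the viewport, i.e., if its top or its bottom edge
// lies between the current scroll offset and the offset plus the viewport
// height. This mirrors the condition used in checkWaypoints.
function isInViewport(waypoint, currentOffset, windowHeight) {
  var topVisible = waypoint.t > currentOffset
      && waypoint.t < currentOffset + windowHeight;
  var bottomVisible = waypoint.b > currentOffset
      && waypoint.b < currentOffset + windowHeight;
  return topVisible || bottomVisible;
}
```

With this helper, the body of the while loop reduces to a single readable check before firing the callback and splicing the waypoint out of the array.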

How to Dynamically Adjust Background Images to Mobile Viewports with CSS

Problem 2: The GIF with the cat was too wide for the mobile view of the page, which made horizontal scrolling necessary. But: Horizontal scrolling is pretty uncool!


This problem was a bit trickier than the previous one, mostly because it involved CSS. When working with <img> tags, we can simply give them a max-width of 100% and omit the height property, so that the correct aspect ratio is automatically retained. However, since I use material design cards, I had to deal with <div> elements and the CSS background-image property. Unfortunately, those don’t know which height they must have unless we tell them. Say, for instance, the animated GIF we’re dealing with is 400 pixels wide and 225 pixels high. Then, we need the following structure according to material design cards:

div.mdl-card.mdl-shadow--4dp(class='#{card.cssClass}')
  div.mdl-card__title.bg-img
  div.mdl-card__supporting-text Found at
  div.mdl-card__actions.mdl-card--border
    a.mdl-button.mdl-button--colored.mdl-js-button.mdl-js-ripple-effect(href='#{card.link}', target='_blank')
      i.fa.fa-google-plus
      | /#{card.caption}

First, we have to give the container <div> element a width of 400 pixels and a max-width of 90% (to give it some space to the left and right on small screens), but we make no statement about its height:

.cat-card.mdl-card {
  display: inline-block;
  width: 400px;
  max-width: 90%;
}

The height of the container then must be determined by the inner <div> element that actually has the background-image property. In order to do so dynamically, it needs a padding-top value that reflects the aspect ratio of the background image. In our case, that would be 225 / 400 = 0.5625 = 56.25%.

.cat-card > .mdl-card__title.bg-img {
  background: url('/images/gif/cat.gif') center / cover;
  color: #fff;
  padding-top: 56.25%;
}

Now, since the padding of the inner <div> is relative to the actual width of the container <div>, the height of the container automatically adjusts to retain the correct aspect ratio of the background image. Go check out CodePen, where I first sketched this solution. Yet, we need one more piece of the puzzle, which goes into the head of the HTML page:

<meta name="viewport" content="width=device-width, initial-scale=1.0" />

Aaand done! Enjoy—both the website and the source code, which is also available on GitHub!


What is ›Usability‹?

Earlier this year, I submitted a research paper about a concept called usability-based split testing¹ to a web engineering conference (Speicher et al., 2014). My evaluation involved a questionnaire that asked for ratings of different usability aspects of web interfaces (such as informativeness, readability, etc.). So obviously, I use the word “usability” in that paper a lot; however, without having thought about its exact connotation in the context of my research before. Of course I was aware of the differences compared to User eXperience, but I just assumed that the questionnaire used and the description of my analyses would make clear what my paper understands as usability.

Then came the reviews and one reviewer noted:

“There is a weak characterization of what Usability is in the context of Web Interface Quality, quality models and views. Usability in this paper is a key word. However, it is weakly defined and modeled w.r.t. quality.”

This confused me at first, since I thought it was pretty clear what usability is and that my paper was well understandable in this respect. In particular, I thought usability had already been defined and characterized before, so why did this reviewer demand that I characterize it again? Figuratively, they asked me: “When you talk about usability, what is that ›usability‹?”

A definition of usability

As I could not just ignore the review, I did some more research on definitions of usability. I remembered that Nielsen defined usability to comprise five quality components—Learnability, Efficiency, Memorability, Errors, and Satisfaction. Moreover, I had already made use of the definition given in ISO 9241–11 for developing the usability questionnaire used in my evaluation: 

“The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”

For designing the questionnaire, I had only focused on reflecting the mentioned high-level factors of usability (effectiveness, efficiency, and satisfaction) in the contained items. However, the rest of the definition is no less interesting. In particular, it contains the phrases

  1. “a product”;
  2. “specified users”;
  3. “specified goals”; and
  4. “specified context of use”.

As can be seen, the word “specified” is used three times—and also “a product” is a rather vague description here.

This makes it clear that usability is a difficult-to-grasp concept and even the ISO definition gives ample scope for different interpretations. Also, in his paper on the System Usability Scale, Brooke (1996) refers to ISO 9241–11 and notes that “Usability does not exist in any absolute sense; it can only be defined with reference to particular contexts.” Thus, one has to explicitly specify the four vague phrases mentioned above to characterize the exact manifestation of usability they are referring to. Despite my initial skepticism, that reviewer was absolutely right!

Levels of usability

As the reviewer explicitly referred to “Web Interface Quality”, we also have to take ISO/IEC 9126 into account. That standard is concerned with software engineering and product quality and defines three different levels of quality metrics: 

  • Internal metrics: Metrics that do not rely on software execution (i.e., they are a static measure)
  • External metrics: Metrics that are applicable to running software
  • Quality in use metrics: Metrics that are only available when the final product is used in real conditions

As usability clearly is one aspect of product quality, these metrics can be transferred into the context of usability evaluation. In analogy, this gives us three levels of usability: Internal usability, external usability, and usability in use.

This means that if we want to evaluate usability, we first have to state which of the above levels we are investigating. The first one might be assessed with a static code analysis, as for example carried out by accessibility tools. The second might be assessed in terms of an expert going through a rendered interface without actually using the product. Finally, usability in use is commonly assessed with user studies, either on a live website, or in a more controlled setting.

Bringing it all together

Once we have decided on one of the above levels of usability, we have to give further detail on the four vague phrases contained in ISO 9241–11. Mathematically speaking, we have to find values for the variables product, users, goals, and context of use, which are sets of characteristics. Together with the level of usability, this gives us a quintuple defined by the following cross product:

level of usability × product × users × goals × context of use.

We already know the possible values for level of usability:

level of usability ∈ { internal usability, external usability, usability in use },

so what are the possible values for the remaining variables contained in the “quintuple of usability”?
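As a small illustration before going through the remaining variables, such a characterization could be captured as a simple record with one field per component of the quintuple. This is merely my own sketch of the idea: the level values are validated against the set given above, while the other fields are plain arrays standing in for sets of characteristics.

```javascript
// The three possible levels of usability, as derived from ISO/IEC 9126.
var LEVELS = ['internal usability', 'external usability', 'usability in use'];

// Builds one element of the cross product
// level of usability × product × users × goals × context of use.
// product, users, goals, and contextOfUse are sets of characteristics,
// represented here as arrays.
function characterizeUsability(level, product, users, goals, contextOfUse) {
  if (LEVELS.indexOf(level) === -1) {
    throw new Error('unknown level of usability: ' + level);
  }
  return {
    level: level,
    product: product,
    users: users,
    goals: goals,
    contextOfUse: contextOfUse
  };
}
```

A record like this forces an evaluator to state all five components explicitly, which is exactly the point of the quintuple.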

Product

The first one is rather straightforward. Product is the actual product you are evaluating, or at least the type thereof. Particularly, web interface usability is different from desktop software or mobile app usability. Also, it is important to state whether one evaluates only a part of an application (e.g., a single webpage contained in a larger web app), or the application as a whole. Therefore: 

product ⊆ { desktop application, mobile application, web application, online shop, WordPress blog, individual web page, … }. 

Since product is a subset of the potential values, it is possible to use any number of them for a precise characterization of the variable, for instance, product = { mobile application, WordPress blog } if you are evaluating the mobile version of your blog. This should not be thought of as a strict formalism, but is rather intended as a convenient way to express the combined attributes of the variable. However, not all values can be meaningfully combined (e.g., desktop application and WordPress blog). The same holds for the remaining variables explained in the following.

Users

Next comes the variable users, which relates to the target group of your product (if evaluating in a real-world setting) or the participants involved in a controlled usability evaluation (such as a lab study). Distinguishing between these is highly important, as different kinds of users might perceive a product completely differently. Also, real users are more likely to be unbiased than participants in a usability study.

users ⊆ { visually impaired users, female users, users aged 19–49, test participants, inexperienced users, experienced users, novice users, frequent users, … }.

In particular, when evaluating usability in a study with participants, this variable should contain all demographic characteristics of that group. Yet, when using methods such as expert inspections, users should not contain “usability experts,” as your interface is most probably not exclusively designed for that very specific group. Rather, it contains the characteristics of the target group the expert has in mind when performing, for instance, a cognitive walkthrough. This is due to the fact that usability experts are usually well-trained in simulating a user with specific attributes.

Goals

The next one is a bit tricky, as goals are not simply the tasks a specified user shall accomplish (such as completing a checkout process). Rather, there are two types of goals according to Hassenzahl (2008): do-goals and be-goals. 

Do-goals refer to pragmatic usability, which means “the product’s perceived ability to support the achievement of [tasks]” (Hassenzahl, 2008), as for example the aforementioned completion of a checkout process.

In contrast, be-goals refer to hedonic usability, which “calls for a focus on the Self” (Hassenzahl, 2008). To give just one example, the ISO 9241–11 definition contains “satisfaction” as one component of usability. Therefore, “feeling satisfied” is a be-goal that can be achieved by users. The achievement of be-goals need not be connected to the achievement of corresponding do-goals (Hassenzahl, 2008). In particular, a user can be satisfied even if they failed to accomplish certain tasks, and vice versa.

Thus, it is necessary to take these differences into account when defining the specific goals to be achieved by a user. The variable goals can be specified either by the concrete tasks the user shall achieve or by Hassenzahl’s more general notions if no specific tasks are defined:

goals ⊆ { do-goals, be-goals, completed checkout process, writing a blog post, feeling satisfied, having fun, … }.

Context of use

Last comes the variable context of use. This one describes the setting in which you want to evaluate the usability of your product. It can be something rather general (such as “real world” or “lab study,” to indicate a potential bias of the users involved), something device-related (desktop PC vs. touch device), or some other, more specific information about the context. In general, your setting/context should be described as precisely as possible.

context of use ⊆ { real world, lab study, expert inspection, desktop PC, mobile phone, tablet PC, at day, at night, at home, at work, user is walking, user is sitting, … }.

Case study

For testing a research prototype in the context of my industrial PhD thesis, we evaluated a novel search engine results page (SERP) designed for use with desktop PCs (Speicher et al., 2014). The test was carried out as a remote asynchronous user study, with participants recruited via internal mailing lists of the cooperating company. They were asked to find a birthday present for a good friend that costs no more than €50, which is a semi-open task (i.e., a do-goal). According to the above formalization, the precise type of usability assessed in that evaluation is therefore given by the following (for the sake of readability, the quintuple is given in list form):

  • level of usability = usability in use
  • product = {web application, SERP}
  • users = {company employees, novice users, experienced searchers (several times a day), average age ≈ 31, 62% male, 38% female}
  • goals = {formulate search query, comprehend presented information, identify relevant piece(s) of information}
  • context of use = {desktop PC, HD screen, at work, remote asynchronous user study}

If the same SERP is instead inspected by a team of usability experts based on screenshots, the assessed type of usability changes accordingly. In particular, users changes to the actual target group of the web application, as defined by the cooperating company and explained to the experts beforehand. Also, goals must be reformulated to reflect what the experts pay attention to (only certain aspects of a system can be assessed through screenshots). Overall, the assessed type of usability is then expressed by the following:

  • level of usability = external usability
  • product = {web application, SERP}
  • users = {German-speaking Internet users, any level of searching experience, age 14–69}
  • goals = {identify relevant piece(s) of information, be satisfied with presentation of results, feel pleased by visual aesthetics}
  • context of use = {desktop PC, screen width ≥ 1225 px, expert inspection}
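
To make the difference between the two assessments explicit, they can be encoded as plain quintuples and compared; this is a hypothetical Python illustration, with the set elements copied from the two lists above (variable names are mine).

```python
# Hypothetical illustration: the two assessed types of usability from the
# case study, encoded as quintuples (level, product, users, goals, context).
user_study = (
    "usability in use",
    frozenset({"web application", "SERP"}),
    frozenset({"company employees", "novice users",
               "experienced searchers (several times a day)",
               "average age ~31", "62% male, 38% female"}),
    frozenset({"formulate search query", "comprehend presented information",
               "identify relevant piece(s) of information"}),
    frozenset({"desktop PC", "HD screen", "at work",
               "remote asynchronous user study"}),
)

expert_inspection = (
    "external usability",
    frozenset({"web application", "SERP"}),
    frozenset({"German-speaking Internet users",
               "any level of searching experience", "age 14-69"}),
    frozenset({"identify relevant piece(s) of information",
               "be satisfied with presentation of results",
               "feel pleased by visual aesthetics"}),
    frozenset({"desktop PC", "screen width >= 1225 px", "expert inspection"}),
)

# Same product in both evaluations ...
print(user_study[1] == expert_inspection[1])  # True
# ... but the quintuples differ, i.e., two distinct types of usability.
print(user_study == expert_inspection)        # False
```

The comparison shows the core point of the formalization: even for the exact same product, changing the evaluation method changes the type of usability that is actually being assessed.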

Conclusion

Usability is a term that spans a wide variety of potential manifestations. For example, usability evaluated in a real-world setting with real users might be a totally different kind of usability than usability evaluated in a controlled lab study, even with the same product. Therefore, a given set of characteristics must be specified; otherwise, the notion of “usability” is meaningless due to its high degree of ambiguity. It is necessary to provide specific information on five variables that have been identified based on ISO 9241–11 and ISO/IEC 9126: level of usability, product, users, goals, and context of use. Although I have introduced a mathematical-looking formalism for characterizing the precise type of usability one is assessing, it is not necessary to provide that information in the form of a quintuple. Rather, my primary objective is to raise awareness of the need to carefully specify usability, as many reports on usability evaluations, including the original version of my research paper (Speicher et al., 2014), lack a complete description of what they understand as “usability.”

(This article has also been published on Medium and as a technical report.)

1 “Usability-based split testing” means comparing two variations of the same web interface based on a quantitative usability score (e.g., usability of interface A = 97%, usability of interface B = 42%). The split test can be carried out as a user study or under real-world conditions.


References

John Brooke. SUS: A “quick and dirty” usability scale. In Usability Evaluation in Industry. Taylor and Francis, 1996. 

Marc Hassenzahl. User Experience (UX): Towards an experiential perspective on product quality. In Proc. IHM, 2008.

Maximilian Speicher, Andreas Both, and Martin Gaedke. Ensuring Web Interface Quality through Usability-based Split Testing. In Proc. ICWE, 2014.

Acknowledgments

Special thanks go to Jürgen Cito, Sebastian Nuck, Sascha Nitsch & Tim Church, who provided feedback on drafts of this article 🙂

motherfuckingwebsite.com Redesigned: My Other Poster Presented at #ICWE2014

(Disclaimer: motherfuckingwebsite.com was not made by me!)

Device-agnostic design poster presented @ ICWE 2014

Based on my original post about redesigning motherfuckingwebsite.com (see here), I have created a poster along with a corresponding short paper, which were presented at the 2014 International Conference on Web Engineering (ICWE).

The short paper will be included in the conference proceedings published by Springer: Maximilian Speicher (2014). “Paving the Path to Device-agnostic and Content-centric Web Design”. In Proc. ICWE (Posters).

Special thanks go to Fred Funke, who helped with designing the poster!

This is a motherfucking website. And it’s not completely fucking perfect

(Disclaimer: motherfuckingwebsite.com was not made by me!)

motherfuckingwebsite.com
The motherfucking website.

I recently stumbled upon motherfuckingwebsite.com, which is the most pragmatic and minimalistic approach to website creation I’ve seen so far (except for plain TXT files, of course). The first two lines read “This is a motherfucking website. And it’s fucking perfect.” The site has attracted quite some attention on various social media platforms, with lots of people stating that the guy who created it is absolutely right.

Basically, he says that the site is perfect because it loads fast, is accessible to everyone, is dead responsive (without using media queries), has content (i.e., it gets its “fucking point across”), uses semantic HTML5 tags, etc. Most of his statements are indeed right, and there are undoubtedly lots of web designers out there who should take them to heart. The creator’s final note is that motherfuckingwebsite.com is “fucking satire”: his aim is to convey that websites are not broken by default; instead, developers break them.

“Good design is as little design as possible.”
— Dieter Rams

Given that lots of people like the idea behind the site and that its creator makes a whole bunch of valid points, I want to point out three things he does not get completely right or passed over entirely. Addressing these would bring a site of this kind closer to perfection, assuming that we follow a text-centric and device-agnostic approach.

  1. Line length: Text lines on motherfuckingwebsite.com span the whole width of the viewport, which is particularly disadvantageous on large screens. An optimally readable line should contain only ~66 characters, which corresponds to ~30 em.1 To reduce the amount of scrolling, this can (Update July 23, 2016) be augmented with a multi-column layout and pagination.2
  2. Navigation: motherfuckingwebsite.com does not make statements about navigation. However, a website featuring larger amounts of content would require a navigation bar, ideally fixed to the top of the viewport.3 This navigation bar should adapt to smaller screens without device- or resolution-specific breakpoints (= device-agnostic design).
  3. Aesthetics: motherfuckingwebsite.com takes a purely technical/functional point of view. Yet, this does not fully cover perfection from the users’ perspective. In particular, research has found that visual aesthetics are a crucial factor in user satisfaction.4 Even without excessive use of graphics, this can be achieved by leveraging more sophisticated color schemes and typography rather than just black/white coloring and default fonts.
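
As a rough plausibility check of the ~66-characters-to-~30-em correspondence from point 1, here is a back-of-the-envelope calculation; the average glyph width of about 0.45 em for a typical body font is my own assumption for illustration, not a figure from the article or its sources.

```python
# Back-of-the-envelope check: how wide is a 66-character line in em units?
# ASSUMPTION (mine): an average glyph in a typical body font is ~0.45 em wide.
AVG_CHAR_WIDTH_EM = 0.45
chars_per_line = 66

line_width_em = chars_per_line * AVG_CHAR_WIDTH_EM
print(round(line_width_em, 1))  # 29.7 -> roughly the ~30 em rule of thumb
```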

Keeping the above points in mind, motherfuckingwebsite.com is for sure a good starting point for “back-to-the-roots”, minimalistic and device-agnostic web design.

Update (November 28, 2014): Please also check out the follow-up posts based on this article, as well as the conference article published by Springer.

1 http://goldilocksapproach.com/article/
2 Michael Nebeling, Fabrice Matulic, Lucas Streit, and Moira C. Norrie (2011): “Adaptive Layout Template for Effective Web Content Presentation in Large-Screen Contexts”. In: Proc. DocEng.
3 Cf. http://www.dartlang.org/
4 Talia Lavie and Noam Tractinsky (2004): “Assessing dimensions of perceived visual aesthetics of websites”. International Journal of Human-Computer Studies, 60(3), pp. 269–298.