The Arrival of the Web 3.0

What is Web 3.0? That’s a good question! And I’m pretty sure I won’t be able to answer it in this essay. Yet, I’ll try my very best to get closer to the answer. There exist several definitions of Web 3.0, none of which can be considered definite. A very general one describes it as “an extension of Web 2.0,” which is of limited helpfulness. Also, I’ve heard some call the Semantic Web “Web 3.0,” while Nova Spivack as well as Tim Berners-Lee see the Semantic Web as only a part of Web 3.0. Interestingly, what has been neglected in most discussions about Web 3.0 so far are augmented reality (AR) and virtual reality (VR), or 3D in general. It seems like this could be worth a closer look. Although AR and VR have each been connected to Web 3.0 separately, they should rather be seen as integral parts of the overall concept, in addition to the Semantic Web. In the following, I describe why 3D — and AR/VR in particular — is beyond Web 2.0, why current trends in web technology show that we are entering Web 3.0 at high speed right now, and what will change for us — the designers, developers, architects, etc.

Where are we coming from?

To be able to put Web 3.0 in relation to what we’ve seen so far, let’s have a brief look at the beginnings first.

Web 1.0

What is now called “Web 1.0” in retrospect is what we programmed 15 or 20 years ago, mostly using nothing more than plain HTML, CSS, and some JavaScript (or Microsoft FrontPage). There was no Ajax, no Facebook, and there were no comment sections. Instead, websites had dedicated guestbooks, which were programmed in PHP or Perl. Due to a lack of sophisticated templates, creating a great website was hard work that was eventually rewarded with a web award or two. Essentially, Web 1.0 is what’s presented to us by @wayback_exe and was very much defined by the underlying, basic technologies used. Websites were flat and presented flat text and images.

Web 2.0

As web technologies evolved, websites became less static, looking more and more like desktop applications. Soon, users could connect via social networks (Facebook’s like button is ubiquitous nowadays) and watch videos online. YouTube videos, tweets, and the like became discrete content entities (i.e., detached from a particular webpage) that could now be easily embedded anywhere. For instance, WordPress by default features specific shortcodes for these two. Data, rather than the underlying technology, became the center of the web (cf. “What Is Web 2.0” by Tim O’Reilly), which in particular led to an increasing number of mash-ups. Through templating, e.g., by using WordPress, it became increasingly easy for everyone to create a sophisticated website. Also, the proliferation of mobile and small-screen devices with touch screens caused the advent of responsive and adaptive websites as well as completely new kinds of interaction and corresponding user interfaces. Rather than by technologies, the Web 2.0 was and is defined by social interactions, new types of (mashable) content, and a stronger focus on user experience, among other things (cf. “Social Web” by Anja Ebersbach et al.). Yet, contents were as flat as before. That’s the web today’s average user knows.

Web 3.0

Now that we’ve seen where we come from, let’s elaborate on why 3D is a major part of Web 3.0.

Virtual and augmented reality

Neither VR nor AR is the Web 3.0 (as has been stated by some). Still, they are an important part of the bigger picture. Since Google introduced their Cardboard at I/O 2014, consuming VR has become affordable and feasible for average users. Another similar device heavily pushed right now is the Gear VR. Yet, despite the introduction of 360° video support by YouTube and Facebook, as of today, corresponding content is still rather limited compared to the overall number of websites. This will change with the growing popularity of devices such as 360° cameras, which allow you to capture 360° videos and photospheres (like in Google Street View) with just one click. Such 360° images can then be combined into, e.g., virtual tours using dedicated web platforms such as Roundme, YouVisit, and HoloBuilder. In this way, the average user can also create their own VR content that can be consumed by anyone, in particular through their Cardboards or other head-mounted displays (HMDs). Hence, the amount of available VR content will grow rapidly in the near future.

I personally like to refer to the type of VR content created from 360° images and consumed through HMDs as “Holos,” so let’s stick to that naming convention for now. Just like YouTube videos and tweets, Holos are discrete content entities. That is, technically speaking, all of them are simply iframes, but they denote completely different kinds of content on a higher level of abstraction. Particularly, unlike plain YouTube videos and tweets, Holos add a third spatial dimension to the web content that is consumed by the user. That is, they move the web from 2D to 3D, the enabling technologies being WebGL, Three.js, and Unity. Another example of this evolution is Sketchfab, which brings high-end 3D models to the web and has been described as “the YouTube for 3D content.” In contrast to VR, AR has not yet reached the same status regarding affordability and feasibility for average users. This is due to the fact that AR can’t simply be created and consumed in a web browser. Currently, AR applications are of more interest in Industry 4.0 contexts. However, I’m sure that once VR has hit the mainstream, the complexity of AR will decrease and develop in the same direction. Already now, platforms like HoloBuilder offer the possibility to also create AR content in the browser, which can then be consumed using a dedicated Android or iOS app.
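To make the “discrete content entities are simply iframes” point concrete, here is a purely illustrative sketch of how a Holo could be embedded just like a YouTube video or a tweet. The URL scheme and function name are hypothetical, not the API of any real platform:

```javascript
// Illustrative sketch only: a Holo embedded as a discrete content entity
// via a plain iframe, analogous to YouTube or Twitter embeds.
// The holos.example.com URL scheme is an assumption for demonstration.
function holoEmbedCode(holoId, width, height) {
  return '<iframe src="https://holos.example.com/embed/' + holoId + '"' +
         ' width="' + width + '" height="' + height + '"' +
         ' allowfullscreen></iframe>';
}
```

On the page, such a snippet would be pasted into the HTML wherever the Holo (e.g., the underwater scene below) should appear, exactly like any other embed code.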

One example for Web 3.0 content: a Holo depicting an underwater scene with sharks, ready to be consumed through Google Cardboard (viewed with Chrome on a Nexus 5).
Another example for Web 3.0 content: a 3D model hosted on Sketchfab (viewed with Chrome on a Nexus 5).


With the introduction of the third dimension into web content, the necessary interactions also change significantly. So far, we’ve had traditional interaction using mouse and keyboard and the touch interaction we know from smartphones and tablet PCs. Now, when consuming Web 3.0 content through our Cardboard, we face a novel, hands-free kind of interaction since we cannot touch the screen of the inserted phone. Instead, “clickable” objects need to be activated using, e.g., some kind of crosshair that is controlled via head movements (notice the little dot right below the sharks in the picture above). Another scenario (of the seemingly thousands that can be thought of) could be content consumed through a Gear VR while controlling it with a smart watch. Also, smart glasses and voice recognition — and more natural user interfaces in general — will become a thing. This calls for completely new and probably radical approaches towards usability, UX, interface, and interaction design that move further and further away from what average users were used to 15 or even just five years ago. All of this will aim at providing an experience that’s as immersive as possible for the user.
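To make the crosshair idea more concrete, here is a minimal sketch of dwell-based gaze selection, i.e., “clicking” by resting the crosshair on an object for a moment. All names and numbers are my own assumptions, not taken from any particular VR framework:

```javascript
// Sketch: dwell-based "gaze click" for crosshair interaction in
// Cardboard-style VR. An object counts as gazed at while the angular
// distance between crosshair and target stays below a threshold;
// after a dwell time, it is activated. Threshold and timing are
// illustrative assumptions.
function GazeSelector(thresholdDeg, dwellMs) {
  this.thresholdDeg = thresholdDeg; // max angular distance to count as a hit
  this.dwellMs = dwellMs;           // how long the user must keep gazing
  this.gazeStart = null;            // timestamp when gazing began
}

// crosshair and target are given as {yaw, pitch} in degrees;
// now is the current timestamp in ms. Returns true once activated.
GazeSelector.prototype.update = function (crosshair, target, now) {
  var dYaw = crosshair.yaw - target.yaw;
  var dPitch = crosshair.pitch - target.pitch;
  var dist = Math.sqrt(dYaw * dYaw + dPitch * dPitch);

  if (dist > this.thresholdDeg) {
    this.gazeStart = null; // crosshair left the target: reset the dwell timer
    return false;
  }
  if (this.gazeStart === null) {
    this.gazeStart = now;  // crosshair just entered the target
  }
  // activate once the crosshair has rested on the target long enough
  return now - this.gazeStart >= this.dwellMs;
};
```

In a real Holo viewer, update() would be called once per rendered frame with the current head orientation, and a returned true would trigger the “click.”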

Material Design

Finally, what I also consider to already be a part of Web 3.0 is Google’s Material Design language. This is because, just like AR and VR, it aims at extending Web 2.0 beyond the second dimension. Although the outcome is clearly not 3D content in the sense of AR and VR as described above, Material Design puts a strong focus on layers and shadows. Hence, it introduces what I like to call 2½D.

Where are we going?

To summarize, the specific properties of and differences between Web 1.0, 2.0, and 3.0 are given in the following rough overview1:

                      | Web 1.0                          | Web 2.0                                                | Web 3.0
Device(s)             | PC                               | Smartphone, tablet PC                                  | Smart glasses, Google Cardboard, Gear VR
Interaction           | Mouse, keyboard                  | Touch, gestures                                        | Hands-free, head movement, voice, smart watch
Technologies          | HTML, CSS, JavaScript, PHP, Perl | HTML5, CSS3, Ajax, jQuery, Node.js                     | WebGL, Three.js, Unity, Material Design
Entities              | Webpages, text, images           | YouTube videos, tweets, (blog) posts etc.              | Photospheres, 360° videos, 3D models, Holos
Defined by / focus on | Technology                       | Data, social interaction, mash-ups, UX, responsiveness | Immersion
Dimensions            | 2                                | 2                                                      | >2

AR and VR—or 3D in general—will become the predominant kind of content created and consumed by users, taking the place of the plain content we’ve been used to so far. For instance, think of the personal portfolio of a painter. In the Web 1.0, it was a hand-crafted website created with Microsoft FrontPage. In the Web 2.0, it’s a WordPress page featuring a premium theme specifically designed as a showcase for paintings. Also, the painter has a dedicated Facebook page to connect with their fans. In the Web 3.0, the personal portfolio will be a walkthrough of a virtual 3D arts gallery, with the paintings virtually hanging on the walls. That walkthrough can be consumed using a web browser, either on a PC, on a tablet, on a smartphone, or through Google Cardboard. Therefore, everyone involved in creating websites and web applications will face new challenges: from presenting information in 3D to designing completely novel kinds of interactions to having to consider a wide variety of VR devices and so on. The very underlying look and feel of the web—for both creators and consumers—will change drastically.

In analogy to the two-dimensional Web 2.0, Web 3.0 is the perfect metaphor for the three-dimensional web that is currently evolving. Besides the development towards interconnectedness, IoT, linked data, and the Semantic Web, the fact that we are moving away from the webpage paradigm (cf. “Atomic Design” by Brad Frost) and into the third dimension is one of the major indicators that we are on the verge of experiencing the Web 3.0. And I for my part find it really exciting.

1 This table makes no claim to completeness. Particularly, for the sake of simplicity, I omit the properties of Web 3.0 not connected to AR and VR.


Sparkle up Your Website ✨

Recently, for one of my current projects, I was looking for a jQuery plug-in that adds a sparkle effect to DOM elements. However, I couldn’t find any that suited my needs. Either the available plug-ins were way too elaborate for what I had in mind, or the sparkle effect just didn’t look like I had imagined. So, after an afternoon of researching existing jQuery plug-ins, I decided to simply code one myself, which I wanna share with you here.

You can add a sparkle effect—i.e., a single sparkling star—to any DOM element by calling the plug-in as follows (all parameters are optional):

$('.my-element').sparkle({
  fill: "#fff",     // fill color of the star that makes up the sparkle effect (default: #fff)
  stroke: "#000",   // outline color of the star (default: #000)
  size: 20,         // size of the sparkle effect in px (default: 20)
  delay: 0,         // delay before first sparkle in ms (default: 0)
  duration: 1500,   // duration of a sparkle in ms (default: 1500)
  pause: 1000       // delay between two sparkles in ms (default: 1000)
});
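The plug-in’s internals aren’t shown in this post, but as a sketch of how such a plug-in would typically combine user-supplied options with the defaults listed above (the helper name is my own, not the actual implementation):

```javascript
// Hypothetical sketch of the option handling inside such a plug-in:
// user-supplied options override the documented defaults.
var SPARKLE_DEFAULTS = {
  fill: "#fff",
  stroke: "#000",
  size: 20,
  delay: 0,
  duration: 1500,
  pause: 1000
};

function mergeSparkleOptions(options) {
  var merged = {};
  var key;
  // start from the defaults ...
  for (key in SPARKLE_DEFAULTS) { merged[key] = SPARKLE_DEFAULTS[key]; }
  // ... then let the caller's options win
  for (key in (options || {})) { merged[key] = options[key]; }
  return merged;
}
```

In a jQuery context, the same merge is usually written as `$.extend({}, defaults, options)`; the loop above just spells out what that does.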

In case you want to add multiple sparkling stars to the DOM element, you can call the sparkle() function more than once, also with different parameters:

var $el = $('.my-element');
$el.sparkle({ size: 30 });
$el.sparkle({ size: 30 });
$el.sparkle({
  delay: 1000,
  size: 10,
  pause: 750
});

The sparkle effect can also be removed again via a dedicated option.

The necessary CSS and JS files of the plug-in can be downloaded from GitHub; the original implementation of the plug-in can be found on CodePen.

How to Not Have a Bad Day

Some time ago, when I was still more active on Google+, I used to share a funny little animated GIF from time to time. Not just any semi-funny GIF I came across, but only those that made me laugh really hard and that I could look at a thousand times without getting bored of them.

Then, when I was sitting over a boring research paper and started to become demotivated, I would usually scroll through my Google+ timeline, check out one of the GIFs and laugh for five minutes. After that, I was in a good enough mood to finally finish that paper (or whatever other shitty task I had to do). Yet, it’s obviously not convenient to regularly scroll back through one’s Google+ timeline to find one’s favorite GIFs or to organize them as bookmarks in the browser (point I).

Not so long ago, I stumbled upon Material Design Lite, which I had really wanted to play around with since then; but I was lacking a nice use case (point II). Points I & II then finally led to the creation of ‘Good Mood’ as a part of my personal website. ‘Good Mood’ shall serve as a curated collection of my favorite GIFs, which I also intend to extend in the future. Whenever you’re having a bad day, you can go there, laugh a bit and then go on with a (hopefully) better mood than before 🙂

Now comes the geeky part: As a part of my website, the back end of ‘Good Mood’ is based on Node.js in combination with Express. The front end HTML is generated from Jade templates and—obviously—uses the Material Design Lite framework. However, during the creation of the site, there were some little obstacles to overcome.

I started with the front page of ‘Good Mood’, on which I show a random GIF from my collection. That one was pretty easy. But I thought one might also want to check out a specific GIF from time to time. So I decided to provide a second page on which the whole collection is featured.

How to Load Images Asynchronously as Soon as They Enter the Viewport

Problem 1: Animated GIFs are usually pretty heavyweight, so it’s not optimal to load the whole page with all GIFs, particularly if it’s accessed on the go via smartphone or the like.

The solution to this one was pretty straightforward: a GIF should be loaded only if the user has scrolled to the respective position, i.e., the image enters the viewport. For this, I register a waypoint for each material card displaying a GIF:

var waypoints = [];

$.registerWaypoint = function($element, func) {
  waypoints.push({
    t: $element.offset().top,                          // top
    b: $element.offset().top + $element.outerHeight(), // bottom
    func: func
  });
};
In the above code, func is the callback function for asynchronously loading the GIF once the viewport reaches the corresponding waypoint:

$.loadCardImgAsync = function(cardCssClass, imgSrc) {
  var asyncImg = new Image();
  asyncImg.onload = function() {
    $('.' + cardCssClass + ' > .mdl-card__title').addClass('bg-img');
  };
  asyncImg.src = imgSrc;
};

In this case, I simply add a predefined CSS class bg-img to the respective card, which displays the GIF as a background-image after it has been loaded as a new Image object.

Finally, we need a function for checking the waypoints against the current scrolling offset and viewport height. That function is bound to the window’s scroll event. Once a waypoint is reached by the viewport—entering from either the top or the bottom—its callback function is executed and the waypoint is removed from the array of waypoints. In order to not mess up the array indexes after having removed an element, I iterate backwards using a while loop.

var checkWaypoints = function() {
  var currentOffset = $(window).scrollTop();
  var windowHeight = $(window).height();
  var waypoint;
  var i = waypoints.length;
  while (i--) {
    waypoint = waypoints[i];
    if ((waypoint.t < currentOffset + windowHeight && waypoint.t > currentOffset)
        || (waypoint.b > currentOffset && waypoint.b < currentOffset + windowHeight)) {
      waypoint.func();        // execute the callback ...
      waypoints.splice(i, 1); // ... and remove the waypoint
    }
  }
};

$(window).on('scroll', checkWaypoints);
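The boundary condition itself can be isolated into a small, pure predicate. The following standalone sketch (with a hypothetical function name) captures exactly the test performed inside the loop:

```javascript
// Sketch of the waypoint visibility test: a waypoint is "reached" when its
// top or bottom edge lies inside the current viewport window.
// t = element top, b = element bottom (both in px from the document top).
function isWaypointInViewport(waypoint, scrollOffset, viewportHeight) {
  var viewTop = scrollOffset;
  var viewBottom = scrollOffset + viewportHeight;
  var topVisible = waypoint.t > viewTop && waypoint.t < viewBottom;
  var bottomVisible = waypoint.b > viewTop && waypoint.b < viewBottom;
  return topVisible || bottomVisible;
}
```

Keeping the predicate separate from the jQuery scroll handling also makes it trivial to unit-test without a browser.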

How to Dynamically Adjust Background Images to Mobile Viewports with CSS

Problem 2: The GIF with the cat was too wide for the mobile view of the page, which made horizontal scrolling necessary. But: Horizontal scrolling is pretty uncool!


This problem was a bit trickier than the previous one, mostly because it involved CSS. When working with <img> tags, we can simply give them a max-width of 100% and omit the height property, so that the correct aspect ratio is automatically retained. However, since I use material design cards, I had to deal with <div> elements and the CSS background-image property. Unfortunately, those don’t know which height they must have unless we tell them. Say, for instance, the animated GIF we’re dealing with is 400 pixels wide and 225 pixels high. Then, we need the following structure according to material design cards:

div.cat-card.mdl-card
  div.mdl-card__title
  div.mdl-card__supporting-text Found at
    a.mdl-button.mdl-button--colored.mdl-js-button.mdl-js-ripple-effect(href='#{}', target='_blank')
      | /#{card.caption}

First, we have to give the container <div> element a width of 400 pixels and a max-width of 90% (to give it some space to the left and right on small screens), but we make no statement about its height:

.cat-card.mdl-card {
  display: inline-block;
  width: 400px;
  max-width: 90%;
}

The height of the container then must be determined by the inner <div> element that actually has the background-image property. In order to do so dynamically, it needs a padding-top value that reflects the aspect ratio of the background image. In our case, that would be 225 / 400 = 0.5625 = 56.25%.
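The 56.25% above is just the image’s height-to-width ratio expressed as a percentage; as a small sketch (the helper name is illustrative, not part of the site’s code):

```javascript
// Sketch: derive the padding-top percentage for an intrinsic-ratio box
// from the background image's dimensions (400x225 yields the 56.25% above).
function paddingTopPercent(imgWidth, imgHeight) {
  return (imgHeight / imgWidth * 100) + '%';
}
```

Note that any image with the same 16:9 aspect ratio, e.g., 1920×1080, yields the same percentage.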

.cat-card > .mdl-card__title {
  background: url('/images/gif/cat.gif') center / cover;
  color: #fff;
  padding-top: 56.25%;
}

Now, since the padding of the inner <div> is relative to the actual width of the container <div>, the height of the container automatically adjusts to retain the correct aspect ratio of the background image. Go check out CodePen, where I had first sketched the solution to this. Yet, we need one more piece of the puzzle, which goes into the header of the HTML page:

<meta name="viewport" content="width=device-width, initial-scale=1.0" />

Aaand done! Enjoy—both the website and the source code, which is also available on GitHub!

ICYMI: This is a motherfucking website and I wrote a motherfucking conference article about it

(Disclaimer: the motherfucking website was not made by me!) The post in which I review the motherfucking website and propose some changes to make it even more perfect is the most widely read post on my blog by far. To date, it has received 8,715 views, with the front page of my blog in second place at 884 views. Most of the organic traffic I receive from search engines ends up on that very article.

What you probably haven’t noticed yet: Based on the blog post, I also wrote a short paper about my review and the proposed changes, which was accepted at the 2014 International Conference on Web Engineering (ICWE) and presented during its poster session. My conference article has now been published in the Lecture Notes in Computer Science series by Springer and is also available online. I think this is a nice paper to cite in your work, just for the sake of citing it. I mean … who wouldn’t want to cite a paper about a motherfucking website? 😉

Speicher, Maximilian. “Paving the Path to Content-Centric and Device-Agnostic Web Design.” In Web Engineering, pp. 532-535. Springer International Publishing, 2014.

P.S.: In case you wonder why I had the more or less stupid idea to actually write a conference article about the motherfucking website: I was slightly inspired by the following conversation:

4 Submissions accepted at International Conference on Web Engineering (ICWE)

At the end of February, I submitted four contributions to the 14th International Conference on Web Engineering: two full papers, one demo, and one poster. All four submissions were accepted and will be presented at the conference, which is to be held in Toulouse from July 1 to July 4. In the following, I’ll give a quick overview of the accepted papers. A more detailed explanation of my current research will be the subject of one or two separate articles.

  • Maximilian Speicher, Sebastian Nuck, Andreas Both, Martin Gaedke: “StreamMyRelevance! Prediction of Result Relevance from Real-Time Interactions and its Application to Hotel Search” — This full paper is based on Sebastian Nuck’s Master thesis. He developed a system for processing user interactions collected on search results pages in real time and predicting the relevance of individual search results from these.
  • Maximilian Speicher, Andreas Both, Martin Gaedke: “Ensuring Web Interface Quality through Usability-based Split Testing” — This full paper proposes a new approach to split testing that is based on the actual usability of the investigated web interface rather than pure conversion maximization. We have trained models for predicting usability from user interactions and from these have also derived additional interaction-based heuristics for comparing search results pages.
  • Maximilian Speicher, Andreas Both, Martin Gaedke: “WaPPU: Usability-based A/B Testing” — This demo accompanies our paper about Usability-based Split Testing. The WaPPU tool builds upon this new concept and demonstrates how usability can be predicted from user interactions using automatically learned models.
  • Maximilian Speicher: “Paving the Path to Content-centric and Device-agnostic Web Design” — This poster is based on one of my previous posts. It provides a review of the motherfucking website, which satirically claims to be a perfect website. Based on current research, we suggest improvements to the site that follow a strictly content-centric and device-agnostic approach.

My PhD research is supervised by Prof. Dr.-Ing. Martin Gaedke (VSR Research Group, Chemnitz U of Technology) and Dr. Andreas Both (R&D, Unister GmbH) and funded by the ESF and the Free State of Saxony.

This is a motherfucking website. And it’s not completely fucking perfect

(Disclaimer: the motherfucking website was not made by me!)
The motherfucking website.

I recently stumbled upon the motherfucking website, which is the most pragmatic and minimalistic approach to website creation I’ve seen so far (except for plain TXT files, of course). The first two lines read “This is a motherfucking website. And it’s fucking perfect.” The site has attracted quite some attention on different social media platforms, with lots of people stating that the guy who created it is absolutely right.

Basically, he says that the site is perfect because it loads fast, is accessible to everyone, is dead responsive (without using media queries), has content (i.e., it gets its “fucking point across”), uses semantic HTML5 tags etc. Most of the statements he makes are indeed right and there are undoubtedly lots of web designers out there who should take them to heart. The creator’s final note is that the site is “fucking satire”. His aim is to convey that websites are not broken by default. Instead, developers break them.

“Good design is as little design as possible.”
— Dieter Rams

Based on the facts that lots of people like the idea behind the site and that the creator makes a whole bunch of true points, I want to point out three things that he is not completely right about or that he passed over. These would bring a site of this kind closer to perfection, assuming that we follow a text-centric and device-agnostic approach.

  1. Line length: Text lines on the site span across the whole width of the viewport. This is particularly disadvantageous on large screens. An optimally readable line should contain only ~66 characters, which corresponds to ~30 em.1 To reduce the amount of scrolling, this can (updated from “must” on July 23, 2016) be augmented with a multi-column layout and pagination.2
  2. Navigation: The site does not make statements about navigation. However, a website featuring larger amounts of content would require a navigation bar, optimally fixed to the top of the viewport.3 This navigation bar should adapt to smaller screens without device- or resolution-specific break points (= device-agnostic design).
  3. Aesthetics: The site follows a completely technical/functional point of view. Yet, this does not fully cover perfection from the users’ perspective. Particularly, research has found that visual aesthetics are a crucial factor concerning user satisfaction.4 Even without excessive use of graphics, this can be achieved by leveraging more sophisticated color schemes and typography rather than just black/white coloring and default fonts.
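As a side note on point 1: the ~66-characters-to-~30-em correspondence implies an average glyph width of roughly 0.45 em. A small illustrative sketch (my own helper, not from any framework) of turning a character budget into an em-based max-width:

```javascript
// Sketch: convert a desired characters-per-line budget into an em-based
// max-width, using the ~0.45 em average glyph width implied by the
// "66 characters correspond to ~30 em" rule of thumb above.
var AVG_CHAR_WIDTH_EM = 30 / 66; // about 0.45 em per character

function maxWidthEm(charsPerLine) {
  return Math.round(charsPerLine * AVG_CHAR_WIDTH_EM) + 'em';
}
```

The resulting value would simply be assigned to the text container’s max-width, keeping line length readable independent of screen size.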

Keeping the above points in mind, the motherfucking website is for sure a good starting point for “back-to-the-roots”, minimalistic, and device-agnostic web design.

Update (November 28, 2014): Please also pay attention to the follow-up posts based on this article, as well as the conference article published by Springer.

2 Michael Nebeling, Fabrice Matulic, Lucas Streit, and Moira C. Norrie (2011): “Adaptive Layout Template for Effective Web Content Presentation in Large-Screen Contexts”. In: Proc. DocEng.
3 Cf.
4 Talia Lavie and Noam Tractinsky (2004): “Assessing dimensions of perceived visual aesthetics of websites”. International Journal of Human-Computer Studies, 60(3), pp. 269–298.