The demo video about usability-based A/B testing I created for the 2014 International Conference on Web Engineering is now featured in the media center of the VSR research group at Chemnitz University of Technology. The chair of VSR is Prof. Dr.-Ing. Martin Gaedke, who is the primary advisor of my PhD thesis.
The video above demonstrates the use of the WaPPU* service, which implements the novel principle of usability-based A/B testing. The underlying concept is that on one variation of an interface (A), we train a model from collected user interactions and an automatically presented usability questionnaire. Then, the other variation (B) involved in the A/B test uses this model to infer its usability from interactions alone.
Say that on interface A, we click within a particular element (#content) and then rate the site’s usability as good using the questionnaire. We reload the page, click outside that element, and give a bad usability rating. The WaPPU service automatically trains a model that, simply speaking, knows the following:
                   / click    --- usability = good
element #content
                   \ no click --- usability = bad
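To make this concrete, here is a minimal, hypothetical sketch of the principle in Python. It is not WaPPU’s actual implementation (WaPPU is a web service); the classifier, feature encoding, and data are assumptions chosen purely for illustration of training on interface A and inferring for interface B.

```python
# Hypothetical sketch of usability-based A/B testing; not the actual WaPPU code.
from sklearn.tree import DecisionTreeClassifier

# Interaction features observed on interface A:
# 1 = click inside #content, 0 = click outside #content.
interactions_a = [[1], [0], [1], [0]]
# Usability labels from the questionnaire shown on interface A.
ratings_a = ["good", "bad", "good", "bad"]

# Train a model that maps interaction features to usability ratings.
model = DecisionTreeClassifier().fit(interactions_a, ratings_a)

# On interface B no questionnaire is shown; usability is inferred
# from the observed interactions alone.
interactions_b = [[0], [1], [0]]
inferred_b = model.predict(interactions_b)
print(list(inferred_b))  # ['bad', 'good', 'bad']
```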
This model is instantly available to interface B. So if we now visit B and click outside of #content, WaPPU automatically infers a bad usability rating. The ratings of both variations of the investigated interface are available in real time in a dashboard provided by our tool. This dashboard also features a traffic light that indicates whether one interface is significantly better or worse than the other, based on a Mann–Whitney U test.
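As a rough illustration of the significance check behind such a traffic light, a Mann–Whitney U test on two samples of ratings could look as follows. The data, rating scale, and threshold are made up for the example; this is a SciPy-based sketch, not WaPPU’s own code.

```python
# Hypothetical sketch of the significance check behind the traffic light.
from scipy.stats import mannwhitneyu

# Inferred usability ratings per variation, encoded numerically
# (e.g., 1 = bad ... 5 = good); values are invented for illustration.
ratings_a = [4, 5, 4, 3, 5, 4, 4]
ratings_b = [2, 3, 2, 1, 3, 2, 2]

# Two-sided Mann-Whitney U test: do the two rating distributions differ?
statistic, p_value = mannwhitneyu(ratings_a, ratings_b, alternative="two-sided")

# A traffic light could switch to green/red when p < 0.05 and stay neutral otherwise.
print(statistic, p_value, "significant" if p_value < 0.05 else "not significant")
```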
* “Was that Page Pleasant to Use?”