Mystical guidelines for creating great user experiences

Written by: Tal Bloom

The Jewish Torah teaches that the Creator created our world through ten utterances–for example, “let there be light.”

The Jewish mystical tradition explains that these utterances correspond with ten stages in the process of creation. Every creative process in the world ultimately follows this progression, because it is really a part of the continual unfolding of the world itself, in which we are co-creators.

This article aims to present an overview of the mystical process of creation and the principle of co-creation, and to illustrate how they can guide bringing digital product ideas into reality–although it’s easy enough to see how this could translate to other products and services–in a way that ensures a great user experience, makes our creative process more natural, and makes our outcomes more fruitful.

And a note as you read: In Jewish mysticism, the pronoun “He” is used when referring to the transcendent aspect of the Creator that is the source of creation, and “She” when referring to the immanent aspect that pervades creation, because they are characterized by giving and receiving, respectively. Because this article discusses the transcendent aspect, the masculine pronoun is used.

The process of creation

Ten stages, four realms

The order of creation

The ten stages in the process of creation progressively create four realms.

Three triads create three spiritual realms, and the tenth stage creates our tangible reality, which is the culmination of creation. It is understood that creation becomes increasingly defined and tangible as the creative power flows from one realm to the next. When we participate in creation, our efforts naturally follow the same progression.

The four realms are traditionally referred to by Hebrew terms, so to make things easier I’ll refer to them using a designer’s day-to-day terms–ideation, design, implementation, and operation.

Before we dive in though, one more thing to note is that within each realm there is a three-stage pattern whereby the creation first becomes revealed, then delineated, and finally consolidated in a state of equilibrium. Hang in there, you’ll shortly see what this means.

The realm of ideation

In the beginning there was only the Creator, alone.

In the first three stages of creation, He simply created the possibility for a creation. This corresponds with the generation of business ideas.

Just as, before there was anything else, the thought to create the world had to arise in the Creator’s mind, so too the starting point of all products and services is the emergence of an idea–a simple and common example of which is “a digital channel will help our customers connect with us.”

Next, the seed sprouts a series of details to define it. In creation, the details included the fact that creation will be limited and that there is an order to its unfolding. In business, the idea undergoes an extrapolation to define its reach and scope. For example, “the digital channel will need product information, a shopping cart, a customer database, and a social function for customers’ reviews.”

The third stage in the process of creation is the preparation for bridging the gap between the abstract realm of potential, where the Creator is still effectively alone, and a new reality of seemingly separate creations. Correspondingly, in business the third step requires bringing the idea from a place of theory to a point where it can be shared with others, such as presenting to decision makers and stakeholders, or briefing agents and consultants.

The realm of design

Now that it’s possible to distinguish between the Creator and His creation, the next three stages serve to coalesce the homogeneous creation into spiritual templates. This corresponds with the conceptual design of how the business idea may be realized.

The first stage in this realm is an expression of the Creator’s kindness, as He indiscriminately bestows life to all of creation. Correspondingly, the design process begins with telling the end-to-end story of the idea, from the user discovering the new product or service through to their consummate pleasure in using it, without our being too concerned with practical considerations. This could be captured in business process diagrams, but human-centred user journey maps or storyboards have proven more natural.

Next, the Creator expressed His attribute of judgement to establish the boundaries of His evolving creations. In business, we begin addressing practical considerations, such as time, budget, and technical constraints to define the boundaries of the concept. This generally involves analyzing the desired story to establish the finite set of practical requirements for realizing it. For digital products, the requirements are often closely followed by a business case, an information architecture, and a system architecture.

As mentioned, the third stage is where a consolidated state of equilibrium is reached to form the output of the realm. In creation, mystics describe the culmination of this realm as being sublime angels who are only identified by their function–for example to heal or to enact justice–and consider them to be the templates for these attributes, as they become manifest in the lower beings.

Similarly, we consolidate the business idea by sketching or prototyping how we envision it will become manifest. Typically we deliver low-fidelity interaction, product or service designs, which are often accompanied by a business plan and functional and technical specifications.

The realm of implementation

Using the spiritual templates, the next three stages serve to create individualized spiritual beings. This corresponds with implementing our conceptual designs into an actual digital product.

In creation, the life-force is now apportioned according to the ability of the created being to receive, similar to pouring hot liquid material into a statue mould. Correspondingly, we apply branding, colors, and shapes to bring the blueprint to life–the result being high-fidelity visual designs of what the digital product will actually look and feel like.

Next, the life-force solidifies to form the individual spiritual being, similar to when the hot liquid cools and the mould can be removed. This corresponds with slicing the visual designs to develop the front-end, developing the database, and integrating the back-end functionality.

The culmination of this realm is often depicted in artwork and poetry as being angels that have human form, wings, and individual names. They are, however, still spiritual beings, not physical beings like us. Correspondingly, at the final stage of implementation, there exists a fully functional digital product…in a staging environment.

The realm of operation

The culmination of the process of creation is our tangible reality, which is composed of physical matter and its infused life-force (part of which is our physical bodies infused with our souls). Bridging the infinitely large gap between the spiritual and physical realms is often considered the most profound step in the process of creation; paradoxically, it is also the smallest conceptual distance–between a spiritual being that looks and functions like a physical being, and an actual physical being.

Correspondingly, launching a digital product into the live public domain can be the most daunting and exciting moment, yet it can be as easy as pressing a button to redirect the domain to point to the new web-server or to release the app on the app store.

At this point the Creator is said to have rested, observing His creation with pleasure. Similarly, it can be very satisfying to step back at this point and soak in how our initial seed of an idea has finally evolved into an actual operational reality–which will hopefully fulfill our business goals!

The principle of co-creation

User feedback

By now we can appreciate why there seems to be a natural and logical sequence for the activities typically involved in creating a new product or service. Jewish mysticism, however, unequivocally adds that we are co-creators with the Creator. That is: We, created beings, are able to influence what the end product of creation will be, just like users can influence our products and services when we engage with them during the creation process.

Jewish mysticism relates that the Creator consults with His retinue of angels to make decisions regarding His creation. This corresponds with our soliciting user input to validate the direction of our creative efforts, such as:

  • during ideation, conducting research to ensure the ideas indeed meet users’ needs and desires;
  • during design, conducting user validation to ensure the sensibility and completeness of the story, correlation of the framework with users’ mental models, and usability of the blueprints; and
  • during implementation, conducting user testing to help smooth out any remaining difficulties or doubts in the user experience.

We are also taught that the Creator is monitoring human activity and makes adjustments accordingly. Similarly, at the stage of operation, it’s good practice to steer the finished product to better achieve business goals by monitoring the usage analytics.

Finally, we’re taught that the Creator desires our prayers beseeching Him to change our reality, similar to how we’ve come to understand the most potent consideration is user feedback on the fully operational product.

Continual improvement

On the surface it still seems as though the process of creation is a cascading “waterfall,” but we see that our world is constantly evolving–for example, more efficient transport, more sophisticated communication, more effective health maintenance–seemingly through our learning from experience to improve our efforts. In a simple sense, this can be likened to the “agile” feedback loop where learnings from one round of production are used to influence and improve our approach to the next round.

Jewish mysticism teaches, however, that under the surface our genuine efforts below arouse a magnanimous bestowal of ever-increasingly refined life-force into the creation. This can be understood as similar to a pleased business owner allocating increasingly more budget to continue work on an evidently improving product or service.

These days, it is becoming more common for businesses to implement a continuous improvement program, whereby an ongoing budget is allocated for this purpose. The paradigm of continually looking for ways to more effectively meet user needs and achieve business goals–such that they can be fed back into the process for fleshing out the idea, designing, and then implementing–perfectly parallels the reality that we are co-creating an ever more refined world using ever-deepening resources.

But how can a compounding improvement continue indefinitely? Jewish mysticism explains that as the unlimited creative power becomes exponentially more revealed within our limited reality, there will eventually come a grand crescendo with the revelation of the Creator’s essential being, which is neither unlimited nor limited, but both simultaneously. This will be experienced as the messianic era–“In that era, there will be neither famine or war, envy or competition, for good will flow in abundance and all the delights will be freely available as dust. The occupation of the entire world will be solely to know their Creator.”1

Users front of mind at every stage

Before we get there, however, we can see from the above how every stage of the creative process has a unique effect on the user experience of the end product or service, so it bodes well to strive to ensure that:

  1. The initial business idea meets an actual need or fulfils an actual desire of our users
  2. The concept is designed to function according to the user’s understanding and expectations
  3. The product or service is implemented in a way that is appealing and easy to use
  4. The operating product or service is continually improved to meet users’ evolving needs

By knowing each stage and each skill set’s proper place in the sequence and how to incorporate our learnings and user sentiment, we can achieve a more natural creative process for ourselves, our peers, and our clients and ensure the end product or service offers the best possible user experience, indefinitely.

| Realm | Creative activity | Co-creation activity | Output |
| --- | --- | --- | --- |
| Ideation | Innovation brainstorms, idea prioritization | User research | User pain points, idea pitch/brief |
| Design | Business analysis, requirements analysis, card sorting, interaction design | User focus groups, user interviews, tree testing, user walkthroughs | User journeys/storyboards, product requirements, information architecture, wireframes/prototype |
| Implementation | Visual design, front-end development, back-end development, content preparation | User testing | Staging product |
| Operation | Product launch, product maintenance | Analytics, user feedback/surveys | Live product, ideas for improvement |

References and further reading

  1. Mishneh Torah, Sefer Shoftim, Melachim uMilchamot, Chapter 12, Halacha 5, by the Rambam, Rabbi Moses ben Maimon

A Beginner’s Guide to Web Site Optimization—Part 2

Written by: Charles Shimooka

In the previous article we talked about why site optimization is important and presented a few important goals and philosophies to impart to your team. I’d like to switch gears now and talk about more tactical stuff–namely, process.

Optimization process

Establishing a well-formed, formal optimization process is beneficial for the following reasons.

  1. It organizes the workflow and sets clear expectations for completion.
  2. It establishes quality-control standards to reduce bugs and errors.
  3. It adds legitimacy to the whole operation, so that if stakeholders question it, you can explain the logic behind the process.

At a high level, I suggest a weekly or bi-weekly optimization planning session to perform the following activities:

  1. Review ongoing tests to determine if they can be stopped or considered “complete” (see the boxed section below). For tests that have reached completion, the possibilities are:
    1. There is a decisive new winner. In this case, plan how to communicate and launch the change permanently to production.
    2. There is no decisive winner or the current version (control group) wins. In this case, determine if more study is required or if you should simply move on and drop the experiment.
  2. Review data sources and brainstorm new test ideas.
  3. Discuss and prioritize any externally submitted ideas.
How do I know when a test has reached completion?
Completion criteria are a somewhat tricky topic and seemingly guarded industry secrets. These define the minimum requirements that must be true in order for a test to be declared “completed.” My personal sense from reading/conferences is that there are no widely-accepted standards and that completion criteria really depend on how comfortable your team feels with the uncertainty that is inherent in experimentation. We created the following minimum completion criteria for my past team at DIRECTV Latin America. Keep in mind that these were bare-bones minimums, and that most of our tests actually ran much longer.

  1. Temporal: Tests must run for a minimum of two weeks to account for variation between days of the week.
  2. Statistical confidence: We used a 90-95% confidence level for most tests.
  3. Stability over time: Variations must maintain their positions relative to each other for at least one week.
  4. Total conversions: Minimum of 200 total conversions.

For further discussion of the rationale behind these completion criteria, please see Best Practices When Designing and Running Experiments later in this article.
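
To make these criteria concrete, here is a minimal sketch of how a team might encode them as a pre-flight checklist. The function and its thresholds mirror the minimums above; the names and dates are hypothetical:

```python
from datetime import date

def test_is_complete(start, today, conversions, confidence, stable_days):
    """Check minimum completion criteria in the spirit of the list above."""
    ran_two_weeks = (today - start).days >= 14   # 1. temporal
    significant   = confidence >= 0.90           # 2. statistical confidence
    stable        = stable_days >= 7             # 3. stability over time
    enough_data   = conversions >= 200           # 4. total conversions
    return all([ran_two_weeks, significant, stable, enough_data])

# Example: a test started June 1st, reviewed July 1st
print(test_is_complete(date(2014, 6, 1), date(2014, 7, 1),
                       conversions=260, confidence=0.93, stable_days=9))  # True
```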

The creation of a new optimization test may follow a process that is similar to your overall product development lifecycle. I suggest the following basic structure:

Abbreviated optimization process diagram.

The following diagram shows a detailed process that I’ve used in the past.

A detailed process that the author has used in the past.

Step 1: Data analysis and deciding what to test

Step one in the optimization process is figuring out where to first focus your efforts. We used the following list as a loose prioritization guideline:

  1. Recent product releases, or pages that have not yet undergone optimization.
  2. High “value” pages:
    • High revenue (e.g., shopping cart checkout pages, detail pages of your most expensive products).
    • High traffic (e.g., homepage, login/logout).
    • Highly “strategic” pages (those that are highly visible internally or that management considers important).
  3. Poorly performing pages.

Step 2: Brainstorm ideas for improvement

How to improve page performance is a topic as large as the field of user experience itself, and definitely beyond the scope of this article. One might consider improvements in copywriting, form design, media display, page rendering, visual design, accessibility, browser targeting… the list goes on.

My only suggestion for this process is to make it collaborative – harness the power of your team to come up with new ideas for improvement, not only including designers in the brainstorming sessions, but also developers, copywriters, business analysts, marketers, QA, etc… Good ideas can (and often do) come from anywhere.

Adaptive Path has a great technique of collaborative ideation that they call sketchboarding, which uses iterative rounds of group sketching.

Step 3: Write the testing plan

An Optimization Testing Plan acts as the backbone of every test. At a high level, it is used to plan, communicate, and document the history of the experiment, but more importantly, it fosters learning by forcing the team to clearly formulate goals and analyze results.

A good testing plan should include:

  1. Test name
  2. Description
  3. Goals
  4. Opportunities (what gains will come about if the test goes well)
  5. Methodology
    • Expected dates that the test will be running in production.
    • Resources (who will be working on the test).
    • Key metrics to be tracked through the duration of the experiment.
    • Completion criteria.
    • Variations (screenshots of the different designs that you will be showing your site visitors).

Here’s a sample optimization testing plan to get you started.

Step 4: Design and develop the test

Design and development will generally follow an abbreviated version of your organization’s product development lifecycle. Since test variations are generally simpler than full-blown product development projects, I try to use a lighter, more agile process.

If you do cut corners, be sure to skimp only on things like process artifacts or documentation, not on design quality. For example, be sure to perform some basic usability testing and user research on your variations. This small investment will create better candidates that will be more likely to boost conversions.

Step 5: Quality assurance

When performing QA on your variations, be as thorough as you would with any other code release to production. I recommend at least functional, visual, and analytics QA. Even though many tools allow you to manipulate your website’s UI on the fly using interfaces that immediately display the results of your changes, the tools are not perfect and any changes that you make might not render perfectly across all browsers.

Keep in mind that optimization tools provide you one additional luxury that is not usually possible with general website releases – that of targeting. You can decide to show your variations to only the target browsers, platforms, audiences, etc… for which you have performed QA. For example, let’s imagine that your team has only been able to QA a certain A/B test on desktop (but not mobile) browsers. When you actually configure this test in your optimization tool, you can decide to only display the test to visitors with those specific desktop browsers. If one of your variations has a visual bug when viewed on mobile phones, for example, that problem should not affect the accuracy of your test results.

Step 6: Run the Test

After QA has completed and you’ve decided how to allocate traffic to the different designs, it’s time to actually run your test. The following are a few best practices to keep in mind before pressing the “Go” button.

1.  Variations must be run concurrently

This first principle is almost so obvious that it goes without saying, but I’ve often heard the following story from teams that do not perform optimization: “After we launched our new design, we saw our [sales, conversions, etc…] increase by X%. So the new design must be better.”

The problem with this logic is that you don’t know what other factors might have been at play before and after the new change launched. Perhaps traffic to that page increased in either quantity or quality after the new design released. Perhaps the conversion rate was on the increase anyway, due to better brand recognition, seasonal variation, or just random chance. Due to these and many other reasons, variations must be run concurrently and not sequentially. This is the only way to hold all other factors consistent and level the playing field between your different designs.

2.  Always track multiple conversion metrics

One A/B test that we ran on the movie detail pages of the DIRECTV Latin American sites was the following: we increased the size and prominence of the “Ver adelanto” (View trailer) call to action, guessing that if people watched the movie trailer, it might excite them to buy more pay-per-view movies from the web site.

Our initial hunch was right, and after a few weeks we saw that pay-per-view purchases were 4.8% higher with this variation than with the control. This increase would have resulted in a revenue boost of about $18,000/year in pay-per-view purchases. Not bad for one simple test. Fortunately though, since we were also tracking other site goals, we noticed that this variation also decreased purchases of our premium channel packages (i.e., HBO and Showtime packages) by a whopping 25%! This would have decreased total revenue by a much greater amount than the uptick in pay-per-views, and because of this, we did not launch this variation to production.

It’s important to keep in mind that changes may affect your site in ways that you never would have expected. Always track multiple conversion metrics with every test.

3.  Tests should reach a comfortable level of statistical significance

I recently saw a presentation in which a consultant suggested that preliminary tests on email segmentation had yielded some very promising results.

Chart showing conversion rates per 1000 emails sent.

In the chart above, the last segment of users (those who had logged in more than four times in the past year) had a conversion rate of 0.0139% (0.139 upgrades per 1,000 emails sent). Even though a conversion rate of 0.0139% is dismally low by any standard, according to the consultant it represented an increase of 142% compared to the base segment of users, and thus a very promising result.

Aside from the obvious lack of actionable utility (does this study suggest that emails be sent only to users who have logged in more than four times?), the test contained another glaring problem. If you look at the “Upgrades” column at the top of the spreadsheet, you will see that the results were based on only five individuals purchasing an upgrade. Five individuals out of almost eighty-four thousand emails sent! If, by pure chance, just one more person had purchased an upgrade in any of the segments, it could have completely changed the study’s implications.

While this example is not actually an optimization test but rather just an email segmentation study, it does convey an important lesson: don’t declare a winner for your tests until it has reached a “comfortable” level of significance.

So what does “comfortable” mean? Science reserves the term “significant” for results at a 95% confidence level and “highly significant” for results at a 99% confidence level when publishing. Even at these levels, there is still a 5% and 1% chance, respectively, of your conclusions being wrong. Also keep in mind that higher confidence levels require more data (i.e., more website traffic), which translates into longer test durations. Because of these factors, I would recommend less stringent standards for most optimization tests–somewhere around 90-95% confidence, depending on the gravity of the situation (higher confidence levels for tests with more serious consequences or implications).

Ultimately, your team must decide on confidence intervals that reflect a compromise between test duration and results certainty, but I would propose that if you perform a lot of testing, the larger number of true winners will make up for the fewer (but inevitable) false positives.
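
For intuition, here is a rough sketch of the arithmetic behind such confidence figures, using a one-sided two-proportion z-test with made-up numbers. Real optimization tools are more sophisticated (and some use Bayesian methods instead), so treat this as illustrative only:

```python
from math import erf, sqrt

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """Approximate confidence that variation B's true rate exceeds A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))                     # normal CDF of z

# 2.00% vs. 2.45% conversion on 10,000 visitors each
print(f"{confidence_b_beats_a(200, 10_000, 245, 10_000):.1%}")  # ~98.5%
```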

4.  The duration of your tests should account for any natural variations (such as between weekdays/weekends) and be stable over time

In a 2012 article on AnalyticsInspector.com, Jan Petrovic brings to light an important pitfall of ending your tests too early. He discusses an A/B test that he ran for a high-traffic site in which, after only a day, the testing tool reported that a winning variation had increased the primary conversion rate by an impressive 87%, at a reported 100% confidence level.

Jan writes, “If we stopped the test then and pat each other on the shoulder about how great we were, then we would probably make a very big mistake. The reason for that is simple: we didn’t test our variation on Friday or Monday traffic, or on weekend traffic. But, because we didn’t stop the test (because we knew it was too early), our actual result looked very different.”

Chart showing new design results over time.

After continuing the test for four weeks, Jan saw that the new design, although still better than the control, had leveled out to a more reasonable 10.49% improvement since it had now taken into account natural daily variation. He writes, “Let’s say you were running this test in checkout, and on the following day you say to your boss something like ‘hey boss, we just increased our site revenue by 87.25%’. If I was your boss, you would make me extremely happy and probably would increase your salary too. So we start celebrating…”

Jan’s fable continues with the boss checking the bank account at the end of the month, and upon seeing that sales had actually not increased by the 87% that you had initially reported, reconsiders your salary increase.

The moral of the story: Consider temporal variations in the behavior of your site visitors, including differences between weekday and weekend or even seasonal traffic.

Step 7: Analyze and Report on the Results

After your test has run its course and your team has decided to press the “stop” button, it’s time to compile the results into an Optimization Test Report. The Optimization Test Report can be a continuation of the testing plan from Step 3, but with the following additional sections:

  1. Results
  2. Discussion
  3. Next steps

It is helpful to include graphs and details in the Results section so that readers can see trends and analyze the data themselves. This will add credibility to your studies and hopefully get people invested in the optimization program.

The Discussion section is useful for explaining details and hypothesizing about the reasons for the observed results. This will force the team to think more deeply about user behavior and is an invaluable step towards designing future improvements.

Conclusion

In this article, I’ve presented a detailed and practical process that your team can customize to its own use. In the next and final article of this series, I’ll wrap things up with suggestions for communication planning, team composition, and tool selection.

A Beginner’s Guide to Web Site Optimization—Part 1

Written by: Charles Shimooka

Web site optimization, commonly known as A/B testing, has become an expected competency among many web teams, yet there are few comprehensive and unbiased books, articles, or training opportunities aimed at individuals trying to create this capability within their organization.

In this series, I’ll present a detailed, practical guide on how to build, fine-tune, and evolve an optimization program. Part 1 will cover some basics: definitions, goals and philosophies. In Part 2, I’ll dive into a detailed process discussion covering topics such as deciding what to test, writing optimization plans, and best practices when running tests. Part 3 will finish up with communication planning, team composition, and tool selection. Let’s get started!

The basics: What is web site optimization?

Web site optimization is an experimental method for testing which designs work best for your site. The basic process is simple:

  1. Create a few different design options, or variations, of a page/section of your website.
  2. Split up your web site traffic so that each visitor to the page sees either your current version (the control group) or one of these new variations.
  3. Keep track of which version performs better based on specific performance metrics.

The performance metrics are chosen to directly reflect your site’s business goals and might include things like how many product purchases were made on your site (a sales goal), how many people signed up for the company newsletter (an engagement goal), or how many people watched a self-help video in your FAQ section (a customer service goal). Performance metrics are often referred to as conversion rates: the percentage of visitors who performed the action being tested out of the total number of visitors to that page.
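
As an illustrative sketch (not any particular tool’s implementation), traffic is often split by hashing a visitor ID, so a returning visitor always sees the same variation, and a conversion rate is simply conversions divided by visitors:

```python
import hashlib

VARIATIONS = ["control", "variation_b", "variation_c"]  # hypothetical test

def assign_variation(visitor_id: str) -> str:
    """Deterministic bucketing: the same visitor always gets the same variation."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return VARIATIONS[int(digest, 16) % len(VARIATIONS)]

def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of visitors who performed the action being tested."""
    return conversions / visitors * 100

print(assign_variation("visitor-42"))         # stable across repeat visits
print(f"{conversion_rate(150, 5_000):.1f}%")  # 3.0%
```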

Optimization can be thought of as one component in the web site development ecosystem. Within optimization, the basic process is to analyze data, create and run tests, then implement the winners of those tests.

Visual of where optimization fits in site development
Optimization can be thought of as one component in the website development ecosystem.

A/B vs. multivariate

There are two basic types of optimization tests: A/B tests (also known as A/B/N tests) and multivariate tests.

A/B tests

In an A/B test, you run two or more fixed design variations against each other. The variations might differ in only one individual element (such as the color of a button or swapping out an image for a video) or in many elements all at once (such as changing the entire page layout and design, changing a long form into a step-by-step wizard, etc…).

Three buttons for testing, each with different copy.
Example 1: A simple A/B/N test trying to determine which of three different button texts drives more clicks.

Visuals showing page content in different layouts.
Example 2: An A/B test showing large variations in both page layout and content.

 

In general, A/B tests are simpler to design and analyze and also return faster results since they usually contain fewer variations than multivariate tests. They seem to constitute the vast majority of manual testing that occurs these days.

Multivariate tests

Multivariate tests vary two or more attributes on a page and test which combination works best. The key difference between A/B and multivariate tests is that the latter are designed to tease apart how two or more dimensions of a design interact with each other and lead to that design’s success. In the example below, the team is trying to figure out what combination of button text and color will get the most clicks.

Buttons with both different copy and different colors
Example 1: A simple multivariate test with 2 dimensions (button color and button text) and 3 variations on each dimension.

The simplest form of multivariate testing is called the full-factorial method, which involves testing every combination of factors against each other, as in the example above. The biggest drawback of these tests is that they generally take longer to get statistically significant results since you are splitting the same amount of site traffic between more variations than A/B tests.
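
To see how quickly the variation count grows, here is a quick sketch enumerating a full-factorial test like the one above; the copy and colors are hypothetical:

```python
from itertools import product

button_texts  = ["Sign up free", "Start now", "Join today"]  # hypothetical copy
button_colors = ["green", "orange", "blue"]

# Full-factorial: every text paired with every color = 3 x 3 = 9 variations,
# each receiving only one ninth of the page's traffic.
variations = list(product(button_texts, button_colors))
for text, color in variations:
    print(f"{color} button reading '{text}'")
print(f"{len(variations)} combinations to test")
```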

Other fractional factorial methods use statistics to try and interpolate the results of certain combinations, thereby reducing the traffic needed to test every single variation. Many of today’s optimization tools allow you to play around with these different multivariate methods; just keep in mind that fractional factorial methods are often complex, named after deceased Japanese mathematicians, and require a degree in statistics to fully comprehend. Use at your own risk.

Why do we test? Goals, benefits, and rationale

There are many benefits to moving your organization to a more data-driven culture. Optimization establishes a metrics-based system for determining design success vs. failure, thereby allowing your team to learn with each test. No longer will people argue ad nauseam over design details. Cast away the chains of the HiPPO effect–in which the Highest Paid Person in the Office determines what goes on your site. Once you have established a clear set of goals and the appropriate metrics for measuring those goals, the data should speak as the deciding voice.

Optimization can also drastically improve your organization’s product innovation process by allowing you to test new product ideas at scale and quickly figure out which are good and which should be scrapped. In his article “How We Determine Product Success,” John Ciancutti of Netflix describes it this way:

“Innovation involves a lot of failure. If we’re never failing, we aren’t trying for something out on the edge from where we are today. In this regard, failure is perfectly acceptable at Netflix. This wouldn’t be the case if we were operating a nuclear power plant or manufacturing cars. The only real failure that’s unacceptable at Netflix is the failure to innovate.

So if you’re going to fail, fail cheaply. And know when you’ve failed vs. when you’ve gotten it right.”

Top three testing philosophies

1. Rigorously focus on metrics

I personally don’t subscribe to the philosophy that you should test every single change on your site. However, I do believe that every organization’s web strategies should be grounded in measurable goals that are mapped directly to your business goals.

For example, if management tells you that the web site should “offer the best customer service,” your job is to then determine which metrics adequately represent that conceptual goal. Maybe it can be represented by the total number of help tickets or emails answered from your site combined with a web customer satisfaction rating or the average user rating of individual question/answer pairs in your FAQ section. As Galileo supposedly said, “Measure what is measurable, and make measurable what is not so.”

Additionally, your site’s foundational architecture should allow, to the fullest extent possible, the measurement of true conversions and not simply indicators (often referred to as macro vs. micro conversions). For example, if your ecommerce site is only capable of measuring order submissions (or worse yet, leads), make it your first order of business to be able to track that order submission through to a true paid sale. Then ensure that your team always has an eye on these true conversions in addition to any intermediate steps and secondary website goals. There are many benefits to measuring micro conversion rates, but the work must be done to map them to a tangible macro conversion, or you run the risk of optimizing for a false conversion goal.
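
To make the macro-vs.-micro distinction concrete, here is a toy funnel with entirely hypothetical numbers, showing why an order-submission rate alone can mislead if you never tie it through to paid sales:

```python
visitors      = 10_000
order_submits = 400   # micro conversion (an indicator)
paid_sales    = 220   # macro conversion (tracked through to payment)

print(f"Micro rate (submits/visitors): {order_submits / visitors:.1%}")   # 4.0%
print(f"Macro rate (sales/visitors):   {paid_sales / visitors:.1%}")      # 2.2%
print(f"Submit-to-sale follow-through: {paid_sales / order_submits:.0%}") # 55%
```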

2. Nobody really knows what will win

I firmly believe that even the experts can’t consistently predict the outcome of optimization tests with anything close to 100% accuracy. This is, after all, the whole point of testing. Someone with good intuition and experience will probably have a higher win rate than others, but for any individual test, anyone can be right. With this in mind, don’t let certain members of the team bully others into design submission. When in doubt, test it out.

3. Favor a “small-but-frequent” release strategy

In other words, err on the side of only changing one thing at a time, but perform the changes frequently. This strategy will allow you to pinpoint exactly which changes are affecting your site’s conversion rates. Let’s look at the earlier A/B test example to illustrate this point.

Visuals showing page content in different layouts.
An A/B test showing large variations in both page layout and content.

Let’s imagine that your new marketing director decides that your company should completely overhaul the homepage. After a few months of work, the team launches the new “3-column” design (above-right). Listening to the optimization voice inside your head, you decide to run an A/B test, continuing to show the old design to just 10% of the site visitors and the new design to the remaining 90%.

To your team’s dismay, the old design actually outperforms the new one. What should you do? It would be difficult to simply scrap the new design in its entirety, since it was a project that came directly from your boss and the entire team worked so hard on it. There are most likely a number of elements of the new design that actually perform better than the original, but because you launched so many changes all at once, it is difficult to separate the good from the bad.

A better strategy would have been to have constantly optimized different aspects of the page in small but frequent tests to gradually evolve towards a new version. This process, in combination with other research methods, would provide your team with a better foundation for performing site changes. As Jared Spool argued in his article The Quiet Death of the Major Relaunch, “the best sites have replaced this process of revolution with a new process of subtle evolution. Entire redesigns have quietly faded away with continuous improvements taking their place.”

Conclusion

By now you should have a strong understanding of optimization basics and may have started your own healthy internal dialogue related to philosophies and rationale. In the next article, we’ll talk about more tactical concerns, specifically, the optimization process.

User Experience Research at Scale

Written by: Nick Cawthon

An important part of any user experience department should be a consistent outreach effort to users both familiar and unfamiliar. Yet it is hard to both establish and sustain a continued voice amid the busyness of our schedules.

Recruiting, screening, and scheduling daily or weekly one-on-one walkthroughs can be daunting for someone in a small department having more than just user research responsibilities, and the investment of time eventually outweighs the returns as both the number of participants and size of the company grow.

This article is targeted at user experience practitioners at small- to mid-size companies who want to incorporate a component of user research into their workflow.

It first outlines a point of advocacy around why it is important to build user research into a company’s ethos from the very start and states why relying upon standard analytics packages is not enough. The article then addresses some of the challenges around being able to automate, scale, document, and share these efforts as your user base (hopefully) increases.

Finally, the article goes on to propose a methodology that allows for an adjustable balance between a department’s user research and product design and highlights the evolution of trends, best practices, and common avoidances found within the user research industry, especially as they relate to SaaS-based products.

Why conduct usability sessions?

User research is imperative to the success and prioritization of any software application–or any product, for that matter. Research should be established as an ongoing cycle, one that is woven into the fabric of the company, and should never drop off nor be simply ‘tacked on’ as acceptance testing after launch. By establishing a constant stream of unbiased opinions and open lines of communication immune to politics and ever-shifting strategies, research keeps design and development efforts grounded in what should already be the application’s first priority–the user.

A primary benefit of working with SaaS products is that you’re able to gain feedback in real time when any feature is changed. You don’t have to worry about obsolete versions or download packages–web-based software enables you to change direction quickly. Combining an ongoing research effort with popular software development methods such as agile or waterfall allows for immediate response when issues with an application’s usability are found.

Different from analytics

SaaS products are unique in that they don’t require the same type of in-product tracking. Metrics such as page views or bounce rates are largely irrelevant, because the user could be spending their entire session configuring functions of a single feature on a single page.

For example, for our application here at Loggly, the user views an average of ~2 pages (predominantly login and then search) and spends on average 8x as long on search as on any other page. Progression is made within the page-level functions, not among multiple pages within the application’s structure.

JavaScript-heavy applications don’t have the URL and tree structure that content-heavy sites are built around; instead, they make calls to different states of the application from within the same page.

Say your analytics package gives an indication that something is wrong with the setup flow or configuration screen, but you don’t yet have a good sense of at what point in the process users are getting stuck.

Perhaps a button is getting click after click because it is confusing and unresponsive, not because it’s useful. Trying to solve this exclusively with an analytics package will pale in comparison to the feedback you’ll get from a single, candid user who hits the wall. As discussed later in this article, with screensharing you’re able to see the context in which the user is trying to achieve a specific task; the ‘why’ of their confusion becomes more apparent than just the ‘what’ of their clicks.

Determining a testing audience

The first component of defining any research effort should be deciding who you want to talk to. Ideally, you’ll have a mix of both new users and veterans who can provide a well-rounded feedback loop: initial impressions of your application as well as historical perspective on its evolution and the shortcomings found after repeated use. Not all companies have this luxury, though.

Once in the door

Focus first on the initial steps the user has to take when interacting with your application. It seems obvious, but if these are not fulfilled with maximum efficiency, the user will never progress into more advanced features.

Increasing the effectiveness of the flow through set-up, configuration, and properly defining a measure of activation will pay dividends to all areas of the application. This should be a metric that is tested, measured, and monitored closely, as it functions as a type of internal bounce rate. Ensuring that the top of the stream for the majority of application users is sound will guarantee improved usage further down the road to the deeper, buried interactions.
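
As a sketch of what ‘a measure of activation’ might look like in practice (the event names and log format here are hypothetical), you can compute it like an internal bounce rate:

```python
# Hypothetical event log: one (user_id, event) pair per row
events = [
    ("u1", "signed_up"), ("u1", "configured"), ("u1", "first_search"),
    ("u2", "signed_up"), ("u2", "configured"),
    ("u3", "signed_up"),
]

ACTIVATION_EVENT = "first_search"  # hypothetical definition of "activated"

signed_up = {u for u, e in events if e == "signed_up"}
activated = {u for u, e in events if e == ACTIVATION_EVENT}

rate = len(activated & signed_up) / len(signed_up)
print(f"Activation rate: {rate:.0%}")  # 33%; its inverse acts like an internal bounce rate
```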

These advanced features should also be tracked and measured, with correlations that start to paint a profile of conversion. Some companies define conversion as free-to-paid; others do so in a more viral sense–conversion being defined as someone who has shared on social media or similar.

As you start to itemize these important features, you’ll get a better sense of the usage profile of where you’re trying to point the user. For example, adding a listing record, or perhaps customizing a page–these might match the profile of someone who is primed for repeat visitation, someone who has created utility and a lasting connection, and who is ultimately ready to convert.

Avoiding overlap

If there is a focus on recruiting participants who are newly signed-up users, then you’ll likely overlap with outbound sales efforts. Because your company’s sales and marketing funnel tries as hard as possible to convert trial users to paid, or paid to upgrade, the company’s priority will likely be on conversion, not on research.

Further, if a researcher tries to reach out for usability surveys at this point, from the user’s perspective (especially for those deemed potential high-value customers) it would mean different prompts for different conversations with different people from various groups within your company, all competing for spots on their calendar. This gives a very hectic and frenetic impression of your company and should be avoided.

In the case of a SaaS product, sometimes the sales team has already made contact with potential customers, and many of these sales discussions involve demonstrations around populated, best-case scenarios (which showcase the full features) of your product.

As a result, you may find the participant has been able to ‘peek behind the curtain’ by watching the sales team provide these demonstrations, giving them an unfair advantage in how much they know before finally trying to use the product themselves. For the inexperienced user, your goal is to capture the genuine instinct of the uninitiated, not of those who have seen the ‘happy path’ and are trying to retrace the steps to get to that fully-populated view.

To make sure you’re not bumping heads with the sales and conversion team, ask if you can take their castoffs–the customers they don’t think will convert. You can pull these from their CRM application and automate personalized emails asking for their time. I’ll outline this method in further detail in the following section, because it pertains to the veteran users as well.

Photo of people in a conference exhibit hall.
Conferences are a great way to survey new and existing users.

As described in a previous post, guerrilla testing at conferences is a great way of finding out what gets seen and what parts of the interface or concept get ignored. These participants are great providers of honest, unbiased feedback and haven’t been exposed to the product beyond some initial impressions of the concept.

Desiring the messy room

But what about the users who have been using your product for months now, those who have skin in the game and have already put their sweat and dollars behind customizing their experience? Surveying these participants allows us to see where they’ve found utility and what areas need to be expanded upon. Surveying only the uninitiated won’t provide feedback on any nagging functional roadblocks, which are found only after repeated use. These are the participants who will provide the most useful feedback, in sessions where you can observe the environment they’ve created for themselves–the ‘messy room.’

To make an observational research analogy, a messy room is more telling of its occupant’s personality than an empty one. Given your product’s limitations, how has the participant been forced to find workarounds? Despite these workarounds, they’ve continued to use the product–and how they use it versus how we expected them to use it can be contrastingly different.

Online feedback form for Loggly UK.
Example of a feedback form, initiated via email.
User is able to schedule a 1:1 screensharing session on the confirmation page.

Automated recruitment

Find your friendly marketing representative/sales engineer at your company (or just roll your own) and discuss with them the best way to integrate a user experience outreach email into the company’s post-funnel strategy. For example, post-funnel would be after their trial periods have long since expired and the user is either comfortable in their freemium state or fully paid up.

As mentioned earlier, you can also harvest leads from the top of the funnel among the discarded CRM leads. However, you’ll likely have a greater percentage of sessions with users who are misfires–those indifferent or only just poking around the app, without yet a full understanding of what it might do. Thankfully, the opt-in approach to participation filters this out for the most part.

Focusing again on the recruitment of the veteran, experienced users, another, more complex scenario would be to trigger this UX outreach email once a specific set of features have been initiated–giving off the desired signature of an advanced, informed user.

Going from a purely legacy-based perspective, six months of paid, active use should be enough time to establish a relationship with a piece of software, whether the user loves or hates it. If there exists enough insight into the analytics side of the sales process, it would behoove you to also make sure that the user has had a minimum number of logins across those six months (or however long you allow your users to mature).

Outreach emails triggered through the CRM should empower the recipient to make the experience of the product better, both for themselves and their fellow customers. Netflix does a great job of this by continually asking about the streaming quality or any delays around arrival times of their product.

I also recommend asking the users a couple of quantitative and qualitative questions, as this is something you should be measuring for your greater UX efforts already. These questions follow the guidelines of general SUS (System Usability Scale) practices that have been around for decades. Make the questions general enough that they can be re-used and compared going forward, without fear of needing to change the goalposts when features or company priorities change.
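
If you do adopt SUS-style questions, the standard scoring is what makes results comparable over time: ten statements rated 1-5, alternating positive and negative, normalized to a 0-100 score. A minimal sketch:

```python
def sus_score(ratings):
    """Standard SUS scoring: odd items contribute (rating - 1),
    even items (5 - rating); the sum is scaled by 2.5 to a 0-100 range."""
    total = sum((r - 1) if i % 2 else (5 - r)
                for i, r in enumerate(ratings, start=1))
    return total * 2.5

# One respondent's ratings for the ten SUS statements
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```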

Screen grab of the user's desktop.
A peek into an active user’s work environment.

When engineering this survey, be sure to track which tier of customer is filling it out, because their experience and expectations could be wildly different. Remember also to capture the user’s email address as a hidden field so you can cross-reference it against any CRM or analytics packages that are already identifying existing customers.

Setting boundaries

It depends on the complexity of your product, but typically 20-30 minutes is enough time to cover at least the main areas of function. Any longer, and you might encounter people not wanting to fit an entire hour block into their schedule. If these recorded sessions are kept to just a half-hour, I find that $25 is sufficient compensation for the duration, but your results may certainly vary.

In any type of session, do reiterate that this is neither a sales call nor a support call: you’re researching how to make the product better. However, you should be comfortable avoiding (or sometimes suggesting) workarounds to optimize the participant’s experience, giving them greater value of use.

Tools of the trade

For implementation of the questionnaire, I hacked the HTML/CSS from a Google Form to exist as a self-hosted page that still pushes results, through the matching form and input IDs, to the extensible Google Spreadsheet.

There are a few tutorials that explain how to retain your branding while using Google’s services. I went through the trouble so I could share the URL of either the form or the raw results with anyone, without the need to create an account or log in. As we discuss the sharing component of these user research efforts, this will become more important. Although closed systems like SurveyMonkey or Wufoo are easy to get up and running, they can’t compare with the extensibility of a raw, hosted result set.
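
The gist of that hack, in case you want to replicate it: your self-hosted page (or any script) simply posts to the form’s formResponse endpoint using the input names found in the live Google Form’s HTML source. The form ID and entry IDs below are placeholders; substitute the ones from your own form:

```python
import urllib.parse
import urllib.request

# Placeholder URL: view your live form's HTML source to find the real
# formResponse action URL and each input's name (entry.NNNNNNN).
FORM_URL = "https://docs.google.com/forms/d/e/YOUR_FORM_ID/formResponse"

def submit_response(answers: dict) -> None:
    """Push one questionnaire response into the backing Google Spreadsheet."""
    data = urllib.parse.urlencode(answers).encode()
    urllib.request.urlopen(urllib.request.Request(FORM_URL, data=data))

submit_response({
    "entry.1111111": "4",                          # e.g., a 1-5 rating question
    "entry.2222222": "I wish search were faster.", # open-ended feedback
    "entry.3333333": "user@example.com",           # hidden email field for cross-referencing
})
```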

Insert a prompt at the end of the questionnaire for the user to participate in a compensated user research survey, linking to a scheduling application such as Calend.ly. This application has been indispensable for opt-in mass scheduling like this. The features of gCal syncing, timezone conversion, daily session capping, email reminders, and custom messaging are all imperative to a public-facing scheduling board. Anyone can grab a 30-minute time slot from your calendar with just your custom URL, embeddable at the end of your questionnaire.

To really scale this user research effort to the point where it can be automated, you cannot spend time negotiating mutually available times, converting time zones, and following up with confirmations. Calend.ly allows you to cap the number of participants who can grab blocks of your time, so you can set a maximum number of sessions per day, preventing a complete overload of bookings in your schedule.

As part of the scheduling flow within Calend.ly, a customizable input field asks the participant for their Skype handle in order to screen share together, and I’d advise the practitioner to create a separate Skype account for this usability effort. With every session participant you’ll add more and more seemingly random contacts, and any semblance of organization and purity in your personal contact list will be gone.

Screen grab of Calend.ly booking utility.
Calend.ly booking utility – a publicly-accessible reservation system.

Once the user is on the Skype call, ask for permission to record the call, and make sure you give a disclaimer that their information will be kept private and shared with no one outside the company. You might also add ahead of time that you’ll be happy to direct any support questions that come up to the proper technicians.

Permission granted, be sure to reiterate to the participant the purpose and goal of the call, and give them license to say whatever they want, good or bad–you want to hear it. Your feelings won’t be hurt if they have frustrations or complaints about certain approaches or features of your product.

For recording the call, there are plenty of options out there, but I find that SnagIt is a good tool to capture video, especially given that the resolution and dimensions of the screen share tend to change based on the participant’s monitor size. When compressing the output, a slow frame rate of 5-10 fps should suffice, saving you considerable file size when having to manage these large recordings.

Tagging annotations

When you’re walking the participant through the paces of the survey, be sure to annotate the start time and any highlights or lowlights you see along the way. While at your desktop, a basic note-taking application (or even pad and paper) should suffice. This will allow you to go back after the survey is finished and pull quotes for use elsewhere, such as PowerPoint presentations.

I always try to write a running diary of the transcript, plus a summary at the end covering what areas of the application we explored and what feedback we gathered. Summarizing the typed transcript and posting the relevant recorded video files should take no more than 10 minutes, which keeps your total per-participant time (including processing) under an hour each–certainly manageable as part of your greater schedule.

Share the love (or hate)

I want to make sure that these sessions can be referred to by the executive and product management teams for use in their prioritization strategy. I set up an instance of MAMP/WordPress on a local box (we’re using one of the Mac Minis that power a dashboard display), which allows me to pass the link around internally without having to deal with the issues around uploading large video files, and alleviates any permissions concerns about these sessions being out in the wild.

Screen grab of the session archive interface.
Our UX session archive, with hundreds of recorded and tagged sessions.

It’s also important to tag the posts attached to these files when you upload them. This allows faster indexing when trying to find evidence around a certain feature or function. Insert your written summary into the post content, and you’ll be able to better search on memorable quotes that were written down.

These resources can be very good for motivation internally, especially among the engineers who don’t often get to see people using the product they continually pour themselves into. They’ll also resonate with the product team, who will see first-hand what’s needed to re-prioritize for the next sprint.

After a while, you'll start to build a great library of clips that you can draw knowledge from. There's also a certain satisfaction to seeing the evolution of the product's interface through these screengrabs. What was shown to be confusing at one time may now be fixed!

Follow-up

Participant compensation can be fulfilled through Amazon or other online retailers; you can send a gift card to an email address, which you'll be able to pull as a hidden field from the spreadsheet of user inputs. Keep a running list of those you've reached out to and contacted for responses.
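That bookkeeping is easy to automate. A minimal sketch, assuming the survey tool exports responses as a CSV with an email column; the file names and column header are assumptions.

    # Pull participant emails from the exported responses and track who has
    # already been contacted for compensation.
    import csv

    CONTACTED_FILE = "contacted.txt"

    try:
        with open(CONTACTED_FILE) as f:
            contacted = {line.strip() for line in f}
    except FileNotFoundError:
        contacted = set()

    with open("session_responses.csv", newline="") as f:
        emails = {row["email"].strip().lower()
                  for row in csv.DictReader(f) if row.get("email")}

    to_send = sorted(emails - contacted)
    for email in to_send:
        print("Send gift card to:", email)

    with open(CONTACTED_FILE, "a") as f:
        f.writelines(email + "\n" for email in to_send)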

You might also incorporate contacts met during the sessions described in the Guerrilla Usability Testing at Conferences article, so you can follow up at next year's conference to recruit again. After enough participants and feedback, think about establishing a customer experience council that you can approach with specific requests and outreach, even for quick vetting of opinions.

Conclusion

This article first outlined the strategies and motivation behind the research, advocating an automated workflow of continually scheduled screen shares with customers rather than recruiting participants individually. This methodology was then broken down into distinct steps: recruitment via email, gathering quantitative and qualitative feedback, and automating an opt-in booking of the sessions themselves. Finally, the article discussed how best to organize and leverage this content internally, so that all might benefit from your process.

User research is imperative to the success and prioritization of any software application (or any product, for that matter). Yet too often we forget to consume our own product. Whether it's server log management, as I've chosen, or apartment listings or ecommerce purchases, shake off complacency and spend 30 minutes a week trying to accomplish typical user tasks from start to finish.

Also make it a point to conduct some of these sessions with those you work alongside; you'll be surprised what you can find through simple repetition with a fresh set of eyes and ears. The research process and its dependencies do not have to be as intricate as the one described above.

 

When your company starts to incorporate user opinion into its design and development workflow, it will begin to pay dividends, both in the perceived usability of your application and in your gathered metrics of user satisfaction.

 

How to Make a Concept Model

Written by: Christina Wodtke


I can draw.

I went to art school. I studied painting until I fell out with the abstract expressionists and switched to photography. But I can draw.

What I cannot do is diagram. I always wanted to. I have models in my head all the time of how things work. But when it comes time to make a visual model of those ideas, I can't figure out how to represent them. I find myself resorting to pre-existing models like four-squares or the Sierpinski triangle (I dig fractals). For example:

My social architecture diagram.

Other than the oh-god-my-eyes color choices, my social architecture diagram has deeper problems. For example, the ideas in it are limited to threes within threes, because that's the form triangles take. The model served to communicate my ideas well enough for the sake of my workshop, but… shouldn't form FOLLOW meaning? If I had more than four elements for any section, I'd have to either collapse two or fudge it some other way. I was sacrificing accuracy for consistency. But I didn't know how to make it better.

A concept model is a visual representation of a set of ideas that clarifies the concept for both the thinker and the audience. It is a useful and powerful tool for user experience designers but also for business, engineering, and marketing… basically anyone who needs to communicate complexity. Which is most of us, these days.

The best known concept model in the user experience profession is probably Jesse James Garrett’s “Elements of User Experience.” The best known in start-up circles is the lean startup process. Both of these models encapsulate the ideas they hold in such a memorable way that they launched movements.

Jesse James Garrett's Elements of User Experience, and the Lean Startup cycle.

If you wish to clearly present a set of ideas to an audience and represent how they fit together, a diagram is much more powerful than words alone. Dan Roam points this out in his latest book, Blah Blah Blah:

“The more we draw, the more our ideas become visible, and as they become visible they become clear, and as they become clear they become easier to discuss—which in the virtuous cycle of visual thinking prompts us to discuss even more.”

Concept models can serve many purposes. You can use concept models to show your teammates how a complex website is organized before the site is built…

Andrew Hinton’s model of a “virtual shared organizational ‘building’ where people spread all over the country were collaborating to run and participate in the org.”

… or to help teammates understand how the site currently works…

Bryce Glass’s concept model of Flickr use.

… or to show end users how a service works, to help sell it.

Biblios uses a concept model to help users understand the power of social cataloging. What it lacks in elegance, it makes up in clarity.

I teach user experience design, and my syllabus always includes concept models. Students of mine who do a concept model before working on the interaction design and information architecture always make better and more coherent products. The act of ordering information forces them to think through how all the disparate elements of a product fit together.

Stephen’s handout from the workshop on representing types of visual relationships–advanced and useful thinking.

You can imagine how excited I was to take the Design for Understanding workshop at the 2014 IA Summit. Partly because I will go see anything Karl Fast or Stephen P. Anderson talks about, and having them together is Christmas come early. But mostly in hopes of learning a way to make a good concept model.

The workshop was brain-candy and eye-opening: They covered how the brain processes information and how ways of interacting with information can promote understanding. BUT I still couldn’t make a model to save my life. I didn’t know where to begin!

At lunch, Stephen was manning the room while Karl grabbed food for them. I had been struggling with a model for negotiation I wanted for a talk I was presenting later in the program. Seeing Stephen idle, I pounced and begged for help.

Stephen P. Anderson is the author of Seductive Interaction Design and the upcoming Design for Understanding. He's also a patient soul who will put up with ham-handed diagramming and ridiculous requests. He started to sketch my model and tell me what he was thinking as he drew. Then I had my bingo moment: Stephen had forgotten what it was like not to know how to begin! This happens to all experts. After a while, some knowledge is so deeply embedded in their psyche that they forget what it was like not to know. They then teach the nuances rather than the fundamentals.

I suggested we do a think aloud protocol while he made a concept diagram; he would draw, and I’d prompt him to talk about what was going through his mind. He was excited to have me reflect his thinking back to him so he could become a better teacher as well. We arranged to have a sketching session after the workshop.

 

Stephen Anderson draws; I do a think aloud protocol to capture how he works.

Later in the day, we met in the quiet hotel bar with wine and a sketchbook. I asked him what he wanted to draw. “Do you have something you are working on?” he asked. “That way I can focus on the model, rather than rethinking the ideas.”

Did I have a model I was struggling with? Always!  I shared my new theory of the nature of digital products. I’ll be writing that up in another article when it’s done, but for now, the short version is that one must iterate through the elements of digital design, which include the framework, interactions, information structure, and aesthetics. But a product doesn’t become an experience until a person interacts with it; your design cannot be known until you see what happens when a human shows up.

Stephen’s first step was to ask me about my goal for the model. I said it was for students and young practitioners to understand the interdependencies of the elements, so they have a more iterative approach. And for critics to be able to understand why things are different, both good and bad.

Next, he did what I'd call an idea inventory. He brainstormed more elements that might play into the model, making sure no ideas were left out, and noted those he suspected might be important in the margins. He sketched as he thought, sometimes just making meaningless marks, as if warming up his hands.

He then carefully asked about each element in my theory, making sure he understood each. What was an information structure and what was a framework and were they different? I ended up telling a little story about a product to make sure he got what I was explaining. I began to draw too, encouraged by his easy scribbles.

Finally, Stephen noted the relationships of the items to each other. Were some things subsets of others? Were some overlapping, or resulting?

Playing with relationships (my drawing).

Once he knew what each item was, and how they were related to each other, he began to sketch in earnest. He said, “I always start with circles because edges mean something. They mean you have four items, or five. Circles leave room for play.” His circles quickly became blobs and then shapes.

I don't know if he'd normally talk to himself out loud when not encouraged to do so, but it was fascinating to hear him free-associate concepts, then draw them out. A string of concepts became a string of beads; moving through an experience became moving through a tunnel; intertwined ideas were a braid. Any important idea got a drawing.

Here, Stephen tries on various relationship metaphors, including moving through tunnels, holding something, a string of ideas, and braided-together concepts.

Each time he completed a mini-model, he’d evaluate what was missing and what was working and take that insight to the next drawing. He made dozens of these little thumbnail drawings.

Stephen said, “one shape leads to another…a single word sparks a new representation—we’re always ‘pivoting’ from one thumbnail to the next…”

He pointed out what concepts were left out, or where they could be misinterpreted.

“You want to avoid 3-d, because it’s fraught with problems. You want to be able to sketch it on a napkin.” —Stephen Anderson, on keeping in mind the model’s goal

At one point he became tapped out, and we spoke of other things. We stared out the window at the harbor, and I drank some of my wine, which had been forgotten in the excitement of drawing and talking.

Then suddenly he started in again and produced a flurry of new drawings. I realized resting and mulling was important too. I was a bit annoyed with myself. An article doesn’t come out perfect in one writing session. Why should I expect a concept model to just materialize?

Finally he came to a stop, several pages filled with a jumble of images. We didn’t have a model, but we had many good directions. As we finished our drinks and headed toward the opening reception, Stephen told me, “You gotta get Dan Brown to do this, too.”

section-break

Dan M. Brown is best known in the user experience design community as the author of Communicating Design and Designing Together. Both books benefit greatly from clear and succinct concept models, and the former even talks about how to use them in the design process:

Purpose—What are concept models for?
There really is only one reason to create a concept model: to understand the different kinds of information that the site needs to display. This structure can drive requirements for the page designs, helping you to determine how to link templates to each other. With the structure ironed out, you might also use the model to help scope your project—determining what parts of the site to build when.

Audience—Who uses them?
Use concept models for yourself. Ultimately, they are the most selfish, introspective, and self-indulgent artifact, a means for facilitating your own creative process.

—Communicating Design: Developing Web Site Documentation for Design and Planning, 2nd Edition, Dan Brown, 2010

Clearly, a guy I should be talking to!

The IA Summit was held in sunny San Diego in a hotel with not one but two swimming pools, so Dan had brought his family with him. When I asked him if I could watch him draw a concept model, he said, “I’m at the coffee shop with the boys around 6:30 every morning.”

You take what you can get.

The next morning Dan settled the boys in a corner with books, pastries, and an emergency iPad, and we got to work. We agreed he’d model the same concept, to control for variations. By now I had created a formula for the idea: (F+In+Is+Ae)+P=E. Framework, interactions, information structure, and aesthetics plus a person makes an experience. I was modeling in words as my friends were modeling in pictures.

I took Dan through the same story of an iterative product design process, since it had helped Stephen. I sketched it out. I felt like my hands were waking up from a long sleep, and they were eager to hold a pen now.

As I spoke, Dan wrote down key ideas and also began to scribble. He used the same process as Stephen: collecting the concepts then inspecting them for hidden complexity.

“A question I ask myself is ‘what needs unpacking?’ I can’t diagram an idea until it’s clear in my own brain.” —Dan Brown

He then took each concept and free associated all the sub-elements of the concept. He drew them out loosely, mind-map style.

Dan also started with the goal and wrote it out across the page.

Dan writes his goal for the model across the top of the page.

He also asked explicitly who the model was for. To draw, he needed to visualize the audience. This reminded me of a recent presentation workshop at Duarte where we literally drew pictures of our audience. No work can be good unless you know who it’s for.

Duarte has you draw your audience before you design your presentation, so you remember who you are presenting to and how much attention they are (or aren’t) giving you.

Dan made sure he didn't carry anything in his head: all ideas were put on paper as a note or a sketch. When he had to turn a page, he ripped it out and laid it next to the others. I realized how critical it was to have plenty of room to see everything at once. I saw the same technique of storytelling and drawing out ideas.

Around now, Stephen joined us. He was excited to see what Dan came up with, enough to also climb out of bed at the crack of dawn. I listened as the two diagrammers discussed the poster session and the strengths and weaknesses of the ideas that had been presented.

Dan said, “You can look at people’s posters and see their process. They are so close to their own narrative… In one poster, the key framework was rendered in very pale text. It was a good story, but there are things you want to jump off the page. For her, my guess is those steps were so self-evident she didn’t see the need to highlight them.”

 You have to have a beginner’s mind to explain to beginners.

“Speaking of beginner’s mind, so much of my design process is to throw it all out and start all over again.” —Dan Brown

Dan Brown draws it all.

Now Dan began to model the concept. He emphasized the importance of sticking with very simple geometry–circles, squares, triangles, lines–not fussing with trying to find a perfect model at the beginning, just exploring the ideas and their relationships.

He also mentioned he begins with any concept in the model and doesn’t worry about representing order at first. He starts with what catches his interest to get familiar with the ideas.

Dan then deviated from Stephen by seeking the focal point. What concept held all the others together? What was the most important or key idea? He tried placing one idea, then another, in the center to see if it felt right.

After scrapping one bowtie model, he paused. “I sometimes retreat into common structures and see how these common structures might speak to me. For example, time is one of those fundamental aspects, so I ask myself: How much do I need to show time here?”

He demonstrated by drawing swimlanes and sketched the ideas and their relationships in time.

Swimlanes for moving across the elements.

“Are there other elements you often look for, like time?” I asked.

“People,” he replied. “People and time are familiar concepts, easy for an audience to relate to. By using them as a foundation for a model, I’ve already made it easier for people to ‘get on board.'”

He stared at the paper, deep in thought.

Stephen then pointed at the page. “What Dan did here,” he said, poking at where Dan wrote out goal and audience, “I did also but didn’t externalize. I was holding it in my memory, but I like having it on the paper better.”

Eventually Dan, too, was tapped out, and his sons began to play Let It Go on the iPad at higher and higher volumes. He separated his sons from the electronics and left to prepare for the swimming pool.

 

section-break

After Dan, I knew I wanted to try to get one more person to model. Since I was lucky enough to be at a conference full of diagrammers, I chased Joe Elmendorf of The Understanding Group. He had just given a talk on Modeling for Clarity that my friends were raving about. And, with my luck still holding, I got to have breakfast with him. Happily, at 8 am this time.

Joe Elmendorf brings pace layers into the discussion. My handwriting is the ballpoint; his, the nice black ink pen.

Again, I saw what were becoming familiar concepts (inventory, inspection, relationships, then talk-draw). I then focused on how he differed from Stephen and Dan. He chose to use the title of the diagram as an element. He did not iterate as widely as Stephen. And he was the first person to argue with me about the validity of my theory, which was a great way to understand it (and benefited me by making it better!).

As well, he reinforced something Stephen had mentioned in his workshop and that Dan was obviously doing: Joe had a large mental library of typical models to draw upon, which got him started. Stephen keeps a Pinterest board full of inspiration, if you want to start your own “lego box” of models.

Stephen’s Pinterest board: http://www.pinterest.com/stephenpa/the-visual-display-of-information/

Overall, there were so many familiar patterns in his approach that the differences were more interesting than important. I had my answer. I knew how they did it.

section-break

On the afternoon of the last day of the conference, Stephen and I were scribbling further on the model, playing with petals for the elements, when Dan Willis joined us. Dan is a master of models as well as an inveterate sketcher.

Stephen further refining ideas, always generating.

Although Dan declined to diagram for me, claiming brain fatigue (a reasonable claim at this year's Summit), he pulled up a chair and sat sketching next to us. It was companionable to sit and talk and draw ideas. We moved back and forth between discussing life and discussing the ideas, teasing, joking, drawing. As we chatted, I realized this was part of the secret: you need a thinking partner. Sometimes it's paper, sometimes it's friends; but it's best when it's both. It doesn't always matter what you draw, just that you draw.

Dan Willis drawing nearby makes me happy.

A Dan Willis sketch from a tweet that day.

Our brains work better when our hands are busy.

section-break

Later, sitting in the back of a session, I lobbed a model at Stephen, and he shot back with his own.

Refining an idea; mine on left, Stephen’s on right.

Then I saw another step, one which Dan had alluded to when he mentioned the poster with the key point too pale to read: You have to refine the model to communicate effectively. Type, color, and labels are all a key part of the communication process. While the model did stand alone without the color and type, adding those–and most especially getting labels right–made the model more effective.

section-break

After getting home, I started sketching how concept models were made. I drafted this article and then asked my friend Dave Gray if he'd do a quick edit. Dave was the founder of Xplane, a company that used diagrams–concept and other–to transform companies. Dave has been a proponent of visual thinking and clear modeling for years, and I consider him the master of making ideas visible.

Life then intervened, and this article sat. I was busy with several things, including Lou Rosenfeld's 32 Awesome Practical UX Tips. Dave presented right before me, and watching him sketch, I realized I had to get one more diagramming session in. It was not enough to have him comment; I needed to see him draw. I was grateful I did; otherwise, I would have missed a crucial piece of the puzzle.

Dave Gray draws on cards so he can rearrange, manipulate, and overlay the concepts.

We hopped on a Google Hangout and he also drew out that same darn design model for me. I saw familiar patterns in his approach: inventory, unpack, relationship exploration. But he added a critical step I hadn’t thought of before: Test the model.

He's currently writing a book on Agile, and it shows. He said: first design the test, then design the thing. For the model, he suggested using his WhoDo Gamestorming tool to design a test of the model's effectiveness. You list who the model is for and what they will do if they understand it.

If Dave didn’t fully understand the audience for the model, he might do an empathy map for those people.

Designing a test of the model’s success radically clarified the goals for the model. Testing it would make sure it did what you wanted it to do.

section-break

So then I sat down to make a model of how to make models. And it came easily.

  • Determine the goal: How will the model be used, by whom? What is the job of the model? To change minds, explain a concept, simplify complexity?

  • Inventory the concepts: Brainstorm many parts of your concept. Keep adding more in the margins as you go.

  • Inspect the concepts: Are there many concepts hiding in one? Do you really understand each idea?

  • Determine the relationships: How do the concepts interact?

  • Decision point: Do I understand the ideas and what I’m trying to communicate? 
    Test: Ask yourself if the model “feels” right.
    If yes, then continue.

  • Iterate with words and pictures: Talk to yourself and draw it out!

  • Evaluate with yourself/the client: Keep making sure the drawings match the ideas you wish to communicate. Don’t punk out early! Rest if you need to!

  • Decision point: Does my audience understand the ideas and what I’m trying to communicate? 
    Test: Can my audience answer key questions with the model? 
    If yes, then continue.

  • Refine: Use color, type, line weight, and labels to make sure you are communicating clearly.

A model for making models. It may not be beautiful, but it’s clear.

The concept model is invaluable. But like so many useful things, it takes time to make.

When my daughter first started drawing My Little Pony, she expected to start at the ears and draw it perfectly down to the hooves. She was angry when it didn't work that way, and it took some convincing to get her to block out key shapes, then refine the whole, and to use pencil before ink. When I sat down to make a concept model, I made the same mistake! I'd start in PowerPoint or Grafio and expect perfection to flow from my mind.

No more! Stephen, Dan, Joe, and Dave taught me to play, explore, refine, test, and play some more until the result was right. Thank you all!

Now go make a model!

section-break

Postscript

If your hands do not obey your brain, and/or you need more ideas for shapes and relationship models, I recommend Dave Gray’s Visual Thinking School.

See my interview with Dave on how he'd make the experience model.