When the exciting opportunity to work at a post-bubble dot-com startup arose, I jumped at it. I had the luxury of doing things exactly as I thought right, and for a while it was truly fantastic. I built a team with a dedicated user researcher, an information architect, and interaction and visual designers; we even set up a guerrilla usability lab and ran regular test sessions.
Unfortunately, the enthusiasm I had for my new job waned after six months, when an executive was appointed Head of Product Development, one who insisted he knew SCRUM1 better than anybody. As the Creative Director, I deferred authority to him to develop the product as he saw fit. I had worked with SCRUM before, done training with Ken Schwaber (author1 and co-founder of the Agile Alliance), and knew a few things from experience about how to integrate a design team within SCRUM with some success. This required the design team to work a “Sprint” (a month-long iteration) ahead of the development team. But the new executive insisted that SCRUM had to be done by the book, which meant all activities had to be included within the same Sprint, including design.
Requirements came from the imagination of the Head of Product Development; design was rushed and ill-conceived as a result of time pressure; development was equally rushed and hacked together, or worse, unfinished. The end-of-Sprint debriefing meetings reliably consisted of a dressing down of the entire team by the executives (since nobody had delivered what they’d committed to, i.e., they had tried to do too much, or had not done enough). Each Sprint consisted of trying to fix the mess from the Sprint before, or brushing it under the carpet and developing something unstable atop the code-garbage. Morale languished, the product stank, good staff began to leave… it was horrible.
This is an extreme example of where SCRUM went bad. I am not anti-Agile, although I’ve been bitten a few times and feel trepidation when I hear someone singing its praises without having much experience with it. Over the last eight years, I’ve seen Agile badly implemented far more often than well (and yes, it can be done well, too). The result is a mediocre product released in as much time as it would have taken a good team to release a great product using a waterfall approach. In this article, I will describe Agile and attempt to illuminate a potential minefield for those who are swept up in the fervor of this development trend and want to jump in headlong. Then I will present how practices within User Centered Design (UCD) can mitigate the inherent risks of Agile, and how these may be integrated within Agile development approaches.
Where did Agile come from?
Envisioned by a group of developers, Agile is an iterative development approach that takes small steps toward defining a product or service. At the end of each step, we have something built that we could release to the market if we chose to, so it can assure some speed to market where waterfall methods usually fail. Agile prefers to work out how to build something as we go, rather than doing a waterfall-style deep dive into specification and then finding out we can’t build parts of the spec for some reason, e.g., a misjudgment of feasibility, a misjudgment of time to build, or changing requirements.
A group of developers including Kent Beck, Martin Fowler and Ken Schwaber got together to synthesize what they had discovered were the most effective ways to develop software – the Agile Alliance was born. It released a manifesto2 describing its tenets and how they differ from waterfall methods.
Agile can be thought of as a risk-management strategy. Often developers are approached directly by a client who does not know what a user experience designer, information architect or user interface designer is. People in such roles usually interpret what clients want and translate it into some kind of specification for developers. Without them, it’s down to the developer to work out and build what the customer wants. Because Agile requires a lot of engagement with the client (i.e., at the end of every iteration, which can be as short as a week), it mitigates the risk of going too far toward creating something the client doesn’t want. As such, it is a coping mechanism for a client’s shifting requirements during development as they begin to articulate what they want. To quote the Agile Manifesto’s principles: “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”
Why do people rave about it?
At the heart of what makes Agile attractive is the possibility of quicker return on investment for development effort, because we can release software earlier than we would have otherwise. In the short term, this is typically borne out. In the long term it can be too, though only when the team hasn’t fallen victim to temptation (more on that later). Agile is also good at generating momentum because the iterations act as a drumbeat to which the team marches toward manageable deadlines. The regular "push" to finish a sprint ensures that things move along swiftly. Agile is also good at avoiding feature bloat by encouraging developers to do only what is necessary to meet requirements.
Because it emphasizes face-to-face contact for a multidisciplinary team, Agile tends to encourage contribution from different perspectives. This is generally a positive influence on pragmatism, innovation and speed of issue resolution. The team is empowered to make decisions as to how requirements should best be met.
In and of itself, Agile does a good job of flexing to the winds of change. But one has to ask whether it was devised to treat a symptom of a larger cause: the business doesn’t know what it wants. While Agile enables the development team to cope with this better, it doesn’t solve the problem, and in most cases it creates new ones.
Mine 1: An unclear role for design
In the best cases, when a business approaches developers to build some software, some of those developers may have design skills. But that’s not a particularly common scenario. Many developers have also had bad experiences with designers who don’t know what they’re doing. It took a long time for the design profession to come to grips with designing for complex systems, and there is still a deficit of expertise in this field. “Business people and developers must work together daily throughout the project” is another principle of Agile. Where does the designer fit into the frame?
Mine 2: The requirements gathering process is not defined
Agile accommodates design activities from the perspective of a developer. It tends to shoehorn these activities into a developer’s view of the world, where requirements fall from the sky (from the business or customer, who is assumed to be all-knowing), and it takes for granted that those requirements are appropriate.
According to Ken Schwaber, SCRUM is intended as a holistic management methodology and leaves space for activities other than programming within the framework of iterative cycles. But when organizations adopt SCRUM, too often the good parts of a waterfall process, like research and forming a high-level blueprint for the overall design, become the proverbial baby thrown out with the documentation bathwater. As the Agile Manifesto says, “Working software over comprehensive documentation.”2 Many latch onto this and don’t want to do any type of documentation that might outline a vision, even a rudimentary one.
Mine 3: Pressure to cut corners
Implementations of Agile that put design activities in the same iteration in which they must be developed do ensure designs are achievable in code. But they also put tremendous pressure on the experience design team to ‘feed the development machine’ in time for developers to implement the design. This can and does lead to impulsive design. So, what’s wrong with that? Well, nothing, if you’re not adhering to user-centered principles, which suggest you should test ideas with end users before committing them to code.
Some assert that there are plenty of examples of best-practice interfaces out there to copy. So why reinvent the wheel? Surely we can save time that way? Sometimes they’re right, but how will we know which best-practice interface works best in the context of the user’s goals, with no time to test with the user? And how can we innovate by copying what already exists? Before Google reinvented internet search, other search engines assumed a status quo which required the user to learn how to form proper search queries. It was institutional knowledge among those search engines that this was how searching was done, and customers simply had to learn to use it. Most people’s search results were poor at best. Then Google came along and realized what is now obvious: people just want to find what they’re looking for, not learn how to drive a search engine first. I’m not suggesting the other search engines could not have done what Google did sooner, but I am pointing the finger at a mentality which meant they missed the opportunity. Interestingly, Google is not known for its designers. It’s mainly a development house, but plenty of those developers can clearly put a design hat on too.
There is absolutely nothing wrong with using Agile to produce results quickly; that is, if you don’t intend to release them on your poor, unsuspecting users without some usability testing. Just don’t be fooled that this is going to save you a lot of time if you want your new product to be right, because you will have to iterate to arrive at an appropriate solution. Alan Cooper has argued that this creates a kind of ‘scar tissue’, where code that has to be changed or modified leaves a ‘scar’ that makes the foundations of the program unsound.4
Mine 4: The temptation to call it “good enough”
Invariably, when we have release-ready working code at the end of each cycle, even if it’s sub-optimal, there’s a strong temptation to release it because we can. Agile condones releasing whatever we have so long as it works. Sometimes that means doing what we can get away with, not what is ultimately best for the user. Equally, if we do decide a feature isn’t right yet, its amendments get fed back into the requirements backlog, where temptation strikes again. Should we spend time in our next iteration on a feature we’ve already got a version of? Or shall we develop something new instead? Too often, the rework gets left in favor of exciting new stuff. And so we go on, building a product full of features that don’t quite meet the bar.
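This backlog dynamic can be sketched as a toy prioritization exercise. The items and scores below are entirely invented for illustration; the point is only that ranking by excitement reliably sinks rework:

```python
# Toy sketch of a sprint backlog where "exciting new stuff" wins out
# over rework. All items and "buzz" scores are hypothetical.

backlog = [
    {"item": "rework search results page", "type": "rework",      "buzz": 2},
    {"item": "polish checkout flow",       "type": "rework",      "buzz": 1},
    {"item": "add social sharing",         "type": "new feature", "buzz": 8},
    {"item": "add activity feed",          "type": "new feature", "buzz": 9},
]

# Prioritizing purely by excitement pushes rework to the bottom...
by_buzz = sorted(backlog, key=lambda entry: entry["buzz"], reverse=True)

# ...so sprint after sprint, the sub-par features never get fixed.
next_sprint = [entry["item"] for entry in by_buzz[:2]]
print(next_sprint)
```

Sprint after sprint of this and the product fills up with almost-good-enough features, exactly as described above.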
Mine 5: Insufficient risk-free conceptual exploration time
Iteration “zero” (i.e., a planning and design iteration prior to the first development iteration) can be used for this and other planning activities. However, depending on how long this iteration is, the level of rigor applied to exploration may be insufficient. An argument used by some Agile practitioners asserts that a working example of a solution is the best way to validate whether it is the right one, through exposure to the market. This ‘suck it and see’ approach bypasses an activity called “concepting.” Concept activities dedicate time to sketching different solutions at a high level and validating them in the rough with users before digging into detailed design or code. “Suck it and see” would have us just build it, launch it, and see if it flies. If it doesn’t, we’ve wasted time building something we will probably have to take apart or rebuild. The counter-argument is: if it took as long to build as it would have taken to research and design before laying down a line of code, then we break even. This claim is a stretch in practice, because development itself usually does take longer than well-managed design research and conceptual exploration. Also, there has to be some level of design regardless of which methodology is used, and this adds days to the timeline.
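The break-even argument can be made concrete with a little arithmetic. All of the durations below are hypothetical, chosen only to illustrate why the claim usually fails in practice (research typically being shorter than a build):

```python
# Illustrative arithmetic for the "suck it and see" break-even claim.
# All durations are hypothetical, in weeks.

build_only = 6            # build a working version straight away
research_and_design = 3   # upfront concepting and validation with users
informed_build = 6        # build once the concept has been validated

# "Suck it and see" worst case: the first build misses, so we rebuild.
suck_it_and_see_worst_case = build_only + build_only

# Concept-first: research and design, then (most likely) build once.
concept_first = research_and_design + informed_build

# The break-even claim only holds if research takes as long as a build,
# which is rarely the case.
break_even = (research_and_design == build_only)

print(suck_it_and_see_worst_case, concept_first, break_even)
```

Under these assumptions the worst case of building blind costs 12 weeks against 9 for concept-first, and break-even never materializes because research rarely takes as long as development.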
Mine 6: Brand Damage
Let’s say, for argument’s sake, that design and research take the same amount of time as development. In the worst case, we completely miss the mark with the unresearched, undesigned solution and have to start all over again. Then we’re back to the same total duration after developing it a second time, and there’s no guarantee we’ll get the solution right the second time either. All the while, we’ve repeatedly foisted a botched product design on our users and adversely affected our brand. Many companies succeed on the back of their reputation for producing consistently appropriate products and services. When a company releases a flawed product or service, its image in the customer’s mind (i.e., its brand) is tarnished. Brand damage takes far longer to mend than it does to make. Software creators that fall victim to the temptation of "good enough" and fail to innovate through conceptual exploration put their companies’ revenues at risk. In a competitive market, repeated failure to meet user needs well leads to serious brand and, subsequently, financial repercussions, as the companies who do get it right take the business.
Agile is good for refining, not defining.
If you have an existing product that you want to develop to the next level, then Agile in its truest sense works because you have a base upon which to improve. This means that if you know what your requirements are and these have been properly informed with user research, comparative analysis, business objectives, and analysis of what content you have and what you can technically achieve, then Agile alone can work well.
But spending money on software development without a plan of what to build is like asking a construction crew to erect a tower with no blueprint. Some level of plan is necessary to avoid a Frankenstein of each individual’s perspective on the best design solution.
User Centered Design
UCD requires iteration – design, test with users, refine, test with users again, refine… repeat till it’s right. This is where Agile and UCD can work brilliantly together. Agile really is about presuming you’ll need to change things, and that’s a good thing when it comes to refinement.
Uncovering requirements to form a strategy
User Centered Design (UCD) is not just about answering requirements; it also includes defining them. When we practice UCD end-to-end, we pretend we know little: little about what the solution to a problem should be, and little about what the problem actually is, because assumptions close us off to new possibilities. We prefer to let some design research create a viewpoint, and then form a hypothesis as to what we might build. In this regard, we cross into the realm of product managers, producers, program managers, business analysts and the like, trampling toes with gay abandon and meeting resistance all around. These folks would prefer to confine us away from defining the boring old business need (distinct from the user or customer need) and constrain our UCD work to usability testing of designs that meet the requirements they set out. They’d prefer we stick to just helping with development… and if we can do that quicker using Agile? Wahey!
Is it always appropriate to do extensive research before starting design? That’s a good question, and one that Jared Spool’s Market Maturity Framework5 helps answer. Sometimes, just getting something off the ground, regardless of how precisely we meet users’ needs with it, is all we can afford to do. Once we graduate out of this "Raw Iron" stage into "Checklist Battles" (focused on getting the right features) and beyond, research is a core ingredient in putting our feet in the right place.
After researching what the user and the business require, we can build the “Strategy” tier of Jesse James Garrett’s Elements of User Experience,3 which underpins everything we do during the project. Do this well, and you really shouldn’t come up with something that’s fundamentally wrong. Agile doesn’t account for this beyond a planning phase (i.e., iteration zero), which may well define a strategy of sorts. But does it really define the correct strategy? Surely that’s created through careful consideration of three things:
- Empathetic qualitative research that uncovers the user’s context, needs, goals and attitudes, i.e., user requirements. Cooper suggests that the customer doesn’t know what they want, and advocates a role of interaction designer as requirements planner.4 This would avert building to the wrong requirements in the first place, but the time to do it must come from somewhere in the development lifecycle. It involves talking to users, preferably visiting them in their environments, to create experience models and user personas.
- A thorough appreciation of what else exists in the big wide world in terms of products, features and technology that can somehow be emulated (not necessarily addressing a situation similar to ours).
- A clear articulation of the business problem, objectives, success measures and constraints. Business people sitting in a room discussing what they think should be done must be informed by all of these things if the right strategy is to emerge. Agile doesn’t preclude that kind of consideration, but it doesn’t mandate it either.
If we manage to build something usable and reasonably intuitive without research or strategy, did we succeed? Most MP3 players fit the bill, but none took off like the Apple iPod. Leaving interface usability aside, the iPod had a service concept behind it, which included digitizing, replenishing and managing your entire music library with iTunes. This was part of the iPod concept from the outset and, in combination with good marketing and design, continues to eclipse the competition over seven years later. But that concept needed to be sketched and iterated at some point. If we don’t explicitly build this thinking time into our Agile methodology, we can miss it.
The best of both worlds
How can Agile and user-centered principles work together? First, let’s understand what works well and not so well in each. UCD can be too documentation-heavy, isolated and risky, while Agile needs help with defining requirements and concept development. The work that user centered design calls the ‘design’ phase can produce buckets of documentation that nobody reads, describing interfaces specified in isolation which may not be feasible to code in the time allotted to them. So detailed design is best done in conjunction with the development team, and in a way where the resulting interfaces can be tweaked as you go.
Great article – In addition, I am intrigued to find out how much of this approach you have implemented and what your SWOT outcomes were – a follow-up article, I think. One of the biggest ironies is that Agile promotes leveraging those who know best how to implement a technology/solution rather than those who define what should be implemented. Clients and other business units, including UX, who are much more focused on the ‘what’, are being overshadowed today. As with other evolved industries such as motoring, this will change: research, ideas and the market will define what to build rather than technology alone. However, I suspect we need more UX business-type people to help drive this, people who talk business, understand ROI and value, and can sit with managers, directors and CEOs, if not become them.
I’m currently pushing for a UCD process at work, and we’ve already drunk the Scrum kool-aid.
The points I’m having the largest difficulty resolving are mines 2 and 5.
For #2: because we tend to do two- to three-week sprints, we on the development team tend not to know what’s coming up until our sprint planning meeting. We do a lot of responding to RFPs, which is a minefield in itself. Often, as developers, we get the standard spreadsheet of requirements directly from the customer, without much analysis of the problem.
Which leads into #5: once we know what we need to do, the sprint timer has started, and we often don’t discuss designs and possible solutions until we’re in our sprint planning meeting. Our situation is made worse by the fact that the developers on this team are all remote from each other, so our meetings tend to take place over the phone.
I’ve been an advocate for getting more “look-ahead” in our process, and having a specifically defined role within the organization that works with project management and customers to come up with a design before dumping work on engineers.
I just ended a very Agile project and ran into many of the minefields discussed in this article. There was a sprint 0 for planning, and on the first day of every sprint we had an inception meeting. Unfortunately, 99% of it was spent on the developers. The UX team did work a sprint ahead, which worked well enough.
I think one of the stickiest issues was that the business owners literally did not know what they wanted and went to the developers and asked them what could be done. The “what” was decided before the “why” which I think made for a much more complex situation.
Looking back on the project, I think certain aspects would have worked better if the UX team had sat with the developers and sketched out what they wanted in real time. While very difficult, the idea of seeing the “whole picture” needed to be let go, and the UX team should have worked in a more component-based model.
We have been using a process very like the one you describe extremely successfully for the last few years, although only after a few bad starts and screw ups, of course.
We make a clear distinction between *what* and *how*. We want a well-defined UX, with a complete wireframe stack, *before* we plan a release. The release plan provides a good structure that we expect to conform to during the build – although Agile in theory allows us to change everything, we’d consider that a failure of planning.
What we do use the Agile process for is to let us flex scope *instead of* either time or quality, and I think this is its greatest strength. We therefore do need to plan how the UX would work if a feature is omitted completely – but this is generally something you can do with a well-designed UI. What we do NOT do is completely rearchitect the UX each iteration – UX is just too fragile and expensive to change.
In effect I think the wireframes are not part of the specification, they are part of the deliverable and our (often long) Iteration 0 reflects this, with the wireframes considered a key deliverable, to help the customer work out what they want and how it might work.
Clearly a lot of attention is still required to ensure we have happy customers – the overall promise of Agile, that it naturally produces the best ROI, is probably only possible in a situation of 100% trust, not something an agency will ever encounter. But it does mean we can flex as we need, rather than committing to a completely defined triangulation of scope, time and quality – which is what causes developers such pain.
Great article, Anthony – I think you have clearly identified the obvious challenges of Agile culture. I think you could take the very mines you describe and create a Likert-scale survey for stakeholders to measure the Agile effect on an organization’s product development…
You’ve written an excellent article and helped me think through aspects of Agile that bothered me but that I couldn’t quite articulate. Thanks for that.
One question I’m often asked by developers when I emphasise that we need to do research in Iteration 0, is “How long?” I know the answer (“How long’s a piece of string”) but I wonder if you can put a more defined number on it. In your experience, in projects that get it right, what percentage of the time/budget is typically spent on Iteration 0?
This is a great article. I’ve been involved in the ‘Agile world’ for almost 10 years, and have seen some groups lose sight of the forest (the overall product) for the trees (the individual bits that make up the product). I’ve also seen IA & UX integrated very well into the process, with excellent results.
You should consider proposing this topic for the Agile 2010 Conference in Nashville in August. Submissions may be made until February 26th at http://agile2010.agilealliance.org/speaker.html .
Hi Anthony – great article. At a high-level, I think you really do a great job of highlighting some of the shortcomings of Agile and the challenges of integrating User Experience Design into Agile. There is an overarching theme here of UX people trying to fit their work into an Agile model and getting really really frustrated.
I see two reasons why that is the case. First, Agile was not intended to solve the UX challenges; it was intended to solve developers’ challenges, which is why, as you say, there is an unclear role for design (tho’ “design” has a different meaning in Agile). Second, and this is a point I don’t think you make that is fundamental to this entire discussion: you talk about a typical Agile process. Let’s be very very clear: **There is no typical Agile process** because Agile is not a process; it is a way of thinking about design, an attitude. More importantly, **Agile is a completely different paradigm compared to waterfall** – this is why UX folks keep banging their heads against the wall in frustration when trying to fit their roles into Agile. Trying to fit the role of Information Architect or Interaction Designer into an Agile team is like trying to figure out how to best put a steering wheel on a bicycle. Sure, it is possible to do it, but it will be awkward indeed, because a steering wheel was designed for an automobile paradigm and not a bicycle paradigm. This is something that is very very hard for traditional designers to get their heads around. Agile replaces the idea of roles with the idea of competencies. Like a sports team: while you may generally play left field, you would not hesitate for a second to take over the center fielder role if the situation demanded it.
Re. your statement that “spending money on software development without a plan of what to build is like asking a construction crew to erect a tower with no blueprint”: this is a perfect example of old thinking being applied to a new paradigm, and it completely flies in the face of fundamental Agile thinking. The construction of a building is a *terrible* and misleading analogy for building software, which is nothing at all like physical engineering and construction. The very reason we are stuck with a waterfall model is that the engineering model was misapplied to software. In contrast to a physical building, for which you have standard communication notation (e.g., building codes), software has no such attributes. This is why software development plans are in fact no more than speculation. *That said*, I absolutely agree with you that there is a need for some up-front planning, some big-picture design, something we at ThoughtWorks refer to as a QuickStart, with “Quick” being the operative term. In other words, it needs to be very brief and intense, and undertaken with the understanding that until you start building, you really won’t know what you’ve got. Until you are building, you are only speculating and proposing. It is not until you start building that you really are defining, which is why I disagree 100% with the statement that Agile is good for refining, not defining.
Sorry to come off so negative. Great article overall.
Overall, it’s a good article. It covers the most common issues (in the context of user experience) that folks run into when they use Agile. However, based on my experience with Agile, many of the statements seem very generalized. Agile is a philosophy and not a “process.” Agile is supported by well-thought-out practices and guidelines that are based on Lean principles. One can apply “Lean thinking” to UX design work as well. As a matter of fact, my wife, who manages finance, uses Agile within her team, and it works well for planning, communication and collaboration. There are good adoptions of this philosophy and there are bad adoptions. I liked this blog entry by Steve Yegge – http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html. Here he talks about “good” agile and “bad” agile.
I have worked with companies where most of the “cycle time” was spent on defining the product vision and user experience. Even with people working on user research and design for months, their designs and ideas were not even close to “Apple,” “Google,” “Amazon” or any other company that seems to have a good handle on user experience and innovation. There is something to be said about the “people” and culture of those companies. More lead time and more process do not, by themselves, deliver good design and innovation.
Currently I am in the process of introducing Agile into my organization. As we go through the Agile transformation (which includes people, process and tools components), I have been asked to define how “ideation” happens in the Agile world, where user experience fits, how long we should stay in ideation, when the UI design is good enough to start development, and so on. My answer is that it depends. One thing is for sure: we need to think about the overall product vision and overall user experience before burning development hours. How long you have to stay in I0 or pre-I0 depends on what you are working on. If you do it right, you are very likely to strike a good balance where designers get enough time to research and design, and are empowered to say no, it’s not ready, when it’s not good enough. When to say “good enough” is important, and who says it’s “good enough” is important as well.
Thank you for this article, Anthony. I think your conclusion touched on Rajeev’s points: “dogmatic attitudes about each of these [UCD/Agile] approaches should be avoided if they are to be combined.” Right: Agile is a philosophy and not a process… My experience has been that people who are ingrained in long-time waterfall methodology and old-school practices cling to a process. It is really hard to get both parties to think philosophically :). Anthony points out the minefields there, and for that I am grateful. I am bringing this article to a meeting today about “Institutionalizing UX” in our recently-converted-to-Agile culture.
Due to a random technical issue, Anthony can’t post comments.
Here are Anthony’s responses to comments thus far:
@ David Travis: I have never worked out a formula for estimating how long something will take to research and concept in relation to its development time. I think that would be hard, because different situations call for more or less research time depending on the approach. Make sure you allow enough time to a) clearly define your user requirements through whatever research techniques you have at your disposal, b) sketch out the ideas and validate them with users on paper, and then c) create a kind of pattern library or interaction model that you can use as your reference point when going into the sprint cycles.
@ Anders Ramsay: I respect what you’re saying about a totally new paradigm and competencies. But I think you dismiss the blueprint metaphor too hastily. The very fact that there are no “building codes” is the issue I’m trying to illuminate. When it comes to interface, it’s simply not good enough to have many ways to skin the cat. I personally don’t care what happens in code (though perhaps that’s just ignorant of me and I should); what matters to me is that the interface is consistent from one section of an application to the next. When you build one thing in isolation from the next without a “code” for how it should work at the interface level, you end up with different interaction styles for each item. Whether that’s a subtle difference or a huge one is down to the team’s ability to share and communicate with a shared frame of reference, such as an interaction model. Maybe this is the QuickStart you talk about? What I don’t agree with is the assumption that the requirements you’re building to are in fact the right ones to address the user’s needs and goals. I believe that design has a huge role to play in defining requirements through deep, empathetic design research into user needs and goals. That has to come before we start sketching anything – waterfall? It simply has to be if it is to mitigate the risk of wasted development cost. Too many companies treat Agile as some kind of substitute for good product development practice, and that’s the minefield.
@ Rajeev Kumar – I accept your point about process vs approach. UCD is an approach too, but I think we can get caught up in semantics. Both have certain activities that get done in particular ways. Your point about Apple, Google, Amazon and people is spot on. You really can’t expect a process/approach alone to give you good results. As the adage says, “what you put in is what you get out…” However, you can have talented people hamstrung by a poor process/approach… Who gets to say “good enough” is a topic of even hotter debate, and one that’s well worth having!
Good points, Chris. Very good suggestion on David’s point. I agree with you 100% that even the best can deliver “average” quality work under a bad process/culture/structure.
For figuring out what form of Agile works for the team, my approach so far has been to introduce the smart guys to Agile principles and let them help us figure out the practice details. As a change manager, my job is to get them the culture and structure (and other org stuff) they need to bring the best out of them. Not that I am successful in getting them all they want. The Agile philosophy doesn’t really pay off if the “org and skill stuff” is not aligned. I tried to introduce Agile at a very large company. The change did not take root because I couldn’t align the “org stuff” with Agile. Sometimes, a well-defined gated/heavy process is what works for a company. Not that I want to be a part of that kind of company.
In my current organization, our product delivery cycle looks similar to what Anthony has described as Agile UCD. That’s how our practices evolved. For “good enough” and product backlog prioritization, we let the team (including the CX/UX team members) decide what works for them. Our guideline is to have the customer experience/user experience leads and product managers make the decision together. We run our “delivery” prioritization/planning meetings every other week, which takes care of “Agile as we know it”. It’s not uncommon for us to take stories/epics out of the delivery backlog and put them into the ideation (“research and concept”) part, aka I0/pre-I0. The “dedicated” team can ask questions and make suggestions, but when it comes to aspects of the user experience, the decision is made by the CX/UX person. There are other reasons why a story can be thrown out of the delivery queue. This works well for us.
For us, Agile works both for new/big-ticket items that go through a good ideation process and for “lights on” work (that keeps the team busy). Of course, we spend longer in I0 for big-ticket items than for enhancements.
On a side note, I kind of understand what Anders says. I don’t personally like the house analogy when it comes to building software. It’s funny that just yesterday I had a product manager in my office using the same analogy to argue why their product ideation cycle should be so much longer than the other products’. Not that I knew the right answer on how long it should be…
I really enjoyed your article. It makes some very valid points, which I have made myself in the past. I am a certified Scrum Master, and I often encounter these problems and abuses.
I wanted to add that some of your points are well accommodated by Agile / Scrum / XP.
(1) From the section “Mine 2: The requirements gathering process is not defined”.
Scrum does have a requirements gathering process. Requirements are documented in the form of User Stories, and managed in a product backlog.
It is true, though, that one of the biggest abuses of Scrum/Agile is people thinking it eliminates documentation altogether. This is a common misconception.
(2) In the section “Mine 3: Pressure to cut corners”, the author implies that Agile does not consider the client’s needs in the development cycle.
“This can and does lead to impulsive design. … you’re not adhering to user centric principles which suggest you should test ideas with end users before committing them to code.”
In eXtreme Programming (XP), a different flavor of Agile, practitioners advocate having an “On-Site Customer”, a concept emphasizing the importance of the client’s needs.
“With XP, we get extreme about customer involvement. we’ll mandate that the customer be on the project full-time for the duration and be located on-site with the team.” – Stewart Baird
(3) The following comment is from the section “Mine 4: The temptation to call it ‘good enough’”: “Agile condones releasing whatever we have so long as it works. Sometimes, that means doing what we can get away with, not what is ultimately best for the user. Equally, if we do decide that a feature isn’t right yet, its amendments get fed back into the requirements backlog where temptation strikes again.”
The truth is that in every process, people are guilty of releasing a product just because it fulfills the requirements. This is not unique to Agile. Iterative development is beneficial for meeting deadlines; otherwise, products would never be released.
There are several points here that I do agree with, like:
“Agile is also good at avoiding feature bloat by encouraging developers to do only what is necessary to meet requirements.”
“Because Agile requires a lot of engagement with the client (i.e. at the end of every iteration, which can be as little as a week) it mitigates the risk of going too far toward creating something the client doesn’t want.”
We often see that different tasks have dependencies on each other: development depends on design, QA depends on development. This creates a constraint on doing tasks simultaneously.
@ Isaac Loloely As you say, it’s easy to step on these mines. But, with experience, you can avoid them if you incorporate some good product development and UCD techniques. There are a few things you’ve said that I’d like to discuss.
(1) You’ve stated that Scrum has a requirements gathering process, but I disagree. I think what you’ve described is a requirements DEFINITION process, and there is a subtle difference. One is about working out which requirements to write down (gathering). The other is how you write them down (definition). What I advocate is applying proper rigor to gathering via deep, empathetic design research, some of Indi Young’s mental modelling technique or similar, then defining the requirements through stories.
(2) XP describes the customer as your client. Frequently, when you’re developing software for a client, that client actually has customers who will be using the product you’re building. I’d advocate that in this case, you need to be testing with that end customer, not just your client, who may have no better idea of what the end customer understands than you do.
In the case where the client is representative of the user of your software, having them on site lets them get quite familiar with the difficulties you face, and they come to sympathize with the trade-offs you have to make. They get “Stockholm Syndrome” (i.e. identifying with their captors). They lose fresh eyes and become less critical as they grow more and more familiar with the developing product. Great for the development team; not so great for the other users who will have to live with the UI compromises your client has accepted. This is why we usually try not to user test with the same participant more than once: the more familiar they become with the interface, the less likely they are to help you see what is unintuitive about it. I know some interfaces are designed to be learnable rather than instantly usable, but even those interfaces need to be intuitive to learn.
(3) You make a very good point. No process can guarantee that someone with the power to do so won’t call it “good enough” before its time. However, by incorporating UCD techniques such as user testing, we can implement gating success criteria that go beyond what the client (with an eye for saving money) would normally accept. Of course, we’d do this in the best interest of the client’s longer-term development cost, since hopefully they’d not need to come back and redo things again and again that would have been cheaper to fix (in dev cost, customer acquisition or brand damage) the first time around. The same goes for proper QA. The war story I told at the beginning of the article suffered greatly from Mine 4 at the expense of user testing AND rigorous QA. Why? Because we were slavishly following an Agile process rather than being pragmatic about what would ultimately be the best thing to do.
Great article Anthony.
One thing I’ve noticed during the “Agile ‘v’ Waterfall” debate is that there has been a strong focus on process and less of a focus on outcomes. Even though Agile is supposed to be a philosophy, it is commonly interpreted as a process (one of short development cycles and little or no upfront planning/strategy/research). It shouldn’t matter if a business calls how it works “Waterfall”, “Agile” or “Watergile” – you need a way of working that produces the right outcomes for your team, client and the end user.
I think it’s a great start for everyone in the team to sit together to find out how they’d like to work through the project, instead of this being mandated by someone – such as your Head of Product Development. Everyone in the team usually has some sort of idea of how they’d like to work together and can assess what needs to be done, and this can be co-ordinated by the Project Manager.
You’ll always need the pieces of the puzzle to put it together, but how you put it together should be decided by the team. The things that you know you need upfront should be able to be done upfront. If you work for a company that has a strong UCD focus, this would usually mean some form of user research would be conducted to help define the goals and vision of the product. This doesn’t mean that stacks of documentation are going to be created and then handed to developers. Personally, as a UX designer, I like working with developers in a low-documentation, modular, highly iterative environment, but only when the core principles and interaction framework are defined up front, like in your “Best of Both Worlds” diagram.
Most people enjoy working in the buzz of a collaborative team environment, where what you produce as a team is better than what you could’ve created as individuals (the whole is greater than the sum of its parts). However, it’s important to think through how you’re going to get there, so the right inputs and steps are taken to achieve the right outcomes.
Anthony, thanks for the great article. I’ve been working with a team that’s recently switched from waterfall to Agile, so your insights are hitting close to home. In that time we’ve hit nearly every mine in the minefield you’ve laid out, but I did want to emphasize two of the biggest in my experience:
1) Forest for the Trees – In the case of large development teams working on a single project, they’re often divided into multiple individual scrum teams, each with their own focus. As they move forward, their marching orders are to do what’s right to complete their user stories in the most timely fashion, but there’s no default mechanism for the teams to communicate with each other. So when it comes time to review the products at the end of the sprint, you have a product with lots of individual features that meet the letter of their requirements but isn’t necessarily (or even likely to be) a coherent design across the product. Great products aren’t defined by their individual features as much as by a user experience that spans multiple features.
2) Shrink to Fit Quality – This is similar to mine #4, but I think it’s an important variant. I agree with your insight that Agile should be a methodology that allows the team to flex scope rather than time or quality, but in practice I’ve found, more often than not, that the time constraint forces the team to flex quality, because the ‘right’ way to address a requirement is to build something that’s ultimately too big to fit into a sprint. If there’s another, cheaper way to achieve a similar result that does fit in a sprint, it necessarily becomes the recommended plan of record. The quality of the individual solution might turn out to be just fine, but the bigger problem is that it wasn’t the right long-term solution in the first place. This also ties back into the first item, because I’ve found that teams would rather build a lot of one-off specialized solutions to their issues, which become a sustaining nightmare, than cooperate on a single ‘right’ solution that may be bigger than any of the one-offs but would ultimately benefit all the teams.
Lastly, I wanted to point to one of the other challenges I’ve seen. I immediately came to the same conclusion that so many others have: the UX team needs to be working a sprint ahead of the rest of the team so we can ‘feed the machine’. The problem, though, is that we can’t work in a vacuum. We often need input from the product owners or technical staff, but given that they’re usually knee-deep in their own sprint, it’s difficult to get the level of participation and focus necessary to design the right thing for the next sprint. The best solution I’ve seen to date is to have the product owner only loosely participate in the development for the current sprint, and instead work on user stories in the backlog and participate in planning for the next sprint.
Again, thanks for the great article. At times it felt like you were channeling items straight from my brain. 🙂
First some bits I agree on:
1) there’s often a focus on shipping new features at the expense of tidying existing ones
2) the push to ship unfinished features is short-sighted; frightening users away with a promise that things will improve is a precursor to ‘brand damage’
3) any development that doesn’t include time for user testing AND accessibility testing and allow time for findings to be worked on and integrated is broken
4) untrammeled product / project managers making ill-considered decisions are a massive problem
5) dogmatic scrum ‘masters’ do exist
On the first three I can only say that pressure for visible movement will always be with us. Being seen to get one step ahead of competitors (external and often internal) is one price we pay for competition, and competition is innate to capitalism. The only hope is an enlightened management acknowledging complexity and giving people enough time to work through problems…
Number 4 I’ll get back to.
And number 5 is just an extension of a general truism: all development methodologies quickly sink into dogma. Unfortunately, organisations are suckers for process, workflow and box-ticking. (And as many people have asked, “have you ever met anyone who FAILED scrum master training?”) Anyway, all process is a creativity sink. This is equally true of UCD which, applied dogmatically, is one of the least user-centric approaches I can think of. Putting real code in front of real users (with assorted accessibility requirements) is one of the best. So long as management gives you time to act on the findings.
Afraid the rest of this comment has more of the usual ‘someone on the internet is wrong’ smell about it. Apologies, but…
…your opening minefield comment that “one has to ask whether [agile] was devised to treat a symptom of the larger cause: the business doesn’t know what it wants” is true, but understandably so. Things change. Business priorities change, businesses merge and divest, the competition changes, *user expectations change*. Failing to acknowledge this and ploughing on with ‘big design up front’ doesn’t address that problem and doesn’t work. It also doesn’t address the fact that in a complex system it’s often impossible to judge the feel of a website until real data hits real code; design that works on paper too often breaks in reality.
(I’ve worked in the past on a music website that followed the full UCD methodology: research, personas, experience models, sitemaps, wireframes, Photoshop comps… In the time it took to work through all that, both Last.fm and iTunes shipped. Our site was five years out of date before it even went live.)
For fear of typing a longer comment than the original post I’m only gonna deal with your first mine (an unclear role for design) but I suspect most of the rest drops out of that plus the usual concerns about management allowing time.
You say “some […] developers may have design skills. But that’s not a particularly common scenario”. This is either a very restrictive definition of design or just wrong. The question is: where does design happen?
Every time a developer creates a database, a model or a URL scheme, or marks up a document, that’s design. Writing code is design. Many (the majority?) of the creative people I’ve worked with have been from the developer side of the wire.
The Google example you use is interesting in this context. Google was definitely ‘designed’ but no user experience professional or even the majority of software engineers I know could have designed it. The Google breakthrough was not the realisation that “people just want to find what they’re looking for, not learn how to drive a search engine first”. The breakthrough was making that possible. And making that possible was about software engineers and mathematicians taking academic models of citation, making the analogy to link density and *designing* a solution that used eigenvalues and n-dimensional spaces and serious maths to crack the nut. You could have spent months ‘concepting’ and taken the conclusion that “people just want to find what they’re looking for” to Yahoo! but the concept would be useless without the design and the design was tricky to say the least.
Back in the day, flat websites were ‘designed’ by designers who handed over their wireframes and specs and photoshop comps to developers with the request ‘make it so’. Designers were creatives; developers just tradespeople. Having come across the UCD +/vs Agile arguments a few times, there seems to be a temptation to wrestle back control and return to these glory days. But with the complexity of modern web applications it’s just not possible. No one person knows enough to be given overall control of design direction. This includes product/project managers, which is my easy answer to point 4 above.
You go on to say that ‘in good software development, a conceptual interaction model that has been thought through beforehand, outlines how the user navigates the system, performs tasks and uses tools in generic terms’. For a website of any complexity, interaction design (together with quality copy, visual design and quality a/v) is only the visible 20% of the application. It’s not an unimportant part, but the other 80% of the application also needs to be “designed”.
In a later comment you say that “I personally don’t care what happens in code (though perhaps that’s just ignorant of me and I should) what matters to me is that the interface is consistent from one section of an application to the next”. For an interaction designer this is fine, but again I think it’s short-sighted to assume that design stops at what the user can see/feel. Taking Twitter as an example: it’s a beautifully designed service, but the design insight was all about low-friction communication and open APIs. Many Twitter users interact through clients and never go near the Twitter website. The Twitter interface is consistent, but more important is consistency in the application/client interface, all of which required design.
So decisions made further down the stack (what gets modelled, data flow, data licencing (particularly if you’re dealing with user data), URL design, content negotiation / device detection, caching, document design, feed design) are not just examples of design but fundamental to a wider definition of user experience (in terms of SEO/findability, website as platform, pointability, perceived performance, accessibility etc.). If you don’t spend time designing that stuff, then obsessing over a “consistent interface” is just putting lipstick on a pig.
To quote Steve Jobs (and why not?) “Design isn’t about how something looks; it’s not something you put onto the outside of an already-built product. It’s how you build the product, from the inside out.” In web terms it’s about design decisions all the way up the stack and working well with the design decisions of the web (statelessness, HTTP, URIs, HTML, CSS). If you’re not interested in code and not interested in technical design decisions of the web (which have moral and ethical implications) then I’d suggest you’re not a web designer and might be happier working in a different medium…
In conclusion I’d say the role of design in agile is clear. Everything is design; everything is development. Things work best when different people with different skills work together to solve problems. Attempting to rebuild the walls between design and development (even by staggering sprints) is a mistake. As Craig Webster says “A [user] story is a token for a conversation. Unfortunately very few teams actually have the conversation.” It’s up to designers and developers to have those conversations and up to management types to give them the time to do so.
One final point in answer to @doug’s comment: “what we do NOT do is completely rearchitect the UX each iteration – UX is just too fragile and expensive to change”. Yes, UX is fragile and expensive to change, but it’s actually a *lot* cheaper to change than any other aspect of a web application. Web apps are a layer cake of database, models, controllers, views and CSS. The further down the stack you make changes, the more impact on the layers above. Changing markup + CSS is much, much cheaper than remodelling your database, for example. Not that constantly changing your UX is a good thing, but the pain point is usability/consistency, not designer/developer overhead.
Anthony, great article! It has fostered a lot of discussion and brought to light issues teams often come across when trying to implement Agile into their processes.
You mention, “Agile does a good job of flexing to the winds of change. But one has to ask whether it was devised to treat a symptom of the larger cause: the business doesn’t know what it wants.”
Your observation is the root problem that our company works to solve for development teams. Even waterfall, when combined with rich prototyping and easy collaboration, enables stakeholders to interact with a simulation early in the process. The result is an unambiguous understanding that allows the client and the development teams to iteratively flesh out the project prior to writing code. Waterfall teams that integrate prototyping become more agile because they elicit and define requirements through iterative visualizations rather than comprehensive documentation. This process would be a hybrid of the Agile philosophy and waterfall method you mention.
Prototypes also allow Agile teams to “plan the big picture” while accelerating their development process. Creating an interactive visualization prior to each sprint ensures that everyone is on the same page and sprinting in the right direction.
I’d really like to hear your thoughts on how you think interactive prototyping fits into Agile UCD.
@ Michael, you seem to think that I’m advocating a big up-front design process devoid of developers, but I’m actually not saying that. Yes, I am saying you need to do research and some modelling, but not exhaustive detailed design that specifies each and every detail. Your bad experience with a “full UCD methodology” that saw you launch something five years out of date sounds like you just took way too long to launch your site. Is that the fault of UCD techniques? Or a lack of urgency? Five years is a large gap between idea and web product by any standard. I would say it is unfair to place the blame at the feet of UCD. After all, it is a philosophy, just as Agile is, and it does not mandate deep-dive waterfall. Though some do get dogmatic about doing either in a particular way.
In a place such as the BBC, where you have ample talent, creativity, a core value of quality, far less drive to get things done quickly, and an editorial heritage that doesn’t lend itself to software development, Agile is a perfect solution. Why? Because these qualities demand you get the product right, and they’re more important than the imperatives which drive other commercial companies into cutting corners for perceived savings. This environment somewhat cocoons you from the experience so many others have. Those people, particularly ones who put design and build in the same sprint, can fail to produce anything good and have a miserable time in the process. How do you save money paying a team of people to build something that doesn’t work for the user? Fact: using paper and pencils is far cheaper than writing code. Developers can use pencils too! Fact: you can learn a great deal from user testing with paper and pencil, regardless of the richness of the interface. There’s a lot of value in a prototype, you agree. But you seem fixated on that prototype being high-fidelity, working code. I don’t buy that it is the only or most cost-effective way to learn. There is a time for testing working code, and that’s once you’re confident you have the broader brush strokes of the overall proposition and information structure right for the user. All applications have this. Not just flat websites.
The other great topic for discussion is: what is design, and who does it? Naturally, every team member’s work has an impact on the user’s experience. Only some work on the part the user touches. Some of those have a job title that suggests their primary function is to worry about that before all else: User Experience Designers. For the purposes of this article, that’s who I had in mind when I wrote “an unclear role for design”. It’s interesting quoting Steve Jobs to back the point that design starts on the inside. But that isn’t actually how Apple works. The first thing Jobs looks at is a Photoshop comp or a sketch that defines the user’s experience, not a working prototype to be “skinned” later (which is his point). When Jonathan Ive designs a new piece of hardware, he sketches it (and the outside first, not the inside). Sure, good designers appreciate how things have to work. But the point is, it has to work for the end user, and there are some disciplines which are more tuned into that world than others. No database architect I’ve known is focused on or skilled at designing a GUI that is easy or fun to use. I’m sure they exist, but I haven’t run into this special breed of unicorn. But that’s not what you’re getting at with the Twitter example.
This opens up a different point. Who are users? Are the patrons of an open API users? Yes, they are, and perhaps our database architect is a better designer for this “persona”. Could database architects benefit from an API user persona? I’d like to think that makers of third-party products have needs and goals too. Interestingly, these API consumers sometimes need to make GUIs at the end of the day, because raw data is kind of opaque to their customers: “Joe Public”. So I don’t think caring about how that gets done puts me out to pasture just yet. But thanks for the inference.
@anthony – I suspect we’re not about to agree on this but here goes anyway…
Firstly my “five years out of date” comment was misleading. I didn’t mean the project took 5 years. From (pained) memory it took about 12-15 months. The point I was trying to make was that during those months iTunes took off and Last.fm happened. The market changed and user expectations changed. It was 5 years out of date because it felt like the market moved on 4 years in those months. Which is often why businesses “don’t know what they want”; or rather why businesses change their minds and why agile is better suited to cope with this.
The other major problem with that project was that despite all the personas and wireframes and sitemaps and photoshop comps, by the time we’d written the markup and added the CSS, the data we had couldn’t be queried in a way that allowed us to build the product we’d planned. Much of the project went in the bin, and what we ended up shipping was a very small subset of what had been intended. Starting by designing the data model and working up just lessens the pain.
I’m not blaming UCD for this and not saying that some design shouldn’t happen up front. I am saying that designing from the visual layer down is much less efficient and scalable than designing from the domain model up.
I should also say that this experience isn’t just confined to my current employer. I’ve seen plenty of projects fail in plenty of different organisations because they’ve spent too much time designing the skin and not enough time designing the skeleton.
We (designers, developers, IAs) spend a lot of time with paper and pencil (or, more often, whiteboard, pen and post-its). But when it comes to user testing web design, there’s no substitute for working code on top of real data. You can theorise interface details til the cows come home, but in my experience trying to second-guess how the application will ‘feel’ when the data hits the code is just misleading. Things that worked in my head and on paper just felt wrong when seen in reality.
I wouldn’t say I’m fixated on the prototype being high fidelity; just that writing prototype code AND real code is inefficient, so why not make the prototype the real thing? As (possibly more) important is the use of real data. Interfaces need to evolve as data volume increases. Taking my current project as an example, views that used to work when the system had a low volume of data now need to be redesigned as more and more data hits the back end. Attempting to get the design right without being able to experience the experience is too often futile.
So, on to design and who does it. I’m slightly confused by your statements: “Naturally, every team member’s work has an impact on the user’s experience. Only some work at the part the user touches.” I suspect it’s more accurate to say only some work on the part the user consciously touches. Take my earlier example of HTTP caching. It’s a small detail, but set up incorrectly its impact on UX (in terms of perceived response times) can be immense. It’s even worse on mobile, where lack of caching means more return trips to the server and bigger data bills at the end of the month, which has to be a user experience issue?
Possibly the fact that “some [people] have a job title that suggests their primary function is to worry about [ux] before all else” is the problem here? Maybe we just need to go back to visual designer, interaction designer and information architect and embrace the fact that it takes more than those to make a user experience?
Taking a wider example from the project I’m currently working on: we have a single data model, but with different business logic and UX on top we have two, three, four different ‘products’. The important point is that the domain model which underpins all of this is built taking users’ mental models into account. And my point is that user experience starts with what you choose to model. I can’t think of a better link than Tom Coates’ “Age of Point-at-Things”, which talks about data modelling as the starting point for user experience (comparing ‘episodes’ on the BBC (a concept that exists for users but which is not in any BBC business systems) to the lack of ‘works’ on Amazon Books and of ‘performances’ of songs on iTunes). Taking a user-centric approach to domain modelling and building upwards and out from the data model enables freedom of movement in the layers above, which allows the site to evolve over time without changing the underlying model.
I’ve seen lots of projects built in the opposite direction; taking the ‘ux’ layer and working down the stack. That works fine for a first iteration but because the underlying model is designed to build a specific interface it gets trickier and trickier to warp the data model as the product evolves. Eventually the cost of changing the data model to cope with interface updates gets prohibitive and the whole stack needs to be rebuilt.
You go on to say “some disciplines [..] are more tuned into that world than others. No database architects I’ve known are focused on or skilled at designing a GUI that is easy or fun to use.” The point I’m making is not that DBAs are gonna build beautiful GUIs. That breed of unicorn may or may not exist; like you, I’ve never really met any. The point is that they do design too, and that design has a ***fundamental*** impact on user experience. If they model the wrong things, no amount of shiny CSS is gonna make that better. And correcting any mistakes they make is far more expensive than fixing the GUI. Because, again, the further down the stack (CSS > markup > controller > model > database) you make changes, the greater the impact on the layers above.
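The “works vs. products” idea can be sketched in a few lines. This is a hypothetical illustration (the class and field names are mine, not from any real catalogue): if the data model only stores concrete editions, no interface layer above it can group a paperback and a hardback as one book, no matter how good the CSS is.

```python
from dataclasses import dataclass, field

@dataclass
class Edition:
    """A concrete, purchasable printing of a book."""
    isbn: str
    fmt: str

@dataclass
class Work:
    """The abstract book a user actually has in mind (e.g. 'Moby-Dick')."""
    title: str
    editions: list = field(default_factory=list)

# Model the Work explicitly, then hang editions off it.
moby = Work("Moby-Dick")
moby.editions.append(Edition("978-0-14-243724-7", "paperback"))
moby.editions.append(Edition("978-1-85326-008-3", "hardback"))

# With Work in the model, "show me all formats of this book" is trivial;
# without it, the UI must guess which editions belong together.
formats = [e.fmt for e in moby.editions]
```

Adding the `Work` concept later, once millions of bare `Edition` rows exist, is exactly the kind of expensive deep-stack change the comment describes.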
I can’t really comment on the Apple example since I honestly have no experience of how they work. If it is from Photoshop comps, I fear that just depresses me…
On the final point about API design, I honestly believe that no one benefits from personas. They’re an abstraction of real people; the world has quite enough real people without resorting to caricature. And no persona is gonna give you the insights you need about modelling works rather than products, or episodes rather than versions. Again, a good API is reliant on a good data model, but again the API (as user) experience relies on good HTTP design, good document design, etc. Much like experience for humans, it’s a good mix of skills that makes an API work; no one person knows enough to make everything tick.
@ Michael – I don’t think our viewpoints are actually completely different in practice. I talk a lot about design activities in the context of the interface for the purposes of this article, because the people who design interfaces suffer the most when companies adopt Agile, and these same people frequent Boxes and Arrows. I agree that the world can change, and quickly, and I agree that developing in an iterative fashion using agile methods is usually an excellent way to go, provided you don’t step on any mines. In practice, a multidisciplinary team making design decisions together is the best way forward, because you do get all kinds of design going on in parallel.
Starting at the UX layer, in my opinion, is not always starting at the interface layer. It’s about understanding what the right user experience is, regardless of the channel users touch and the medium they use. I think we agree on that too.
We will have to agree to disagree about personas and paper prototypes. I believe everyone benefits from personas because they stop us talking about ‘The User’ as some amorphous conglomeration of our own perspectives and force us to adopt different lenses to look at the same problem. I also agree that our assumption of what will work can change slightly when we put an idea into working code (though usually not fundamentally). But I can’t get past the fact that you learn a great deal from testing on paper and can thus save yourself a great deal of heartache, even if you add to that learning with working code.
Thanks for the comments. It’s great to have some good Socratic dialogue around these thorny issues!
@Andrea – prototypes are a vital part of the product development process regardless of process or approach. There are lots of different levels of fidelity to choose from, and each is best suited to different purposes. Low-fi is very valuable for validating core principles and flow. High-fi gets you conclusive evidence as to the efficacy of the final solution (particularly for rich interfaces or complex solutions), but obviously this costs more, and at some point we need to assess the risk of developing our proposed product without seeing it in a prototype. How new or different an implementation is it?
But whether this is the best way to address the business not knowing what it wants is debatable. Yes, it is a very good tool, but there are other ways to guide a business toward a great user experience that involve really understanding the problem it’s trying to solve (a.k.a. “the opportunity”). This involves a number of different types of research to uncover various opportunities, e.g. design research with methods such as Contextual Inquiry to understand user behaviour; comparative research that looks at analogous experiences in different industries and markets; or technological research that identifies trends in technology that may lead to some interesting ideas. This step narrows the strategic options down to the right playing field and provides fertile soil in which to plant ideas; in so doing, it maximises any investment in a prototype.
@Anthony, thanks a lot – great article!
Well, I am very amazed at how much I am learning from your article and from all the comments! I am still very new in this area and still learning… so please forgive me if I don’t leave a ‘smart’ enough comment 🙂
I am now working on the final project for my master’s degree in Strategic Product Design at TU Delft in the Netherlands. I thought I was being overly ambitious in bringing UCD into an agile development process (as many people told me).
In my opinion, UCD approaches are currently not appealing for business investment, as they are really implemented in the early stages of development (like R&D), and no one can be sure whether in the next step the designers or marketers will really take advantage of the results. Despite all the challenges you describe in your article, I believe UCD and agile development can somehow bring a lot of benefit to companies and help bring new, innovative products to market faster. In the end, UCD should become a strategic platform in the business environment, not only in the design environment.
Please keep writing inspiring articles! They help more people than you would ever expect.
@ Anthony: really great article! Just the kind of balanced and experience-derived take that makes Boxes & Arrows such a great professional forum.
(also @liliskirman) Have you heard of Menlo Innovations (http://beta.menloinnovations.com/)?
They are an Agile (XP) and UX (“High-tech Anthropology®”) custom software shop in Ann Arbor, MI that has been successfully wrangling with all of these issues since *2001* (do you know of anybody else that was trying to do this that long ago?). I worked with them for a couple of years before bouncing across the pond (to Germany) and found it a deeply satisfying way to make software. I’m very hopeful that this model of development will catch on much more widely. Articles such as this one, which attempt to clearly map the terrain, are a big step in that direction. Thanks!
In my experience, it does tend to be developers who are particularly drawn to Agile, and it has a lot to do with the battle lines between them and business sponsors. But slowly, as the practice matures, it’s being recognized that simply taking a willing user hostage for the duration of a project may not be the best way to cover the UX role ;).
@ Michael: I’ve got to say that while I not-infrequently wanted to yell at my computer screen while reading your posts, I do very much appreciate your taking the time to spell out your position so thoroughly. “Design” is an over-burdened word (here in Germany, for example, it nearly exclusively means “graphic design”, and given how much half-English/half-German one tends to speak in the context of software dev, you can imagine the confusion when a word that is not entirely clear in English means yet something else in the native tongue). I think we can actually all agree that the intellectual and creative process we native English speakers tend to mean when we say “design” takes place at every level of a software development team (especially if they’re doing OO). To the extent that there’s anything innovative going on (and every new project demands some innovation), somebody has to make sure it fits with surrounding patterns and will function across all the imagined contexts for which it is intended; that is, somebody has to design it. That happens at many different levels, from the size, type, placement and labels of the UI controls to the deepest-level data repository. If, anywhere along this chain, the models used don’t map to those in the mind of the targeted end-user, the thing just doesn’t work, period. Yes, the deeper layers have a grandeur all their own that is appreciated by far too few (though competence at this level is certainly well compensated, so it’s some other sort of “appreciation” we’re talking about…), but how exactly do you propose that the designer of that deepest layer should map his design to the model in the head of its eventual end-users? I’m sure all of us have had experiences with databases that were wicked fast and theoretically sound, but that spat out nonsense and/or refused to take our input as we normally conceived it.
And, unless we’re talking about a team of one, it’s not only important that the DB designer know that end-user mental model, but that her understanding of it is shared with everyone else on the team, including the sponsor. To that end, somebody is going to have to go out and meet the putative end-users “in the field” and get a deep enough sense of their goals and desires to represent them back to the team. I don’t know what tools you would propose for this process, but in my experience it is best accomplished by describing the persona for whom the system is to be designed and following that up with the design of the initial contact points between the user and the system (the UI). From there, those whose design and other skills are more in the area of modeling abstract, logical spaces and articulating them in some version of machine code can pick up the torch and carry it forward. If your plan is to make software tailored to the end-user, then you’ve got to have a model of said user in mind. Absent an articulated persona, that role will be filled by any of a variety of agents, and likely not the same one for any two members of the project team (to say nothing of the sponsor – who knows what exactly they have in mind!). Such situations often do not end well. It doesn’t have to be that way. (Of course, if you are a dev team of one to three people, and you are all similar and designing for people just like you, and you don’t develop any idiosyncrasies during the project based on your “inside” perspective that won’t be shared by your intended users, then, sure, it would probably work out – without any more than the usual software dev project pain.)
My sense: yes, we are all designers. I don’t mean that trivially; it’s important to remember. Further, for a project of any size, one brain (actually I prefer two, and paired) should be modeling and facilitating the selection of a targeted end-user based on first-hand field observation and interactions, while another brain (or, again, a set of brains working in pairs) is modeling the mechanisms that can empower that end-user to achieve the goals that are motivating her interaction with the machine in the first place. And, of course, this all has to happen in a project context given its contours by clearly articulated business goals (teasing out clear statements of goals and posting them where everybody can see them is something UX folk are quite good at ;)).
Hmm, do I have to put an “End Rant” tag, here, now?
>> “They are an Agile (XP) and UX (“High-tech Anthropology®”) custom software shop… …since *2001* (do you know of anybody else that was trying to do this that long ago?).”
The only other contender that comes immediately to mind (for UX + Agile) is ThoughtWorks, of course, but I’m not sure when they added UX to their XP — was it from the beginning (in terms of XP that would be 1999, I believe)?
I don’t want to have my history way off, here — I’d be interested to know if Menlo has bragging rights in this dept…
Great article, Anthony – I think you have clearly identified the obvious challenges of agile culture. I think you could take the very mines you describe and create a Likert-scale survey for stakeholders to measure the agile effect on an organization’s product development…
It is a great article, and most of us will agree with the process described here.
I want to take this discussion further, and I question all UX designers: why do we need to change our process to fit into Agile, and why do “they” not change their process to fit into UCD? After all, everything is made to be consumed by “users”, unless you are an artist.
I have been working on multiple high-volume agile projects for the last two years as an Information Architect. Definitely, Agile is in vogue today, and one cannot expect clients to pay for a complete “User Centered Design” process in this economy, but in the process of cutting corners we have gone too far. It seems that in the Agile process “the user” suffers the most. The companies benefit monetarily at the expense of users. All the proponents of the Agile process are managers and executives who want to sell projects to the client by saving development costs.
Is there any research on the effectiveness of the user experience of Agile projects vs. waterfall? I would definitely be interested to see such a report.
The same companies that implement Agile go, after project completion, to user research firms for comprehensive user research because the product did not meet the company’s strategic requirements.
In the past, I really enjoyed working on projects where we implemented an end-to-end UCD process. I sometimes feel that Agile has killed the real design process. Maybe I should go for a PhD and become a full-time researcher to escape “the art of cutting corners on user experience”.
@Praveen – I understand your frustration well. But I think it’s interesting that Agile has become so popular, and I surmise that its success must be attributed to more than the promise of saving money alone. There are definitely things wrong with the way many organisations do waterfall, for which Agile provides a solution. However, a philosophy or approach alone cannot create good products, and Agile is no exception.
Part of what Agile has done well is to create a manifesto that easily communicates the tenets of this approach. I think this has helped people “get it” and adopt it so readily. It’s something I have mused on and concluded is missing from UCD. I am happy to be proven wrong, but I don’t believe that as a community we have created a UCD manifesto to help organisations grasp what it means to be user centric.
Just as some of the programming legends gathered to define Agile’s rules, wouldn’t it be great if we could pull together the likes of Alan Cooper, Jakob Nielsen, Don Norman, Jared Spool, Karen Holtzblatt, Hugh Beyer, Jesse James Garrett and others to form a UCD manifesto? What would the rules of a UCD approach be?
Are you sure?
(63-year-old retired software engineer speaking)
A couple of years back I had to fit my UX process into a really tight scrum environment. The team was re-formed after a previous job and all the participants had strong views and a really successful product under their belt so it was very much up to me to add UX-think without undoing Agile-think.
I’ve started collating my findings and have put a deck on SlideShare. It’s a first draft, but it lists the points at which UX-think can be ‘hooked into’ the product backlog, the sprints, the standups, etc.
Thanks for the article; hopefully I’ll find the time soon to finish writing up my findings :-)
Ironically, some of the issues that you bring up and compare to waterfall have always been a problem in waterfall as well. I remember in the mid-90s, when I was responsible for inserting page updates into the nine-volume set of instructions for our development methodology, I was aghast when I realized that every phase of the methodology had whole chapters/sections dedicated to it, except for requirements. Requirements seemed to be something that just magically appeared.
It was that day that I started my quest to uncover better requirements methods. I checked out formal coursework, which was all built upon the assumption that we failed at requirements because we somehow had not entered them correctly or had lost track of them. But I threw that out immediately, because some five-plus years previously I had been introduced to ethnography through a story that attempted to debunk this ‘theory’, and it did: the problem with requirements was that they lacked all the relevant context that people weren’t aware of and were unable to report.
The issues with design and waterfall met a similar demise when one day I was attending a presentation in the commercial building industry where someone was explaining how to change their existing work methods to embrace ‘sustainability’. It was then I was immersed in the realm of ‘specifications’ vs. ‘requirements’ — and it was then that the proverbial lightbulb went off and I had the comparison needed for Systems Development.
What I realized from the commercial building industry is that there is a whole phase that happens BEFORE what we’ve even known as ‘waterfall’ — it’s an architecture/design phase. The design phase that occurs IN the waterfall method is a ‘response’ to the ‘specifications’ that come out of the architecture/design phase.
Both waterfall and agile lack this critical phase.
Very well written, drawing on practical experience. I have tried to apply the same model with a couple of customers working in an Agile mindset. Customers struggle to realize the value of the pre-Agile preparation phase that UX designers require.
Having substantial deliverables would help the customer buy in to having an Agile preparation phase.