Faceted Feature Analysis

Everyone has ideas. Many of those ideas are held passionately. Some are brilliant, some are unrealistic, and some are downright stupid.

  • How can you make sense of ideas from multiple sources—formal requirements, brainstorm sessions, contextual inquiry, and input from the boss’s wife?
  • How do you entertain all ideas and still weed out the good stuff from the garbage without hurting someone’s feelings—especially when that someone signs your check?
  • How do you factor in real constraints and capabilities before these ideas become etched in stone?
  • How do you take in the different points of view that come from programmers or business owners, not to mention the actual users of your product?
  • How do you do all these things and define project scope with some level of integrity that’s more than intuition or politics?

This article explains a process called “Faceted Feature Analysis.” It’s an exercise that I’ve been using for nearly eight years on projects both large and small. The facets are the three characteristics that describe any project: business value, ease of implementation, and user value.

Faceted Feature Analysis also uses three constraints that govern every project: cost, time, and quality.

By crossing the characterizing facets with constraints, you are combining the subjective needs of the project stakeholders with the objective constraints of the project in a way that ensures all points of view are fairly considered. It also ensures that a project requirement is not included or excluded simply because one person yelled louder than the others.

The process involves six steps:

  1. Rating the Feature List
  2. Creating a Flexibility Matrix
  3. Mapping
  4. Scoring
  5. Sorting
  6. Fine-Tuning

Step 1: Rating the Feature List

Compile a feature list from whatever sources are available. These typically include some sort of requirements documentation created by the business owner but can take in suggestions from a brainstorm session, ideas generated from contextual inquiry, competitive analysis, or other sources, formal or informal. As with brainstorming, there are no “bad” ideas at this point. This is important because, as you’ll see, the process is designed to weed out the impractical or ridiculous without any single person having to directly put it down.

Once the list is compiled, create a spreadsheet with the list of features and three adjoining columns: “Business Value,” “Technical Ease of Implementation,” and “User Value.” Ratings from 1 (low) to 5 (high) are assigned to each feature. However, only the people who own each domain provide the ratings for that column: the business owners rate the Business Value column, the tech team rates the Technical Ease, and the user experience folks rate the User Value.

In this way, everybody is in a position to speak to their own area of expertise. Also, the rating of a particular feature isn’t as subject to the whims of the most charismatic or forceful person at the table, so you get a truer assessment of the general value of a feature.

Figure 1: Preliminary Ratings on a Feature List
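
If you keep the list in code rather than a spreadsheet, the same structure might look like this minimal Python sketch. The feature names and ratings here are hypothetical, not taken from the figure.

    # Each feature carries one rating per facet (1 = low, 5 = high),
    # supplied only by the team that owns that facet.
    features = {
        # name:                 (business_value, tech_ease, user_value)
        "Product search":       (5, 4, 5),
        "Saved shopping cart":  (4, 3, 4),
        "Animated splash page": (2, 2, 1),
    }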

Step 2: Creating a Flexibility Matrix

Flexibility matrices have been around for a while. Historically, the matrix has been a project management tool used to gain consensus on the constraints that govern a project. Every project is subject to three constraints: cost, time, and quality.

Figure 2: Blank Flexibility Matrix

To create a flexibility matrix, the project team needs to agree on which of the three has the least amount of flexibility associated with it. For example, if there are certain features and functions that absolutely must be developed, then quality is the least flexible constraint. Use an “X” to note it on the matrix.

Figure 3: Developing Flexibility Matrix

That leaves cost (the project has a finite budget of…) and time (the project must be completed by…). The result is a matrix similar to Figure 4.

Figure 4: Completed Flexibility Matrix

This doesn’t mean that cost is not important. It just means that later, if you had to decide whether or not to cut something from the project, its impact on quality or time would be weighed before its cost.

Step 3: Mapping

The project constraints map loosely to the value columns where:

  • Cost = Business Value
  • Time = Technical Ease
  • Quality = User Value

By making this association you can add weight to the ratings in any column. In this case quality is the least flexible constraint, so you multiply all of the ratings in the User Value column by 3. As time is the next least flexible constraint, the ratings in the Technical Ease column are weighted by a factor of 2, and the ratings in the Business Value column are not weighted, because cost is the most flexible of the three constraints.

Figure 5: Mapping Flexibility Matrix to Ratings
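
In code, the whole mapping reduces to one weight per facet. A minimal sketch of the example above, where quality is least flexible, time is next, and cost is most flexible:

    # Constraint-to-facet weights from the flexibility matrix:
    # quality -> User Value (x3), time -> Technical Ease (x2),
    # cost -> Business Value (x1, i.e. unweighted).
    WEIGHTS = {"business_value": 1, "tech_ease": 2, "user_value": 3}

    def weight(business_value, tech_ease, user_value):
        """Return one feature's ratings with the constraint weights applied."""
        return (business_value * WEIGHTS["business_value"],
                tech_ease * WEIGHTS["tech_ease"],
                user_value * WEIGHTS["user_value"])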

Step 4: Scoring

Simply add up the weighted ratings into the Total column.

Figure 6: Scored Ratings

Step 5: Sorting

This is where the magic happens! Sort the features according to their scores. Invariably, those features with the highest aggregated values rise to the top and those with the lowest values sink to the bottom.
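
Steps 3 through 5 reduce to a few lines if you script them. A self-contained sketch, reusing the hypothetical features from above:

    # Total = business_value*1 + tech_ease*2 + user_value*3 (per the example
    # flexibility matrix), then sort descending so the strongest features rise.
    WEIGHTS = (1, 2, 3)  # business value, technical ease, user value

    features = {
        "Product search":       (5, 4, 5),
        "Saved shopping cart":  (4, 3, 4),
        "Animated splash page": (2, 2, 1),
    }

    scored = sorted(
        ((sum(r * w for r, w in zip(ratings, WEIGHTS)), name)
         for name, ratings in features.items()),
        reverse=True)
    for total, name in scored:
        print(f"{total:3d}  {name}")  # 28, 22, 9 for this data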

Step 6: Fine-Tuning

What about the stuff in the middle? After you’ve sorted the list, you can usually find some natural cut-off point in the list where everything above the line constitutes a complete solution and everything below the line is either a feature for another day, a variation on a feature that made the cut, or something that might be best forgotten.

The question now is whether or not that natural cut-off point aligns with the constraints. In the case of quality there’s no need for further analysis because you’ve effectively said “regardless of cost or time, we need to have the features we’ve identified here.” In the case of cost or time, it is sometimes necessary to get estimates from the team on the hours needed for each feature. That way you can associate cost or time with each feature to negotiate the cut-off point by “horse-trading” with items of similar value above and below the line.

Figure 7: Effort Estimate
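
When time or cost is the binding constraint, the horse-trading can start from a simple running tally: walk down the sorted list and flag where the cumulative estimated hours cross the budget. A sketch with made-up estimates:

    # Rows are (score, feature, estimated_hours), highest score first;
    # BUDGET_HOURS is a hypothetical capacity for the release.
    BUDGET_HOURS = 400
    sorted_features = [
        (28, "Product search",       180),
        (22, "Saved shopping cart",  150),
        ( 9, "Animated splash page", 120),
    ]

    running = 0
    for score, name, hours in sorted_features:
        running += hours
        status = "in" if running <= BUDGET_HOURS else "out, or negotiate"
        print(f"{name:22}  score={score:2}  cumulative={running:3}h  -> {status}")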

When the negotiating is done, you may discover one of three things:

  • You have defined the scope within the constraints.
  • The constraints need to be revisited and the cut-off line needs to move.
  • The constraints cannot be revisited and the project should not proceed (an extreme outcome).

The Benefits

  • Increases objectivity. You are leveraging individual bias to generate unbiased feature rankings. This occurs because participants are limited to rating features only from the perspective of their areas of expertise and using overriding, agreed-upon constraints, rather than personal influence, as the means of emphasis.
  • Assists in project planning. Scope and estimates provide the basis for a traditional project Gantt chart or the backlog that will feed an Agile iteration plan.
  • Mitigates churn. This process greatly reduces the second-guessing during development that may occur when features have not been pre-qualified. There are fewer surprises downstream.
  • Minimizes politics. A feature rises or drops in the list on its own merit as it relates to the project constraints, not because anyone knocked it down or ram-rodded it to the top. (This can still happen but it’s harder to do without obviously and publicly disregarding the point of the exercise.)

A Few Thoughts By Way of Addenda

Understand that this process is not the way but simply a way to qualify a project’s scope.

The process is true enough to make sense out of a lot of information but not airtight in its logic because, as I mentioned, the associations between the values and constraints are loose and there is still usually some negotiation involved in the fine-tuning phase. This means it is still possible for someone with influence to trump the findings in a particular exercise based on their own agenda.

That said, by not being too prescriptive, the process allows for a great deal of flexibility. For instance, the checkout process on an e-commerce site is a feature on its own but it also has a number of sub-features like “review your purchase,” “enter shipping information,” or “confirm purchase.” So, while the team agrees that you need a checkout function, by rating the sub-features individually you’ll weed out some things for later development.

In your matrix, you can also include additional columns for more specific descriptions of proposed new features or columns for other metadata such as data source or legal mandate. Such additional information helps characterize the proposed features, making them easier to rate.
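
If one of those extra columns records a legal mandate, for example, a script can let the flag override the score so a mandated feature stays in scope no matter where it sorts. A hypothetical sketch:

    # Rows are (name, total_score, mandated). Mandated features are kept
    # in scope first; the rest sort by score as usual.
    rows = [
        ("Product search",            28, False),
        ("Terms and conditions page", 10, True),
        ("Animated splash page",       9, False),
    ]
    mandated = [r for r in rows if r[2]]
    optional = sorted((r for r in rows if not r[2]), key=lambda r: -r[1])
    scope = mandated + optional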

In my experience, this process has proven itself over and over: it is easy to do, easy for everyone to understand, doesn’t take long, and yields both material and intangible value. However you apply it, you have a way to take a granular look at a lot of information and determine earlier rather than later whether something should be built, not just whether it could be built. That will save you and everyone else the downstream heartache that comes in the form of increased cost, increased time, and lack of quality. It also smooths the ride because everyone participates in a way that brings their needs into the equation, making the outcome much more digestible and less likely to be challenged.

After all, if you can help create great products on time and under budget you’ll be a hit at parties and that’s why we do this sort of work, isn’t it?

22 comments

  1. Very interesting article, Adam. I particularly liked the distinction you made between “user value” and “business value”. It’s easy for us to sometimes forget that there’s a difference between these two – after all, if something has business value, it’s gotta have user value, right? And vice versa! Well, while there’s obviously a correlation, I do think it’s important to understand the differences. For example, creating a migration utility that makes it really easy for users to migrate off your product and onto a competitor’s product might have very high user value… and very low business value.

    One nit I have is with your weights. 1x, 2x, and 3x seems awfully extreme. For most projects, cost will be the least flexible factor, so business value will get a 3x for all their ratings. While I would agree with the principle – low flexibility on cost means a higher focus on high-business-value requirements – I think a smaller multiplier would make more sense. Like I said, this is a nit.

    On a more fundamental level, I wonder how this would impact one of the major issues that I deal with – the difficulty in getting small improvements into the plan. The user value and business value of small improvements is always small, and while this is balanced by high technical ease, it still sounds like the user experience team would need to “cook the books” to artificially raise the user value ratings for the small improvements to make them rise to the top. The other potential issue is that a release should be about more than a grab bag of independent requirements – the release should have themes. If a requirement is critical to a release theme, then it should rise in importance. Maybe the way to fix both the “small problem” and “theme” issues is to add a grouping mechanism to the process so that requirements are not considered in isolation.

    Regardless, I enjoyed the article, and think this process has a lot of merit.

  2. I like this approach. Scoring is similar to Failure Modes and Effects Analysis (FMEA), which is used to prioritise actions to prevent process failures in manufacturing etc. Like FMEAs, it produces an overall numerical score which, in my experience, those with entrenched opinions or political axes to grind find difficult to argue with.
    If done as a group it can lead to a consensus approach and ‘buy in’ from participants – even those who are somewhat sceptical.

  3. This reminds me a lot of a presentation that Larry Marine gave at UPA 2006 called “Driving Product Design from the Business Objectives.” He referred to it as a Prioritization Matrix and I don’t remember him assigning any weighted values to the initial ratings. I think that adds an interesting and valuable dimension to the overall approach, but I agree with Terry that a 3x multiplier might be a bit too extreme.

    Regardless, I think the best use for this type of approach is in establishing a common understanding upon which the team can have more objective, and hopefully more reasonable, discussions between the interested parties. Any tool that might help avoid some of the “he said/she said” arguments that can occur when individual teams are working under different constraints is worth using in some capacity. Might be really useful for projects where the business, design, and development teams don’t share a common timezone…

    UPA Presentation Slides, for those interested:
    http://www.usabilityprofessionals.org/usability_resources/conference/2006/Marine-Driving-Product-Design.pdf

  4. To Terry’s comments. Thanks for the response. You make some good points worth remembering. Another example I like to use as an instance where business and user value clash is “on-line ads”. To a merchandising manager who gets a fat check from an advertiser there’s obviously a positive bottom-line impact but does a user care so much? Wouldn’t they love to block that ad server given half a chance?

    As for the scoring – when I was a consultant, cost typically led the way. Nowadays, as an internal IA, the desire to stay competitive or a focus on usability (especially on the tail-end of an unflattering survey or market study) often outweighs the dollars if it comes down to what stays in and what gets cut. To that point, I don’t agree with the absolutes that user and business value will always be small. That may be an attribute of a particular environment.

    Don’t take as writ that scoring weights should always be constrained to 1x, 2x, and 3x. If, as you’re experiencing, there’s a sort of “built-in” weight due to some individual slant, you can use the scoring to counter those environmental inequities. It may be cooking the books but it should be done toward levelling the playing field. That said, the process relies on a certain amount of fair play from all the stakeholders. The public nature of the effort can help engender that. Unfortunately, that won’t always be the case. If someone with enough influence is dead sure their way is the right way, all irrefutable evidence to the contrary, there’s not a lot you can do.

    Maybe the piece doesn’t go far enough into how you can extend the results of the exercise. You can take a really large list of requirements and use this method for part of it, or apply it to the whole thing and use it to chop up the work into iterations. You can introduce other metrics that might be mandates in your particular environment. All these things can come into play when it’s time to make the final decisions about “what comes first”.

    I don’t want to put this method across as more than a way to apply that first level of characterization to a chaotic list of “stuff” that brings the right voices to the table earlier rather than later. The real value isn’t in the number you come up with for each feature or story. It’s in the collaborative nature of doing it, the understanding it yields, and the downstream benefits of establishing a baseline that reflects the balance you want to see between business, technology and design. It’s about getting the right people to the table in the first place.

  5. Great article and an issue we are grappling with currently. One other dimension that we use is based around organizational change management. We list Business Benefit, Business Readiness (i.e., ability to support change, mature processes in place, etc.), Technical Ease and User Benefit. It’s a great way to decentralize the opinions at the table, as you said. Thank you!

  6. Great article, thanks Adam. I think that associating a numerical value to features would be a very helpful exercise.

    I have always been heavily influenced by the book “Information Architecture for Designers,” by Peter Van Dijck (link below). Our firm always walks through a strategic planning process with the client where we first develop 1) business goals that the project needs to achieve, and then define 2) user goals of why someone would visit the website. We then prioritize each goal and generate a list of 3) potential features that would accomplish each goal.

    The problem with our current system is that personal opinion can still play a large role in deciding if a feature effectively fulfills a business or user goal. I think the next logical step in our process should be to numerically rate each potential feature as Adam has displayed here. For us, at least, I believe this faceted feature analysis will still be one step of several rather than an end-all, be-all solution.

    Information Architecture for Designers:
    http://www.amazon.com/exec/obidos/ASIN/2880467314

  7. This is something I’ve done (either by choice or because it’s been forced upon me) several times in the past. While I would not choose to do it on every project, where the complexity, politics and other factors warrant it, it’s a good approach if you think your clients will wear it. Because it’s a very quantifiable method, buy-in is more likely if your clients have bosses who have bosses to please.

    In my experience though, Terry is right to sound some notes of caution. The risks of this approach that I’ve found are:

    1. You will be seen as a sort of list-keeping Nazi (“That would seem like an excellent idea, but with a low business value my spreadsheet says no”). Worse, you might be seen as spuriously justifying bad ideas (“I think the user value of a splash page is a 10 – we gotta do it man!”).

    2. You risk losing the big picture in disjointed minutiae. (“If we can justify a lower business value rating for a link-rich footer and a send-a-friend feature, then we might just be able to include a Flash movie on the home page for the same budget”) Design is, after all, about big pictures in the end.

    3. You stake too much on the clarity of the requirements up front. Unless you are completely sure that everyone has the same understanding of what the requirements are, you can’t very well apply a rating to them. Falling into the trap of wording requirements to suit your design objectives is also rather tempting.

    But with those risks in mind, I would say it’s a good approach in some (possibly limited) circumstances. Like all techniques, do it with your eyes open.

  8. Again, these are some good considerations. Ironically, even though this process is here being proposed within the context of IA, ideally it’s managed by someone else, like the project manager, because as one of the participants, UX should be on an even footing with the other stakeholders. You’re right: you don’t want to be, and shouldn’t be, viewed as a list-nazi.

    That said, this effort is specifically designed to temper the kinds of extremes like your example of the “splash-page=10” scenario. (For me it was about finding a graceful way to put the “interactive Rockette paperdoll” applet in the…uhm…proper perspective.)

    To your point about the big picture: that’s one of those constant-vigilance issues that we’ll always labor under throughout the entire life of a project. I wish I had a nickel for every time I looked up and realized that the default display of a drop-down list box had just claimed 20 minutes of my life that I’ll never get back. These days it’s become an integral part of how I operate with my teams to be the one to keep asking if we’re really keeping our eye on the ball.

    Your last point brings up another item that probably should have been mentioned in the article, which is that unknowns are still relevant in this process, particularly where technical ease is concerned. Sometimes they just don’t know how easy/difficult or fast/time-consuming something is going to be, and that’s all they can give you. However, if you can determine that there’s high user and business value, it’s enough to say “Let’s factor in the due diligence necessary to establish the tech effort.” By the same token, if the other values are low, you can probably conclude that the discovery effort isn’t required at that time.

    There are TONS of similar exercises in which we’ve all taken part. Common sense dictates that you have to do something when you begin with an unqualified list of requirements. This is a mash-up. You mention design purpose and design objectives, and while those concerns are the ones closest to my heart, I realized that there are other people whose business and technical objectives are just as near and dear to theirs, and justifiably so.

    This exercise isn’t meant to put UX in the driver’s seat. It’s meant to get UX into the balance when you’re determining how to move forward so you’ll spend less time moving backward later on.

  9. Thanks for this breakdown, Adam. Our project team recently used this method of breaking out user stories with regard to priorities, project deliverables and tasks.

    It’s worked very well for us with a strong project manager keeping us on track and supporting the appropriate meetings. The “what we did well” column at the end of each iteration far outweighs the “what we didn’t do so well” column from all team member perspectives, including development, product owner, IA, design and technical analyst.

    I think it’s in the way the user stories are defined. In our case, that meant a conscious effort to remove design/UI language such as “a checkbox option” or “a drop down list” that helped define the skeleton under the muscle.

    Like you said, it removes the politics, frees the UI in a way I have not experienced in a multi-role team environment, and tames the value business places on certain visual aspects and interactions by putting them into the proper perspective.

  10. Good ideas are coming from your article, Adam. At the IT companies where I work, where the user experience approach wasn’t so welcome, or maybe was completely unknown, this analysis will help us increase the importance of our UX team, with a more balanced analysis across all perspectives, removing the chance of doubtful decisions or even plain ‘end-user discrimination’. Excitement ahead!

    I don’t think that giving ratings from 1-5 eliminates all subjective or political choices. Maybe it’s clearer to rate importance for product releases and deliverables: “(-1) not important, (0) neutral, (1) important”. This gives you a list of items rated by importance: the features to build in product version 01, product version 02, and so on, keeping your product development on track. Also, if needed, each team may justify the importance given to an item based on another document or pre-approved spec.

    Do you have a particular experience with that? What do you think?

    Thanks.

  11. I think it is a good article for bringing more objectivity to a rather subjective process. But I have a question: under what situation will you have the constraint “Quality” as the most flexible element, especially in consumer-facing projects? Would anybody want to compromise on quality, or at least will they state it explicitly? If this hypothesis is right, then it leaves us with just two constraints: time and cost.

  12. You’ve brought up one of the semantic pitfalls that has actually come up in discussion before. Within my firm, the use of the term “quality” led to some questions about compromise. In this case, to one person, “quality” took in the perception of an application as it’s going through acceptance testing, independent of much of the dynamics we discuss here. To them, it was about a product “working as designed” regardless of what informed the definition. In this context, “quality” simply means that the application in question must absolutely have certain capabilities or there’s no point in pursuing the project.

    All the constraints in question (time, cost and quality) aren’t necessarily being established – just recognized. By this I mean you’re asking hypothetically: “If we have to choose later between keeping or dropping something on the project, what will be the driver: the budget, the time-frame, or some assessment of the user’s need for a feature relative to our understanding/belief of the user’s goals for the application?”

    You may be right that no one would want to openly opt for a sacrifice of quality in an ideological sense, but the fact is that when constraints come to bear and they’re breathing down your neck, the effect of these forces is clearer and more explicit. Some things will budge: take more time, spend more money or, yes, have the application do a little less. I’ve seen each circumstance in practice. This exercise is meant to apply a sense of what those pressures might be earlier rather than later, when the choices, no matter how self-evident they might become, are more painful because resources and effort have possibly been poorly directed or expended. In other words: wasted work.

  13. To Richard’s questions and comments…

    One of the things that gave rise to this exercise was that during other qualification processes I’d seen (If there was one at all!) too many factors were being layered on at once and not very transparently and therefore not very evenly.

    I deconstructed the process. The rating step is JUST about ratings, given totally within the prejudices of the user need, biz value and technical ease by the people whose worlds revolve around those prejudices. The scoring, as I said, might be tinkered with, although I probably wouldn’t incorporate negative numbers because it literally has negative connotations. (Remember, at this stage, people are a little fragile about the things that are important to them. It’s a finesse point but an important one.) Some teams have used a 1-7 scale to get more distinction between features. As you point out, all you need to be able to say is that the relative value of “Feature A” when compared to “Feature X” is higher/lower.

    Afterward, when you’re horse-trading to define the scope, you can still include a “lower value” feature based on some of the other externalities like “spec pre-approved” or “legal mandate”. An example is a “Terms and Conditions” page or text block. The user doesn’t much care, it’s easy to do technically, but it doesn’t really make or save money. It’s not going to get a great score but the legal dept says you have to display it and you’ll get sued if you don’t, so guess what! It’s in!

    This exercise is a little like alchemy, where you take a substance, let something else act upon it, see the result and take the next step based on your observation. Don’t be too quick to collapse the steps. More and more, my work efforts have been helped by figuring out “what NOT to worry about”, at least for the moment, and convincing people that you can serve urgency better by being more patient at the beginning.

    I’m really gratified that this looks like something you’d want to try and I’d love to hear how it plays out. Good luck.

  14. How do you handle input from the boss’s wife? I would laugh if it didn’t happen! It’s hard, since there is a certain resistance to getting uninvited feedback, and it can be perceived as a personal slight (though it generally isn’t). If there is already some tension in the team where some members feel their professional experience is not being respected, this sort of thing can be taken pretty badly.

    On the other hand, inspiration and ideas can come from anywhere.

    I would like to hear experiences of bosses who do manage to bring in ideas from their wife (or kids or pets!) while still keeping a happy team.

  15. One of the instances that gave rise to this in the first place was the fact that I had a boss who added suggestions to a requirements session that were way off the mark in terms of business and marketing goals not to mention cost.

    I couldn’t tell the person she was….uhm…impractical. The scoring did the work for me and since the rating came from more than one domain, no one person was in the awkward position of shooting down an idea that everyone at the table (with one exception) knew was not going to happen.

    Like you said though, good ideas can come from anywhere. I was on a panel discussion last night and one of my colleagues made the point that you sometimes need the input from someone who doesn’t live and breathe your product all the time. They don’t know all the politics. Something either doesn’t work for them or something might make it better in their opinion.

    The good thing is that you don’t have to discriminate at the gate; let it all in with a smile and let the process do its work.

  16. As an agency-type consultant, I like to take the value scoring a step further and assign a point value representing ‘level of effort’ (LOE) to implement. I use that label to measure whatever resources are required to roll out a given feature. All projects I work on have some limitations in time, money, or resources, so it’s necessary to capture key requirements early on. With that information, a total project point value is established, say 1000 points. The team then prioritizes features (value vs. effort) and has an allowance of points which they can use to purchase functionality / services. Obviously, it’s necessary to provide sufficient detail around each item purchased so that the team knows exactly what it is and how it works. Being too vague in that definition step potentially causes more mayhem than it resolves.

    A portion of the points need to be spent on foundational requirements / platforms, with the remainder being available for a la carte feature purchase. The scoring and point assignment model can be as granular or relaxed as is needed.

    here’s an oversimplified example:

    Corporate blog = 50 points
    Faceted navigation through product catalog = 200 points
    Enhanced product imagery = 150 points
    Order and account history = 100 points
    Product rating system = 100 points
    Spinning 3D corporate wireframe logo = 800 points
    Photo of the boss’s wife on site = 20,000 points

    As soon as a team tries to spend more points than the project is allotted, they either need to expand project resources (essentially buy more points) or scale back features in the initial release. This type of structure is the perfect response to the ‘we want everything, now, for this low price’ demand. I’ve been in a lot of situations where the client wants to swap one piece of functionality for another, mid-stream. By assigning LOE estimates from the start, it makes it easier to figure out which pieces are comparable.
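
    A minimal sketch of that allowance check, with hypothetical purchases drawn from the example list above:

        # Point budget check: flag when purchases exceed the allotment.
        BUDGET = 1000
        purchases = {
            "Corporate blog": 50,
            "Faceted navigation": 200,
            "Enhanced product imagery": 150,
            "Spinning 3D corporate wireframe logo": 800,
        }
        spent = sum(purchases.values())
        if spent > BUDGET:
            print(f"Over by {spent - BUDGET} points: buy more points or cut features")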

    While the above description may just sound like ‘money’, it’s often easier to talk about resources in terms of a generic fixed unit. I hate sitting in meetings saying ‘sure, we can do that, but it costs more money’. For most of the agency projects I’ve worked on, the ‘checkbook’ isn’t sitting in on the day-to-day meetings, so there’s not much point in belaboring the point.

    I certainly didn’t invent this system, but can’t recall the originating source. Maybe I read about it at joelonsoftware.com.

  17. Seth – I really like that approach, because it helps to address the problem I mentioned above… how to get relatively small improvements into the product. In most prioritization methods, the big, sexy, expensive requirements end up at the top and then you work down the list until you run out of resources, completely skipping the easy stuff. Your method encourages people to think “I can either do this big, cool thing or these 12 small things”. The only problem I see is: how do you determine how many points are available? In my experience, even sizing a requirement is an expensive activity, so some prioritization needs to happen before sizings are done. This leads me to think that a combination of your method and Adam’s would be really useful. Start with Adam’s approach to identify the top requirements (including a gross ballpark sizing), then do a more detailed sizing of the list that makes the initial cut, then apply your technique to it.

    This thread validates my belief that “release architect” is the most difficult and underappreciated job in software.

  18. Exactly Seth! That’s where the effort estimate comes in as one example of what you’ve described. And the natural cut-off I describe lets you take the features and roll them up into the kinds of things you have in your list with an aggregate value associated with them.

    Yes, points are thinly veiled dollars but as you say, it helps people keep their eye on the ball.

    When I was consulting and we were going through the horse-trading phase, I made it clear that if it started to look like someone wanted to discuss anything having to do with contracts or rates, that was a conversation for another table at another time. Furthermore, by trading against these sub-totals, if you determined that you just couldn’t stay within them and meet the goals of the project (after all, that technically sophisticated photo of the boss’s wife is a deal-breaking user goal), you’ll have a clearer idea of what that conversation needs to cover.

  19. It’s also nice to be able to extract the IA from the dollars-and-cents discussions, since they generally aren’t in meetings to play the role of account director. But many a wily client has tried to get the IA to commit to extra work without going through the appropriate change request process.

    In the consulting business, conducting these scoping, prioritization, and negotiation activities/workshops during discovery is valuable and billable work. Or, include those discussions during the biz dev effort and recapture the costs through more efficient project delivery.

    Terry – how do I determine points? It can sometimes be done by phase, I like to separate upfront exploration from build. Here’s a rough and arbitrary value I use to determine what the resource/point burn rate is per week. This is optimized for T&M projects. I use a different approach for fixed bid / value pricing.

    200 points per billable week (80% billable) for a practitioner level (developer, ia, business analyst, etc)
    300 points per billable week for a senior level
    350 points per billable week for a Lead (mostly oversight rather than production)
    Then I throw in a 20% overhead for project management / reporting and ~25% for QA.

    Great book called Managing The Professional Service Firm by David H. Maister that talks about proportions of Jr/Sr practitioners.
    http://www.amazon.com/Managing-Professional-Service-David-Maister/dp/0684834316

    Total points available just tend to be a product of time x money.

    Make it as crazy or as simple as you’d like. For level of effort, I tend to use three categories: Easy (1x), Medium (3x), and Hard (5x).
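
    As a rough worked example of those rates (hypothetical staffing, and assuming the PM and QA overheads come out of the same pool of points):

        # 8 weeks with one senior (300 pts/wk) and two practitioners (200 pts/wk each).
        weeks = 8
        gross = weeks * (300 + 2 * 200)        # 5600 points of raw capacity
        available = gross * (1 - 0.20 - 0.25)  # reserve 20% PM and ~25% QA
        print(round(available))                # 3080 points left for feature purchase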

  20. This has always been a difficult challenge – do you try to go with a single, consensus view of requirements or do you somehow try to be inclusive of everyone’s viewpoint?

    I like Adam’s approach – it is systematic and brings a degree of objectivity.

    I use a similar system – with weightings for priority, risk, value, and cost – but with a different process, which I’ll briefly explain here:

    I use a hierarchy of viewpoints – starting from the views of individuals, moving up to the views of a team or group, the views of a business unit, project, product, or geography, and then to an enterprise-wide consensus view. The details of this hierarchy of viewpoints varies from client to client, or situation to situation.

    When gathering requirements I assume that everything is a valid point, requirement, need, wish or observation. With each requirement I have a link to the originating viewpoint; for example, this could be an individual, or a team requirement from a workshop, or a mandatory enterprise-wide need. I also note the weightings for priority, risk, value and cost.

    Comparing viewpoints uses a Venn analysis to identify things that are common or not. This type of analysis allows me to highlight things that are common and move them into a viewpoint that is higher up the hierarchy. The higher a requirement goes up the hierarchy, the more it is the requirement of a larger number of people. (Note that this doesn’t necessarily mean it is better, more important or more valid.)

    It is a simple matter then of deciding which viewpoints are in scope for any phase of a project.

    (There is more about this technique in my book, “Information First – Integrating Knowledge and Information Architecture for Business Advantage”)

    Roger Evernden

  21. very smart

    tools that facilitate — put back on the people making the requests — decision making work much better than the IA standing up and saying why x y or z aren’t going to work; you get the people who are ultimately driving the train making the decisions on how many cars it will have, where it will stop and where it will go

    as always: fast, good or cheap … how many do you want
