Slate: Calculated Refinement or Simple Inertia?

Before we get started, I just wanted to note that my comments are intended to supplement the diagram, rather than vice versa. So be sure to download the PDF version of the diagram to get a full understanding. That said…

No matter how you look at it, publishing content on the web daily is a lot of work. From an information architecture perspective, a daily web publication presents challenges and possibilities no newspaper editor ever had to face. As one of the longest-running daily publications on the web, Slate has dealt with these issues for years. But it is unclear whether the site’s current architecture is the result of calculated refinement or simple inertia.

The architectural decisions here demonstrate one key assumption about the site’s content: the ‘shelf life’ of any given article is about seven days. Navigating to a piece during those first seven days is fairly easy; after that, it becomes very hard.

At a glance, the high-level architecture seems fairly straightforward. But a closer look reveals that the five primary ‘sections’ exist only in the tables of contents. These categories appear nowhere else on the site—not even on the articles themselves. Furthermore, the classification of articles into these categories only persists for seven days from the date of publication. After that, the section to which a piece belonged is forgotten.

Note the absence of an ‘archive’ area. The only access to articles more than seven days old is through the advanced search page. In place of a browsable archive, Slate offers canned searches by “department” and by author. The author list page works well enough, but it is only useful if the user already knows the name of the author of a desired piece; and if that were so, the search interface would be sufficient.

The department list page has a greater burden to bear. As the only persistent classification scheme employed on the site, the department list is the only element that can provide the reader with a sense of the range of content and subject matter covered on the site. But the page currently falls far short of this goal. What the user faces here is nothing more than a very long list that makes no distinction between limited-run features like “Campaign ’98”; occasional, semi-regular features like Michael Kinsley’s “Readme”; and ongoing staples like “Today’s Papers.”

This problem is only exacerbated by the fact that, by and large, the department titles are too clever by half. Even the savviest user could be forgiven for having trouble remembering whether Slate’s roundup of opinions from movie critics was filed under “Critical Mass” or “Summary Judgment.” The cute titles would be fine if the site provided some sort of context for what was to be found inside; as it is, providing a plain list of titles like “Flame Posies”, “Varnish Remover”, and “In the Soup” does little to help readers find specific items or even get a general sense of what the site has to offer.

Letter-sized diagram (PDF, 41K)

Note: The date on the diagram indicates when the snapshot of the system was taken. Slate may be substantially different now.

Finally, I wanted to find out what sites you’d like to see me diagram in the future. You can post your suggestions here.

Jesse James Garrett is one of the founders of Adaptive Path, a user experience consultancy based in San Francisco. His book “The Elements of User Experience” is forthcoming from New Riders.

14 comments

  1. I can’t imagine that any unintentionally disposable thing can be good. Maybe that could be their new tag line: “Slate — unintentionally disposable.”

    The architecture, such as it is, seems to be the result of hastily layered-on (and fully disposable) categories. The underlying assumption still seems to be that rearchitecting old content is more trouble than it’s worth.

  2. There are categories applied to the most recently visible articles, but does that mean that all older articles were categorised … maybe they never were and it’s a big job to go back and re-categorise them?

  3. RE>Finally, I wanted to find out what sites you’d like to see me diagram in the future. You can post your suggestions here.

    Jesse, could you please diagram the IAWiki?

    Thanks.

  4. John: If the link to the PDF had been at the top of the piece, you wouldn’t have needed luck.

    Eric: It’s true that the categories seem to have been only recently applied, and adding categories to 5+ years of old content is certainly a big job. Nevertheless, I can’t imagine how temporary categories that are only visible for 7 days from publication offer any significant benefit to the users.

    Victor: I’ll get right on that.

  5. On categories: It depends on what information-seeking mode the visitor is in … are they looking for certain specific content, or just *any* content so long as it’s fresh and interesting? That is, is the site being positioned as a research repository, or as infotainment?

    What is the nature/context of the visits to the archive — people looking for past articles they want to revisit/reference, or are people looking to contextually browse through categories?

  6. Eric asked: That is, is the site being positioned as a research repository, or as infotainment?

    Doesn’t it squander the potential of the Web to treat your operation as if you’re cranking out a disposable supermarket tabloid? Or, put another way, why bother keeping archives if they’re nigh impenetrable?

    Eric again: What is the nature/context of the visits to the archive — people looking for past articles they want to revisit/reference, or are people looking to contextually browse through categories?

    I don’t think this matters; in either case, it comes down to recognition vs. recall. Obviously, if I’m just bouncing around looking for something interesting to read, categories are going to be most useful. But even if I’m looking for a past article, having only the search engine at my disposal forces me to recall some (hopefully unique) snippet of text to plug into the query field. With categories, I can scan until I recognize what I’m looking for.

  7. >Doesn’t it squander the potential of the Web to treat your operation as if you’re cranking out a disposable supermarket tabloid?

    Sadly, that pretty much describes an ineffable number of blogs 🙁

    >Or, put another way, why bother keeping archives if they’re nigh impenetrable?

    Perhaps they are more concerned with keeping permanent URLs than with building a research library. Ideally, they’d do both, and more … but it’s tough to find the budget for everything.

    Please understand I’m not arguing against the idea of well-managed archives, just the applicability within specific contexts.

  8. Another conceivable justification for making old articles difficult to find is a widely held intuition (at least in my experience working with news web sites) among product managers that content should expire for any or all of the following reasons:
    1) Easily available old content can somehow dilute the value of new content; in some cases it may have to do with the concentration of advertising on pages with fresher content;
    2) Users will be enticed to visit a site frequently lest the new content expire before they get to read it;
    3) Old content has the potential to bring in revenue and should not be given away for free (or at least without a fight). The New York Times and Northern Light are two well-known sites that sell older content.

    Alas, experience designers don’t get to call all the shots. At the end of the day, web publishing is a business.

    Ruth

  9. >Sadly, that pretty much describes an ineffable number of blogs 🙁

    You won’t get any argument from me.

    >Mr. Garrett, how about diagramming MapQuest?

    Interesting idea — though my initial reaction is that it may not be complex enough (from an IA/interaction design perspective; certainly it’s plenty complex technically) to make for an interesting diagram. There may be more going on beyond the query-response-iterate model of the core app, though. I’ll check it out.

    Also, please don’t call me that. Thanks.

    >1) Easily available old content can somehow dilute the value of new content; in some cases it may have to do with the concentration of advertising on pages with fresher content;

    A plausible hypothesis. But I don’t think the answer is to obscure old content; the answer is to drive traffic to your archives, and sell more ads there. Thus making your users happy *and* making you more money.

    >2) Users will be enticed to visit a site frequently lest the new content expire before they get to read it;

    Maybe, although my instinct is that users are more enticed by how frequently content appears than by how frequently it disappears.

    >3) Old content has the potential to bring in revenue and should not be given away for free (or at least without a fight). The New York Times and Northern Light are two well-known sites that sell older content.

    This is a widespread — but unproven — assumption. I haven’t read anything about how financially successful the NYT’s pay-per-view archives are, but I do know the business unit is still operating at a loss. As for Northern Light, I believe the only content that makes them money is content that was never available for free, no matter its age.

  10. >3) Old content has the potential to bring in revenue and should not be given away for free (or at least without a fight). The New York Times and Northern Light are two well-known sites that sell older content.

    >This is a widespread — but unproven — assumption. I haven’t read anything about how financially successful the NYT’s pay-per-view archives are, but I do know the business unit is still operating at a loss. As for Northern Light, I believe the only content that makes them money is content that was never available for free, no matter its age.

    Actually, Northern Light sells “old” press releases. That’s right — from PR Newswire and Business Wire. I can’t imagine why someone would pay for these — they’re usually available for at least a few years on the web sites of the companies that distributed them, and sometimes in other far corners of the Internet. PRNewswire.com expires their press releases after 30 days, unless a customer pays for them to be archived on their site for 3+ years. Businesswire.com expires them from their site after 7 days.

    I can vouch that PRN can show releases from far earlier than the past 30 days but chooses not to. Whatever the “truth” behind user behavior as it relates to fresh or old content, there are still many untested assumptions around that influence and sometimes drive business decisions. I’ve seen it happen many times.

  11. This may be a good example: this article hasn’t received a response in the roughly two weeks since it was published.

    My preferred method of finding information is a search interface. I have a question or goal and I want an answer as fast as I can get one. Later, if that page also had links to “other articles by the author” or “similar pages” or “recommended readings,” I would start going down those paths.

Comments are closed.