5 comments

  1. Nice.

    Having researched search in some depth over the past year and a half, I have to add that search, and the experience of performing a search, is very dependent on how the content owner approaches the challenges of:

    a) Organising their content (their collection, or collections)

    b) Indexing their collection

    c) Presenting search results

    d) Adding value to the entire experience in regards to pre- and post-search functionality and usability

    Fundamentally, these elements determine how effective a site search is, as opposed to the effectiveness of the wider public search engines.

    A simple example of c) and d) is “pre-canned” results. Set by a site admin team, these can add incredible weight to how users find the right result the first time. This in itself points to just how well all the primary skills of IA lend themselves to a better site search tool.

    These pre-set results are triggered by query patterns and can cover misspellings, adapt to varying approaches to locating similar answers, and also provide a gentle nudge to users so that they can build a better picture of what a site’s structure really contains.
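    The pattern-triggered “pre-canned” results described above can be sketched as a small lookup table. This is a hypothetical illustration, not any particular product’s implementation; the patterns, URLs, and function names are all assumed for the example:

```python
import re

# Hypothetical "best bets" table maintained by a site admin team:
# query patterns (including common misspellings) mapped to the
# hand-picked result that should appear first.
BEST_BETS = [
    (re.compile(r"\b(ann?ual|yearly)\s+report\b", re.I), "/downloads/annual-report"),
    (re.compile(r"\b(cont?act|get in touch)\b", re.I), "/contact-us"),
]

def pre_canned_results(query):
    """Return admin-curated results whose pattern matches the query,
    to be shown ahead of the engine's ordinary ranked results."""
    return [url for pattern, url in BEST_BETS if pattern.search(query)]
```

    Because the patterns tolerate misspellings (`ann?ual` matches both “annual” and “anual”), users still land on the right page on the first try.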

    Additionally, if there was any one overriding lesson I’ve learnt, it’s that search functionality needs constant (and consistent) review in order to remain a great tool for any site. Constantly reviewing search logs and relating them to media or national events, marketing initiatives or site changes must be folded into regular cycles of content reviews and usage tracking.

    Skipping this essential work means that not only do you miss identifying badly returned result sets, ill-judged ranking weights and (frankly) badly indexed content – but you also miss a chance to understand and adapt to the user experience.
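    A minimal sketch of the kind of log review described above, assuming a simple CSV search log with one row per query and the number of results it returned (the log format and field names are assumptions for the example):

```python
from collections import Counter
import csv
import io

# Assumed log format: one row per search, with the query text and
# the number of results the engine returned.
SAMPLE_LOG = """\
query,results
opening hours,12
opning hours,0
annual report,7
opning hours,0
contact,25
"""

def zero_result_queries(log_csv, top_n=10):
    """Rank the queries that returned no results -- prime candidates
    for new best bets, synonym rules, or re-indexing."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(log_csv)):
        if int(row["results"]) == 0:
            counts[row["query"]] += 1
    return counts.most_common(top_n)
```

    Run against a real log on a regular cycle, a report like this surfaces the misspellings and vocabulary gaps that pre-canned results or index changes should address.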

    Cheers

    Brian

  2. I’m no expert on searching, but I am a web developer of sorts. I don’t think the problem lies entirely with the search engines and the users. It certainly doesn’t help when web developers throw a bunch of random, unrelated keywords into a site so as to pop up in searches more frequently. A search engine has no real way to differentiate between these sites, because rankings are based mainly on the descriptors and keywords given to them by the code on the site. So I think a lot, if not most, of the responsibility falls on the developers, not just the individual users and the search engines.

    I do feel that refining search is very important. We live in a world governed by time, and the more we can get done in less of it, the better off we are. So, of course, when it comes to searching I’d love to see new techniques that filter out the obviously unrelated sites and get better at showing me “best matches”.

    Good read, thank you.

  3. In my experience, ‘training users’ never works in a self-service medium like the Web.

    I suspect that the real problem is that most search interfaces don’t encourage multiple queries. We know that search is an iterative process, and in the real world, people do have long conversations with each other when searching. But they don’t do this online.

    Part of the problem is search results: they are not presented as a dialogue. Users’ attention is focused on the result list and the cues to encourage users to modify the search are overwhelmed. So the ‘dialogue’ is limited to ‘Do you have x?’, ‘No.’, ‘Goodbye’.

    I see very few interfaces that progressively reveal advanced features (instead, users are given a choice of one text box or every conceivable control). Progressive disclosure encourages dialogue.

    As designers, I think it’s our job to understand how to present features and interactions so that users see their value. In a sense, this is ‘training’ – not through instruction, but through environment.

  4. Perhaps one of my biggest quibbles with some of this is when IAs or designers focus on search as a stand-alone solution or a closed-loop feature set for “finding information.” Search is but one key piece of a larger findability strategy that users employ to meet any number of disparate needs. By diving into traditional IA/UX initiatives like content classification, search interfaces, feedback messaging, refinement/sorting tools, etc., I find that a large portion of what search really is can be completely missed.

    I’d suggest that the following should be considered:

    1) Determining where search fits in the set of archetypal users’ offline and online finding behaviors
    2) Understanding the emotional, physical and cognitive contexts within which a user comes to a website to find information and how these factors may affect the perception of what search is and the expectation of what search will deliver
    3) How search functionality is perceived as integrating with other on-site finding features like browsing or exploring functionality
    4) How the website’s UX articulates or implies what sort of information can be found by using search

    Insights from these threads of research should then inform a search model that can be tested and refined. Only then should the work of designing the UX and UI start.

    Finally, one should also consider that information finding or search does not simply stop when a target is recognized among a list of distractors. Users must have an opportunity to “acquire” or “encode” the information for search to really be useful. As such, search results should empower users to be able to act on the information that they just spent some effort finding.

Comments are closed.