on Boxes and Arrows, Newsmap, Yahoo, Google, Amazon, and Ebay
As I mentioned a couple of weeks ago, I was excited about finding Boxes and Arrows through the IA Summit site; now that I’ve taken a bit more time to explore, I can not only acknowledge the solid IA implemented in the site’s organization, but also appreciate the site as a great resource for IA and information science explorers at any level.
While I played around with Newsmap, I was ambivalent about its use. The very impressive part, of course, is the double-coding of news stories by topic and by popularity (using clear visual elements of larger text and different colors)—I can understand how this could allow a user to quickly browse the largest news stories rather than sift through pages and pages using, say, the Google news aggregator by itself, or a number of news sites. On the down side—and I’ll blame bad resolution for this, too—I wasn’t able to see the smaller articles (the text wasn’t clear) and, overall, I was a little distracted by the variety of colors and sizes and the large number of boxes. Sifting news this way feels as if I am not thoroughly interacting, but rather am grabbing almost randomly—especially when I start to try to read the articles marked by smaller boxes and move through most of the page.
Since I have used Amazon almost daily for work for the past six years, I admit to a bias from the outset—though I know better than to rely on their spotty customer support, their navigation, search system, and controlled vocabulary are definitely better than average for a retail site. Navigation is easy, especially as driven by the facets on the left (the same is true whether browsing from the front page or drilling deeper from a search result). The search results are good—not always great, but good. An example: search for a popular show that is available in multiple formats, like Warehouse 13. This show is available on DVD as well as through Amazon’s Video on Demand streaming collection. Upon searching for just ‘warehouse 13’ from the front page, though, the search results in order are:
1) Warehouse 13, season 1, on DVD,
2) Eureka, season 3.5, on DVD (a show that has had a cross-over, is from the same network, and is noted as having similar viewers based on the ‘customers who bought this’ notes on Warehouse 13 and Eureka titles on Amazon, but is not Warehouse 13),
3) Warehouse 13, season 2, video on demand,
4) Warehouse 13, season two (notice the difference in terminology between seasons, ‘2’ versus ‘two’), on DVD but not yet available,
5) 13 Hours in a Warehouse, on DVD (a film unrelated to the show),
6) Warehouse 13, season 1, video on demand.
If this search were as successful as I could hope for, all of the Warehouse 13 titles would at least be grouped as the first four results, rather than spread through the first six. Also, the ‘departments’ and categories (facets) for further navigation and finding are very intuitive, clearly driven by user demand and user behavior. Comparing this to other sites—even those like Borders or Barnes & Noble, neither of which is awful—I can understand why Amazon has the market share it does…and not just in books and music!
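A retailer could avoid this scatter by stably partitioning results so that items sharing the query’s series title surface first. Here is a minimal sketch of that idea; the display titles, the normalized series keys, and the partitioning rule are all my own illustration, not Amazon’s actual (proprietary) ranking:

```python
# Hypothetical result list: (display title, normalized series key).
results = [
    ("Warehouse 13: Season 1 [DVD]", "warehouse 13"),
    ("Eureka: Season 3.5 [DVD]", "eureka"),
    ("Warehouse 13: Season 2 [Video On Demand]", "warehouse 13"),
    ("Warehouse 13: Season Two [DVD]", "warehouse 13"),
    ("13 Hours in a Warehouse [DVD]", "13 hours in a warehouse"),
    ("Warehouse 13: Season 1 [Video On Demand]", "warehouse 13"),
]

def grouped(results, query):
    """Stable partition: items whose series key matches the query come
    first, each group preserving its original relative order."""
    exact = [r for r in results if r[1] == query]
    other = [r for r in results if r[1] != query]
    return exact + other

for title, _ in grouped(results, "warehouse 13"):
    print(title)
```

With this ordering the four Warehouse 13 items appear together at the top, followed by Eureka and the unrelated film.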
Yahoo’s site (http://www.yahoo.com/) just seems to be behind the times. That’s a weak critique, but the navigation, while relatively intuitive—I want just news, I hit Ctrl+F to find “news” on the page—is not especially effective. Going to the news subsite, I was able to see a list of stories, and if I worked enough (by squinting and scrolling down the page for a very long time) I was able to find the timeliness and categories of different news stories. I almost want to take back what I said about Newsmap after trying to view news through this more traditional method, since at least with that site I was able to aggregate news stories into one window rather than sorting through a seemingly endless list. My search results seemed weak, too, in that my queries only retrieved results with the search word in the title of the article (I tried this with ‘Palin’ and ‘BP’). I was on the eighth page of results for ‘BP’ before I found an article without BP in the headline—which tells me that while the search may work well for people who are looking for specific things (especially if those things can be captured in a keyword that will appear in a headline), it does not facilitate discovery of related items—like an article on the oil spill and its economic impact in the Gulf of Mexico.
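The behavior I saw is exactly what a title-only index would produce. A toy sketch of the difference between title-only and full-text matching, using two invented articles (not Yahoo’s actual implementation, which is not public):

```python
# Two invented articles: only the second mentions BP solely in its body.
articles = [
    {"title": "BP caps leaking well",
     "body": "Engineers sealed the damaged well on Thursday."},
    {"title": "Gulf fisheries reel from spill",
     "body": "The economic impact of the BP disaster is spreading."},
]

def search(query, articles, fields=("title",)):
    # Naive substring match over the chosen fields -- a sketch only,
    # with no tokenization, stemming, or relevance ranking.
    q = query.lower()
    return [a["title"] for a in articles
            if any(q in a[f].lower() for f in fields)]

print(search("BP", articles))                            # title-only: 1 hit
print(search("BP", articles, fields=("title", "body")))  # full-text: 2 hits
```

Indexing only the headline field silently hides the second article from a ‘BP’ search, which is the discovery failure described above.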
Google’s site establishes the standard for effective searching, including aids for exploration like “Searches related to [initial search]” within a user’s search results. Even out of order (‘Crime and the City Solution’ versus the mis-ordered ‘the solution and city crime’), the results retrieved are related to the group of words ‘crime and the city solution’; I didn’t begin to receive results about urban crime until about halfway down the second page. The format groups selectable on the left are an intuitive way to target search results, and once moving on (to ‘images’, for instance), the further categories available use common language that I expect most or all users could easily employ to reach their goal. Navigation is easy—though this judgment may be biased, since Google is likely the search engine that taught me what to expect of search engines, having used it for so many years after Yahoo, AltaVista, Dogpile, and many others—and it even varies by object type; for instance, navigating page results happens in a list of text with interspersed thumbnails for video results, whereas image results strip away the text at the top level, allowing scanning much more like Newsmap. This seems a much more effective way to browse images than scrolling down an interminable list.
Ebay (http://www.ebay.com) more and more looks like a normal retail site, with ads and large, attractive images taking up a substantial part of the real estate on the front page to promote daily specials and seasonal interests. The search results are, like Yahoo news, based on the headlines (the item titles), at least by default. While Ebay does enable a search option to include the description provided by a seller, this could be a nuisance for the novice Ebay buyer or seller who does not know to select the description as well as the title as the search indexes. Ebay has enabled a fairly smart ‘related searches’ feature, which I would be curious to know more about; I assume that they mine repeating words in similar listings, group all searches by user/session and look for overlapping terms between users, or use some other mechanism. In any event, searching for the Australian band the Cannanes not only led to relevant results in the Music category, but also brought up two related searches for fairly obscure, related bands (by virtue of shared band members), Ashtray Boy and Boyracer. Once a user gets to a results list, the categories for deeper investigation or limiting are intuitive—though I would question whether these categories are tagged by the seller posting the item or by some automated process on Ebay’s side, (I can imagine sellers manipulating the system to push their items into multiple categories to try to generate more views, and thus more sales).
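One plausible mechanism for such a feature—purely my speculation, not anything Ebay documents—is session co-occurrence: queries issued within the same user session “vote” for one another as related. A toy sketch with invented session logs:

```python
from collections import Counter
from itertools import permutations

# Invented session logs: each inner list is one user session's queries.
sessions = [
    ["cannanes", "ashtray boy"],
    ["cannanes", "boyracer", "ashtray boy"],
    ["cannanes", "boyracer"],
]

# Count ordered pairs of distinct queries that co-occur in a session.
co_counts = Counter()
for queries in sessions:
    for a, b in permutations(set(queries), 2):
        co_counts[(a, b)] += 1

def related_searches(query, n=2):
    """Return up to n queries that most often co-occur with `query`."""
    pairs = [(b, c) for (a, b), c in co_counts.items() if a == query]
    return [b for b, _ in sorted(pairs, key=lambda p: -p[1])[:n]]

print(related_searches("cannanes"))
```

On these made-up logs, a search for ‘cannanes’ would suggest ‘ashtray boy’ and ‘boyracer’, which matches the behavior described above; a production system would add thresholds and spam filtering.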
Information Architecture for the World Wide Web, chapters 10 & 11
Chapter 10 of Information Architecture for the World Wide Web does well to acclimate the reader to the landscape of research methods that can be used to inform IA deployment in web development. Morville and Rosenfeld (2006) guide their readers through “a balanced approach to research” by focusing equally on context, content, and users, (p. 233), and reviewing different avenues to engaging with each area. Whether through mapping of the data contained in a site, using card sorting to get a sense of users’ organization of that content, or conducting surveys, the authors provide a lengthy list of approaches to doing the research necessary to affect good information architecture.
The usefulness of these chapters is not just in the laundry list of tools to conduct the research and develop the strategies necessary for a solid information architecture foundation; it is also in the way the authors offer words of caution about the difficulties with some of the research tools and the barriers to designing a comprehensive strategy. For instance, in the same way that at the outset of this course most students lacked the skills and vocabulary to speak about what we were reading or what we could do with the information we were synthesizing, Morville & Rosenfeld (2006) describe a problem with the input of focus groups: “most people don’t have the understanding or language necessary to be articulate about information architectures,” (p. 253). The even-handed approach in these two chapters—indeed, throughout the book—exactly represents the tension of doing the work of IA: to have a strong desire to improve organization to facilitate finding or use, while remaining cognizant of the pitfalls created by (1) outside forces (resistance to change, politics), (2) technological limitations, and (3) even the very users that need to be continuously consulted to have an effective, progressive information architecture. The effective use of strategy seems to be at the very core of information architecture—if the work processes of an information architect can be described in stages, this stage is the essential, comprehensive battle plan. Morville and Rosenfeld’s account of the work on Weather.com is helpful in detailing many of the essential planning steps, like designing a conceptual blueprint and wireframing, that lead to the specifics of design for IA product delivery. I look forward to reviewing these chapters again as a sort of guidebook when I attempt some of the work of an information architect.
References
Morville, P. & Rosenfeld, L. (2006). Information Architecture for the World Wide Web (3rd ed.). Sebastopol, CA: O’Reilly.
on Information Architecture for the World Wide Web, ch. 7, 8, & 9
It seems common practice for websites to include multiple embedded navigation systems (global, local, and contextual) and some form of supplemental navigation system (a site index, a guide, and/or search functions) as the best methods to enable easy movement within a site’s content. Possible impediments to easy, intuitive navigation within a site are, as Morville and Rosenfeld caution, customization and personalization of websites; while many sites (especially retail sites) allow some level of customization, the user’s investment of time and energy in customizing any site might be limited, especially if the site will not be frequented, (Morville & Rosenfeld, 2006, p. 141). We might think of the incorporation of social navigation in a similar way: some aspects of it may be helpful (for instance, tagging, reviews, and star rankings) and could encourage users to contribute in some fashion. As with customization and personalization of any web presence, though, so too with contributing to social navigation: users have a limited amount of time and will likely not interact with websites on this level unless they are frequent visitors and can see benefits for themselves. Speaking from experience, I know that I only post reviews or ratings on a couple of sites I use frequently (ones where I find value in the reviews of others, like Amazon, Netflix, and Zappos), where I also look to the reviews and ratings of others as a guide for my own activities (for instance, browsing other content because it is referenced as comparable in a review).
As far as elements of searching are concerned, this chapter conveys how remarkably complex search can be, even while only touching the surface of design, indexing, and results organization. Further evidence for the complexity of search design and functionality is apparent in the entire book Morville authored on the subject, Search Patterns: Design for Discovery. This chapter’s overview of the areas accessed by a search, the algorithms that drive searching, and the organization and display of results will definitely be a go-to resource if I’m tasked with developing a new website or overhauling an existing site. I expect my first stop will be evaluating the content of the site to determine whether search functionality is even appropriate, just as Morville and Rosenfeld (2006) recommend, (pp. 145-148). Similarly, the chapter on metadata, controlled vocabularies, and thesauri is effective as both a quick reference tool and a brief overview. Just as information architects must be concerned with ease of navigation and access to all of a site’s content in determining the need for a search function, so too with the need for a thesaurus—or even the use of any controlled vocabulary—in aiding finding actions.
References
Morville, P. & Rosenfeld, L. (2006). Information Architecture for the World Wide Web (3rd ed.). Sebastopol, CA: O’Reilly.
on the CAIDA site, http://www.caida.org/home/
My only comments about this site are steeped in awe over both the spectacular organization and the staggering amount of data it makes accessible. This is clearly a site intended as an information source and, likely, also a facilitator of community for those working to understand the structure of—and data transmission via—the internet. While some of the labeling is so standard as to be comprehensible by the absolute layman (myself), I expect the abundance of acronyms and field-specific language used to present the collected data (and its interpretation, in papers and other formats on the site) does well to keep the novice at a distance. As for the actual content, I’m not sure I understand some of the maps, like the IPv4 & IPv6 Topology Map—is this meant as a snapshot of internet traffic as routed through only a certain number of monitors, or is it meant to represent all internet traffic?
(Revised, 9/20), on the Pew Internet & American Life Project
I wasn’t aware of this project, but am happy to know of its existence. It seems a great warehouse for data regarding trends of internet usage across the country, which I think would be especially useful in guiding thinking about new projects within, say, libraries. In some ways, I’m not at all surprised at the results of some of the studies — finding, for instance, that more than half of Americans 65 or older are online, versus 95% of those ages 18 to 29, (“Demographics of internet users,” 2010). On the other hand, some of the findings were unexpected, such as Purcell’s (2010) “Information on the go” presentation, which says nearly half of African-American adults use cell phones to access the internet (far outpacing Hispanics and whites), (slide 11). Information such as this is clearly useful for anyone seeking to organize information for easy access: whether the IA professional is working at the Wall Street Journal, a library, or an online retailer, it seems obvious that a conclusion to finding widespread internet access via cell phones would be to develop online content in highly mobile-friendly ways. As an afterthought, I can imagine those responsible for marketing even using this information to customize the mobile experience of their users (perhaps expecting higher proportions of some demographics via mobile technologies, thus delivering customized content).
References
Pew Internet & American Life Project. (2010). Demographics of internet users. Retrieved from http://www.pewinternet.org/Static-Pages/Trend-Data/Whos-Online.aspx
Purcell, K. (2010, Sep. 20). Information on the go. Keynote address at the Arizona State Library’s E-Reader Summit and Technology Showcase, Tempe, AZ. Retrieved from http://www.pewinternet.org/Presentations/2010/Sep/Information-on-the-go.aspx
on UC Berkeley’s How Much Information?, http://www2.sims.berkeley.edu/research/projects/how-much-info-2003/
While the study itself is fascinating—I’m honestly a little surprised to see a serious attempt at quantifying annual information creation—I would love to see a comparative analysis across multiple years (to the present), especially if the number of files (for instance, for p2p sharing) were tracked, too. The reason I’m particularly interested in the number of files as well as the size of the data is that I expect—despite the qualification about compression near the end of the Executive Summary (http://www2.sims.berkeley.edu/research/projects/how-much-info-2003/execsum.htm)—that the quality of video and audio created and transferred over the internet is substantially increasing as (1) download and upload speeds have increased with widespread high-speed connectivity, and (2) there are many more outlets for higher-quality digitized products, (I think here of iTunes’ offer of prorated upgrades to higher-bitrate versions of songs already purchased, or of HD offerings of television episodes through Amazon). As quality increases and transmission of larger and larger files becomes easier, I think capturing the number of discrete units of information as well as the overall volume will be significant—especially for our purposes of organizing and facilitating access to all of this information.
While I understand the overarching concept of contextual design, and (though we didn’t name it as such) believe I have participated in its application as a user, I am having trouble with the difference between “interpretation” and “data consolidation” in the seven-step process. The Wikipedia article includes this description within the “Interpretation” part of the process: “Data from each interview is analyzed and key issues and insights are captured,” (“Contextual Design,” 2010, “Interpretation” para. 1). The article also includes similar language in the “Data Consolidation” section: “Data from individual customer interviews are analyzed in order to reveal patterns and the structure across distinct interviews,” (“Contextual Design,” 2010, “Data Consolidation” para. 1). Since “models” are built from both steps in the process (again, at least according to the Wikipedia article), is the substance or structure of the models the differentiating factor between the steps? Or is the difference that in the “Interpretation” phase, a single interview is used to build a model, and then in the “Data Consolidation” phase, all of the different models that were built from separate interviews are compared to find commonalities? Also, I believe I was asked to take part in the “Visioning” process used in the OLE Project (http://oleproject.org/) by constructing “stories”—is it standard practice for story creation to be done by the users, especially if they have not been involved in any other aspect of the project development (or contextual design, other than being asked about tasks in a way that seems like the data-collecting phase)?
I’ll continue to refer back to this process of (1) collecting data, (2) interpreting data, (3) consolidating data, (4) ‘visioning’, (5) storyboarding, (6) diagramming with User Environment Design, and (7) prototyping as I approach future projects (whether as the user simply doing tasks, or as the one designing how they are done or could be done more efficiently). I hope, though, that I’ll have the good fortune to have knowledgeable guidance as I attempt to apply this model in some sample situations before actually trying it in the real world!
References
Contextual design. (n.d.). In Wikipedia. Retrieved September 11, 2010, from http://en.wikipedia.org/wiki/Contextual_design
on Rosenfeld Media’s page on Rosenfeld’s upcoming book, Search Analytics
I’ve heard the term search analytics bandied about in libraries, but hadn’t yet heard of a sustained project to record and analyze search terms in order to better define the organization of library resources; needless to say, I’m interested in exploring Rosenfeld’s idea–and book–further, at least judging by the first chapter, (http://rosenfeldmedia.com/books/searchanalytics/content/sample_chapter/). The idea of a continuing review process is especially appealing as a method any group can use to continually evaluate and update the organization and delivery of their web content. Though I can see potential dangers if (when) search engines load the dice for certain search terms — delivering a retailer’s site as the first result for a search for ‘men’s clothing’, for instance — I can definitely see the positive applicability of the ‘best bets’ Rosenfeld talks of as a way for some groups (like a university, just as in his example) to better serve their users with search analytics. I would be interested to see this applied to a library’s OPAC, too, (or some similar mechanism, for when subject headings and all manner of keyword searching fail the user).
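The starting point of the approach is simple to sketch: tally the site’s own search log to find the dominant queries, then curate ‘best bets’ for those. A minimal illustration with an invented log for a hypothetical library site (the queries and the URL mapping are my own examples, not from Rosenfeld’s book):

```python
from collections import Counter

# Invented search-log entries for a hypothetical library site.
search_log = [
    "library hours", "interlibrary loan", "library hours",
    "course reserves", "library hours", "interlibrary loan",
]

# The handful of most frequent queries are the best-bets candidates.
top_queries = Counter(search_log).most_common(3)
for query, count in top_queries:
    print(f"{query}: {count}")

# A curated best-bets table then maps each top query to a hand-picked
# page that gets pinned above the algorithmic results.
best_bets = {"library hours": "/about/hours"}
```

The value of the continuing-review process is precisely that this tally is rerun over time, so the best-bets table tracks what users are actually asking for.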
on Information Architecture for the World Wide Web, chapters 5 and 6
Morville and Rosenfeld (2006) enumerate the “challenges of organizing information” as follows: “ambiguity” (over labeling on websites), “heterogeneity” (of the content of websites), “differences in perspective” (about proper organization, like determining associations between content), and “internal politics,” (pp. 54-58). Since I continually move back to the context of libraries when thinking of IA, the applications of these ideas that immediately come to mind are based on experience at the University of Florida Libraries. As I’ve mentioned previously, I know the site I inherited and was responsible for maintaining certainly did a poor job of addressing the ‘challenges of organizing information,’ but stepping outside of that one sub-site to look at the library’s site as a whole, I’ve noticed the tension coming from internal politics. As the development officer and his staff have worked harder to increase donations to the library, I’ve noticed more real estate on the front page focused on that function, whereas in the past the ‘giving’ portion of the site was buried in a different grouping that also included what is now labeled ‘services for alumni and friends.’ In addition to the increase in paths from the front page to the web content from that office, there has also been heavy rotation of donation and fundraising information within the ‘news and highlights’ section of the page, (see http://www.uflib.ufl.edu). I expect the organization of these development areas and their place on the library’s main page are a function of the political will of the library administration, (which includes the head of the Development office, as he is a dean and a member of the admin team).
I appreciate the explication of organizational schemes and structures in the writing of Morville and Rosenfeld—and I suppose most sites tend toward using hybrid schemes; while I can see the importance of providing an alphabetical, chronological, or geographical index to users, I suspect this would be considered only one way that users seek their destination. Again, I refer back to the UF Library’s site, where some of these organizational schemes are used, but in order to enable faceted searching, not as the only entry point, (in other words, an alphabetical index of all titles in the library isn’t the preferred method of reviewing what is available, and even in a results list, sorting alphabetically or chronologically is possible but isn’t expected as a primary search mechanism). Instead, I think that, like many sites, the UF Library does one good thing in trying to anticipate the variety of approaches of users. I’m not certain how well this is done, though; as Morville and Rosenfeld (2006) write, “shallow hybrid schemes are fine but deep hybrid schemes are not,” (p. 68). I’m curious—could this account for why so many library websites are not very easy to navigate, if not outright impossible to use (fully and well, anyway)? Does this have to do with the variety of resources (in volume, and in variety of format) libraries seek to make available, and an insufficient way of organizing and allowing access to all of this information? As an aside, are libraries enabling user tagging of materials in their collections, and if so, is it possible to rely on user tags to drive searches? Are there other examples of successful user tagging (other than del.icio.us and Flickr), and can we consider rating systems a type of tagging (as used with StumbleUpon), or must tagging be based on words?
I can see how labeling systems go hand-in-hand with systems of organization, especially as both should facilitate access to information. It seems that the rules for developing labeling systems follow in a similar fashion from those that would apply to methods of organizing: use appropriate language to meet the user at his or her level, remain consistent in labeling across the site(s)—to include not leaving gaps in coverage, as well as in maintaining even and consistent distribution of information across labels/categories.
References
Morville, P. & Rosenfeld, L. (2006). Information Architecture for the World Wide Web (3rd ed.). Sebastopol, CA: O’Reilly.
on Information Architecture for the World Wide Web, chapters 3 and 4
Morville and Rosenfeld (2006) identify four different “information needs” that inform user search behavior, which in turn should impact the work of the information architect in organizing a web site or other information environment:
- A user looking for an answer performs a search and finds an exact answer to the exact question; or, “known-item seeking,”
- A user is looking for some possible answers or more data associated with a subject; the search seems to be as informative in helping the searcher as the answers are; or, “exploratory seeking,”
- A user carries out a comprehensive sweep of data wanting to net everything available related to a subject; or, “exhaustive research,”
- A user previously found and tagged something and is seeking it again; or, “refinding,” (p. 34).
Morville and Rosenfeld (2006) also class “information seeking behavior” into a few different categories: “searching, browsing, and asking,” (p. 35).
I can easily think of examples of each type of search, all of which I have encountered while working or studying in a library. While working on the reference desk, I often had patrons ask me “Where is [a book that was being held on course reserve]?” or even “Where is the bathroom?” This sort of “known-item seeking” was simple enough to answer, (and this face-to-face contact seems to be of the “asking” sort of information-seeking behavior). This sort of search does seem as simple as the “‘too-simple’ information model” Morville and Rosenfeld (2006) mention, (p. 31).
As for exploratory seeking, I admit to doing this all too often myself, especially using interfaces that allow faceted browsing: when starting a search in a library’s online catalog with something as simple as a Russian author’s name (say, Pushkin), I can retrieve results in many languages and then pinpoint only those in English and Russian for my comparative translation work just by using the language facets. While performing this targeted reduction, I might happen upon a different author’s name listed within the author facet; perhaps this is the first connection I have ever seen between the two authors (maybe Nabokov and Pushkin), and it even leads me to find a work absolutely relevant to my own project (like Nabokov’s translation of Eugene Onegin). Following this example, it seems obvious to me that the actual process of looking for an answer can be instructive, leading not only to an answer but also prompting additional searching.
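The narrow-then-discover loop described here can be sketched in a few lines. The catalog records below are invented stand-ins for real bibliographic data, and the facet logic is a deliberately tiny simplification of what a real OPAC does:

```python
# Toy catalog records (invented) for the Pushkin example.
records = [
    {"title": "Evgenii Onegin", "author": "Pushkin", "lang": "Russian"},
    {"title": "Eugene Onegin: A Novel in Verse (trans.)",
     "author": "Nabokov", "lang": "English"},
    {"title": "Eugen Onegin", "author": "Pushkin", "lang": "German"},
]

def narrow(records, **facets):
    """Keep records whose value for each facet is in the allowed list."""
    return [r for r in records
            if all(r[f] in allowed for f, allowed in facets.items())]

def facet_values(records, field):
    # The facet sidebar itself: distinct values left in the result set.
    # Surprising entries here (e.g. Nabokov) are where discovery happens.
    return sorted({r[field] for r in records})

hits = narrow(records, lang=["English", "Russian"])
print(facet_values(hits, "author"))
```

Applying the language facet removes the German record, and the author facet of what remains surfaces Nabokov alongside Pushkin, which is the serendipitous connection the paragraph above describes.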
“Exhaustive research” seems related to “exploratory seeking” in a way, since it can also benefit from an initial subject (or author, or keyword) search, with a great explosion of hits led to by as many facets (or aspects, or result limiters, or whatever you’d like to call them) as are available. Also, in the same way that exploratory searching is instructive, so too is exhaustive research each time a search effort returns unknown (but relevant) results.
Finally, I can recall working public service and too often trying to guide students back to an article found through a poorly-remembered database search. Not only is “refinding” a recurring need in users, I can only thank any information professional for recognizing the need to facilitate this process, (whether through del.icio.us, breadcrumbs showing the path of a search, or even the ability to bookmark webpages on a local machine). In short, I can see how an understanding of these two groups of ideas, about information needs and information-seeking behavior, can help positively influence the organization of any system that stores information.
As far as “The Anatomy of an Information Architecture” is concerned, I was happy to see a few different examples to assist me in conceptualizing IA in practice. I expect that with more experience analyzing web content in terms of IA, I will develop faster interpretive abilities for categorizing organization by IA components—as Morville and Rosenfeld (2006) name them, “organization systems,” “navigation systems,” “search systems,” and “labeling systems,” (p. 43); elsewhere, the authors group the components differently as “browsing aids,” “search aids,” “content and tasks,” and “’invisible’ components,” (pp. 50-52). Upon reviewing the University of Florida Libraries unit webpage I mentioned in previous posts, I can already see where aspects of the site fit these IA components—and, actually, can only too well see the gaps and missteps in organization that an understanding of IA and its application could resolve, (see http://www.uflib.ufl.edu/acqlic/mono/ for the site, and its embarrassing lack of intuitive categorization).
References
Morville, P. & Rosenfeld, L. (2006). Information Architecture for the World Wide Web (3rd ed.). Sebastopol, CA: O’Reilly.