Crowdsourcing RA | Social Mediator

With algorithms crunching numbers behind the scenes, sites like LibraryThing, Shelfari, and Goodreads are helping librarians with readers’ advisory

June 1, 2011
Would you recommend The Brothers Karamazov? How about The Girl with the Dragon Tattoo? The “Twilight” series? Love them or hate them, you’re not alone. There are likely millions of other people out there who feel the same, and they are making their opinions known on sites such as LibraryThing, Shelfari, and Goodreads. Librarians, too, have been increasingly using such services to engage with local communities—and to aid in readers’ advisory (RA) efforts.
The über-example of crowdsourced reviews and recommendations is, of course, the massive book retailer Amazon, with its five-star user rating and review system. It almost certainly informed the design of other book recommendation sites. (In fact, Amazon has taken a few of them under its wing: it has wholly owned Shelfari since 2008 and has a 40 percent stake in LibraryThing.) In just a few years, members of these sites have offered up their reviews and ratings on millions of books.
Librarians also have been using such sites to add comments, tags, and reviews about books (see Neal Wyatt’s “2.0 for Readers,” LJ 11/1/07, p. 30–33, and “The Ideal Tool,” LJ 10/15/09, p. 39–43), and many include similar functionality in their own catalogs.
RA is most often a one-on-one interaction in which a librarian taps his or her knowledge and experience to identify what a given patron likes and dislikes and to select the book that best fits that reader’s needs. Experimentation with social networking and RA is rampant: Multnomah County Library, OR, for example, recently held a Facebook Readers’ Advisory event (see “Facebook RA,” LJ 5/1/11, p. 24–26).
Such one-on-one dialogs are the heart and soul of RA work, and they are getting more support than ever from book recommendation sites, which can also add complex algorithms to the mix, crunching the numbers on millions of user-provided book ratings.
This process moves beyond individual tags and reviews to provide a broad overview of opinion, and, done well, it can help provide ready-made recommendations for RA. As Goodreads founder and CEO Otis Chandler points out to LJ, “It’s impossible for even the most literate person in the world to be able to have read every book, or know about every subject that a reader might ask about.”
It’s a library thing
LibraryThing has over one million users and more than 58 million books in its database. It gets book information from Amazon as well as from libraries that provide open access to their collections via the Z39.50 protocol.
About 250 library systems use LibraryThing for Libraries, which adds tagging and book recommendation functionality to library catalogs. They include the Central/Western Massachusetts Automated Resource Sharing (CW/MARS) consortium and the Wisconsin-based SWITCH academic library consortium.
Recommendations are a central service of LibraryThing, seen most straightforwardly in its BookSuggester feature (and, more slyly, in its UnSuggester, which tracks down books you’re sure not to like: you say you love Immanuel Kant’s Critique of Pure Reason? Then you’re sure to hate Sophie Kinsella’s Confessions of a Shopaholic!). But LibraryThing provides guidance in a host of other ways: members can get recommendations from other members, or can use tags to find specific types of books.
It also provides book-to-book suggestion features, both algorithmically provided and member-written. LibraryThing founder and developer Tim Spalding says that it uses five different algorithms to generate recommendations, with the most powerful factor being the principle of “members-who-have-X-have-Y”—sussing out whether, say, members who have J.R.R. Tolkien’s The Fellowship of the Ring also have Terry Brooks’s The Sword of Shannara and how they have rated it. Even library classifications are factored in.
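In outline, the “members-who-have-X-have-Y” principle is simple co-occurrence counting: for a given book, tally which other books show up on the same members’ shelves. The sketch below is a hypothetical illustration only; the shelf data are invented, and LibraryThing’s actual five algorithms (with their rating and classification weighting) are not public.

```python
from collections import Counter

# Toy data: each member's shelf, as a set of book titles.
shelves = {
    "alice": {"The Fellowship of the Ring", "The Sword of Shannara", "Dune"},
    "bob":   {"The Fellowship of the Ring", "The Sword of Shannara"},
    "carol": {"The Fellowship of the Ring", "Dune"},
    "dave":  {"Dune", "Pride and Prejudice"},
}

def suggest(book, shelves, top_n=2):
    """Rank other books by how often they share a shelf with `book`."""
    counts = Counter()
    for shelf in shelves.values():
        if book in shelf:
            for other in shelf - {book}:
                counts[other] += 1
    return [title for title, _ in counts.most_common(top_n)]

# Members who have The Fellowship of the Ring also have...
print(sorted(suggest("The Fellowship of the Ring", shelves)))
# → ['Dune', 'The Sword of Shannara']
```

A real system would also weight co-occurrences by how members rated each book, exactly the step Spalding describes, rather than counting mere shelf presence.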
Spalding notes that such algorithms require a large and rich set of user data to work well, which LibraryThing certainly has. “You also need to play with the algorithms extensively and have some taste and discernment,” he says. “And you need to pay attention to what’s interesting. [F]or example...for almost every young adult book, the best statistical recommendation would be a Harry Potter [title]. But that’s not interesting, is it? So you need an algorithm that knows what’s interesting and that mixes things up in interesting ways.”
Spalding also notes that “straight-up person-to-person recommendations are great, especially when—as on LibraryThing—members can explain the recommendation, and when they can look at the members’ library, reviews, and such in order to see if they’re someone whose opinions [they] connect with.”
Off the shelf
Shelfari, which first came on the scene in 2006, is unquestionably popular, with plenty of librarian usage. One librarian, in a discussion last year in Shelfari’s nearly 1000-member “Libraries and Librarians” group, described how they have used Shelfari to generate recommendation ideas: “In our library, all the staff are regulars on Shelfari. We’ve talked it up to a few patrons as well. Mostly, though, we use it to remember what we’ve read. When our ‘staff picks’ cart gets too empty, I check my Shelfari shelf and pick out a few books that I rated high and add them to the cart.”
According to Shelfari cofounder Josh Hug, Shelfari’s recommendation method is relatively straightforward. “The primary discovery is via following your friends’ and influencers’ reading experience,” he says. “You can also find people who have similar reading tastes and get a snapshot of their reading history, which provides great context to evaluate if you’d like to read a new review.”
The new algorithm
Goodreads, which began in 2006 and now has five million users and a staggering 130 million books in its database, is the biggest player around. Goodreads doesn’t have statistics on how many librarians use the site, but recently it conducted an informal poll, asking its members, “Do you work as a librarian for a living?” About 1900 people out of more than 26,000 respondents said yes—about seven percent. And many libraries run reading groups through the site, including the San Diego Public Library and Salt Lake County Library Services.
Most recently, Goodreads bought the much smaller site Discovereads. The reason? To obtain its recommendation algorithm. At press time, Goodreads planned to integrate the Discovereads algorithm into Goodreads by the end of May. “Personalized predictions”—Goodreads’ guess at how a given user will rate each book—will be up “a month or so after that,” according to Goodreads CEO Otis Chandler. The algorithms will draw on Goodreads’ 100 million ratings and millions of user-generated tags.
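A “personalized prediction” of this kind is typically computed by weighting other members’ ratings of a book by how closely their tastes match the user’s. The following is a hypothetical sketch under that assumption; the data and the crude similarity measure are invented, and the Discovereads algorithm itself is not public.

```python
# Toy data: each member's star ratings (1-5) for books they have read.
ratings = {
    "ann": {"Dune": 5, "Emma": 2},
    "ben": {"Dune": 4, "Emma": 1, "Dracula": 5},
    "cal": {"Dune": 1, "Emma": 5, "Dracula": 2},
}

def similarity(u, v):
    """Crude taste overlap: inverse of the mean rating gap on shared books."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    gap = sum(abs(ratings[u][b] - ratings[v][b]) for b in shared) / len(shared)
    return 1.0 / (1.0 + gap)

def predict(user, book):
    """Similarity-weighted average of other members' ratings for `book`."""
    num = den = 0.0
    for other, shelf in ratings.items():
        if other != user and book in shelf:
            w = similarity(user, other)
            num += w * shelf[book]
            den += w
    return num / den if den else None

# Ann's tastes track Ben's, so her predicted Dracula rating leans toward his 5.
print(round(predict("ann", "Dracula"), 2))
# → 4.08
```

Scaled up to 100 million ratings, the same idea requires far more efficient similarity computation, but the weighted-average core is a common starting point for this class of recommender.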
The way to RA
Megan McArdle, library services manager of collection development/technical services at Berkeley Public Library, CA, has tried all of these sites. She finds Goodreads to be “more seamlessly, effortlessly social” and uses it primarily to keep track of books she has read herself, so that she can use it for RA later. “If I am trying to remember a particular book that I think will be a perfect suggestion for a patron, I can often scroll back through my log and find the title I was thinking about,” she says. She also says that fellow librarians on the site help her gauge the quality of books she hasn’t read.
Goodreads’ integration of algorithmic recommendations has McArdle “intrigued.” She says that “[i]f this algorithm is smart and learns from user input, it could be a great boon to readers’ advisory. Another source to look at when you are trying to help someone find a good book is never a bad thing.”
One pitfall of such counsel, she points out, is that it may well have to work perfectly out of the gate. “I know that if it gives me a really bad suggestion, I probably won’t ever trust it again,” she says. “But that could be said of a human RA interaction too! ‘Boy, that librarian gave me a terrible book to read—I’ll never ask her for help again!’”
McArdle says that although one-to-one RA experiences “beat computers in this kind of task, hands down,” algorithmically generated recommendations can still be worthwhile. As McArdle puts it, they’re “another tool to help those humans!”
David Rapp is Associate Editor, Technology, LJ