The RecSys 2012 Limerick Challenge

Announcing a contest for the best RecSys-related limerick! Winner to be recognized at the 2012 conference.

Prize: 100 Euros, possibly to be split among multiple winners.

Post your limerick as a comment in response to this post. Keep it appropriate for all ages, please, but it’s OK if mathematical sophistication is required!

Answers to frequently asked questions that might get asked infrequently:

  • Judges TBD.
  • No, we don’t promise to be fair or careful in our judging.
  • No, even if there are many submissions we will not announce an acceptance rate.
  • No, you shouldn’t put it on your CV if you win, unless you already have tenure.
  • Yes, if you win you should spend the money buying a round at a pub in Dublin, or possibly make a road trip to Limerick.

Panel on The Filter Bubble

The phrase “filter bubble” was coined by the author Eli Pariser in his best-selling book of the same title. The book is a critique of the proliferation of personalization technologies across the Internet and their potential negative impact on the ability of Internet users to be exposed to diverse sources of information and varied viewpoints. The panel discussion at the 2011 conference explored this issue from the perspective of people working on personalization and recommendation technologies.

Panelists were:

  • Paul Resnick, University of Michigan
  • Joseph Konstan, University of Minnesota
  • Anthony Jameson, DFKI – German Research Center for Artificial Intelligence

The panelists addressed three questions:

  • Are there “filter bubbles?”
  • To what degree is personalized filtering a problem?
  • What should we as a community do to address the filter bubble issue?

To help with note-taking on the oral discussion by panelists and audience, please edit the summary of the live discussion.

Slides used by the panelists (and some that weren’t used).

————— As of 11/29/11, the state of the TitanPad summary of the live discussion has been copied here, just in case TitanPad goes away at some point.

This document contains a summary of the oral discussion at the 2011 RecSys panel. Michael Ekstrand is serving as the primary note-taker. Feel free to fill in or correct anything he missed. Shortly after the conference, these notes will be merged into the main blog entry about the panel.
If you’d like to add written comments to supplement the discussion, please add them using the comment box on the blog entry, https://acmrecsys.wordpress.com/2011/10/25/panel-on-the-filter-bubble/.
Moderator: Bamshad Mobasher
Panelists:
  • [PR] Paul Resnick, University of Michigan
  • [JK] Joseph Konstan, University of Minnesota
  • [AJ] Anthony Jameson, DFKI – German Research Center for Artificial Intelligence
Welcome!
We’re getting ready to start. Panelists are lined up.
We will be discussing the Filter Bubble, popularized by Eli Pariser – a critique of personalization as it is deployed across the web.
The panelists addressed three questions:
=================================
Q1: Are there “filter bubbles?”
[AJ]
  • Anyone in the personalization field is aware of the possibility of narrowing of experience. PR was one of the first to mention it in the literature (in the mid-1990s). It’s been textbook/handbook knowledge for 10 years.
  • To what extent does the problem really arise in practice?
  • (Slide:) Look at Google News (example from the “Filter Bubble” blog page) in personalized vs. incognito mode
  • Note that only a direct comparison between personalized vs. nonpersonalized modes can reliably reveal the effects of personalization.
  • Closer inspection shows that all of the “Top stories” are still visible in personalized mode, at most 1 click away
  • So we need a more precise way of assessing the seriousness of any narrowing (see the list-comparison sketch at the end of this section).
  • (Slide:) Proposed RecSys “bubble” scale, with two questions
  • 1. To what extent is the user prevented from encountering missing things (M) “omitted” by the personalization algorithm?
  • There is no M
  • Links to M are visible
  • The user is likely to see M elsewhere in the site even without looking for it
  • The user can find M if s/he thinks it might be there and actively looks for it.
  • There is no way for the user to find M
  • 2. In comparable cases, is M consistently the same type of information?
  • Words like “bubble”, “hiding”, and “editing out” imply extreme values on both questions at once; where are the examples of such cases?
  • (Slides:) Own Google search for “Egypt” shows no difference between personalized and non-personalized results (when not signed in to a Google account).
  • Subsequent search for “BP”: the only difference is the addition, to the personalized results, of a couple of results about BP in Egypt, due to the immediately preceding query about Egypt. All of the other results in the nonpersonalized list are still available in the personalized list, though some have been pushed a bit farther down.
[JK]
  • Yes. But let’s say more
  • They always have been.
  • Fundamental trade-off.
  • FDR – post-1915 (“responsible”) press elided the wheelchair from public view.
  • Responsible press is a filter – is it better?
  • News programs on Big 3 networks in US (e.g. Cronkite era) – we had interesting American programming, but little world news.
  • Cable programming – lots of it. Now can watch al-Jazeera or Fox News for different perspectives on Egypt. People complained about lack of common ground – we aren’t all watching “the Evening News”.
  • Personalization is about delivering to a person what they want – how much do we trust their choices? Are they good for them?
[PR]
  • Yes, bubbles exist. Will wait to say more.
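A minimal sketch of the kind of comparison AJ describes, assuming we can capture the ranked result lists from a personalized and a non-personalized (e.g., incognito) session; the item names below are hypothetical:

```python
def narrowing_report(personalized, nonpersonalized):
    """Compare two ranked result lists (sequences of item ids)."""
    rank = {item: i for i, item in enumerate(personalized)}
    omitted, demotions = [], []
    for i, item in enumerate(nonpersonalized):
        if item not in rank:
            omitted.append(item)                      # invisible in personalized mode
        elif rank[item] > i:
            demotions.append((item, rank[item] - i))  # pushed down, but still there
    return omitted, demotions

# Hypothetical "BP" example: personalization reorders but omits nothing,
# which would sit at the mild end of the proposed "bubble" scale.
omitted, demotions = narrowing_report(
    personalized=["bp_spill", "bp_egypt", "bp_stock", "egypt_protests"],
    nonpersonalized=["bp_spill", "bp_stock", "egypt_protests"],
)
print("omitted:", omitted)    # [] -- nothing is unreachable
print("demoted:", demotions)  # [('bp_stock', 1), ('egypt_protests', 1)]
```

The two outputs line up with AJ’s two scale questions: omissions measure how hard M is to reach at all, and demotion distances measure how far surviving items were pushed down.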
=================================
Q2: To what degree is personalized filtering a problem?
[JK]
  • First must ask: compared to what?
  • What is alternative to personalized filtering?
  • If Amazon just had a search, we’d have no bias but would have overload.
  • Two questions
  • What’s the alternative?
  • People are unlikely to explore all the possibilities themselves.
  • People make lists that others consult
  • Objective content model
  • Expert-engineered organization
  • How correlated are these bubbles, and how transparent are they?
  • Do we have multiple independent bubbles with different views, or one big brother bubble?
  • How much opportunity to see things outside the bubble?
  • Can I find interesting things outside?
  • Am I occasionally reminded of the outside?
[PR]
  • Think it’s a good thing when different people have different results.
  • Is it better for a cocktail party to have 17 things people have seen, or 10 things?
  • We should all boycott Duck Duck Go – it would be bad for society if we all used it.
  • Underlying concern: people will just get reinforcement of what they already believe, already agree with.
  • There is little evidence to support this. It has been studied under “selective exposure”, and the results are quite mixed.
  • Some results from PR’s research:
  • Everyone likes reinforcement
  • Some people prefer mix of reinforcement and challenge
  • Challenge is only mildly aversive on average
  • People viewing extreme sites spend more time on mainstream news as well
  • Not so concerned that this is a major problem
[AJ]
  • Relevant concept of “choice architecture” from Thaler and Sunstein’s book “Nudge: …” Consider the arrangement of food in a cafeteria. Any arrangement will influence what people eat, so there is no “neutral” arrangement. Which arrangement is best is a tricky question (optimize for profit, healthiness, variety, …?), and who should decide (the cafeteria manager, the visitors, both, …)? “Nudge” can mean either “push” or “hint”.
  • Personalization is the equivalent of having the cafeteria rearrange itself for each visitor, so it vastly enlarges the space of possible solutions.
=================================
Q3: What should we as a community do to address the filter bubble issue?
[PR]
4 directions we should explore:
  • 1. Take a longer-term view of accuracy, esp. in terms of exploration/exploitation (a small bandit-style sketch follows this list)
  • 2. Portfolio preferences – for collections, not individual items
  • 3. Tools for perspective-taking – see the world through other eyes (e.g. Living Voters Guide)
  • 4. When immediate preferences & aspirations conflict, build in nudges towards better selves. Don’t give them broccoli when they hate it, but give it to them when they aspire to eat it.
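A minimal sketch of what direction 1 could look like, using an epsilon-greedy policy – one standard exploration/exploitation approach, not necessarily what PR has in mind. The items and scores are hypothetical:

```python
import random

def recommend(estimated_scores, epsilon=0.1, rng=random):
    """Pick an item: usually the best estimate, occasionally a random one."""
    if rng.random() < epsilon:
        # Explore: show something the model is less sure about,
        # sacrificing a little short-term accuracy to learn more.
        return rng.choice(list(estimated_scores))
    # Exploit: recommend the current best estimate.
    return max(estimated_scores, key=estimated_scores.get)

scores = {"item_a": 4.6, "item_b": 3.9, "item_c": 2.1}  # hypothetical predictions
picks = [recommend(scores, epsilon=0.2) for _ in range(1000)]
print({item: picks.count(item) for item in scores})
# item_a dominates, but item_b and item_c still get occasional exposure.
```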
[AJ]
  • Giving people better control over personalization remains an important, under-studied research challenge.
  • (Slide:) A very early (2005) interface for personalized Google search allowed real-time control over the amount of personalization, with immediate visual feedback.
  • But even it didn’t help much to solve the tough aspects of the control problem:
  • Most people don’t want to take the time to exert fine-grained control
  • But it’s also hard to find one or two good long-run settings
  • It’s often hard to predict the consequences of a control adjustment
  • And important cumulative consequences may emerge only over the long term, making it hard to learn from trial and error
  • The good news: There’s been so little attention to this problem that there are plenty of unexplored directions for seeking improvement
[JK]
  • Why do we care?
  • There are people who are wrong (read: disagree with us) who are dangerous.
  • We worry about “dangerous” people (e.g. Bachmann or Thomas) isolating themselves, but not people we don’t consider “dangerous”
  • We don’t push for EE training for the Amish
  • There are many people very happy in their shell.
  • Allow people to show off their diversity of reading?
  • We’re doing many things right now. Many real problems are already on our research agenda.
  • Filter bubble issues are a problem with implicit ratings – click and interest are not equivalent (sometimes reading the headline is enough)
  • Analyze portfolios for topic and diversity of point of view (a small diversity sketch follows this list)
  • HCI issues – how to give awareness & control without overwhelming users? Easy to add levers.
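A minimal sketch of the portfolio analysis JK mentions, using intra-list diversity (mean pairwise dissimilarity) over viewpoint vectors. The stories and vectors are hypothetical; a real system would get them from a topic or stance classifier:

```python
from itertools import combinations

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def intra_list_diversity(items, features):
    """Mean pairwise (1 - cosine similarity) over a recommended list."""
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    return sum(1 - cosine(features[a], features[b]) for a, b in pairs) / len(pairs)

features = {                # hypothetical viewpoint/topic vectors per story
    "story1": [1.0, 0.0],   # viewpoint A
    "story2": [1.0, 0.1],   # mostly viewpoint A
    "story3": [0.0, 1.0],   # viewpoint B
}
print(intra_list_diversity(["story1", "story2"], features))  # low: near-duplicates
print(intra_list_diversity(["story1", "story3"], features))  # high: opposing views
```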
=================================
Questions
  • Peter Brusilovsky
  • Message from us to the outside – do we have a book saying personalization is great?
  • Transparency – show what is there beyond the recommended items.
  • [JK]: As a field, we need to have some idea of what it means to be coercive or hiding. The Facebook example is most concerning – is the only way to see all updates to go friend-by-friend?
  • [AJ]: This is the most extreme example we’ve seen on RecSys “bubble” scale so far – User can find missing information (only) by actively looking for it.
  • Shilad Sen
  • We have different framings – giving users what they want, meeting needs of host/designer, etc. Can solving filter bubble be fit into that framing?
  • [JK] Framing is nice – what people want conflicts with paternalism.
  • [PR] Giving people what they truly want – long-term perspective including aspirations. “What they want” doesn’t have to imply short-term pleasure.
  • [AJ] What do we mean by expressions like “what people want”, “what they like”, or “their preferences”? (Talk at workshop on Friday.)
  • BBC rep
  • BBC royal charter demands transparency
  • Do personalization and bias have to be the same thing? Is it possible to have impartial personalization?
  • [AJ]: Depends on what “impartial” means. If balanced, balanced between what and what (cf. the cafeteria problem)?
  • (Remark later in discussion:) Though there is no absolute notion of balance, you can define a balance policy and use it as a constraint for personalization (a small re-ranking sketch follows this question list).
  • [JK]: We can personalize without what most people consider bias: consider reading levels of literature – different versions of basically the same stories. Likewise, geographical personalization. But people may well consider it biased if 60% of a cricket lover’s news references cricket.
  • AJ: Is what we call “filtering” really filtering? Let’s ban the term unless it’s actually filtering.
  • [PR]: Votes no – if it’s hard enough to get to the stuff, it’s effectively not there.
  • [JK]: Huge difference between promoting a few things to the top vs. demoting so far you won’t see them. Demoting below the top 100 may well be filtering.
  • [AJ]: There is a difference that goes beyond the question of how hard it is to find the nonpromoted items: “filtering” suggests that particular types of item (e.g., news items about foreign countries) are systematically being “edited out”. When instead the nonpromoted items are being pushed down to a less prominent location, there is no systematic bias against particular types of content.
  • Pearl Pu
  • Issue is adoption, not filtering
  • In early adoption, ease of use is paramount
  • As adoption increases, shift to control
  • Amazon “fix the recommendation” useful direction
  • Bryce
  • Couple of emotional appeals
  • Are we comfortable with machines making these decisions?
  • Gets upset when he hears about things from a friend that he doesn’t already know.
  • Similar issue to high-frequency trading debate
  • Gunnar Shroeder
  • Filter bubble is there in some applications
  • Returning to Germany from Canada, he couldn’t see Canadian content as easily.
  • [PR]: Perspective-taking
  • We need to make sure our tools are used for good, not evil.
  • Martijn Willemsen
  • Problem: people don’t understand what is happening. We do – we built it. People have a simple model of Google – type keywords, get the best fit. They have the wrong mental model. Finding a solution is hard – controls, transparency, etc. It starts with transparency, so people understand what is happening.
  • Xavier Amatriain
  • New term: “popularity bubble” – the obvious alternative to personalization, with a bubble of its own.
  • Problem isn’t that Google is personalized – problem is that there is only one Google.
  • Neal Lathia
  • If recommenders were perfect, there would be a bubble.
  • Which woman dying is important? There are many wars, many women dying. Pariser seems to say there is an absolute truth we need to encode into algorithms.
  • ?
  • Is there opportunity or need to recommend common things to improve shared context? e.g. Alice can see A or B (equiv. from recsys perspective), Bob can see A or C – recommend A to introduce a little homogeneity?
  • Bamshad Mobasher
  • It’s not the job of every personalized app to broaden our tastes.
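Picking up AJ’s earlier remark that a balance policy can serve as a constraint on personalization, here is a minimal re-ranking sketch. It assumes each item carries a viewpoint label; the labels, quotas, and item names are hypothetical:

```python
def rerank_with_balance(ranked, category, n=10, min_per_category=2):
    """Fill the top-n list in personalized order, but reserve slots so that
    each viewpoint category gets at least min_per_category items (assuming
    enough items of each category exist in the ranking)."""
    quotas = {c: min_per_category for c in set(category.values())}
    chosen = []
    for item in ranked:
        c = category[item]
        unmet = sum(q for q in quotas.values() if q > 0)
        if quotas[c] > 0:
            chosen.append(item)      # fills a reserved quota slot
            quotas[c] -= 1
        elif len(chosen) + unmet < n:
            chosen.append(item)      # a free slot beyond the reserved ones
        if len(chosen) == n:
            break
    return chosen

# Hypothetical example: a one-sided personalized ranking, n=4, one slot
# guaranteed per viewpoint.
ranked = ["a1", "a2", "a3", "a4", "b1", "b2"]
category = {"a1": "left", "a2": "left", "a3": "left", "a4": "left",
            "b1": "right", "b2": "right"}
print(rerank_with_balance(ranked, category, n=4, min_per_category=1))
# ['a1', 'a2', 'a3', 'b1'] -- b1 is pulled up to satisfy the balance policy
```

Note that this doesn’t resolve the BBC rep’s question of what “impartial” means; it only shows that once a balance policy is chosen, enforcing it alongside personalization is mechanically straightforward.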
=================================
Closing comments
JK:
  • “Which woman dying” is always a problem; wars don’t kill as many as cars, even. Everyone has an agenda about what they want you to read.
  • Popularity and choice: it’s an issue of commercial bundling. Companies want to own the data & the profiles; unbundling is an interesting research topic, but hasn’t turned out to be commercially viable.
  • Re: emotional judgements – emotional judgements have a basis. People are concerned about machines because they don’t make moral/ethical judgements like people do. But people can’t process Internet-scale quantities of information.
  • Humans are irrational
AJ
  • Note that words can create mental bubbles
  • Words like “hiding”, “editing out”, “filter”, and “bubble” imply a lot more than that personalization is promoting some content, to some extent at the expense of other content
  • If you use these words, you are assuming that all these additional bad things are happening – probably without being aware of your assumptions
  • “Show me the bubble!”
PR
  • We may want to model/adjust the value of items based on who has seen them
  • Fear of “humans vs. machines” vs the fear of “missing things” – is there important stuff that I’m missing?
  • Fun things to finish with:
  • @UMAP: a song about bandits and exploration/exploitation. See the limerick challenge for RecSys 2012.
  • Joe has composed a song.
MDE (Michael Ekstrand) has paper notes for some missing points & will fill them in later.
————— end of TitanPad notes —————

To add written comments supplementing the oral discussion, add a comment as a reply to this entry.