Panel on The Filter Bubble

The phrase “filter bubble” was coined by author Eli Pariser in his best-selling book of the same title. The book is a critique of the proliferation of personalization technologies across the Internet and their potential to limit internet users’ exposure to diverse sources of information and varied viewpoints. The panel discussion at the 2011 conference explored this issue from the perspective of people working on personalization and recommendation technologies.

Panelists were:

  • Paul Resnick, University of Michigan
  • Joseph Konstan, University of Minnesota
  • Anthony Jameson, DFKI – German Research Center for Artificial Intelligence

The panelists addressed three questions:

  • Are there “filter bubbles?”
  • To what degree is personalized filtering a problem?
  • What should we as a community do to address the filter bubble issue?

To help with note-taking on the oral discussion by panelists and audience members, attendees were invited to edit a shared summary of the live discussion.

Slides used by the panelists (and some that weren’t used).

————– As of 11/29/11, the state of the TitanPad summary of the live discussion has been copied here, just in case TitanPad goes away at some point.

This document contains a summary of the oral discussion at the 2011 RecSys panel. Michael Ekstrand is serving as the primary note-taker. Feel free to fill in or correct anything he missed. Shortly after the conference, these notes will be merged into the main blog entry about the panel.
If you’d like to add written comments to supplement the discussion, please add them using the comment box on the blog entry, https://acmrecsys.wordpress.com/2011/10/25/panel-on-the-filter-bubble/.
Moderator: Bamshad Mobasher
Panelists:
  • [PR] Paul Resnick, University of Michigan
  • [JK] Joseph Konstan, University of Minnesota
  • [AJ] Anthony Jameson, DFKI – German Research Center for Artificial Intelligence
Welcome!
We’re getting ready to start. Panelists lined up.
We will be discussing the Filter Bubble, popularized by Eli Pariser – a critique of personalization as it is deployed across the web.
The panelists addressed three questions:
=================================
Q1: Are there “filter bubbles?”
[AJ]
  • Anyone in the personalization field is aware of the possibility of narrowing of experience. PR was one of the first to mention it in the literature (in the mid-1990s). It’s been textbook/handbook knowledge for 10 years.
  • To what extent does the problem really arise in practice?
  • (Slide:) Look at Google News (example from the “Filter Bubble” blog page) in personalized vs. incognito mode
  • Note that only a direct comparison between personalized vs. nonpersonalized modes can reliably reveal the effects of personalization.
  • Closer inspection shows that all of the “Top stories” are still visible in personalized mode, at most 1 click away
  • So we need a more precise way of assessing the seriousness of any narrowing.
  • (Slide:) Proposed RecSys “bubble” scale, with two questions
  • 1. To what extent is the user prevented from encountering the missing things (M) “omitted” by the personalization algorithm?
  • There is no M
  • Links to M are visible
  • The user is likely to see M elsewhere in the site even without looking for it
  • The user can find M if s/he thinks it might be there and actively looks for it.
  • There is no way for the user to find M
  • 2. In comparable cases, is M consistently the same type of information?
  • Words like “bubble”, “hiding”, and “editing out” imply extreme values on both questions at once; where are the examples of such cases?
  • (Slides:) AJ’s own Google search for “Egypt” shows no difference between personalized and non-personalized results (when not signed in to a Google account).
  • Subsequent search for “BP”: The only difference is the addition to the personalized results of a couple of results about BP in Egypt, due to the immediately preceding query about Egypt. All of the other results in the nonpersonalized list are still available in the personalized list, though some have been pushed a bit farther down (see the sketch below).
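To make this kind of direct comparison concrete, here is a minimal sketch of how one might quantify what personalization changes in a result list: which baseline items disappear outright, and how far shared items are pushed down. The result lists are hypothetical placeholders, not actual Google data or any tool used by the panelists.

```python
# Minimal sketch: compare a personalized result list against a
# non-personalized baseline. Item names are hypothetical placeholders.

def compare_rankings(baseline, personalized):
    """Return items missing from the personalized list, plus the rank
    displacement (positive = pushed down) of items present in both."""
    missing = [item for item in baseline if item not in personalized]
    displacement = {
        item: personalized.index(item) - baseline.index(item)
        for item in baseline
        if item in personalized
    }
    return missing, displacement

baseline = ["bp.com", "bp-wikipedia", "bp-news", "bp-careers"]
personalized = ["bp.com", "bp-in-egypt", "bp-wikipedia", "bp-news", "bp-careers"]

missing, displacement = compare_rankings(baseline, personalized)
print("Missing outright:", missing)  # [] – nothing was removed
print("Rank shifts:", displacement)  # shared items pushed down by at most 1
```

On this toy example the sketch reproduces AJ’s observation: nothing is filtered out, and the only effect of personalization is a modest downward shift of some items.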
[JK]
  • Yes. But let’s say more
  • They always have been.
  • Fundamental trade-off.
  • FDR – the post-1915 (“responsible”) press elided his wheelchair from public view.
  • Responsible press is a filter – is it better?
  • News programs on Big 3 networks in US (e.g. Cronkite era) – we had interesting American programming, but little world news.
  • Cable programming – lots of it. Now can watch al-Jazeera or Fox News for different perspectives on Egypt. People complained about lack of common ground – we aren’t all watching “the Evening News”.
  • Personalization is about delivering to a person what they want – how much do we trust their choices? Are they good for them?
[PR]
  • Yes, bubbles exist. Will wait to say more.
=================================
Q2: To what degree is personalized filtering a problem?
[JK]
  • First must ask: compared to what?
  • What is alternative to personalized filtering?
  • If Amazon just had a search, we’d have no bias but would have overload.
  • Two questions
  • What’s the alternative?
  • People are unlikely to explore all the possibilities themselves.
  • People make lists that others consult
  • Objective content model
  • Expert-engineered organization
  • How correlated are these bubbles, and how transparent are they?
  • Do we have multiple independent bubbles with different views, or one big brother bubble?
  • How much opportunity to see things outside the bubble?
  • Can I find interesting things outside?
  • Am I occasionally reminded of the outside?
[PR]
  • Think it’s a good thing when different people have different results.
  • Is it better for a cocktail party if, collectively, people have seen 17 different things or the same 10 things?
  • We should all boycott DuckDuckGo – it would be bad for society if we all used it.
  • Underlying concern: people will just get reinforcement of what they already believe, already agree with.
  • There is little evidence to support this. It has been studied under “selective exposure,” and the results are quite mixed.
  • Some results from PR’s research:
  • Everyone likes reinforcement
  • Some people prefer mix of reinforcement and challenge
  • Challenge is only mildly aversive on average
  • People viewing extreme sites spend more time on mainstream news as well
  • Not so concerned that this is a major problem
[AJ]
  • The relevant concept of “choice architecture” comes from Thaler and Sunstein’s book “Nudge: …” Consider the arrangement of food in a cafeteria. Any arrangement will influence what people eat, so there is no “neutral” arrangement. Which arrangement is best is a tricky question (optimize for profit, healthiness, variety, …?). And who should decide (the cafeteria manager, the visitors, both, …)? “Nudge” can mean either “push” or “hint”.
  • Personalization is the equivalent of having the cafeteria rearrange itself for each visitor, so it vastly enlarges the space of possible solutions.
=================================
Q3: What should we as a community do to address the filter bubble issue?
[PR]
4 directions we should explore:
  • 1. Take a longer-term view of accuracy, especially in terms of exploration/exploitation (see the bandit sketch after this list)
  • 2. Portfolio preferences – for collections, not individual items
  • 3. Tools for perspective-taking – see the world through other eyes (e.g. Living Voters Guide)
  • 4. When immediate preferences & aspirations conflict, build in nudges towards better selves. Don’t give them broccoli when they hate it, but give it to them when they aspire to eat it.
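PR’s first direction has a standard algorithmic reading: treat recommendation as an exploration/exploitation trade-off, occasionally recommending outside the user’s established profile instead of always exploiting it. A minimal epsilon-greedy sketch, with made-up items and scores rather than anything PR actually presented:

```python
import random

def recommend(estimated_scores, epsilon=0.1, rng=random):
    """Epsilon-greedy selection: with probability epsilon, pick a random
    item (explore, widening exposure); otherwise pick the item with the
    highest estimated score (exploit)."""
    if rng.random() < epsilon:
        return rng.choice(list(estimated_scores))
    return max(estimated_scores, key=estimated_scores.get)

# Hypothetical predicted ratings for three items.
scores = {"item_a": 4.6, "item_b": 4.1, "item_c": 2.3}
picks = [recommend(scores, epsilon=0.2) for _ in range(10000)]

# About 2/3 of the 20% exploratory picks land outside the top item,
# so roughly 13% of recommendations fall outside the user's usual fare.
print("Share of non-top picks:", sum(p != "item_a" for p in picks) / len(picks))
```

The point of the sketch is only the trade-off itself: a small, controlled exploration rate keeps long-run accuracy estimates honest while routinely exposing the user to items outside the learned profile.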
[AJ]
  • Giving people better control over personalization remains an important, under-studied research challenge.
  • (Slide:) A very early (2005) interface for personalized Google search allowed real-time control over amount of personalization with immediate visual feedback.
  • But even it didn’t help much to solve the tough aspects of the control problem:
  • Most people don’t want to take the time to exert fine-grained control
  • But it’s also hard to find one or two good long-run settings
  • It’s often hard to predict the consequences of a control adjustment
  • And important cumulative consequences may emerge only over the long term, making it hard to learn from trial and error
  • The good news: There’s been so little attention to this problem that there are plenty of unexplored directions for seeking improvement
[JK]
  • Why do we care?
  • There are people who are wrong (read: disagree with us) who are dangerous.
  • We worry about “dangerous” people (e.g. Bachmann or Thomas) isolating themselves, but not people we don’t consider “dangerous”
  • We don’t push for EE training for the Amish
  • There are many people very happy in their shell.
  • Allow people to show off their diversity of reading?
  • We’re doing many things right now. Many real problems are already on our research agenda.
  • Facebook-style issues are a problem with implicit ratings – a click and interest are not equivalent (sometimes reading the headline is enough)
  • Analyze portfolios for topic and diversity of point of view (see the diversity sketch after this list)
  • HCI issues – how to give awareness & control without overwhelming users? Easy to add levers.
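JK’s portfolio-analysis point is often operationalized as intra-list diversity: the average pairwise dissimilarity of the items in a user’s recommendation list. Below is a minimal sketch using Jaccard distance over made-up topic tags – an illustration of the metric, not something presented on the panel.

```python
from itertools import combinations

def jaccard_distance(tags_a, tags_b):
    """Dissimilarity of two tag sets: 1 - |intersection| / |union|."""
    return 1 - len(tags_a & tags_b) / len(tags_a | tags_b)

def intra_list_diversity(portfolio):
    """Average pairwise Jaccard distance over a list of tag sets,
    one set per recommended item. Higher = more diverse portfolio."""
    pairs = list(combinations(portfolio, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

# Hypothetical topic tags for two users' recommendation lists.
narrow = [{"politics", "us"}, {"politics", "us"}, {"politics", "elections"}]
broad = [{"politics", "us"}, {"sports", "cricket"}, {"science", "space"}]

print("Narrow portfolio:", round(intra_list_diversity(narrow), 2))  # 0.44
print("Broad portfolio:", round(intra_list_diversity(broad), 2))    # 1.0
```

Tracked over time, a score like this would show whether a user’s recommended portfolio is actually narrowing – exactly the kind of measurement the “is there a bubble?” question calls for.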
=================================
Questions
  • Peter Brusilovsky
  • Message from us to the outside – do we have a book saying personalization is great?
  • Transparency – show what is there beyond the recommended items.
  • [JK]: As a field, we need to have some idea of what it means to be coercive or hiding. The Facebook example is the most concerning – is the only way to see updates to go friend-by-friend?
  • [AJ]: This is the most extreme example we’ve seen on the RecSys “bubble” scale so far – the user can find the missing information (only) by actively looking for it.
  • Shilad Sen
  • We have different framings – giving users what they want, meeting needs of host/designer, etc. Can solving filter bubble be fit into that framing?
  • [JK] Framing is nice – what people want conflicts with paternalism.
  • [PR] Giving people what they truly want – long-term perspective including aspirations. “What they want” doesn’t have to imply short-term pleasure.
  • [AJ] What do we mean by expressions like “what people want”, “what they like”, or “their preferences”? (Talk at workshop on Friday.)
  • BBC rep
  • BBC royal charter demands transparency
  • Do personalization and bias have to be the same thing? Is it possible to have impartial personalization?
  • [AJ]: Depends on what “impartial” means. If balanced, balanced between what and what (cf. the cafeteria problem)?
  • (Remark later in discussion:) Though there is no absolute notion of balance, you can define a balance policy and use it as a constraint for personalization (see the sketch below).
  • [JK]: We can personalize without what most people consider bias: consider reading levels of literature – different versions of basically the same stories. Likewise, geographic personalization. But people may well consider 60% of a cricket lover’s news referencing cricket to be biased.
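One way to read the “balance policy as a constraint” remark: keep the personalized ranking, but guarantee a minimum number of items from each declared viewpoint in the top-N list. Below is a minimal re-ranking sketch with hypothetical item ids and viewpoint labels – not a BBC policy or anything shown on the panel.

```python
def rerank_with_balance(ranked_items, viewpoint_of, n=5, quota=1):
    """Re-rank so the top-n list contains at least `quota` items per
    viewpoint, otherwise preferring the personalized order.
    ranked_items: item ids, best first; viewpoint_of: id -> label."""
    chosen = []
    # First reserve the best-ranked items needed to satisfy each quota.
    for view in sorted(set(viewpoint_of.values())):
        hits = [i for i in ranked_items if viewpoint_of[i] == view][:quota]
        chosen.extend(hits)
    # Then fill the remaining slots in personalized order.
    for item in ranked_items:
        if len(chosen) >= n:
            break
        if item not in chosen:
            chosen.append(item)
    # Present the chosen items in their original personalized order.
    return sorted(chosen[:n], key=ranked_items.index)

# Hypothetical ranking where one viewpoint dominates the top slots.
ranking = ["a1", "a2", "a3", "a4", "a5", "b1", "b2"]
views = {i: ("view_x" if i.startswith("a") else "view_y") for i in ranking}
print(rerank_with_balance(ranking, views))
# ['a1', 'a2', 'a3', 'a4', 'b1'] – b1 is guaranteed a top-5 slot
```

The constraint is explicit and auditable, which is what distinguishes a declared balance policy from an opaque personalized ranking.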
  • AJ: is what we call “filtering” really filtering? Let’s ban the term unless it’s actually filtering.
  • PR: votes no – if it’s hard enough to get to the stuff it’s effectively not there.
  • JK: huge difference between promoting a few things to the top vs. demoting so far you won’t see them. Demoting below the top 100 may well be filtering.
  • AJ: There is a difference that goes beyond the question of how hard it is to find the nonpromoted items: “Filtering” suggests that particular types of item (e.g., news items about foreign countries) are systematically being “edited out”. When instead the nonpromoted items are being pushed down to a less prominent location, there is no systematic bias against particular types of content.
  • Pearl Pu
  • Issue is adoption, not filtering
  • In early adoption, ease of use is paramount
  • As adoption increases, shift to control
  • Amazon’s “fix the recommendation” capability is a useful direction
  • Bryce
  • Couple of emotional appeals
  • Are we comfortable with machines making these decisions?
  • Gets upset when he hears about things from a friend that he doesn’t already know about.
  • Similar issue to high-frequency trading debate
  • Gunnar Shroeder
  • Filter bubble is there in some applications
  • Returning to Germany from Canada, couldn’t see Canadian content as easily.
  • [PR]: Perspective-taking
  • We need to make sure our tools are used for good, not evil.
  • Martijn Willemsen
  • Problem: people don’t understand what is happening. We do – we built it. People have a simple model of Google – type keywords, get the best fit. They have the wrong mental model. Finding a solution is hard – controls, transparency, etc. It starts with transparency so people understand what is happening.
  • Xavier Amatriain
  • New term: the “popularity bubble” – popularity ranking being the obvious alternative to personalization.
  • Problem isn’t that Google is personalized – problem is that there is only one Google.
  • Neal Lathia
  • If recommenders were perfect, there would be a bubble.
  • Which woman dying is important? There are many wars, many women dying. Pariser seems to say there is an absolute truth we need to encode into algorithms.
  • ?
  • Is there opportunity or need to recommend common things to improve shared context? e.g. Alice can see A or B (equiv. from recsys perspective), Bob can see A or C – recommend A to introduce a little homogeneity?
  • Bamshad Mobasher
  • Not job of every personalized app to broaden our tastes.
=================================
Closing comments
JK:
  • “Which woman dying” always a problem; wars don’t kill as many as cars even. Everyone has an agenda about what they want you to read.
  • Popularity and choice: it’s an issue of commercial bundling. Companies want to own the data & the profiles; unbundling is an interesting research topic, but it hasn’t turned out to be commercially viable
  • Re: emotional judgements – emotional judgements have a basis. People are concerned about machines because they don’t make moral/ethical judgements like people do. But people can’t process Internet-scale quantities of information.
  • Humans are irrational
AJ
  • Note that words can create mental bubbles
  • Words like “hiding”, “editing out”, “filter”, and “bubble” imply a lot more than that personalization is promoting some content, to some extent at the expense of other content
  • If you use these words, you are assuming that all these additional bad things are happening – probably without being aware of your assumptions
  • “Show me the bubble!”
PR
  • We may want to model/adjust the value of items based on who has seen them
  • Fear of “humans vs. machines” vs the fear of “missing things” – is there important stuff that I’m missing?
  • Fun things to finish with:
  • @UMAP: a song about bandits and exploration/exploitation. See the limerick challenge for RecSys 2012.
  • Joe has composed a song.
MDE has paper notes for some missing points & will fill in later.
 ————–end of TitanPad notes———————

 

To add written comments supplementing the oral discussion, leave a reply on this entry.

Comments on “Panel on The Filter Bubble”

  1. As far as I understand, the problem seems clear.
    We human beings have two evil tendencies:
    1) We see only what we want to see, and
    2) We want to control what others can see.
    Personalization technologies amplify these tendencies.
    In this sense, the filter bubble exists and is a problem.

    What we should do is not so clear.
    Current personalization technologies are not advanced enough to cause serious problems
    unless they are used politically. The pros of personalization outweigh the cons.
    However, many technologies can become evil once they pass a certain point.
    Hence, what we should do may be:
    1) Announce and educate about the existence of personalization and
    its pros and cons,
    2) Prevent personalization technologies from being monopolized, and
    3) Provide technologies to control the degree of personalization flexibly.

  2. Isn’t unpersonalized search also a filter? Most people don’t browse past the first few pages, so really most of the web is not visible even without personalization.

  3. Eli Pariser likes diversity in his view, and I respect that… But is taking on the role of broadening the horizons of other people not a form of arrogance – maybe the user doesn’t want diversity? Like Joe said, “person” is the first part of the word personalization, so we need to develop interfaces that allow users to get what they want – not just items that we think they want, but also diversity if they want it, or indeed to switch off the personalization and drown in content if that is what they want!

  4. Is part of the problem the fact that we use the filter metaphor rather than a magnifying-glass metaphor? Filtering has a connotation that you throw away one of two parts. Maybe we need to refer to what we do as a magnifier: we make some things more visible but provide aggregations of the less “relevant” stuff so that the user can dig into it if something catches their eye.

  5. As I was watching Eli Pariser’s TED Talk, I was thinking that it might be a good time to write a similar book, “Search Bubble”…. It could be a real hit.

    Eli is longing for the good old world where you could access the whole Web. And now, we are filtered out. One person’s search for Egypt brings results on unrest and public movements. Another person’s search for Egypt brings results on vacations in Egypt. Nothing on unrest. Bad.

    It reminds me of the good old days of the Web before search. It was not until 1994, when I first met Mauldin and saw his Lycos search at WWW’94, that I (and everyone else) could search. But we could navigate (the Web was all about links) and thus find a lot of unusual and interesting content. And that was good, I guess. Now when we search on Google/Bing/Yahoo we get filtered results (haven’t you noticed?). Let’s take the example from Eli’s talk and assume that two users searched for Egypt on March 11, 2011. Well, forget about the small problem that the second guy was not able to learn about the unrest in Egypt. There is a much larger issue. When both poor guys were searching for Egypt, they only got results about Egypt. But there is much more on the Web than Egypt. When you search for Egypt, you can easily miss more important things, like the tsunami in Japan that happened right on March 11. In the good old times, with no search engines, whoever was looking for information on Egypt had to navigate and, by the nature of navigation, had a really good chance of finding out about something important – like Japan. Now both are searching for Egypt and all they get is something about Egypt. Nothing about Japan. Bad.

    Search engines keep us in a bubble. You do not see the world, you only see what’s relevant to the search results.

    So, anyone to do the “Search Bubble” sequel?

    PB

  6. After watching the video I feel really abused. Are we abused by algorithms we designed thinking they would make our world better?

  7. This guy is full of baloney, confusing censoring and filtering information with personalizing information… Is it wrong if you go to your regular coffee shop and the barista asks, “Will you have the usual, sir?”

  8. I think we need control and transparency in recommender systems. That may overcome the filter bubble.

    Also, the reason why we haven’t been successful in countering Eli’s arguments is that we don’t do user testing!! 🙂

  9. Regardless of the veracity of Pariser’s claims, the discussion has already had an impact on the public perception of personalization. The search engine DuckDuckGo.com, for example, is promoting its service with a lack of personalization as a selling point: http://dontbubble.us/

  10. We also encourage the audience to contribute additional examples or cases of “filter bubbles” and their impact (both negative and positive) on users.
