mfioretti: echo chamber

13 bookmark(s), page 1 of 2, sorted by date ↓

  1. As the visualization below shows, Michele, the bot pretending to be a fascist, enjoyed a radically different news feed experience from the others:
    [Chart: total number of posts seen by each bot (the wider the bar, the more posts), grouped by how many times the posts were repeated (the higher up the bar, the more times).]

    Fash-bot Michele is shown a much smaller variety of posts, repeated way more often than normal — it saw some posts as often as 29 times in the 20 days represented in the data set.
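
    A minimal sketch of how such a repetition profile could be computed from an impression log. The file name and column layout ('impressions.csv' with 'bot' and 'post_id' columns) are assumptions for illustration, not the tracking.exposed data format:

      import pandas as pd

      # Hypothetical impression log: one row per sighting of a post by a bot.
      # Assumed columns: bot (account name), post_id.
      log = pd.read_csv("impressions.csv")

      # How many times each bot saw each individual post.
      repeats = log.groupby(["bot", "post_id"]).size().rename("times_seen")

      # Per bot: how many distinct posts were shown once, twice, ... n times
      # (the shape of the chart described above).
      distribution = (repeats.reset_index()
                             .groupby(["bot", "times_seen"])["post_id"]
                             .nunique())
      print(distribution)

      # The posts pushed hardest at the fascist bot (up to 29 repetitions).
      print(repeats.loc["Michele"].sort_values(ascending=False).head(3))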

    I mostly agree with studies such as Political polarization? Don’t blame the web, especially because the opposite belief is a kind of techno-determinism I feel doesn’t take into account a lot of political complexity. But the data above displays a frightening situation: the Michele bot has been segregated by the algorithm, and only receives content from a very narrow political area. Sure, let’s make fun of fascists because they see mostly pictures or because they are ill-informed, but these people will vote in 31 hours. Not cool.

    If you were curious what posts the Facebook algorithm deemed so essential that they had to be shown 29 times each (once a day or more, on average — each), here they are, all three of them. The third is peculiar, with its message that “mass media does not give us a platform, they never even mention our name, but people still declare they will vote for us. Mass media is a scam, spread the word”.
    https://medium.com/@trackingexposed/t...es-fascists-from-reality-d36739b0758b
  2. Who is doing the targeting?

    Albright: It really depends on the platform and the news event. Just the extensiveness of the far right around the election: I can’t talk about that right this second, but I can say that, very recently, what I’ve tended to see from a linking perspective and a network perspective is that the left, and even to some degree center-left news organizations and journalists, are really kind of isolated in their own bubble, whereas the right have very much populated most of the social media resources and use YouTube extensively. This study I did over the weekend shows the depth of the content and how much reach they have. I mean, they’re everywhere; it’s almost ubiquitous. They’re ambient in the media information ecosystem. It’s really interesting from a polarization standpoint as well, because self-identified liberals and self-identified conservatives have different patterns in unfriending people and in not friending people who have the opposite of their ideology.

    From those initial maps of the ad tech and hyperlink ecosystem of the election-related partisan news realm, I dove into every platform. For example, I did a huge study on YouTube last year. It led me to almost 80,000 fake videos that were being auto-scripted and batch-uploaded to YouTube. They were all keyword-stuffed. Very few of them had even a small number of views, so these were really about impact: they were gaming the system. My guess is that they were meant to skew autocomplete or search suggestions in YouTube. It couldn’t have been about monetization, because the videos had very few views and the sheer volume wouldn’t have made sense with YouTube’s business model.

    Someone had set up a script that detected social signals off of Twitter. It would go out and scrape related news articles, pull the text back in, and read it out in a computer voice, a Siri-type voice. It would pull images from Google Images, create a slideshow, package that up and wrap it, upload it to YouTube, hashtag it and load it with keywords. There were so many of these and they were going up so fast that as I was pulling data from the YouTube API dozens more would go up.
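
    A rough sketch of the kind of collection described here, pulling recent uploads from the YouTube Data API v3 and flagging channels that batch-upload. The query term, paging depth and API key are illustrative assumptions:

      import requests
      from collections import Counter

      API_KEY = "YOUR_KEY"  # hypothetical YouTube Data API v3 key
      URL = "https://www.googleapis.com/youtube/v3/search"

      params = {"part": "snippet", "q": "breaking news", "type": "video",
                "order": "date", "maxResults": 50, "key": API_KEY}
      uploads = Counter()

      # Page through the newest results and count uploads per channel;
      # auto-scripted, keyword-stuffed videos show up as a single channel
      # posting dozens of near-identical clips in a short window.
      for _ in range(10):
          data = requests.get(URL, params=params).json()
          for item in data.get("items", []):
              uploads[item["snippet"]["channelTitle"]] += 1
          token = data.get("nextPageToken")
          if not token:
              break
          params["pageToken"] = token

      for channel, n in uploads.most_common(10):
          print(f"{n:4d}  {channel}")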

    I worked with The Washington Post on a project where I dug into Twitter and got, for the last week leading up to the election, a more or less complete set of Twitter data for a group of hashtags. I found what were arguably the top five most influential bots through that last week, and we found that the top one was not a completely automated account, it was a person.
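
    One hedged way to approximate “most influential” in such a dump is retweet reach. The sketch below assumes a file of classic Twitter v1.1 JSON objects, one per line; the file name is made up:

      import json
      from collections import Counter

      reach = Counter()  # retweets earned per original author

      # tweets.jsonl: hypothetical dump of hashtag-matching tweets.
      with open("tweets.jsonl") as f:
          for line in f:
              tweet = json.loads(line)
              rt = tweet.get("retweeted_status")
              if rt:  # credit the account whose post was amplified
                  reach[rt["user"]["screen_name"]] += 1

      for account, n in reach.most_common(5):
          print(f"{n:6d}  @{account}")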

    The Washington Post’s Craig Timberg looked around, actually found this person, and contacted him, and he agreed to an interview at his house. It was just unbelievable. It turns out that this guy was almost 70, almost blind.

    From Timberg’s piece: “Sobieski’s two accounts…tweet more than 1,000 times a day using ‘schedulers’ that work through stacks of his own pre-written posts in repetitive loops. With retweets and other forms of sharing, these posts reach the feeds of millions of other accounts, including those of such conservative luminaries as Fox News’s Sean Hannity, GOP strategist Karl Rove and Sen. Ted Cruz (R-Tex.), according to researcher Jonathan Albright…’Life isn’t fair,’ Sobieski said with a smile. ‘Twitter in a way is like a meritocracy. You rise to the level of your ability….People who succeed are just the people who work hard.'”
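
    The mechanic Timberg describes is trivially simple. A toy sketch of such a scheduler, with a print standing in for the actual posting call:

      import itertools
      import time

      posts = ["pre-written post 1", "pre-written post 2", "pre-written post 3"]

      def publish(text):
          print(text)  # stand-in for a real Twitter API call

      # Work through a fixed stack of pre-written posts in a repetitive
      # loop: one post every ~86 seconds comes to about 1,000 a day.
      for text in itertools.cycle(posts):
          publish(text)
          time.sleep(86400 / 1000)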

    The most dangerous accounts, the most influential accounts, are often accounts that are supplemented with human input, and also a human identity that’s very strong and possibly already established before the elections come in.


    I mean, I do hold that it’s not okay to come in and try to influence someone’s election; when I look at these YouTube videos, I think: Someone has to be funding this. In the case of the YouTube research, though, I looked at this more from a systems/politics perspective.

    We have a problem that’s greater than the one-off abuse of technologies to manipulate elections. This thing is parasitic. It’s growing in size. The last week and a half are some of the worst things I’ve ever seen, just in terms of the trending. YouTube is having to manually go in and take these videos out. YouTube’s search suggestions, especially in the context of fact-checking, are completely counter-productive. I think Russia is a side effect of our larger problems.

    Why is it getting worse?

    Albright: There are more people online, they’re spending more time online, there’s more content, people are becoming more polarized, algorithms are getting better, the amount of data that platforms have is increasing over time.

    I think one of the biggest things that’s missing from political science research is that it usually doesn’t consider the amount of time that people spend online. Between the 2012 election and the 2016 election, smartphone use went up by more than 25 percent. Many people spend all of their waking time somehow connected.

    This is where psychology really needs to come in. There’s been very little psychology work done looking at this from an engagement perspective, looking at the effect of seeing things in the News Feed but not clicking out. Very few people actually click out of Facebook. We really need social psychology, we really need humanities work to come in and pick up the really important pieces. What are the effects of someone seeing vile or conspiracy news headlines in their News Feed from their friends all day?

    Owen: This is so depressing.
    http://www.niemanlab.org/2018/02/news...-what-to-do-as-things-crash-around-us
  3. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish the first phase of their work and disappear completely from view, we have a brief opportunity to identify and catalogue the processes shaping our drift towards a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.



    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, a sense that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
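
    A toy illustration of the underlying mechanism: a 1x1 tracking pixel that third-party sites embed, letting its host tie page visits to a persistent cookie. Flask is used only for brevity; the route, cookie name and pixel bytes are made up:

      import uuid
      from flask import Flask, request, make_response

      app = Flask(__name__)
      PIXEL = b"GIF89a..."  # placeholder; a real 1x1 GIF would go here

      @app.route("/px")
      def pixel():
          # Reuse the browser's cookie if present, otherwise mint one.
          uid = request.cookies.get("uid") or uuid.uuid4().hex
          # The Referer header names the page that embedded the pixel, so
          # every load yields a (visitor, site) record for the profile.
          print(uid, request.headers.get("Referer"))
          resp = make_response(PIXEL)
          resp.headers["Content-Type"] = "image/gif"
          resp.set_cookie("uid", uid, max_age=365 * 24 * 3600)
          return resp

      if __name__ == "__main__":
          app.run()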

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised. Yet, given the presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
    https://meanjin.com.au/essays/the-last-days-of-reality
  4. There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

    Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
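
    That paper describes a two-stage design: a candidate-generation network narrows billions of videos to a few hundred, and a ranking network orders those by predicted watch time. A heavily simplified numpy sketch of that shape, with random stand-in embeddings and a linear scorer in place of the deep networks:

      import numpy as np

      rng = np.random.default_rng(0)
      n_videos, dim = 100_000, 32            # toy corpus; the real one is billions
      video_emb = rng.normal(size=(n_videos, dim))
      rank_w = rng.normal(size=dim)          # stand-in ranking weights

      def up_next(user_emb, k_candidates=200, k_final=20):
          # Stage 1, candidate generation: nearest neighbours of the user
          # vector in embedding space.
          scores = video_emb @ user_emb
          candidates = np.argpartition(scores, -k_candidates)[-k_candidates:]
          # Stage 2, ranking: rescore the candidates by predicted watch time.
          watch_time = video_emb[candidates] @ rank_w
          return candidates[np.argsort(watch_time)[::-1]][:k_final]

      print(up_next(rng.normal(size=dim)))   # the 20 "up next" video ids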

    “The algorithm does not appear to be optimising for what is truthful, or balanced, or healthy for democracy,” says Guillaume Chaslot, an ex-Google engineer.

    Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

    Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
    https://www.theguardian.com/technolog...how-youtubes-algorithm-distorts-truth
  5. Facebook has turned into a toxic commodity since Mr Trump was elected. Big Tech is the new big tobacco in Washington. It is not a question of whether the regulatory backlash will come, but when and how.

    Mr Zuckerberg bears responsibility for this. Having denied Facebook’s “filter bubble” played any role in Mr Trump’s victory — or Russia’s part in helping clinch it — Mr Zuckerberg is the primary target of the Democratic backlash. He is now asking America to believe that he can turn Facebook’s news feed from an echo chamber into a public square. Revenue growth is no longer the priority. “None of that matters if our services are used in a way that doesn’t bring people closer together,” he says.

    How will Mr Zuckerberg arrange this Kumbaya conversion? By boosting the community ties that only Facebook can offer. Readers will forgive me if I take another lie-down. Mr Zuckerberg suffers from two delusions common to America’s new economy elites. They think they are nice people — indeed, most of them are. Mr Zuckerberg seems to be, too. But they tend to cloak their self-interest in righteous language. Talking about values has the collateral benefit of avoiding talking about wealth. If the rich are giving their money away to good causes, such as inner city schools and research into diseases, we should not dwell on taxes. Mr Zuckerberg is not funding any private wars in Africa. He is a good person. The fact that his company pays barely any tax is therefore irrelevant.

    The second liberal delusion is to believe they have a truer grasp of people’s interests than voters themselves. In some cases that might be true. It is hard to see how abolishing health subsidies will help people who live in “flyover” America. But here is the crux. It does not matter how many times Mr Zuckerberg invokes the magic of online communities. They cannot substitute for the real ones that have gone missing. Bowling online together is no cure for bowling offline alone.

    The next time Mr Zuckerberg wants to showcase Facebook, he should invest some of his money in an actual place. It should be far away from any of America’s booming cities — say Youngstown, Ohio. For the price of a couple of days’ Facebook revenues, he could train thousands of people. He might even fund a newspaper to make up for social media’s destruction of local journalism. The effect could be electrifying. Such an example would bring a couple more benefits. First, it would demonstrate that Mr Zuckerberg can listen, rather than pretending to. Second, people will want to drop round to his place for dinner.
    https://medium.com/financial-times/the-zuckerberg-delusion-5d427c5d699a
    The filter bubble shows the internet shifting from a tool of global connectivity to one of individual disconnect: personal opinion becomes fossilised while public discourse withers away. Without meaningful public discourse, the internet exposes us to competing opinions only through (often anonymous) trolling. This is dangerous. With little to no space for productive debate, ideological conflicts are carried out institutionally, as evinced by the onslaught of fake-news accusations that characterised the final American presidential debate.

    As a product of the neoliberal project, the filter bubble caters to our perceived demands for constant personalised stimulation and to the commodification of the digital experience. We find ourselves further removed from neighbours who occupy distant ideological worlds; we cease to understand each other as we increasingly lack basic exposure to each other. When we live in bubbles, we forget how to engage and disagree in a civil manner. This situation has the potential to normalise extreme polarity and reactionary populism. Left without public forums to negotiate competing worldviews and engage with each other, we should not be surprised if ideological conflicts escalate in increasingly violent ways.
    https://www.opendemocracy.net/digital...erm=0_717bc5d86d-5aca16cdcc-407399415
  7. Overall, our results showed that, while real-world social networks were positively associated with overall well-being, the use of Facebook was negatively associated with overall well-being. These results were particularly strong for mental health; most measures of Facebook use in one year predicted a decrease in mental health in a later year. We found consistently that both liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.
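
    The design is a lagged panel: year-t Facebook activity predicting year-t+1 outcomes, controlling for the baseline. A sketch of that shape with statsmodels; the column names and file are invented, and the study’s actual models are richer:

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical two-wave panel: one row per respondent.
      df = pd.read_csv("waves.csv")

      # Do likes and link clicks in year t predict worse mental health in
      # year t+1, net of the year-t baseline and real-world network size?
      model = smf.ols(
          "mental_health_t1 ~ likes_t0 + clicks_t0"
          " + real_world_ties_t0 + mental_health_t0",
          data=df,
      ).fit()
      print(model.summary())
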
    https://hbr.org/2017/04/a-new-more-ri...e-you-use-facebook-the-worse-you-feel
    An ever-wider range of suppliers. Repubblica already covered this practice in 2013, when Facebook launched Partner Categories, the service through which Mark Zuckerberg made data collected by third-party companies available to his advertisers, so as to guarantee the right ad reaches the right person. While the system was initially active only in the United States, today it is also available in France, Germany and the United Kingdom, and the suppliers feeding the blue network have multiplied to include Acxiom, Experian, Greater Data, Epsilon, Quantium, TransUnion, WPP and Oracle Data Cloud. The last of these plays a leading role in the sector thanks to its 2014 acquisitions of BlueKai, a cloud-based platform that lets companies personalise online, offline and mobile marketing campaigns, and of Datalogix, which aggregates and supplies information on over two trillion dollars of consumer spending from 1,500 commercial partners, including Visa, Mastercard and TiVo.

    Hard to opt out. To understand exactly what information the Menlo Park social network buys from these companies, the investigative journalism site ProPublica downloaded a list of 29,000 categories that Facebook offers to advertisers. The finding: of those 29,000, 600 derive from data supplied by third parties, mostly financial information. None of these categories, however, appears among the “Ad Preferences”, the page Facebook gives us to see which information influences the ads shown in our feed and to control them. “They are not being honest,” commented Jeffrey Chester, executive director of the Center for Digital Democracy. “People should be able to access this package.” ProPublica’s journalists also showed how hard it is to escape this form of profiling: according to their investigation, stopping Oracle Data Cloud from supplying our data to Facebook requires US consumers to mail a written request, together with a copy of a government-issued ID, to Oracle’s privacy officer.
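
    The check ProPublica ran reduces to a filter over that category dump; the CSV and its 'source' column are assumptions about the shape of the published list:

      import pandas as pd

      # Hypothetical export of the ~29,000 targeting categories.
      cats = pd.read_csv("facebook_ad_categories.csv")
      third_party = cats[cats["source"].str.contains("partner", case=False, na=False)]
      print(len(third_party), "of", len(cats), "categories come from outside data brokers")
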
    http://www.repubblica.it/tecnologia/s...k_della_nostra_vita_offline-155458727
  9. Can outside sources verify what God believes to be holy? Can anyone verify God’s existence? Can anyone think of more hypothetical questions like this to underscore the point?

    As religious leaders expressed their concerns to The Literalist, The Literalist in turn became increasingly worried about Facebook deciding what is “fake” and “real” news. So The Literalist sent a short note to Facebook headquarters reading, “Now, don’t take this literally, but The Literalist encourages you to let users use reason when it comes to fake news. Satire included.”
    http://religionnews.com/2016/11/29/fa...ws-crackdown-threatens-religious-news
  10. Now what can be done? Certainly the explanation for Trump’s rise cannot be reduced to a technology- or media-centered argument. The phenomenon is rooted in more than that; media or technology cannot create; they can merely twist, divert, or disrupt. Without the growing inequality, shrinking middle class, jobs threatened by globalization, etc. there would be no Trump or Berlusconi or Brexit. But we need to stop thinking that any evolution of technology is natural and inevitable and therefore good. For one thing, we need more text than videos in order to remain rational animals. Typography, as Postman describes, is in essence much more capable of communicating complex messages that provoke thinking. This means we should write and read more, link more often, and watch less television and fewer videos—and spend less time on Facebook, Instagram, and YouTube.
    https://www.technologyreview.com/s/60...iscourse-because-its-too-much-like-tv
