mfioretti: fake news*



  1. Second question: Cambridge Analytica cites as its own success stories the consulting role it played in Brexit and in the 2016 US presidential election. Does this mean that data-driven computational propaganda succeeds mainly with populist movements? Here we enter the realm of political speculation, but the question is worth reasoning through. If it were true that populism is more receptive to simple, targeted communication, it would mean that the mind of a conservative voter differs from the mind of a liberal voter. The person who raised this question is the linguist George Lakoff, who in his book “Moral Politics” hypothesized that conservatives follow a strict family model, in which values are founded on self-discipline and hard work, while liberals follow a participatory family model, in which values are based on caring for one another.
  2. Falsehoods almost always beat out the truth on Twitter, penetrating further, faster, and deeper into the social network than accurate information.

    And blame for this problem cannot be laid with our robotic brethren. From 2006 to 2016, Twitter bots amplified true stories as much as they amplified false ones, the study found. Fake news prospers, the authors write, “because humans, not robots, are more likely to spread it.”
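
    The study’s headline finding is structural: it measures each rumor “cascade” on Twitter by its size (how many accounts it reached) and its depth (how many retweet hops it traveled), and false stories score higher on both. As a rough illustration only, and not the study’s own code, here is a minimal Python sketch of those two metrics; the account names and edges below are hypothetical.

```python
# Toy retweet cascade: each entry maps a retweeter to the account
# they retweeted from. "alice" is the original poster (the root).
retweeted_from = {
    "bob": "alice",
    "carol": "alice",
    "dave": "bob",    # dave retweeted bob's retweet: 2 hops from the root
    "erin": "dave",   # 3 hops from the root
}

def cascade_size(edges):
    """How many accounts the story reached, including the original poster."""
    return len(set(edges) | set(edges.values()))

def cascade_depth(edges):
    """Longest chain of retweet hops back to the original poster."""
    def hops_to_root(user):
        hops = 0
        while user in edges:   # walk parent links toward the root
            user = edges[user]
            hops += 1
        return hops
    return max(hops_to_root(user) for user in edges)

print(cascade_size(retweeted_from))   # 5 accounts
print(cascade_depth(retweeted_from))  # 3 hops: erin -> dave -> bob -> alice
```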

    Political scientists and social-media researchers largely praised the study, saying it gave the broadest and most rigorous look so far into the scale of the fake-news problem on social networks, though some disputed its findings about bots and questioned its definition of news.

    “This is a really interesting and impressive study, and the results around how demonstrably untrue assertions spread faster and wider than demonstrably true ones do, within the sample, seem very robust, consistent, and well supported,” said Rasmus Kleis Nielsen, a professor of political communication at the University of Oxford, in an email.

    “I think it’s very careful, important work,” Brendan Nyhan, a professor of government at Dartmouth College, told me. “It’s excellent research of the sort that we need more of.”
  3. anything designed to maximize engagement maximizes polarization.

    What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.
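
    The mechanism being described is a simple optimization target: recommend whatever maximizes expected engagement, with a feedback loop that drifts toward ever more gripping content. Below is a deliberately crude sketch of that dynamic; the names and numbers are hypothetical, not YouTube’s actual algorithm.

```python
# Candidate videos with a predicted engagement score (hypothetical values).
videos = {
    "mild take":    {"predicted_minutes": 2.0},
    "strong take":  {"predicted_minutes": 4.5},
    "extreme take": {"predicted_minutes": 7.0},
}

def recommend(candidates):
    # Greedy objective: only expected watch time enters the score;
    # accuracy and harm are invisible to it.
    return max(candidates, key=lambda name: candidates[name]["predicted_minutes"])

history = []
for _ in range(3):
    choice = recommend(videos)
    history.append(choice)
    # Feedback loop: each watch nudges the score for similar content upward,
    # pulling the next recommendation further in the same direction.
    videos[choice]["predicted_minutes"] *= 1.1

print(history)  # ['extreme take', 'extreme take', 'extreme take']
```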

    Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

    In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides.
  4. Following the publication of the document drawn up by mutual agreement between INE and Facebook, there is no doubt that the company behind the well-known social platform is under no formal obligation to fight the so-called “fake news” that has been so hotly debated in recent days, and shows no sign of intending to do so.

    We should also remember that not far from Mexico, in Honduras, Congress is debating a bill that seeks to curb the spread of false news, including around elections, through methods that are far from transparent.
  5. Who is doing the targeting?

    Albright: It really depends on the platform and the news event. Just the extensiveness of the far right around the election: I can’t talk about that right this second, but I can say that, very recently, what I’ve tended to see from a linking perspective and a network perspective is that the left, and even to some degree center-left news organizations and journalists, are really kind of isolated in their own bubble, whereas the right have very much populated most of the social media resources and use YouTube extensively. This study I did over the weekend shows the depth of the content and how much reach they have. I mean, they’re everywhere; it’s almost ubiquitous. They’re ambient in the media information ecosystem. It’s really interesting from a polarization standpoint as well, because self-identified liberals and self-identified conservatives have different patterns in unfriending people and in not friending people who hold the opposing ideology.

    From those initial maps of the ad tech and hyperlink ecosystem of the election-related partisan news realm, I dove into every platform. For example, I did a huge study on YouTube last year. It led me to almost 80,000 fake videos that were being auto-scripted and batch-uploaded to YouTube. They were all keyword-stuffed. Very few of them had even a small number of views, so what these really were was about impact — these were a gaming system. My guess is that they were meant to skew autocomplete or search suggestions in YouTube. It couldn’t have been about monetization because the videos had very few views; the sheer volume wouldn’t have made sense with YouTube’s business model.

    Someone had set up a script that detected social signals off of Twitter. It would go out and scrape related news articles, pull the text back in, and read it out in a computer voice, a Siri-type voice. It would pull images from Google Images, create a slideshow, package that up and wrap it, upload it to YouTube, hashtag it and load it with keywords. There were so many of these and they were going up so fast that as I was pulling data from the YouTube API dozens more would go up.
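
    Taken together, that is a five-stage batch pipeline. Here is a skeletal sketch of the flow Albright describes; every function is a hypothetical stub (the actual scripts were never published), with placeholders where a real pipeline would call Twitter, a scraper, a text-to-speech engine, and the YouTube upload API.

```python
def detect_trending_signals():
    """Stage 1: watch Twitter for hashtags or keywords gaining traction."""
    return ["#example_trending_tag"]          # placeholder

def scrape_related_articles(signal):
    """Stage 2: fetch the text of news articles matching the signal."""
    return [f"Article text about {signal}"]   # placeholder

def synthesize_narration(article_text):
    """Stage 3: read the article out in a computer voice (text-to-speech)."""
    return b"<audio bytes>"                   # placeholder

def build_slideshow(signal):
    """Stage 4: pull matching images and assemble them into a video track."""
    return b"<video bytes>"                   # placeholder

def upload_keyword_stuffed(video, audio, signal):
    """Stage 5: package and upload, stuffing title and tags with keywords,
    aimed at skewing search suggestions rather than attracting real views."""
    print(f"uploaded video tagged with {signal} plus dozens of related keywords")

def run_batch():
    for signal in detect_trending_signals():
        for article in scrape_related_articles(signal):
            audio = synthesize_narration(article)
            video = build_slideshow(signal)
            upload_keyword_stuffed(video, audio, signal)

run_batch()
```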

    I worked with The Washington Post on a project where I dug into Twitter and got, for the last week leading up to the election, a more or less complete set of Twitter data for a group of hashtags. I found what were arguably the top five most influential bots through that last week, and we found that the top one was not a completely automated account; it was a person.
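
    The underlying analysis is straightforward: collect tweets for a set of hashtags, then rank accounts by how much sharing flows through them. A minimal sketch under assumed data follows; the field names and account names are hypothetical, not the actual Washington Post dataset.

```python
from collections import Counter

# Each record: who posted, and (if it was a retweet) whose content it was.
tweets = [
    {"author": "account_a", "retweet_of": None},
    {"author": "account_b", "retweet_of": "account_a"},
    {"author": "account_c", "retweet_of": "account_a"},
    {"author": "account_d", "retweet_of": "account_b"},
]

# Influence proxy: how often each account's content was retweeted by others.
reach = Counter(t["retweet_of"] for t in tweets if t["retweet_of"])
print(reach.most_common(5))   # [('account_a', 2), ('account_b', 1)]
```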

    The Washington Post’s Craig Timberg looked around and actually found this person and contacted him and he agreed to an interview at his house. It was just unbelievable. It turns out that this guy was almost 70, almost blind.

    From Timberg’s piece: “Sobieski’s two accounts…tweet more than 1,000 times a day using ‘schedulers’ that work through stacks of his own pre-written posts in repetitive loops. With retweets and other forms of sharing, these posts reach the feeds of millions of other accounts, including those of such conservative luminaries as Fox News’s Sean Hannity, GOP strategist Karl Rove and Sen. Ted Cruz (R-Tex.), according to researcher Jonathan Albright…’Life isn’t fair,’ Sobieski said with a smile. ‘Twitter in a way is like a meritocracy. You rise to the level of your ability….People who succeed are just the people who work hard.'”
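
    The “scheduler” mechanic Timberg describes is almost trivially simple to build, which is part of the point: a stack of pre-written posts, replayed in a loop at a fixed rate. A minimal sketch, with illustrative names and a capped demo run instead of real waiting:

```python
import itertools

prewritten_posts = ["post 1", "post 2", "post 3"]      # the "stack" of drafts

POSTS_PER_DAY = 1000
SECONDS_BETWEEN_POSTS = 24 * 60 * 60 / POSTS_PER_DAY   # roughly 86 seconds

# itertools.cycle replays the stack "in repetitive loops", exactly the
# mechanic described; a real scheduler would sleep between iterations
# and call a posting API instead of print().
for n, post in enumerate(itertools.cycle(prewritten_posts)):
    if n >= 5:                                         # cap the demo output
        break
    print(f"[every {SECONDS_BETWEEN_POSTS:.0f}s] tweeting: {post}")
```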

    The most dangerous accounts, the most influential accounts, are often accounts that are supplemented with human input, and also a human identity that’s very strong and possibly already established before the elections come in.

    I mean, I do hold that it’s not okay to come in and try to influence someone’s election; when I look at these YouTube videos, I think: Someone has to be funding this. In the case of the YouTube research, though, I looked at this more from a systems/politics perspective.

    We have a problem that’s greater than the one-off abuse of technologies to manipulate elections. This thing is parasitic. It’s growing in size. The last week and a half has been some of the worst I’ve ever seen, just in terms of what’s trending. YouTube is having to go in manually and take these videos down. YouTube’s search suggestions, especially in the context of fact-checking, are completely counterproductive. I think Russia is a side effect of our larger problems.

    Why is it getting worse?

    Albright: There are more people online, they’re spending more time online, there’s more content, people are becoming more polarized, algorithms are getting better, the amount of data that platforms have is increasing over time.

    I think one of the biggest things that’s missing from political science research is that it usually doesn’t consider the amount of time that people spend online. Between the 2012 election and the 2016 election, smartphone use went up by more than 25 percent. Many people spend all of their waking time somehow connected.

    This is where psychology really needs to come in. There’s been very little psychology work done looking at this from an engagement perspective, looking at the effect of seeing things in the News Feed but not clicking out. Very few people actually click out of Facebook. We really need social psychology, we really need humanities work to come in and pick up the really important pieces. What are the effects of someone seeing vile or conspiracy news headlines in their News Feed from their friends all day?

    Owen: This is so depressing.
  6. ‘Whatever the causes of political polarisation today, it is not social media or the internet.

    ‘If anything, most people use the internet to broaden their media horizons. We found evidence that people actively look to confirm the information that they read online, in a multitude of ways. They mainly do this by using a search engine to find offline media and validate political information. In the process they often encounter opinions that differ from their own, and as a result, whether they stumbled across the content passively or used their own initiative to search for answers while double-checking their “facts”, some changed their own opinion on certain issues.’

    The research shows that respondents used an average of four different media sources, and had accounts on three different social media platforms. The more media outlets people used, the more they tended to avoid echo chambers.

    While neither age, income, ethnicity nor gender was found to significantly influence the likelihood of being in an echo chamber, political interest significantly did. Those with a keen political interest were most likely to be opinion leaders to whom others turn for political information. Compared with the less politically inclined, these people were found to be media junkies who consumed political content wherever they could find it, and as a result of this diversity they were less likely to be in an echo chamber.

    Dr Elizabeth Dubois, co-author and Assistant Professor at the University of Ottawa, said: ‘Our results show that most people are not in a political echo chamber. The people at risk are those who depend on only a single medium for political news and who are not politically interested: about 8% of the population. However, because of their lack of political engagement, their opinions are less formative and their influence on others is likely to be comparatively small.’
  7. Highlighting the U.S.’s long history of meddling in other countries’ elections is not “whataboutism,” but rather a highly germane point for understanding the context of the allegations of Russian meddling in Election 2016, Caitlin Johnstone observes.

    By Caitlin Johnstone

    There is still no clear proof that the Russian government interfered with the 2016 U.S. election in any meaningful way. Which is weird, because Russia and every other country on earth would be perfectly justified in doing so.

    Former CIA Director James Woolsey has admitted on national television that the United States routinely meddles in other countries’ elections.

    Like every single hotly publicized Russiagate “bombshell” that has broken since this nonsense began, Mueller’s indictment of 13 Russian social media trolls was paraded around as proof of something hugely significant (an “act of war” in this case), but on closer examination turns out to be empty.

    The always excellent Moon of Alabama recently made a solid argument that has also been advanced by Russiagate skeptics like TYT’s Michael Tracey and Max Blumenthal of The Real News, pointing out that there is in fact no evidence that the troll farming operation was an attempt to manipulate the U.S. election, nor indeed that it had any ties to the Russian government at all, nor indeed that it was anything other than a crafty Russian civilian’s money-making scheme.
  8. Given the tumult of the news cycle in late 2016, it is understandable how a report put out in late November of that year by SHEG, the Stanford History Education Group, a division within the university’s Graduate School of Education, might have been overlooked. But anyone still self-soothing with the thought that it’s primarily adults, their brains addled by Fox, recklessly incompetent at the civic skills required to keep democracy even limping along, ought to be chastened by what the report says. “Evaluating Information: The Cornerstone of Online Civic Reasoning” detailed the depressing results of 18 months of research into young people’s digital media literacy.

    According to the study’s authors (one of whom was Wineburg), across income levels and educational environments, in beleaguered urban school districts and well-resourced suburban ones, the ability of so-called “digital natives” to reason through the information they encounter online “can be summed up in one word: bleak.” Eighty percent of middle-schoolers in the study could not distinguish articles from ads labeled “sponsored content.” High-schoolers, when shown an imgur photo depicting weird, double-headed daisies purporting to show the effects of a nuclear meltdown, accepted its claim at face value, with only 20 percent of respondents raising objections about the complete lack of information on the picture’s provenance. Reading the report, I was horrified but not surprised. Yes, my students would get A’s on their ability to produce cool merch promoting their YouTube channels if I graded such things, but when it comes to bringing sound judgement and a critical eye to the media they consume, they are babes in the woods.

    While it is a relief to me to know that fact-checkers’ online practices can be studied, taught, and learned, there is still the matter of getting these skills to children in a systematic way. I talked about this with Jennifer Higgs, a professor of Education at UC-Davis whose research focuses on digital practices in the classroom, and she emphasized that doing this will require an investment in educating teachers, who are themselves often untrained in digital media literacy. Higgs explained that, to date, most professional education for teachers around technology has been about how to use it in the classroom rather than on developing a critical framework for helping students to decipher its messages.
  9. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish this first phase of their work and then disappear completely from view to complete it, we have a brief opportunity to identify and catalogue the processes shaping our drift to a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.

    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, the notion that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
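
    The tracking mechanism itself is mundane. A toy illustration of how a third-party ‘pixel’ works: a server assigns each browser an ID cookie the first time it loads the pixel, then logs the Referer header of every later request, revealing which pages the browser visits. Everything below (names, port, cookie format) is hypothetical, not Facebook’s implementation.

```python
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get("Cookie", "")
        if "uid=" in cookie:
            uid = cookie.split("uid=")[1].split(";")[0]   # returning browser
        else:
            uid = uuid.uuid4().hex                        # first visit: tag it
        # The Referer header names the page that embedded the pixel: this is
        # how the tracker learns where a user goes across the wider web.
        page = self.headers.get("Referer", "unknown page")
        print(f"browser {uid} is on {page}")
        self.send_response(200)
        self.send_header("Set-Cookie", f"uid={uid}")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")   # truncated 1x1 gif placeholder

HTTPServer(("localhost", 8000), TrackingPixel).serve_forever()
```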

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised—but, given that presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
  10. Children who are cyberbullied are three times more likely to contemplate suicide, according to a study in JAMA Pediatrics in 2014. With such facts and figures, who could deny that there’s something to worry about? Throw in the increased unease within big technology companies such as Facebook about the corrosive effects of rumor and fake news in its feeds, and among executives such as former Facebook VP Chamath Palihapitiya that they’ve unleashed a potentially destructive force, and the argument would seem airtight.

    Except that it’s not. Widespread parental apprehension combined with studies lasting only a few years, with few data points and few controls, does not make an unequivocal case. Is there, for instance, a control group of teens who spent an equivalent amount of time watching TV in the 70s or playing arcade video games in the 80s or in internet chat rooms in the 90s? There is not. We may fear the effects of the smartphone, but it would seem that we fear massive uncertainty about the effects of the smartphone at least as much.

    Any new technology whose effects are unknown bears careful study, but that study should start with a blank slate and an open mind. The question should not be framed by what harm these devices and technologies cause but rather by an open-ended question about their long-term effects.

    Take the frequently cited link between isolation, cyber-bullying, depression and suicide. Yes, suicide rates in the U.S. have been on the rise, but that has been true since the early 1990s, and prevalence is highest among middle-aged men, who are most disrupted by the changing nature and demographics of employment but are not the teens spending so many hours glued to their devices. Cyber-bullying is an issue, but no one kept rigorous data about physical and psychological bullying in the 20th century, so it’s impossible to know if the rate and effects of bullying have grown or diminished in a cyber age. As for depression, there too, no one looked at the syndrome until late in the 20th century, and it remains a very fuzzy term when used in mainstream surveys. It’s impossible to say with any certainty what the effects of technology on depression are, especially without considering other factors such as income, diet, age, and family circumstances.

    Some might say that until we know more, it’s prudent, especially with children, to err on the side of caution and concern. There certainly are risks. Maybe we’re rewiring our brains for the worse; maybe we’re creating a generation of detached drones. But there also may be benefits of the technology that we can’t (yet) measure.

    Consider even an anodyne prescription such as “everything in moderation.” Information is not like drugs or alcohol; its effects are neither simple nor straightforward. As a society, we still don’t strike the right balance between risk and reward for those substances. It will be a long time before we fully grapple with the pros and cons of smartphone technology.


