mfioretti: fake news*

Bookmarks on this page are managed by an admin user.

40 bookmark(s)

  1. From this mis-diagnosis flows a proposed solution: limit Facebook and Google’s access to our personal data and/or ensure others have access to that personal data on equal terms (“data portability”). Data portability means almost nothing in a world where you have a dominant network: so what if I can get my data out of Facebook, if no other network has a critical mass of participants? What is needed is for Facebook to have a live, open read/write API that allows other platforms to connect if authorized by the user.

    In fact, personal data is a practical irrelevance to the monopoly issue. Focusing on it serves only to distract us from the real solutions.

    Limiting Facebook’s and Google’s access to our personal data or making it more portable would make very little difference to their monopoly power, or reduce the deleterious effects of that power on innovation and freedom — the key freedoms of enterprise, choice and thought.

    It makes little difference because their monopoly simply doesn’t arise from their access to our personal data. Instead it comes from massive economies of scale (costless copying) plus platform effects. If you removed Google’s and Facebook’s ability to use personal data to target ads tomorrow, it would make very little difference to their top or bottom lines, because their monopoly on our attention would be little changed and their ad targeting would be little diminished. In Google’s case, the fact that you type a specific search from a particular location is already enough to target ads effectively; similarly, Facebook’s knowledge of your broad demographic characteristics would be enough, given the lock-hold it has on our online attention.

    What is needed in Google’s case is openness of the platform and in Facebook’s openness combined with guaranteed interoperability (“data portability” means little if everyone is on Facebook!).

    Worse, focusing on privacy actually reinforces their monopoly position. It does so because privacy concerns:

    Increase compliance costs which burden less wealthy competitors disproportionately. In particular, increased compliance costs make it harder for new firms to enter the market. A classic example is the “right to be forgotten” which actually makes it harder for alternative search firms to compete with Google.
    Make it harder to get (permitted) access to user data on the platform, when it is precisely (user-permitted) read/write access to a platform’s data that offers the best chance for competition. In fact, privacy concerns now give monopolists the perfect excuse to deny such access: Facebook can deny competing firms (user-permitted) access to user data by citing “privacy concerns”.


    Similarly, the idea sometimes put forward that we just need another open-source decentralized social network is completely implausible (even if run by Tim Berners-Lee*).

    Platforms/networks like Facebook tend to standardize: witness phone networks, postal networks, electricity networks and even the Internet. We don’t want lots of incompatible social networks. We want one open one — just like we have one open Internet.

    In addition, the idea that some open-source decentralized effort is going to take on an entrenched highly resourced monopoly on its own is ludicrous (the only hope would be if there was serious state assistance and regulation — just in the way that China got its own social networks by effectively excluding Facebook).

    Instead, in the case of Facebook we need to address the monopoly at its root: networks like this will always tend towards standardization. The solution is to ensure that we get an open rather than a closed, proprietary global social network — just like we got with the open Internet.

    Right now that would mean enforcing equal access rights to the Facebook API for competitors, or enforcing full open-sourcing of key parts of the software and tech stack plus guarantees of ongoing non-discriminatory API access.
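
    A toy sketch of what user-authorized read/write interoperability could look like in practice (all names and interfaces below are hypothetical, not an actual Facebook API): two platforms expose the same minimal read/write interface, and a bridge mirrors a user's posts between them only when the user has granted a token on both sides.

    # Toy model of user-authorized read/write interoperability between two
    # social platforms. All classes and method names are hypothetical; this
    # is not Facebook's API, only a sketch of the idea argued for above.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str
        text: str

    @dataclass
    class Platform:
        name: str
        feeds: dict = field(default_factory=dict)   # user -> list of Posts
        tokens: set = field(default_factory=set)    # tokens granted by users

        def authorize(self, user: str) -> str:
            """User grants a third-party client read/write access to their account."""
            token = f"{self.name}:{user}"
            self.tokens.add(token)
            return token

        def read_feed(self, token: str, user: str) -> list:
            if token not in self.tokens:
                raise PermissionError("user has not authorized this client")
            return self.feeds.get(user, [])

        def write_post(self, token: str, post: Post) -> None:
            if token not in self.tokens:
                raise PermissionError("user has not authorized this client")
            self.feeds.setdefault(post.author, []).append(post)

    def bridge(user, src, src_token, dst, dst_token):
        """Mirror a user's posts from one platform to another, but only with
        tokens the user explicitly granted on both sides."""
        for post in src.read_feed(src_token, user):
            dst.write_post(dst_token, post)

    # Example: Alice posts on the dominant network, but her friends can read
    # her on a smaller one, because she authorized an interoperability bridge.
    big, small = Platform("BigNetwork"), Platform("SmallNetwork")
    t_big, t_small = big.authorize("alice"), small.authorize("alice")
    big.write_post(t_big, Post("alice", "hello from the dominant network"))
    bridge("alice", big, t_big, small, t_small)
    print([p.text for p in small.read_feed(t_small, "alice")])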

    Even more importantly, we need to prevent these kinds of monopolies in future — we should stop shutting the stable door after the horse has bolted. This means systematic funding of open protocols and platforms. By open I mean that the software, algorithms and non-personal data are open. And we need to fund the innovators who create and develop these, and the way to do that is to replace patents/copyright with remuneration rights.
    https://blog.okfn.org/2018/05/09/solv...opolies-problem-facebook-google-et-al
    Voting 0
  2. the gullibility of voters who take their news from Gateway Pundit or Facebook groups provides the raw material for the cynical acceptance of untruth. Arendt explained the phenomenon this way:

    In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and nothing was true...

    This mixture of gullibility and world-weary cynicism, Arendt wrote, dispelled “the illusion that gullibility was a weakness of unsuspecting primitive souls and cynicism the vice of superior and refined minds.”

    Mass propaganda discovered that its audience was ready at all times to believe the worst, no matter how absurd, and did not particularly object to being deceived because it held every statement to be a lie anyhow.

    The masters of this sort of propaganda understood that they could change their stories with impunity, because their followers would see the deceptions as a form of 8-dimensional chess.

    The totalitarian mass leaders based their propaganda on the correct psychological assumption that, under such conditions, one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.

    Remarkably, she wrote that nearly 70 years ago, long before the rise of our own alternative reality media ecosystems. But Arendt understood the endgame here; a tsunami of lies isn’t aimed at getting people to believe what the propagandist is saying. Rather, it’s to induce chronic disbelief, or an indifferent shrug. Who knows what to believe? Who cares? What is truth?

    “The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie,” wrote Arendt, “but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.”
    https://www.weeklystandard.com/charle...thing-is-possible-and-nothing-is-true
    Voting 0
  3. Time and again, communal hatreds overrun the newsfeed — the primary portal for news and information for many users — unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company. Some users, energized by hate speech and misinformation, plot real-world attacks.

    A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.


    where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximize user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.
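
    A deliberately simplified illustration of that “promotes whatever wins the most attention” dynamic (the weights below are invented; Facebook’s real ranking model is not public): posts are ordered by an engagement score, so emotionally charged content floats to the top regardless of accuracy.

    # Deliberately simplified engagement-based ranking, to illustrate the
    # dynamic described above. The weights are invented for illustration and
    # are in no way Facebook's real (undisclosed) ranking model.
    posts = [
        {"text": "local weather report",        "likes": 40,  "comments": 3,   "shares": 1},
        {"text": "angry rumor about outsiders", "likes": 900, "comments": 450, "shares": 700},
        {"text": "fact-check of the rumor",     "likes": 120, "comments": 30,  "shares": 15},
    ]

    def engagement_score(post):
        # Signals that keep people on the site longer get heavier weights.
        return post["likes"] + 4 * post["comments"] + 8 * post["shares"]

    for post in sorted(posts, key=engagement_score, reverse=True):
        print(engagement_score(post), post["text"])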

    In the Western countries for which Facebook was designed, this leads to online arguments, angry identity politics and polarization. But in developing countries, Facebook is often perceived as synonymous with the internet and reputable sources are scarce, allowing emotionally charged rumors to run rampant. Shared among trusted friends and family members, they can become conventional wisdom.

    And where people do not feel they can rely on the police or courts to keep them safe, research shows, panic over a perceived threat can lead some to take matters into their own hands — to lynch.

    Last year, in rural Indonesia, rumors spread on Facebook and WhatsApp, a Facebook-owned messaging tool, that gangs were kidnapping local children and selling their organs. Some messages included photos of dismembered bodies or fake police fliers. Almost immediately, locals in nine villages lynched outsiders they suspected of coming for their children.
    https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html
    Voting 0
  4. How did big media miss the Donald Trump swell? News organizations old and new, large and small, print and online, broadcast and cable assigned phalanxes of reporters armed with the most sophisticated polling data and analysis to cover the presidential campaign. The overwhelming assumption was that the race was Hillary Clinton’s for the taking, and the real question wasn’t how sweeping her November victory would be, but how far out to sea her wave would send political parvenu Trump. Today, it’s Trump who occupies the White House and Clinton who’s drifting out to sea—an outcome that arrived not just as an embarrassment for the press but as an indictment. In some profound way, the election made clear, the national media just doesn’t get the nation it purportedly covers.

    What went so wrong? What’s still wrong? To some conservatives, Trump’s surprise win on November 8 simply bore out what they had suspected, that the Democrat-infested press was knowingly in the tank for Clinton all along. The media, in this view, was guilty not just of confirmation bias but of complicity. But the knowing-bias charge never added up: No news organization ignored the Clinton emails story, and everybody feasted on the damaging John Podesta email cache that WikiLeaks served up buffet-style. Practically speaking, you’re not pushing Clinton to victory if you’re pantsing her and her party to voters almost daily.

    The answer to the press’ myopia lies elsewhere, and nobody has produced a better argument for how the national media missed the Trump story than FiveThirtyEight’s Nate Silver, who pointed out that the ideological clustering in top newsrooms led to groupthink. “As of 2013, only 7 percent of journalists identified as Republicans,” Silver wrote in March, chiding the press for its political homogeneity. Just after the election, presidential strategist Steve Bannon savaged the press on the same point but with a heartier vocabulary. “The media bubble is the ultimate symbol of what’s wrong with this country,” Bannon said. “It’s just a circle of people talking to themselves who have no fucking idea what’s going on.”
    About the Illustration

    The map at the top of this piece shows how concentrated media jobs have become in the nation’s most Democratic-leaning counties. Counties that voted for Donald Trump in 2016 are in red, and Hillary Clinton counties are in blue, with darker colors signifying higher vote margins. The bubbles represent the 150 counties with the most newspaper and internet publishing jobs. Not only do most of the bubbles fall in blue counties, chiefly on the coasts, but an outright majority of the jobs are in the deepest-blue counties, where Clinton won by 30 points or more.

    Illustration by DataPoint; data reporting by Tucker Doherty

    But journalistic groupthink is a symptom, not a cause. And when it comes to the cause, there’s another, blunter way to think about the question than screaming “bias” and “conspiracy,” or counting D’s and R’s. That’s to ask a simple question about the map. Where do journalists work, and how much has that changed in recent years? To determine this, my colleague Tucker Doherty excavated labor statistics and cross-referenced them against voting patterns and Census data to figure out just what the American media landscape looks like, and how much it has changed.
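
    A back-of-the-envelope version of that cross-referencing exercise might look like the following sketch, with invented county names and figures standing in for the labor and election data: group counties by Clinton’s vote margin and sum the media jobs in each bucket.

    # Toy version of the jobs-vs-voting cross-reference described above.
    # County names and figures are invented; the real piece used Bureau of
    # Labor Statistics employment data and 2016 county-level vote margins.
    counties = [
        # (county, clinton_margin_pts, media_jobs)
        ("Deep-blue metro A",  +55, 30000),
        ("Deep-blue metro B",  +40, 22000),
        ("Lean-blue suburb",   +10,  4000),
        ("Lean-red exurb",     -12,  1500),
        ("Deep-red rural",     -45,   500),
    ]

    total = sum(jobs for _, _, jobs in counties)
    deep_blue = sum(jobs for _, margin, jobs in counties if margin >= 30)
    print(f"Share of media jobs in counties Clinton won by 30+ points: "
          f"{deep_blue / total:.0%}")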

    The results read like a revelation. The national media really does work in a bubble, something that wasn’t true as recently as 2008. And the bubble is growing more extreme. Concentrated heavily along the coasts, the bubble is both geographic and political. If you’re a working journalist, odds aren’t just that you work in a pro-Clinton county—odds are that you reside in one of the nation’s most pro-Clinton counties. And you’ve got company: If you’re a typical reader of Politico, chances are you’re a citizen of bubbleville, too.

    The “media bubble” trope might feel overused by critics of journalism who want to sneer at reporters who live in Brooklyn or California and don’t get the “real America” of southern Ohio or rural Kansas. But these numbers suggest it’s no exaggeration: Not only is the bubble real, but it’s more extreme than you might realize. And it’s driven by deep industry trends.


    The magic of the internet was going to shake up the old certainties of the job market, prevent the coagulation of jobs in the big metro areas, or so the Web utopians promised us in the mid-1990s. The technology would free internet employees to work from wherever they could find a broadband connection. That remains true in theory, with thousands of Web developers, writers and producers working remotely from lesser metropolises.

    But economists know something the internet evangelists have ignored: All else being equal, specialized industries like to cluster. Car companies didn’t arise in remote regions that needed cars—they arose in Detroit, which already had heavy industry, was near natural resources, boasted a skilled workforce and was home to a network of suppliers that could help car companies thrive. As industries grow, they bud and create spinoffs, the best example being the way Silicon Valley blossomed from just a handful of pioneering electronics firms in the 1960s. Seattle’s rise as a tech powerhouse was seeded by Microsoft, which moved to the area in 1979 and helped create the ecosystem that gave rise to companies like Amazon.

    As Enrico Moretti, a University of California, Berkeley, economist who has studied the geography of job creation, points out, the tech entrepreneurs who drive internet publishing could locate their companies in low-rent, low-cost-of-living places like Cleveland, but they don’t. They need the most talented workers, who tend to move to the clusters, where demand drives wages higher. And it’s the clusters that host all the subsidiary industries a tech start-up craves—lawyers specializing in intellectual property and incorporation; hardware and software vendors; angel investors; and so on.

    The old newspaper business model almost prevented this kind of clustering.


    Economists call these “non-tradable goods”—goods that must be consumed in the same community in which they’re made. The business of a newspaper can’t really be separated from the place where it’s published. It is, or was, driven by ads for things that don’t travel, like real estate, jobs, home decor and cars. And as that advertising has gotten harder and harder to come by, local newsrooms have become thinner and thinner.

    The online media, liberated from printing presses and local ad bases, has been free to form clusters, piggyback-style, on the industries and government that it covers.
    https://www.politico.com/magazine/sto...eal-journalism-jobs-east-coast-215048
    Voting 0
  5. The situation in the country is so severe that an estimated 700,000 Rohingya refugees are thought to have fled to neighboring Bangladesh following a Myanmar government crackdown that began in August. U.S. Secretary of State Rex Tillerson has labeled the actions as ethnic cleansing, as has the UN.

    Tensions inflamed, Facebook has been a primary outlet for racial hatred from high-profile individuals inside Myanmar. One of them, the monk Ashin Wirathu, who is barred from public speaking because of his past history, moved online to Facebook, where he quickly found an audience. Though his Facebook account was shuttered, he has vowed to open new ones in order to continue to amplify his voice via the social network.

    Beyond such visible figures, the platform has been ripe for anti-Muslim and anti-Rohingya memes and false news stories to go viral. UN investigators last month said Facebook has “turned into a beast” and played a key role in spreading hate.
    https://techcrunch.com/2018/04/06/mya...nch+%28TechCrunch%29&sr_share=twitter
    Voting 0
  6. Which brings us back to Facebook, which to this day seems at best to dimly understand how the news business works, as is evident in its longstanding insistence that it's not a media company. Wired was even inspired to publish a sarcastic self-help quiz for Facebook execs on "How to tell if you're a media company." It included such questions as "Are you the country's largest source of news?"

    The answer is a resounding yes. An astonishing 45 percent of Americans get their news from this single source. Add Google, and above 70 percent of Americans get their news from a pair of outlets. The two firms also ate up about 89 percent of the digital-advertising growth last year, underscoring their monopolistic power in this industry.

    Facebook's cluelessness on this front makes the ease with which it took over the press that much more bizarre to contemplate. Of course, the entire history of Facebook is pretty weird, even by Silicon Valley standards, beginning with the fact that the firm thinks of itself as a movement and not a giant money-sucking machine.


    That Facebook saw meteoric rises without ever experiencing a big dip in users might have something to do with the fact that the site was consciously designed to be addictive, as founding president Sean Parker recently noted at a conference in Philadelphia.

    Facebook is full of features such as "likes" that dot your surfing experience with neuro-rushes of micro-approval – a "little dopamine hit," as Parker put it. The hits might come with getting a like when you post a picture of yourself thumbs-upping the world's third-largest cheese wheel, or flashing the "Live Long and Prosper" sign on International Star Trek day, or whatever the hell it is you do in your cyber-time. "It's a social-validation feedback loop," Parker explained. "Exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology."
    https://www.rollingstone.com/politics...e-be-saved-social-media-giant-w518655
    Voting 0
  7. The crux of the matter lies in this detail: a limited number of particularly active people can turn a whisper into a “collective scream”, as was seen on this and other occasions, partly because of a lack of analysis by journalists. Today’s journalism naturally feeds on whatever circulates widely on social media, and it can hardly do otherwise; but social media numbers cannot be interpreted correctly without a careful analysis of how easily hashtags can be manipulated: it can turn out that, of the first 71,000 tweets mentioned above, 35,000 were produced by 500 people. These are “details” that cannot be overlooked, especially considering the behavior of the most active accounts, a tiny number, which cannot be described simply as bots: they alternate “human” activity, such as personalized tweets and replies to other tweets, with automated activity aimed solely at mechanically repeating the tweet in question. “Details” that point to a concerted attempt to manipulate the prominence of a hashtag, which the country’s leading media then reported as the sentiment of part of the PD electorate. How much these accounts contributed to the spread of the hashtag is hard to quantify precisely. What we can say with certainty is that without them we would not be talking about it.
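
    The check the article says newsrooms skipped is easy to automate: given the accounts behind a hashtag’s tweets, measure how much of the volume comes from the few most active ones. A minimal sketch, with simulated data mirroring the proportions quoted above (roughly 35,000 of 71,000 tweets from 500 accounts):

    # Minimal concentration check for a hashtag: what share of tweets comes
    # from the most active accounts? Data here is simulated to mirror the
    # proportions quoted above (about 35,000 of 71,000 tweets from 500 users).
    import random
    from collections import Counter

    random.seed(0)
    heavy = [f"heavy_{i}" for i in range(500)]       # hyperactive accounts
    casual = [f"user_{i}" for i in range(30000)]     # ordinary accounts
    tweets = random.choices(heavy, k=35000) + random.choices(casual, k=36000)

    counts = Counter(tweets)
    top500 = sum(n for _, n in counts.most_common(500))
    print(f"{len(counts)} distinct accounts, {len(tweets)} tweets")
    print(f"Top 500 accounts produced {top500 / len(tweets):.0%} of the volume")
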
    http://thevision.com/attualita/senzadime-alleanza
    Tags: , , , , , by M. Fioretti (2018-03-29)
    Voting 0
  8. Again: so where does the scandal of these past few days actually lie? It lies in the evidence of a fundamental error in the conception of human interactions, the conception Mark Zuckerberg has imposed since 2007 (by his own admission, in his long-awaited post-Cambridge Analytica statement): the idea of building a “web where you are social by default”, where sharing is the norm. A principle that is structurally opposed to the protection of individual privacy, which rests on confidentiality about one’s personal data as the norm.

    Zuckerberg explains it very well in his most recent statement, identifying (rightly) in that philosophical and anthropological error the root of the storm he now has to navigate: “In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps show where your friends live, your address book show their photos. To do this, we enabled people to log into apps and share who their friends were and some information about them.”

    This is what led Kogan, in 2013, to obtain access to the data of millions of people. And of course those data have immense scientific value, and it is right that research, if conducted with full respect for the informed consent of the users who become experimental subjects, should be able to access them. For academic purposes only, though. And even then, already in 2014 the famous experiment run by Facebook itself on manipulating the emotions of hundreds of thousands of users, who had deliberately been shown more positive or more negative content, had shown that even when no commercial aims are involved the question is ambiguous and complex. And that no, accepting convoluted terms of use that nobody reads is not enough to say that every user has, by the mere fact of having agreed to be on Facebook, accepted to become, indiscriminately, a lab rat enrolled in experiments about which they know nothing.

    And yet it was the platform itself that realized, in that very same year, that things could not go on like this: that in this way Facebook was losing control over which third parties had access to its users’ data. So the policy changed, and since then “friends” must consent to the processing of their data by an app. The new philosophy, Albright recalls, is “people first”. But it is too late. And the inability to truly regain control of that mass of information, demonstrated by the Cambridge Analytica case (can it be that Facebook has to learn from the newspapers that the company had not deleted the data it claimed to have deleted, or that it now has to run a serious audit to verify this, showing it has no idea whether the data are gone or not?), makes clear that the problem goes well beyond this single case: it is systemic.

    To put it more plainly: as Albright writes, the first version of the Facebook Graph API (v1.0), that is, what application developers could obtain from the social network between 2010, when it launched, and 2014, when the policy changed, made it possible to collect the following data not about the people who installed a given app but about their unwitting friends: “about, actions, activities, birthday, check-ins, education, events, games, groups, hometown, interests, likes, location, notes, statuses, tags, photos, questions, relationships, religion/politics, subscriptions, websites, work history”. Could anyone really believe it was possible to control where all these data ended up, for millions and millions of people?
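
    To make the scale of that permission model concrete, here is a toy model (invented names, fields and figures, not the real Graph API calls) of how a single consenting app user could expose selected profile fields of all of their unwitting friends under a v1.0-style design:

    # Toy model of v1.0-style "friends' data" permissions: one consenting user
    # exposes chosen fields of all their friends. Names, fields and numbers are
    # invented; this only illustrates the multiplier effect described above.
    FRIEND_FIELDS = ["birthday", "likes", "location", "religion_politics", "work_history"]

    profiles = {
        "alice": {"birthday": "1980-01-01", "likes": ["hiking"], "location": "Rome",
                  "religion_politics": "undisclosed", "work_history": ["teacher"],
                  "friends": ["bob", "carol", "dave"]},
        "bob":   {"birthday": "1975-05-05", "likes": ["chess"], "location": "Milan",
                  "religion_politics": "party X", "work_history": ["engineer"],
                  "friends": ["alice"]},
        "carol": {"birthday": "1990-09-09", "likes": ["cooking"], "location": "Turin",
                  "religion_politics": "party Y", "work_history": ["nurse"],
                  "friends": ["alice"]},
        "dave":  {"birthday": "1988-03-03", "likes": ["football"], "location": "Bari",
                  "religion_politics": "undisclosed", "work_history": ["lawyer"],
                  "friends": ["alice"]},
    }

    def harvest(consenting_user: str) -> dict:
        """Return the friend-level fields an app could collect from one install."""
        harvested = {}
        for friend in profiles[consenting_user]["friends"]:
            harvested[friend] = {f: profiles[friend][f] for f in FRIEND_FIELDS}
        return harvested

    data = harvest("alice")   # only Alice installed the app...
    print(f"1 consenting user -> {len(data)} non-consenting profiles harvested")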

    And is Facebook really only finding this out today? In 2011, the US Federal Trade Commission had already flagged the issue as problematic. It taught them nothing.
    https://www.valigiablu.it/facebook-cambridge-analytica-scandalo
    Voting 0
  9. Second question: Cambridge Analytica lists among its success stories the consulting role it played in Brexit and in the 2016 US presidential election. Does this mean that data-based computational propaganda works mainly with populist movements? Here we enter the realm of political speculation, but it is possible to try to reason about the question. If it were true that populism is more receptive to simple, targeted communication, it would mean that the mind of a conservative voter differs from the mind of a liberal voter. The person who raised this question is the linguist George Lakoff, who in his book “Moral Politics” hypothesized that conservatives have a strict family model, in which values are founded on self-discipline and hard work, while liberals have a participatory family model, in which values are based on caring for one another.
    https://www.agendadigitale.eu/cultura...ioni-con-i-social-che-dice-la-scienza
    Voting 0
  10. Falsehoods almost always beat out the truth on Twitter, penetrating further, faster, and deeper into the social network than accurate information.
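
    “Further, faster, and deeper” refers to properties of retweet cascades, chiefly their size and depth. A toy computation over invented retweet trees, just to pin down what those two metrics measure:

    # Toy cascade metrics of the kind the study compares for true vs. false
    # stories: size (number of retweets) and depth (longest retweet chain).
    # The edge lists are invented purely to illustrate the measurement.
    def cascade_metrics(edges):
        """edges: list of (retweeter, source) pairs; the source tweet is 'origin'."""
        parent = dict(edges)
        size = len(edges)
        def depth(node):
            d = 0
            while node in parent:
                node = parent[node]
                d += 1
            return d
        return size, max((depth(u) for u, _ in edges), default=0)

    false_story = [("a", "origin"), ("b", "a"), ("c", "b"), ("d", "c"), ("e", "b")]
    true_story  = [("x", "origin"), ("y", "origin")]

    print("false story: size=%d depth=%d" % cascade_metrics(false_story))
    print("true story:  size=%d depth=%d" % cascade_metrics(true_story))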

    And blame for this problem cannot be laid with our robotic brethren. From 2006 to 2016, Twitter bots amplified true stories as much as they amplified false ones, the study found. Fake news prospers, the authors write, “because humans, not robots, are more likely to spread it.”

    Political scientists and social-media researchers largely praised the study, saying it gave the broadest and most rigorous look so far into the scale of the fake-news problem on social networks, though some disputed its findings about bots and questioned its definition of news.

    “This is a really interesting and impressive study, and the results around how demonstrably untrue assertions spread faster and wider than demonstrably true ones do, within the sample, seem very robust, consistent, and well supported,” said Rasmus Kleis Nielsen, a professor of political communication at the University of Oxford, in an email.

    “I think it’s very careful, important work,” Brendan Nyhan, a professor of government at Dartmouth College, told me. “It’s excellent research of the sort that we need more of.”
    https://www.theatlantic.com/technolog...udy-ever-fake-news-mit-twitter/555104
    Voting 0


Page 1 of 4 - Online Bookmarks of M. Fioretti: Tags: fake news

About - Propulsed by SemanticScuttle