2018/09/18: “Alternative Influence Network (AIN)”: an alternative media system that adopts the techniques of brand influencers to build audiences and “sell” them political ideology.
Alternative Influence offers insights into the connections between influence, amplification, monetization, and radicalization at a time when platform companies struggle to define and enforce policies and standards for extremist influencers. The network of scholars, media pundits, and internet celebrities that Lewis identifies leverages YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism all the way to overt white nationalism.
Notably, YouTube is a principal online news source for young people, which is why it is concerning that YouTube, a subsidiary of Google, has become the single most important hub by which an extensive network of far-right influencers profits from broadcasting propaganda to young viewers.
“Social networking between influencers makes it easy for audience members to be incrementally exposed to, and come to trust, ever more extremist political positions,” writes Lewis, who outlines how YouTube incentivizes their behavior. Lewis illustrates common techniques that these far-right influencers use to make money as they cultivate alternative social identities and use production value to increase their appeal as countercultural social underdogs. The report offers a data visualization of this network to show how connected influencers act as a conduit for viewership.
Increasingly, understanding the circulation of extremist political content does not just involve fringe communities and anonymous actors. Instead, it requires us to scrutinize polished, well-lit microcelebrities and the captivating videos that are easily available on the pages of the internet’s most popular video platform.
2018/09/27: “One of the most interesting findings is that fake news is mostly circulated by politically affiliated accounts (in particular those of the embattled candidate Fillon), whereas debunks mostly came from unaffiliated accounts,” he said in an email. “This seems like quite a new observation.”
In an email to Daniel, Sénécat agreed, and said that fact-checkers should think about adapting their methodologies to tackle a range of political misinformation.
“We fact-checkers should not only work on 100 percent false information but also delve into other phenomena such as propaganda,” he said, “and consider that false information is one part of a more massive phenomenon, which we could define as misinformation as a whole.”
2018/09/24: Today you cannot publish photos of the Eiffel Tower illuminated by its light show. While the tower itself is no longer covered by copyright as an architectural work of creative value, the same is not true of the choreography of lights that illuminates it at night. The light choreography is covered by copyright and therefore cannot be filmed or photographed.
Another thing to keep in mind: even where an exception for private, non-commercial use applies, you still cannot post these photos on social media, since the platforms put them to commercial use.
Art. 11: News aggregators (Google News, Apple News, Flipboard...)
You may no longer find news from European newspapers there. Since aggregators would be forced to request a license from newspapers in order to publish links to their stories, if they chose not to do so they could no longer publish European media’s news.
Example: similar laws were passed in both Spain and Germany in the past, but Google News preferred not to pay, and in Spain it shut down, with a resulting drop in traffic to Spanish newspapers.
The same goes for social networks: you may no longer be able to link to news from European media. But don’t worry: fake news made in Russia will still circulate freely, since it is not subject to the directive.
2018/09/24: Snippets are being edited to improve/damage reputation or send certain signals to different audiences.
While the changes in the Bipartisan Report panel illustrate the possible use of the Wikipedia snippets to either damage or salvage the reputation of a publisher, there are other changes that are puzzling in their nature. Here is one, concerning the magazine American Renaissance, a white supremacist publication.
Figure 5: Knowledge Panels for American Renaissance in January and September 2018. The change in the text snippets makes one wonder which audiences are being targeted.
Both text snippets shown in Figure 5 acknowledge that American Renaissance is a white supremacist publication, but the provenance of the categorization differs. In January, the snippet lists well-known third-party organizations as sources for the “white supremacist” label; in the September snippet, however, we read that the publisher self-describes as a “white-advocacy organization.” This shift of perspective (who does the labeling?) needs to be a matter of debate. Should these information panels tell us what organizations think about themselves (how is this different from the “About Us” pages that literacy experts suggest avoiding?) or how other (especially watchdog) organizations regard them?
I don’t know how we can solve these issues without increasing the burden on Wikipedia editors. However, I think it’s important to raise awareness about these issues, so that we continue to actively address them. Furthermore, Google and Facebook need to better acknowledge the limitations of their initiatives and increase their support for Wikipedia and other knowledge production organizations.
2018/09/14: while Brandeis believed that anyone had the right to express their views, he did not believe that anyone had the right to be amplified.
More importantly, he didn’t believe that anyone who had the means to shove a message down someone’s throat had the right to do so.
Health misinformation in Nigeria varies from “cruel hoaxes” such as drinking saltwater to cure Ebola, to general misperceptions about causes of disease, mode of transmission and available treatment.
There are also ungrounded concerns about the safety of medical interventions. Classic examples include false beliefs about contraceptives and vaccinations.
Nigerians have “generally poor health-seeking behaviour” as a result of poverty, religion and a poorly functioning health system. Social media makes the situation worse by spreading false health rumours.
2018/09/12: In the book, the narrator, a sociologist, describes a system in which status accorded by birth has been replaced by a society in which the classes are reconstituted according to the basic formula IQ + Effort = Merit. The belief in a common good and a flourishing civic life is corroded. “If the meritocrats believe…that their advantage comes from their own merits,” Young wrote, “they can feel they deserve whatever they can get.”
As Young’s book predicted, opportunities to accrue social capital, the springboard that allows the middle classes to leap ahead, have been drastically reduced for those at the bottom of society even as, contrary to one of Young’s predictions, the rich have grown wealthier with 10% owning 40% of this country’s wealth.
Democracy does not require perfect equality but it does require that citizens share a common life.
Do we want a society in which everything is up for sale? Meritocrats might say, “Yes”. As Young pointed out all those years ago, the ability to buy what it wants when it wants is one way in which the meritocracy proves its “worth” – at least to itself.
2018/09/07: I don’t want to accept the idea that certain words should be banned because they are problematic, especially for Americans trying to make peace with their past.
Instead of banning every word out there, we should make the mental effort to do better than the political correctness movement that stops at the surface.
So, let’s keep calling it master-slave, and instead call for the US, where a sizable black population is very poor, to have free healthcare, to have police that are less biased against non-white people, and to abolish the death penalty. That would really make a difference.
For instance, Europeans, who are a lot less sensitive to political correctness, have managed to do a much better job on that front.
Moreover, I don't believe in a right not to be offended, because offense is subjective: different groups may feel offended by different things.
To protect a right not to be offended, it becomes basically impossible to say or do anything in the long run. Is this the evolution of our society?
A few days ago on Hacker News I discovered I can no longer say "fake news" for instance.
2018/09/06: Never before has such a small number of firms been able to control what billions can say and see
For all the recent hand-wringing in the United States over Facebook’s monopolistic power, the mega-platform’s grip on the Philippines is something else entirely. Thanks to a social media–hungry populace and heavy subsidies that keep Facebook free to use on mobile phones, Facebook has completely saturated the country. And because using other data, like accessing a news website via a mobile web browser, is precious and expensive, for most Filipinos the only way online is through Facebook. The platform is a leading provider of news and information, and it was a key engine behind the wave of populist anger that carried Duterte all the way to the presidency.
If you want to know what happens to a country that has opened itself entirely to Facebook, look to the Philippines. What happened there — what continues to happen there — is both an origin story for the weaponization of social media and a peek at its dystopian future. It’s a society where, increasingly, the truth no longer matters, propaganda is ubiquitous, and lives are wrecked and people die as a result — half a world away from the Silicon Valley engineers who’d promised to connect their world.
In July, residents of a rural Indian town saw rumors of child kidnappers on WhatsApp. Then they beat five strangers to death.
There are human consequences to Facebook’s growth-at-all-costs approach in the developing world. In Myanmar, hate speech spread on the company’s Messenger app amplified calls for the genocide of Rohingya Muslims. In the Philippines, President Rodrigo Duterte stoked anger and fear on Facebook in service of a brutal drug war. In Brazil, anti-vaccination groups spread misinformation on WhatsApp about yellow fever vaccinations, contributing to a measured uptick of the disease. And in India, villagers — many experiencing the internet for the first time — have whipped themselves into frenzies after viewing viral, forwarded videos from unknown sources warning of child abductors.
It was less about actually electing Trump; it was pretty clear the GRU's goal was to weaken a future Hillary presidency. Putin has a personal antipathy towards her and believes that she was behind the protests against him around the 2012 Russian election, so the GRU activity was specifically focused on weakening her.
Throwing an election one way or another is going to be very difficult for a foreign adversary, but throwing any election into chaos is totally doable right now.
Trump, like so many other politicians and pundits, has found search and social media companies to be convenient targets in the debate over free speech and censorship online. "This is a very serious situation-will be addressed!"

But in this moment, the conversation we should be having (how can we fix the algorithms?) is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that's the very problem that needs fixing.

The algorithms don't understand what is propaganda and what isn't, or what is "fake news" and what is fact-checked. Their job is to surface relevant content (relevant to the user, of course), and they do it exceedingly well. So well, in fact, that the engineers who built these algorithms are sometimes baffled: "Even the creators don't always understand why it recommends one video instead of another," says Guillaume Chaslot, an ex-YouTube engineer who worked on the site's algorithm.

YouTube's algorithms can also radicalize by suggesting "white supremacist rants, Holocaust denials, and other disturbing content," Zeynep Tufekci recently wrote in the Times. "YouTube may be one of the most powerful radicalizing instruments of the 21st century."

The problem extends beyond YouTube, though. On Google search, dangerous anti-vaccine misinformation can commandeer the top results. And on Facebook, hate speech can thrive and fuel genocide.

So what can we do about it? The solution isn't to outlaw algorithmic ranking or make noise about legislating what results Google can return. Algorithms are an invaluable tool for making sense of the immense universe of information online. There's an overwhelming amount of content available to fill any given person's feed or search query; sorting and ranking is a necessity, and there has never been evidence indicating that the results display systemic partisan bias. It's imperative that we focus on solutions, not politics.
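The core point above — that ranking optimizes for relevance and engagement, not accuracy — can be made concrete with a minimal, hypothetical sketch. This is not any platform's actual code; the posts, fields, and weighting are invented for illustration. The scoring function sees only engagement signals, so an engaging falsehood naturally outranks a fact-checked report:

```python
# Hypothetical sketch of engagement-based feed ranking.
# Note: "accurate" is tracked in the data but never used by the score.
posts = [
    {"title": "Fact-checked report", "clicks": 120, "shares": 10, "accurate": True},
    {"title": "Outrage-bait rumor", "clicks": 300, "shares": 90, "accurate": False},
]

def engagement_score(post):
    # Assumed weighting: shares count five times as much as clicks.
    return post["clicks"] + 5 * post["shares"]

# Rank the feed purely by predicted engagement, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
```

Under these assumptions the rumor (score 750) beats the report (score 170), even though the ranking logic is entirely neutral — which is exactly why "no partisan bias" and "no misinformation problem" are different claims.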
At least 31 people have been killed over the last year in 10 different states by lynch mobs mobilised by rumours of child lifting (kidnapping) spread over WhatsApp.
The good news is that an increasing number of people seem to agree that Facebook, Google, etc. are monopolies, and that this is a problem. Agreeing we have a problem is always a crucial first step. But to go f
False rumors set Buddhist against Muslim in Sri Lanka, the most recent in a global spate of violence fanned by social media.
We crunched the data on where journalists work and how fast it's changing. The results should worry you.
The lack of analysis by journalists, combined with a limited number of particularly active people on Twitter, can today turn a whisper into a "collective scream."
After the Cambridge Analytica case, the perspective of a computational social scientist. Is it really possible to manipulate voting behavior through a data-driven strategy delivered via social platforms? And why would populists, in particular, be manipulable? Some first, partial answers.
Falsehoods almost always beat out the truth on Twitter, penetrating further, faster, and deeper into the social network than accurate information.