2018/10/05: When white respondents perceived the share of non-white residents in the nation and in their cities to be higher, they tended to feel that they themselves were being discriminated against more. While the actual size of the non-white population in their neighborhood also went hand in hand with this attitude, that association was less strong.
Having diverse neighbors move in seems to have one of two effects on white Americans. It can affirm that their overestimation of the extent of demographic change happening in the country is correct, and therefore increase the threat they perceive. Or, by giving them opportunities to interact with people who don’t look like them, it can mitigate some of their fears.
2018/09/25: DNA, these marketing campaigns imply, reveals something essential about you. And it’s working. Thanks to television-ad blitzes and frequent holiday sales, genetic-ancestry tests have soared in popularity in the past two years. More than 15 million people have now traded their spit for insights into their family history.
If this were simply about wearing kilts or liking Ed Sheeran, these ads could be dismissed as, well, ads. They’re just trying to sell stuff, shrug. But marketing campaigns for genetic-ancestry tests also tap into the idea that DNA is deterministic, that genetic differences are meaningful. They trade in the prestige of genomic science, making DNA out to be far more important in our cultural identities than it is, in order to sell more stuff.
First, the accuracy of these tests is unproven (as detailed here and here). But putting that aside, consider simply what it means to get a surprise result of, say, 15 percent German. If you speak no German, celebrate no German traditions, have never cooked German food, and know no Germans, what connection is there, really? Cultural identity is the sum total of all of these experiences. DNA alone does not supersede it.
Listening to 99 Luftballons or rooting for Germany in the World Cup is fairly trivial as these things go. But this wave of marketing campaigns encourages a way of thinking—that you can pick and choose which fractional parts of genetic identity to highlight when it makes for good cocktail-party conversation.
2018/09/24: Snippets are being edited to improve/damage reputation or send certain signals to different audiences.
While the changes in the Bipartisan Report panel illustrate the possible use of the Wikipedia snippets to either damage or salvage the reputation of a publisher, there are other changes that are puzzling in their nature. Here is one, concerning the magazine American Renaissance, a white supremacist publication.
Figure 5: Knowledge Panels for American Renaissance in January and September 2018. The change in the text snippets makes one wonder which audiences are being targeted.
Both text snippets shown in Figure 5 acknowledge that American Renaissance is a white supremacist publication, but the provenance of the categorization differs. In January, the snippet lists well-known third-party organizations as sources for the “white supremacist” label; in the September snippet, however, we read that the publisher self-describes as a “white-advocacy organization”. This shift of perspective (who does the labeling?) needs to be a matter of debate. Should these information panels tell us what organizations think about themselves (how is this different from the “About Us” pages that literacy experts suggest avoiding?) or how other (especially watchdog) organizations regard them?
I don’t know how we can solve these issues without increasing the burden on Wikipedia editors. However, I think it’s important to raise awareness about these issues, so that we continue to actively address them. Furthermore, Google and Facebook need to better acknowledge the limitations of their initiatives and increase their support for Wikipedia and other knowledge production organizations.
Trump, like so many other politicians and pundits, has found search and social media companies to be convenient targets in the debate over free speech and censorship online. "This is a very serious situation-will be addressed!"

But in this moment, the conversation we should be having (how can we fix the algorithms?) is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that's the very problem that needs fixing.

The algorithms don't understand what is propaganda and what isn't, or what is "fake news" and what is fact-checked. Their job is to surface relevant content (relevant to the user, of course), and they do it exceedingly well. So well, in fact, that the engineers who built these algorithms are sometimes baffled: "Even the creators don't always understand why it recommends one video instead of another," says Guillaume Chaslot, an ex-YouTube engineer who worked on the site's algorithm.

YouTube's algorithms can also radicalize by suggesting "white supremacist rants, Holocaust denials, and other disturbing content," Zeynep Tufekci recently wrote in the Times. "YouTube may be one of the most powerful radicalizing instruments of the 21st century."

The problem extends beyond YouTube, though. On Google search, dangerous anti-vaccine misinformation can commandeer the top results. And on Facebook, hate speech can thrive and fuel genocide.

So what can we do about it? The solution isn't to outlaw algorithmic ranking or make noise about legislating what results Google can return. Algorithms are an invaluable tool for making sense of the immense universe of information online.
There's an overwhelming amount of content available to fill any given person's feed or search query; sorting and ranking is a necessity, and there has never been evidence indicating that the results display systemic partisan bias.

It's imperative that we focus on solutions, not politics.