mfioretti: algorithms*

92 bookmark(s)

  1. anything designed to maximize engagement maximizes radicalization.

    What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

    Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

    In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides.
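
    The drift the restaurant analogy describes is easy to make concrete. Below is a minimal, purely illustrative sketch (the scoring function, the taste-adaptation rule and every number are invented for illustration, not YouTube's actual system): a recommender that greedily serves whatever maximizes predicted engagement, facing a user whose taste shifts toward what it is served, ratchets steadily toward more extreme content.

        # Hypothetical model: each candidate item has an "intensity" in [0, 1].
        # Predicted engagement peaks for items slightly more intense than the
        # user's current taste -- the "one step further" pull described above.
        def predicted_engagement(item_intensity, user_taste):
            return 1.0 - abs(item_intensity - (user_taste + 0.1))

        user_taste = 0.2                         # start with mild preferences
        catalog = [i / 100 for i in range(101)]  # items from mild 0.0 to extreme 1.0

        for step in range(10):
            # Greedy choice: serve whatever maximizes predicted engagement.
            chosen = max(catalog, key=lambda item: predicted_engagement(item, user_taste))
            # Taste adapts toward what was just consumed (the restaurant effect).
            user_taste = 0.7 * user_taste + 0.3 * chosen
            print(f"step {step}: served {chosen:.2f}, taste is now {user_taste:.2f}")

    Each pass serves an item a notch above the current taste, and the taste follows; nothing in the loop ever pulls back toward moderation.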
  2. search engine algorithms aren’t as neutral as Google would like you to think. Algorithms promote some results above others, and even a seemingly neutral piece of code can reflect society’s biases. What’s more, without any insight into how the algorithms work or what the broader context is, searches can unfairly shape the discussion of a topic like “black girls”.

    Noble spoke to MIT Technology Review about the problems inherent in the current system, how Google could do better, and how artificial intelligence might make things worse.

    But when we start getting into more complicated concepts around identity, around knowledge, this is where search engines start to fail us.
  3. As the visualization below shows, Michele, the bot pretending to be a fascist, enjoyed a radically different news feed experience from the others:
    Total number of posts seen by each bot (the wider the bar, the more posts), grouped by how many times the posts were repeated (the higher up the bar, the more times).

    Fash-bot Michele is shown a much smaller variety of posts, repeated way more often than normal — it saw some posts as often as 29 times in the 20 days represented in the data set.

    I mostly agree with studies such as ‘Political polarization? Don’t blame the web’, especially because the opposite belief is a kind of techno-determinism I feel doesn’t take into account a lot of political complexity. But the data above displays a frightening situation: the Michele bot has been segregated by the algorithm, and only receives content from a very narrow political area. Sure, let’s make fun of fascists because they see mostly pictures or because they are ill-informed, but these people will vote in 31 hours. Not cool.

    If you are curious which posts the Facebook algorithm deemed so essential that they had to be shown 29 times each (on average, more than once a day), here they are, all three of them. The third is peculiar, with its message that “mass media does not give us a platform, they never even mention our name, but people still declare they will vote for us. Mass media is a scam, spread the word”.
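
    For readers who want to reproduce that kind of chart, the counting behind it is sketched below, assuming the raw data reduces to one (bot, post_id) record per sighting; the names and sample records are hypothetical, not the study’s actual data format.

        from collections import Counter

        # Hypothetical log: one (bot, post_id) pair per time a post appeared
        # in a bot's newsfeed during the observation window.
        sightings = [
            ("michele", "post_a"), ("michele", "post_a"), ("michele", "post_b"),
            ("undecided", "post_c"), ("undecided", "post_a"),
        ]

        for bot in sorted({b for b, _ in sightings}):
            # How many times did this bot see each distinct post?
            repeats = Counter(post for b, post in sightings if b == bot)
            # Group posts by repetition count, mirroring the chart: bar width =
            # number of posts, bar height = how many times they were repeated.
            for times, n_posts in sorted(Counter(repeats.values()).items()):
                print(f"{bot}: {n_posts} post(s) seen {times} time(s)")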
  4. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounces off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.

    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute for Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
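
    The mechanism is worth spelling out, because it needs no cooperation from the phone at all. Below is a minimal sketch of how such a sensor works in principle, using Python and scapy; the interface name is a placeholder, the code requires root and a wireless card in monitor mode, and it illustrates the technique rather than the municipality’s actual software.

        from scapy.all import sniff
        from scapy.layers.dot11 import Dot11, Dot11ProbeReq

        seen = set()  # distinct device MAC addresses observed so far

        def handle(pkt):
            # Phones broadcast probe requests while scanning for known networks,
            # even when not connected to anything; addr2 is the sender's MAC.
            if pkt.haslayer(Dot11ProbeReq):
                mac = pkt[Dot11].addr2
                if mac and mac not in seen:
                    seen.add(mac)
                    print(f"device seen: {mac} (total so far: {len(seen)})")

        sniff(iface="wlan0mon", prn=handle, store=False)  # "wlan0mon" is a placeholder

    One caveat: recent phone operating systems randomize the MAC address while scanning, which blunts exactly this kind of passive tracking.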

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
  5. IoT will be able to take stock of your choices, moods, preferences and tastes, the same way Google Search does. With enough spreadsheets, many practical questions are rendered trivial. How hard will it be for the IoT — maybe through Alexa, maybe through your phone — to statistically study why, where and when you raise your voice at your child? If you can correlate people’s habits and physical attributes, it will be toddler-easy to correlate mood to environment. The digitally connected devices of tomorrow would be poor consumer products if they did not learn you well. Being a good and faithful servant means monitoring the master closely, and that is what IoT devices will do. They will analyze your feedback and automate their responses — and predict your needs. In the IoT, Big Data is weaponized, and can peer deeper into the seeds of your life than the government has ever dreamed.
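
    “Toddler-easy” is barely an exaggeration: once the observations sit in a table, the correlation itself is one line. A sketch with invented columns and numbers follows (nothing here is real device data).

        import pandas as pd

        # Hypothetical per-evening log from a connected speaker: ambient noise
        # and whether a raised-voice event was detected in the household.
        log = pd.DataFrame({
            "noise_db":     [45, 62, 55, 70, 48, 66, 58, 72],
            "hour":         [18, 19, 20, 19, 18, 21, 20, 19],
            "raised_voice": [ 0,  1,  0,  1,  0,  1,  0,  1],
        })

        # Pearson correlation between environment and the behaviour of interest,
        # plus the full matrix across every variable the device happens to log.
        print(log["noise_db"].corr(log["raised_voice"]))
        print(log.corr())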
  6. Rome and London are two huge, sluggish beasts of cities that have outlived millennia of eager reformers. They share a world where half the people already live in cities and another couple billion are on their way into town. The population is aging quickly, the current infrastructure, by its very nature, must crumble and be replaced, and climate disaster is taking the place of the past’s great urban fires, wars, and epidemics. Those are the truly important, dull but worthy urban issues.

    However, the cities of the future won’t be “smart,” or well-engineered, cleverly designed, just, clean, fair, green, sustainable, safe, healthy, affordable, or resilient. They won’t have any particularly higher ethical values of liberty, equality, or fraternity, either. The future smart city will be the internet, the mobile cloud, and a lot of weird paste-on gadgetry, deployed by City Hall, mostly for the sake of making towns more attractive to capital.

    Whenever that’s done right, it will increase the soft power of the more alert and ambitious towns and make the mayors look more electable. When it’s done wrong, it’ll much resemble the ragged downsides of the previous waves of urban innovation, such as railways, electrification, freeways, and oil pipelines. There will also be a host of boozy side effects and toxic blowback that even the wisest urban planner could never possibly expect.

    “information about you wants to be free to us.”

    This year, a host of American cities vilely prostrated themselves to Amazon in the hopes of winning its promised, new second headquarters. They’d do anything for the scraps of Amazon’s shipping business (although nobody knows what kind of jobs Amazon is really promising). This also made it clear, though, that the flat-world internet game was up, and it’s still about location, location, location.

    Smart cities will use the techniques of “smartness” to leverage their regional competitive advantages. Instead of being speed-of-light flat-world platforms, all global and multicultural, they’ll be digitally gated communities, with “code as law” that is as crooked, complex, and deceitful as a Facebook privacy chart.

    You still see this upbeat notion remaining in the current smart-city rhetoric, mostly because it suits the institutional interests of the left.

    The “bad part of town” will be full of algorithms that shuffle you straight from high-school detention into the prison system. The rich part of town will get mirror-glassed limos that breeze through the smart red lights to seamlessly deliver the aristocracy from curb into penthouse.

    These aren’t the “best practices” beloved by software engineers; they’re just the standard urban practices, with software layered over. It’s urban design as the barbarian’s varnish on urbanism.

    If you look at where the money goes (always a good idea), it’s not clear that the “smart city” is really about digitizing cities. Smart cities are a generational civil war within an urban world that’s already digitized.

    It’s a land grab for the command and control systems that were mostly already there.
  7. The “bad part of town” will be full of algorithms that shuffle you straight from high-school detention into the prison system. The rich part of town will get mirror-glassed limos that breeze through the smart red lights to seamlessly deliver the aristocracy from curb into penthouse.

    These aren’t the “best practices” beloved by software engineers; they’re just the standard urban practices, with software layered over. It’s urban design as the barbarian’s varnish on urbanism. People could have it otherwise, technically, if they really wanted it and had the political will, but they don’t. So they won’t get it.
  8. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish this first phase of their work and then disappear completely from view to complete it, we have a brief opportunity to identify and catalogue the processes shaping our drift to a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.

    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, the sense that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.
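
    ‘Curation’ here means, concretely, a ranking function. Facebook’s current algorithm is opaque, but its early, publicly discussed EdgeRank formulation scored each story as affinity × content weight × time decay; the toy sketch below follows that published outline, with invented weights and invented data.

        import time

        # Toy newsfeed curation in the spirit of the early EdgeRank outline:
        # score = affinity (how much you interact with the author)
        #       * weight   (how engaging this content type tends to be)
        #       * decay    (newer stories beat older ones).
        TYPE_WEIGHT = {"photo": 1.5, "video": 1.8, "status": 1.0}

        def score(story, affinity, now):
            age_hours = (now - story["created"]) / 3600
            return affinity[story["author"]] * TYPE_WEIGHT[story["type"]] / (1 + age_hours)

        now = time.time()
        affinity = {"close_friend": 3.0, "acquaintance": 0.5}  # invented values
        stories = [
            {"author": "acquaintance", "type": "video",  "created": now - 600},
            {"author": "close_friend", "type": "status", "created": now - 7200},
            {"author": "close_friend", "type": "photo",  "created": now - 3600},
        ]

        # The "curated" feed is simply the stories sorted by descending score.
        for s in sorted(stories, key=lambda s: score(s, affinity, now), reverse=True):
            print(s["author"], s["type"], round(score(s, affinity, now), 2))

    Stories from authors you rarely engage with simply sink below the fold; nothing needs to be visibly deleted.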

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
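
    The off-site half of that surveillance is mundane to build, which is rather the point. Below is a minimal sketch of a third-party tracking ‘pixel’ in Flask (the endpoint, cookie name and logging are hypothetical, for illustration only): any page that embeds the image tags the visitor’s browser with a persistent identifier, and the tracker learns which page was being read from the Referer header.

        import base64, uuid
        from flask import Flask, request, make_response

        app = Flask(__name__)
        PIXEL = base64.b64decode(  # a 1x1 transparent GIF
            "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

        @app.route("/px.gif")
        def pixel():
            # Reuse the identifier if this browser has been seen before,
            # otherwise mint one; the cookie then rides along with every
            # site that embeds <img src="https://tracker.example/px.gif">.
            uid = request.cookies.get("uid") or uuid.uuid4().hex
            print(uid, request.headers.get("Referer"))  # the cross-site trail
            resp = make_response(PIXEL)
            resp.headers["Content-Type"] = "image/gif"
            resp.set_cookie("uid", uid, max_age=10 * 365 * 24 * 3600)
            return resp

        if __name__ == "__main__":
            app.run()

    Several browsers now block third-party cookies by default, which is why trackers increasingly supplement this pattern with browser fingerprinting.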

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised — but, given that presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
  9. When we look at digital technology and platforms, it’s always instructive to remember that they exist to extract data. The longer you are on the platform, the more you produce and the more can be extracted from you. Polarization drives engagement, and engagement and attention are what keep us on platforms. In the words of Tristan Harris, the former Google Design Ethicist, and one of the earliest SV folks to have the scales fall from his eyes, “What people don’t know about or see about Facebook is that polarization is built into the business model,” Harris told NBC News. “Polarization is profitable.”

    David Golumbia’s description of the scholarly concept of Cyberlibertarianism is useful here (emphasis mine) :

    In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions. In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.

    So the cui bono of digital polarization are the wealthy, the powerful, the people with so much to gain from promoting systems that maintain the status quo, despite the language of freedom, democratization, and community that features so prominently when people like Facebook co-founder Mark Zuckerberg or Twitter co-founder and CEO Jack Dorsey talk about technology. Digital technology in general, and platforms like Facebook, YouTube, and Twitter specifically, exist to promote polarization and maintain the existing concentration of power.

    To the extent that Silicon Valley is the seat of technological power, it’s useful to note that the very ground of what we now call Silicon Valley is built on the foundation of segregating black and white workers. Richard Rothstein’s The Color of Law talks about auto workers in 1950s California:

    So in 1953 the company (Ford) announced it would close its Richmond plant and reestablish operations in a larger facility fifty miles south in Milpitas, a suburb of San Jose, rural at the time. (Milpitas is a part of what we now call Silicon Valley.)

    Because Milpitas had no apartments, and houses in the area were off limits to black workers—though their incomes and economic circumstances were like those of whites on the assembly line—African Americans at Ford had to choose between giving up their good industrial jobs, moving to apartments in a segregated neighborhood of San Jose, or enduring lengthy commutes between North Richmond and Milpitas.
  10. Here’s one example: In 2014, Maine Gov. Paul LePage released data to the public detailing over 3,000 transactions from welfare recipients using EBT cards in the state. (EBT cards are like state-issued debit cards, and can be used to disburse benefits like food stamps.)

    LePage created a list of every time this money had been used in a strip club, liquor store, or bar, and used it to push his political agenda of limiting access to state benefits. LePage’s list represents a tiny fraction of overall EBT withdrawals, but it effectively reinforced negative stereotypes and narratives about who relies on welfare benefits and why.

    I spoke with Eubanks recently about her new book, and why she believes automated technologies are being used to rig the welfare system against the people who need it the most.

    A lightly edited transcript of our conversation follows.
    Sean Illing

    What’s the thesis of your book?
    Virginia Eubanks

    There’s a collision of political forces and technical innovations that are devastating poor and working-class families in America.
