mfioretti: algorithms*

Bookmarks on this page are managed by an admin user.

93 bookmark(s), sorted by Date ↓

  1. Here’s one example: In 2014, Maine Gov. Paul LePage released data to the public detailing over 3,000 transactions from welfare recipients using EBT cards in the state. (EBT cards are like state-issued debit cards, and can be used to disburse benefits like food stamps.)

    LePage created a list of every time this money had been used in a strip club, liquor store, or bar, and used it to push his political agenda of limiting access to state benefits. LePage’s list represents a tiny fraction of overall EBT withdrawals, but it effectively reinforced negative stereotypes and narratives about who relies on welfare benefits and why.

    I spoke with Eubanks recently about her new book, and why she believes automated technologies are being used to rig the welfare system against the people who need it the most.

    A lightly edited transcript of our conversation follows.
    Sean Illing

    What’s the thesis of your book?
    Virginia Eubanks

    There’s a collision of political forces and technical innovations that are devastating poor and working-class families in America.
    https://www.vox.com/2018/2/6/16874782/welfare-big-data-technology-poverty
  2. In 2015 a white paper estimated that apartment “sharing” in Los Angeles removed eleven apartments a day from the traditional rental market. Another study argued that Airbnb removes roughly 20% of rental apartments in some areas of Manhattan and Brooklyn in New York, and up to 28% in the East Village, even though renting them out for more than 30 days a year is illegal. In the twenty most central neighborhoods of the American metropolis, Airbnb is estimated to have taken at least 10% of available homes off the market.

    The loss of homes available on the market caused by Airbnb’s disruption hits black residents six times harder. The neighborhood with the highest racial discrimination is Stuyvesant Heights, in the heart of Central Brooklyn, where bookings made by white hosts are 1,012 times higher than those made by black hosts. The economic inequality is estimated at 857% in terms of the total income accumulated by white hosts.
    https://www.che-fare.com/leffetto-dis...iccarelli-gentrificare-e-turistizzare
  3. There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

    Company insiders tell me the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper that sketches the algorithm’s deep neural networks, crunching a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
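
    The design that paper sketches is a two-stage pipeline: a candidate-generation network first narrows billions of videos to a few hundred plausible clips, and a ranking network then orders those candidates by predicted engagement. A minimal sketch of that shape, in which every field name and scoring rule is an invented placeholder rather than YouTube’s actual model:

    ```python
    # Illustrative two-stage recommender, following the candidate-generation /
    # ranking split described in the paper. All field names and scoring rules
    # below are invented placeholders, not YouTube's actual model.

    def generate_candidates(user_history, corpus, k=500):
        """Stage 1: cheaply narrow a huge corpus to a few hundred candidates.
        (In production this is a learned retrieval model, not a linear scan.)"""
        watched_topics = {t for video in user_history for t in video["topics"]}

        def topic_overlap(video):
            return len(watched_topics & set(video["topics"]))

        return sorted(corpus, key=topic_overlap, reverse=True)[:k]

    def rank_candidates(candidates, n=20):
        """Stage 2: order candidates by predicted engagement (expected watch
        time), which is what keeps a person hooked -- not truth or balance."""
        def engagement(video):
            return video["expected_watch_seconds"] * video["click_probability"]

        return sorted(candidates, key=engagement, reverse=True)[:n]

    def up_next(user_history, corpus):
        """Return the 20 'up next' suggestions the article describes."""
        return rank_candidates(generate_candidates(user_history, corpus))
    ```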

    “The algorithm does not appear to be optimising for what is truthful, or balanced, or healthy for democracy” – Guillaume Chaslot, an ex-Google engineer

    Lately, it has also become one of the most controversial. The algorithm has been found to be promoting conspiracy theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

    Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenues from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”
    https://www.theguardian.com/technolog...how-youtubes-algorithm-distorts-truth
  4. By using algorithms to "triage" the neediness of poor people, system designers can ensure that the people harmed by the system are the least sympathetic and least likely to provoke outrage among those with political clout.

    Algorithmically defined guilt is also a problem because of the real problems agencies are trying to solve. In Allegheny, your child's at-risk score is largely defined by your use of social services to deal with financial crises, health crises, addiction and mental health problems. If you deal with these problems privately -- by borrowing from relatives or getting private addiction treatment -- you aren't entered into the system, which means that if these factors are indeed predictors of risk to children, then the children of rich people are being systematically denied interventions by the same system that is over-policing poor children.
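
    A toy illustration of the bias just described, assuming a score computed purely from public-service records (every field name here is hypothetical): a family that weathers the same crises privately generates no records at all, so it can neither be scored high nor learned from.

    ```python
    # Hypothetical sketch of the selection bias described above: the score
    # only "sees" crises that were handled through public services.

    PUBLIC_RECORD_FIELDS = [
        "welfare_applications",
        "public_addiction_treatment",
        "public_mental_health_visits",
    ]

    def at_risk_score(family_record):
        """Toy score: counts interactions with public services. A family that
        borrowed from relatives or paid for private treatment has an empty
        record and scores zero -- it is invisible to the system."""
        return sum(family_record.get(field, 0) for field in PUBLIC_RECORD_FIELDS)

    poor_family = {"welfare_applications": 2, "public_addiction_treatment": 1}
    rich_family = {}  # same underlying crises, handled privately -> no data

    assert at_risk_score(poor_family) == 3
    assert at_risk_score(rich_family) == 0
    ```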
    https://boingboing.net/2018/01/31/empiricized-injustice.html
  5. CEO Mark Zuckerberg wrote on Facebook today, “I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions.” VP of News Feed Adam Mosseri tells TechCrunch “I expect that the amount of distribution for publishers will go down because a lot of publisher content is just passively consumed and not talked about. Overall time on Facebook will decrease, but we think this is the right thing to do.”

    The winners in this change will be users and their sense of community, since they should find Facebook more rewarding and less of a black hole of wasted time viewing mindless video clips and guilty-pleasure articles. And long-term, it should preserve Facebook’s business and ensure it still has a platform to provide referral traffic for news publishers and marketers, albeit less than before.

    The biggest losers will be publishers who’ve shifted resources to invest in eye-catching pre-recorded social videos, because Mosseri says “video is such a passive experience”. He admits that he expects publishers to react with “a certain amount of scrutiny and anxiety”, but didn’t have many concrete answers about how publishers should scramble to react beyond “experimenting ... and seeing ... what content gets more comments, more likes, more reshares.”
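
    Mechanically, the change Mosseri describes is a reweighting of the feed-ranking score away from predicted passive consumption and toward predicted interaction. Facebook’s real signals and weights are not public; the numbers below are purely illustrative:

    ```python
    # Purely illustrative feed-scoring sketch; Facebook's real signals and
    # weights are not public. After the change, predicted "meaningful
    # interactions" outweigh predicted passive viewing.

    OLD_WEIGHTS = {"watch_time": 1.0, "comments": 1.0, "reshares": 1.0}
    NEW_WEIGHTS = {"watch_time": 0.2, "comments": 4.0, "reshares": 3.0}

    def feed_score(predicted_signals, weights):
        return sum(w * predicted_signals.get(k, 0.0) for k, w in weights.items())

    passive_video = {"watch_time": 9.0, "comments": 0.1}
    discussion_post = {"watch_time": 1.0, "comments": 2.0, "reshares": 0.5}

    # The passive video wins under the old weighting and loses under the new one.
    assert feed_score(passive_video, OLD_WEIGHTS) > feed_score(discussion_post, OLD_WEIGHTS)
    assert feed_score(passive_video, NEW_WEIGHTS) < feed_score(discussion_post, NEW_WEIGHTS)
    ```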
    https://techcrunch.com/2018/01/11/facebook-time-well-spent
  6. Google Photos users upload photos snapped under all kinds of imperfect conditions. Given the number of images in the massive database, a tiny chance of mistaking one type of great ape for another can become a near certainty.

    Google parent Alphabet and the wider tech industry face versions of this problem with even higher stakes, such as with self-driving cars. Together with colleague Baishakhi Ray, an expert in software reliability, Román is probing ways to constrain the possible behaviors of vision systems used in scenarios like self-driving cars. Ray says there has been progress, but it is still unclear how well the limitations of such systems can be managed. “We still don’t know in a very concrete way what these machine learning models are learning,” she says.
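
    The first paragraph’s scale argument is simple probability: a per-photo error rate that rounds to zero still makes at least one failure a near certainty across billions of uploads. A back-of-the-envelope check, with both numbers invented for illustration:

    ```python
    # Back-of-the-envelope check of "a tiny chance ... can become a near
    # certainty". Both numbers below are invented for illustration.

    p_error = 1e-7            # chance one photo is misclassified (assumed)
    n_photos = 1_000_000_000  # photos in a large library (assumed)

    p_at_least_one_error = 1 - (1 - p_error) ** n_photos
    print(f"{p_at_least_one_error:.6f}")  # prints 1.000000: effectively certain
    ```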
    https://www.wired.com/story/when-it-c...-gorillas-google-photos-remains-blind
  7. This article analyses Google’s two main advertising systems, AdWords and AdSense, and proposes that these financial models have significant effects on online discourse. In discussing AdWords, it details some of the tensions between the local and the global that develop when tracing flows of information and capital, specifically highlighting Google’s impact on the decline of online language diversity. In outlining AdSense, it demonstrates how Google’s hegemonic control prescribes which parts of the web can be monetised and which remain unprofitable. In particular, drawing on existing studies, evidence is provided that Google’s AdSense programme, along with Google’s relationship with Facebook, incentivised the rise of fake news in the 2016 US presidential election.

    This work builds on existing scholarship to demonstrate that Google’s economic influence has varied and far-reaching effects in a number of contexts and is relevant to scholars in a range of disciplines. The article is therefore intended as a discursive introduction to the topic and does not require specific disciplinary background knowledge. It does not attempt to provide the final word on Google’s relationship to digital capitalism, but rather to demonstrate the profitability of a Post-Fordist perspective, in order to enable a wider engagement with the issues identified.
    https://www.nature.com/articles/s41599-017-0021-4
    by M. Fioretti (2018-01-02)
  8. no serious scholar of modern geopolitics disputes that we are now at war — a new kind of information-based war, but war, nevertheless — with Russia in particular, but in all honesty, with a multitude of nation states and stateless actors bent on destroying western democratic capitalism. They are using our most sophisticated and complex technology platforms to wage this war — and so far, we’re losing. Badly.

    Why? According to sources I’ve talked to both at the big tech companies and in government, each side feels the other is ignorant, arrogant, misguided, and incapable of understanding the other side’s point of view. There’s almost no data sharing, trust, or cooperation between them. We’re stuck in an old model of lobbying, soft power, and the occasional confrontational hearing.

    Not exactly the kind of public-private partnership we need to win a war, much less a peace.

    Am I arguing that the government should take over Google, Amazon, Facebook, and Apple so as to beat back Russian info-ops? No, of course not. But our current response to Russian aggression illustrates the lack of partnership and co-ordination between government and our most valuable private sector companies. And I am hoping to raise an alarm: when the private sector has markedly better information, processing power, and personnel than the public sector, the former will only strengthen while the latter weakens. We’re seeing it play out in our current politics, and if you believe in the American idea, you should be extremely concerned.
    https://shift.newco.co/data-power-and-war-465933dcb372
  9. Whether or not Willie Lynch is “Midnight” remains to be seen. But many experts see the facial recognition technology used against him as flawed, especially against black individuals. Moreover, the way the Jacksonville sheriff’s office used the technology – as the basis for identifying and arresting Lynch, not as one component of a case supported by firmer evidence – makes his conviction even more questionable.

    The methods used to convict Lynch weren’t made clear during his court case. The Jacksonville sheriff’s office initially didn’t even disclose that they had used facial recognition software. Instead, they claimed to have used a mugshot database to identify Lynch on the basis of a single photo that the detectives had taken the night of the exchange.
    An ‘imperfect biometric’

    The lack of answers the Jacksonville sheriff’s office has provided in Lynch’s case is representative of the problems that facial recognition poses across the country. “It’s considered an imperfect biometric,” said Garvie, who in 2016 co-authored a study on facial recognition software, published by the Center on Privacy and Technology at Georgetown Law, called The Perpetual Line-Up. “There’s no consensus in the scientific community that it provides a positive identification of somebody.”

    The software, which has taken on an expanding role among law enforcement agencies in the US over the last several years, has been mired in controversy because of its effect on people of color. Experts fear that the new technology may actually be hurting the communities the police claim they are trying to protect.

    “If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” House oversight committee ranking member Elijah Cummings said in a congressional hearing on law enforcement’s use of facial recognition software in March 2017. “That’s a hell of a combination.”

    Cummings was referring to studies such as Garvie’s. That report found that black individuals, as with so many aspects of the justice system, were the most likely to be scrutinized by facial recognition software. It also suggested that the software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI’s own research. This combination, which is making Lynch’s and other black Americans’ lives excruciatingly difficult, is born of another race issue that has become a subject of national discourse: the lack of diversity in the technology sector.
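
    The “hell of a combination” Cummings describes compounds multiplicatively: if one group is both searched more often and matched less accurately, expected wrongful identifications scale with the product of the two disparities. A worked toy calculation, with every number invented:

    ```python
    # Toy calculation of how exposure and error-rate disparities compound.
    # Every number below is invented for illustration.

    searches_per_1000_people = {"group_a": 30, "group_b": 10}  # 3x exposure gap
    false_match_rate = {"group_a": 0.02, "group_b": 0.01}      # 2x error gap

    for group in searches_per_1000_people:
        expected_wrongful_matches = (searches_per_1000_people[group]
                                     * false_match_rate[group])
        print(group, expected_wrongful_matches)

    # group_a 0.6, group_b 0.1: a 6x disparity in expected wrongful matches,
    # from a 3x exposure gap multiplied by a 2x error-rate gap.
    ```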
    https://www.theguardian.com/technolog...tion-white-coders-black-people-police
  10. Earlier this month, writer James Bridle published an in-depth look at the underbelly of creepy, violent content targeted at kids on YouTube – from knock-off Peppa Pig cartoons, such as one where a trip to the dentist morphs into a graphic torture scene, to live-action “gross-out” videos, which show real kids vomiting and in pain.

    These videos are being produced and added to YouTube by the thousand, then tagged with what Bridle calls “keyword salad” – long lists of popular search terms packed into their titles. These keywords are designed to game or manipulate the algorithm that sorts, ranks and selects content for users to see. And thanks to a business model aimed at maximising views (and therefore ad revenue), these videos are being auto-played and promoted to kids based on their “similarity” – at least in terms of keywords used – to content that the kids have already seen. That means a child might start out watching a normal Peppa Pig episode on the official channel, finish it, then be automatically immersed in a dark, violent and unauthorised episode – without their parent realising it.
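
    The “keyword salad” works because keyword overlap is a cheap proxy for similarity: a title stuffed with every popular search term overlaps with almost anything a child has just watched. A minimal sketch of the failure mode, using a stand-in similarity measure rather than YouTube’s actual one:

    ```python
    # Minimal sketch of why keyword stuffing games keyword-based matching.
    # Counting shared title words is a stand-in for the real ranking signal.

    def shared_keywords(title_a, title_b):
        return len(set(title_a.lower().split()) & set(title_b.lower().split()))

    just_watched = "Peppa Pig Official Episode Dentist"
    legitimate_next = "Peppa Pig Official Episode Muddy Puddles"
    keyword_salad = ("Peppa Pig Dentist Doctor Episode Official Learn Colors "
                     "Superheroes Surprise Eggs Finger Family")

    print(shared_keywords(just_watched, legitimate_next))  # 4 shared terms
    print(shared_keywords(just_watched, keyword_salad))    # 5 -- the salad wins
    ```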

    YouTube’s response to the problem has been to hand responsibility to its users, asking them to flag videos as inappropriate. From there, the videos go to a review team that YouTube says comprises thousands of people working 24 hours a day to review content. If the content is found to be inappropriate for children, it will be age-restricted and will not appear in the YouTube Kids app. It will still appear on YouTube proper, however – a platform where users must officially be at least 13 years old, but which countless kids use in reality (just think about how often antsy kids are handed a phone or tablet to keep them occupied in a public space).

    Like Facebook’s scheme, this approach has several flaws: since it’s trying to ferret out inappropriate videos from kids’ content, it’s likely that most of the people who will encounter these videos are kids themselves. I don’t expect a lot of six-year-olds to become aggressive content moderators any time soon. And if the content is flagged, it still needs to be reviewed by humans, which, as YouTube has already acknowledged, takes “round the clock” monitoring.

    When we talk about this kind of challenge, the tech companies’ response is often that it’s simply the inevitability of scale – there’s no way to serve billions of users endless streams of engaging content without getting it wrong or allowing abuse to slip by some of the time. But of course, these companies don’t have to do any of this. Auto-playing an endless stream of algorithmically selected videos to kids isn’t some sort of mandate. The internet didn’t have to become a smorgasbord of “suggested content”. It’s a choice that YouTube made, because ad views are ad views. You’ve got to break a few eggs to make an omelette, and you’ve got to traumatise a few kids to build a global behemoth worth $600bn.

    And that’s the issue: in their unblinking pursuit of growth over the past decade, these companies have built their platforms around features that aren’t just vulnerable to abuse, but literally optimised for it. Take a system that’s easy to game, profitable to misuse, intertwined with our vulnerable people and our most intimate moments, and operating at a scale that’s impossible to control or even monitor, and this is what you get.

    The question now is, when will we force tech companies to reckon with what they’ve wrought? We’ve long decided that we won’t let companies sell cigarettes to children or put asbestos into their building materials. If we want, we can decide that there are limits to what tech can do to “engage” us, too, rather than watch these platforms spin further and further away from the utopian dreams on which they were sold to us.
    https://www.theguardian.com/technolog...ra-wachter-boettcher?CMP=share_btn_tw


Page 2 of 10 – Online Bookmarks of M. Fioretti: Tags: algorithms

About - Propulsed by SemanticScuttle