24 results tagged "discrimination"
2018/11/15: At the beginning of October, Amazon was quietly issued a patent that would allow its virtual assistant Alexa to decipher a user’s physical characteristics and emotional state based on their voice. Characteristics, or “voice features,” like language accent, ethnic origin, emotion, gender, age, and background noise would be immediately extracted and tagged to the user’s data file to help deliver more targeted advertising.
The algorithm would also consider a customer’s physical location — based on their IP address, primary shipping address, and browser settings — to help determine their accent. Should Amazon’s patent become a reality, or if accent detection is already possible, it would introduce questions of surveillance and privacy violations, as well as possible discriminatory advertising, experts said.
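A rough sketch of the data flow the patent describes might look like the following (the field names, types, and inference step are assumptions for illustration, not Amazon's implementation): the derived voice features and the location signals are simply written onto the user's ad-targeting record.

```python
# Minimal sketch (not Amazon's implementation) of tagging derived "voice
# features" plus location signals onto a user profile for ad targeting.
# All field names and the disambiguation step are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class VoiceFeatures:
    accent: str            # e.g. output of an acoustic classifier
    emotion: str
    gender: str
    age_range: str
    background_noise: str


@dataclass
class UserProfile:
    user_id: str
    ip_country: str        # from IP geolocation
    shipping_country: str  # from primary shipping address
    browser_locale: str    # from browser settings

    tags: dict = field(default_factory=dict)

    def tag_voice_features(self, features: VoiceFeatures) -> None:
        # Location signals help disambiguate the accent guess, then every
        # derived characteristic is stored on the user's data file.
        self.tags["accent"] = f"{features.accent} ({self.ip_country}/{self.browser_locale})"
        self.tags.update(
            emotion=features.emotion,
            gender=features.gender,
            age_range=features.age_range,
            background_noise=features.background_noise,
        )


profile = UserProfile("u123", "US", "US", "en-GB")
profile.tag_voice_features(VoiceFeatures("British", "calm", "female", "30-40", "tv"))
print(profile.tags)  # the derived characteristics an advertiser could target
```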
Like facial recognition, voice analysis underlines how existing laws and privacy safeguards simply aren’t capable of protecting users from new categories of data collection — or government spying, for that matter. Unlike facial recognition, voice analysis relies not on cameras in public spaces, but microphones inside smart speakers in our homes. It also raises its own thorny issues around advertising that targets or excludes certain groups of people based on derived characteristics like nationality, native language, and so on.
If voice-based accent detection can determine a person's ethnic background, it opens up a new category of information that is incredibly interesting to the government.
The Foreign Intelligence Surveillance Act, or FISA, makes it possible for the government to covertly demand such data.
2018/10/10: There is a "machine learning is hard" angle to this: while the flawed outcomes from the flawed training data were totally predictable, the system's self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing "women's" -- it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.
Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
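To make the proxy mechanism concrete, here is a toy sketch (not Amazon's 500 models; the resumes, labels, and model choice are invented) of how a text classifier trained on past hiring decisions ends up penalizing a term like "women's" that nobody told it to look at, simply because the term rarely appeared on previously hired candidates' resumes.

```python
# Toy illustration: a resume classifier fit on historically skewed hiring
# decisions learns a negative weight for "women" on its own.
# Data and model are invented; this is not Amazon's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed backend migration, captured performance metrics",   # hired
    "executed deployment pipeline, led infrastructure team",      # hired
    "captured requirements, executed data warehouse rollout",     # hired
    "women's chess club captain, built data warehouse rollout",   # rejected
    "women's coding society lead, built deployment pipeline",     # rejected
    "organized hackathon, built performance dashboards",          # rejected
]
hired = [1, 1, 1, 0, 0, 0]  # labels reproduce the historical skew

vec = TfidfVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Terms common on hired resumes ("executed", "captured") get positive weight;
# terms appearing only on rejected ones ("women") get negative weight.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
for term in ("executed", "captured", "women"):
    print(term, round(weights[term], 3))
```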
2018/10/10: Can the system scale? ("To scale" means to change the level of scale, that is, to be extended from one particular domain to an entire field, or to any field at all.) For example, can my school test scores be used as a variable to judge my reliability when I apply for a mortgage? It is precisely scalability that makes ADM a terrible weapon. As long as a negative evaluation, right or wrong, stays within a narrow domain, the damage is limited; but if it "scales" to a broader context, the damage can be terrible. Try being classified by the system as a "bad payer" and still leading a normal life: this already happens today, since missing the installments on a single debt is enough to be labelled one. And what would happen if we built a model that identifies a likely "bad payer" by relating that variable to other personal characteristics detected statistically, until a correlation emerged between "bad payers" and Black people or southern Italians? Anyone who thinks this is futurology or Minority Report-style science fiction does not know the first Invalsi anti-cheating systems, which applied an automatic correction to test results based on students' region of origin.
Cathy O'Neil's book is not an invitation to renounce the descriptive and modelling power of mathematics, but to recognize its enormous power and its nefarious, if not outright fraudulent, uses, so that we can then demand that it be used correctly and legitimately.
The segregation sim is based on the work of Nobel Prize-winning game theorist Thomas Schelling, specifically his 1971 paper, Dynamic Models of Segregation. We built on top of this and showed how a small demand for diversity can desegregate a neighborhood. In other words, we gave his model a happy ending.
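A minimal sketch of those dynamics, with parameters chosen purely for illustration (grid size, thresholds, and the movement rule are our assumptions, not values from Schelling's paper or from the sim):

```python
# Schelling-style segregation on a grid, with an optional "demand a little
# diversity" tweak. Parameters are illustrative, not from the original paper.
import random

SIZE = 20            # 20x20 toroidal grid
EMPTY = 0.1          # fraction of empty cells
MIN_SAME = 0.33      # unhappy if fewer than 1/3 of neighbors match me
MAX_SAME = 1.00      # set to e.g. 0.90 to also demand a little diversity


def make_grid():
    cells = [None] * int(SIZE * SIZE * EMPTY)
    rest = SIZE * SIZE - len(cells)
    cells += ["triangle"] * (rest // 2) + ["square"] * (rest - rest // 2)
    random.shuffle(cells)
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]


def unhappy(grid, r, c):
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return False
    same = sum(n == me for n in occupied) / len(occupied)
    return same < MIN_SAME or same > MAX_SAME


def step(grid):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    random.shuffle(empties)
    moved = 0
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None  # move to an empty cell
        moved += 1
    return moved


grid = make_grid()
for _ in range(50):
    if step(grid) == 0:
        break
```

With MAX_SAME = 1.0 the grid drifts toward same-shape clusters even though no agent wants anything more than a third of its neighbors to be like itself; lowering MAX_SAME, i.e. making agents slightly unhappy in completely uniform surroundings, is enough to keep the grid mixed.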
Schelling's model gets the general gist of it, but of course, real life is more nuanced. You might enjoy looking at real-world data, such as W.A.V. Clark's 1991 paper, A Test of the Schelling Segregation Model.
There are other mathematical models of institutionalized bias out there! Male-Female Differences: A Computer Simulation shows how a small gender bias compounds as you move up the corporate ladder. The Petrie Multiplier shows why an attack on sexism in tech is not an attack on men.
Today's Big Moral Message™ is that demanding a bit of diversity in your spaces makes a huge difference overall.
WRAPPING UP:
1. Small individual bias → Large collective bias.
When someone says a culture is shapist, they're not saying the individuals in it are shapist. They're not attacking you personally.
2. The past haunts the present.
Your bedroom floor doesn't stop being dirty just coz you stopped dropping food all over the carpet. Creating equality is like staying clean: it takes work. And it's always a work in progress.
3. Demand diversity near you.
If small biases created the mess we're in, small anti-biases might fix it. Look around you. Your friends, your colleagues, that conference you're attending. If you're all triangles, you're missing out on some amazing squares in your life - that's unfair to everyone. Reach out, beyond your immediate neighbors.
2018/09/27: Insurance works because we are ignorant of our individual fates. It is the fact that any of us might turn out to be a bad risk that makes it sensible for everyone to insure against that remote chance. The pooling of individual risks that can only be known in aggregate underlies the whole system. But there is a subtle mismatch of aims between insurers and their customers. The customers want to avoid the consequences of misfortune; the insurers want customers who avoid misfortune. The two aims are reconciled because both sides are operating behind a veil of ignorance.
Insurers have an interest in knowing as much as possible about their customers. Customers have an interest in insurers underestimating their real risk. But both sides will benefit if ways are found to reduce the risk of the misfortune insured against. The balance between knowledge and ignorance of risk has traditionally been struck at the level of statistical knowledge about large groups.
But statistically significant groups are getting smaller in the age of big data.
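A toy calculation (the numbers are invented) makes the mechanism visible: priced over one large pool, everyone pays the average expected loss, but as data lets the insurer split the pool into ever-smaller groups, premiums drift toward each person's individual risk and the veil of ignorance disappears.

```python
# Invented numbers: how finer risk segmentation moves premiums away from the
# pooled average and toward each customer's own expected loss.
import statistics

claim_prob = [0.01, 0.01, 0.02, 0.02, 0.03, 0.05, 0.08, 0.10, 0.15, 0.20]
claim_size = 10_000
expected_loss = [p * claim_size for p in claim_prob]

# One big pool: everyone pays the average expected loss.
pooled_premium = statistics.mean(expected_loss)


def segmented_premiums(group_size):
    # Split the pool into groups of the given size, price each on its own mean.
    premiums = []
    for i in range(0, len(expected_loss), group_size):
        group = expected_loss[i:i + group_size]
        premiums += [statistics.mean(group)] * len(group)
    return premiums


print("pooled:", pooled_premium)              # everyone pays the same
print("groups of 5:", segmented_premiums(5))  # low- and high-risk halves diverge
print("groups of 1:", segmented_premiums(1))  # premium equals individual risk
```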
2018/09/19: One of the oldest and largest North American life insurers will stop underwriting traditional life insurance and instead sell only interactive policies that track fitness and health data through wearable devices and smartphones.
Privacy and consumer advocates have raised questions about whether insurers may eventually use data to select the most profitable customers, while hiking rates for those who do not participate. The insurance industry has said that it is heavily regulated and must justify, in actuarial terms, its reasons for any rate increases or policy changes.
2018/09/19: In the past year, 15 employers, including Uber, advertised jobs on Facebook exclusively to one gender.
In a statement, Facebook spokesman Joe Osborne said, “There is no place for discrimination on Facebook; it’s strictly prohibited in our policies. We look forward to defending our practices once we have an opportunity to review the complaint.”
The company has previously said that giving advertisers the ability to target employment ads by sex and age does not facilitate discrimination.
In response to other suits, Facebook has argued that it is not liable for the content its users—in this case, advertisers—post on its platform.
Another blow for Facebook, which has received a complaint from the U.S. Department of Housing and Urban Development over discriminatory practices on its platform related to housing ads.
A question of privacy (once again): "Facebook allows its users to control ads based on sex, race, religion, nationality, any disabilities and zip code," the department charges in a statement, according to which the social network provides tools for identifying buyers and renters.
The claim: the accusation is that certain purchase or rental listings are simply never shown to people from Latin America, Canada, Southeast Asia, China, Honduras or Somalia. The Cambridge Analytica scandal and the fake-news chapter tied to Russiagate have long been damaging the image of the social network, which, now under scrutiny, has begun revising many of its rules on privacy and data sharing.
A new ProPublica investigation uncovered a disturbing fact: Facebook allows advertisers to exclude Black, Hispanic and other so-called "ethnic affinity" groups from seeing ads.
2018/05/39: In April 2018, the “Finnish Non-Discrimination and Equality Tribunal” prohibited “discriminatory use” of artificial intelligence. I am not sure they did the right thing.
If you're under 18, the future is now
Exclusive: former employee alleges that women hired to work as preschool teachers in the company's childcare center were paid lower salaries than men with fewer qualifications doing the same job
This piece was co-written by Hugo Guyader and Julian Agyeman. Guyader is a PhD candidate at Linköping University, Sweden, where he focuses his research on collaborative consumption and green services. He is also a OuiShare Connector. Agyeman is a Professor of Urban and Environmental Policy and Planning at Tufts University in Medford, Massachusetts. He is co-author of "Sharing Cities" (MIT Press 2015) and a member of Shareable's Advisory Board.
2014 review of recent studies and reports on online discrimination and potential downsides of digital marketplaces featuring user profiles and data.
In a wide-reaching decision, the U.S. Supreme Court has declared that same-sex "marriage" is a constitutional right and that states must recognize same-sex unions.
I wrote a post a couple of days ago in which I asked the question Are Gay Marriage Activists Too Needy to Take Yes for an Answer? The combox response was immediate and vociferous. Before I could say "wedding cake," the discussion had abandoned the matter of political exigencies, as well as the weighty Constitutional
Commentators in France and elsewhere have taken the recent terrorist attacks in Paris as an occasion to reflect more broadly about Muslims in France. Many read the attacks as a sign of French Muslims
People have a mental model of shopping that is based on experiences from brick-and-mortar stores. We intuitively understand how this process works: all available products are displayed around the store
Companies should disclose the information they sell about us.