mfioretti: algorithms* + big data*

Bookmarks on this page are managed by an admin user.

19 bookmark(s), sorted by date (descending)

  1. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounce off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.


    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
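    To illustrate the mechanism the article describes, here is a minimal sketch in Python of how such a sensor network might count unique devices from wifi probe signals. The sensor names and the salting scheme are hypothetical; the sketch only assumes, as the article states, that each probe carries the phone's MAC address.

```python
import hashlib
from collections import defaultdict

# Hypothetical sketch: tallying repeat visits from wifi probe requests.
# The MAC address is salted and hashed so the stored token is not the
# raw hardware address -- though a stable token still lets the operator
# follow the same device over time, which is the privacy complaint.
DAILY_SALT = "2018-03-01"  # rotating the salt limits linkability across days

def pseudonymize(mac: str, salt: str = DAILY_SALT) -> str:
    """Return a salted SHA-256 token for a MAC address."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()[:16]

visits = defaultdict(set)  # sensor id -> set of device tokens seen

def register_probe(sensor_id: str, mac: str) -> None:
    visits[sensor_id].add(pseudonymize(mac))

# Two probes from the same phone at one sensor count as one visitor.
register_probe("oude-markt", "AA:BB:CC:DD:EE:FF")
register_probe("oude-markt", "aa:bb:cc:dd:ee:ff")
register_probe("stationsplein", "11:22:33:44:55:66")

print(len(visits["oude-markt"]))     # 1 unique device at this sensor
print(len(visits["stationsplein"]))  # 1 unique device at this sensor
```

    Rotating the salt (for example daily) is a common mitigation: tokens stay comparable within one day but cannot be linked across days. Even so, routes and preferred spots remain reconstructible within each window.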

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
    https://www.theguardian.com/cities/20...-privacy-eindhoven-utrecht?CMP=twt_gu
    Voting 0
  2. The “bad part of town” will be full of algorithms that shuffle you straight from high-school detention into the prison system. The rich part of town will get mirror-glassed limos that breeze through the smart red lights to seamlessly deliver the aristocracy from curb into penthouse.

    These aren’t the “best practices” beloved by software engineers; they’re just the standard urban practices, with software layered over. It’s urban design as the barbarian’s varnish on urbanism. People could have it otherwise, technically, if they really wanted it and had the political will, but they don’t. So they won’t get it.
    https://www.theatlantic.com/technolog.../stupid-cities/553052/?utm_source=twb
    Voting 0
  3. Here’s one example: In 2014, Maine Gov. Paul LePage released data to the public detailing over 3,000 transactions from welfare recipients using EBT cards in the state. (EBT cards are like state-issued debit cards, and can be used to disburse benefits like food stamps.)

    LePage created a list of every time this money had been used in a strip club, liquor store, or bar, and used it to push his political agenda of limiting access to state benefits. LePage’s list represents a tiny fraction of overall EBT withdrawals, but it effectively reinforced negative stereotypes and narratives about who relies on welfare benefits and why.

    I spoke with Eubanks recently about her new book, and why she believes automated technologies are being used to rig the welfare system against the people who need it the most.

    A lightly edited transcript of our conversation follows.
    Sean Illing

    What’s the thesis of your book?
    Virginia Eubanks

    There’s a collision of political forces and technical innovations that are devastating poor and working-class families in America.
    https://www.vox.com/2018/2/6/16874782/welfare-big-data-technology-poverty
    Voting 0
  4. no serious scholar of modern geopolitics disputes that we are now at war — a new kind of information-based war, but war, nevertheless — with Russia in particular, but in all honesty, with a multitude of nation states and stateless actors bent on destroying western democratic capitalism. They are using our most sophisticated and complex technology platforms to wage this war — and so far, we’re losing. Badly.

    Why? According to sources I’ve talked to both at the big tech companies and in government, each side feels the other is ignorant, arrogant, misguided, and incapable of understanding the other side’s point of view. There’s almost no data sharing, trust, or cooperation between them. We’re stuck in an old model of lobbying, soft power, and the occasional confrontational hearing.

    Not exactly the kind of public-private partnership we need to win a war, much less a peace.

    Am I arguing that the government should take over Google, Amazon, Facebook, and Apple so as to beat back Russian info-ops? No, of course not. But our current response to Russian aggression illustrates the lack of partnership and co-ordination between government and our most valuable private sector companies. And I am hoping to raise an alarm: when the private sector has markedly better information, processing power, and personnel than the public sector, the former will only strengthen while the latter weakens. We’re seeing it play out in our current politics, and if you believe in the American idea, you should be extremely concerned.
    https://shift.newco.co/data-power-and-war-465933dcb372
    Voting 0
  5. Similarly, GOOG in 2014 started reorganizing itself to focus on artificial intelligence only. In January 2014, GOOG bought DeepMind, and in September it shut down Orkut (one of its few social products to have had momentary success in some countries) forever. The Alphabet Inc restructuring was announced in August 2015, but it likely took many months of meetings and bureaucracy. The restructuring was important to focus the web-oriented departments at GOOG on a simple mission. GOOG sees no future in the simple Search market, announcing a migration “From Search to Suggest” (in Eric Schmidt’s own words) and that it is an “AI first company” (in Sundar Pichai’s own words). GOOG is currently slightly behind FB in how fast it is growing its dominance of the web, but given its technical expertise, vast budget, influence and vision, in the long run its AI assets will play a massive role on the internet. They know what they are doing.

    These are no longer the same companies as four years ago. GOOG is no longer an internet company; it’s the knowledge internet company. FB is not an internet company; it’s the social internet company. They used to attempt to compete, and this competition kept the internet market diverse. Today, however, they seem mostly satisfied with their orthogonal dominance of parts of the web, and we are losing diversity of choices. Which leads us to another part of the internet: e-commerce and AMZN.

    AMZN does not focus on making profit.
    https://staltz.com/the-web-began-dying-in-2014-heres-how.html
    Voting 0
  6. Noticed it? “And insurers”.

    What the Italian newspapers of March 2016 are not saying is that IBM’s arrival in the Expo area – a black hole for which a future has proved hard to find for a couple of years – is conditional on handing IBM the health data of the inhabitants of Lombardy, one of the richest regions in Europe. This is the so-called “Protected Health Information”, which includes “healthcare data”, “personal medical records” and “named or anonymised tax information”. The American company is granted “the rights of use for the storage and processing of such data for project purposes, as well as for the use of the anonymised data for purposes beyond those of the project”. In short: the Big Brother of Healthcare is on its way, and it will have all our health data at its disposal.

    In the “confidential” IBM document that Il Fatto Quotidiano was able to see, one reads: “As a precondition for carrying out the Programme and making the investment, IBM (including its parent, subsidiary, affiliated or associated companies, where necessary) expects to be able to have access – in ways yet to be defined – to the processing of the health data of the roughly 61 million Italian citizens (meaning historical, present and future health data) in anonymised and identified form, for specific project areas, including the right to secondary use of said health data for purposes beyond the projects.”

    The whole of Italy’s public healthcare handed over to the expert hands of an American multinational. “By way of example and not exhaustively,” the confidential document continues, “it is considered crucial to have access to patient data, pharmacological data, cancer registry data, genomic data, treatment data, regional or Agenas data, Aifa data on drugs and on active clinical trials, enrolment and demographic data, historical medical diagnoses, reimbursements and usage costs, medical conditions and procedures, outpatient prescriptions, drug treatments with their costs, emergency room visits, hospital discharge records (sdo), appointment information, schedules and attendance, and other health data.” Every breath we take, every heartbeat, every germ, every payment we make for healthcare will flow into IBM’s Watson computers to feed their learning capacity and develop their artificial intelligence. The results achieved along the way, and the algorithms refined thanks to our data, will remain private: IBM will be able to sell them to healthcare companies or to insurers.

    Now the ball has passed to the Lombardy Region, the first in Italy to be involved. Since the constitutional reform that would have stripped the regions of their powers was voted down, Lombardy will have to give its approval. One may doubt the wisdom of effectively gifting one of the great markets of the future, health, to a private company, asking nothing in return and content with the mere promise that it will open a centre on the troubled Expo grounds.

    The data will be “anonymised”, the document promises here and there. But techniques that make anonymous data “reversible”, re-attaching names to it, are steadily being refined. Who knows whether the privacy authority, so sensitive in other battles, will find time to weigh in on this project too. In any case, even anonymised, Lombardy’s health data is an extremely precious asset: why hand it to a private enterprise, sidelining the public health system entirely? And if it really must go to private hands, why without a tender? Why to IBM-Watson and not, say, to Google-DeepMind or Amazon? And why, finally, grant it for free? The planned investment of $150m in a private centre is nothing compared with the value of the immense trove of health data promised, which today sells on the deep web for $10-15 per record (50 times more than credit card numbers).
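    The re-identification risk the article raises can be made concrete with a small, entirely hypothetical sketch: stripping names is not enough when quasi-identifiers (postcode, birth date, sex) survive in the “anonymised” records and can be joined against a public auxiliary dataset, as Latanya Sweeney’s work famously showed. All data and field names below are invented for illustration.

```python
# Hypothetical linkage attack on "anonymised" health records.
# Names are gone, but postcode + birth date + sex often identify a
# person uniquely, so a join against any public register that carries
# the same fields re-attaches names to diagnoses.
anonymised_records = [
    {"postcode": "20121", "birth": "1961-07-31", "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "20121", "birth": "1985-02-10", "sex": "M", "diagnosis": "asthma"},
]
public_register = [  # e.g. an electoral roll (invented entry)
    {"name": "M. Rossi", "postcode": "20121", "birth": "1961-07-31", "sex": "F"},
]

def reidentify(records, register):
    """Link health records to names via exact match on quasi-identifiers."""
    hits = []
    for r in records:
        for p in register:
            if (r["postcode"], r["birth"], r["sex"]) == (p["postcode"], p["birth"], p["sex"]):
                hits.append((p["name"], r["diagnosis"]))
    return hits

print(reidentify(anonymised_records, public_register))
# [('M. Rossi', 'diabetes')]
```

    This is why merely removing direct identifiers is not considered robust anonymisation: the defence has to reason about what auxiliary datasets an attacker could join against.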
    http://www.giannibarbacetto.it/2017/0...cambio-della-nuova-sede-sullarea-expo
    Voting 0
  7. The reason I bring this up: first of all, it’s a great way of understanding how machine learning algorithms can give us stuff we absolutely don’t want, even though they fundamentally lack prior agendas. Happens all the time, in ways similar to the Donald.

    Second, some people actually think there will soon be algorithms that control us, operating “through sound decisions of pure rationality” and that we will no longer have use for politicians at all.

    And look, I can understand why people are sick of politicians, and would love them to be replaced with rational decision-making robots. But that scenario means one of three things:

    1. Controlling robots simply get trained by the people’s will and do whatever people want at the moment. Maybe that looks like people voting with their phones or via the chips in their heads. This is akin to direct democracy, and the problems are varied – I was in Occupy after all – but in particular it means that people are constantly weighing in on things they don’t actually understand. That leaves them vulnerable to misinformation and propaganda.

    2. Controlling robots ignore people’s will and just follow their inner agendas. Then the question becomes, who sets that agenda? And how does it change as the world and as culture changes? Imagine if we were controlled by someone from 1000 years ago with the social mores from that time. Someone’s gonna be in charge of “fixing” things.

    3. Finally, it’s possible that the controlling robot would act within a political framework to be somewhat but not completely influenced by a democratic process. Something like our current president. But then getting a robot in charge would be a lot like voting for a president. Some people would agree with it, some wouldn’t. Maybe every four years we’d have another vote, and the candidates would be both people and robots, and sometimes a robot would win, sometimes a person. I’m not saying it’s impossible, but it’s not utopian. There’s no such thing as pure rationality in politics, it’s much more about picking sides and appealing to some people’s desires while ignoring others.
    https://mathbabe.org/2016/08/11/donal...e-a-biased-machine-learning-algorithm
    Voting 0
  8. Quit fracking our lives to extract data that’s none of your business and that your machines misinterpret. — New Clues, #58

    That’s the blunt advice David Weinberger and I give to marketers who still make it hard to talk, sixteen years after many of them started failing to get what we meant by Markets are Conversations.

    In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

    Even if our own intelligence is not yet artificialized, what’s feeding it surely is.

    In The Filter Bubble, after explaining Google’s and Facebook’s very different approaches to personalized “experience” filtration, and the assumptions behind both, Eli Pariser says both companies’ approximations are based on “a bad theory of you,” and come up with “pretty poor representations of who we are, in part because there is no one set of data that describes who we are.” He says the ideal of perfect personalization dumps us into what animators, puppetry and robotics engineers call the uncanny valley: a “place where something is lifelike but not convincingly alive, and it gives people the creeps.”

    Sanity requires that we line up many different personalities behind a single first person pronoun: I, me, mine. And also behind multiple identifiers. In my own case, I am Doc to most of those who know me, David to various government agencies (and most of the entities that bill me for stuff), Dave to many (but not all) family members, @dsearls to Twitter, and no name at all to the rest of the world, wherein I remain, like most of us, anonymous (literally, nameless), because that too is a civic grace. (And if you doubt that, ask any person who has lost their anonymity through the Faustian bargain called celebrity.)

    Third, advertising needs to return to what it does best: straightforward brand messaging that is targeted at populations, and doesn’t get personal. For help with that, start reading
    https://medium.com/@dsearls/on-market...bad-guesswork-88a84de937b0#.deu5ue16x
    Voting 0
  9. Aadhaar reflects and reproduces power imbalances and inequalities. Information asymmetries result in the data subject becoming a data object, to be manipulated, misrepresented and policed at will.

    Snowden: “Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.”

    Snowden’s demolition of the argument doesn’t mean our work here is done. There are many other tropes that my (now renamed) Society for the Rejection of Culturally Relativist Excuses could tackle. Those that insist Indians are not private. That privacy is a western liberal construct that has no place whatsoever in Indian culture. That acknowledging privacy interests will stall development. This makes it particularly hard to advance claims of privacy, autonomy and liberty in the context of large e-governance and identity projects like Aadhaar: they earn one the labels of elitist, anti-progress, Luddite, paranoid and, my personal favourite, privacy fascist.
    http://scroll.in/article/748043/aadha...n-its-the-only-way-to-secure-equality
    Voting 0
  10. A May 2014 White House report on “big data” notes that the ability to determine the demographic traits of individuals through algorithms and aggregation of online data has a potential downside beyond just privacy concerns: Systematic discrimination.

    There is a long history of denying access to bank credit and other financial services based on the communities from which applicants come — a practice called “redlining.” Likewise, the report warns, “Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants or recipients of credit.” (See materials from the report’s related research conference for scholars’ views on this and other issues.)

    One vexing problem, according to the report, is that potential digital discrimination is even less likely to be pinpointed, and therefore remedied.

    “Approached without care, data mining can reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society. It can even have the perverse result of exacerbating existing inequalities by suggesting that historically disadvantaged groups actually deserve less favorable treatment.” The paper’s authors argue that the most likely legal basis for anti-discrimination enforcement, Title VII, is not currently adequate to stop many forms of discriminatory data mining, and “society does not have a ready answer for what to do about it.”

    Edelman and Luca’s 2014 paper “Digital Discrimination: The Case of Airbnb.com” examined listings for thousands of New York City landlords in mid-2012. Airbnb builds up a reputation system by allowing ratings from guests and hosts.

    The study’s findings include:

    “The raw data show that non-black and black hosts receive strikingly different rents: roughly $144 versus $107 per night, on average.” However, the researchers had to control for a variety of factors that might skew an accurate comparison, such as differences in geographical location.
    “Controlling for all of these factors, non-black hosts earn roughly 12% more for a similar apartment with similar ratings and photos relative to black hosts.”
    “Despite the potential of the Internet to reduce discrimination, our results suggest that social platforms such as Airbnb may have the opposite effect. Full of salient pictures and social profiles, these platforms make it easy to discriminate — as evidenced by the significant penalty faced by a black host trying to conduct business on Airbnb.”
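    The “controlling for all of these factors” step is, in essence, a regression of price on a host-race indicator plus covariates. Here is a hedged sketch on synthetic data (the numbers are invented, chosen only to mimic the reported roughly 12% gap): the raw price gap mixes in a confounder, while the regression coefficient on the indicator recovers the built-in difference.

```python
import numpy as np

# Synthetic illustration of controlling for a confounder with OLS.
# "location" stands in for the study's many controls; prices are in
# logs so coefficients read approximately as percentage differences.
rng = np.random.default_rng(0)
n = 5000
location = rng.normal(size=n)  # confounder: nicer areas command higher prices
# In this synthetic world, host race is correlated with location:
black_host = ((rng.normal(size=n) + location) < 0).astype(float)
log_price = 4.7 + 0.30 * location - 0.12 * black_host \
    + rng.normal(scale=0.1, size=n)

# Raw comparison confounds race with location, so the gap looks larger.
raw_gap = log_price[black_host == 1].mean() - log_price[black_host == 0].mean()

# OLS with an intercept, the location control, and the race indicator.
X = np.column_stack([np.ones(n), location, black_host])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

print(round(raw_gap, 2))   # much larger in magnitude than -0.12
print(round(beta[2], 2))   # close to the built-in -0.12 (about 12%)
```

    The design choice mirrors the study’s logic: only after holding location, ratings, photos and so on fixed can the residual price gap be attributed to the host characteristic itself.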

    “Given Airbnb’s careful consideration of what information is available to guests and hosts,” Edelman and Luca note, “Airbnb might consider eliminating or reducing the prominence of host photos: It is not immediately obvious what beneficial information these photos provide, while they risk facilitating discrimination by guests. Particularly when a guest will be renting an entire property, the guest’s interaction with the host will be quite limited, and we see no real need for Airbnb to highlight the host’s picture.” (For its part, Airbnb responded to the study by saying that it prohibits discrimination in its terms of service, and that the data analyzed were both older and limited geographically.)
    http://journalistsresource.org/studie...racial-discrimination-research-airbnb
    Voting 0

Page 1 of 2 - Online Bookmarks of M. Fioretti: Tags: algorithms + big data

About - Propulsed by SemanticScuttle