mfioretti: algorithms* + control*

Bookmarks on this page are managed by an admin user.

28 bookmark(s) - sorted by date ↓ - page 1 of 3

  1. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, and laughter and drunken shouting bounce off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.

    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
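    How such a wifi tracker works is easy to sketch. Below is a minimal, hypothetical Python example of counting repeat visitors from the MAC addresses that phones broadcast in wifi probe requests; the salt, names and numbers are illustrative assumptions, not Enschede's actual system, and newer phones randomise their probe-request MAC address precisely to frustrate this kind of tracking.

    ```python
    import hashlib
    from collections import defaultdict
    from datetime import datetime

    SALT = b"rotate-me-daily"  # rotating the salt limits long-term re-identification

    def pseudonymise(mac: str) -> str:
        """Hash the MAC address so the raw identifier is never stored."""
        return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

    visits = defaultdict(list)  # pseudonym -> timestamps of sightings

    def record_sighting(mac: str, seen_at: datetime) -> None:
        # A phone broadcasts probe requests carrying its MAC address even
        # when it never joins the network; the sensor only has to listen.
        visits[pseudonymise(mac)].append(seen_at)

    # Two sightings of the same (made-up) phone, a week apart
    record_sighting("a4:5e:60:d1:22:31", datetime(2018, 3, 2, 17, 40))
    record_sighting("a4:5e:60:d1:22:31", datetime(2018, 3, 9, 18, 5))

    repeat_visitors = sum(1 for ts in visits.values() if len(ts) > 1)
    print(f"{len(visits)} device(s) seen, {repeat_visitors} returned")
    ```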

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
    https://www.theguardian.com/cities/20...-privacy-eindhoven-utrecht?CMP=twt_gu
    Voting 0
  2. IoT will be able to take stock of your choices, moods, preferences and tastes, the same way Google Search does. With enough spreadsheets, many practical questions are rendered trivial. How hard will it be for the IoT — maybe through Alexa, maybe through your phone — to statistically study why, where and when you raise your voice at your child? If you can correlate people’s habits and physical attributes, it will be toddler-easy to correlate mood to environment. The digitally connected devices of tomorrow would be poor consumer products if they did not learn you well. Being a good and faithful servant means monitoring the master closely, and that is what IoT devices will do. They will analyze your feedback and automate their responses — and predict your needs. In the IoT, Big Data is weaponized, and can peer deeper into the seeds of your life than the government has ever dreamed.
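    The correlation the author calls "toddler-easy" really is elementary arithmetic once the sensor streams exist. A hypothetical sketch in Python, with fabricated numbers standing in for a smart speaker's loudness log and a thermostat's simultaneous readings:

    ```python
    # All numbers are invented for illustration; no real device or API is implied.
    loudness_db = [52, 55, 61, 70, 74, 58, 67, 73]   # evening voice levels
    room_temp_c = [20, 21, 23, 26, 27, 22, 25, 26]   # thermostat readings

    def pearson(xs, ys):
        """Pearson correlation coefficient, computed from first principles."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # A value near 1 would let a vendor claim it can "predict" raised
    # voices from room conditions alone.
    print(f"r = {pearson(loudness_db, room_temp_c):.2f}")
    ```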
    https://www.salon.com/2018/02/19/why-...signed-for-corporations-not-consumers
    Voting 0
  3. no serious scholar of modern geopolitics disputes that we are now at war — a new kind of information-based war, but war, nevertheless — with Russia in particular, but in all honesty, with a multitude of nation states and stateless actors bent on destroying western democratic capitalism. They are using our most sophisticated and complex technology platforms to wage this war — and so far, we’re losing. Badly.

    Why? According to sources I’ve talked to both at the big tech companies and in government, each side feels the other is ignorant, arrogant, misguided, and incapable of understanding the other side’s point of view. There’s almost no data sharing, trust, or cooperation between them. We’re stuck in an old model of lobbying, soft power, and the occasional confrontational hearing.

    Not exactly the kind of public-private partnership we need to win a war, much less a peace.

    Am I arguing that the government should take over Google, Amazon, Facebook, and Apple so as to beat back Russian info-ops? No, of course not. But our current response to Russian aggression illustrates the lack of partnership and co-ordination between government and our most valuable private sector companies. And I am hoping to raise an alarm: when the private sector has markedly better information, processing power, and personnel than the public sector, the former will only strengthen while the latter weakens. We’re seeing it play out in our current politics, and if you believe in the American idea, you should be extremely concerned.
    https://shift.newco.co/data-power-and-war-465933dcb372
    Voting 0
  4. Earlier this month, writer James Bridle published an in-depth look at the underbelly of creepy, violent content targeted at kids on YouTube – from knock-off Peppa Pig cartoons, such as one where a trip to the dentist morphs into a graphic torture scene, to live-action “gross-out” videos, which show real kids vomiting and in pain.

    These videos are being produced and added to YouTube by the thousand, then tagged with what Bridle calls “keyword salad” – long lists of popular search terms packed into their titles. These keywords are designed to game or manipulate the algorithm that sorts, ranks and selects content for users to see. And thanks to a business model aimed at maximising views (and therefore ad revenue), these videos are being auto-played and promoted to kids based on their “similarity” – at least in terms of keywords used – to content that the kids have already seen. That means a child might start out watching a normal Peppa Pig episode on the official channel, finish it, then be automatically immersed in a dark, violent and unauthorised episode – without their parent realising it.
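    To see why keyword salad works, consider a toy recommender that scores candidates by how many title keywords they share with the video just watched. A raw overlap count, which a stuffed title can only gain from, is a hypothetical stand-in for YouTube's actual (undisclosed) ranking:

    ```python
    # Titles and scoring are invented for illustration.
    def keywords(title: str) -> set:
        return set(title.lower().split())

    def overlap(a: set, b: set) -> int:
        # Raw count of shared terms: a longer, stuffed title can only gain.
        return len(a & b)

    just_watched = keywords("Peppa Pig Official Episode Dentist Visit")

    candidates = [
        "Peppa Pig Official Episode Muddy Puddles",
        # "keyword salad": popular search terms packed into one title
        "Peppa Pig Dentist Visit Episode Spiderman Frozen Elsa Surprise Eggs",
    ]

    ranked = sorted(candidates,
                    key=lambda t: overlap(just_watched, keywords(t)),
                    reverse=True)
    for title in ranked:
        print(overlap(just_watched, keywords(title)), title)  # the salad wins, 5 to 4
    ```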

    YouTube’s response to the problem has been to hand responsibility to its users, asking them to flag videos as inappropriate. From there, the videos go to a review team that YouTube says comprises thousands of people working 24 hours a day to review content. If the content is found to be inappropriate for children, it will be age-restricted and will not appear in the YouTube Kids app. It will still appear on YouTube proper, however, where users must officially be at least 13 years old but which, in reality, countless kids still use (just think how often antsy kids are handed a phone or tablet to keep them occupied in a public space).

    Like Facebook’s scheme, this approach has several flaws: since it’s trying to ferret out inappropriate videos from kids’ content, it’s likely that most of the people who will encounter these videos are kids themselves. I don’t expect a lot of six-year-olds to become aggressive content moderators any time soon. And if the content is flagged, it still needs to be reviewed by humans, which, as YouTube has already acknowledged, takes “round the clock” monitoring.

    When we talk about this kind of challenge, the tech companies’ response is often that it’s simply the inevitability of scale – there’s no way to serve billions of users endless streams of engaging content without getting it wrong or allowing abuse to slip by some of the time. But of course, these companies don’t have to do any of this. Auto-playing an endless stream of algorithmically selected videos to kids isn’t some sort of mandate. The internet didn’t have to become a smorgasbord of “suggested content”. It’s a choice that YouTube made, because ad views are ad views. You’ve got to break a few eggs to make an omelette, and you’ve got to traumatise a few kids to build a global behemoth worth $600bn.

    And that’s the issue: in their unblinking pursuit of growth over the past decade, these companies have built their platforms around features that aren’t just vulnerable to abuse, but literally optimised for it. Take a system that’s easy to game, profitable to misuse, intertwined with our vulnerable people and our most intimate moments, and operating at a scale that’s impossible to control or even monitor, and this is what you get.

    The question now is, when will we force tech companies to reckon with what they’ve wrought? We’ve long decided that we won’t let companies sell cigarettes to children or put asbestos into their building materials. If we want, we can decide that there are limits to what tech can do to “engage” us, too, rather than watching these platforms spin further and further away from the utopian dreams they were sold to us on.
    https://www.theguardian.com/technolog...ra-wachter-boettcher?CMP=share_btn_tw
    Voting 0
  5. Similarly, GOOG in 2014 started reorganizing itself to focus on artificial intelligence only. In January 2014, GOOG bought DeepMind, and in September it shut down Orkut (one of its few social products with momentary success in some countries) forever. The Alphabet Inc restructuring was announced in August 2015, but it likely took many months of meetings and bureaucracy. The restructuring was important to focus the web-oriented departments at GOOG on a simple mission. GOOG sees no future in the simple search market, and has announced that it is migrating “From Search to Suggest” (in Eric Schmidt’s own words) and becoming an “AI first company” (in Sundar Pichai’s own words). GOOG is currently slightly behind FB in terms of how fast it is growing its dominance of the web, but due to its technical expertise, vast budget, influence and vision, in the long run its AI assets will play a massive role on the internet. They know what they are doing.

    These are no longer the same companies as four years ago. GOOG is no longer an internet company; it is the knowledge internet company. FB is no longer an internet company; it is the social internet company. They used to attempt to compete, and this competition kept the internet market diverse. Today, however, they seem mostly satisfied with their orthogonal dominance of different parts of the web, and we are losing diversity of choices. Which leads us to another part of the internet: e-commerce and AMZN.

    AMZN does not focus on making profit.
    https://staltz.com/the-web-began-dying-in-2014-heres-how.html
    Voting 0
  6. The problem is this: Facebook has become a feedback loop which can and does, despite its best intentions, become a vicious spiral. At Facebook’s scale, behavioral targeting doesn’t just reflect our behavior, it actually influences it. Over time, a service which was supposed to connect humanity is actually partitioning us into fractal disconnected bubbles.

    The way Facebook’s News Feed works is that the more you “engage” with posts from a particular user, the more often their posts are shown to you. The more you engage with a particular kind of post, the more you will see its ilk. So far so good! It’s just showing you what you’ve demonstrated you’re interested in. What’s wrong with that?

    The answer is twofold. First, this eventually constructs a small “in-group” cluster of Facebook friends and topics that dominate your feed; and as you grow accustomed to interacting with them, this causes your behavior to change, and you interact with them even more, reinforcing their in-group status … and (relatively) isolating you from the rest of your friends, the out-group.

    Second, and substantially worse, because “engagement” is the metric, Facebook inevitably selects for the shocking and the outrageous. Ev Williams summed up the results brilliantly.

    Of course this doesn’t just apply to Facebook.
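    The loop is easy to reproduce in miniature. In the hypothetical simulation below (all parameters arbitrary), impressions are allocated in proportion to past engagement and engagement accrues to whoever is shown, so attention concentrates on a small in-group:

    ```python
    import random

    random.seed(1)
    friends = [f"friend_{i}" for i in range(10)]
    engagement = {f: 1.0 for f in friends}  # uniform prior interest

    def pick_post():
        # The feed shows you a friend in proportion to past engagement.
        return random.choices(friends,
                              weights=[engagement[f] for f in friends])[0]

    for _ in range(5000):          # 5,000 feed impressions
        shown = pick_post()
        if random.random() < 0.3:  # you engage with some of what you are shown
            engagement[shown] += 1

    top3 = sorted(engagement, key=engagement.get, reverse=True)[:3]
    share = sum(engagement[f] for f in top3) / sum(engagement.values())
    print(f"3 of 10 friends now hold {share:.0%} of your measured interest")
    ```

    This is a rich-get-richer (Pólya urn) process: nothing about the winners is special, but early random engagement compounds into dominance.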
    https://techcrunch.com/2017/06/04/whe...m_source=tctwreshare&sr_share=twitter
    Voting 0
  7. "All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook," he says.

    The data our interactions provide feeds the complex algorithms that power the social media site, where, as Mr Joler puts it, our behaviour is transformed into a product.

    Trying to untangle that largely hidden process proved to be a mammoth task.

    "We tried to map all the inputs, the fields in which we interact with Facebook, and the outcome," he says.

    "We mapped likes, shares, search, update status, adding photos, friends, names, everything our devices are saying about us, all the permissions we are giving to Facebook via apps, such as phone status, wifi connection and the ability to record audio."

    All of this research provided only a fraction of the full picture. So the team looked into Facebook's acquisitions, and scoured its myriad patent filings.

    The results were astonishing.

    Visually arresting flow charts that take hours to absorb fully, but which show how the data we give Facebook is used to calculate our ethnic affinity (Facebook's term), sexual orientation, political affiliation, social class, travel schedule and much more.
    Image caption: Share Lab presents its information in minutely detailed tables and flow charts

    One map shows how everything - from the links we post on Facebook, to the pages we like, to our online behaviour in many other corners of cyber-space that are owned or interact with the company (Instagram, WhatsApp or sites that merely use your Facebook log-in) - could all be entering a giant algorithmic process.

    And that process allows Facebook to target users with terrifying accuracy, with the ability to determine whether they like Korean food, the length of their commute to work, or their baby's age.

    Another map details the permissions many of us willingly give Facebook via its many smartphone apps, including the ability to read all text messages, download files without permission, and access our precise location.

    Individually, these are powerful tools; combined they amount to a data collection engine that, Mr Joler argues, is ripe for exploitation.

    "If you think just about cookies, just about mobile phone permissions, or just about the retention of metadata - each of those things, from the perspective of data analysis, are really intrusive."
    http://www.bbc.com/news/business-39947942
    Voting 0
  8. Facebook’s entire project, when it comes to news, rests on the assumption that people’s individual preferences ultimately coincide with the public good, and that if it doesn’t appear that way at first, you’re not delving deeply enough into the data. By contrast, decades of social-science research shows that most of us simply prefer stuff that feels true to our worldview even if it isn’t true at all and that the mining of all those preference signals is likely to lead us deeper into bubbles rather than out of them.

    What’s needed, he argues, is some global superstructure to advance humanity.

    This is not an especially controversial idea; Zuckerberg is arguing for a kind of digital-era version of the global institution-building that the Western world engaged in after World War II. But because he is a chief executive and not an elected president, there is something frightening about his project. He is positioning Facebook — and, considering that he commands absolute voting control of the company, he is positioning himself — as a critical enabler of the next generation of human society. A minor problem with his mission is that it drips with megalomania, albeit of a particularly sincere sort. With his wife, Priscilla Chan, Zuckerberg has pledged to give away nearly all of his wealth to a variety of charitable causes, including a long-term medical-research project to cure all disease. His desire to take on global social problems through digital connectivity, and specifically through Facebook, feels like part of the same impulse.

    Yet Zuckerberg is often blasé about the messiness of the transition between the world we’re in and the one he wants to create through software. Building new “social infrastructure” usually involves tearing older infrastructure down. If you manage the demolition poorly, you might undermine what comes next.
    https://www.nytimes.com/2017/04/25/ma...n-facebook-fix-its-own-worst-bug.html
    Voting 0
  9. I don't want to drag this out, but back then, as now, I had no control at all over the personal data and information gathered, voluntarily or forcibly, at my every move; what somehow saved me in my troubled adolescence (not always, in truth) was control over the social situation and the context.

    I had no control over data and information with the village butcher, and I cannot expect to have it today on the web with Google, Facebook and, above all, the thousand state agencies afflicted, for varied and sometimes commendable reasons, by informational bulimia. But back then I knew, and in some way governed, the mundane technical rules (the village streets, the bus timetables) and the social rules of proximity of my own territory.

    Today I no longer can. And it is not only because of the quantity of data captured and stored at every step, but because of the total opacity of the context and of the technical and social rules that govern our digital lives.

    Unknown algorithms, unfathomable even to their own creators, reconstruct our image, produce scores, and judge relevance and suitability entirely without our knowledge. Banks, insurers, companies of every stripe and shape (soon the Internet of Things will astonish us) and, above all, the state, with its thousand verification and control agencies, access every piece of information while stripping away its context, creating relations and correlations we are not aware of but whose consequences we suffer daily.

    We cannot prevent all this; big data and open data will save the world, fine. But we can and must demand to know the who, the how and the when. We need to know what the context is and what the rules are; only then will we find strategies, not to commit crimes or evade the law (as part of the judiciary claims), but to exercise the fundamental rights of the person.

    In the physical world we know when the state has the right to enter our home, and under what conditions it may limit our personal freedoms of movement and expression; in the digital world we do not know, and do not even ask, who may take possession of our data, of our devices through hidden software, of our lives, when, and under what conditions. We supinely accept an intolerable opacity.

    I have had something to hide for as long as I can remember: degrees of reserve that vary with the interlocutor, the time, the place and the context. And I do not want, for myself and my children, a society stupidly disciplined by constant surveillance and lobotomised by algorithms. I would like a society in which the asymmetry of information is the exact opposite of today's, where unfortunately the citizen is totally transparent while the state and its rules are opaque and uncertain.
    Carlo Blengino

    A criminal defence lawyer, he handles the law of new technologies, copyright and data protection issues in the courtroom. He is a fellow of the NEXA Center for Internet & Society at the Politecnico di Torino. @CBlengio on Twitter
    http://www.ilpost.it/carloblengino/2016/11/02/ho-qualcosa-da-nascondere
    Voting 0
  10. The reason I bring this up: first of all, it’s a great way of understanding how machine learning algorithms can give us stuff we absolutely don’t want, even though they fundamentally lack prior agendas. Happens all the time, in ways similar to the Donald.

    Second, some people actually think there will soon be algorithms that control us, operating “through sound decisions of pure rationality” and that we will no longer have use for politicians at all.

    And look, I can understand why people are sick of politicians, and would love them to be replaced with rational decision-making robots. But that scenario means one of three things:

    1. Controlling robots simply get trained by the people’s will and do whatever people want at the moment. Maybe that looks like people voting with their phones or via the chips in their heads. This is akin to direct democracy, and the problems are varied – I was in Occupy after all – but in particular mean that people are constantly weighing in on things they don’t actually understand. That leaves them vulnerable to misinformation and propaganda.

    2. Controlling robots ignore people’s will and just follow their inner agendas. Then the question becomes, who sets that agenda? And how does it change as the world and as culture changes? Imagine if we were controlled by someone from 1000 years ago with the social mores from that time. Someone’s gonna be in charge of “fixing” things.

    3. Finally, it’s possible that the controlling robot would act within a political framework to be somewhat but not completely influenced by a democratic process. Something like our current president. But then getting a robot in charge would be a lot like voting for a president. Some people would agree with it, some wouldn’t. Maybe every four years we’d have another vote, and the candidates would be both people and robots, and sometimes a robot would win, sometimes a person. I’m not saying it’s impossible, but it’s not utopian. There’s no such thing as pure rationality in politics, it’s much more about picking sides and appealing to some people’s desires while ignoring others.
    https://mathbabe.org/2016/08/11/donal...e-a-biased-machine-learning-algorithm
    Voting 0

About - Propulsed by SemanticScuttle