mfioretti: privacy* + big data*

Bookmarks on this page are managed by an admin user.

93 bookmark(s) - Sort by: Date ↓ / Title / Voting - Bookmarks from other users for this tag

  1. Again: where, then, is the scandal of these past few days? The scandal lies in the evidence of a fundamental error in the conception of human interactions, the conception that Mark Zuckerberg has, by his own admission in his long-awaited post-Cambridge Analytica statement, imposed since 2007. Namely, the idea of building a “web where being social is the default”, where sharing is the norm. A principle that is structurally opposed to the protection of individual privacy, which rests on confidentiality as the norm where one’s personal data is concerned.

    Zuckerberg explains it very well in his most recent statement, correctly identifying in that philosophical and anthropological error the root of the storm he is now forced to navigate: “In 2007, we launched the Facebook Platform in the belief (the ‘vision’) that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, your address book should show their photos. To do this, we allowed people to sign into apps and share who their friends were and some information about them.”

    This is what led Kogan, in 2013, to obtain access to the data of millions of people. And yes, that data has immense scientific value, and it is right that research should be able to access it, provided it is conducted with the full informed consent of the users who become experimental subjects. For academic purposes only, though. And even then: as early as 2014, the famous experiment run by Facebook itself on manipulating the emotions of hundreds of thousands of users, who were deliberately shown more positive or more negative content, had demonstrated that even when no commercial ends are involved, the matter is ambiguous and complex. And that no, accepting convoluted terms of service that nobody reads is not enough to conclude that every user, by the mere fact of having agreed to be on Facebook, has consented to become, indiscriminately, a lab rat enrolled in experiments they know nothing about.
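
    The scale involved (“the data of millions of people”) is easy to reproduce with a back-of-the-envelope calculation. The figures below are the widely reported ones for the Kogan app (about 270,000 installs, up to 87 million affected profiles); the per-user average is derived here for illustration, not measured.

```python
# Back-of-the-envelope sketch of how friend-level permissions multiply reach.
# Figures are the widely reported ones for the Kogan app (~270,000 installs,
# up to 87 million affected profiles); the average is derived, not measured.

direct_users = 270_000          # people who actually installed the app
affected_profiles = 87_000_000  # profiles Facebook later said were exposed

# Each consenting user exposed, on average, this many non-consenting friends:
friends_exposed_per_user = affected_profiles / direct_users

print(f"{friends_exposed_per_user:.0f} friends exposed per consenting user")
```

    One install, in other words, stood in for the consent of several hundred people who never saw the app.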

    And yet it was the platform itself that realised, that very same year, that things could not go on like this. That this way Facebook was losing control over which third parties had access to its users’ data. So the policy changed, and from then on “friends” had to consent to the processing of their data by an app. The new philosophy, Albright recalls, was “people first”. But it was too late. And the inability to truly regain possession of that mass of information, demonstrated by the Cambridge Analytica case (can it really be that Facebook had to learn from the newspapers that the company had not deleted the data it claimed to have deleted, and that it must now conduct a serious audit to verify this, admitting it has no idea whether the data is gone or not?), makes it clear that the problem goes well beyond this individual case: it is systemic.

    To put it more plainly: as Albright writes, version 1.0 of the Graph API (that is, what app developers could obtain from the social network between 2010, when it launched, and 2014, when the policy changed) allowed them to collect the following data not only on whoever signed up for a given app, but on their unwitting friends: “about, actions, activities, birthday, check-ins, education, events, games, groups, hometown, interests, likes, location, notes, status, tags, photos, questions, relationships, religion/politics, subscriptions, sites, work history”. Could anyone really believe it was possible to control where all that data ended up, for millions upon millions of people?

    And is Facebook really only discovering this today? Back in 2011, the American Federal Trade Commission had already flagged the issue as problematic. It taught them nothing.
    https://www.valigiablu.it/facebook-cambridge-analytica-scandalo
    Voting 0
  2. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, and laughter and drunken shouting bounce off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.

    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
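
    As a sketch of the mechanics described above, the following hypothetical tracker backend counts devices from wifi probe requests. Nothing here reflects Enschede's actual system; salting and hashing the MAC is a common mitigation, and the point of the sketch is that a stable salted hash is still a persistent pseudonym for one device.

```python
# Hypothetical sketch of a wifi-tracker backend like the one described above:
# it counts unique and repeat visitors from observed MAC addresses. Hashing
# the MAC with a rotating salt is a common mitigation, but for as long as
# the salt is stable, the hash remains a persistent pseudonym of one device.
import hashlib

DAILY_SALT = "2018-03-29"  # rotated daily to limit long-term tracking

def pseudonymise(mac: str, salt: str = DAILY_SALT) -> str:
    """Replace the raw MAC with a salted hash before storage."""
    return hashlib.sha256((salt + mac.lower()).encode()).hexdigest()[:16]

seen: dict[str, int] = {}  # pseudonym -> number of sightings

def register_probe(mac: str) -> None:
    pid = pseudonymise(mac)
    seen[pid] = seen.get(pid, 0) + 1

for mac in ["AA:BB:CC:11:22:33", "AA:BB:CC:11:22:33", "DE:AD:BE:EF:00:01"]:
    register_probe(mac)

print(len(seen), "unique devices,", sum(seen.values()), "sightings")
# → 2 unique devices, 3 sightings
```

    Within one salt period, the same phone always maps to the same pseudonym, which is exactly what makes "routes and preferred spots" reconstructable.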

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
    https://www.theguardian.com/cities/20...-privacy-eindhoven-utrecht?CMP=twt_gu
    Voting 0
  3. Europe has propelled past the United States when it comes to constraining the abuses of Big Tech. In June, the European Union fined Google $2.7 billion for steering web users to its shopping site, and investigations remain active over similar treatment on Android phones. European regulators fined Facebook for lying about whether it could match user profiles with phone numbers on its messaging acquisition WhatsApp. They demanded Apple repay $15.3 billion in back taxes in Ireland. And they forced Amazon to change its e-book contracts, which they claimed inappropriately squeezed publishers.
    Trust-Busted: In 2002, Microsoft Chairman Bill Gates had to testify in federal court in his company's antitrust case. The public trial led Microsoft to soften its aggressive strategy against rivals. (AP Photo/Rick Bowmer)

    Unfortunately, these actions were treated mainly as the cost of doing business. The Facebook fine totaled not even 1 percent of the $22 billion purchase price for WhatsApp, and it allowed the two companies to remain partnered. Government policy, in effect, has “told these companies that the smart thing to do is to lie to us and break the law,” said Scott Galloway in his presentation. Google’s remedy in the shopping case still forces rivals to bid for placement at the top of the page, with Google Shopping spun off as a stand-alone competitor. This does weaken Google’s power and solves the “equal treatment” problem, but it doesn’t protect consumers, who will ultimately pay for those costly bids. “The EU got a $2.7 billion fine to hold a party and bail out Greek banks,” said Gary Reback, an antitrust lawyer and critic of the EU’s actions. “No amount of money will make a difference.”

    However, one thing might: Europe’s increasing move toward data privacy. The General Data Protection Regulation (GDPR), scheduled for implementation in May 2018, empowers European web users to affirmatively opt out of having their data collected, with high penalties for non-compliance. Consumers will be able to obtain their personal data and learn how it is used. They can request that their data be erased completely (known as the “right to be forgotten”) as well as prohibited from sale to third parties. Platforms may not condition use of their products on data collection. A separate, not-yet-finalized regulation called ePrivacy would forbid platforms from tracking users across separate apps, websites, and devices.
    http://prospect.org/article/big-tech-new-predatory-capitalism
    Voting 0
  4. Some entries are ambiguous. Take Microsoft, under the “operational services” category. PayPal apparently supplies the tech company with an image of a customer (a photo or video) or their image from an identity document for the purposes of “facial image comparison for fraud protection” and “research and testing as to appropriateness of new products.” The former sounds like some kind of facial recognition system that PayPal uses to look for fraud. But the latter is uncomfortably broad. What kind of research is Microsoft doing using pictures of PayPal users’ faces? PayPal did not comment on this specific question.
    https://www.fastcodesign.com/90157501...ource=twitter.com&utm_campaign=buffer
    Voting 0
  5. Since Islam instructs followers to pray 5x daily at specific times, I wondered if one could identify devout Muslim hacks solely from their trip data. For drivers that do pray regularly, there are surely difficulties finding a place to park, wash up and pray at the exact time, but in many cases banding near prayer times is quite clear. I plotted a few examples.
    Each image shows fares for one cabbie in 2013. Yellow=active fare (carrying passengers). A minute is 1 pixel wide; a day is 2 pixels tall. Blue stripes indicate the 5 daily prayer start times which vary with the sun’s position throughout the year.
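    A minimal sketch of that banding analysis, with invented fare data: flag the days on which a driver carries no passengers in a window around any of the five prayer start times. The prayer times, window width and data below are all illustrative assumptions, not values from the actual dataset.

```python
# Illustrative sketch of the analysis described above: flag a day on which a
# driver has no active fare in a window around each prayer start time.
# Times are minutes after midnight; all of the data below is made up.

PRAYER_TIMES = [320, 770, 935, 1105, 1190]  # example times for one day
WINDOW = 15  # minutes of slack on either side of the prayer start

def idle_at_prayers(fare_minutes, prayer_times=PRAYER_TIMES, window=WINDOW):
    """True if no active fare overlaps any prayer window on this day."""
    active = set(fare_minutes)
    return all(
        not any(t in active for t in range(p - window, p + window + 1))
        for p in prayer_times
    )

# A driver active all day except for ~20 minutes around each prayer time:
day = [m for m in range(300, 1260)
       if all(abs(m - p) > 20 for p in PRAYER_TIMES)]
print(idle_at_prayers(day))               # → True  (banding visible)
print(idle_at_prayers(range(300, 1260)))  # → False (drives straight through)
```

    Repeated over a year of trip sheets, days that consistently show this banding are what make the "blue stripes" in the plots stand out.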
    http://www.theiii.org/index.php/997/u...-data-to-identify-muslim-taxi-drivers
    by M. Fioretti (2017-10-17)
    Voting 0
  6. Speaking as a statistician, it is quite easy to identify people in anonymous datasets. There are only so many 5'4" Jews living in San Francisco with chronic back pain. Every bit of information we reveal about ourselves will be one more disease that we can track, and another life saved.

    If I want to know whether I will suffer a heart attack, I will have to release my data for public research. In the end, privacy will be an early death sentence.

    Already, health insurers are beginning to offer discounts for people who wear health trackers and let others analyze their personal movements. Many, if not most, consumers in the next generation will choose cash and a longer life in exchange for publicizing their most intimate details.

    What can we tell with basic health information, such as calories burned throughout the day? Pretty much everything.

    With a rudimentary step and calorie counter, I was able to distinguish whether I was having sex or at the gym, since the minute-by-minute calorie burn profile of sex is quite distinct (the image below from my health tracker shows lots of energy expended at the beginning and end, with few steps taken; few activities besides sex have this distinct shape).
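    The shape-matching idea can be sketched as a toy heuristic: high calorie burn in the first and last thirds of the interval, a dip in the middle, and few steps throughout. The thresholds and data are invented, not taken from any real tracker.

```python
# Toy sketch of the shape-matching described above: given minute-by-minute
# (calories, steps) readings, test for the profile the author describes --
# high burn at the start and end, lower in the middle, with few steps.
# Thresholds are arbitrary illustrations, not from any real tracker.

def matches_profile(calories, steps, burn_hi=6.0, step_lo=20):
    n = len(calories)
    third = n // 3
    start = sum(calories[:third]) / third
    mid = sum(calories[third:2 * third]) / third
    end = sum(calories[2 * third:]) / (n - 2 * third)
    few_steps = sum(steps) / n < step_lo
    return start > burn_hi and end > burn_hi and mid < start and few_steps

cals = [8, 8, 7, 3, 2, 3, 7, 9, 8]   # U-shaped burn curve
steps = [5, 2, 0, 0, 0, 0, 3, 4, 1]  # almost no steps taken
print(matches_profile(cals, steps))            # → True
print(matches_profile([8] * 9, [100] * 9))     # → False (flat burn = gym)
```

    The point is not the classifier's sophistication but that even a crude rule over two coarse signals separates activities most people would consider private.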
    https://medium.com/the-ferenstein-wir...rs-of-history-in-50-images-614c26059e
    Voting 0
  7. I have no illusions about what Facebook has figured out about me from my activity, pictures, likes, and posts. Friends have speculated about how algorithms might effectively predict hook-ups or dating patterns based on bursts of "Facebook stalking" activity (you know you are guilty of clicking through hundreds of tagged pictures of your latest crush). David Kirkpatrick uncovered that Facebook "could determine with about 33 percent accuracy who a user was going to be in a relationship with a week from now." And based on extensive networks of gay friends, MIT's Gaydar claims to be able to out those who refrain from listing their sexual orientation on the network. When I first turned on Timeline, I discovered Facebook had correctly singled out that becoming friends with Nick was a significant event of 2007 (that's when we met and first started dating, and appropriately enough, part of why he joined Facebook).

    Since our engagement, there have been enough mentions of "engagement" and "wedding" in my own and my friends' comments littered throughout my profile for Facebook's keyword crawlers to deduce that we've got something big planned. The fact that he's tagged in my cover photo, we have numerous albums taken in remote locations where we're the only two people tagged, and that we both currently live in Chongqing, China, all should make it obvious to Facebook's relationship-weighing algorithms that we're pretty important to each other.


    So shouldn't it also be obvious to Facebook that I "know him well" and he's "one of my best friends?" We wouldn't be tagged in so many pictures together (70) if it weren't true. And could there be any chance at all that "I don't know him" given these data points? Though Facebook isn't outright asking me if we're in a relationship, it sure sounds like that's what they are getting at. Moreover, why hasn't Facebook asked me the same question about someone like Jen Hudon? I share more mutual friends (121) and am tagged in almost as many photos (67) with Jen as I am with Nick, and her wall posts feature prominently in my Timeline. (Facebook might interpret these data points and suggest I choose her as one of my bridesmaids, which I have done). No, Facebook has us figured out: we went to High School together and she's "one of my best friends."


    So why does Facebook care to know more about the nature of my relationship to Nick? The short answer is that Facebook wants to know as much as it can about my relationships, even though Facebook's current policy is not to use information from user questions like this one for advertising.

    My response to the relationship question would act as an important input into the algorithms deciding what shows up in my feed. If I said Nick is "one of my best friends," Facebook might weight his posts more heavily than they already do. For example, my feed has recently been inundated with more posts about my cousin's wife's pregnancy now that I've confirmed him as a family member (though I hide it on my profile for security reasons).

    But what happens if I don't want these relationships to alter my feed? This is a "Filter Bubble" problem, where Facebook's personalization algorithm is opaque to us as users. I don't know what I'm missing, but I can tell that I'm seeing more of certain people as a result of declaring a certain kind of relationship to them. But there's no master switchboard for us to tweak the dials on our social filters; if I'm seeing too many of a certain friend's posts, I have only the binary choice of turning them on or off, and I have to alter that detail on a person-by-person basis. Any other input into the algorithm requires a fair amount of proactive and clever gaming of the system (like declining family member requests to avoid filtration). And who wants to explain to Aunt Joan that's why you can't confirm she's your aunt?

    And if I did change my relationship status to engaged -- not just answer the question Facebook posed to me -- the company could then target ads based on that information. We've seen how pregnancies are a pivotal marketing opportunity for companies like Target. Marriage is another big life event where habits, loyalties, and purchasing behaviors change. And then there's the brief but highly lucrative wedding planning and purchasing period itself; it's a critical and fleeting moment that marketers are eager to pin down. It comes as no surprise that Facebook and its advertisers would want to know what stage of life I'm in right at this moment. They want to know if they could be making more money showing me engagement ring, registry, or mortgage advertisements. For the most part, that targeting is harmless, but it's gold to Facebook and advertisers to know that I've shifted demographic categories. I imagine that my literal value in terms of price per click might even go up as I enter into the "engaged" category.

    ***

    And even though the pairing of the carefully phrased question and advertising was coincidental, it's as if Facebook is saying, "I know you guys have been together for a while now, shouldn't you be thinking about getting engaged soon?" Hint hint, nudge nudge. And then it comes off as a sassy girlfriend shouting over martinis, "Girl, when's he gonna put a ring on it?" So Facebook isn't outright asking me if I'm engaged. But I find myself reading for subtext as I would an aunt's pointed but tactfully indirect question.
    http://www.theatlantic.com/technology...hy-is-it-asking-about-my-fianc/254479
    Voting 0
  8. While Bethany Howell napped on the couch last week, her daughter Ashlynd, 6 years old, used her mother’s thumb to unlock her phone and open the Amazon app. “$250 later, she has shopped for all her Christmas presents on Amazon,” said Ms. Howell, of Little Rock, Ark.

    After Ashlynd’s parents received 13 order confirmations for Pokémon items, they initially thought they’d been hacked, then they figured Ashlynd had bought them unintentionally. “No, Mommy, I was shopping,” Ms. Howell said her daughter told her. “But don’t worry—everything that I ordered is coming straight to the house.” Ms. Howell added: “She is really proud of herself.”

    The Howells could return only four of the items. So Ms. Howell came up with a solution and told Ashlynd, “Well, Santa found out and that is what Santa is going to bring you for Christmas.”

    Zeke Tischler, a 30-year-old social-media professional from Northridge, Calif., had the same sort of gift problem outside of the Christmas season. Ads for engagement rings began popping up in his Facebook news feed after he searched for rings online last year.

    One evening, as his girlfriend was looking over his shoulder, an ad for opal engagement rings—her favorite gemstone—popped up on his Facebook news feed. Mr. Tischler said he tried to pass it off as a glitch.

    Several weeks later, however, when he got down on one knee and presented the opal engagement ring, his girlfriend presented her own ring for him. Online ads “ruined one of the largest surprises in my life,” Mr. Tischler said. His fiancée, he added, “thinks it’s pretty hilarious.”

    A crush of package deliveries undid Brenna Jennings. Her United Parcel Service Inc. driver showed up so often to her New Hampshire home that her 8-year-old daughter started pondering the imponderable. Ms. Jennings shut it down with an explanation: Amazon and UPS are Santa’s helpers.
    http://www.wsj.com/articles/those-ads...rnet-are-ruining-christmas-1482507745
    Voting 0
  9. FREQUENT visitors to the Hustler Club, a gentlemen’s entertainment venue in New York, could not have known that they would become part of a debate about anonymity in the era of “big data”. But when, for sport, a data scientist called Anthony Tockar mined a database of taxi-ride details to see what fell out of it, it became clear that, even though the data concerned included no direct identification of the customer, there were some intriguingly clustered drop-off points at private addresses for journeys that began at the club. Stir voter-registration records into the mix to identify who lives at those addresses (which Mr Tockar did not do) and you might end up creating some rather unhappy marriages.

    The anonymisation of a data record typically means the removal from it of personally identifiable information. Names, obviously. But also phone numbers, addresses and various intimate details like dates of birth. Such a record is then deemed safe for release to researchers, and even to the public, to make of it what they will. Many people volunteer information, for example to medical trials, on the understanding that this will happen.

    But the ability to compare databases threatens to make a mockery of such protections.
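    The Tockar-style linkage attack described above takes only a join. All the records below are invented. The pattern is what matters: a dataset with no names, combined with a public register keyed on a shared attribute (here, the drop-off address), re-identifies individuals.

```python
# Minimal sketch of the linkage attack described above: an "anonymised"
# taxi record (no name, just a drop-off address) joined against a public
# register keyed on address. Every record here is invented.

trips = [
    {"pickup": "Hustler Club", "dropoff": "12 Elm St"},
    {"pickup": "Hustler Club", "dropoff": "98 Oak Ave"},
]

voter_register = {  # public record: address -> registered resident
    "12 Elm St": "J. Doe",
    "98 Oak Ave": "R. Roe",
}

# The join that re-identifies "anonymous" riders:
reidentified = [
    (voter_register[t["dropoff"]], t["pickup"])
    for t in trips if t["dropoff"] in voter_register
]
print(reidentified)
# → [('J. Doe', 'Hustler Club'), ('R. Roe', 'Hustler Club')]
```

    Removing names from the trip data did nothing, because the drop-off address alone was a sufficient key into a second, public dataset.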
    http://www.economist.com/news/science...n?fsrc=scn/tw/te/pe/ed/Wellseeyouanon
    Voting 0
  10. Quit fracking our lives to extract data that’s none of your business and that your machines misinterpret. — New Clues, #58

    That’s the blunt advice David Weinberger and I give to marketers who still make it hard to talk, sixteen years after many of them started failing to get what we meant by Markets are Conversations.

    In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

    Even if our own intelligence is not yet artificialized, what’s feeding it surely is.

    In The Filter Bubble, after explaining Google’s and Facebook’s very different approaches to personalized “experience” filtration, and the assumptions behind both, Eli Pariser says both companies’ approximations are based on “a bad theory of you,” and come up with “pretty poor representations of who we are, in part because there is no one set of data that describes who we are.” He says the ideal of perfect personalization dumps us into what animators, puppetry and robotics engineers call the uncanny valley: a “place where something is lifelike but not convincingly alive, and it gives people the creeps.”

    Sanity requires that we line up many different personalities behind a single first person pronoun: I, me, mine. And also behind multiple identifiers. In my own case, I am Doc to most of those who know me, David to various government agencies (and most of the entities that bill me for stuff), Dave to many (but not all) family members, @dsearls to Twitter, and no name at all to the rest of the world, wherein I remain, like most of us, anonymous (literally, nameless), because that too is a civic grace. (And if you doubt that, ask any person who has lost their anonymity through the Faustian bargain called celebrity.)

    Third, advertising needs to return to what it does best: straightforward brand messaging that is targeted at populations, and doesn’t get personal. For help with that, start reading
    https://medium.com/@dsearls/on-market...bad-guesswork-88a84de937b0#.deu5ue16x
    Voting 0


First / Previous / Next / Last / Page 1 of 10 Online Bookmarks of M. Fioretti: Tags: privacy + big data

About - Propulsed by SemanticScuttle