mfioretti: facebook* + percloud*

Bookmarks on this page are managed by an admin user.

67 bookmark(s), sorted by date ↓

  1. “I believe it’s important to tell people exactly how the information that they share on Facebook is going to be used.

    “That’s why, every single time you go to share something on Facebook, whether it’s a photo in Facebook, or a message, every single time, there’s a control right there about who you’re going to be sharing it with ... and you can change that and control that in line.

    “To your broader point about the privacy policy ... long privacy policies are very confusing. And if you make it long and spell out all the detail, then you’re probably going to reduce the per cent of people who read it and make it accessible to them.”
    https://www.theguardian.com/technolog...testimony-to-congress-the-key-moments
    Voting 0
  2. After Barack Obama won reelection in 2012, voter targeting and other uses of Big Data in campaigns were all the rage. The following spring, at a conference titled Data-Crunched Democracy that Turow organized with Daniel Kreiss of the University of North Carolina, I listened as Ethan Roeder, the head of data analytics for Obama 2012, railed against critics. “Politicians exist to manipulate you,” he said, “and that is not going to change, regardless of how information is used.” He continued: “OK, maybe we have a new form of manipulation, we have micro-manipulation, but what are the real concerns? What is the real problem that we see with the way information is being used? Because if it’s manipulation, that ship has long since sailed.” To Roeder, the bottom line was clear: “Campaigns do not care about privacy. All campaigns care about is winning.”

    A few of us at the conference, led by the sociologist Zeynep Tufekci, argued that because individual voter data was being weaponized with behavioral-science insights in ways that could be finely tuned and also deployed outside of public view, the potential now existed to engineer the public toward outcomes that wealthy interests would pay dearly to control. No one listened. Until last year, you could not get a major US foundation to put a penny behind efforts to monitor and unmask these new forms of hidden persuasion.

    If there’s any good news in the last week of revelations about the data firm Cambridge Analytica’s 2014 acquisition (and now-notorious 2016 use) of the profile data of 50 million Facebook members, it’s this: Millions of people are now awake to just how naked and exposed they are in the public sphere. And clearly, people care a lot more about political uses of their personal data than they do about someone trying to sell them a pair of shoes. That’s why so many people are suddenly talking about deleting their Facebook accounts.
    http://www.other-news.info/2018/03/po...eeds-to-be-restored-to-internet-users
    Voting 0
  3. For days now, everyone has been talking about the Cambridge Analytica case tied to the last US elections. What follows is a very brief summary of what happened, with a few remarks from an industry insider.

    In recent weeks Christopher Wylie, apparently granting a couple of juicy exclusives to the Guardian and the New York Times, denounced the improper use of a large quantity of data “harvested” from Facebook.
    This is the first point worth dwelling on: how were these data obtained?
    The press spoke of data theft; the foreign press repeatedly used the verb “to harvest”, which literally means “to gather” and, in technical jargon, means running a script of some kind that collects data automatically.
    In any case, back when these data were collected, little effort was needed to obtain the data not only of the target person but also of their entire friends list, something that has since become impossible.

    The reasoning behind this data collection is therefore the same one that underpins Facebook itself: if you need data, people will probably hand it over spontaneously.
    The same thing still happens today, every day, on the internet.
    And mind you, I am talking about the internet, not just Facebook.
    http://www.technicoblog.com/cambridge-analytica-cio-che-facciamo.htm
    Voting 0
  4. Again: so where is the scandal of these past few days? The scandal lies in the evidence of a fundamental error in the conception of human interactions, the conception that Mark Zuckerberg has, by his own admission in his long-awaited post-Cambridge Analytica statement, imposed since 2007. Namely, the idea of building a “web that is social by default”, where sharing is the norm. A principle that is structurally opposed to the protection of individual privacy, which rests on confidentiality as the norm for one's personal data.

    Zuckerberg explains it very well in his most recent statement, rightly identifying in that philosophical and anthropological error the root of the storm he is now forced to navigate: “In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends' birthdays, your maps should show where your friends live, your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.”

    This is what led Kogan, in 2013, to obtain access to the data of millions of people. And certainly those data have immense scientific value, and it is right that research, when conducted in full respect of the informed consent of the users who become experimental subjects, should be able to access them. For academic purposes only, though. And even then, the famous 2014 experiment run by Facebook itself on manipulating the emotions of hundreds of thousands of users, who were deliberately shown more positive or more negative content, had already demonstrated that even when no commercial aims are involved the question is ambiguous and complex. And no, accepting convoluted terms of use that nobody reads is not enough to claim that every user, by the mere fact of having agreed to be on Facebook, has agreed to become, indiscriminately, a lab rat enrolled in experiments they know nothing about.

    And yet it was the platform itself that realized, in that very same year, that things could not go on that way: that Facebook was losing control over which third parties had access to its users' data. So the policy changed, and since then “friends” have had to consent to the processing of their data by an app. The new philosophy, Albright recalls, is “people first”. But it was too late. And the inability to truly regain possession of that mass of information, demonstrated by the Cambridge Analytica case (is it possible that Facebook had to learn from the newspapers that the company had not deleted the data it claimed to have deleted, or that it must now conduct a serious audit to verify this, showing it has no idea whether they were deleted or not?), makes clear that the problem goes well beyond this single case: it is systemic.

    To put it more plainly: as Albright writes, the first version of the Facebook Graph API (v1.0), that is, what application developers could obtain from the social network between 2010, when it launched, and 2014, when the policy changed, let them collect, not only from whoever signed up for a given app but also from that person's unwitting friends, the following data: “about, actions, activities, birthday, check-ins, education, events, games, groups, hometown, interests, likes, location, notes, status, tags, photos, questions, relationships, religion/politics, subscriptions, sites, work history”. Could anyone really believe it was possible to control where all these data ended up, for millions and millions of people?

    And is Facebook really discovering this only today? Back in 2011, the US Federal Trade Commission had already flagged the issue as problematic. It taught them nothing.
    https://www.valigiablu.it/facebook-cambridge-analytica-scandalo
    Voting 0
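    The scale of the default exposure described in the excerpt above is easy to picture. As a purely illustrative sketch (the endpoint shape and field names below simply follow the list the article quotes; Graph API v1.0 was retired in 2015 and nothing here works against today's API), this is roughly the kind of request a third-party app could build to pull its users' friends' profile data:

```python
# Illustrative only: reconstructs the shape of a (long-deprecated) Facebook
# Graph API v1.0 request for a user's friends' data. Field names are taken
# from the list quoted in the article; the exact URL format is an assumption.
from urllib.parse import urlencode

GRAPH_V1 = "https://graph.facebook.com/v1.0"

# A subset of the per-friend fields the article says apps could request:
FRIEND_FIELDS = [
    "about", "activities", "birthday", "checkins", "education", "events",
    "games", "groups", "hometown", "interests", "likes", "location",
    "relationships", "religion", "work",
]

def friends_data_url(user_id: str, access_token: str) -> str:
    """Build the URL an app would have called to pull friends' profile data."""
    params = urlencode({
        "fields": ",".join(FRIEND_FIELDS),
        "access_token": access_token,
    })
    return f"{GRAPH_V1}/{user_id}/friends?{params}"

url = friends_data_url("12345", "APP_TOKEN")
print(url)
```

    Per-friend fields like these, multiplied by a few hundred friends per installing user, is how a single app could reach tens of millions of profiles.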
  5. Following the publication of the document drawn up by mutual agreement between INE and Facebook, there is no doubt that the company behind the well-known social platform is under no formal obligation to fight so-called “fake news”, a hotly debated topic in recent days, nor does it appear to show any intention of doing so.

    We should also remember that not far from Mexico, in Honduras, Congress is debating a bill that attempts, in rather opaque ways, to curb the spread of false news, including news related to elections.
    https://it.globalvoices.org/2018/03/l...ource=twitter.com&utm_campaign=buffer
    Voting 0
  6. Who is doing the targeting?

    Albright: It really depends on the platform and the news event. Just the extensiveness of the far right around the election: I can’t talk about that right this second, but I can say that, very recently, what I’ve tended to see from a linking perspective and a network perspective is that the left, and even to some degree center-left news organizations and journalists, are really kind of isolated in their own bubble, whereas the right have very much populated most of the social media resources and use YouTube extensively. This study I did over the weekend shows the depth of the content and how much reach they have. I mean, they’re everywhere; it’s almost ubiquitous. They’re ambient in the media information ecosystem. It’s really interesting from a polarization standpoint as well, because self-identified liberals and self-identified conservatives have different patterns in unfriending people and in not friending people who have the opposite of their ideology.

    From those initial maps of the ad tech and hyperlink ecosystem of the election-related partisan news realm, I dove into every platform. For example, I did a huge study on YouTube last year. It led me to almost 80,000 fake videos that were being auto-scripted and batch-uploaded to YouTube. They were all keyword-stuffed. Very few of them had even a small number of views, so what these were really about was impact: a system for gaming the platform. My guess is that they were meant to skew autocomplete or search suggestions in YouTube. It couldn’t have been about monetization, because the videos had very few views; the sheer volume wouldn’t have made sense with YouTube’s business model.

    Someone had set up a script that detected social signals off of Twitter. It would go out and scrape related news articles, pull the text back in, and read it out in a computer voice, a Siri-type voice. It would pull images from Google Images, create a slideshow, package that up and wrap it, upload it to YouTube, hashtag it and load it with keywords. There were so many of these and they were going up so fast that as I was pulling data from the YouTube API dozens more would go up.

    I worked with The Washington Post on a project where I dug into Twitter and got, for the last week leading up to the election, a more or less complete set of Twitter data for a group of hashtags. I found what were arguably the top five most influential bots through that last week, and we found that the top one was not a completely automated account, it was a person.

    The Washington Post’s Craig Timberg looked around, actually found this person, contacted him, and he agreed to an interview at his house. It was just unbelievable. It turns out that this guy was almost 70 and almost blind.

    From Timberg’s piece: “Sobieski’s two accounts…tweet more than 1,000 times a day using ‘schedulers’ that work through stacks of his own pre-written posts in repetitive loops. With retweets and other forms of sharing, these posts reach the feeds of millions of other accounts, including those of such conservative luminaries as Fox News’s Sean Hannity, GOP strategist Karl Rove and Sen. Ted Cruz (R-Tex.), according to researcher Jonathan Albright…‘Life isn’t fair,’ Sobieski said with a smile. ‘Twitter in a way is like a meritocracy. You rise to the level of your ability….People who succeed are just the people who work hard.’”

    The most dangerous accounts, the most influential accounts, are often accounts that are supplemented with human input, and also a human identity that’s very strong and possibly already established before the elections come in.

    I mean, I do hold that it’s not okay to come in and try to influence someone’s election; when I look at these YouTube videos, I think: Someone has to be funding this. In the case of the YouTube research, though, I looked at this more from a systems/politics perspective.

    We have a problem that’s greater than the one-off abuse of technologies to manipulate elections. This thing is parasitic. It’s growing in size. The last week and a half are some of the worst things I’ve ever seen, just in terms of the trending. YouTube is having to manually go in and take these videos out. YouTube’s search suggestions, especially in the context of fact-checking, are completely counter-productive. I think Russia is a side effect of our larger problems.

    Why is it getting worse?

    Albright: There are more people online, they’re spending more time online, there’s more content, people are becoming more polarized, algorithms are getting better, the amount of data that platforms have is increasing over time.

    I think one of the biggest things that’s missing from political science research is that it usually doesn’t consider the amount of time that people spend online. Between the 2012 election and the 2016 election, smartphone use went up by more than 25 percent. Many people spend all of their waking time somehow connected.

    This is where psychology really needs to come in. There’s been very little psychology work done looking at this from an engagement perspective, looking at the effect of seeing things in the News Feed but not clicking out. Very few people actually click out of Facebook. We really need social psychology, we really need humanities work to come in and pick up the really important pieces. What are the effects of someone seeing vile or conspiracy news headlines in their News Feed from their friends all day?

    Owen: This is so depressing.
    http://www.niemanlab.org/2018/02/news...-what-to-do-as-things-crash-around-us
    Voting 0
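    The influential-accounts analysis Albright describes in the excerpt above can be sketched in a few lines. This is an illustrative toy, not the actual methodology used with the Post: it assumes a local dump of (account, retweets-received) pairs and scores influence crudely as tweet count plus retweets received.

```python
# Toy version of ranking the "most influential" accounts in a hashtag dump.
# Scoring (1 point per tweet + its retweets) is an illustrative assumption.
from collections import Counter

def rank_accounts(tweets):
    """tweets: iterable of (account, retweets_received) pairs.
    Returns accounts sorted by descending influence score."""
    score = Counter()
    for account, retweets in tweets:
        score[account] += 1 + retweets
    return score.most_common()

# Sample data: a hyperactive account, an occasionally viral one, a low-reach bot.
sample = [
    ("@heavy_poster", 40), ("@heavy_poster", 55), ("@heavy_poster", 10),
    ("@occasionally_viral", 200),
    ("@low_reach_bot", 1), ("@low_reach_bot", 0), ("@low_reach_bot", 2),
]
ranking = rank_accounts(sample)
print(ranking)
```

    Even this crude score separates reach from raw volume, which is why finding the genuinely influential accounts takes a reasonably complete dataset rather than simple tweet counts.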
  7. Four companies dominate our daily lives unlike any other in human history: Amazon, Apple, Facebook, and Google. We love our nifty phones and just-a-click-away services, but these behemoths enjoy unfettered economic domination and hoard riches on a scale not seen since the monopolies of the gilded age. The only logical conclusion? We must bust up big tech.
    https://www.esquire.com/news-politics...15895746/bust-big-tech-silicon-valley
    Voting 0
  8. “Facebook has never had on their report card, in my opinion, true social outcomes,” McGinn says. “From a business perspective, Facebook has done phenomenally well. Facebook is a cash cow. But from a social perspective, those metrics could be inversely related. The more Facebook builds profit, the more it’s at the expense of the American people.”
    https://www.theverge.com/2018/2/6/169...erg-pollster-tavis-mcginn-honest-data
    by M. Fioretti (2018-02-07)
    Voting 0
  9. I want to spend the bulk of my remaining time on another global problem: the rise and monopolistic behavior of the giant IT platform companies. These companies have often played an innovative and liberating role. But as Facebook and Google have grown into ever more powerful monopolies, they have become obstacles to innovation, and they have caused a variety of problems of which we are only now beginning to become aware.

    Companies earn their profits by exploiting their environment. Mining and oil companies exploit the physical environment; social media companies exploit the social environment. This is particularly nefarious because social media companies influence how people think and behave without them even being aware of it. This has far-reaching adverse consequences on the functioning of democracy, particularly on the integrity of elections.

    The distinguishing feature of internet platform companies is that they are networks and they enjoy rising marginal returns; that accounts for their phenomenal growth. The network effect is truly unprecedented and transformative, but it is also unsustainable. It took Facebook eight and a half years to reach a billion users and half that time to reach the second billion. At this rate, Facebook will run out of people to convert in less than 3 years.
    https://www.georgesoros.com/2018/01/2...delivered-at-the-world-economic-forum
    Voting 0
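    Soros’s growth arithmetic in the excerpt above can be checked under one literal reading of “at this rate”: assume each doubling of the user base takes half as long as the previous one (1 billion to 2 billion took about 4.25 years). This is only an illustrative extrapolation of the figures quoted in the speech:

```python
# Extrapolate Facebook user growth, assuming each doubling of the user base
# takes half as long as the previous one (an assumed reading of the speech).
users = 2.0            # billions of users at the starting point
doubling_time = 4.25   # years the 1B -> 2B doubling took
elapsed = 0.0
while users < 7.5:     # roughly the world population, in billions
    doubling_time /= 2
    elapsed += doubling_time
    users *= 2
print(users, elapsed)
```

    Under this model the user count passes world population after about 3.2 years, the same ballpark as the speech’s “less than 3 years”.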
  10. Mark Zuckerberg also launched Facebook with a disdain for intrusive advertising, but it wasn’t long before the social network giant became Google’s biggest competitor for ad dollars. After going public with 845 million users in 2012, Facebook became a multibillion-dollar company and Zuckerberg one of the richest men on Earth, but with only a promise that the company would figure out how to monetize its platform.

    Facebook ultimately sold companies on its platform by promising “brand awareness” and the best possible data on what consumers actually liked. Brands could start their own Facebook pages, which people would actually “like” and interact with. This provided unparalleled information about which companies each individual wanted to interact with the most. By engaging with companies on Facebook, people gave corporate marketing departments more information than they could ever have dreamed of buying, but here it was offered up for free.

    This was the “grand bargain,” as Columbia University law professor Tim Wu called it in his book, The Attention Merchants, that users struck with corporations. Wu wrote that Facebook’s “billions of users worldwide were simply handing over a treasure trove of detailed demographic data and exposing themselves to highly targeted advertising in return for what, exactly?”

    In other words: We will give you every detail of our lives and you will get rich by selling that information to advertisers.

    European regulators are now saying that bargain was a bad deal. The big question that remains is whether their counterparts in the U.S. will follow their lead.
    https://www.huffingtonpost.com/entry/...antitrust_us_5a625023e4b0dc592a088f6c
    Voting 0


About - Propulsed by SemanticScuttle