Tags: percloud*

240 bookmark(s)

  1. Popular internet platforms that currently mediate our everyday communications are becoming ever more efficient at managing vast amounts of information, rendering their users increasingly addicted to and dependent on them. Alternative, more organic options, such as community networks, do exist, and they can empower citizens to build their own local networks from the bottom up. This chapter explores such technological options, together with the adoption of a healthier internet diet, in the context of a wider vision of sustainable living in an energy-limited world.


    The popular Internet platforms that mediate a significant portion of our everyday communications thus become ever more efficient at managing vast amounts of information. In turn, they become ever more knowledgeable about interaction-design techniques that increase addiction (or “stickiness”, when described as a performance metric) and dependency. This renders their users increasingly addicted and dependent, subject to manipulation and exploitation for commercial and political objectives. It could be characterized as the second watershed of the Internet, in the sense of Illich’s analysis of the lifecycle of tools. As in the cases of medicine and education, the Internet at its early stages was extremely useful: it dramatically increased our access to knowledge and to people all over the world. To achieve this, however, it relied on big organizations offering efficient and reliable services. These services now depend more and more on the participation of people, and on the exploitation of the data that participation produces, for the platforms to survive. The result is a vicious cycle of addictive design practices, unfair competition that breaches the principle of net neutrality, and unethical uses of privately owned knowledge about human behavior generated by analyzing the data produced by our everyday online activities.

    In addition to the tremendous social, political, and economic implications of centralizing power on the Internet, there are also significant ecological consequences. At first glance, these seem positive: the centralization of online platforms has allowed their owners to build huge data centers in cold climates and to invest in technologies that keep servers cool at lower energy costs. At the same time, however, the main aim of online platforms is to maximize the total time users spend online and the amount of information exchanged, not only between people but also between “things!” Their profitability depends on processing huge amounts of information into knowledge that can be sold to advertisers and politicians. Like the pharmaceutical companies, they create and maintain a world in which they are very much needed. This also explains why corporations like Facebook, Google, and Microsoft are at the forefront of efforts to provide “Internet access to all”, and why, at the same time, local communities face so many economic, political, and legal hurdles when they try to build, maintain, and control their own infrastructures.


    To achieve a sustainable level of Internet usage, one needs to provide the appropriate tools and processes for local communities to make decisions on the design of their ICT tools, including the alternative and/or complementary design of places, institutions, and rituals that can impose certain constraints and replace online communications when these are not really necessary. To answer this demand, one should first answer a more fundamental question: how much online communication is needed in an energy-restricted world? In the case of food and housing, there are some reasonable basic needs; for example, each person needs roughly 2000 calories per day and 35 m² of living space (see P.M., 2014). But how many megabytes does someone need to consume to sustain a good quality of life? What would be the analogue of a restricted vegetarian, or even vegan, Internet diet?
    The answer might differ depending on the services considered (social activities, collaborative work, or media) and the type of access to the network discussed above. For example, is it really necessary to have wireless connectivity “everywhere, anytime” using expensive mobile devices, or is it enough to have old-fashioned Internet cafes and only wired connections at home? Would it make sense to have Internet-free zones in cities? Can we imagine “shared” Internet usage in public spaces, with a group of people interacting together in front of a screen and taking turns showing their favorite YouTube videos (a sort of Internet jukebox)? There is a variety of more or less novel constraints that could be imposed along different dimensions:

    Time and Volume: A communications network owned by a local community, instead of a global or local corporation, could shut down for a certain period each day, if this is what the community decides. Or community members could agree on time quotas for using the network (e.g., no more than 4 hours per day or 150 hours per month). Such constraints would not only reduce energy consumption; they would also promote a healthier lifestyle and encourage face-to-face interactions.

    Imposing quotas on the speed (bandwidth) and volume (MB) that each person consumes is another way to restrict Internet consumption. People are actually already used to such limits, especially for 3G/4G connectivity. The difference is that a volume constraint does not necessarily translate into a time constraint (if someone uses low-volume services such as e-mail). So volume constraints could encourage the use of less voluminous services (e.g., downloading a movie in standard rather than high definition if it is to be watched on a low-resolution screen anyway), while time constraints might have the opposite effect (people using as much bandwidth as possible in their available time).

    However, to enforce such constraints, whether time- or volume-based, on an individual basis, the network needs to know who is connecting to it and to keep track of each person’s overall usage. This raises the question of privacy and identification online, and again the trade-off between trusting local and global institutions with this role. Enforcing time or volume constraints on groups of people instead (e.g., the residents of a cooperative housing complex) is an interesting option to consider when privacy is deemed important, as in the sketch below.
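
    As a concrete illustration, here is a minimal sketch, in Python, of the kind of volume accounting a community network could run; the class, limits, and group names are hypothetical, not taken from any deployed community-network software:

        from collections import defaultdict

        class VolumeQuota:
            """Track bytes used per subscriber against a monthly allowance.
            A 'subscriber' can be a person or, for privacy, a whole group."""

            def __init__(self, monthly_limit_bytes):
                self.limit = monthly_limit_bytes
                self.used = defaultdict(int)

            def record(self, subscriber, nbytes):
                # Called by the network's accounting layer for each flow.
                self.used[subscriber] += nbytes

            def allowed(self, subscriber):
                # Group-level accounting trades some fairness for privacy:
                # the network never learns which member used what.
                return self.used[subscriber] < self.limit

        # Example: a 20 GB monthly allowance shared by one housing cooperative.
        quota = VolumeQuota(20 * 1024**3)
        quota.record("coop-block-A", 512 * 1024**2)   # half a gigabyte of traffic
        print(quota.allowed("coop-block-A"))          # True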

    Devices: Energy consumption depends on the type of equipment used to access the Internet. For example, if access to the Internet happened only through desktop or laptop computers using Ethernet cables, instead of through mobile smartphones, the total energy consumed for a given service would be significantly reduced. Usage would also be dramatically affected: on the positive side, many people would spend less time online and use the Internet only for important tasks; on the negative side, others might stay at home more often and sacrifice outdoor activities in favor of Internet communications.
    https://rd.springer.com/chapter/10.1007/978-3-319-66592-4_13
  2. “I believe it’s important to tell people exactly how the information that they share on Facebook is going to be used.

    “That’s why, every single time you go to share something on Facebook, whether it’s a photo in Facebook, or a message, every single time, there’s a control right there about who you’re going to be sharing it with ... and you can change that and control that in line.

    “To your broader point about the privacy policy ... long privacy policies are very confusing. And if you make it long and spell out all the detail, then you’re probably going to reduce the per cent of people who read it and make it accessible to them.”
    https://www.theguardian.com/technolog...testimony-to-congress-the-key-moments
  3. After Barack Obama won reelection in 2012, voter targeting and other uses of Big Data in campaigns were all the rage. The following spring, at a conference titled Data-Crunched Democracy that Turow organized with Daniel Kreiss of the University of North Carolina, I listened as Ethan Roeder, the head of data analytics for Obama 2012, railed against critics. “Politicians exist to manipulate you,” he said, “and that is not going to change, regardless of how information is used.” He continued: “OK, maybe we have a new form of manipulation, we have micro-manipulation, but what are the real concerns? What is the real problem that we see with the way information is being used? Because if it’s manipulation, that ship has long since sailed.” To Roeder, the bottom line was clear: “Campaigns do not care about privacy. All campaigns care about is winning.”

    A few of us at the conference, led by the sociologist Zeynep Tufekci, argued that because individual voter data was being weaponized with behavioral-science insights in ways that could be finely tuned and also deployed outside of public view, the potential now existed to engineer the public toward outcomes that wealthy interests would pay dearly to control. No one listened. Until last year, you could not get a major US foundation to put a penny behind efforts to monitor and unmask these new forms of hidden persuasion.

    If there’s any good news in the last week of revelations about the data firm Cambridge Analytica’s 2014 acquisition (and now-notorious 2016 use) of the profile data of 50 million Facebook members, it’s this: Millions of people are now awake to just how naked and exposed they are in the public sphere. And clearly, people care a lot more about political uses of their personal data than they do about someone trying to sell them a pair of shoes. That’s why so many people are suddenly talking about deleting their Facebook accounts.
    http://www.other-news.info/2018/03/po...eeds-to-be-restored-to-internet-users
  4. For days now people have been talking about nothing but the Cambridge Analytica case tied to the last American elections; what follows is an extremely brief summary of what happened, with a few observations from an “insider”.

    In recent weeks Christopher Wylie, apparently granting a couple of juicy exclusives to the Guardian and the New York Times, denounced the improper use of a large quantity of data “harvested” from Facebook.
    And this is the first point to dwell on: how were these data obtained?
    The press spoke of data theft; the foreign press repeatedly used the verb “to harvest”, which literally means “to gather” and in technical jargon means running some script capable of collecting data automatically.
    In any case, at the time these data were collected, not much was needed to obtain the data not only of the target person but also of their entire friends list, something that has since become impossible.

    The reasoning behind this data collection is thus the same one that underlies Facebook itself: if you need data, people will probably give them to you spontaneously.
    The same thing still happens today, every day, on the internet.
    And mind you, I am talking about the internet, not just Facebook.
    http://www.technicoblog.com/cambridge-analytica-cio-che-facciamo.htm
  5. Again: where, then, is the scandal of these past days? The scandal lies in the evidence of a fundamental error in the conception of human interactions, the conception that Mark Zuckerberg has, by his own admission in his much-awaited post-Cambridge Analytica statement, imposed since 2007: the idea of building a “web where you are social by default”, where sharing is the norm. A principle that is structurally opposed to the protection of individual privacy, which rests on confidentiality as the norm for one’s personal data.

    Zuckerberg explains it very well in his most recent statement, correctly identifying in that philosophical and anthropological error the root of the storm he now has to navigate: “In 2007, we launched the Facebook Platform in the conviction (‘vision’) that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, your address book should show their photos. To do this, we enabled people to log into apps and share who their friends were and some information about them.”

    This is what led Kogan, in 2013, to obtain access to the data of millions of people. And certainly those data have immense scientific value, and it is right that research, if conducted in full respect of the informed consent of the users who become experimental subjects, should be able to access them. For academic purposes only, though. Even so, the famous 2014 experiment conducted by Facebook itself on manipulating the emotions of hundreds of thousands of users, who were deliberately shown more positive or more negative content, had already demonstrated that even when no commercial ends are involved the question is ambiguous and complex. And that no, accepting convoluted terms of use that nobody reads is not enough to say that every user has, by the mere fact of having agreed to be on Facebook, consented to become indiscriminately a lab rat enrolled in experiments of which they know nothing.

    And yet it was the platform itself that realized, in that very same year, that things could not go on like this: that in this way Facebook was losing control over which third parties had access to its users’ data. The policy therefore changed, and since then “friends” have had to consent to the processing of their data by an app. The new philosophy, Albright recalls, is “people first”. But it was too late. And the inability to truly regain possession of that mass of information, demonstrated by the Cambridge Analytica case (can it be that Facebook had to learn from the newspapers that the company had not deleted the data it claimed to have deleted, and that it must now conduct a serious audit to verify this, showing it has no idea whether they were deleted or not?), makes it clear that the problem goes well beyond this single case: it is systemic.

    To put it more clearly: as Albright writes, the first version of the Facebook Graph API, v1.0 (what app developers could obtain from the social network between 2010, when it launched, and 2014, when the policy changed) made it possible to obtain the following data, not about those who signed up for a given app but from their unwitting friends: “about, actions, activities, birthday, check-ins, education, events, games, groups, hometown, interests, likes, location, notes, statuses, tags, photos, questions, relationships, religion/politics, subscriptions, websites, work history”. Could anyone really believe it was possible to control where all these data ended up, for millions and millions of people?

    And is Facebook really discovering this only now? In 2011 the American Federal Trade Commission had already flagged the issue as problematic. It taught them nothing.
    https://www.valigiablu.it/facebook-cambridge-analytica-scandalo
  6. Let’s Encrypt is a free and open certificate authority developed by the Internet Security Research Group (ISRG). Certificates issued by Let’s Encrypt are trusted by almost all browsers today.

    In this tutorial, we’ll provide step-by-step instructions on how to secure your Nginx server with Let’s Encrypt using the certbot tool on CentOS 7.
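
    The tutorial’s core steps amount to installing certbot with its Nginx plugin and requesting a certificate. A minimal sketch of scripting them from Python, assuming a CentOS 7 host with EPEL available; the domain and e-mail address are placeholders:

        import subprocess

        def sh(*cmd):
            """Run a command, raising if it fails."""
            subprocess.run(cmd, check=True)

        # Install certbot and its Nginx plugin (CentOS 7 / EPEL package names).
        sh("yum", "-y", "install", "epel-release")
        sh("yum", "-y", "install", "certbot", "python2-certbot-nginx")

        # Request a certificate and let certbot edit the Nginx server block.
        # --non-interactive and --agree-tos suit scripted runs.
        sh("certbot", "--nginx",
           "-d", "example.com", "-d", "www.example.com",
           "--non-interactive", "--agree-tos", "-m", "admin@example.com")

        # Certificates last 90 days; a daily cron job running
        # "certbot renew" keeps them current.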
    https://linuxize.com/post/secure-nginx-with-let-s-encrypt-on-centos-7
  7. Following the publication of the document drawn up by mutual agreement between INE and Facebook, there remains no doubt that the company behind the well-known social platform is under no formal obligation to fight the so-called “fake news” that has been so widely discussed in recent days, nor does it appear to have any intention of doing so.

    We should also remember that not far from Mexico, in Honduras, Congress is debating a bill that attempts, in a far from transparent manner, to curb the spread of false news, including news concerning elections.
    https://it.globalvoices.org/2018/03/l...ource=twitter.com&utm_campaign=buffer
  8. Who is doing the targeting?

    Albright: It really depends on the platform and the news event. Just the extensiveness of the far right around the election: I can’t talk about that right this second, but I can say that, very recently, what I’ve tended to see from a linking perspective and a network perspective is that the left, and even to some degree center-left news organizations and journalists, are really kind of isolated in their own bubble, whereas the right have very much populated most of the social media resources and use YouTube extensively. This study I did over the weekend shows the depth of the content and how much reach they have. I mean, they’re everywhere; it’s almost ubiquitous. They’re ambient in the media information ecosystem. It’s really interesting from a polarization standpoint as well, because self-identified liberals and self-identified conservatives have different patterns in unfriending people and in not friending people who have the opposite of their ideology.

    From those initial maps of the ad tech and hyperlink ecosystem of the election-related partisan news realm, I dove into every platform. For example, I did a huge study on YouTube last year. It led me to almost 80,000 fake videos that were being auto-scripted and batch-uploaded to YouTube. They were all keyword-stuffed. Very few of them had more than a handful of views, so what these were really about was impact: this was a gaming system. My guess is that they were meant to skew autocomplete or search suggestions in YouTube. It couldn’t have been about monetization, because the videos had very few views; the sheer volume wouldn’t have made sense with YouTube’s business model.

    Someone had set up a script that detected social signals off of Twitter. It would go out and scrape related news articles, pull the text back in, and read it out in a computer voice, a Siri-type voice. It would pull images from Google Images, create a slideshow, package that up and wrap it, upload it to YouTube, hashtag it and load it with keywords. There were so many of these and they were going up so fast that as I was pulling data from the YouTube API dozens more would go up.
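
    Albright mentions pulling data from the YouTube API while the uploads were still appearing. A minimal sketch of that kind of query against the YouTube Data API v3; the API key and search term are placeholders, and the low-view filter is an illustrative heuristic rather than his published method:

        # pip install google-api-python-client
        from googleapiclient.discovery import build

        youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

        # Search recent videos for a keyword and collect their IDs.
        search = youtube.search().list(
            q="example trending keyword",  # placeholder query
            part="id", type="video", order="date", maxResults=50,
        ).execute()
        ids = [item["id"]["videoId"] for item in search["items"]]

        # Fetch view counts; keyword-stuffed batch uploads tend to sit
        # near zero views, so flag those for manual inspection.
        stats = youtube.videos().list(part="statistics", id=",".join(ids)).execute()
        for video in stats["items"]:
            views = int(video["statistics"].get("viewCount", 0))
            if views < 10:
                print(video["id"], views)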

    I worked with The Washington Post on a project where I dug into Twitter and got, for the last week leading up to the election, a more or less complete set of Twitter data for a group of hashtags. I found what were arguably the five most influential bots through that last week, and the top one was not a completely automated account: it was a person.

    The Washington Post’s Craig Timberg looked around, actually found this person, and contacted him, and he agreed to an interview at his house. It was just unbelievable. It turns out that this guy was almost 70 and almost blind.

    From Timberg’s piece: “Sobieski’s two accounts…tweet more than 1,000 times a day using ‘schedulers’ that work through stacks of his own pre-written posts in repetitive loops. With retweets and other forms of sharing, these posts reach the feeds of millions of other accounts, including those of such conservative luminaries as Fox News’s Sean Hannity, GOP strategist Karl Rove and Sen. Ted Cruz (R-Tex.), according to researcher Jonathan Albright…’Life isn’t fair,’ Sobieski said with a smile. ‘Twitter in a way is like a meritocracy. You rise to the level of your ability….People who succeed are just the people who work hard.'”

    The most dangerous, most influential accounts are often accounts that are supplemented with human input, and with a strong human identity that may already be established before the elections come in.
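
    A minimal sketch of how such an influence ranking might be computed from a collected set of hashtag tweets; the record format and scoring rule here are hypothetical illustrations, not the study’s actual method or Twitter’s API schema:

        from collections import Counter, defaultdict

        # One record per collected tweet; real Twitter data would carry
        # many more fields than these two.
        tweets = [
            {"user": "exampleA", "retweet_count": 40},
            {"user": "exampleB", "retweet_count": 3},
            {"user": "exampleA", "retweet_count": 11},
        ]

        volume = Counter(t["user"] for t in tweets)      # tweets per account
        reach = defaultdict(int)                         # retweets earned
        for t in tweets:
            reach[t["user"]] += t["retweet_count"]

        # Crude score: accounts that both post heavily (like the schedulers
        # described above) and get amplified heavily rise to the top.
        score = {u: volume[u] * (1 + reach[u]) for u in volume}
        for user, s in sorted(score.items(), key=lambda kv: -kv[1])[:5]:
            print(user, s)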


    I mean, I do hold that it’s not okay to come in and try to influence someone’s election; when I look at these YouTube videos, I think: Someone has to be funding this. In the case of the YouTube research, though, I looked at this more from a systems/politics perspective.

    We have a problem that’s greater than the one-off abuse of technologies to manipulate elections. This thing is parasitic. It’s growing in size. The last week and a half are some of the worst things I’ve ever seen, just in terms of the trending. YouTube is having to manually go in and take these videos out. YouTube’s search suggestions, especially in the context of fact-checking, are completely counter-productive. I think Russia is a side effect of our larger problems.

    Why is it getting worse?

    Albright: There are more people online, they’re spending more time online, there’s more content, people are becoming more polarized, algorithms are getting better, the amount of data that platforms have is increasing over time.

    I think one of the biggest things that’s missing from political science research is that it usually doesn’t consider the amount of time that people spend online. Between the 2012 election and the 2016 election, smartphone use went up by more than 25 percent. Many people spend all of their waking time somehow connected.

    This is where psychology really needs to come in. There’s been very little psychology work done looking at this from an engagement perspective, looking at the effect of seeing things in the News Feed but not clicking out. Very few people actually click out of Facebook. We really need social psychology, we really need humanities work to come in and pick up the really important pieces. What are the effects of someone seeing vile or conspiracy news headlines in their News Feed from their friends all day?

    Owen: This is so depressing.
    http://www.niemanlab.org/2018/02/news...-what-to-do-as-things-crash-around-us
  9. Four companies dominate our daily lives unlike any other in human history: Amazon, Apple, Facebook, and Google. We love our nifty phones and just-a-click-away services, but these behemoths enjoy unfettered economic domination and hoard riches on a scale not seen since the monopolies of the gilded age. The only logical conclusion? We must bust up big tech.
    https://www.esquire.com/news-politics...15895746/bust-big-tech-silicon-valley
  10. When we look at digital technology and platforms, it’s always instructive to remember that they exist to extract data. The longer you are on the platform, the more you produce and the more can be extracted from you. Polarization keys engagement, and engagement/attention are what keep us on platforms. In the words of Tristan Harris, the former Google Design Ethicist and one of the earliest SV folks to have the scales fall from his eyes: “What people don’t know about or see about Facebook is that polarization is built into the business model,” Harris told NBC News. “Polarization is profitable.”

    David Golumbia’s description of the scholarly concept of cyberlibertarianism is useful here (emphasis mine):

    In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions. In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.

    So the cui bono of digital polarization are the wealthy, the powerful, the people with so much to gain from promoting systems that maintain the status quo, despite the language of freedom, democratization, and community that features so prominently when people like Facebook co-founder Mark Zuckerberg or Twitter co-founder and CEO Jack Dorsey talk about technology. Digital technology in general, and platforms like Facebook, YouTube, and Twitter specifically, exist to promote polarization and maintain the existing concentration of power.

    To the extent that Silicon Valley is the seat of technological power, it’s useful to note that the very ground of what we now call Silicon Valley is built on a foundation of segregating black and white workers. Richard Rothstein’s The Color of Law describes auto workers in 1950s California:

    So in 1953 the company (Ford) announced it would close its Richmond plant and reestablish operations in a larger facility fifty miles south in Milpitas, a suburb of San Jose, rural at the time. (Milpitas is a part of what we now call Silicon Valley.)

    Because Milpitas had no apartments, and houses in the area were off limits to black workers—though their incomes and economic circumstances were like those of whites on the assembly line—African Americans at Ford had to choose between giving up their good industrial jobs, moving to apartments in a segregated neighborhood of San Jose, or enduring lengthy commutes between North Richmond and Milpitas.
    https://hypervisible.com/polarization/power-technology


Online Bookmarks of M. Fioretti: tagged with "percloud" (page 1 of 24)

Propulsed by SemanticScuttle