Tags: facebook* + surveillance*

115 bookmark(s), sorted by date (descending)

  1. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish this first phase of their work and disappear completely from view, we have a brief opportunity to identify and catalogue the processes shaping our drift to a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.

    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, a sense that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
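The cross-site tracking described above can be modelled in a few lines. This is an illustrative sketch only, not Facebook's actual code: the `Tracker` class and cookie name are invented for the example. The point is the mechanism: every page that embeds a third-party widget (a like button, a tracking pixel) causes the browser to send that third party the same cookie, so one server can link a single anonymous ID to every site a person visits.

```python
# Illustrative model of third-party cookie tracking (all names invented):
# each widget load sends the tracker's cookie plus the embedding page,
# letting the tracker build a browsing profile keyed on one cookie ID.
import uuid
from collections import defaultdict

class Tracker:
    """Minimal stand-in for a third-party widget server."""

    def __init__(self):
        self.profiles = defaultdict(list)  # cookie ID -> pages visited

    def handle_request(self, cookies, referer):
        """Handle one widget load; return the cookie jar to store."""
        uid = cookies.get("tr_id") or uuid.uuid4().hex  # new ID on first sight
        self.profiles[uid].append(referer)              # log the embedding site
        return {"tr_id": uid}

tracker = Tracker()
jar = {}  # the browser's cookie jar for the tracker's domain
for site in ["news.example", "shop.example", "health.example"]:
    jar = tracker.handle_request(jar, site)

uid = jar["tr_id"]
print(tracker.profiles[uid])  # one ID now linked to all three sites
```

The same ID comes back on every request, which is why visits to unrelated sites collapse into a single profile, exactly the "endless surveillance" the excerpt describes.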

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised—but, given the presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
  2. Mark Zuckerberg also launched Facebook with a disdain for intrusive advertising, but it wasn’t long before the social network giant became Google’s biggest competitor for ad dollars. After going public with 845 million users in 2012, Facebook became a multibillion-dollar company and Zuckerberg one of the richest men on Earth, but with only a promise that the company would figure out how to monetize its platform.

    Facebook ultimately sold companies on its platform by promising “brand awareness” and the best possible data on what consumers actually liked. Brands could start their own Facebook pages, which people would actually “like” and interact with. This provided unparalleled information about what company each individual person wanted to interact with the most. By engaging with companies on Facebook, people gave corporate marketing departments more information than they could have ever dreamed of buying, but here it was offered up free.

    This was the “grand bargain,” as Columbia University law professor Tim Wu called it in his book, The Attention Merchants, that users struck with corporations. Wu wrote that Facebook’s “billions of users worldwide were simply handing over a treasure trove of detailed demographic data and exposing themselves to highly targeted advertising in return for what, exactly?”

    In other words: We will give you every detail of our lives and you will get rich by selling that information to advertisers.

    European regulators are now saying that bargain was a bad deal. The big question that remains is whether their counterparts in the U.S. will follow their lead.
  3. Facebook's ability to figure out the "people we might know" is sometimes eerie. Many a Facebook user has been creeped out when a one-time Tinder date or an ex-boss from 10 years ago suddenly pops up as a friend recommendation. How does the big blue giant know?

    While some of these incredibly accurate friend suggestions are amusing, others are alarming, such as this story from Lisa*, a psychiatrist who is an infrequent Facebook user, mostly signing in to RSVP for events. Last summer, she noticed that the social network had started recommending her patients as friends—and she had no idea why.

    "I haven't shared my email or phone contacts with Facebook," she told me over the phone.

    The next week, things got weirder.

    Most of her patients are senior citizens or people with serious health or developmental issues, but she has one outlier: a 30-something snowboarder. Usually, Facebook would recommend he friend people his own age, who snowboard and jump out of planes. But Lisa told me that he had started seeing older and infirm people, such as a 70-year-old
    by M. Fioretti (2018-01-28)
    Europe has propelled past the United States when it comes to constraining the abuses of Big Tech. In June, the European Union fined Google $2.7 billion for steering web users to its shopping site, and investigations remain active over similar treatment on Android phones. European regulators fined Facebook for lying about whether it could match user profiles with phone numbers on its messaging acquisition WhatsApp. They demanded Apple repay $15.3 billion in back taxes in Ireland. And they forced Amazon to change its e-book contracts, which they claimed inappropriately squeezed publishers.

    Trust-Busted: In 2002, Microsoft Chairman Bill Gates had to testify at federal court in his company's antitrust case. The public trial led Microsoft to soften its aggressive strategy against rivals.

    Unfortunately, these actions were treated mainly as the cost of doing business. The Facebook fine totaled not even 1 percent of the $22 billion purchase price for WhatsApp, and it allowed the two companies to remain partnered. Government policy, in effect, has “told these companies that the smart thing to do is to lie to us and break the law,” said Scott Galloway in his presentation. Google’s remedy in the shopping case still forces rivals to bid for placement at the top of the page, with Google Shopping spun off as a stand-alone competitor. This does weaken Google’s power and solves the “equal treatment” problem, but it doesn’t protect consumers, who will ultimately pay for those costly bids. “The EU got a $2.7 billion fine to hold a party and bail out Greek banks,” said Gary Reback, an antitrust lawyer and critic of the EU’s actions. “No amount of money will make a difference.”

    However, one thing might: Europe’s increasing move toward data privacy. The General Data Protection Regulation (GDPR), scheduled for implementation in May 2018, empowers European web users to affirmatively opt out of having their data collected, with high penalties for non-compliance. Consumers will be able to obtain their personal data and learn how it is used. They can request that their data be erased completely (known as the “right to be forgotten”) as well as prohibited from sale to third parties. Platforms could not condition use of their products on data collection. A separate, not-yet-finalized regulation called ePrivacy would forbid platforms from tracking users across separate apps, websites, and devices.
  5. When Facebook first came to Cambodia, many hoped it would help to usher in a new period of free speech, amplifying voices that countered the narrative of the government-friendly traditional press. Instead, the opposite has happened. Prime Minister Hun Sen is now using the platform to promote his message while jailing his critics, and his staff is doing its best to exploit Facebook’s own rules to shut down criticism — all through a direct relationship with the company’s staff.

    In Cambodia, Prime Minister Hun Sen has held power since 1998, a reign characterized by systematic looting, political patronage and violent suppression of human rights; when opposition parties used Facebook to organize a strong showing in the 2013 elections, Hun Sen turned to the tool to consolidate his slipping hold on power.

    In this he was greatly aided by Fresh News, a Facebook-based political tabloid analogous to far-right partisan US news sources like Breitbart, which acted as a literal stenographer for Hun Sen, transcribing his remarks in "scoops" that vilify opposition figures and dissidents without evidence. Hun Sen and Fresh News successfully forced an opposition leader into exile in France, and mined Facebook for the identities of political opponents, who were targeted for raids and arrests.

    The Cambodian government has cultivated a deep expertise in Facebook's baroque acceptable conduct rules, and they use this expertise to paint opposition speech as in violation of Facebook's policies, using the company's anti-abuse systems to purge their rivals from the platform.

    Offline, the government has targeted the independent press with raids and arrests, shutting down most of the media it does not control, making Facebook -- where the government is able to silence people with its rules-lawyering -- the only place for independent analysis and criticism of the state.

    Then, last October, Facebook used Cambodia in an experiment to de-emphasize news sources in people's feeds -- a change it will now roll out worldwide -- and hid those remaining independent reporters from the nation's view.

    Opposition figures have worked with independent researchers to show that the government is buying Facebook likes from clickfarms in the Philippines and India, racking up thousands of likes for Khmer-language posts in territories where Khmer isn't spoken. They reported these abuses to Facebook, hoping to get government posts downranked, but Facebook executives gave them the runaround or refused to talk to them. No action was taken on these violations of Facebook's rules.
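The mismatch the researchers relied on (thousands of likes on Khmer-language posts from countries where Khmer isn't spoken) amounts to a simple anomaly check. The sketch below is hypothetical, not the researchers' actual method: the `SPEAKS` map, function names, and 80% threshold are all invented for illustration.

```python
# Hedged sketch of a clickfarm-like detection heuristic (names and
# threshold are illustrative): flag posts whose likes come mostly from
# countries where the post's language is not spoken.

SPEAKS = {  # toy map: language -> countries where it is widely spoken
    "khmer": {"KH"},
}

def foreign_like_share(post_language, like_countries):
    """Fraction of likes originating where the language isn't spoken."""
    if not like_countries:
        return 0.0
    home = SPEAKS.get(post_language, set())
    foreign = sum(1 for c in like_countries if c not in home)
    return foreign / len(like_countries)

def looks_bought(post_language, like_countries, threshold=0.8):
    """Flag a post when the foreign-like share exceeds the threshold."""
    return foreign_like_share(post_language, like_countries) >= threshold

# A Khmer-language post whose likes come mostly from PH and IN:
likes = ["PH"] * 600 + ["IN"] * 350 + ["KH"] * 50
print(looks_bought("khmer", likes))  # True: 95% of likes are foreign
```

Real systems would need per-country audience baselines rather than a flat threshold, but even this crude check would have surfaced the pattern the opposition reported.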

    Among other things, the situation in Cambodia is a cautionary tale about the risks of "anti-abuse" policies, which are often disproportionately useful to trolls: people who devote long hours and careful study to staying on the right side of the lines that companies draw up, and who scour these systems for victims they can taunt into violating the rules, getting the platforms to terminate them.

    When ordinary Facebook users find a post objectionable, they click a link on the post to report it. Then a Facebook employee judges whether it violates the platform’s rules and should be taken down. In practice, it’s a clunky process that involves no direct communication or chance for appeal, and the decisions made by Facebook can seem mysterious and arbitrary.

    But for the Cambodian government, that process has been streamlined by Facebook.

    Duong said every couple of months, his team would email an employee they work with at Facebook to request a set of accounts be taken down, either based on language they used or because their accounts did not appear to be registered to their real names, a practice Facebook’s rules forbid. Facebook often complies, he said.

    Clare Wareing, a spokesperson for Facebook, said the company removes “credible threats, hate speech, and impersonation profiles when we’re made aware of them.” Facebook says it only takes down material that violates its policies.
  6. dismissing Facebook’s change as a mere strategy credit is perhaps to give short shrift to Zuckerberg’s genuine desire to leverage Facebook’s power to make the world a better place. Zuckerberg argued in his 2017 manifesto Building Global Community:

    Progress now requires humanity coming together not just as cities or nations, but also as a global community. This is especially important right now. Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial. Every year, the world got more connected and this was seen as a positive trend. Yet now, across the world there are people left behind by globalization, and movements for withdrawing from global connection. There are questions about whether we can make a global community that works for everyone, and whether the path ahead is to connect more or reverse course.

    Our job at Facebook is to help people make the greatest positive impact while mitigating areas where technology and social media can contribute to divisiveness and isolation. Facebook is a work in progress, and we are dedicated to learning and improving. We take our responsibility seriously.

    That, though, leaves the question I raised in response to that manifesto:

    Even if Zuckerberg is right, is there anyone who believes that a private company run by an unaccountable all-powerful person that tracks your every move for the purpose of selling advertising is the best possible form said global governance should take?

    My deep-rooted suspicion of Zuckerberg’s manifesto has nothing to do with Facebook or Zuckerberg; I suspect that we agree on more political goals than not. Rather, my discomfort arises from my strong belief that centralized power is both inefficient and dangerous: no one person, or company, can figure out optimal solutions for everyone on their own, and history is riddled with examples of central planners ostensibly acting with the best of intentions — at least in their own minds — resulting in the most horrific of consequences; those consequences sometimes take the form of overt costs, both economic and humanitarian, and sometimes those costs are foregone opportunities and innovations. Usually it’s both.

    Facebook’s stated reasoning for this change only heightens these contradictions: if indeed Facebook as-is harms some users, fixing that is a good thing. And yet the same criticism becomes even more urgent: should the personal welfare of 2 billion people be Mark Zuckerberg’s personal responsibility?
    "We will continue to work with the French authorities to ensure that users understand what information is collected and how it is used," WhatsApp said in an emailed statement. "We are committed to resolving the different, and at times contradictory, concerns raised by data protection authorities through a common European approach, before the new EU-wide data protection rules come into force in May 2018."

    Data transfers from WhatsApp to Facebook take place in part without the user's consent, the French authority reiterated, also rejecting WhatsApp's argument that the company is subject only to United States law. The French warning is "a formal notice, not a sanction", but the messaging giant could face fines at a later stage.
  8. Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

    These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media

    Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

    The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.

    What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen.
  9. the U.S. Government – meaning, at the moment, the Trump administration – has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration, or any other government, to compel social media platforms to delete and block anyone it wants to be silenced? As the ACLU’s Jennifer Granick told the Times:
  10. I do believe that this time is different, the beginning of a massive shift, and I believe it’s the fault of these social networks.

    One of the problems is that these platforms act, in many ways, like drugs. Facebook, and every other social-media outlet, knows that all too well. Your phone vibrates a dozen times an hour with alerts about likes and comments and retweets and faves. The combined effect is one of just trying to suck you back in, so their numbers look better for their next quarterly earnings report. Sean Parker, one of Facebook’s earliest investors and the company’s first president, came right out and said what we all know: the whole intention of Facebook is to act like a drug, by “giving you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever.” That, Parker said, was by design. These companies are “exploiting a vulnerability in human psychology.” Former Facebook executive Chamath Palihapitiya has echoed this, too. “Do I feel guilty?” he asked rhetorically on CNN about the role Facebook is playing in society. “Absolutely I feel guilt.”

    And then there’s the biggest reason why people are abandoning the platforms: the promise of connection has turned out to be a reality of division. We’ve all watched the way Donald J. Trump used social media to drive a wedge between us all, the way he tweets his sad and pathetic insecurities out to the world, without a care for how calling an equally insecure rogue leader a childish name might put us all on the brink of nuclear war. At some point, watching it all happen in real time makes you question what you’re doing with your life. As for conversing with our fellow Americans, we’ve all tried, unsuccessfully, to have a conversation on these platforms, only to watch it devolve into a shouting match, or a pile-on from perfect strangers because your beliefs aren’t the same as theirs. Years ago, a Facebook executive told me that the biggest reason people unfriend each other is because they disagree on an issue. The executive jokingly said, “Who knows, if this keeps up, maybe we’ll end up with people only having a few friends on Facebook.” Perhaps worst of all, we’ve all watched as Russia has taken these platforms and used them against us in ways no one could have comprehended a decade ago.


Page 1 of 12 - Online Bookmarks of M. Fioretti: tagged with "facebook+surveillance"

About - Propulsed by SemanticScuttle