mfioretti: algorithms* + surveillance*

Bookmarks on this page are managed by an admin user.

17 bookmark(s), sorted by date (descending)

  1. Similarly, GOOG in 2014 started reorganizing itself to focus on artificial intelligence only. In January 2014, GOOG bought DeepMind, and in September it shut down Orkut (one of its few social products to find momentary success in some countries) for good. The Alphabet Inc restructuring was announced in August 2015, though it likely took many months of meetings and bureaucracy to prepare. The restructuring was important to focus the web-oriented departments at GOOG on a simple mission. GOOG sees no future in the plain Search market: it has announced that it is migrating “From Search to Suggest” (in Eric Schmidt’s own words) and becoming an “AI first company” (in Sundar Pichai’s own words). GOOG is currently slightly behind FB in how fast it is growing its dominance of the web, but given its technical expertise, vast budget, influence and vision, in the long run its AI assets will play a massive role on the internet. They know what they are doing.

    These are no longer the same companies they were four years ago. GOOG is no longer just an internet company; it is the knowledge internet company. FB is no longer just an internet company; it is the social internet company. They used to compete, and that competition kept the internet market diverse. Today, however, they seem mostly satisfied with their orthogonal dominance over separate parts of the Web, and we are losing diversity of choice. Which leads us to another part of the internet: e-commerce and AMZN.

    AMZN does not focus on making profit.
    https://staltz.com/the-web-began-dying-in-2014-heres-how.html
    Voting 0
  2. "All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook," he says.

    The data our interactions provide feeds the complex algorithms that power the social media site, where, as Mr Joler puts it, our behaviour is transformed into a product.

    Trying to untangle that largely hidden process proved to be a mammoth task.

    "We tried to map all the inputs, the fields in which we interact with Facebook, and the outcome," he says.

    "We mapped likes, shares, search, update status, adding photos, friends, names, everything our devices are saying about us, all the permissions we are giving to Facebook via apps, such as phone status, wifi connection and the ability to record audio."

    All of this research provided only a fraction of the full picture. So the team looked into Facebook's acquisitions, and scoured its myriad patent filings.

    The results were astonishing.

    Visually arresting flow charts, which take hours to absorb fully, show how the data we give Facebook is used to calculate our ethnic affinity (Facebook's term), sexual orientation, political affiliation, social class, travel schedule and much more.
    [Image caption: Share Lab presents its information in minutely detailed tables and flow charts. Image copyright Share Lab.]

    One map shows how everything - from the links we post on Facebook, to the pages we like, to our online behaviour in many other corners of cyber-space that are owned or interact with the company (Instagram, WhatsApp or sites that merely use your Facebook log-in) - could all be entering a giant algorithmic process.

    And that process allows Facebook to target users with terrifying accuracy, with the ability to determine whether they like Korean food, the length of their commute to work, or their baby's age.
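
    How signals could feed such an inference is easy to sketch. Below is a minimal, hypothetical illustration: a logistic regression trained on a user-by-page matrix of likes learns to predict a personal attribute from a few tell-tale pages. The data is synthetic and the model choice is an assumption made here for illustration; this is not Facebook's actual pipeline.

    ```python
    # Hypothetical sketch: inferring a personal attribute from "like" signals.
    # Synthetic data; not Facebook's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_users, n_pages = 1000, 50
    likes = rng.integers(0, 2, size=(n_users, n_pages))  # user x page "like" matrix

    # Assumed ground truth: the attribute correlates with liking a few pages.
    signal_pages = [3, 17, 42]
    logit = likes[:, signal_pages].sum(axis=1) * 1.5 - 2.0
    attribute = rng.random(n_users) < 1 / (1 + np.exp(-logit))

    model = LogisticRegression(max_iter=1000).fit(likes, attribute)
    print(f"training accuracy: {model.score(likes, attribute):.2f}")

    # The learned coefficients point straight back at the tell-tale pages.
    print("most predictive pages:", sorted(np.argsort(model.coef_[0])[-3:].tolist()))
    ```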

    Another map details the permissions many of us willingly give Facebook via its many smartphone apps, including the ability to read all text messages, download files without permission, and access our precise location.

    Individually, these are powerful tools; combined they amount to a data collection engine that, Mr Joler argues, is ripe for exploitation.

    "If you think just about cookies, just about mobile phone permissions, or just about the retention of metadata - each of those things, from the perspective of data analysis, are really intrusive."
    http://www.bbc.com/news/business-39947942
    Voting 0
  3. Facebook’s entire project, when it comes to news, rests on the assumption that people’s individual preferences ultimately coincide with the public good, and that if it doesn’t appear that way at first, you’re not delving deeply enough into the data. By contrast, decades of social-science research show that most of us simply prefer stuff that feels true to our worldview even if it isn’t true at all, and that the mining of all those preference signals is likely to lead us deeper into bubbles rather than out of them.

    What’s needed, Zuckerberg argues, is some global superstructure to advance humanity.

    This is not an especially controversial idea; Zuckerberg is arguing for a kind of digital-era version of the global institution-building that the Western world engaged in after World War II. But because he is a chief executive and not an elected president, there is something frightening about his project. He is positioning Facebook — and, considering that he commands absolute voting control of the company, he is positioning himself — as a critical enabler of the next generation of human society. A minor problem with his mission is that it drips with megalomania, albeit of a particularly sincere sort. With his wife, Priscilla Chan, Zuckerberg has pledged to give away nearly all of his wealth to a variety of charitable causes, including a long-term medical-research project to cure all disease. His desire to take on global social problems through digital connectivity, and specifically through Facebook, feels like part of the same impulse.

    Yet Zuckerberg is often blasé about the messiness of the transition between the world we’re in and the one he wants to create through software. Building new “social infrastructure” usually involves tearing older infrastructure down. If you manage the demolition poorly, you might undermine what comes next.
    https://www.nytimes.com/2017/04/25/ma...n-facebook-fix-its-own-worst-bug.html
    Voting 0
  4. I don't want to drag this out, but back then, as today, I had no control whatsoever over the personal data and information collected, voluntarily or forcibly, at my every move; what somehow saved me in my troubled adolescence (not always, in truth) was control over the social situation and its context.

    I had no control over data and information with the village butcher, and I cannot expect to have it today on the web with Google, Facebook and, above all, the thousand state agencies afflicted, for various and sometimes commendable reasons, with informational bulimia. But back then I understood, and to some degree governed, the simple technical rules (the village streets, the bus timetables) and the social rules of proximity in my own territory.

    Today I no longer can. And it is not only because of the quantity of data captured and stored at every step, but because of the total opacity of the context and of the technical and social rules that govern our digital lives.

    Unknown algorithms, unfathomable even to their own creators, reconstruct our image, produce scores, and judge relevance and suitability entirely without our knowledge. Banks, insurers, companies of every stripe and size (soon the Internet of Things will astonish us) and above all the State, with its thousand verification and control agencies, access every piece of information stripped of its context, creating relations and correlations we are unaware of but whose consequences we suffer daily.

    We cannot prevent all this; big data and open data will save the world, fine. But we can and must demand to know the who, the how and the when. We need to know what the context is and what the rules are; only then can we devise strategies, not to commit crimes or evade the law (as part of the judiciary claims), but to exercise the fundamental rights of the person.

    In the physical world we know when the State has the right to enter our home, and under what conditions it may limit our personal freedoms of movement and expression; in the digital world we do not know, and do not even ask, who may take possession of our data, of our devices through hidden software, of our lives, or when and under what conditions. We supinely accept an intolerable opacity.

    I have had something to hide for as long as I can remember: confidences that vary with the interlocutor, the time, the place and the context. And I do not want, for myself or my children, a society stupidly disciplined by constant surveillance and lobotomized by algorithms. I would like a society in which the asymmetry of information is the exact opposite of today's, where unfortunately the citizen is totally transparent while the State and its rules are opaque and uncertain.
    Carlo Blengino

    A criminal defense lawyer, he litigates new-technology law, copyright and data-protection cases in the courtroom. He is a fellow of the NEXA Center for Internet & Society at the Politecnico di Torino. @CBlengio on Twitter
    http://www.ilpost.it/carloblengino/2016/11/02/ho-qualcosa-da-nascondere
    Voting 0
  5. Your Facebook feed might be one of angst and despair, or of celebration and "I told you so's." It depends on the people you're friends with and the online community you've created with your clicks, likes and shares.

    Facebook's algorithm knows what you like, based on the videos you watch, the people you talk to, and the content you interact with. It then shows you more of the same. This creates what are known as "filter bubbles": you begin to see only content you like and agree with, while Facebook hides dissenting points of view.

    This means news on Facebook comes with confirmation bias -- it reinforces what you already think is true -- and people are increasingly frustrated.
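
    The feedback loop behind those bubbles is simple enough to simulate. A minimal sketch, with invented topics and scores: the feed ranks by estimated affinity, agreement drives clicks, and clicks drive the next ranking.

    ```python
    # Toy filter-bubble loop: an engagement-ranked feed converges on one viewpoint.
    TOPICS = ["politics-left", "politics-right", "sports", "science", "food"]
    affinity = {t: 1.0 for t in TOPICS}    # the model's estimate of your tastes
    user_clicks_on = "politics-left"       # what this user actually engages with

    def feed(k=3):
        """Show the k topics with the highest estimated affinity."""
        return sorted(TOPICS, key=lambda t: -affinity[t])[:k]

    for _ in range(20):
        for topic in feed():
            if topic == user_clicks_on:
                affinity[topic] += 1.0     # engagement boosts future ranking
            else:
                affinity[topic] -= 0.1     # ignored items sink

    print(feed())  # the user's own viewpoint is now permanently locked in at the top
    ```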

    Facebook denies it's a media company, yet almost half of U.S. adults get news from Facebook.

    When Facebook fired its human curators and began to rely on algorithms to surface popular stories earlier this year, fake news proliferated.

    Viral memes and propaganda spread among people with similar beliefs and interests. It's cheaper and easier to create and spread ideological disinformation than deeply researched and reported news. And it comes from all over -- teens in Macedonia are responsible for a large portion of fake pro-Trump news, according to a BuzzFeed analysis.

    Filter bubbles became especially problematic during the presidential election.

    Hyperpartisan news sites and fake websites distributed false stories about voter fraud, election conspiracies, and the candidates' pasts that spread like wildfire on Facebook. It was more prevalent on right-leaning Facebook pages. As CNNMoney's Brian Stelter said in response to the growing number of false viral stories, people should have a "triple check before you share" rule.

    Today, many people are shocked by Trump's victory. Words of fear and sorrow fill their Facebook feeds, and even those with thousands of friends are probably only seeing posts that echo their feelings.

    But if you voted for Trump, chances are your feed reflects the opposite. You might see a cascade of #MakeAmericaGreatAgain hashtags and friends celebrating.
    http://money.cnn.com/2016/11/09/techn...2Fedition_us+%28RSS%3A+CNNi+-+U.S.%29
    Voting 0
  6. 5. Of course, algorithms aren't neutral, which is the real issue. Facebook is a powerful media gatekeeper because of the artificial scarcity of the News Feed — unlike Twitter, which blasts users with a firehose of content, Facebook's News Feed algorithm controls what you see from all the people and organizations you follow. And changes to the News Feed algorithm divert enormous amounts of attention: last year Facebook was sending massive amounts of traffic to websites, but earlier this year Facebook prioritized video and that traffic dipped sharply. This month Facebook is prioritizing live video, so the media started making live videos. When media people want to complain, they complain about having to chase Facebook, because it feels like Facebook has a ton of control over the media. (Disclosure: Facebook is paying Verge parent company Vox Media to create Facebook Live videos.)
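
    The scarcity described here is just a ranking policy. A hedged sketch of the contrast, with invented posts and scores: a firehose shows everything chronologically, while a ranked feed shows only the top-k items by a score the platform controls, so changing the score (say, boosting video) redirects attention wholesale.

    ```python
    # Firehose vs. ranked feed: who gets seen depends on the scoring policy.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        ts: int        # timestamp
        score: float   # platform-assigned engagement prediction

    posts = [
        Post("news-site", 1, 0.9), Post("friend-a", 2, 0.1),
        Post("brand", 3, 2.5), Post("friend-b", 4, 0.3),
        Post("live-video", 5, 5.0),
    ]

    def firehose(ps):
        return sorted(ps, key=lambda p: -p.ts)          # you see everything, newest first

    def news_feed(ps, k=2):
        return sorted(ps, key=lambda p: -p.score)[:k]   # you see only what ranks

    print([p.author for p in firehose(posts)])   # all five authors reach you
    print([p.author for p in news_feed(posts)])  # only 'live-video' and 'brand' do
    ```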
    http://www.theverge.com/2016/5/10/116...k-trending-box-bias-conservative-news
    Voting 0
  7. It's undeniable that companies like Google and Facebook have made the web much easier to use and helped bring billions online. They've provided a forum for people to connect and share information, and they've had a huge impact on human rights and civil liberties. These are all things for which we should applaud them.

    But their scale is also concerning. For example, the Chinese messaging service WeChat recently used its popularity to limit market choice: the company banned access to Uber to drive more business to its own ride-hailing service. Meanwhile, Facebook engineered limited web access in developing economies with its Free Basics service. Touted in India and other emerging markets as a solution to help underserved citizens come online, Free Basics allows viewers access to only a handful of pre-approved websites (including, of course, Facebook). India recently banned Free Basics and similar services, ruling that these restricted web offerings violated the essential rules of net neutrality.
    Algorithmic oversight

    Beyond market control, the algorithms powering these platforms can wade into murky waters. According to a recent study from the American Institute for Behavioral Research and Technology, the information displayed in Google's search results could shift the voting preferences of undecided voters by 20 percent or more, all without their knowledge. Considering how narrow the results of many elections can be, this margin is significant. In many ways Google controls what information people see, and any bias, intentional or not, has a potential impact on society.
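
    A back-of-the-envelope calculation shows why that margin matters. The electorate shares below are assumptions for illustration, not figures from the study:

    ```python
    # If 10% of voters are undecided (an assumed figure) and search results
    # shift 20% of them, the net movement is 2% of the total vote,
    # larger than the final margin in many national elections.
    undecided_share = 0.10
    shift_among_undecided = 0.20
    print(f"net shift: {undecided_share * shift_among_undecided:.1%} of all votes")
    ```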

    In the future, data and algorithms will power even more grave decisions. For example, code will decide whether a self-driving car stops for an oncoming bus or runs into pedestrians.

    It's possible that we're reaching the point where we need oversight for consumer-facing algorithms.
    http://buytaert.net/can-we-save-the-o...ource=twitter.com&utm_campaign=buffer
    Voting 0
  8. A May 2014 White House report on “big data” notes that the ability to determine the demographic traits of individuals through algorithms and aggregation of online data has a potential downside beyond privacy concerns: systematic discrimination.

    There is a long history of denying access to bank credit and other financial services based on the communities from which applicants come — a practice called “redlining.” Likewise, the report warns, “Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants or recipients of credit.” (See materials from the report’s related research conference for scholars’ views on this and other issues.)

    One vexing problem, according to the report, is that potential digital discrimination is even less likely to be pinpointed, and therefore remedied.

    “Approached without care, data mining can reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society. It can even have the perverse result of exacerbating existing inequalities by suggesting that historically disadvantaged groups actually deserve less favorable treatment.” The paper’s authors argue that the most likely legal basis for anti-discrimination enforcement, Title VII, is not currently adequate to stop many forms of discriminatory data mining, and “society does not have a ready answer for what to do about it.”
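
    The mechanism is easy to reproduce in miniature. In the sketch below (entirely synthetic data, assumed coefficients), historical decisions penalized one group; a model trained on those decisions, without ever seeing the protected attribute, recovers the penalty through a correlated proxy such as neighborhood:

    ```python
    # Digital redlining in miniature: biased history in, biased model out.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5000

    group = rng.integers(0, 2, n)               # protected attribute (never a feature)
    zip_proxy = group ^ (rng.random(n) < 0.1)   # neighborhood, 90% aligned with group
    income = rng.normal(50, 10, n)              # a genuinely relevant feature

    # Historical decisions: income mattered, but group 1 was also penalized.
    approved = (income - 15 * group + rng.normal(0, 5, n)) > 40

    X = np.column_stack([income, zip_proxy])    # the model sees income and zip only
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: predicted approval rate {pred[group == g].mean():.0%}")
    # The historical penalty survives, laundered through the zip-code proxy.
    ```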

    Benjamin Edelman and Michael Luca’s 2014 paper “Digital Discrimination: The Case of Airbnb.com” examined listings from thousands of New York City landlords in mid-2012. Airbnb builds up a reputation system by allowing ratings from guests and hosts.

    The study’s findings include:

    “The raw data show that non-black and black hosts receive strikingly different rents: roughly $144 versus $107 per night, on average.” However, the researchers had to control for a variety of factors that might skew an accurate comparison, such as differences in geographical location.
    “Controlling for all of these factors, non-black hosts earn roughly 12% more for a similar apartment with similar ratings and photos relative to black hosts.”
    “Despite the potential of the Internet to reduce discrimination, our results suggest that social platforms such as Airbnb may have the opposite effect. Full of salient pictures and social profiles, these platforms make it easy to discriminate — as evidenced by the significant penalty faced by a black host trying to conduct business on Airbnb.”
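
    What “controlling for all of these factors” means can be sketched with a toy regression on the findings above. The listings below are synthetic, with an assumed 12% penalty built into the price process; ordinary least squares on a race dummy plus controls recovers it:

    ```python
    # Controlled comparison in miniature: regress log price on a race dummy
    # plus confounders (location, rating). All data and coefficients invented.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 2000

    black_host = rng.integers(0, 2, n)
    downtown = rng.integers(0, 2, n)            # location confounder
    rating = rng.normal(4.5, 0.3, n)            # review-score confounder

    log_price = (4.3 + 0.4 * downtown + 0.2 * rating
                 - 0.12 * black_host            # the built-in 12% penalty
                 + rng.normal(0, 0.1, n))

    X = np.column_stack([np.ones(n), black_host, downtown, rating])
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    print(f"estimated race gap, controls held fixed: {beta[1]:+.1%}")
    ```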

    “Given Airbnb’s careful consideration of what information is available to guests and hosts,” Edelman and Luca note, “Airbnb might consider eliminating or reducing the prominence of host photos: It is not immediately obvious what beneficial information these photos provide, while they risk facilitating discrimination by guests. Particularly when a guest will be renting an entire property, the guest’s interaction with the host will be quite limited, and we see no real need for Airbnb to highlight the host’s picture.” (For its part, Airbnb responded to the study by saying that it prohibits discrimination in its terms of service, and that the data analyzed were both old and geographically limited.)
    http://journalistsresource.org/studie...racial-discrimination-research-airbnb
    Voting 0
  9. There seems to be something wrong with personalization. We are continuously bumping into obtrusive, uninteresting ads. Our digital personal assistant isn't all that personal. We've lost friends to the algorithmic abyss of the News Feed. The content we encounter online seems to repeat the same things again and again. There are five main reasons why personalization remains broken.

    Additionally, there lies a more general paradox at the very heart of personalization.

    Personalization promises to modify your digital experience based on your personal interests and preferences. Simultaneously, personalization is used to shape you, to influence you and to guide your everyday choices and actions. Inaccessible and incomprehensible algorithms make autonomous decisions on your behalf. They reduce the number of visible choices, thus restricting your personal agency.

    Because of the personalization gaps and internal paradox, personalization remains unfulfilling and incomplete. It leaves us with a feeling that it serves someone else’s interests better than our own.
    http://techcrunch.com/2015/06/25/the-...n=Feed%3A+Techcrunch+%28TechCrunch%29
    Voting 0
  10. So what is it that the rich have today that the poor will get in a decade? Varian bets on personal assistants. Instead of maids and chauffeurs we would have self-driving cars, housecleaning robots and clever, omniscient apps that can monitor, inform and nudge us in real time.

    As Varian puts it: “These digital assistants will be so useful that everyone will want one and the scare stories you read today about privacy concerns will just seem quaint and old-fashioned.” Google Now, one such assistant, can monitor our emails, searches and locations and constantly remind us about forthcoming meetings or trips, all while patiently checking real-time weather and traffic in the background.

    Varian’s juxtaposition of dishwashers with apps might seem reasonable, but it’s actually misleading. When you hire somebody as your personal assistant, the transaction is relatively straightforward: you pay the person for the services rendered – often, in cash – and that’s the end of it. It’s tempting to say that the same logic is at work with virtual assistants: you surrender your data – the way you would surrender your cash – for Google to provide this otherwise free service.

    But something doesn’t add up here: few of us expect our personal assistants to walk away with a copy of all our letters and files in order to make a buck off them. For our virtual assistants, on the other hand, this is the only reason they exist.


    This life-shaping feature of data as a unit of exchange is not yet well understood. However, it is precisely this ability to shape our future, even after we surrender the data, that turns data into an instrument of domination. While cash, with its usual anonymity, has no history and little connection to social life, data is nothing but a representation of social life – albeit crystallised into kilobytes. Google Now can work only if the company behind it manages to bring vast chunks of our existence – from communication to travel to reading – under its corporate umbrella. Once there, these activities can suddenly acquire a new economic dimension: they can finally be monetised.

    Facebook, Google’s closest competitor, pulls the same trick with connectivity. Its Internet.org initiative, which now operates in Latin America, south-east Asia and Africa, was ostensibly launched to promote digital inclusion and get the poor in the developing world online. Online they do get, but it’s a very particular kind of “online”: Facebook and a few other sites and apps are free, but users have to pay for everything else, often based on how much data each of their apps consumes. As a result, few of these people – remember, we are talking about very poor populations – can afford the world outside Facebook’s content empire.
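
    The billing asymmetry is straightforward to express. A minimal sketch with an invented whitelist and tariff: traffic to zero-rated apps is free, everything else is charged per megabyte, so "the world outside Facebook" is exactly the part that costs money.

    ```python
    # Zero-rating in miniature: only non-whitelisted traffic is billed.
    ZERO_RATED = {"facebook", "messenger"}     # assumed "free" slice of the web
    PRICE_PER_MB = 0.02                        # assumed tariff

    def monthly_bill(usage_mb):
        """Charge only for traffic outside the zero-rated whitelist."""
        return sum(mb * PRICE_PER_MB
                   for app, mb in usage_mb.items()
                   if app not in ZERO_RATED)

    usage = {"facebook": 800, "wikipedia": 50, "news-site": 120, "email": 30}
    print(f"bill: {monthly_bill(usage):.2f}")  # the open web costs; Facebook is free
    ```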

    Here is the Varian rule at work again: on the face of it, the poor do get what the rich have already – internet connectivity. But the key difference is not hard to spot. Unlike the rich, who pay for their connectivity with their cash, the poor pay for it with their data – the data that Facebook would one day monetise in order to justify the entire Internet.org operation. We are not dealing with a charity here, after all. Facebook is interested in “digital inclusion” in much the same manner as loan sharks are interested in “financial inclusion”: it is in it for the money.

    Any service provider – be it in education, health, or journalism – would soon realise that to reach the millions using Internet.org, it had better launch and operate its apps inside Facebook rather than outside. In other words, the poor might eventually end up getting all those nice services that the rich already have, but only with their data – their congealed social life – covering the costs of it.
    http://www.theguardian.com/commentisf...ity-poor-pay-by-surrending-their-data
    Voting 0


Page 1 of 2. Online Bookmarks of M. Fioretti: Tags: algorithms + surveillance

About - Propulsed by SemanticScuttle