mfioretti: control* + google*

Bookmarks on this page are managed by an admin user.

34 bookmark(s), sorted by date ↓

  1. Earlier this month, writer James Bridle published an in-depth look at the underbelly of creepy, violent content targeted at kids on YouTube – from knock-off Peppa Pig cartoons, such as one where a trip to the dentist morphs into a graphic torture scene, to live-action “gross-out” videos, which show real kids vomiting and in pain.

    These videos are being produced and added to YouTube by the thousand, then tagged with what Bridle calls “keyword salad” – long lists of popular search terms packed into their titles. These keywords are designed to game or manipulate the algorithm that sorts, ranks and selects content for users to see. And thanks to a business model aimed at maximising views (and therefore ad revenue), these videos are being auto-played and promoted to kids based on their “similarity” – at least in terms of keywords used – to content that the kids have already seen. That means a child might start out watching a normal Peppa Pig episode on the official channel, finish it, then be automatically immersed in a dark, violent and unauthorised episode – without their parent realising it.
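The "keyword salad" mechanism Bridle describes can be sketched as a toy keyword-overlap recommender. This is a hypothetical model for illustration only (the video titles, keyword sets and Jaccard scoring are all invented here), not YouTube's actual ranking system:

```python
# Toy keyword-overlap recommender, illustrating how "keyword salad" can
# game similarity-based suggestions. Hypothetical model and data only.

def keyword_similarity(a, b):
    """Jaccard similarity between two sets of title keywords."""
    return len(a & b) / len(a | b)

# Keywords of the video the child just finished watching:
just_watched = {"peppa", "pig", "episode", "official"}

# Candidate "up next" videos and their title keywords:
videos = {
    "Official Peppa Pig compilation":
        {"peppa", "pig", "official", "compilation"},
    "Nursery rhymes for toddlers":
        {"nursery", "rhymes", "toddlers", "songs"},
    # A knock-off whose title is packed with the target keywords:
    "PEPPA PIG official episode dentist":
        {"peppa", "pig", "official", "episode", "dentist"},
}

# Rank candidates by similarity to what was just watched.
ranked = sorted(videos,
                key=lambda t: keyword_similarity(just_watched, videos[t]),
                reverse=True)
print(ranked[0])  # the keyword-stuffed knock-off outranks the official video
```

Because the knock-off's title reuses nearly all of the official video's keywords, it scores highest and gets auto-played next, which is exactly the gaming behaviour the article describes.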

YouTube’s response to the problem has been to hand responsibility to its users, asking them to flag videos as inappropriate. From there, the videos go to a review team that YouTube says comprises thousands of people working 24 hours a day to review content. If the content is found to be inappropriate for children, it will be age-restricted and will not appear in the YouTube Kids app. It will still appear on YouTube proper, however – a platform that officially requires users to be at least 13 years old, but that countless younger kids use in practice (just think about how often antsy kids are handed a phone or tablet to keep them occupied in a public space).

    Like Facebook’s scheme, this approach has several flaws: since it’s trying to ferret out inappropriate videos from kids’ content, it’s likely that most of the people who will encounter these videos are kids themselves. I don’t expect a lot of six-year-olds to become aggressive content moderators any time soon. And if the content is flagged, it still needs to be reviewed by humans, which, as YouTube has already acknowledged, takes “round the clock” monitoring.

    When we talk about this kind of challenge, the tech companies’ response is often that it’s simply the inevitability of scale – there’s no way to serve billions of users endless streams of engaging content without getting it wrong or allowing abuse to slip by some of the time. But of course, these companies don’t have to do any of this. Auto-playing an endless stream of algorithmically selected videos to kids isn’t some sort of mandate. The internet didn’t have to become a smorgasbord of “suggested content”. It’s a choice that YouTube made, because ad views are ad views. You’ve got to break a few eggs to make an omelette, and you’ve got to traumatise a few kids to build a global behemoth worth $600bn.

    And that’s the issue: in their unblinking pursuit of growth over the past decade, these companies have built their platforms around features that aren’t just vulnerable to abuse, but literally optimised for it. Take a system that’s easy to game, profitable to misuse, intertwined with our vulnerable people and our most intimate moments, and operating at a scale that’s impossible to control or even monitor, and this is what you get.

    The question now is, when will we force tech companies to reckon with what they’ve wrought? We’ve long decided that we won’t let companies sell cigarettes to children or put asbestos into their building materials. If we want, we can decide that there are limits to what tech can do to “engage” us, too, rather than watching these platforms spin further and further away from the utopian dreams they were sold to us on.
    https://www.theguardian.com/technolog...ra-wachter-boettcher?CMP=share_btn_tw
    Voting 0
Similarly, GOOG in 2014 started reorganizing itself to focus on artificial intelligence only. In January 2014, GOOG bought DeepMind, and in September it shut down Orkut (one of its few social products, which had momentary success in some countries) forever. The Alphabet Inc restructuring was announced in August 2015, but it likely took many months of meetings and bureaucracy. The restructuring was important for focusing the web-oriented departments at GOOG on a simple mission. GOOG sees no future in the simple Search market: it has announced that it is migrating “From Search to Suggest” (in Eric Schmidt’s own words) and becoming an “AI first company” (in Sundar Pichai’s own words). GOOG is currently slightly behind FB in terms of how fast it is growing its dominance of the web, but given its technical expertise, vast budget, influence and vision, in the long run its AI assets will play a massive role on the internet. They know what they are doing.

These are no longer the same companies as 4 years ago. GOOG is no longer an internet company; it’s the knowledge internet company. FB is no longer an internet company; it’s the social internet company. They used to attempt to compete, and this competition kept the internet market diverse. Today, however, they seem mostly satisfied with their orthogonal dominance of parts of the Web, and we are losing diversity of choices. Which leads us to another part of the internet: e-commerce and AMZN.

    AMZN does not focus on making profit.
    https://staltz.com/the-web-began-dying-in-2014-heres-how.html
    Voting 0
A snap decision by Google has begun to reshape the drug treatment industry, tilting the playing field toward large conglomerates — the precise opposite of the outcome Google had hoped to achieve.

    The fateful decision was made September 14. Google faced pressure from an exposé in The Verge released a week earlier, documenting how shady lead generators game its AdWords system. High-cost ads based on rehab keywords referred users to phone hotlines that gave the impression of being independent information services, but were actually owned by treatment center conglomerates. Representatives, who reap large fees based on how many patients they sign up, employ high-pressure sales tactics to push people into their favored facilities, whether or not that facility is the right one for the patient.

    This deceptive marketing can lead to substandard treatment and massive overbilling. It also made lots of money for Google, which was shown in the story actively courting addiction treatment advertisers.

    And so Google made a quick call: It effectively stopped running ads from treatment facilities. At first blush, that may look like a happy alignment of the public good and a company’s need for good public relations, with Google taking a hit to make the world a better place in the midst of an epidemic.

    But the problem of economic concentration is so deep in the United States today that peeling back one layer merely reveals another. Without ads, addicts or their parents are left only with the organic search results.
    https://theintercept.com/2017/10/17/google-search-drug-use-opioid-epidemic
by M. Fioretti (2017-10-30)
    Voting 0
  4. Running TPM absent Google’s various services is almost unthinkable. Like I literally would need to give it a lot of thought how we’d do without all of them. Some of them are critical and I wouldn’t know where to start for replacing them. In many cases, alternatives don’t exist because no business can get a footing with a product Google lets people use for free.

But here’s where the rubber really meets the road. The publishers use DoubleClick. The big advertisers use DoubleClick. The big global advertising holding companies use DoubleClick. Everybody at every point in the industry is wired into DoubleClick. Here’s how they all play together. The ad serving (DoubleClick) is like the road. AdExchange is the biggest car on the road. But only AdExchange gets full visibility into what’s available. (There’s lots of detail here and argument about just what Google does and doesn’t know. But trust me on this. They keep the key information to themselves. This isn’t a suspicion. It’s the model.) So Google owns the road and gets first look at what’s on the road. Not only does Google own the road and make the rules for the road, it has special privileges on the road. One of the ways it has special privileges is that it has all the data it gets from search, Google Analytics and Gmail. It also gets to make the first bid on every bit of inventory. Of course that’s critical. First dibs, with more information than anyone else has access to. (There are some exceptions to this. But that’s the big picture.) It’s good to be the king. It’s good to be a Google.
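The value of a "first look" can be illustrated with a toy simulation. This is an invented model, not a description of how AdExchange actually clears auctions: one buyer gets to inspect every impression first and keep any that clears a reserve price, while everyone else bids only on the leftovers.

```python
import random

# Toy sketch of a "first look" advantage in an ad marketplace.
# Hypothetical model and numbers only -- not AdExchange's real mechanics.

random.seed(0)
reserve = 0.5

# True value of each ad impression, drawn uniformly from [0, 1):
impressions = [random.uniform(0, 1) for _ in range(10_000)]

# The privileged buyer inspects each impression first and keeps every
# one whose true value clears the reserve; rivals see only the rest.
taken_first = [v for v in impressions if v > reserve]
leftovers   = [v for v in impressions if v <= reserve]

avg_first = sum(taken_first) / len(taken_first)
avg_left  = sum(leftovers) / len(leftovers)
print(f"avg value with first look: {avg_first:.2f}, leftovers: {avg_left:.2f}")
```

Under these assumptions the first-look buyer's average impression is worth roughly three times the average leftover, which is the intuition behind "first dibs with more information than anyone else".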

    There’s more I’ll get to in a moment but the interplay between DoubleClick and Adexchange is so vastly important to the entirety of the web, digital publishing and the entire ad industry that it is almost impossible to overstate. Again. They own the road. They make the rules for the road. They get special privileges on the road with every new iteration of rules.


Now Google can say – and they are absolutely right – that every month they send checks for thousands and millions of dollars to countless publishers that make their journalism possible. And in general Google tends to be a relatively benign overlord. But as someone who a) knows the industry inside and out – down to the most nuts-and-bolts mechanics – b) understands at least the rudiments of anti-trust law and monopoly economics, and c) can write for a sizable audience, I can tell you this: Google’s monopoly control is almost comically great. It’s a monopoly at every conceivable turn and consistently uses that market power to deepen its hold and increase its profits. Just the interplay between DoubleClick and AdExchange is textbook anti-competitive practice.

    There’s one way that Google is better than Facebook. When Facebook is getting a bigger and bigger share of the advertising pie, that money is almost all going to Facebook. There are some small exceptions but that’s basically the case. When Google is making insane amounts of money on advertising, it’s not really the same since a huge amount of that advertising is running on websites which are getting a cut. Still, the big story is that Google and Facebook now have a dominant position in the entirety of the advertising ecosystem and are using their monopoly power to take more and more of the money for themselves.

    We’re basically too small for Google to care about. So I wouldn’t say we’ve had any bad experiences with Google in the sense of Google trying to injure us or use its power against us. What we’ve experienced is a little different. Google is so big and so powerful that even when it’s trying to do something good, it can be dangerous and frightening.


    Now in practice all this meant was that two or three old stories about Dylann Roof could no longer run ads purchased through Google. I’d say it’s unlikely that loss to TPM amounted to even a cent a month. Totally meaningless. But here’s the catch. The way these warnings work and the way these particular warnings were worded, you get penalized enough times and then you’re blacklisted.

    Now, certainly you’re figuring we could contact someone at Google and explain that we’re not publishing hate speech and racist violence. We’re reporting on it. Not really. We tried that. We got back a message from our rep not really understanding the distinction and cheerily telling us to try to operate within the no hate speech rules. And how many warnings until we’re blacklisted? Who knows?

    If we were cut off, would that be Adexchange (the ads) or DoubleClick for Publishers (the road) or both? Who knows?

    If the first stopped we’d lose a big chunk of money that wouldn’t put us out of business but would likely force us to retrench. If we were kicked off the road more than half of our total revenue would disappear instantly and would stay disappeared until we found a new road – i.e., a new ad serving service or technology. At a minimum that would be a devastating blow that would require us to find a totally different ad serving system, make major technical changes to the site to accommodate the new system and likely not be able to make as much from ads ever again. That’s not including some unknown period of time – certainly weeks at least – in which we went with literally no ad revenue.

    Needless to say, the impact of this would be cataclysmic and could easily drive us out of business.

    Now, that’s never happened. And this whole scenario stems from what is at least a well-intentioned effort not to subsidize hate speech and racist groups. Again, it hasn’t happened. So in some sense the cataclysmic scenario I’m describing is as much a product of my paranoia as something Google could or might do. But when an outside player has that much power, often acts arbitrarily (even when well-intentioned) and is almost impossible to communicate with, a significant amount of paranoia is healthy and inevitable.
    http://talkingpointsmemo.com/edblog/a-serf-on-googles-farm
    Voting 0
  5. The FT reports on Wednesday that “Facebook and Google have announced they will restrict advertising on online platforms with fake news, after a furore over the role of such stories in last week’s US presidential election.”

    The following is a personal view and thus not representative of the wider views of the FT, so no doubt biased to whatever cultural norms impacted my formative years — among them being of Polish descent, being brought up Catholic, having staunchly anti-communist parents, experiencing a youthful rebellion against that framework and later moderating to a middle ground. With that out of the way…

    Surely having Facebook and Google restrict advertising on subjective grounds is the worst possible outcome of this entire affair?

The idea that all-powerful platforms like Google and Facebook should be charged with the responsibility of strategically filtering and determining what constitutes fake news is not just questionable but frightening in the Orwellian Newspeak sense of the word.


    Habermas’ most profound observation is that the formation of the public news arena is intimately connected to the rise of the coffee houses and stock exchanges. This is because it is only on the stock exchange that the full range of conflicting views collide to forge a clearing price. Repression or manipulation of information flow, meanwhile, only ensures that the clearing price will be off to someone’s advantage and to someone else’s disadvantage.

    Interestingly, back in the 90s and noughties, when the internet was first becoming a thing, media academics would often ponder whether this new form of information exchange represented the reconstitution of a public sphere in a digital form (especially in light of Herman/Chomsky’s Manufacturing Consent critique, which argued the advertising funding model had skewed the public debate and turned the industry into a corporate propaganda outlet). Mostly, they erred towards the notion it did not precisely because it captured a small slice of the population and had a tendency to compartmentalise discussion rather than broaden it.

Based on all that, if Facebook and Google move to filter “fake news”, it will only exacerbate the problem, because these institutions will always be governed by commercial interest, not public duty. That makes them ill-equipped to judge which news is fit for publication and which is not. What it does do in the long run is open the door to an even more sinister advertising propaganda model than the one that inspired Herman/Chomsky’s Manufacturing Consent.

    In that light, here’s some commentary from Habermas about what aspects of salon and coffee-house culture constituted a public sphere (and which I’d argue are lacking today):

    However exclusive the public might be in any given instance, it could never close itself off entirely and become consolidated as a clique; for it always understood and found itself immersed within a more inclusive public of all private people, persons who- insofar as they were propertied and educated — as readers, listeners, and spectators could avail themselves via the market of the objects that were subject to discussion. The issues discussed became “general” not merely in their significance, but also in their accessibility; everyone had to be able to participate.

    What of the uneducated and unpropertied or too poor to engage in the market for objects, you ask? According to Habermas, they were brought into the public sphere by way of festival gatherings, theatre performances and the music halls, all of which spurred public debate.

    In a highly atomised and compartmentalised culture, however — where even workplace gatherings don’t bring people together because everyone is being encouraged to “work for himself” in the gig economy or from home — there seem to be ever fewer occurrences where we, the public, have no choice but to interact with those who disagree with us.

    This in turn encourages the cultivation of safe spaces, which in turn twists our perception of reality into something it simply is not.
    https://ftalphaville.ft.com/2016/11/1...cebook-and-the-manufacture-of-consent
    Voting 0
In 2007 Google acquired DoubleClick, a company that collected web browsing data, promising that it would never cross-reference those records with the personal information it held through the use of its own services. Nearly 10 years later, however, it has updated its Google account terms of use, announcing that it will now be able to perform exactly that cross-referencing. The document now reads: "Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google's services and the ads delivered by Google." The change to the settings must be approved, and indeed Google specifically asks the user, once signed in to their account via web browser, to accept the new terms. The user can keep their current settings and continue to use Google's services as before, whereas for new accounts the new options are enabled by default. Under the new terms, if accepted, Google will be able to merge the browsing data acquired through its analytics and tracking services with the information already obtained from the user's profile. All of this will allow the Mountain View company to assemble a complete portrait of its users, composed of their personal data, what they write in their emails, the websites they visit and the searches they perform, definitively overturning the principle that web tracking is anonymous.
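What such a cross-referencing join makes possible can be sketched in a few lines. All of the data, field names and sites below are invented for illustration; the point is only that linking a pseudonymous tracking identifier to an account turns "anonymous" browsing logs into a named dossier:

```python
# Toy sketch of joining pseudonymous tracker logs to account profiles.
# All records, cookies and site names here are invented for illustration.

browsing_log = [  # records keyed only by a tracking cookie
    {"cookie": "c123", "site": "clinic-finder.example"},
    {"cookie": "c123", "site": "jobs.example"},
    {"cookie": "c999", "site": "news.example"},
]

# The moment a cookie is associated with a signed-in account,
# the pseudonym resolves to a person:
accounts = {"c123": "alice@example.com"}

dossier = {}
for rec in browsing_log:
    who = accounts.get(rec["cookie"], "(unknown)")
    dossier.setdefault(who, []).append(rec["site"])

print(dossier["alice@example.com"])  # a named browsing history
```

The join itself is trivial; what the updated terms change is whether it is permitted at all.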
    http://www.saggiamente.com/2016/10/ad...ource=twitter.com&utm_campaign=buffer
    Voting 0
  7. There was nothing politically hapless about Eric Schmidt. I had been too eager to see a politically unambitious Silicon Valley engineer, a relic of the good old days of computer science graduate culture on the West Coast. But that is not the sort of person who attends the Bilderberg conference four years running, who pays regular visits to the White House, or who delivers “fireside chats” at the World Economic Forum in Davos.43 Schmidt’s emergence as Google’s “foreign minister”—making pomp and ceremony state visits across geopolitical fault lines—had not come out of nowhere; it had been presaged by years of assimilation within US establishment networks of reputation and influence.

    On a personal level, Schmidt and Cohen are perfectly likable people. But Google's chairman is a classic “head of industry” player, with all of the ideological baggage that comes with that role.44 Schmidt fits exactly where he is: the point where the centrist, liberal, and imperialist tendencies meet in American political life. By all appearances, Google's bosses genuinely believe in the civilizing power of enlightened multinational corporations, and they see this mission as continuous with the shaping of the world according to the better judgment of the “benevolent superpower.” They will tell you that open-mindedness is a virtue, but all perspectives that challenge the exceptionalist drive at the heart of American foreign policy will remain invisible to them. This is the impenetrable banality of “don’t be evil.” They believe that they are doing good. And that is a problem.


    Google is "different". Google is "visionary". Google is "the future". Google is "more than just a company". Google "gives back to the community". Google is "a force for good".
    Even when Google airs its corporate ambivalence publicly, it does little to dislodge these items of faith.45 The company’s reputation is seemingly unassailable. Google’s colorful, playful logo is imprinted on human retinas just under six billion times each day, 2.1 trillion times a year—an opportunity for respondent conditioning enjoyed by no other company in history.46 Caught red-handed last year making petabytes of personal data available to the US intelligence community through the PRISM program, Google nevertheless continues to coast on the goodwill generated by its “don’t be evil” doublespeak. A few symbolic open letters to the White House later and it seems all is forgiven. Even anti-surveillance campaigners cannot help themselves, at once condemning government spying but trying to alter Google’s invasive surveillance practices using appeasement strategies.47
    Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt’s tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation. But Google has always been comfortable with this proximity. Long before company founders Larry Page and Sergey Brin hired Schmidt in 2001, their initial research upon which Google was based had been partly funded by the Defense Advanced Research Projects Agency (DARPA).48 And even as Schmidt’s Google developed an image as the overly friendly giant of global tech, it was building a close relationship with the intelligence community.
    https://wikileaks.org/google-is-not-what-it-seems
    Voting 0
  8. While the prospect of a Donald Trump presidency is a terrifying one, perhaps this is scarier: Facebook could use its unprecedented powers to tilt the 2016 presidential election away from him – and the social network’s employees have apparently openly discussed whether they should do so.

As Gizmodo reported on Friday, “Last month, some Facebook employees used a company poll to ask Facebook founder Mark Zuckerberg whether the company should try ‘to help prevent President Trump in 2017’.”

    Facebook employees are probably just expressing the fear that millions of Americans have of the Republican demagogue. But while there’s no evidence that the company plans on taking anti-Trump action, the extraordinary ability that the social network has to manipulate millions of people with just a tweak to its algorithm is a serious cause for concern.

The fact that an internet giant like Facebook or Google could turn an election based on hidden changes to its code has been a hypothetical scenario for years (and it’s even a plot point in this season’s House of Cards). Harvard Law professor Jonathan Zittrain explained in 2010 how “Facebook could decide an election without anyone ever finding out”, after the tech giant secretly conducted a test in which it was allegedly able to increase voter turnout by 340,000 votes around the country on election day simply by showing users a photo of someone they knew saying “I voted”.
    http://www.theguardian.com/commentisf...facebook-election-manipulate-behavior
    Voting 0
  9. America’s next president could be eased into office not just by TV ads or speeches, but by Google’s secret decisions, and no one—except for me and perhaps a few other obscure researchers—would know how this was accomplished.

Research I have been directing in recent years suggests that Google, Inc., has amassed far more power to control elections—indeed, to control a wide variety of opinions and beliefs—than any company in history has ever had. Google’s search algorithm can easily shift the voting preferences of undecided voters by 20 percent or more—up to 80 percent in some demographic groups—with virtually no one knowing they are being manipulated, according to experiments I conducted recently with Ronald E. Robertson.

    Given that many elections are won by small margins, this gives Google the power, right now, to flip upwards of 25 percent of the national elections worldwide. In the United States, half of our presidential elections have been won by margins under 7.6 percent, and the 2012 election was won by a margin of only 3.9 percent—well within Google’s control.

    There are at least three very real scenarios whereby Google—perhaps even without its leaders’ knowledge—could shape or even decide the election next year. Whether or not Google executives see it this way, the employees who constantly adjust the search giant’s algorithms are manipulating people every minute of every day. The adjustments they make increasingly influence our thinking—including, it turns out, our voting preferences.

    What we call in our research the Search Engine Manipulation Effect (SEME) turns out to be one of the largest behavioral effects ever discovered.
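The mechanism behind SEME can be sketched with a toy position-bias model. The click-through rates below are assumed round numbers, not figures from Epstein and Robertson's experiments; the sketch only illustrates how reordering the same results shifts a candidate's total exposure:

```python
# Toy position-bias model of the Search Engine Manipulation Effect.
# Click-through shares per rank are assumed values for illustration only.
ctr_by_rank = [0.30, 0.15, 0.10, 0.07, 0.05]

def exposure(ranking, candidate):
    """Total click share captured by a candidate's pages in this ranking."""
    return sum(ctr for ctr, page in zip(ctr_by_rank, ranking)
               if page == candidate)

# The same five pages, in two different orders:
neutral = ["A", "B", "A", "B", "A"]
biased  = ["A", "A", "A", "B", "B"]   # candidate A's pages moved up

print(exposure(neutral, "B"), exposure(biased, "B"))
```

No page is removed and no content is changed; merely demoting one candidate's pages by a couple of ranks nearly halves the clicks they receive, which is why the effect is so hard for users to detect.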
    http://www.politico.com/magazine/stor...he-2016-election-121548#ixzz46KLq1SZS
    Voting 0
It's undeniable that companies like Google and Facebook have made the web much easier to use and helped bring billions online. They've provided a forum for people to connect and share information, and they've had a huge impact on human rights and civil liberties. These are all things for which we should applaud them.

But their scale is also concerning. For example, the Chinese messaging service WeChat (which is somewhat like Twitter) recently used its popularity to limit market choice. The company banned access to Uber to drive more business to its own ride-hailing service. Meanwhile, Facebook engineered limited web access in developing economies with its Free Basics service. Touted in India and other emerging markets as a solution to help underserved citizens come online, Free Basics allows viewers access to only a handful of pre-approved websites (including, of course, Facebook). India recently banned Free Basics and similar services, claiming that these restricted web offerings violated the essential rules of net neutrality.
    Algorithmic oversight

    Beyond market control, the algorithms powering these platforms can wade into murky waters. According to a recent study from the American Institute for Behavioral Research and Technology, information displayed in Google could shift voting preferences for undecided voters by 20 percent or more -- all without their knowledge. Considering how narrow the results of many elections can become, this margin is significant. In many ways, Google controls what information people see, and any bias, intentional or not, has a potential impact on society.

    In the future, data and algorithms will power even more grave decisions. For example, code will decide whether a self-driving car stops for an oncoming bus or runs into pedestrians.

    It's possible that we're reaching the point where we need oversight for consumer-facing
    http://buytaert.net/can-we-save-the-o...ource=twitter.com&utm_campaign=buffer
    Voting 0


Page 1 of 4 - Online Bookmarks of M. Fioretti: Tags: control + google

About - Propulsed by SemanticScuttle