Tags: surveillance*

449 bookmark(s)

  1. We saw that President Obama, who was an outsider to the US military-intelligence complex, initially wanted to rein in the abuses of agencies like the CIA and the NSA, but in the end he did very little. Now we see a confrontation between President Trump and the so-called Deep State, which includes the CIA and the NSA. Can a US president govern in opposition to such powerful entities?
    "Obama is certainly an instructive case. This is a president who campaigned on a platform of ending warrantless wiretapping in the United States, he said "that's not who we are, that's not what we do", and once he became the president, he expanded the program. He said he was going to close Guantanamo but he kept it open, he said he was going to limit extrajudicial killings and drone strikes that has been so routine in the Bush years. But Obama went on to authorize vastly more drone strikes than Bush. It became an industry. As for this idea that there is a Deep State, now the Deep State is not just the intelligence agencies, it is really a way of referring to the career bureaucracy of government. These are officials who sit in powerful positions, who don't leave when presidents do, who watch presidents come and go, they influence policy, they influence presidents and say: this is what we have always done, this is what we must do, and if you don't do this, people will die. It is very easy to persuade a new president who comes in, who has never had these powers, but has always wanted this job and wants very, very badly to do that job well. A bureaucrat sitting there for the last twenty years says: I understand what you said, I respect your principles, but if you do what you promised, people will die. It is very easy for a president to go: well, for now, I am going to set this controversy to the side, I'm going to take your advice, let you guys decide how these things should be done, and then I will revisit it, when I have a little more experience, maybe in a few months, maybe in a few years, but then they never do. This is what we saw quite clearly happen in the case of Barack Obama: when this story of Snowden exposing the NSA's mass surveillance » came forward in 2013, when Obama had been president for five years, one of the defences for this from his aides and political allies was: oh, Obama was just about to fix this problem! And sure enough, he eventually was forced from the wave of criticism to make some limited reforms, but he did not go far enough to end all of the programs that were in violation of the law or the constitution of the United States. That too was an intentional choice: he could have certainly used the scandal to advocate for all of the changes that he had campaigned on, to deliver on all of his promises, but in those five years he had become president, he discovered something else, which is that there are benefits from having very powerful intelligence agencies, there are benefits from having these career bureaucrats on your side, using their spider web over government for your benefit. Imagine you are Barack Obama, and you realise - yes, when you were campaigning you were saying: spying on people without a warrant is a problem, but then you realise: you can read Angela Merkel's text messages. Why bother calling her and asking her opinion, when you can just read her mind by breaking the law? It sounds like a joke, but it is a very seductive thing. Secrecy is perhaps the most corrupting of all government powers, because it takes public officials and divorces them from accountability to the public. When we look at the case of Trump, who is perhaps the worst of politicians, we see the same dynamic occurring. This is a president who said the CIA is the enemy, it's like Nazi Germany, they're listening to his phone calls, and all of these other things, some claims which are true, some claims which are absolutely not. 
A few months later, he is authorizing major powers for these same agencies that he has called his enemies. And this gets to the central crux of your question, which is: can any president oppose this? The answer is certainly. The president has to have some familiarity going in with the fact that this pitch is going to be made, that they are going to try to scare him or her into compliance. The president has to be willing to stand strongly on line and say: 'I was elected to represent the interests of the American people, and if you're not willing to respect the constitution and our rights, I will disband your agency, and create a new one'. I think they can definitely be forced into compliance, because these officials fear prison, just like every one of us."
  2. 1) The UK Government has announced it has developed an algorithmic tool to remove the ISIS presence from the web.
    2) Copyright industries have called for similar programs to be installed that can remove unapproved creative content in the United States.
    3) The European Commission has suggested that filters can be used to “proactively detect, identify, and remove” anything illegal – from comments sections on news sites to Facebook posts.
    4) The Copyright in the Digital Single Market Directive, currently being debated by MEPs, is proposing using technical filters to block copyrighted content from being posted.

    There’s a recklessness to all of these proposals – because so many of them involve sidestepping legal processes.

    EFF coined the term “shadow regulation” for rules that are made outside the legislative process, and that’s what is happening here. A cosy relationship between business and governments has developed, and the public is being left out of it when it comes to limiting online speech.
  3. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounces off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. There has been a failed experiment to change light intensity to alter the mood. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.


    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in the public space for the Tilburg Institute of Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
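    The mechanism described here (reading MAC addresses from wifi signals without any connection) is simple enough to sketch. The following is a minimal illustration using the Scapy library, not the Enschede vendor's actual system; it assumes a wireless interface already in monitor mode, and the interface name "mon0" is a placeholder. Modern phones partially blunt this technique by randomising the MAC address in probe requests.

```python
# Minimal sketch of a passive wifi tracker: phones periodically broadcast
# probe requests even when not connected to any network, and each request
# carries a transmitter MAC address that can be logged and counted.
# Requires root privileges and an interface in monitor mode.
from datetime import datetime
from scapy.all import sniff, Dot11ProbeReq  # pip install scapy

seen = {}  # MAC address -> (first_seen, last_seen, sighting_count)

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2                      # MAC of the phone that sent the probe
        now = datetime.now()
        first, _, count = seen.get(mac, (now, now, 0))
        seen[mac] = (first, now, count + 1)
        print(f"{now:%H:%M:%S}  {mac}  sightings={count + 1}")

# "mon0" is a placeholder name for a monitor-mode wireless interface.
sniff(iface="mon0", prn=log_probe, store=False)
```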

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who mine the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
  4. Finally, there’s what the authors call “political security” – using AI to automate tasks involved in surveillance, persuasion (creating targeted propaganda) and deception (eg, manipulating videos). We can also expect new kinds of attack based on machine-learning’s capability to infer human behaviours, moods and beliefs from available data. This technology will obviously be welcomed by authoritarian states, but it will also further undermine the ability of democracies to sustain truthful public debates. The bots and fake Facebook accounts that currently pollute our public sphere will look awfully amateurish in a couple of years.

    The report is available as a free download and is worth reading in full. If it were about the dangers of future or speculative technologies, then it might be reasonable to dismiss it as academic scare-mongering. The alarming thing is that most of the problematic capabilities its authors envisage are already available, and in many cases already embedded in the networked services we use every day. William Gibson was right: the future has already arrived.
  5. Google tracks you on more than just their search engine. You may realize they also track you on YouTube, Gmail, Chrome, Android, Gmaps, and all the other services they run. For those, we recommend using private alternatives like DuckDuckGo for search. Yes, you can live Google-free. I’ve been doing it for many years.

    What you may not realize, though, is that Google trackers are actually lurking behind the scenes on 75% of the top million websites. To give you a sense of how large that is, Facebook is the next closest with 25%. It’s a good bet that any random site you land on across the Internet will have a Google tracker hiding on it. Between the two of them, they are truly dominating online advertising, by some measures literally making up 74%+ of all its growth. A key component of how they have managed to do that is through all these hidden trackers.

    Google Analytics is installed on most sites, tracking you behind the scenes, letting website owners know who is visiting their sites, but also feeding that information back to Google. Same for the ads themselves, with Google running three of the largest non-search ad networks installed on millions of sites and apps: Adsense, Admob, and DoubleClick.

    You know those ads that creepily follow you around everywhere? Most of those are actually run through these Google ad networks, where they let advertisers target you against your search history, browsing history, location history and other personal information they collect. Even less well known is that they also enable advertisers like airlines to charge you different prices based on your personal information.

    These ads are not only annoying — they are literally designed to manipulate you through targeting to make you buy more things, and just showing them to you is an act of Google profiting off of your personal information.

    At DuckDuckGo, we’ve expanded beyond our roots in search to protect you no matter where you go on the Internet. Our DuckDuckGo browser extension and mobile app are available for all major browsers and devices, and block these Google trackers, along with the ones from Facebook and countless other data brokers. They do even more to protect you as well, like providing smarter encryption.

    #3 — Get unbiased results, outside the Filter Bubble.

    When you search, you expect unbiased results, but that’s not what you get on Google. On Google, you get results tailored to what they think you’re likely to click on, based on the data profile they’ve built on you over time from all that tracking I described above.

    That may appear at first blush to be a good thing, but when most people say they want personalization in a search context, they actually want localization. They want local weather and restaurants, which can be provided without tracking, like we do at DuckDuckGo. That’s because approximate location info is automatically embedded in the search request by your computer; we can use it to serve you local results and then immediately throw it away, without tracking you.
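    As a rough sketch of that claim (with assumed helper functions, not DuckDuckGo's actual code), localisation without tracking amounts to deriving a coarse location from the incoming request, using it once, and storing nothing keyed to the user:

```python
# Toy sketch: serve localised results from an approximate, request-derived
# location, then discard it. approximate_city and local_weather are
# hypothetical stand-ins for a coarse GeoIP lookup and a weather source.

def approximate_city(request_ip: str) -> str:
    """Hypothetical city-level GeoIP lookup; no user identifier involved."""
    return "Amsterdam"  # stub value for illustration

def local_weather(city: str) -> str:
    """Hypothetical weather lookup keyed only by city name."""
    return f"12°C and cloudy in {city}"  # stub value for illustration

def handle_search(query: str, request_ip: str) -> dict:
    city = approximate_city(request_ip)  # used for this one response only
    local = local_weather(city) if "weather" in query.lower() else None
    # Nothing is written to disk or to a per-user profile: once this function
    # returns, the IP, the derived city and the query are gone.
    return {"query": query, "local": local}

print(handle_search("weather this weekend", "203.0.113.7"))
```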

    Beyond localization, personalized results are dangerous because to show you results they think you’ll click on, they must filter results they think you’ll skip. That’s why it’s called the Filter Bubble.

    So if you have political leanings one way or another, you’re more likely to get results you already agree with, and less likely to ever see opposing viewpoints. In the aggregate this leads to increased echo chambers that are significantly contributing to our increasingly polarized society.

    This Filter Bubble is especially pernicious in a search context because you have the expectation that you’re seeing what others are seeing, that you’re seeing the “results.” We’ve done studies over the years where we had people search for the same topics on Google at the same time, even in “Incognito” mode, and found that the results were still significantly tailored to each person.
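    To make the tailoring mechanism concrete, here is a toy model (invented numbers, not Google's actual ranking) of how re-ranking the same results by each user's predicted click propensity sends two users into different bubbles:

```python
# Toy filter-bubble model: identical results, re-ranked per user by a
# learned "probability of clicking", so each user mostly sees what they
# already agree with. All profiles and scores are invented.

RESULTS = ["left-leaning analysis", "right-leaning analysis", "neutral explainer"]

CLICK_PROPENSITY = {  # hypothetical per-user probabilities learned from history
    "user_a": {"left-leaning analysis": 0.9, "right-leaning analysis": 0.1, "neutral explainer": 0.5},
    "user_b": {"left-leaning analysis": 0.1, "right-leaning analysis": 0.9, "neutral explainer": 0.5},
}

def personalised_ranking(user: str, base_relevance: float = 0.5) -> list:
    scores = {r: base_relevance * CLICK_PROPENSITY[user][r] for r in RESULTS}
    return sorted(RESULTS, key=scores.get, reverse=True)

for user in CLICK_PROPENSITY:
    print(user, personalised_ranking(user))
# The two users get opposite orderings for the same query, which is the kind
# of divergence the "same search, different results" studies measure.
```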
  6. IoT will be able to take stock of your choices, moods, preferences and tastes, the same way Google Search does. With enough spreadsheets, many practical questions are rendered trivial. How hard will it be for the IoT — maybe through Alexa, maybe through your phone — to statistically study why, where and when you raise your voice at your child? If you can correlate people’s habits and physical attributes, it will be toddler-easy to correlate mood to environment. The digitally connected devices of tomorrow would be poor consumer products if they did not learn you well. Being a good and faithful servant means monitoring the master closely, and that is what IoT devices will do. They will analyze your feedback and automate their responses — and predict your needs. In the IoT, Big Data is weaponized, and can peer deeper into the seeds of your life than the government has ever dreamed.
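    As a concrete (and entirely invented) illustration of how little analysis that correlation would take, a vendor holding event logs of measured voice levels with room and time context could tabulate "raised voice" patterns in a few lines:

```python
# Toy sketch: tabulate when and where raised-voice events cluster, from the
# kind of logs a connected speaker could plausibly collect. The data below
# is invented for illustration.
from collections import Counter

events = [
    {"hour": 7,  "room": "kitchen", "voice_db": 72},
    {"hour": 7,  "room": "kitchen", "voice_db": 78},
    {"hour": 13, "room": "lounge",  "voice_db": 55},
    {"hour": 19, "room": "kitchen", "voice_db": 80},
    {"hour": 21, "room": "bedroom", "voice_db": 50},
]

RAISED_DB = 70  # threshold treated as a raised voice

raised = [e for e in events if e["voice_db"] >= RAISED_DB]
by_context = Counter((e["hour"], e["room"]) for e in raised)

for (hour, room), n in by_context.most_common():
    print(f"{n} raised-voice event(s) around {hour}:00 in the {room}")
# With enough rows, the same tabulation yields mood-versus-environment
# profiles without any sophisticated machine learning at all.
```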
  7. Rome and London are two huge, sluggish beasts of cities that have outlived millennia of eager reformers. They share a world where half the people already live in cities and another couple billion are on their way into town. The population is aging quickly, the current infrastructure must crumble and be replaced by its very nature, and climate disaster is taking the place of the past’s great urban fires, wars, and epidemics. Those are the truly important, dull but worthy urban issues.

    However, the cities of the future won’t be “smart,” or well-engineered, cleverly designed, just, clean, fair, green, sustainable, safe, healthy, affordable, or resilient. They won’t have any particularly higher ethical values of liberty, equality, or fraternity, either. The future smart city will be the internet, the mobile cloud, and a lot of weird paste-on gadgetry, deployed by City Hall, mostly for the sake of making towns more attractive to capital.

    Whenever that’s done right, it will increase the soft power of the more alert and ambitious towns and make the mayors look more electable. When it’s done wrong, it’ll much resemble the ragged downsides of the previous waves of urban innovation, such as railways, electrification, freeways, and oil pipelines. There will also be a host of boozy side effects and toxic blowback that even the wisest urban planner could never possibly expect.

    “information about you wants to be free to us.”

    This year, a host of American cities vilely prostrated themselves to Amazon in the hope of winning its promised new second headquarters. They’d do anything for the scraps of Amazon’s shipping business (although nobody knows what kind of jobs Amazon is really promising). This also made it clear, though, that the flat-world internet game was up, and it’s still about location, location, and location.

    Smart cities will use the techniques of “smartness” to leverage their regional competitive advantages. Instead of being speed-of-light flat-world platforms, all global and multicultural, they’ll be digitally gated communities, with “code as law” that is as crooked, complex, and deceitful as a Facebook privacy chart.

    You still see this upbeat notion remaining in the current smart-city rhetoric, mostly because it suits the institutional interests of the left.

    The “bad part of town” will be full of algorithms that shuffle you straight from high-school detention into the prison system. The rich part of town will get mirror-glassed limos that breeze through the smart red lights to seamlessly deliver the aristocracy from curb into penthouse.

    These aren’t the “best practices” beloved by software engineers; they’re just the standard urban practices, with software layered over. It’s urban design as the barbarian’s varnish on urbanism.

    If you look at where the money goes (always a good idea), it’s not clear that the “smart city” is really about digitizing cities. Smart cities are a generational civil war within an urban world that’s already digitized.

    It’s a land grab for the command and control systems that were mostly already there.
  8. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish this first phase of their work and then disappear completely from view to complete it, we have a brief opportunity to identify and catalogue the processes shaping our drift to a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.

    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, the notion that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
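    The "no magic" point can be made concrete with a schematic sketch (not Facebook's code): every page that embeds the tracker fires a request carrying the same cookie ID plus the referring URL, and the server only has to group those requests by cookie to reconstruct a browsing history.

```python
# Schematic third-party-cookie profiling: group tracker hits by cookie ID.
# The log entries below are invented; in reality they would be generated by
# an embedded script or pixel on each participating page.
from collections import defaultdict
from datetime import datetime

# Each tuple is one tracker request: (cookie_id, referring_page, timestamp).
tracker_log = [
    ("cookie-42", "news-site.example/politics", datetime(2018, 3, 1, 8, 5)),
    ("cookie-42", "shop.example/running-shoes", datetime(2018, 3, 1, 8, 20)),
    ("cookie-42", "news-site.example/politics", datetime(2018, 3, 1, 21, 0)),
    ("cookie-77", "recipes.example/pasta",      datetime(2018, 3, 1, 12, 0)),
]

profiles = defaultdict(list)
for cookie_id, page, ts in tracker_log:
    profiles[cookie_id].append((page, ts))

for cookie_id, visits in profiles.items():
    pages = sorted({page for page, _ in visits})
    print(cookie_id, "visited", pages, f"({len(visits)} hits)")
# If "cookie-42" is ever tied to a logged-in account, this whole browsing
# history is retroactively attached to a named person.
```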

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised—but, given the presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
  9. By using algorithms to "triage" the neediness of poor people, system designers can ensure that the people harmed by the system are the least sympathetic and least likely to provoke outrage among those with political clout.

    Algorithmically defined guilt is also a problem because of the real problems agencies are trying to solve. In Allegheny, your child's at-risk score is largely defined by your use of social services to deal with financial crises, health crises, addiction and mental health problems. If you deal with these problems privately -- by borrowing from relatives or getting private addiction treatment -- you aren't entered into the system, which means that if these factors are indeed predictors of risk to children, then the children of rich people are being systematically denied interventions by the same system that is over-policing poor children.
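    A deliberately simplified sketch of that asymmetry (invented weights and feature names, not the actual Allegheny model): if the score counts only recorded contacts with public services, families with identical problems get very different scores depending on whether they could handle those problems privately.

```python
# Toy at-risk score that counts *recorded* public-service contacts.
# Weights and feature names are invented for illustration.

FEATURE_WEIGHTS = {
    "public_addiction_treatment": 2.0,
    "welfare_application":        1.5,
    "public_mental_health_visit": 1.0,
}

def at_risk_score(case: dict) -> float:
    """Sum the weighted counts of the contacts the system can see."""
    return sum(FEATURE_WEIGHTS[k] * case.get(k, 0) for k in FEATURE_WEIGHTS)

# Same underlying problems; only their visibility to the system differs.
low_income_family = {"public_addiction_treatment": 1, "welfare_application": 2}
wealthy_family = {}  # private treatment and family loans leave no record

print(at_risk_score(low_income_family))  # 5.0 -> flagged for intervention
print(at_risk_score(wealthy_family))     # 0.0 -> invisible to the system
```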
  10. Mark Zuckerberg also launched Facebook with a disdain for intrusive advertising, but it wasn’t long before the social network giant became Google’s biggest competitor for ad dollars. After going public with 845 million users in 2012, Facebook became a multibillion-dollar company and Zuckerberg one of the richest men on Earth, but with only a promise that the company would figure out how to monetize its platform.

    Facebook ultimately sold companies on its platform by promising “brand awareness” and the best possible data on what consumers actually liked. Brands could start their own Facebook pages, which people would actually “like” and interact with. This provided unparalleled information about what company each individual person wanted to interact with the most. By engaging with companies on Facebook, people gave corporate marketing departments more information than they could have ever dreamed of buying, but here it was offered up free.

    This was the “grand bargain,” as Columbia University law professor Tim Wu called it in his book, The Attention Merchants, that users struck with corporations. Wu wrote that Facebook’s “billions of users worldwide were simply handing over a treasure trove of detailed demographic data and exposing themselves to highly targeted advertising in return for what, exactly?”

    In other words: We will give you every detail of our lives and you will get rich by selling that information to advertisers.

    European regulators are now saying that bargain was a bad deal. The big question that remains is whether their counterparts in the U.S. will follow their lead.
