2018/12/12: A patent application from Amazon became public that would pair face surveillance (like Rekognition, the product the company is aggressively marketing to police and Immigration and Customs Enforcement) with Ring, a doorbell camera company that Amazon bought earlier this year.
2018/11/30: An Amazon patent application sheds light on a way to monitor neighborhoods with a doorbell camera that could alert homeowners and police to suspicious activities and people.
The patent application, which was made public on the United States Patent and Trademark Office website Thursday, describes how a network of cameras could work together with facial recognition technology to identify people, and respond accordingly.
Amazon's application says the process leads to safer, more connected neighborhoods, as well as better informed homeowners and law enforcement.
The application describes creating a database of suspicious persons.
2018/06/04: Are our smartphones actually listening?
According to Dr. Peter Hannay, a senior security consultant at the cybersecurity firm Asterisk and a former lecturer and researcher at Edith Cowan University, the short answer is yes, but perhaps in a way that's not as diabolical as it sounds.
For your smartphone to actually pay attention and record you, there needs to be a trigger, such as "Hey Siri" or "Okay Google". Without these triggers, nothing is recorded; only some general metrics are sent to your service provider. This might not seem a cause for alarm, but when it comes to apps like Facebook, no one knows what the triggers are. In fact, there could be thousands.
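The trigger-gating behaviour described above can be sketched as a simple filter: audio is discarded until a wake phrase matches, and only then does a short recording window open. This is an illustrative model, not any vendor's actual implementation; the trigger set and window length are assumptions.

```python
# Minimal sketch of wake-word gating: chunks of transcribed audio are
# dropped unless they follow a trigger phrase. Trigger phrases and the
# 3-chunk recording window are illustrative values.

TRIGGERS = {"hey siri", "okay google"}  # hypothetical trigger set

def process_audio(transcribed_chunks):
    """Yield only the chunks that follow a trigger phrase."""
    recording = False
    window = 0
    for chunk in transcribed_chunks:
        if chunk.lower() in TRIGGERS:
            recording = True
            window = 3          # record the next 3 chunks, then stop
            continue
        if recording and window > 0:
            yield chunk         # this audio is kept
            window -= 1
        else:
            recording = False   # everything else is dropped, not stored
```

The worry in the passage is precisely that for some apps the contents of `TRIGGERS` are unknown, and could number in the thousands.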
“It’s just an extension of what advertising used to be on television,” says Hannay. “Only instead of prime-time audiences, they’re now tracking web-browsing habits. It’s not ideal, but I don’t think it poses an immediate threat to most people.”
2017/10/22: Parent company Alphabet would provide services in response to the data it harvests, in a city “where buildings have no static use” and whose functions can shift like biomass.
Alphabet’s long-term goal is to remove barriers to the accumulation and circulation of capital in urban settings – mostly by replacing formal rules and restrictions with softer, feedback-based floating targets. It claims that in the past “prescriptive measures were necessary to protect human health, ensure safe buildings, and manage negative externalities”. Today, however, everything has changed and “cities can achieve those same goals without the inefficiency that comes with inflexible zoning and static building codes”.
This is a remarkable statement. Even neoliberal luminaries such as Friedrich Hayek and Wilhelm Röpke allowed for some non-market forms of social organisation in the urban domain. They saw planning – as opposed to market signals – as a practical necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion.
For Alphabet, these constraints are no more: ubiquitous and continuous data flows can finally replace government rules with market signals. Now, everything is permitted – unless somebody complains.
Google Urbanism means the end of politics, as it assumes the impossibility of wider systemic transformations, such as limits on capital mobility and foreign ownership of land and housing. Instead it wants to mobilise the power of technology to help residents “adjust” to seemingly immutable global trends such as rising inequality and constantly rising housing costs (Alphabet wants us to believe that they are driven by costs of production, not by the seemingly endless supply of cheap credit).
2018/11/15: At the beginning of October, Amazon was quietly issued a patent that would allow its virtual assistant Alexa to decipher a user’s physical characteristics and emotional state based on their voice. Characteristics, or “voice features,” like language accent, ethnic origin, emotion, gender, age, and background noise would be immediately extracted and tagged to the user’s data file to help deliver more targeted advertising.
The algorithm would also consider a customer’s physical location — based on their IP address, primary shipping address, and browser settings — to help determine their accent. Should Amazon’s patent become a reality, or if accent detection is already possible, it would introduce questions of surveillance and privacy violations, as well as possible discriminatory advertising, experts said.
Like facial recognition, voice analysis underlines how existing laws and privacy safeguards simply aren't capable of protecting users from new categories of data collection — or government spying, for that matter. Unlike facial recognition, voice analysis relies not on cameras in public spaces but on microphones inside smart speakers in our homes. It also raises its own thorny issues around advertising that targets or excludes certain groups of people based on derived characteristics like nationality, native language, and so on.
If voice-based accent detection can determine a person’s ethnic background, it opens up a new category of information that is incredibly interesting to the government.
The Foreign Intelligence Surveillance Act, or FISA, makes it possible for the government to covertly demand such data.
2018/10/26: researchers logged global BGP route announcements and discovered China Telecom publishing bogus routes that sucked up massive amounts of Canadian and US traffic and pushed it through Chinese listening posts. Much of today's internet traffic is still unencrypted, meaning that the entities monitoring these listening posts would have been able to read massive amounts of emails, instant messages and web-sessions.
China Telecom's BGP attacks were also used to black-hole traffic in some instances (for example, traffic from an "Anglo-American bank's" branch in Milan was diverted wholesale to China, never arriving at its intended destination).
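The mechanics behind these hijacks are simple: BGP routers forward traffic to whichever announced prefix matches the destination most specifically, so announcing a narrower subnet of a victim's block is enough to capture its traffic. The sketch below illustrates longest-prefix selection; the prefixes and AS labels are made up for illustration.

```python
# Why a bogus, more-specific BGP announcement captures traffic:
# routers prefer the longest matching prefix, so a hijacker who
# announces a /25 inside a victim's /24 wins for half that block.
import ipaddress

routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate AS",  # victim's block
    ipaddress.ip_network("203.0.113.0/25"): "hijacker AS",    # bogus, more specific
}

def next_hop(dst):
    """Pick the origin of the route with the longest prefix containing dst."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in routes if dst in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]
```

Here `next_hop("203.0.113.10")` goes to the hijacker, while addresses outside the bogus /25 still reach the legitimate origin — which is why such hijacks can divert "massive amounts" of traffic while most of the internet looks unaffected.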
2018/10/11: "On the genetic level, you shouldn't expect much privacy, and decisions about your privacy are being made by your family (probably without consulting you)."
Earlier this year, news broke that police had devised an unexpected new method to crack cold cases. Rather than use a suspect's DNA to identify them, data from the DNA was used to search public repositories and identify an alleged killer's family members. From there, a bit of family tree building led to a limited number of suspects and the eventual identification of the person who was charged with the Golden State killings. In the months that followed, more than a dozen other cases were reported to have been solved in the same manner.
The potential for this sort of analysis had been identified by biologists as early as 2014, but they viewed it as a privacy risk—there was potential for personal information from research subjects to leak out to the public via their DNA sequences. Now, a US-Israeli team of researchers has gone through and quantified the chances of someone being identified through public genealogy data. If you live in the US and are of European descent, odds are 60 percent that you can be identified via information that your relatives have made public.
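The quantification rests on a simple observation: you are findable as soon as any one of your detectable relatives has uploaded their DNA. A back-of-the-envelope version of that calculation looks like this; the independence assumption and the sample numbers below are my own illustrative inputs, not the paper's exact model.

```python
# Back-of-the-envelope model of relative-matching odds: the chance that
# at least one of your n detectable relatives appears in a genealogy
# database covering a given fraction of the population.

def p_identified(coverage, n_relatives):
    """P(at least one of n_relatives is in a database covering
    `coverage` of the population), assuming independence."""
    return 1 - (1 - coverage) ** n_relatives
```

With illustrative inputs — say 850 relatives detectable at third-cousin range or closer, and a database covering 0.1 percent of the population — this lands near 57 percent, in the neighbourhood of the study's reported 60 percent figure for people of European descent.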
2016/03/16: In recent days, radio listeners may have heard advertisements for a company called TrustID offering “India’s 1st Aadhaar based mobile app to verify your maid, driver, electrician, tutor, tenant and everyone else instantly”. The app boasts it can do this in "less than a minute". Its punchline: “Shakal pe mat jaao, TrustID pe jaao.” Don't go by the face, use TrustID.
Think about what this means. A private company is advertising that it can use Aadhaar to collate information about citizens at a price. It says this openly, even as a case about the privacy of the information collected for the biometrics-linked government database is still pending in the Supreme Court.
This corporate ambition to exploit the business opportunities of this massive population database is now a part of the law that the government seems in a hurry to pass.
2018/09/16: Nothing says more about someone than the music they listen to and their porn habits. This is certainly ingrained in the streaming service’s business model.
Over the past few years, Spotify has been ramping up its data analytic capabilities in a bid to help marketers target consumers with adverts tailored to the mood they’re in. They deduce this from the sort of music you’re listening to, coupled with where and when you’re listening to it, along with third-party data that might be available.
Spotify is far from the only platform helping brands target people according to their emotions; real-time mood-based marketing is a growing trend and one we all ought to be cognisant of. In 2016, eBay launched a mood marketing tool, for example. And last year, Facebook told advertisers that it could identify when teenagers felt “insecure” and “worthless” or needed “a confidence boost”. This was just a few years after Facebook faced a backlash for running experiments to see if it could manipulate the mood of its users.
You can see where this could go, can’t you? As ad targeting gets ever more sophisticated, marketers will have the ability to target our emotions in potentially exploitative ways. You are more likely to spend more on a product if you’re feeling sad.
2018/9/18: For years, Facebook has publicly positioned its Messenger application as a way to connect with friends and as a way to help customers interact directly with businesses. But a new report from The Wall Street Journal today indicates that Facebook also saw its Messenger platform as a siphon for the sensitive financial data of its users, information it would not otherwise have access to unless a customer interacted with, say, a banking institution over chat. In this case, the WSJ report says not only did the banks find Facebook's methods obtrusive, but the companies also pushed back against the social network and, in some cases, moved conversations off Messenger to avoid handing Facebook any sensitive data. Among the financial firms Facebook is said to have argued with about customer data are American Express, Bank of America, and Wells Fargo.
The report says Facebook was interested in helping banks create bots for its Messenger platform, as part of a big push in 2016 to turn the chat app into an automated hub of digital life that could help you solve problems and avoid cumbersome customer service calls. But some of these bots, like the one American Express developed for Messenger last year, deliberately avoided sending transaction information over the platform after Facebook made clear it wanted to use customer spending habits as part of its ad targeting business. In some cases, companies like PayPal and Western Union negotiated special contracts that would let them offer many detailed and useful services like money transfers, the WSJ reports. But by and large, big banks in the U.S. have reportedly shied away from working with Facebook due to how aggressively it pushed for access to customer data.
2018/09/13: Racist bridges aren’t the only inanimate objects that have had quiet, clandestine control over people.
Take the residents of Scunthorpe, in the north of England, who were blocked from opening AOL accounts after the internet giant created a new profanity filter that objected to the name of their town.
Or the automatic hand-soap dispenser that reliably released soap whenever white hands were placed under it, but failed to recognise a Nigerian man's hands as hands.
Analysts discovered that home cooks were less likely to make claims on their home insurance and were therefore more profitable. The single item that most reliably gave you away as a responsible, house-proud person was fresh fennel.
there are concerns about this kind of data profiling being used in an exclusionary way: motorbike enthusiasts being deemed to have a risky hobby or people who eat sugar-free sweets being flagged as diabetic and turned down for insurance as a result. A study from 2015 demonstrated that Google was serving far fewer ads for high-paying executive jobs to women who were surfing the web than to men.
Searches for “black-sounding names” were significantly more likely to be linked to ads containing the word “arrest” (for example, “Have you been arrested?”) than searches for “white-sounding names.”
2018/09/14: Google built a prototype of a censored search engine for China that links users’ searches to their personal phone numbers, thus making it easier for the Chinese government to monitor people’s queries, The Intercept can reveal.
The search engine, codenamed Dragonfly, was designed for Android devices, and would remove content deemed sensitive by China’s ruling Communist Party regime, such as information about political dissidents, free speech, democracy, human rights, and peaceful protest.
Sources familiar with Dragonfly said the search platform also appeared to have been tailored to replace weather and air pollution data with information provided directly by an unnamed source in Beijing.
Beijing is also rolling out a social credit system, using mass data collection to monitor and nudge citizens’ behaviour through strategic rewards and punishments. This is straight out of the dystopian TV series Black Mirror: in one episode, a woman is barred from buying a plane ticket due to her plummeting social ranking. In China, this is reality: one state-run media report admitted that 11 million train trips and 4 million plane trips have already been blocked due to low social credit scores. Such punishment can be triggered by misbehaviour ranging from the failure to pay back debts to spreading rumours, or even smoking or using expired tickets on trains. Conversely, a low credit score can be boosted by regular donations to charity. A senior official recently said that the system should ensure that “discredited people become bankrupt,” to underline the necessity of compliance.
The scan takes fractions of a second and has been shown to be 99 percent accurate during testing, according to CBP Commissioner Kevin McAleenan, who was joined by MWAA President Jack Potter and airline representatives for an unveiling event Thursday.
2018/09/06: Never before has such a small number of firms been able to control what billions can say and see
For all the recent hand-wringing in the United States over Facebook’s monopolistic power, the mega-platform’s grip on the Philippines is something else entirely. Thanks to a social media–hungry populace and heavy subsidies that keep Facebook free to use on mobile phones, Facebook has completely saturated the country. And because using other data, like accessing a news website via a mobile web browser, is precious and expensive, for most Filipinos the only way online is through Facebook. The platform is a leading provider of news and information, and it was a key engine behind the wave of populist anger that carried Duterte all the way to the presidency.
If you want to know what happens to a country that has opened itself entirely to Facebook, look to the Philippines. What happened there — what continues to happen there — is both an origin story for the weaponization of social media and a peek at its dystopian future. It’s a society where, increasingly, the truth no longer matters, propaganda is ubiquitous, and lives are wrecked and people die as a result — half a world away from the Silicon Valley engineers who’d promised to connect their world.
2011/01/10: there are two major classes of threat to freedom: one that has been present for many years now, and the other that is relatively new.
Like the oil barons at the turn of the 20th century, the data barons are determined to extract as much as possible of a resource that's central to the economy of their time. The more information they can get to feed the algorithms that power their ad-targeting machines and product-recommendation engines, the better. In the absence of serious competition or (until Europe's recently introduced General Data Protection Regulation) serious legal constraints on the handling of personal data, they are going to keep undermining privacy in their push to know as much about their users as they possibly can.

Their dominance is allowing them to play a dangerous and outsize role in our politics and culture. The web giants have helped undermine confidence in democracy by underestimating the threat posed by Russian trolls, Macedonian fake-news farms, and other purveyors of propaganda. Zuckerberg at first dismissed claims that disinformation on Facebook had influenced the 2016 election as "pretty crazy." But Facebook itself now says that between June 2015 and August 2017, as many as 126 million people may have seen content on the network that was created by a Russian troll farm.

Why haven't antitrust regulators blocked deals to promote competition? It's mainly because of a change in US antitrust philosophy in the 1980s, inspired by neoclassical economists and legal scholars at the University of Chicago. Before the shift, antitrust enforcers were wary of any deals that reinforced a company's dominant position. After it, they became more tolerant of such combinations, as long as prices for consumers didn't rise. This was just fine with internet companies, since most of their services were free anyway. Critics say trustbusters exercised too little scrutiny.
"Just because the web companies offer products for free doesn't mean they should get a free pass," says Jonathan Kanter, an antitrust lawyer at Paul Weiss.

And thanks to their vast wealth, fining them for any transgressions won't diminish their power.

One radical solution would be to break them up, just as the US government splintered the dominant Standard Oil monopoly in the early 1900s. Some progressive advocacy groups in the US have been running online campaigns with slogans like "Facebook has too much power over our lives and democracy. It's time for us to take that power back," and calling on the FTC to force the social network to sell Instagram, WhatsApp, and Messenger to create competition.

So how to curb the power of the data barons? Rather than waiting for legal battles that may or may not foster more competition, we urgently need to find ways to bolster rivals. That means reducing the vast chasm between the amounts of information held by the web giants and the rest. Regulation can help here: Europe's new data privacy regime requires companies to hold people's data in machine-readable form and let them move it easily to other businesses if they want to. This "data portability" rule will allow startups to get hold of more data quickly.

Some argue that we need to think much more boldly — and not just with the big internet companies in mind. Viktor Mayer-Schönberger, a professor at the University of Oxford, has proposed what he calls a "progressive data-sharing mandate" that would apply to all businesses. This would require a company that has passed a certain level of market share (say, 10 percent) to share some data with other firms in its industry that ask for it. The data would be chosen at random and stripped of all personal identifiers. Intuitively, the idea makes sense: the closer a company gets to dominating its market, the more data it would have to share, making it easier for rivals to compete by building a better product.
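The progressive data-sharing mandate described above can be sketched as a small policy function: nothing is shared below the threshold, and above it a randomly chosen, de-identified slice of records grows with dominance. The 10 percent threshold comes from the proposal as reported; the linear schedule, field names, and `user_id` identifier are my own illustrative assumptions.

```python
# Sketch of a "progressive data-sharing mandate": the more a firm
# dominates its market, the larger the random, de-identified sample
# of its records it must share with rivals.
import random

THRESHOLD = 0.10  # market share above which sharing kicks in

def share_fraction(market_share):
    """Fraction of records a firm must share, rising linearly with
    dominance beyond the threshold (assumed schedule)."""
    if market_share <= THRESHOLD:
        return 0.0
    return (market_share - THRESHOLD) / (1 - THRESHOLD)

def sample_to_share(records, market_share, rng=random):
    """Randomly pick the records to hand over, stripped of identifiers."""
    k = int(len(records) * share_fraction(market_share))
    shared = rng.sample(records, k)
    return [{f: v for f, v in r.items() if f != "user_id"} for r in shared]
```

A firm at 5 percent share owes nothing; one at 55 percent would have to share half its (anonymised) records — capturing the intuition that dominance itself triggers the obligation.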