2017/02/06: all this encryption breaks Deep Packet Inspection. The IDSes, IPSes, and NGFWs we bought are becoming obsolete: they can't inspect encrypted packets. Of course vendors try to hold onto this technology by introducing workarounds like SSL inspection (aka SSLBump). This basically breaks the trust model of Internet encryption by acting as a man-in-the-middle: the place where you work spoofs itself as the encrypted site you are visiting, and because it controls your computer, you don't even know it is happening. It then decrypts your Internet traffic, runs DPI on it, and re-encrypts it on its way out to the Internet.
Instead of holding onto deep packet inspection, I think we need to transition to new methodologies for detecting bad things on the network. Telemetry is one of these: passive monitoring of NetFlow records or DNS queries. By looking at traffic on your network and determining what looks anomalous, you may be able to find where the nefarious activity is happening. And by examining your DNS queries, investigating passive DNS, and using BIND RPZ or OpenDNS, you can block a huge number of bad sites on the Internet and interrupt phishing campaigns and malware.
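One cheap way to act on DNS telemetry is to flag query names that look machine-generated (e.g. domain-generation algorithms used by malware). A minimal sketch; the entropy threshold and minimum label length here are illustrative assumptions, not tuned values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of a string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_anomalous(qname: str, threshold: float = 3.5) -> bool:
    """Flag DNS query names whose leftmost label is long and
    high-entropy, which often indicates a machine-generated
    (DGA-style) domain rather than a human-chosen one."""
    label = qname.rstrip(".").split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) >= threshold

# Example query log: two ordinary names, one suspicious-looking one.
queries = ["www.example.com.", "mail.google.com.",
           "xk2j9qpl0vz3m8trwy.biz."]
flagged = [q for q in queries if looks_anomalous(q)]
```

In practice you would feed this from your resolver's query log or passive DNS collector and treat hits as leads for investigation, not verdicts.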
2018/12/12: a patent application from Amazon became public that would pair face surveillance — like Rekognition, the product that the company is aggressively marketing to police and Immigration and Customs Enforcement — with Ring, a doorbell camera company that Amazon bought earlier this year.
2017/05/17: The GDPR covers all personal data, defined as any data from which a living individual is identified or identifiable, whether directly or indirectly. This broad definition includes data outside the scope of HIPAA, but GDPR adds specific requirements for “sensitive personal data” such as racial or ethnic origin, religious or philosophical beliefs, political opinions, trade union membership, genetic data, biometric data, data concerning health, or data concerning a natural person’s sex life or sexual orientation. GDPR’s “data concerning health” and HIPAA’s “protected health information” are very similar. GDPR specifically defines data concerning health as personal data relating to the physical or mental health of an individual, including the provision of health care services, which reveal information about his or her health status.
2018/12/06: There’s nothing artificial about AI. It’s inspired by people, it’s created by people and more importantly, it impacts people.
the term “AI” is a mystification! The term that describes the reality is “Human-Trained Machine Learning”
why training these algorithms went so wrong: they subconsciously mimic their mostly male, often misogynist, often white entrepreneurs and techies, with their money-making monopolistic biases and often adolescent, libertarian fantasies.
2018/11/30: An Amazon patent application sheds light on a way to monitor neighborhoods with a doorbell camera that could alert homeowners and police of suspicious activities and people.
The patent application, which was made public on the United States Patent and Trademark Office website Thursday, describes how a network of cameras could work together with facial recognition technology to identify people, and respond accordingly.
Amazon's application says the process leads to safer, more connected neighborhoods, as well as better informed homeowners and law enforcement.
The application describes creating a database of suspicious persons.
2018/06/04: are our smartphones actually listening?
According to Dr. Peter Hannay—The senior security consultant for cybersecurity firm Asterisk, and former lecturer and researcher at Edith Cowan University—the short answer is yes, but perhaps in a way that's not as diabolical as it sounds.
For your smartphone to actually pay attention and record you, there needs to be a trigger, like "Hey Siri" or "Okay Google", for example. Without these triggers there's no recording; just some general metrics are sent to your service provider. This might not seem cause for alarm, but when it comes to apps like Facebook, no one knows what the triggers are. In fact, there could be thousands.
“It’s just an extension of what advertising used to be on television,” says Peter. “Only instead of prime-time audiences, they’re now tracking web-browsing habits. It’s not ideal, but I don’t think it poses an immediate threat to most people.”
2017/10/22: Parent company Alphabet would provide services in response to the data harvested: a city “where buildings have no static use”, like biomass.
Alphabet’s long-term goal is to remove barriers to the accumulation and circulation of capital in urban settings – mostly by replacing formal rules and restrictions with softer, feedback-based floating targets. It claims that in the past “prescriptive measures were necessary to protect human health, ensure safe buildings, and manage negative externalities”. Today, however, everything has changed and “cities can achieve those same goals without the inefficiency that comes with inflexible zoning and static building codes”.
This is a remarkable statement. Even neoliberal luminaries such as Friedrich Hayek and Wilhelm Röpke allowed for some non-market forms of social organisation in the urban domain. They saw planning – as opposed to market signals – as a practical necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion.
For Alphabet, these constraints are no more: ubiquitous and continuous data flows can finally replace government rules with market signals. Now, everything is permitted – unless somebody complains.
Google Urbanism means the end of politics, as it assumes the impossibility of wider systemic transformations, such as limits on capital mobility and foreign ownership of land and housing. Instead it wants to mobilise the power of technology to help residents “adjust” to seemingly immutable global trends such as rising inequality and constantly rising housing costs (Alphabet wants us to believe that they are driven by costs of production, not by the seemingly endless supply of cheap credit).
2018/11/15: At the beginning of October, Amazon was quietly issued a patent that would allow its virtual assistant Alexa to decipher a user’s physical characteristics and emotional state based on their voice. Characteristics, or “voice features,” like language accent, ethnic origin, emotion, gender, age, and background noise would be immediately extracted and tagged to the user’s data file to help deliver more targeted advertising.
The algorithm would also consider a customer’s physical location — based on their IP address, primary shipping address, and browser settings — to help determine their accent. Should Amazon’s patent become a reality, or if accent detection is already possible, it would introduce questions of surveillance and privacy violations, as well as possible discriminatory advertising, experts said.
Like facial recognition, voice analysis underlines how existing laws and privacy safeguards simply aren’t capable of protecting users from new categories of data collection — or government spying, for that matter. Unlike facial recognition, voice analysis relies not on cameras in public spaces, but microphones inside smart speakers in our homes. It also raises its own thorny issues around advertising that targets or excludes certain groups of people based on derived characteristics like nationality, native language, and so on.
If voice-based accent detection can determine a person’s ethnic background, it opens up a new category of information that is incredibly interesting to the government.
The Foreign Intelligence Surveillance Act, or FISA, makes it possible for the government to covertly demand such data.
2018/10/11: "On the genetic level, you shouldn't expect much privacy, and decisions about your privacy are being made by your family (probably without consulting you)."
Earlier this year, news broke that police had devised an unexpected new method to crack cold cases. Rather than use a suspect's DNA to identify them, data from the DNA was used to search public repositories and identify an alleged killer's family members. From there, a bit of family tree building led to a limited number of suspects and the eventual identification of the person who was charged with the Golden State killings. In the months that followed, more than a dozen other cases were reported to have been solved in the same manner.
The potential for this sort of analysis had been identified by biologists as early as 2014, but they viewed it as a privacy risk—there was potential for personal information from research subjects to leak out to the public via their DNA sequences. Now, a US-Israeli team of researchers has gone through and quantified the chances of someone being identified through public genealogy data. If you live in the US and are of European descent, odds are 60 percent that you can be identified via information that your relatives have made public.
2018/09/19: DeAngelo was found using GEDMatch, a website that pools user-uploaded genetic profiles from other genealogy websites. GEDMatch exists “to provide DNA and genealogy tools for comparison and research services” and has an open-source database of 650,000 genetically connected profiles. DeAngelo, who evaded capture for decades, was meticulous in his crime scenes and certainly did not upload his DNA to a website in the hopes of uncovering his ancestry. But a third cousin of his did.
“The privacy concerns raised by the Golden State Killer investigation don’t disappear just because GEDmatch, the genealogical database investigators reportedly used, was a public site. In fact, investigators’ decision to upload a detailed genetic profile generated from crime-scene DNA to a public website likely violated the alleged perpetrator’s privacy rights,” Vera Eidelman of the American Civil Liberties Union (ACLU) wrote in an op-ed in the Washington Post. “Even if DeAngelo is found guilty of the crimes he is accused of, penalties for such crimes do not typically entail releasing a person’s entire genetic makeup. People may not be so troubled by such an intrusion when it comes to a serial killer, but imagine the implications of using this technique for shoplifters or trespassers.”
2018/09/30: Facebook previously attracted the attention of the Indian government in 2016, when it was criticized for offering a free internet service that connected to only a limited number of websites (including Facebook). Called Free Basics, the program was shot down by the Indian government because it violated net neutrality.
Attention has since turned to WhatsApp, which Facebook purchased in 2014.
WhatsApp has introduced at least four new features over the past month that are designed to combat the mass messaging of rumors that have fueled mob violence and killings in India, the service's largest market with over 200 million users.
According to WhatsApp's website, its latest test feature "automatically performs checks to determine if a link is suspicious" and advises users to exercise caution when receiving and opening links.
WhatsApp previously started labeling messages to indicate they've been forwarded rather than composed by the sender. It's also testing limits on how many chats (individuals or groups) a message can be forwarded to simultaneously — 20 for the rest of the world, five for India.
Newspaper ads, which contain tips like "check information that seems unbelievable" and "be thoughtful about what you share," began appearing in national and regional newspapers across nine Indian states earlier this month.
Sparked by rumors of child abduction, mob killings have continued. The most recent took place two weeks ago, after some of WhatsApp's new features were rolled out.
2018/09/27: Insurance works because we are ignorant of our individual fates. It is the fact that any of us might turn out to be a bad risk that makes it sensible for everyone to insure against that remote chance. The pooling of individual risks that can only be known in aggregate underlies the whole system. But there is a subtle mismatch of aims between insurers and their customers. The customers want to avoid the consequences of misfortune; the insurers want customers who avoid misfortune. The two aims are reconciled because both sides are operating behind a veil of ignorance.
Insurers have an interest in knowing as much as possible about their customers. Customers have an interest in insurers underestimating their real risk. But both sides will benefit if ways are found to reduce the risk of the misfortune insured against. The balance between knowledge and ignorance of risk has traditionally been struck at the level of statistical knowledge about large groups.
But statistically significant groups are getting smaller in the age of big data.
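The arithmetic behind that veil of ignorance is simple: the safety buffer an insurer needs per member shrinks with the square root of the pool size, so as big data carves the pool into ever-smaller groups, pricing per member gets wildly more volatile. A toy model under assumed numbers (independent Bernoulli losses; the 1% loss probability and $100k loss are made up for illustration):

```python
import math

def premium_with_buffer(n: int, p: float = 0.01,
                        loss: float = 100_000.0, k: float = 3.0) -> float:
    """Per-member premium needed to cover the expected loss plus a
    k-sigma safety buffer, for a pool of n independent members.
    Illustrative only: no expenses, profit, or correlation."""
    expected = p * loss
    # The std dev of the *average* loss shrinks like 1/sqrt(n).
    sigma_avg = loss * math.sqrt(p * (1 - p) / n)
    return expected + k * sigma_avg

# As the pool shrinks, the buffer each member must fund balloons:
for n in (100_000, 1_000, 10):
    print(n, round(premium_with_buffer(n), 2))
```

With 100,000 members the premium sits just above the $1,000 expected loss; with a "group" of 10 the required buffer dwarfs it, which is why fine-grained segmentation undermines pooling.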
2018/09/20: Google told U.S. senators that the company continues to allow developers to scan and share data from Gmail accounts, according to a letter made public Thursday.
Google said it uses automated scans and reports from security researchers to monitor third parties with access to Gmail data, but gave no details on how many add-ons have been caught violating its policies.
2018/09/16: Nothing says more about someone than the music they listen to and their porn habits. This is certainly ingrained in the streaming service’s business model.
Over the past few years, Spotify has been ramping up its data analytic capabilities in a bid to help marketers target consumers with adverts tailored to the mood they’re in. They deduce this from the sort of music you’re listening to, coupled with where and when you’re listening to it, along with third-party data that might be available.
Spotify is far from the only platform helping brands target people according to their emotions; real-time mood-based marketing is a growing trend and one we all ought to be cognisant of. In 2016, eBay launched a mood marketing tool, for example. And last year, Facebook told advertisers that it could identify when teenagers felt “insecure” and “worthless” or needed “a confidence boost”. This was just a few years after Facebook faced a backlash for running experiments to see if it could manipulate the mood of its users.
You can see where this could go, can’t you? As ad targeting gets ever more sophisticated, marketers will have the ability to target our emotions in potentially exploitative ways. You are more likely to spend more on a product if you’re feeling sad.
2018/09/19: one of the oldest and largest North American life insurers will stop underwriting traditional life insurance and instead sell only interactive policies that track fitness and health data through wearable devices and smartphones.
Privacy and consumer advocates have raised questions about whether insurers may eventually use data to select the most profitable customers, while hiking rates for those who do not participate. The insurance industry has said that it is heavily regulated and must justify, in actuarial terms, its reasons for any rate increases or policy changes.
2018/09/18: For years, Facebook has publicly positioned its Messenger application as a way to connect with friends and as a way to help customers interact directly with businesses. But a new report from The Wall Street Journal today indicates that Facebook also saw its Messenger platform as a siphon for the sensitive financial data of its users, information it would not otherwise have access to unless a customer interacted with, say, a banking institution over chat. In this case, the WSJ report says not only did the banks find Facebook's methods obtrusive, but the companies also pushed back against the social network and, in some cases, moved conversations off Messenger to avoid handing Facebook any sensitive data. Among the financial firms Facebook is said to have argued with about customer data are American Express, Bank of America, and Wells Fargo.
The report says Facebook was interested in helping banks create bots for its Messenger platform, as part of a big push in 2016 to turn the chat app into an automated hub of digital life that could help you solve problems and avoid cumbersome customer service calls. But some of these bots, like the one American Express developed for Messenger last year, deliberately avoided sending transaction information over the platform after Facebook made clear it wanted to use customer spending habits as part of its ad targeting business. In some cases, companies like PayPal and Western Union negotiated special contracts that would let them offer many detailed and useful services like money transfers, the WSJ reports. But by and large, big banks in the U.S. have reportedly shied away from working with Facebook due to how aggressively it pushed for access to customer data.
2018/09/13: Racist bridges aren’t the only inanimate objects that have had quiet, clandestine control over people.
the residents of Scunthorpe, in the north of England, who were blocked from opening AOL accounts after the internet giant created a new profanity filter that objected to the name of their town.
an automatic hand-soap dispenser that reliably released soap whenever white hands were placed under it failed to recognize the hands of a Nigerian man.
they discovered that home cooks were less likely to make claims on their home insurance and were therefore more profitable. The single item that most reliably marked you as a responsible, house-proud person was fresh fennel.
there are concerns about this kind of data profiling being used in an exclusionary way: motorbike enthusiasts being deemed to have a risky hobby or people who eat sugar-free sweets being flagged as diabetic and turned down for insurance as a result. A study from 2015 demonstrated that Google was serving far fewer ads for high-paying executive jobs to women who were surfing the web than to men.
searches for “black-sounding names” were disproportionately likely to be linked to ads containing the word “arrest” (for example, “Have you been arrested?”) than those with “white-sounding names.”
2018/09/15: data from sensors located in vehicles have an important advantage over traditional data-gathering systems:
Currently, city managers and planners are faced with the challenge of relying on incomplete or out of date information.
A less obvious application of Geotab’s dataset is the ability to spot problems like potholes. Aggregated vertical axis accelerometer data from vehicles can be analyzed in near real-time to indicate areas in need of road maintenance. Other aspects of urban life that can be monitored in this way include areas where cars idle, thus wasting fuel and increasing air pollution, and roads where drivers are searching for parking places. Gathering this kind of data would be expensive using other approaches, but emerges naturally from aggregated traffic flows.
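A hedged sketch of how vertical-axis accelerometer spikes could surface pothole candidates; the z-score threshold and the sample data shape are assumptions for illustration, not Geotab's actual pipeline:

```python
import statistics

def pothole_candidates(samples, z_thresh=3.0):
    """Flag (timestamp, lat, lon) points where the vertical
    acceleration deviates sharply from the fleet's baseline.
    samples: list of (timestamp, lat, lon, vertical_accel) tuples."""
    accels = [a for (_, _, _, a) in samples]
    mu = statistics.mean(accels)
    sigma = statistics.pstdev(accels)
    if sigma == 0:
        return []
    return [(t, lat, lon) for (t, lat, lon, a) in samples
            if abs(a - mu) / sigma > z_thresh]

# Twenty smooth-road readings plus one hard jolt at a new location:
samples = ([(i, 43.65, -79.38, 9.8) for i in range(20)]
           + [(20, 43.66, -79.39, 25.0)])
hits = pothole_candidates(samples)
```

Aggregating such hits across many vehicles, and only reporting locations flagged repeatedly, is what would separate real road damage from a single car hitting a curb.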
Huge datasets generated by sensors on connected vehicles offer interesting new opportunities for urban analytics. But there are naturally privacy concerns, too. Connected vehicles inevitably track the people who drive them. Analyzing the habits of drivers as revealed by their journeys can expose extremely sensitive information - think of repeated visits to a hospital, or unexpected overnight stays at private houses.
2018/09/14: Google built a prototype of a censored search engine for China that links users’ searches to their personal phone numbers, thus making it easier for the Chinese government to monitor people’s queries, The Intercept can reveal.
The search engine, codenamed Dragonfly, was designed for Android devices, and would remove content deemed sensitive by China’s ruling Communist Party regime, such as information about political dissidents, free speech, democracy, human rights, and peaceful protest.
Sources familiar with Dragonfly said the search platform also appeared to have been tailored to replace weather and air pollution data with information provided directly by an unnamed source in Beijing.
The scan takes fractions of a second and has been shown to be 99 percent accurate during testing, according to CBP Commissioner Kevin McAleenan, who was joined by MWAA President Jack Potter and airline representatives for an unveiling event Thursday.