mfioretti: privacy* + percloud*

Bookmarks on this page are managed by an admin user.

64 bookmark(s) - Sort by: Date ↓ / Title / Voting - Bookmarks from other users for this tag

  1. Mark Zuckerberg also launched Facebook with a disdain for intrusive advertising, but it wasn’t long before the social network giant became Google’s biggest competitor for ad dollars. After going public with 845 million users in 2012, Facebook became a multibillion-dollar company and Zuckerberg one of the richest men on Earth, but with only a promise that the company would figure out how to monetize its platform.

    Facebook ultimately sold companies on its platform by promising “brand awareness” and the best possible data on what consumers actually liked. Brands could start their own Facebook pages, which people would actually “like” and interact with. This provided unparalleled information about what company each individual person wanted to interact with the most. By engaging with companies on Facebook, people gave corporate marketing departments more information than they could have ever dreamed of buying, but here it was offered up free.

    This was the “grand bargain,” as Columbia University law professor Tim Wu called it in his book, The Attention Merchants, that users struck with corporations. Wu wrote that Facebook’s “billions of users worldwide were simply handing over a treasure trove of detailed demographic data and exposing themselves to highly targeted advertising in return for what, exactly?”

    In other words: We will give you every detail of our lives and you will get rich by selling that information to advertisers.

    European regulators are now saying that bargain was a bad deal. The big question that remains is whether their counterparts in the U.S. will follow their lead.
    https://www.huffingtonpost.com/entry/...antitrust_us_5a625023e4b0dc592a088f6c
    Voting 0
  2. The point is that Facebook has a strong, paternalistic view on what’s best for you, and it’s trying to transport you there. “To get people to this point where there’s more openness – that’s a big challenge. But I think we’ll do it,” Zuckerberg has said. He has reason to believe that he will achieve that goal. With its size, Facebook has amassed outsized powers. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. “We have this large community of people, and more than other technology companies we’re really setting policies.”

    Without knowing it, Zuckerberg is the heir to a long political tradition. Over the last 200 years, the west has been unable to shake an abiding fantasy, a dream sequence in which we throw out the bum politicians and replace them with engineers – rule by slide rule. The French were the first to entertain this notion in the bloody, world-churning aftermath of their revolution. A coterie of the country’s most influential philosophers (notably, Henri de Saint-Simon and Auguste Comte) were genuinely torn about the course of the country. They hated all the old ancient bastions of parasitic power – the feudal lords, the priests and the warriors – but they also feared the chaos of the mob. To split the difference, they proposed a form of technocracy – engineers and assorted technicians would rule with beneficent disinterestedness. Engineers would strip the old order of its power, while governing in the spirit of science. They would impose rationality and order.

    This dream has captivated intellectuals ever since, especially Americans. The great sociologist Thorstein Veblen was obsessed with installing engineers in power and, in 1921, wrote a book making his case. His vision briefly became a reality. In the aftermath of the first world war, American elites were aghast at all the irrational impulses unleashed by that conflict – the xenophobia, the racism, the urge to lynch and riot. And when the realities of economic life had grown so complicated, how could politicians possibly manage them? Americans of all persuasions began yearning for the salvific ascendance of the most famous engineer of his time: Herbert Hoover. In 1920, Franklin D Roosevelt – who would, of course, go on to replace him in 1932 – organised a movement to draft Hoover for the presidency.

    The Hoover experiment, in the end, hardly realised the happy fantasies about the Engineer King. A very different version of this dream, however, has come to fruition, in the form of the CEOs of the big tech companies. We’re not ruled by engineers, not yet, but they have become the dominant force in American life – the highest, most influential tier of our elite.

    There’s another way to describe this historical progression. Automation has come in waves. During the industrial revolution, machinery replaced manual workers. At first, machines required human operators. Over time, machines came to function with hardly any human intervention. For centuries, engineers automated physical labour; our new engineering elite has automated thought. They have perfected technologies that take over intellectual processes, that render the brain redundant. Or, as the former Google and Yahoo executive Marissa Mayer once argued, “You have to make words less human and more a piece of the machine.” Indeed, we have begun to outsource our intellectual work to companies that suggest what we should learn, the topics we should consider, and the items we ought to buy. These companies can justify their incursions into our lives with the very arguments that Saint-Simon and Comte articulated: they are supplying us with efficiency; they are imposing order on human life.

    Nobody better articulates the modern faith in engineering’s power to transform society than Zuckerberg. He told a group of software developers, “You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much, much better than it is today. Anything, whether it’s hardware or software, a company, a developer ecosystem – you can take anything and make it much, much better.” The world will improve, if only Zuckerberg’s reason can prevail – and it will.

    The precise source of Facebook’s power is algorithms. That’s a concept repeated dutifully in nearly every story about the tech giants, yet it remains fuzzy at best to users of those sites. From the moment of the algorithm’s invention, it was possible to see its power, its revolutionary potential. The algorithm was developed in order to automate thinking, to remove difficult decisions from the hands of humans, to settle contentious debates.

    The essence of the algorithm is entirely uncomplicated. The textbooks compare algorithms to recipes – a series of precise steps that can be followed mindlessly. This is different from equations, which have one correct result. Algorithms merely capture the process for solving a problem and say nothing about where those steps ultimately lead.

    These recipes are the crucial building blocks of software. Programmers can’t simply order a computer to, say, search the internet. They must give the computer a set of specific instructions for accomplishing that task. These instructions must take the messy human activity of looking for information and transpose that into an orderly process that can be expressed in code. First do this … then do that. The process of translation, from concept to procedure to code, is inherently reductive. Complex processes must be subdivided into a series of binary choices. There’s no equation to suggest a dress to wear, but an algorithm could easily be written for that – it will work its way through a series of either/or questions (morning or night, winter or summer, sun or rain), with each choice pushing to the next.
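
    As a toy illustration of that series of either/or questions, here is a minimal sketch in Python; the categories and suggestions are invented for illustration and are not taken from the article.

    ```python
    # A hypothetical dress-suggestion algorithm: a chain of either/or choices,
    # each answer pushing to the next, as described above.
    def suggest_dress(time_of_day: str, season: str, weather: str) -> str:
        style = "evening" if time_of_day == "night" else "casual"       # morning or night?
        fabric = "wool" if season == "winter" else "cotton"             # winter or summer?
        extra = " with a waterproof layer" if weather == "rain" else "" # sun or rain?
        return f"a {fabric} {style} dress{extra}"

    print(suggest_dress("morning", "summer", "sun"))   # a cotton casual dress
    print(suggest_dress("night", "winter", "rain"))    # a wool evening dress with a waterproof layer
    ```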

    Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction. Algorithms fuel a sense of omnipotence, the condescending belief that our behaviour can be altered, without our even being aware of the hand guiding us, in a superior direction. That’s always been a danger of the engineering mindset, as it moves beyond its roots in building inanimate stuff and begins to design a more perfect social world. We are the screws and rivets in the grand design.
    https://www.theguardian.com/technolog...r-on-free-will?CMP=Share_iOSApp_Other
    Voting 0
  3. "All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook," he says.

    The data our interactions provide feeds the complex algorithms that power the social media site, where, as Mr Joler puts it, our behaviour is transformed into a product.

    Trying to untangle that largely hidden process proved to be a mammoth task.

    "We tried to map all the inputs, the fields in which we interact with Facebook, and the outcome," he says.

    "We mapped likes, shares, search, update status, adding photos, friends, names, everything our devices are saying about us, all the permissions we are giving to Facebook via apps, such as phone status, wifi connection and the ability to record audio."

    All of this research provided only a fraction of the full picture. So the team looked into Facebook's acquisitions, and scoured its myriad patent filings.

    The results were astonishing: visually arresting flow charts that take hours to absorb fully, showing how the data we give Facebook is used to calculate our ethnic affinity (Facebook's term), sexual orientation, political affiliation, social class, travel schedule and much more.
    (Image: Share Lab presents its information in minutely detailed tables and flow charts)

    One map shows how everything - from the links we post on Facebook, to the pages we like, to our online behaviour in many other corners of cyberspace that are owned by, or interact with, the company (Instagram, WhatsApp or sites that merely use your Facebook log-in) - could all be entering a giant algorithmic process.

    And that process allows Facebook to target users with terrifying accuracy, with the ability to determine whether they like Korean food, the length of their commute to work, or their baby's age.

    Another map details the permissions many of us willingly give Facebook via its many smartphone apps, including the ability to read all text messages, download files without permission, and access our precise location.

    Individually, these are powerful tools; combined they amount to a data collection engine that, Mr Joler argues, is ripe for exploitation.

    "If you think just about cookies, just about mobile phone permissions, or just about the retention of metadata - each of those things, from the perspective of data analysis, are really intrusive."
    http://www.bbc.com/news/business-39947942
    Voting 0
  4. Facebook, which now owns WhatsApp, is fighting a challenge to the new privacy policy it unveiled last year. Under the new privacy policy, WhatsApp can share some user data with Facebook, which the Mark Zuckerberg-led company can then use in various ways. Although WhatsApp says that it will (still) not share all the information that users generate through their chats, as India Today Tech noted earlier, Facebook only needs a user's phone number to build a full WhatsApp profile for that user. The company most likely already has other details on users.

    The new WhatsApp privacy policy has been criticised worldwide. Just days ago, a court in Germany asked Facebook to stop harvesting user information from WhatsApp. After the court order, Facebook said that it was pausing the sharing of WhatsApp user data with Facebook in the whole of Europe. The ruling came even as the European Union privacy watchdog continues to probe the new privacy policy.

    However, in India, where privacy laws are non-existent, Facebook and WhatsApp have so far defended their new privacy policy. It is also important to note that India is one of the biggest markets for both Facebook and WhatsApp, and that could be one of the reasons why Facebook wants to enforce its new privacy policies here. Data from Indian users could be commercially very attractive for the company.
    http://indiatoday.intoday.in/technolo...hatsapp-facebook-lawyer/1/940551.html
    Voting 0
  5. The experts are right about many things. OpenPGP is old, and more recent tools with more modern designs have a lot going for them. But I still think they're mostly wrong.

    The experts, by and large, have yet to offer any credible replacements for PGP. And when they suggest abandoning PGP, what they're really saying is that we should give up on secure e-mail and just use something else. That doesn't fly. Many people have to use e-mail. E-mail is everywhere. Not improving the security of e-mail, and instead expecting people to just use other tools (or go without), is the security elite proclaiming from their ivory tower: "Let them eat cake!"

    Furthermore, if that "something else" also requires people use their phone number for everything... well, that's the messaging world's equivalent of the widely despised Facebook Real Name Policy. If you ever needed a clear example of why the lack of diversity (and empathy) in tech is a problem, there it is!

    Compartmentalization, presenting different identities in different contexts, is a fundamental, necessary part of human behaviour. It's one of the basics. If you think taking that away and offering fancy crypto, forward secrecy, deniability instead is a win... well, I think your threat models need some work! You have failed and people will just keep on using insecure e-mail for their accounting, their work, their hobbies, their doctor visits and their interaction with local government. Because people know their needs better than you do.

    But I digress.

    The ridiculous phone number thing aside, I also take issue with the fact that when our opinionated experts do suggest replacements, the things they recommend are proprietary, centralized and controlled by for-profit companies. Some of them (mostly the underdogs) may be open source, but even the best of those use a centralized design and are hostile to federation. In pursuit of security and convenience (and, let's be honest, control, power and money), openness has been hung out to dry.

    This is short-sighted at best.

    These cool new apps may be secure today. But what about tomorrow? Odds are, they will be compromised by government mandate, blocked or shut down.
    https://www.mailpile.is/blog/2016-12-13_Too_Cool_for_PGP.html
    Voting 0
  6. In 2007 Google acquired DoubleClick, a company that collected web-browsing data, assuring that it would never cross-reference those data with the personal information it holds through the use of its own services. However, almost ten years later it has updated its terms of use for the Google account, announcing that it will now be able to make exactly that connection. The document now reads: "Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google's services and the ads served by Google." The change in settings must be approved: once users log into their account via a web browser, Google specifically asks them to accept the new terms. Users can keep their current settings and continue to use Google services as before, while for new accounts the new options are enabled by default. Under the new terms, if accepted, Google will be able to merge the browsing data acquired through its analytics and tracking services with the information it already holds from the user profile. All of this will allow the Mountain View company to assemble a complete portrait of its users, made up of their personal data, what they write in their emails, the websites they visit and the searches they run, definitively ending the principle of anonymity in web tracking.
    http://www.saggiamente.com/2016/10/ad...ource=twitter.com&utm_campaign=buffer
    Voting 0
  7. As well as taking a closer look at how we publish and maintain our own content, there are also frustrations with how we retrieve and access the content we want to see. Given the limited time available, we often end up habitually returning to one or two news websites during our commute, working day and evening to keep up to date. This involves becoming a human sifter of information: parsing and filtering page after page, looking for interesting content whilst trying to ignore the uninteresting leads. It is often tiresome, slow (traffic heavy) and feels like something avoidable.

    Hunting content is inefficient and not a good use of our time. It also narrows the field of sites we use due to lack of time, so we rely on one or two to give us good content in the time we have. Would it not be better if we had a broader field of sites to choose from? If we could efficiently parse these sites for good, interesting content and have a broader picture to choose from?

    RSS/Atom feeds are the one option for us here. Today, RSS feels like a last remnant of a future that never quite materialised. Using RSS is a somewhat esoteric pursuit that many are unaware of. And yet it encapsulates how content should be shared. It allows multiple sources to be parsed by computer (they are good at that!), consolidated, sorted and presented, which leaves us to just pick what to click. It could even be smart and learn what we like - is that too far-fetched?!

    And whilst RSS feels like a dying model, it is in fact the template for how all content should be shared. On an automated, pull-based system that queries our many sources of content, not just news, and consolidates it. Bringing together news, photos, friends updates and new blog posts for us.
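
    As a rough illustration of that pull-based model, here is a minimal sketch using Python and the feedparser library; the feed URLs below are placeholders, not sources named in the post.

    ```python
    # Pull several feeds, consolidate the entries and sort them newest-first,
    # leaving the reader to just pick what to click.
    import feedparser

    FEEDS = [
        "https://example.com/news.rss",       # placeholder feed URL
        "https://example.org/blog/atom.xml",  # placeholder feed URL
    ]

    entries = []
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            entries.append({
                "source": feed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": entry.get("published_parsed"),  # a time.struct_time, if present
            })

    # Sort by publication time, newest first; undated entries go last.
    entries.sort(key=lambda e: tuple(e["published"]) if e["published"] else (), reverse=True)
    for e in entries[:20]:
        print(f"{e['source']}: {e['title']}\n  {e['link']}")
    ```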
    The future

    With so much to amuse us and enrich our lives on the web, it can seem at least misguided to suggest it is flawed. And yet, it is fundamentally flawed in many ways. Our entrapment by global platform providers is growing. We are losing our content, losing control of our online selves, and the sticky power of these platforms is increasingly difficult to resist.

    But there is an alternative. Things could be different. More about that in my next post.
    http://www.donaldmcintosh.net/resource/why-the-web-is-broken
    Voting 0
  8. Instagram can use your camera and microphone to record audio and take pictures and video, without asking you first. Gmail can read and modify your phone contacts. Viber has your precise GPS location at all times. Facebook can read all your text messages.

    “These are permissions that the apps require you to grant them before they are installed,” says Vladan Joler, the data wrangler behind the visualisation and director of the Serbian non-profit SHARE Foundation, which campaigns for internet freedoms. “The purpose of this visual is to show, in a clear way, what smartphone users agree to when they click 'yes' on terms and conditions.”
    http://www.wired.co.uk/magazine/archi...pps-are-giving-away-your-private-life
    Voting 0
  9. A May 2014 White House report on “big data” notes that the ability to determine the demographic traits of individuals through algorithms and aggregation of online data has a potential downside beyond just privacy concerns: Systematic discrimination.

    There is a long history of denying access to bank credit and other financial services based on the communities from which applicants come — a practice called “redlining.” Likewise, the report warns, “Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants or recipients of credit.” (See materials from the report’s related research conference for scholars’ views on this and other issues.)

    One vexing problem, according to the report, is that digital discrimination is even less likely than its traditional counterparts to be pinpointed, and therefore remedied.

    “Approached without care, data mining can reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society. It can even have the perverse result of exacerbating existing inequalities by suggesting that historically disadvantaged groups actually deserve less favorable treatment.” The paper’s authors argue that the most likely legal basis for anti-discrimination enforcement, Title VII, is not currently adequate to stop many forms of discriminatory data mining, and that “society does not have a ready answer for what to do about it.”

    Edelman and Luca’s 2014 paper, “Digital Discrimination: The Case of Airbnb.com,” examined listings for thousands of New York City landlords in mid-2012. Airbnb builds up a reputation system by allowing ratings from guests and hosts.

    The study’s findings include:

    “The raw data show that non-black and black hosts receive strikingly different rents: roughly $144 versus $107 per night, on average.” However, the researchers had to control for a variety of factors that might skew an accurate comparison, such as differences in geographical location.
    “Controlling for all of these factors, non-black hosts earn roughly 12% more for a similar apartment with similar ratings and photos relative to black hosts.”
    “Despite the potential of the Internet to reduce discrimination, our results suggest that social platforms such as Airbnb may have the opposite effect. Full of salient pictures and social profiles, these platforms make it easy to discriminate — as evidenced by the significant penalty faced by a black host trying to conduct business on Airbnb.”

    “Given Airbnb’s careful consideration of what information is available to guests and hosts,” Edelman and Luca note, “Airbnb might consider eliminating or reducing the prominence of host photos: It is not immediately obvious what beneficial information these photos provide, while they risk facilitating discrimination by guests. Particularly when a guest will be renting an entire property, the guest’s interaction with the host will be quite limited, and we see no real need for Airbnb to highlight the host’s picture.” (For its part, Airbnb responded to the study by saying that it prohibits discrimination in its terms of service, and that the data analyzed were both older and limited geographically.)
    http://journalistsresource.org/studie...racial-discrimination-research-airbnb
    Voting 0
  10. I would not use a public or third-party cloud as the primary backup of my data, for various reasons. First of all, I have over 3 terabytes (TB) of data, and it would be extremely expensive for me to buy 3 TB of cloud storage: I would have to pay over $120 per year for 1 TB of data, or $100 per month for 10 TB on Google Drive. Cost is not the only deterrent; I would also consume huge amounts of bandwidth to access that data, which may raise eyebrows at my ISP.
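
    As back-of-the-envelope arithmetic, a rough sketch using the prices quoted above (and assuming the per-terabyte price scales linearly):

    ```python
    # Rough yearly cost of keeping ~3 TB in a public cloud, at the prices quoted above.
    data_tb = 3
    per_tb_per_year = 120        # ~$120/year for 1 TB
    plan_10tb_per_month = 100    # $100/month for a 10 TB plan

    cost_per_tb_pricing = data_tb * per_tb_per_year   # assumes the 1 TB price scales linearly
    cost_10tb_plan = plan_10tb_per_month * 12         # smallest quoted plan that fits 3 TB

    print(f"~${cost_per_tb_pricing}/year buying per-TB storage")  # ~$360/year
    print(f"~${cost_10tb_plan}/year on the 10 TB plan")           # ~$1200/year
    ```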

    The biggest danger is that once you stop paying, you lose your data. And that's not the only problem with the public cloud: the moment you start using such services, your data becomes subject to numerous laws and can be accessed by government agencies without your knowledge. Your service provider gains control over your data and can lock you out of your own data for numerous reasons - most notably some ambiguous copyright violations.

    A private cloud like ownCloud or Seafile can be an option, but once again, once your data leaves your network it is exposed to the rest of the world and, as usual, it will incur heavy bandwidth use and storage costs.

    I do use a private cloud, but that's mostly for the data that I want accessible outside the local network or that is shared with others. I never use it as a backup.
    http://www.linux.com/learn/tutorials/...-files-in-linux-with-the-command-line
    Voting 0

Page 1 of 7 - Online Bookmarks of M. Fioretti: Tags: privacy + percloud

About - Propulsed by SemanticScuttle