mfioretti: data ownership


250 bookmark(s)

  1. Each of the pilots aims to extract public utility from the data they access by building a “data commons”, sourcing information from the public and using it to improve lives. In Barcelona, participants will be able to send their healthcare data and other personal information to the city to be aggregated and used to inform policy. With “Making Sense”, the city will distribute sensors for citizens to measure noise levels in their areas.

    In Amsterdam, the chosen projects are a neighbourhood-level social network, designed to empower and draw local people into policy formation and decision making, and a system to provide data to help govern the city’s alternative home renting platform: Fairbnb.

    A core part of the project is a secure “digital wallet” being created for each participant, said Symons, from which they manage different elements of their personal data. Developed by a team at the Netherlands’ Radboud University, this online platform will contain each participating individual’s attributes and credentials — sensor data collected by smart devices, for instance — and allow them to share information with the projects which they hope to contribute to.

    Unlike a conventional social network or internet platform, where all of an individual’s data is up for grabs, the wallet will ensure that only what was required for a specific project will be shared. Using distributed ledger technology, each participant will be able to set rules, which define how their data is to be used, and what is to be kept private or shared.

    The technology allows users to change permissions easily and redefine the conditions of access. In the future, it could let individuals manage who gains access to what data, and for what purpose, whether for a city-led project or a private service.
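    The rule-setting described above can be pictured, in drastically simplified form, as a wallet that filters attributes per project. This is a hypothetical sketch (all names and fields are invented; DECODE's actual smart-rules language and ledger machinery are far richer):

```python
from dataclasses import dataclass, field

@dataclass
class SharingRule:
    """A participant-defined rule: which attributes a project may read."""
    project: str
    allowed_attributes: set = field(default_factory=set)

@dataclass
class Wallet:
    """Holds a participant's attributes and their sharing rules."""
    attributes: dict
    rules: list

    def share_with(self, project: str) -> dict:
        """Return only the attributes allowed for this project."""
        allowed = set()
        for rule in self.rules:
            if rule.project == project:
                allowed |= rule.allowed_attributes
        return {k: v for k, v in self.attributes.items() if k in allowed}

wallet = Wallet(
    attributes={"noise_db": 62, "postcode": "08001", "heart_rate": 71},
    rules=[SharingRule("making-sense", {"noise_db", "postcode"})],
)
print(wallet.share_with("making-sense"))     # only noise_db and postcode
print(wallet.share_with("unknown-project"))  # {} — nothing by default
```

The point the sketch makes is that the wallet, not the platform, decides what each project sees; changing a rule changes future sharing without touching the stored attributes.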

    Currently, cities find it difficult to improve and refine services through experimentation in the way that, for example, Facebook and Google are able to. Knowing how often individual citizens used a service, and what for, could rectify that.

    “Data could be enormously valuable in so many more ways than it currently is,” said Symons. “All of the latent social value is currently not being realised because people are not empowered to control and share their personal data as they want.”
    Community building

    But for the pilot projects to succeed they need a community of users, which is what DECODE’s architects see as the biggest challenge.

    Despite hashtags such as #deletefacebook, it is unclear how aware or concerned the public are about the way their data is used. As the hashtag reached its trending peak in April, time spent by Facebook users on the platform actually increased, and downloads of the Facebook app continued to grow in the following weeks. Recent research by the UK think tank doteveryone found that 45% of Britons were unaware that information they entered on websites and social media can help target advertisements.
  2. We have just released the Santa Clara principles (PDF), calling on platforms to provide better information about how they moderate content online.

    The principles articulate a minimum set of standards for what information platforms should provide to users, what due process users should be able to expect when their posts are taken down or their accounts are suspended, and what data will be required to help ensure that the enforcement of company content guidelines is fair and unbiased. The three principles urge companies to:

    publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines;
    provide clear notice to all users about what types of content are prohibited, and clear notice to each affected user about the reason for the removal of their content or the suspension of their account; and
    enable users to engage in a meaningful and timely appeals process for any content removals or account suspensions.

    These principles were developed in collaboration with digital rights organizations and civil society groups after the Content Moderation and Removal at Scale conference at Santa Clara University in February 2018. They incorporate research we’ve been working on as part of a grant from the Internet Policy Observatory, and my ongoing work funded by the Australian Research Council.
    What proportion of content is removed?

    As researchers, we need better information in order to study how well content moderation systems are working. As part of our research, we’ve been tracking how the content moderation processes of major platforms are actually working in practice. We’re using this information to evaluate these systems for bias, in a way that we can monitor improvements over time. We’ve created some very simple dashboards to help people explore this data — linked under each graph below.

    This data gives us a rare overview of the scale of content moderation on major platforms. We can see, for example, that somewhere around 7–9% of tweets are no longer available two weeks after they have been posted. We can also see trends in content censored in certain countries (Turkey and Germany are the biggest censors of tweets).
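    The 7–9% figure is simply a proportion over a re-checked sample. A minimal sketch of that bookkeeping, with invented sample data standing in for real API availability checks:

```python
# Estimate the share of sampled posts no longer available after a
# re-check. The sample below is invented; a real study would re-query
# each post ID via the platform's API two weeks after first seeing it.
sample = [
    {"id": 1, "available": True},
    {"id": 2, "available": False},  # deleted, suspended, or made private
    {"id": 3, "available": True},
    {"id": 4, "available": True},
]

def removal_rate(posts):
    """Fraction of posts in the sample that are no longer available."""
    gone = sum(1 for p in posts if not p["available"])
    return gone / len(posts)

print(f"{removal_rate(sample):.1%} of sampled posts unavailable")  # 25.0%
```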
    A law firm rents a high-end printer; later, the machine is passed on to another customer. Nothing strange about that: the whole point of renting is being able to give the machine up or swap it for a newer or better-performing one. But someone should remember to wipe the hard disk and reformat it with secure-erase software that prevents data recovery, rather than leaving thousands of scans of legal documents available to the next user. And the rental company, for its part, should run a double check.

    Let us not forget that losing a USB stick, or having a laptop or smartphone stolen, can constitute a data breach, with all the ensuing procedures and penalties.
    by M. Fioretti (2018-05-13)
From this mis-diagnosis flows a proposed solution: limit Facebook and Google’s access to our personal data and/or ensure others have access to that personal data on equal terms (“data portability”). Data portability means almost nothing in a world where you have a dominant network. So what if I can get my data out of Facebook, if no other network has a critical mass of participants? What is needed is for Facebook to have a live, open read/write API that allows other platforms to connect if authorized by the user.

    In fact, personal data is a practical irrelevance to the monopoly issue. Focusing on it serves only to distract us from the real solutions.

    Limiting Facebook’s and Google’s access to our personal data or making it more portable would make very little difference to their monopoly power, or reduce the deleterious effects of that power on innovation and freedom — the key freedoms of enterprise, choice and thought.

    It makes little difference because their monopoly simply doesn’t arise from their access to our personal data. Instead it comes from massive economies of scale (costless copying) plus platform effects. If you removed Google’s and Facebook’s ability to use personal data to target ads tomorrow, it would make very little difference to their top or bottom lines, because their monopoly on our attention would be little changed and their ad targeting little diminished. In Google’s case, the fact that you type a specific search from a particular location is already enough to target effectively; similarly, Facebook’s knowledge of your broad demographic characteristics would be enough, given the lock-hold it has on our online attention.

    What is needed in Google’s case is openness of the platform and in Facebook’s openness combined with guaranteed interoperability (“data portability” means little if everyone is on Facebook!).

    Worse, focusing on privacy actually reinforces their monopoly position. It does so because privacy concerns:

    Increase compliance costs which burden less wealthy competitors disproportionately. In particular, increased compliance costs make it harder for new firms to enter the market. A classic example is the “right to be forgotten” which actually makes it harder for alternative search firms to compete with Google.
    Make it harder to get (permitted) access to user data on the platform and it is precisely (user-permitted) read/write access to a platform’s data that is the best chance for competition. In fact, it now gives monopolists the perfect excuse to deny such access: Facebook can now deny other competing firms (user-permitted) access to user data citing “privacy concerns”.

    Similarly, the idea sometimes put forward that we just need another open-source decentralized social network is completely implausible (even if run by Tim Berners-Lee*).

    Platforms/networks like Facebook tend to standardize: witness phone networks, postal networks, electricity networks and even the Internet. We don’t want lots of incompatible social networks. We want one open one — just like we have one open Internet.

    In addition, the idea that some open-source decentralized effort is going to take on an entrenched highly resourced monopoly on its own is ludicrous (the only hope would be if there was serious state assistance and regulation — just in the way that China got its own social networks by effectively excluding Facebook).

    Instead, in the case of Facebook we need to address the monopoly at its root: networks like this will always tend to standardization. The solution is to ensure that we get an open rather than a closed, proprietary global social network — just like we got with the open Internet.

    Right now that would mean enforcing equal access rights to the Facebook API for competitors, or enforcing full open-sourcing of key parts of the software and tech stack plus guarantees of ongoing non-discriminatory API access.

    Even more importantly, we need to prevent these kinds of monopolies in the future; we must stop shutting the stable door after the horse has bolted! This means systematic funding of open protocols and platforms. By open I mean that the software, algorithms and non-personal data are open. And we need to fund the innovators who create and develop these, and the way to do that is replacing patents/copyright with remuneration rights.
The move also flags up contradictions in the company’s messaging to its users. For instance, we’ve asked the company why it’s shutting down in the EU if, as it claims on its website, it “respects your privacy”. We’re not holding our breath for a response.

    The market exit also looks like a tacit admission that the company has essentially been ignoring the EU’s existing privacy regime. GDPR does not introduce privacy rules to the region; rather, the regulation updates and builds on a data protection framework that is more than two decades old at this point, mostly by ramping up enforcement, with penalties for privacy violations that can scale as high as 4% of a company’s global annual turnover.

    So suddenly the EU is getting privacy regs with teeth. And just as suddenly the company is deciding it needs to shut up shop locally…
  6. The company’s financial performance is more of a reflection of Facebook’s unstoppability than its cause. Despite personal reservations about Facebook’s interwoven privacy, data, and advertising practices, the vast majority of people find that they can’t (and don’t want to) quit. Facebook has rewired people’s lives, routing them through its servers, and to disentangle would require major sacrifice. And even if one could get free of the service, the social pathways that existed before Facebook have shriveled up, like the towns along the roads that preceded the interstate highway system. Just look at how the very meaning of the telephone call has changed as we’ve expanded the number of ways we talk with each other. A method of communication that was universally seen as a great way of exchanging information has been transformed into a rarity reserved for close friends, special occasions, emergencies, and debt collectors.

    Most of the general pressures on the internet industry’s data practices, whether from Europe or anywhere else, don’t seem to scare Facebook. Their relative position will still be secure, unless something radical changes. In the company’s conference call with analysts last week, Sheryl Sandberg summed it up.

    “The thing that won’t change is that advertisers are going to look at the highest return-on-investment opportunity,” Sandberg said. “And what’s most important in winning budgets is relative performance in the industry.”

    As long as dollars going into the Facebook ad machine sell products, dollars will keep going into the Facebook ad machine.

    As long as their friends are still on Instagram, Facebook, and WhatsApp, people will keep using Facebook products.
Another likely point of contention concerns advertising and the sharing of data between WhatsApp and Facebook. At the moment we do not know whether the parent company intends to introduce ads in the messaging app, a move that Koum and Acton always opposed. At the time of the acquisition, the two co-founders had received assurances that advertising would not be added.

    But a year and a half later, Facebook persuaded WhatsApp to change its terms of service in order to obtain its users’ phone numbers and send them targeted advertising on the social network (not based on their WhatsApp conversations, which remained inaccessible to the company, but on their phone numbers, which allowed businesses holding lists of customers and their mobile numbers to find those users and reach them with promotions on Facebook).

    In May 2017 the European Union fined Facebook 110 million euros for providing misleading information at the time of the WhatsApp acquisition. In 2014 the social network had in fact claimed that it would not be able to automatically link the messaging app’s user accounts with its own.

    In the same period Italy’s competition authority, the Autorità Garante della Concorrenza e del Mercato, fined WhatsApp 3 million euros for inducing users “to fully accept the new Terms of Use, in particular the sharing of their data with Facebook, leading them to believe that it would otherwise have been impossible to continue using the application”.
    The future after Koum

    At the root of the two co-founders’ departure there seems above all to be a cultural clash between the WhatsApp model, built on the idea of privacy, and the Facebook model, built on exploiting user data to make money from advertising. And although Facebook had won some crucial battles, such as dropping WhatsApp’s 0.99-dollar subscription (which had been introduced for new users) or the change of terms of service, the two co-founders resisted more radical changes. Changes that, from now on, may no longer meet any obstacles.

    But all this could also backfire on WhatsApp. This hardly seems the best moment to sell off its identity as a privacy-oriented service. Not by chance, at the beginning of 2018 Acton decided to put 50 million dollars into Signal, the encrypted app, a niche product but highly regarded by the tech community, whose protocol underpins WhatsApp’s own encryption (in fact, the millions went into the Signal Foundation, a non-profit meant to broaden the app’s mission of “making private communication more accessible and ubiquitous”).

    Meanwhile the other best-known encrypted app, Telegram, is setting itself up (at least in terms of image and marketing, not in the quality of its encryption or its implementation) as a champion of freedom of expression and privacy, getting itself banned in Russia. In this scenario, it is a safe bet that Koum will hardly spend long just playing with his Porsches.
  8. Today’s Internet and digital platforms are becoming increasingly centralised, slowing innovation and challenging their potential to revolutionise society and the economy in a pluralistic manner.

    The DECODE project will develop practical alternatives, through the creation, evaluation and demonstration of a distributed and open architecture for managing online access and aggregation of private information to allow a citizen-friendly and privacy-aware governance of access entitlements.

    Strong ethical and digital rights principles are at the base of DECODE’s mission, moving towards the implementation of open standards for a technical architecture resting on the use of Attribute Based Cryptography, distributed ledgers, a secure operating system, and a privacy-focused smart-rules language.
  9. Popular internet platforms that currently mediate our everyday communications become more and more efficient in managing vast amounts of information, rendering their users more and more addicted and dependent on them. Alternative, more organic options like community networks do exist and they can empower citizens to build their own local networks from the bottom up. This chapter explores such technological options together with the adoption of a healthier internet diet in the context of a wider vision of sustainable living in an energy-limited world.

    The popular Internet platforms that mediate a significant portion of our everyday communications thus become ever more efficient at managing vast amounts of information. In turn, they also become ever more knowledgeable about user interaction design techniques that increase addiction (or “stickiness”, when described as a performance metric) and dependency. This renders their users more and more addicted to and dependent on them, subject to manipulation and exploitation for commercial and political objectives. This could be characterized as the second watershed of the Internet, in the context of Illich’s analysis of the lifecycle of tools. As in the case of medicine and education, the Internet at its early stages was extremely useful. It dramatically increased our access to knowledge and to people all over the world. However, to achieve this, it relied on big organizations offering efficient and reliable services. These services now depend more and more on the participation of people, and on the exploitation of the data they produce, for the platforms to survive. This creates a vicious cycle between addictive design practices, unfair competition that breaches the principle of net neutrality, and unethical uses of privately owned knowledge about human behavior, generated through analyses of the data produced by our everyday online activities.

    In addition to the tremendous social, political, and economic implications of centralizing power on the Internet, there are also significant ecological consequences. At first glance, these seem positive. The centralization of online platforms has allowed their owners to build huge data centers in cold climates and invest in technologies that keep servers cool at lower energy costs. At the same time, however, the main aim of online platforms is to maximize the total time spent online and the amount of information exchanged, not only between people but also between “things”! Their profitability depends on processing huge amounts of information to produce knowledge that can be sold to advertisers and politicians. Like the pharmaceutical companies, they create and maintain a world in which they are very much needed. This also explains why corporations like Facebook, Google, and Microsoft are at the forefront of efforts to provide “Internet access to all”, and why, at the same time, local communities face so many economic, political, and legal hurdles that hinder them from building, maintaining, and controlling their own infrastructures.

    To achieve a sustainable level of Internet usage, one needs to provide the appropriate tools and processes for local communities to make decisions on the design of their ICT tools, including appropriate alternative and/or complementary designs of places, institutions, and rituals that can impose certain constraints and replace online communications when these are not really necessary. To answer this demand, one should first answer a more fundamental question: how much online communication is needed in an energy-restricted world? In the case of food and housing, there are some reasonable basic needs; for example, each person should consume 2000 calories per day or 35 m2 of habitat (see P.M., 2014). But how many megabytes does someone need to consume to sustain a good quality of life? What would be the analogy to a restricted vegetarian, or even vegan, Internet diet?
    The answer might differ depending on the services considered (social activities, collaborative work, or media) and the type of access to the network discussed above. For example, is it really necessary to have wireless connectivity “everywhere, anytime” using expensive mobile devices, or is it enough to have old-fashioned Internet cafes and only wired connections at home? Would it make sense to have Internet-free zones in cities? Can we imagine “shared” Internet usage in public spaces—a group of people interacting together in front of a screen and alternating in showing their favorite YouTube videos (a sort of an Internet jukebox)? There is a variety of more or less novel constraints which could be imposed on different dimensions:

    Time and Volume: A communications network owned by a local community, instead of a global or local corporation, could shut down for a certain period of time each day if that is what the community decides. Or community members could agree on time quotas for using the network (e.g., no more than 4 hours per day or 150 hours per month). Such constraints would not only reduce energy consumption; they would also encourage a healthier lifestyle and more face-to-face interactions.

    Setting quotas on the speed (bandwidth) and volume (MB) that each person consumes is another way to restrict Internet consumption. People are actually already used to such limits, especially for 3G/4G connectivity. The difference is that a volume constraint does not necessarily translate into a time constraint (if someone uses low-volume services such as e-mail). So volume constraints could encourage the use of less voluminous services (e.g., downloading a movie in standard rather than high definition if it is to be watched on a low-definition screen anyway), while time constraints might have the opposite effect (people using as much bandwidth as possible in their available time).

    However, to enforce such constraints, both time- and volume-based, on an individual basis, the network needs to know who is connecting to it and keep track of their overall usage. This raises the question of privacy and identification online, and again the trade-off between trusting local and global institutions with this role. Enforcing time or volume constraints for groups of people (e.g., the residents of a cooperative housing complex) is an interesting option when privacy is a priority.
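    A group-level volume quota of the kind just suggested, tracking a household or cooperative rather than individuals, needs only very simple bookkeeping. This is an illustrative sketch (all numbers invented):

```python
class GroupQuota:
    """Track monthly data volume for a whole group (e.g., a housing
    cooperative) rather than for individuals, so the network need not
    know which member is online. All numbers are illustrative."""

    def __init__(self, monthly_mb: int):
        self.monthly_mb = monthly_mb
        self.used_mb = 0

    def consume(self, mb: int) -> bool:
        """Record usage; refuse once the group allowance would be exceeded."""
        if self.used_mb + mb > self.monthly_mb:
            return False
        self.used_mb += mb
        return True

coop = GroupQuota(monthly_mb=50_000)
print(coop.consume(2_000))   # True: well within the allowance
print(coop.consume(49_000))  # False: would exceed 50,000 MB
print(coop.used_mb)          # 2000: refused requests are not counted
```

Because the counter is shared, the network sees only aggregate consumption, which is exactly the privacy property the text argues for.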

    Devices: Energy consumption depends on the type of equipment used to access the Internet. For example, if access to the Internet happens only through desktop computers or laptops using ethernet cables instead of mobile smartphones, then the total energy consumed for a given service would be significantly reduced. Usage would also be dramatically affected: On the positive side, many people would spend less time online and use the Internet only for important tasks. On the negative side, others might stay at home more often and sacrifice outdoors activities in favor of Internet communications.
  10. Journalists have been asking me whether the revulsion against the abuse of Facebook data could be a turning point for the campaign to recover privacy. That could happen, if the public makes its campaign broader and deeper.

    Broader, meaning extending to all surveillance systems, not just Facebook. Deeper, meaning to advance from regulating the use of data to regulating the accumulation of data. Because surveillance is so pervasive, restoring privacy is necessarily a big change, and requires powerful measures.

    The surveillance imposed on us today far exceeds that of the Soviet Union. For freedom and democracy’s sake, we need to eliminate most of it. There are so many ways to use data to hurt people that the only safe database is the one that was never collected. Thus, instead of the EU’s approach of mainly regulating how personal data may be used (in its General Data Protection Regulation or GDPR), I propose a law to stop systems from collecting personal data.

    The robust way to do that, the way that can’t be set aside at the whim of a government, is to require systems to be built so as not to collect data about a person. The basic principle is that a system must be designed not to collect certain data, if its basic function can be carried out without that data.

    Data about who travels where is particularly sensitive, because it is an ideal basis for repressing any chosen target. We can take the London trains and buses as a case for study.

    The Transport for London digital payment card system centrally records the trips any given Oyster or bank card has paid for. When a passenger tops up the card digitally, the system associates the card with the passenger’s identity. This adds up to complete surveillance.

    I expect the transport system can justify this practice under the GDPR’s rules. My proposal, by contrast, would require the system to stop tracking who goes where. The card’s basic function is to pay for transport. That can be done without centralising that data, so the transport system would have to stop doing so. When it accepts digital payments, it should do so through an anonymous payment system.

    Frills on the system, such as the feature of letting a passenger review the list of past journeys, are not part of the basic function, so they can’t justify incorporating any additional surveillance.
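    The design rule stated here, that payment must work without recording who travels where, can be illustrated with a toy single-use-token scheme. The sketch only shows what is not stored; real anonymous payment systems (blind-signature e-cash, for instance) are far more involved:

```python
import secrets

issued_tokens = set()   # tokens sold anonymously (e.g., for cash)
journeys_logged = []    # all the operator keeps: a count of journeys

def buy_token() -> str:
    """Issue an anonymous, single-use fare token."""
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def ride(token: str) -> bool:
    """Accept a valid token once; log only that *a* journey happened,
    never which card, person, origin, or destination was involved."""
    if token in issued_tokens:
        issued_tokens.remove(token)
        journeys_logged.append("journey")
        return True
    return False

t = buy_token()
assert ride(t) is True     # valid token accepted
assert ride(t) is False    # tokens are single-use
assert journeys_logged == ["journey"]  # no personal data retained
```

The basic function (paying a fare) is fully served, yet the database that could link a person to their journeys was simply never collected.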


Page 1 of 25 - Online Bookmarks of M. Fioretti: Tags: data ownership

About - Propulsed by SemanticScuttle