mfioretti: percloud*

Bookmarks on this page are managed by an admin user.

249 bookmark(s), sorted by date (descending)

  1. Our team of cloud operators will set everything up
    So you're ready to go without needing to be a Linux guru
    https://cloudvault.me/#pricing
    by M. Fioretti (2018-05-16)
    Voting 0
  2. A trip to KubeCon + CloudNativeCon reveals a community hard at work building an open, agile and scalable cloud platform to fuel the boom in ubiquitous services....

    To understand the importance of Kubernetes we need to return to containers briefly. Containers, by design, use fewer resources than virtual machines (VMs) because they share an OS and run ‘closer to the metal’. The technology has enabled developers to package, ship and run their applications in isolated containers that run virtually anywhere. When continuous integration/continuous delivery software (e.g. Jenkins) and practices are added into the mix, companies benefit from nimble, responsive automation that significantly speeds up development. For example, any change a developer makes to the source code automatically triggers the creation, testing and deployment of a new container to staging and then into production.
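
    As a rough illustration of the build-test-deploy flow the paragraph describes, here is a minimal sketch in Python that shells out to the docker and kubectl CLIs; the registry address, image name and "webapp" deployment are hypothetical, not taken from the article.

        # Illustrative CI step: build, test and roll out a container image.
        # Assumes the docker and kubectl CLIs are installed and configured;
        # registry.example.com and the "webapp" deployment are hypothetical names.
        import subprocess

        def run(cmd):
            """Run a shell command and fail the pipeline if it fails."""
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def pipeline(commit_sha: str):
            image = f"registry.example.com/webapp:{commit_sha}"
            run(["docker", "build", "-t", image, "."])        # package the app
            run(["docker", "run", "--rm", image, "pytest"])   # run the tests inside the container
            run(["docker", "push", image])                    # ship the image to the registry
            # Trigger a rolling update; the orchestrator replaces containers gradually.
            run(["kubectl", "set", "image", "deployment/webapp", f"webapp={image}"])

        if __name__ == "__main__":
            pipeline("abc1234")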

    The idea of a container running only one process has also led to microservices: applications are broken down into their constituent processes, each placed inside its own container, which makes a lot of sense in the enterprise world where greater efficiencies are constantly being sought.

    However, this explosion of containerised apps has created the need for a way to manage or ‘orchestrate’ thousands of containers.
    https://www.techradar.com/news/bigger-than-linux-the-rise-of-cloud-native
    Voting 0
  3. Zot is the revolutionary protocol that powers Hubzilla, providing communications, identity management, and access control across a fully decentralised network of independent websites, often called "the grid". The resulting platform is a robust system that supports privacy and security while enabling the kind of rich web services typically seen only in centralized, proprietary solutions.

    Consider this typical scenario:

    Jaquelina wishes to share photos with Roberto from her blog at jaquelina.org, but to nobody else. Roberto maintains his own family hub at roberto.net on a completely independent server. Zot allows Jaquelina to publish her photos using an access control list (ACL) that includes only Roberto. That means that while Roberto can see the photos when he visits her blog, his brother Marco cannot, and neither can any of his other family members who have accounts on roberto.net.

    The magic in this scenario comes from the fact that Roberto never logged in to Jaquelina's website. Instead, he had to log in only once, using his password, on his own website at roberto.net. When Roberto visits jaquelina.org, her hub seamlessly authenticates him by remotely querying his server in the background.

    It is not uncommon for servers to have technical problems or become inaccessible for a variety of reasons. Zot provides robustness for Roberto's online activities by allowing him to have clones of his online identity, or channel, on multiple independent hubs. Imagine that Roberto's server crashes for some reason and he cannot log in there. He simply logs in to one of his clones at gadfly.com, a site operated by his friend Peter. Once authenticated at gadfly.com, Roberto can view Jaquelina's blog as before, without Jaquelina having to grant any additional access!
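
    A toy model of the access control and cross-hub authentication in this scenario (this is not the actual Zot wire protocol; the channel IDs, hub names and helper functions are illustrative only):

        # Toy model of Zot-style ACLs and cross-hub authentication.
        # This is NOT the real Zot protocol; names and helpers are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class Channel:
            """A portable identity; the same channel can be cloned on several hubs."""
            channel_id: str                            # identity, independent of any single hub
            hubs: set = field(default_factory=set)     # hubs holding a clone of this channel

        @dataclass
        class Post:
            owner: Channel
            content: str
            acl: set = field(default_factory=set)      # channel_ids allowed to see the post

        def remote_authenticate(visitor_hub: str, claimed_channel: Channel) -> bool:
            """The publishing hub asks the visitor's own hub, in the background,
            whether this session really belongs to claimed_channel.
            In reality this is a signed challenge/response between hubs."""
            return visitor_hub in claimed_channel.hubs

        def can_view(post: Post, visitor: Channel, visitor_hub: str) -> bool:
            return remote_authenticate(visitor_hub, visitor) and visitor.channel_id in post.acl

        # Roberto's channel is cloned on his own hub and on a friend's hub.
        roberto = Channel("roberto", hubs={"roberto.net", "gadfly.com"})
        marco = Channel("marco", hubs={"roberto.net"})
        photos = Post(owner=Channel("jaquelina", hubs={"jaquelina.org"}),
                      content="family photos", acl={"roberto"})

        print(can_view(photos, roberto, "roberto.net"))   # True
        print(can_view(photos, roberto, "gadfly.com"))    # True: a clone still works
        print(can_view(photos, marco, "roberto.net"))     # False: not in the ACL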
    https://project.hubzilla.org/help/en/developer/zot_protocol#What_is_Zot_
    Voting 0
    From this mis-diagnosis flows a proposed solution: limit Facebook and Google’s access to our personal data and/or ensure others have access to that personal data on equal terms (“data portability”). Data portability means almost nothing in a world where you have a dominant network: so what if I can get my data out of Facebook, if no other network has a critical mass of participants? What is needed is for Facebook to have a live, open read/write API that allows other platforms to connect if authorized by the user.

    In fact, personal data is a practical irrelevance to the monopoly issue. Focusing on it serves only to distract us from the real solutions.

    Limiting Facebook’s and Google’s access to our personal data, or making it more portable, would make very little difference to their monopoly power, and would do little to reduce the deleterious effects of that power on innovation and freedom: the key freedoms of enterprise, choice and thought.

    It makes little difference because their monopoly simply doesn’t arise from their access to our personal data. Instead it comes from massive economies of scale (costless copying) plus platform effects. If you removed Google’s and Facebook’s ability to use personal data to target ads tomorrow, it would make very little difference to their top or bottom lines, because their monopoly on our attention would be little changed and their ad targeting little diminished. In Google’s case, the fact that you type in a specific search from a particular location is already enough to target effectively; similarly, Facebook’s knowledge of your broad demographic characteristics would be enough, given the lock-hold they have on our online attention.

    What is needed in Google’s case is openness of the platform, and in Facebook’s, openness combined with guaranteed interoperability (“data portability” means little if everyone is on Facebook!).
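
    As a sketch of what user-authorized read/write interoperability could mean in practice, here is a toy model; the Platform class, scopes and token flow are invented for illustration and do not describe any real Facebook API.

        # Toy model of user-authorized, read/write interoperability between platforms.
        # Endpoints, scopes and tokens are illustrative; this is not any real API.
        import secrets

        class Platform:
            def __init__(self, name):
                self.name = name
                self.posts = {}      # user -> list of posts
                self.grants = {}     # token -> (user, scopes)

            def authorize(self, user, scopes):
                """The user grants another platform access; returns a bearer token."""
                token = secrets.token_hex(8)
                self.grants[token] = (user, set(scopes))
                return token

            def read(self, token):
                user, scopes = self.grants[token]
                if "read" not in scopes:
                    raise PermissionError("token not authorized to read")
                return list(self.posts.get(user, []))

            def write(self, token, text):
                user, scopes = self.grants[token]
                if "write" not in scopes:
                    raise PermissionError("token not authorized to write")
                self.posts.setdefault(user, []).append(text)

        # A competing network, authorized by the user, reads and writes on equal terms.
        incumbent = Platform("incumbent")
        token = incumbent.authorize("alice", scopes=["read", "write"])
        incumbent.write(token, "posted from a competing client")
        print(incumbent.read(token))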

    Worse, focusing on privacy actually reinforces their monopoly position. It does so because privacy concerns:

    - Increase compliance costs, which burden less wealthy competitors disproportionately. In particular, increased compliance costs make it harder for new firms to enter the market. A classic example is the “right to be forgotten”, which actually makes it harder for alternative search firms to compete with Google.
    - Make it harder to get (permitted) access to user data on the platform, and it is precisely (user-permitted) read/write access to a platform’s data that is the best chance for competition. In fact, it now gives monopolists the perfect excuse to deny such access: Facebook can now deny competing firms (user-permitted) access to user data citing “privacy concerns”.


    Similarly, the idea sometimes put forward that we just need another open-source decentralized social network is completely implausible (even if run by Tim Berners-Lee*).

    Platforms/networks like Facebook tend to standardize: witness phone networks, postal networks, electricity networks and even the Internet. We don’t want lots of incompatible social networks. We want one open one — just like we have one open Internet.

    In addition, the idea that some open-source decentralized effort is going to take on an entrenched highly resourced monopoly on its own is ludicrous (the only hope would be if there was serious state assistance and regulation — just in the way that China got its own social networks by effectively excluding Facebook).

    Instead, in the case of Facebook we need to address the monopoly at its root: networks like this will always tend towards standardization. The solution is to ensure that we get an open rather than a closed, proprietary global social network, just like we got with the open Internet.

    Right now that would mean enforcing equal access rights to the Facebook API for competitors, or enforcing full open sourcing of key parts of the software and tech stack plus guarantees of ongoing non-discriminatory API access.

    Even more importantly, we need to prevent these kinds of monopolies in future; we should stop shutting the door only after the horse has bolted! This means systematic funding of open protocols and platforms. By open I mean that the software, algorithms and non-personal data are open. And we need to fund the innovators who create and develop these, and the way to do that is by replacing patents/copyright with remuneration rights.
    https://blog.okfn.org/2018/05/09/solv...opolies-problem-facebook-google-et-al
    Voting 0
  5. The company’s financial performance is more of a reflection of Facebook’s unstoppability than its cause. Despite personal reservations about Facebook’s interwoven privacy, data, and advertising practices, the vast majority of people find that they can’t (and don’t want to) quit. Facebook has rewired people’s lives, routing them through its servers, and to disentangle would require major sacrifice. And even if one could get free of the service, the social pathways that existed before Facebook have shriveled up, like the towns along the roads that preceded the interstate highway system. Just look at how the very meaning of the telephone call has changed as we’ve expanded the number of ways we talk with each other. A method of communication that was universally seen as a great way of exchanging information has been transformed into a rarity reserved for close friends, special occasions, emergencies, and debt collectors.

    Most of the general pressures on the internet industry’s data practices, whether from Europe or anywhere else, don’t seem to scare Facebook. Their relative position will still be secure, unless something radical changes. In the company’s conference call with analysts last week, Sheryl Sandberg summed it up.

    “The thing that won’t change is that advertisers are going to look at the highest return-on-investment opportunity,” Sandberg said. “And what’s most important in winning budgets is relative performance in the industry.”

    As long as dollars going into the Facebook ad machine sell products, dollars will keep going into the Facebook ad machine.

    As long as their friends are still on Instagram, Facebook, and WhatsApp, people will keep using Facebook products.
    https://www.theatlantic.com/technolog...18/05/facebook-the-unstoppable/559301
    Voting 0
  6. Imagine a world where everyone has their own space on the Internet, funded from the commons. This is a private space (an organ of the cyborg self) that all our so-called smart devices (also organs) link into.

    Instead of thinking of this space as a personal cloud, we must consider it a special, permanent node within a peer-to-peer structure wherein all our various devices (organs) connect to one another. Pragmatically, this permanent node is used to guarantee findability (initially using domain names) and availability (as it is hosted/always on) as we transition from the client/server architecture of the current Web to the peer-to-peer architecture of the next generation Internet.



    The service providers must, of course, be free to extend the capabilities of the system as long as they share their improvements back into the commons (“share alike”), thus avoiding lock-in. For providing services above and beyond the core services funded from the commons, individual organisations may set prices for and charge for value-added services. In this way, we can build a healthy economy of competition on top of an ethically sound core instead of the system of monopolies we have today on top of an ethically rotten core. And we can do so without embroiling the whole system in convoluted government bureaucracy that would stifle experimentation, competition, and the organic, decentralised evolution of the system.
    https://ar.al/notes/encouraging-indiv...ual-sovereignty-and-a-healthy-commons
    by M. Fioretti (2018-04-23)
    Voting 0
  7. Our work with the City of Ghent on our Indienet project (https://indienet.info) gets a shout out in this month’s New Scientist, in Jacob Aron’s article titled “Stop being the product.”

    “On the face of it, it sounds a lot like the old web, where people created simple pages hosted on computers they controlled. The crucial difference is that it used to be difficult to put things online without technical know-how – which is partly why easy-to-use services like Facebook are popular.”
    https://mastodon.ar.al/@aral/99841075595555746
    by M. Fioretti (2018-04-23)
    Voting 0
  8. Today’s Internet and digital platforms are becoming increasingly centralised, slowing innovation and challenging their potential to revolutionise society and the economy in a pluralistic manner.

    The DECODE project will develop practical alternatives, through the creation, evaluation and demonstration of a distributed and open architecture for managing online access and aggregation of private information to allow a citizen-friendly and privacy-aware governance of access entitlements.

    Strong ethical and digital rights principles are at the base of DECODE’s mission, moving towards the implementation of open standards for a technical architecture resting on the use of Attribute-Based Cryptography, distributed ledgers, a secure operating system, and a privacy-focused smart rules language.
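
    A highly simplified sketch of attribute-based access entitlements in the spirit DECODE describes; the attribute names and rule format are invented, and real attribute-based cryptography would prove these predicates without revealing the underlying data.

        # Simplified illustration of attribute-based access entitlements.
        # Attribute names and the rule format are made up; real ABC schemes prove
        # predicates cryptographically without disclosing the attributes themselves.
        def satisfies(entitlement_rule, disclosed_attributes):
            """An entitlement rule is a set of required attribute predicates."""
            return all(disclosed_attributes.get(attr) == value
                       for attr, value in entitlement_rule.items())

        # The citizen chooses which attributes to disclose for this service.
        citizen_attributes = {"resident_of_city": True, "age_over_18": True}

        # A local service only needs residency; it never sees the rest of the data.
        rule = {"resident_of_city": True}
        disclosed = {"resident_of_city": citizen_attributes["resident_of_city"]}
        print(satisfies(rule, disclosed))   # True, without revealing age or anything else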
    https://decodeproject.github.io/whitepaper/#pf6
    Voting 0
    Some have taken today’s problems with digital identity as a reason to move in the opposite direction. As a consequence, the notion of self-sovereignty has gained significant popularity among the digerati. In a self-sovereign identity framework the individual uses tools such as cryptographic signing to create their own identity, allowing others to authenticate that identity and its properties. These identity credentials are said to be self-sovereign. The current hype cycle is backing blockchain ledgers to provide more trustworthy and more broadly available identity credentials, and many cryptocurrencies, such as Bitcoin and Ethereum, function in this manner.

    This idea is exciting but misleading. To have a system that depends entirely on assertions by the individuals themselves – namely true self-sovereignty – is an invitation for mistakes, fraud, and systematic corruption. Most recently, financial authorities around the world have been reacting to this problem in the world of cryptocurrencies by outlawing or severely restricting these digital currencies.

    The mistake that both governments and tech pioneers are making is failing to realize that trustworthy identity depends on jointly-issued credentials, where credentials and certification must be based on trustworthy assertions by the community of people and institutions in which we live. Identity credentials are really mechanisms for collecting and documenting trusted relationships, not self-certifying systems. Trustworthy self-sovereign frameworks should really be called joint sovereignty or community sovereignty frameworks.
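
    A minimal sketch contrasting a purely self-asserted credential with one counter-signed by a community institution, using Ed25519 signatures from the Python cryptography package; the claim format and key handling are illustrative only.

        # Sketch: a self-asserted claim vs. a claim counter-signed by a community issuer.
        # Requires the 'cryptography' package; the claim format is illustrative.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        claim = b'{"subject": "alice", "attribute": "member_of_cooperative"}'

        # Self-sovereign: Alice signs her own claim. Anyone can verify that Alice
        # said it, but nothing stops her from asserting whatever she likes.
        alice_key = Ed25519PrivateKey.generate()
        self_signed = alice_key.sign(claim)

        # Community-sovereign: a trusted institution (e.g. the cooperative itself)
        # counter-signs the same claim, so the assertion is jointly issued.
        issuer_key = Ed25519PrivateKey.generate()
        counter_signature = issuer_key.sign(claim)

        def verify(public_key, signature, message) -> bool:
            try:
                public_key.verify(signature, message)
                return True
            except InvalidSignature:
                return False

        print(verify(alice_key.public_key(), self_signed, claim))         # True, but self-asserted
        print(verify(issuer_key.public_key(), counter_signature, claim))  # True, vouched for by the community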
    https://blogs.wsj.com/cio/2018/04/03/...ntity-is-broken-heres-a-way-to-fix-it
    by M. Fioretti (2018-04-23)
    Voting 0
  10. Popular internet platforms that currently mediate our everyday communications become more and more efficient in managing vast amounts of information, rendering their users more and more addicted and dependent on them. Alternative, more organic options like community networks do exist and they can empower citizens to build their own local networks from the bottom up. This chapter explores such technological options together with the adoption of a healthier internet diet in the context of a wider vision of sustainable living in an energy-limited world.


    The popular Internet platforms that mediate a significant portion of our everyday communications thus become more and more efficient in managing vast amounts of information. In turn, they also become more and more knowledgeable about user interaction design techniques that increase addiction (or “stickiness”, when described as a performance metric) and dependency. This renders their users more and more addicted and dependent on them, subject to manipulation and exploitation for commercial and political objectives. This could be characterized as the second watershed of the Internet, in the context of Illich’s analysis of the lifecycle of tools. As in the case of medicine and education, the Internet at its early stages was extremely useful. It dramatically increased our access to knowledge and to people all over the world. However, to achieve this it relied on big organizations offering efficient and reliable services. These services now depend more and more on the participation of people, and on the exploitation of the data they produce, for the platforms to survive. This creates a vicious cycle between addictive design practices, unfair competition that breaches the principle of net neutrality, and unethical uses of privately owned knowledge about human behavior generated through analyses of the data produced by our everyday online activities.

    In addition to the tremendous social, political, and economic implications of centralizing power on the Internet, there are also significant ecological consequences. At first glance, these seem to be positive. The centralization of online platforms has allowed their owners to build huge data centers in cold climates and invest in technologies that keep servers cool at lower energy costs. However, at the same time, the main aim of online platforms is to maximize the total time spent online and the amount of information exchanged, not only between people but also between “things!” Their profitability depends on processing huge amounts of information to produce knowledge that can be sold to advertisers and politicians. Like the pharmaceutical companies, they create and maintain a world in which they are very much needed. This also explains why corporations like Facebook, Google, and Microsoft are at the forefront of the efforts to provide “Internet access to all”, and why at the same time local communities face so many economic, political, and legal hurdles when they try to build, maintain, and control their own infrastructures.


    To achieve a sustainable level of Internet usage, one needs to provide the appropriate tools and processes for local communities to make decisions on the design of their ICT tools, including appropriate alternative and/or complementary designs of places, institutions, and rituals that can impose certain constraints and replace online communications when these are not really necessary. To answer this demand, one should first answer a more fundamental question: how much online communication is needed in an energy-restricted world? In the case of food and housing there are some reasonable basic needs; for example, each person needs roughly 2000 calories per day, or 35 m2 of habitat (see P.M., 2014). But how many MB does someone need to consume to sustain a good quality of life? What would be the analogy for a restricted vegetarian, or even vegan, Internet diet?
    The answer might differ depending on the services considered (social activities, collaborative work, or media) and the type of access to the network discussed above. For example, is it really necessary to have wireless connectivity “everywhere, anytime” using expensive mobile devices, or is it enough to have old-fashioned Internet cafes and only wired connections at home? Would it make sense to have Internet-free zones in cities? Can we imagine “shared” Internet usage in public spaces—a group of people interacting together in front of a screen and alternating in showing their favorite YouTube videos (a sort of an Internet jukebox)? There is a variety of more or less novel constraints which could be imposed on different dimensions:

    Time and Volume: A communications network owned by a local community, instead of a global or local corporation, could shut down for a certain period of time each day if this is what the community decides. Or community members could agree to have certain time quotas for using the network (e.g., not more than 4 hours per day or 150 hours per month). Such constraints would not only reduce energy consumption; they would also enforce a healthier lifestyle and encourage face-to-face interactions.

    Imposing quotas on the speed (bandwidth) and volume (MB) that each person consumes is another way to restrict Internet consumption. People are already used to such limits, especially for 3G/4G connectivity. The difference is that a volume constraint does not necessarily translate into a time constraint (if someone uses low-volume services such as e-mail). So volume constraints could encourage the use of less voluminous services (e.g., downloading a movie in low rather than high definition if it is going to be watched on a low-definition screen anyway), while time constraints might have the opposite effect (people using as much bandwidth as possible in their available time).

    However, to enforce such constraints, both time- and volume-based, on an individual basis, the network needs to know who is connecting to it and keep track of their overall usage. This raises the question of privacy and identification online, and again the trade-off of trusting local vs. global institutions to take this role. Enforcing time or volume constraints for groups of people (e.g., the residents of a cooperative housing complex) is an interesting option when privacy is considered important.
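
    A small sketch of how a community network might account for the time and volume quotas discussed above, tracked per household or group rather than per individual to soften the privacy issue; the quota values echo the examples in the text, while the accounting logic itself is illustrative.

        # Illustrative quota accounting for a community-run network.
        # The 4-hour daily budget follows the example in the text; the monthly
        # volume budget is invented. Accounting per group/household rather than
        # per person limits the privacy impact of tracking usage.
        from dataclasses import dataclass

        @dataclass
        class GroupQuota:
            daily_time_budget_s: int = 4 * 3600        # e.g. 4 hours per day
            monthly_volume_budget_mb: int = 20_000     # illustrative monthly volume cap
            time_used_s: int = 0
            volume_used_mb: int = 0

            def record_session(self, duration_s: int, volume_mb: int) -> None:
                self.time_used_s += duration_s
                self.volume_used_mb += volume_mb

            def may_connect(self) -> bool:
                return (self.time_used_s < self.daily_time_budget_s
                        and self.volume_used_mb < self.monthly_volume_budget_mb)

        # One shared counter per housing cooperative, not per resident.
        household = GroupQuota()
        household.record_session(duration_s=3 * 3600, volume_mb=1500)
        print(household.may_connect())   # True: still within both budgets
        household.record_session(duration_s=2 * 3600, volume_mb=300)
        print(household.may_connect())   # False: daily time budget exhausted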

    Devices: Energy consumption depends on the type of equipment used to access the Internet. For example, if access to the Internet happens only through desktop computers or laptops using ethernet cables instead of mobile smartphones, then the total energy consumed for a given service would be significantly reduced. Usage would also be dramatically affected: On the positive side, many people would spend less time online and use the Internet only for important tasks. On the negative side, others might stay at home more often and sacrifice outdoors activities in favor of Internet communications.
    https://rd.springer.com/chapter/10.1007/978-3-319-66592-4_13
    Voting 0


Page 1 of 25 - Online Bookmarks of M. Fioretti: Tags: percloud

Propulsed by SemanticScuttle