mfioretti: cloud computing


  1. They get you hooked for free and the next level is $1,496 per month… wtf! MongoLabs is little better. I don’t understand why everything is becoming an exorbitantly priced service. Keep in mind, platform-as-a-service providers should have templates for hard-to-set-up stateful services like MongoDB.

    Rather than use templates from IaaS providers, I should be able to use someone else’s platform running on my chosen cloud/IaaS provider (call it what you like), like Amazon, to put pressure on them to lower prices. If they know it is easy to leave, their prices will go down. Let’s take back some of our ability to do hard things wherever we choose so that running a database does not cost … wait, what, $21,430 per month?

    Please, tell me I’m crazy. Tell me why I’m wrong. I sincerely want to know. So many products are inches from being able to compete with Amazon and give developers back the choice of whether to run on Amazon or retain a bit more freedom (perhaps even run in another cloud or a local cloud environment). Soon, Kubernetes will allow complicated stateful services to run inside containers (a minimal sketch follows this item).

    Meanwhile, Amazon announces competing components daily. Amazon has API Gateway, CloudFormation to spin up almost any stack or service, CodePipeline for continuous delivery, load balancing; you name it, they have it.

    Using Amazon for everything feels wrong to me.

    Now, you might be wondering, what is the problem? Just go with Amazon; everyone is doing it. “You aren’t cool unless you’re using Amazon.” I do work for a large organization that can afford to run everything on Amazon (maybe; some would disagree). However, I also work intensely on the multi-way trading platform Abecorn.com, which has no VC funding, so I have learned to “do more with less.”
    http://techcrunch.com/2015/11/21/i-wa...nch+%28TechCrunch%29&sr_share=twitter
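    The Kubernetes point above is easier to judge with something concrete. Below is a minimal sketch, using the Kubernetes Python client, of running MongoDB as a container; the pod name, image tag, and namespace are invented for illustration and persistent storage is omitted, so this is not a production setup, but the same spec can be submitted unchanged to a cluster hosted on Amazon, another cloud, or local hardware.

        # Sketch with assumed names: start a MongoDB container on whatever
        # cluster the local kubeconfig points at, regardless of the IaaS underneath.
        from kubernetes import client, config

        config.load_kube_config()  # reads ~/.kube/config for the chosen cluster

        mongo = client.V1Container(
            name="mongo",
            image="mongo:3.0",  # illustrative image tag
            ports=[client.V1ContainerPort(container_port=27017)],
        )
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name="mongo", labels={"app": "mongo"}),
            spec=client.V1PodSpec(containers=[mongo]),
        )
        client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

    Because the spec, not the provider, defines the service, leaving one cloud for another is largely a matter of pointing the kubeconfig at a different cluster.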
  2. In a key case before the European Union's highest court, the Court of Justice of the European Union (CJEU), the European Commission admitted yesterday that the US-EU Safe Harbor framework for transatlantic data transfers does not adequately protect EU citizens' data from US spying. The European Commission's attorney Bernhard Schima told the CJEU's attorney general: "You might consider closing your Facebook account if you have one," euobserver reports.

    The case before the CJEU is the result of complaints lodged against five US companies—Apple, Facebook, Microsoft, Skype, and Yahoo—with the relevant data protection authorities in Germany, Ireland, and Luxembourg by the Austrian privacy activist Max Schrems, supported by crowdfunding. Because of the important points of European law raised, the Irish High Court referred the Safe Harbor case to the CJEU.

    The referral was prompted by Edward Snowden's revelations about the Prism data-collection program, which show that the US intelligence community has ready access to user data held by nine US Internet companies, including the five named in Schrems' complaints. The EU's Data Protection Directive prohibits the transfer of personal data to non-European Union countries that do not meet the EU's "adequacy" standard for privacy protection. To aid US companies operating in the EU, the Safe Harbor Framework was introduced, which allows US organizations to self-certify their compliance with the adequacy provision when they transfer EU personal data back to the US.
    http://arstechnica.com/tech-policy/20...t-to-keep-the-nsa-away-from-your-data
  3. Libre Projects: 139 open source hosted web services
    http://libreprojects.net/#favs=joindi...oud,openstreetmap,jamendo,cloud9,plos
    Tags: by M. Fioretti (2015-01-30)
  4. Imagine if we ran applications on our laptops the same way we run applications in our data centers. Each time we launched a web browser or text editor, we’d have to specify which CPU to use, which memory modules are addressable, which caches are available, and so on. Thankfully, our laptops have an operating system that abstracts us away from the complexities of manual resource management.

    In fact, we have operating systems for our workstations, servers, mainframes, supercomputers, and mobile devices, each optimized for their unique capabilities and form factors.

    We’ve already started treating the data center itself as one massive warehouse-scale computer. Yet, we still don’t have an operating system that abstracts and manages the hardware resources in the data center just like an operating system does on our laptops.
    It’s time for the data center OS

    What would an operating system for the data center look like?

    From an operator’s perspective it would span all of the machines in a data center (or cloud) and aggregate them into one giant pool of resources on which applications would be run. You would no longer configure specific machines for specific applications; all applications would be capable of running on any available resources from any machine, even if there are other applications already running on those machines.

    From a developer’s perspective, the data center operating system would act as an intermediary between applications and machines, providing common primitives to facilitate and simplify building distributed applications.

    The data center operating system would not need to replace Linux or any other host operating systems we use in our data centers today. The data center operating system would provide a software stack on top of the host operating system. Continuing to use the host operating system to provide standard execution environments is critical to immediately supporting existing applications.

    The data center operating system would provide functionality for the data center that is analogous to what a host operating system provides on a single machine today: namely, resource management and process isolation. Just like with a host operating system, a data center operating system would enable multiple users to execute multiple applications (made up of multiple processes) concurrently, across a shared collection of resources, with explicit isolation between those applications (a toy scheduling sketch follows this item).
    http://radar.oreilly.com/2014/12/why-...center-needs-an-operating-system.html
    Tags: by M. Fioretti (2014-12-04)
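    To make the analogy concrete, here is a toy Python sketch (all names and numbers invented) of the resource-management half of such a system: like an operating system placing processes on cores, it treats every machine as part of one pool and places each task on any machine with enough headroom, co-locating tasks from different applications.

        from dataclasses import dataclass, field

        @dataclass
        class Machine:
            name: str
            cpus: float
            mem_gb: float
            tasks: list = field(default_factory=list)

        @dataclass
        class Task:
            app: str        # which application (tenant) the task belongs to
            cpus: float
            mem_gb: float

        class PoolScheduler:
            """Toy sketch: the whole data center is one pool of resources."""
            def __init__(self, machines):
                self.machines = machines

            def submit(self, task):
                for m in self.machines:
                    free_cpu = m.cpus - sum(t.cpus for t in m.tasks)
                    free_mem = m.mem_gb - sum(t.mem_gb for t in m.tasks)
                    if free_cpu >= task.cpus and free_mem >= task.mem_gb:
                        m.tasks.append(task)  # co-locate with whatever already runs here
                        return m.name
                raise RuntimeError("no machine has enough free resources")

        cluster = PoolScheduler([Machine("node-1", 16, 64), Machine("node-2", 16, 64)])
        print(cluster.submit(Task("web", 4, 8)))          # -> node-1
        print(cluster.submit(Task("analytics", 14, 32)))  # -> node-2 (node-1 lacks CPU)

    A real data center operating system also provides the other half described above: strict isolation between those co-located tasks, typically by running each one in its own Linux container.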
  5. Docker is the hottest new idea in the world of cloud computing, a technology embraced by Silicon Valley’s elite engineers and backed by the industry’s biggest names, including Google, Amazon, and Microsoft. Based on technologies that have long powered Google’s own online empire, it promises to overhaul software development across the net, providing a simpler and more efficient means of building and operating websites and other massive online applications.

    But some of Docker’s earliest supporters now believe that the company behind the technology, also called Docker, has strayed from its original mission, and they’re exploring a new project that aims to rebuild this kind of technology from scratch.

    On Monday, a San Francisco startup called CoreOS unveiled an open source software project called Rocket, billing it as a Docker alternative that’s closer to what Docker was originally designed to be. “The original premise of Docker was that it was a tool that you would use to build a system,” says Alex Polvi, the CEO and co-founder of CoreOS, a company that has been one of Docker’s biggest supporters since the technology was first released early last year. “We think that still needs to exist…so we’re doing something about it.”
    http://www.wired.com/2014/12/google-o...s-next-big-thing/?mbid=social_twitter
  6. The idea of re-using waste server heat is not new, but German firm Cloud&Heat seems to have developed it further than most. For a flat installation fee, the company will install a rack of servers in your office, with its own power and Internet connection. Cloud&Heat then pays the bills and you get the heat. As well as Heat customers, the firm wants Cloud customers, who can buy a standard OpenStack-based cloud compute and storage service on the web. The company guarantees that data is encrypted and held within Germany — at any one of its Heat customers' premises. In principle, it's a way to build a data center with no real estate, by turning its waste heat into an asset. A similar deal is promised by French firm Qarnot.
    http://www.datacenterdynamics.com/foc...014/11/germans-get-free-heating-cloud
  7. Japan faces a critical shortage of radiologists. Although major hospitals are well equipped to conduct scans, the scarcity of experts to read them and give patients their diagnoses means that people, especially those in rural areas, often have to wait a long time to discover their results. This can have tragic consequences for people with serious conditions.

    To address this shortage and help people get accurate diagnoses faster, Medical Network Systems Inc. (MNES) in Hiroshima started running a remote diagnosis service in 2000. Rather than waiting for patients to come to hospitals, we bring the radiology equipment to them. This teleradiology service has helped combat the challenge of getting scanning technology to people in remote areas; however, we are still short on specialists that can read the scans, and we wanted to find ways to give access to patients in areas without specialists.
    http://googleforwork.blogspot.it/2014/10/radiology-in-cloud.html
  8. Red Hat is moving from Linux to OpenStack as its primary breadwinner.
    http://www.zdnet.com/red-hat-ceo-anno...-server-to-cloud-computing-7000033930
  9. If there’s one structural flaw that could cause Bitcoin to collapse from within, it’s the network’s vulnerability to what’s called a “51 percent attack.”

    The threat of a 51 percent attack was, until very recently, a theoretical problem that would only arise if one entity came to control more than half of the computing power being used to mine Bitcoin. In theory, with the majority of the network’s computing power, an entity could double-spend Bitcoin and engage in what’s called “selfish mining,” a process that would allow it to mine a disproportionately large share of new Bitcoin blocks.

    The first and most apparent problem with a mining pool that has more than 51 percent of the computing power of the entire network (hashing power, as it’s also called) is the potential for double-spending. The people trying to mine Bitcoin are the same ones tasked with auditing the network by confirming Bitcoin transactions.

    If you want to buy Bitcoin from me, first we settle on a price, then you ask the network to confirm that I haven’t already spent the Bitcoin I say I’m selling to you. When the network signs off on the confirmation, the transaction goes through and those who confirmed it receive a small transaction fee.

    So, basically, an entity that controls most of the mining power also controls most of the auditing power. If it chooses to act maliciously, that entity could potentially spend the same Bitcoin twice (a small simulation of this race follows this item).
    http://www.dailydot.com/business/bitcoin-51-percent-attack
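    The “auditing power” argument can be checked with a small simulation. The sketch below (parameters are illustrative) models the race as a biased random walk: an attacker secretly mines a competing chain and wins if it ever becomes longer than the honest chain, which already has a given number of confirmations on top of the payment being reversed.

        import random

        def double_spend_success(attacker_share, confirmations, trials=10_000):
            """Estimate how often a miner controlling `attacker_share` of the hash
            power out-mines the honest network and replaces a payment that already
            has `confirmations` blocks on top of it."""
            wins = 0
            for _ in range(trials):
                deficit = confirmations           # blocks the attacker is behind
                for _ in range(5_000):            # cap the race so it always ends
                    if random.random() < attacker_share:
                        deficit -= 1              # attacker finds the next block
                    else:
                        deficit += 1              # honest network finds it
                    if deficit < 0:               # attacker's chain is now longer
                        wins += 1
                        break
            return wins / trials

        for share in (0.30, 0.45, 0.51):
            print(f"{share:.0%} of hash power -> success in "
                  f"~{double_spend_success(share, 6):.1%} of attempts")

    Below one half, the success rate collapses with each extra confirmation; at 51 percent the walk drifts in the attacker’s favour, so given enough time the rewrite succeeds almost every time, which is why majority control is the structural threshold.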
  10. Docker is going to be an important tool for OpenStack administrators to familiarize themselves with, as it rises to stand beside traditional virtual machines in OpenStack clusters. Linux containers can be launched either independently through Heat, which opens up its configuration and orchestration options, or through Nova, which treats containers as just another type of hypervisor via a specialized driver. What works best for you will depend on your exact use case (a brief client-side sketch of the Nova path follows).
    http://www.opensource.com/business/14/6/docker-and-openstack
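    For the Nova path just mentioned, here is a rough client-side sketch. It assumes the nova-docker driver is configured on the compute hosts, that a Docker image has been registered in Glance, and it uses the legacy python-novaclient calling style; the credentials, endpoint, and image/flavor names are invented for illustration.

        from novaclient import client as nova_client

        # Assumed credentials and endpoint; in practice these come from your
        # OpenStack RC file. With the nova-docker driver, "booting a server"
        # from a Docker image stored in Glance actually starts a container.
        nova = nova_client.Client('2', 'demo', 'secret', 'demo',
                                  'http://controller:5000/v2.0')

        image = nova.images.find(name='mongo-docker')  # Docker image in Glance (assumed name)
        flavor = nova.flavors.find(name='m1.small')

        server = nova.servers.create(name='container-01', image=image, flavor=flavor)
        print(server.id)

    The Heat path instead describes the container in a template, so its startup can be orchestrated alongside the other resources in a stack.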
