mfioretti: software*

Bookmarks on this page are managed by an admin user.

57 bookmark(s)

  1. Italian proponents of the use of free and open source software by public administrations are protesting a decision by the town of Pesaro to switch from using OpenOffice to a proprietary cloud-based office solution. They say the city has garbled the cost calculations and omitted a required software assessment study.

    The move to the proprietary cloud-based office solution was announced in a press release by the vendor, published on 23 June. The vendor argues that the use of OpenOffice had resulted in higher than anticipated support costs and loss of productivity, and says that its office solution is cheaper.

    In December, the city published a tender, directly requesting licences of a proprietary, cloud-based office solution. According to city council documents, the request failed to get any response, after which Pesaro directly awarded a contract. The documents explain that this is allowed for amounts lower than EUR 40,000.

    At the same time, the city council was informed about problems related to the 2010 transition to OpenOffice, an alternative, open source suite of office productivity tools.

    Incomplete

    The city explains that this transition was never completed. Several users continued to use outdated versions of the proprietary office suite, resulting in a time-wasting mix of document formats. The city says OpenOffice was slow to open documents, particularly documents on the Internet. Pesaro also reports document interoperability problems, including text formatting issues and difficulties with spreadsheets and with links to a database system included in the proprietary office suite.

    These interoperability problems had caused “considerable inconvenience and loss of time”, Pesaro writes.

    Advocates of the use of free and open source software by public administrations decry the city’s decision. Pesaro has lost control over its infrastructure and is further locking itself into proprietary software, writes Paolo Vecchi, CEO of Omnis, a UK-based provider of IT services, in a report on Tech Economy, an Italian IT news site. A well-organised migration to LibreOffice, which is closely related to OpenOffice, would over time save Pesaro a great deal of money, he writes.

    “Pesaro invented the EUR 300,000 cost of OpenOffice,” says Vecchi. “They have the courage to say OpenOffice does not suit them, while ignoring the recommendations and the plans provided by the company that supported the software.”
    https://joinup.ec.europa.eu/community...ckle-town-pesaro#.VbnBMAj9qjE.twitter
    Voting 0
    Look at some of the key themes at MWC this year… 5G, for example. Many people see it as just another iteration in the 1G, 2G, 3G, 4G sequence, where what matters is the additional bandwidth for the end user. But behind the scenes a drastic redesign of the telco mobile network is underway, in which fixed-function networking equipment laid out in a static, predefined architecture is being replaced by mini data centres of generic servers whose function is responsive to the needs of the network. 5G is really about the software-defined telco network.

    Another key theme is IoT (Internet of Things). Many believe M2M (the ancestor of IoT) has been part of MWC since time immemorial, so why make a fuss about it all of a sudden? Once again the answer is software. M2M was simple, with unidirectional exchanges of data reflecting the simple nature of the software being run on M2M devices – images were sent down to a digital signage box and telemetry data was sent from an industrial gateway to a monitoring server. But today things are very different. The software run by all these devices has evolved drastically, which has changed the very simple nature of these exchanges. For example, as well as displaying advertisements, a digital signage screen might count the people who pass it or act as a wifi hotspot. IoT is really about software-defined smart devices.

    Autonomous cars, another big theme this year, are yet another example of the software-defined nature of things to come.
    https://insights.ubuntu.com/2017/02/2...es-software-defined-everything-matter
    Voting 0
  3. A modern processor could address every byte of data—whether in memory or storage—as if it were all one flat array. Disk storage would no longer be a separate entity but just another level in the memory hierarchy, turning what we now call main memory into a new form of cache. From the user’s point of view, all programs would be running all the time, and all documents would always be open.

    Is this notion of merging memory and storage an attractive prospect or a nightmare? I’m not sure. There are some huge potential problems. For safety and sanity we generally want to limit which programs can alter which documents. Those rules are enforced by the file system, and they would have to be re-engineered to work in the memory-mapped environment.

    Perhaps more troubling is the cognitive readjustment required by such a change in architecture. Do we really want everything at our fingertips all the time? I find it comforting to think of stored files as static objects, lying dormant on a disk drive, out of harm’s way; open documents, subject to change at any instant, require a higher level of alertness. I’m not sure I’m ready for a more fluid and frenetic world where documents are laid aside but never put away. But I probably said the same thing 30 years ago when I first confronted a machine capable of running multiple programs at once (anyone remember MultiFinder?).

    The dichotomy between temporary memory and permanent storage is certainly not something built into the human psyche. I’m reminded of this whenever I help a neophyte computer user. There’s always an incident like this:

    “I was writing a letter last night, and this morning I can’t find it. It’s gone.”

    “Did you save the file?”

    “Save it? From what? It was right there on the screen when I turned the machine off.”
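
    The flat-address idea above has a rough present-day analogue: memory-mapping a file, so that its bytes are read and written as if they were ordinary memory while the operating system shuttles pages between RAM and disk. Below is a minimal Python sketch of that idea using the standard mmap module; the file name is invented for the example.

        import mmap

        # Create a small file to experiment with (the name is hypothetical).
        with open("letter.txt", "wb") as f:
            f.write(b"Dear reader, thanks for writing." + b" " * 64)

        # Map the whole file into the process's address space. Its bytes are
        # now addressable like any in-memory buffer: no explicit read()/write()
        # calls; the OS pages data between RAM and disk as needed.
        with open("letter.txt", "r+b") as f:
            with mmap.mmap(f.fileno(), 0) as mm:
                print(mm[0:11])        # slice the "file" like a bytes object
                mm[5:11] = b"editor"   # in-place edit that ends up on disk
                mm.flush()             # ask the OS to persist the change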
    http://bit-player.org/2016/wheres-my-petabyte-disk-drive
    Voting 0
    For the Wright brothers, the patent struggle was a series of Pyrrhic victories. They wanted justice and credit, and ideally the freedom to pursue their research further. Instead they found themselves consumed by litigation, and forced to watch others catch up with and overtake their technical lead, particularly in Europe, where aeronautical research had strong state support. The endless legal battle over the airplane patent may even have contributed to Wilbur Wright's early death - he came down with typhoid at an especially rough patch in the legal proceedings, and died at age 45. His brother Orville lived long enough to see the Wright company taken over by Curtiss in 1929, in the most bitter of ironies. Neither brother made any substantive contribution to aviation after 1908.

    The United States government finally put an end to the patent strife in 1917. Mindful of the impending war, it insisted that the rival parties form a patent pool - in effect, removing patent barriers to creating new airplane designs. Together with the war, the patent pool inspired a golden age of American aviation. The pool stayed in effect until 1975; companies that wanted to preserve a competitive advantage did so using trade secrets (such as Boeing's secret recipe for hanging jet engines under an airliner wing).

    I believe that the Wright patent story drives home the intellectual bankruptcy of our patent system. The whole point of patents is supposed to be to encourage innovation, reward entrepreneurship, and make sure useful inventions get widely disseminated. But in this case (and in countless others, in other fields), the practical effect of patents turned out to be to hinder innovation - a patent war erupts, and ends up hamstringing truly innovative technologies, all without doing much for the inventors, who weren't motivated by money in the first place.

    It's illuminating to point out that all three transformative technologies of the twentieth century - aviation, the automobile, and the digital computer - started off in patent battles and required a voluntary suspension of hostilities (a collective decision to ignore patents) before the technology could truly take hold.
    http://idlewords.com/2003/12/100_years_of_turbulence.htm
    Voting 0
    The big problem we face isn’t coordinated cyber-terrorism, it’s that software sucks. Software sucks for many reasons, all of which go deep, are entangled, and are expensive to fix. (Or: everything is broken, eventually.) This is a major headache, and a real worry as software eats more and more of the world.

    There is a lot of interest, and boondoggle money, in exaggerating the “cyber-terrorism” threat (which is not unreal, but making software better would help far more than anything devoted solely to “cyber-terrorism” — but, hey, you know which buzzword gets the funding), and not much interest in spending real money on fixing the boring but important problems with the software infrastructure. This is partly the lack of attention to preventive spending that plagues so many issues (Hello, Amtrak’s ailing rails!), but it’s also because lousy software allows … easier spying. And everyone is busy spying on everyone else, and the US government, perhaps best placed to take a path towards making software more secure, appears to have chosen that path as well. I believe this is a major mistake in the long run, but here we are.

    * * *

    I’m actually more scared by this state of events than I would’ve been by a one-off hacking event that took down the NYSE. Software is eating the world, and the spread of networked devices through the “internet of things” is only going to accelerate this. Our dominant operating systems, our ways of working, and our common approach to developing, auditing and debugging software, and to spending (or not spending) money on its maintenance, have not yet reached the requirements of the 21st century.
    https://medium.com/message/why-the-gr...uly-8th-should-scare-you-b791002fff03
    Voting 0
  6. Back in elder days, Ubuntu and most other Debian-based Linux distributions shipped with Synaptic, a graphical frontend for installing and removing applications through the Debian package management system. (Most of Ubuntu's core code is derived from Debian Linux, which is why they share the same system for adding and removing software.)

    Then, in 2009, Canonical announced plans to replace Synaptic with an app of its own making called Ubuntu Software Center—which the company at first tried to name the Ubuntu Software Store, to the dismay of many users. The Software Center did most of the same things as Synaptic, but it also offered ways for Canonical to promote certain apps to users, including some that were available for purchase.

    Fast forward to the present, and Canonical has announced that it will no longer be maintaining the Software Center. "The deb-based store [read: center; apparently the "store" terminology dies hard] has continued to be a huge problem over time and in fact it has been increasingly expensive to keep running," according to one representative. Another indicated that, going forward, the "resources that were initially allocated to the classic desktop" will support "building the vision of the mobile store, initially released for the phone."

    For most desktop users, none of this is likely to matter too much. Synaptic and other graphical front ends for adding and removing programs on desktop versions of Ubuntu remain available.

    But the bigger item of note here—and what Canonical has not yet said in an entirely explicit way—is that Ubuntu developers appear poised to move further away from the Debian-based package management system as a whole. Instead, they'll be focusing on Snappy, which uses a separate, transactionally updated software-management platform.
    http://thevarguy.com/open-source-appl...us-mobile-apps-away-desktop-ubuntu-so
    Voting 0
  7. The red flags and marching songs of Syriza during the Greek crisis, plus the expectation that the banks would be nationalised, revived briefly a 20th-century dream: the forced destruction of the market from above. For much of the 20th century this was how the left conceived the first stage of an economy beyond capitalism. The force would be applied by the working class, either at the ballot box or on the barricades. The lever would be the state. The opportunity would come through frequent episodes of economic collapse.

    Instead over the past 25 years it has been the left’s project that has collapsed. The market destroyed the plan; individualism replaced collectivism and solidarity; the hugely expanded workforce of the world looks like a “proletariat”, but no longer thinks or behaves as it once did.

    If you lived through all this, and disliked capitalism, it was traumatic. But in the process technology has created a new route out, which the remnants of the old left – and all other forces influenced by it – have either to embrace or die. Capitalism, it turns out, will not be abolished by forced-march techniques. It will be abolished by creating something more dynamic that exists, at first, almost unseen within the old system, but which will break through, reshaping the economy around new values and behaviours. I call this postcapitalism.

    As with the end of feudalism 500 years ago, capitalism’s replacement by postcapitalism will be accelerated by external shocks and shaped by the emergence of a new kind of human being. And it has started.

    Postcapitalism is possible because of three major changes information technology has brought about in the past 25 years. First, it has reduced the need for work, blurred the edges between work and free time and loosened the relationship between work and wages. The coming wave of automation, currently stalled because our social infrastructure cannot bear the consequences, will hugely diminish the amount of work needed – not just to subsist but to provide a decent life for all.

    Second, information is corroding the market’s ability to form prices correctly. That is because markets are based on scarcity while information is abundant. The system’s defence mechanism is to form monopolies – the giant tech companies – on a scale not seen in the past 200 years, yet they cannot last. By building business models and share valuations based on the capture and privatisation of all socially produced information, such firms are constructing a fragile corporate edifice at odds with the most basic need of humanity, which is to use ideas freely.

    Third, we’re seeing the spontaneous rise of collaborative production: goods, services and organisations are appearing that no longer respond to the dictates of the market and the managerial hierarchy.

    New forms of ownership, new forms of lending, new legal contracts: a whole business subculture has emerged over the past 10 years, which the media has dubbed the “sharing economy”. Buzzwords such as the “commons” and “peer-production” are thrown around, but few have bothered to ask what this development means for capitalism itself.

    I believe it offers an escape route – but only if these micro-level projects are nurtured, promoted and protected by a fundamental change in what governments do. And this must be driven by a change in our thinking – about technology, ownership and work. So that, when we create the elements of the new system, we can say to ourselves, and to others: “This is no longer simply my survival mechanism, my bolt hole from the neoliberal world; this is a new way of living in the process of formation.”

    Even now many people fail to grasp the true meaning of the word “austerity”. Austerity is not eight years of spending cuts, as in the UK, or even the social catastrophe inflicted on Greece. It means driving the wages, social wages and living standards in the west down for decades until they meet those of the middle class in China and India on the way up.

    Meanwhile in the absence of any alternative model, the conditions for another crisis are being assembled. Real wages have fallen or remained stagnant in Japan, the southern Eurozone, the US and UK. The shadow banking system has been reassembled, and is now bigger than it was in 2008. New rules demanding banks hold more reserves have been watered down or delayed. Meanwhile, flushed with free money, the 1% has got richer.

    Neoliberalism, then, has morphed into a system programmed to inflict recurrent catastrophic failures. Worse than that, it has broken the 200-year pattern of industrial capitalism wherein an economic crisis spurs new forms of technological innovation that benefit everybody.

    That is because neoliberalism was the first economic model in 200 years the upswing of which was premised on the suppression of wages and smashing the social power and resilience of the working class. If we review the take-off periods studied by long-cycle theorists – the 1850s in Europe, the 1900s and 1950s across the globe – it was the strength of organised labour that forced entrepreneurs and corporations to stop trying to revive outdated business models through wage cuts, and to innovate their way to a new form of capitalism.

    The result is that, in each upswing, we find a synthesis of automation, higher wages and higher-value consumption. Today there is no pressure from the workforce, and the technology at the centre of this innovation wave does not demand the creation of higher-consumer spending, or the re‑employment of the old workforce in new jobs. Information is a machine for grinding the price of things lower and slashing the work time needed to support life on the planet.

    ... the banking system, the planning system and late neoliberal culture reward above all the creator of low-value, long-hours jobs.

    Innovation is happening but it has not, so far, triggered the fifth long upswing for capitalism that long-cycle theory would expect. The reasons lie in the specific nature of information technology.

    In the 1990s economists and technologists began to have the same thought at once: that this new role for information was creating a new, “third” kind of capitalism – as different from industrial capitalism as industrial capitalism was from the merchant and slave capitalism of the 17th and 18th centuries. But they have struggled to describe the dynamics of the new “cognitive” capitalism. And for a reason. Its dynamics are profoundly non-capitalist.

    If we restate Arrow’s principle in reverse, its revolutionary implications are obvious: if a free market economy plus intellectual property leads to the “underutilisation of information”, then an economy based on the full utilisation of information cannot tolerate the free market or absolute intellectual property rights. The business models of all our modern digital giants are designed to prevent the abundance of information.

    I’ve surveyed the attempts by economists and business gurus to build a framework to understand the dynamics of an economy based on abundant, socially-held information. But it was actually imagined by one 19th-century economist in the era of the telegraph and the steam engine. His name? Karl Marx.

    ...

    The scene is Kentish Town, London, February 1858, sometime around 4am. Marx is a wanted man in Germany and is hard at work scribbling thought-experiments and notes-to-self. When they finally get to see what Marx is writing on this night, the left intellectuals of the 1960s will admit that it “challenges every serious interpretation of Marx yet conceived”. It is called “The Fragment on Machines”.

    In the “Fragment” Marx imagines an economy in which the main role of machines is to produce, and the main role of people is to supervise them. He was clear that, in such an economy, the main productive force would be information. The productive power of such machines as the automated cotton-spinning machine, the telegraph and the steam locomotive did not depend on the amount of labour it took to produce them but on the state of social knowledge. Organisation and knowledge, in other words, made a bigger contribution to productive power than the work of making and running the machines.

    Given what Marxism was to become – a theory of exploitation based on the theft of labour time – this is a revolutionary statement.
    http://www.theguardian.com/books/2015...-of-capitalism-begun?CMP=share_btn_tw
    Voting 0
    The end-of-work argument has often been dismissed as the “Luddite fallacy,” an allusion to the 19th-century British brutes who smashed textile-making machines at the dawn of the industrial revolution, fearing the machines would put hand-weavers out of work. But some of the most sober economists are beginning to worry that the Luddites weren’t wrong, just premature. When former Treasury Secretary Lawrence Summers was an MIT undergraduate in the early 1970s, many economists disdained “the stupid people [who] thought that automation was going to make all the jobs go away,” he said at the National Bureau of Economic Research Summer Institute in July 2013. “Until a few years ago, I didn’t think this was a very complicated subject: the Luddites were wrong, and the believers in technology and technological progress were right. I’m not so completely certain now.”

    2. Reasons to Cry Robot

    What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.

    The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century.

    A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software.

    In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. Rather, it shows how easily computers can encroach on areas previously considered “for humans only.”

    After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.

    Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.

    Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

    I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.

    Work is really three things, says Peter Frase, the author of Four Futures, a forthcoming book about how automation will change America: the means by which the economy produces goods, the means by which people earn income, and an activity that lends meaning or purpose to many people’s lives. “We tend to conflate these things,” he told me, “because today we need to pay people to keep the lights on, so to speak. But in a future of abundance, you wouldn’t, and we ought to think about ways to make it easier and better to not be employed.”

    Hunnicutt’s vision rests on certain assumptions about taxation and redistribution that might not be congenial to many Americans today. But even leaving that aside for the moment, this vision is problematic: it doesn’t resemble the world as it is currently experienced by most jobless people. By and large, the jobless don’t spend their downtime socializing with friends or taking up new hobbies. Instead, they watch TV or sleep. Time-use surveys show that jobless prime-age people dedicate some of the time once spent working to cleaning and childcare. But men in particular devote most of their free time to leisure, the lion’s share of which is spent watching television, browsing the Internet, and sleeping. Retired seniors watch about 50 hours of television a week, according to Nielsen. That means they spend a majority of their lives either sleeping or sitting on the sofa looking at a flatscreen. The unemployed theoretically have the most time to socialize, and yet studies have shown that they feel the most social isolation; it is surprisingly hard to replace the camaraderie of the water cooler.
    http://www.theatlantic.com/magazine/a...ive/2015/07/world-without-work/395294
    Voting 0
    “The share of women in computer science started falling at roughly the same moment when personal computers started showing up in U.S. homes in significant numbers.”
    http://jaxenter.com/when-women-stopped-programming-111998.html
    Voting 0
  10. The point of the Watch is actions, not apps. No, it’s not that hard to pull your phone out of your pocket. But once you do, a sea of choices and distractions opens before you. The Watch usefully limits those choices to what makes sense to do in the moment. It’s your phone’s notification screen ported to your wrist.

    And as on the phone itself, those interactive notifications are making apps as we traditionally think of them a less prominent part of the user experience—a trend likely to march forward whether smartwatches take off or not. Apps move into the background to support the actions they enable on screens further up the stack—the phone’s lock screen, for example. Or a Watch.

    While PC software is for doing things on PCs, software on mobile devices is for doing things in the world. The usefulness of mobile devices depends on context; what we do with them depends on where we are. The more efficient the interaction between that software and our environment, the more useful the device.

    The great thing for users is that the Watch will impose a new rigor on developers seeking to justify their apps’ existence.
    http://www.wired.com/2015/03/apple-wa...pps-afterthought/?mbid=social_twitter
    Voting 0

