mfioretti: rms

Bookmarks on this page are managed by an admin user.

25 bookmark(s)

  1. Among the 30% of respondents who said they did not think things would turn out well in the future were those who said the trajectory of technology will overwhelm labor markets, killing more jobs than it creates. They foresee a society where AI programs and machines do most of the work, and they raise questions about people’s sense of identity, the socio-economic divisions that already distress them, their ability to pay for basic needs, their ability to use the growing amount of “leisure time” constructively and the impact of all of this on economic systems. It should also be noted that many among the 70% who expect positive change in the next decade also expressed some of these concerns.

    Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, commented, “I think this question has no answer. I think there won’t be jobs for most people a few decades from now, and that’s what really matters. As for the skills for the employed fraction of advanced countries, I think they will be difficult to teach. You could get better at them by practice, but you couldn’t study them much.”
    http://www.pewinternet.org/2017/05/03/the-future-of-jobs-and-jobs-training
  2. the influence of the FSF seems to have lessened since 2007, when the third version of the GNU General Public License was released without a general consensus among the stakeholders consulted. In addition, the FSF's failure to strongly address licensing issues on Android or in cloud computing has reduced its authority. Perhaps, as John Sullivan, the FSF's executive director, suggests, this decline is largely a matter of perception, but in that case increased publicity could, in itself, be a priority.

    Still, from any perspective, the survey matters. The answers the FSF receives, and how it responds to them, could easily determine not only whether the FSF still exists thirty years from now, but whether it remains a strong voice in the next five years.
    http://www.datamation.com/open-source...-foundation-your-input-requested.html
    by M. Fioretti (2016-01-19)
  3. it is, in fact, a questionnaire about one's own impression of the FSF's activity and role, of its primary goals and methods, with several open questions and room to comment and elaborate.

    What surprised me even more is that, as an entirely personal impression, the questions and the possible answers (where the answers were closed-ended) convey a sense of embarrassment, of guilt, and of full awareness of not being a universally loved and supported organization. One does not necessarily have to be an unruly gossip (like me) to observe objectively that the EFF, Wikipedia, or even Mozilla, other organizations devoted to promoting digital rights and activism, have followings one or two orders of magnitude larger than the FSF's, which over the years, in the name of values too abstract to be credible, has progressively isolated itself and entrenched itself ever more in positions that harm its own mission more than they help it.

    For the sake of sharing, I reproduce below the questions and my answers to the questionnaire, all translated into Italian, supplemented with links to further reading and, where appropriate, additional comments.
    https://madbob.wordpress.com/2016/01/09/chiedimi-se-sono-felice
    by M. Fioretti (2016-01-11)
  4. In the very first response to Torvalds's post, a user wrote to express interest in the Swedish-speaking Finnish grad student's work, noting that, in contrast to Minix, the prevailing Unix-like OS for personal computers before the advent of Linux, Torvalds's new OS was going to be free. Again, the word meant "no-cost," not "open source."

    The mantra about sharing source code did not develop until later in Linux's history. And the arguments that are familiar to open source supporters today coalesced only in the late 1990s, when Raymond and other collaborators officially launched a campaign to promote "open source"—a term which, by the way, was not even invented until 1998, long after Linux's founding.

    It's also telling that, initially, Torvalds released Linux under a license that simply prevented users from making money off of it. It wasn't until later that he adopted the GNU General Public License, or GPL, that Stallman and his cohort had created to keep software code publicly accessible, regardless of whether it had a commercial use.

    The story runs deeper than this. There's a lot to say, too, about how Stallman's GNU movement was, in the first place, a reaction to the commercialization of software more than to concerns over the shareability of code. And about how the early hackers who supposedly gave rise to the open source movement had, in the 1970s, happily hacked away on proprietary Unix platforms whose source code was not available to them. (The fact that the proprietary Unixes failed to work well on the cheap personal computers that hit the market circa 1990 was the major impetus for people such as Torvalds to build a new type of Unix; for them, the issue was keeping Unix affordable by making it work on low-cost computers, not creating open code.)
    http://thevarguy.com/open-source-appl...s-about-saving-money-not-sharing-code
  5. The most effective way to push for published hardware designs to be free is through rules in the repositories where they are published. Repository operators should place the freedom of the people who will use the designs above the preferences of people who make the designs. This means requiring designs of useful objects to be free, as a condition for posting them.

    For decorative objects, that argument does not apply, so we don’t have to insist they must be free. However, we should insist that they be sharable. Thus, a repository that handles both decorative object models and functional ones should have an appropriate license policy for each category. (For digital designs, I suggest that the repository insist on GNU GPL v3-or-later. For functional 3-D designs, the repository should ask the design’s author to choose one of four licenses: GNU GPL v3-or-later, CC-SA, CC-BY or CC-0. For decorative designs, it should allow any of the CC licenses, or GNU GPL v3-or-later.)
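
    To make that policy concrete, here is a minimal sketch of how a repository could enforce it at upload time, assuming the three categories and the license shorthand from the excerpt above; the category names, the abbreviated CC list, and the check_upload() helper are all hypothetical, not any real repository's API.

        # Hypothetical per-category license policy, following the excerpt
        # above. License identifiers use the excerpt's own shorthand (not
        # SPDX), and the CC list is abbreviated for brevity.
        ANY_CC = {"CC-SA", "CC-BY", "CC-0", "CC-BY-NC", "CC-BY-ND"}
        ALLOWED = {
            "digital":    {"GPL-v3-or-later"},
            "functional": {"GPL-v3-or-later", "CC-SA", "CC-BY", "CC-0"},
            "decorative": ANY_CC | {"GPL-v3-or-later"},
        }

        def check_upload(category: str, license_id: str) -> bool:
            """Return True if the chosen license satisfies the category's policy."""
            return license_id in ALLOWED.get(category, set())

        assert check_upload("functional", "CC-BY")
        assert not check_upload("functional", "CC-BY-NC")  # non-free: rejected
        assert check_upload("decorative", "CC-BY-NC")      # sharable: accepted
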
    http://www.wired.com/2015/03/richard-...man-how-to-make-hardware-designs-free
  6. Stallman recognises his own stubbornness. Back in 1999 he told Michael Gross that when he began the GNU project people said, “‘Oh, this is an infinitely hard job; you can’t possibly write a whole system like Unix. It would be nice, but it’s just hopeless.’ That’s what they said. And I said, ‘I’m going to do it anyway.’ This is where I am great. I am great at being very, very stubborn and ignoring reasons why you should change your goal, reasons that many other people will be susceptible to. Many people want to be on the winning side. I didn’t give a damn about that. I wanted to be on the side that was right, and even if I didn’t win, at least I was going to give it a try.”
    http://www.linuxuser.co.uk/features/pulling-the-plug
    by M. Fioretti (2015-02-26)
  7. A larger and subtler change, the one easiest to forget, is how dependent we were on proprietary technology and closed-source software in those days. Today’s hacker culture is very strongly identified with open-source development by both insiders and outsiders (and, of course, I bear some of the responsibility for that). But it wasn’t always like that. Before the rise of Linux and the *BSD systems around 1990 we were tied to a lot of software we usually didn’t have the source code for.

    Part of the reason many of us tend to forget this is mythmaking by the Free Software Foundation. They would have it that there was a lost Eden of free software sharing that was crushed by commercialization in the late 1970s and early 1980s. This narrative projects Richard Stallman’s history at the MIT AI Lab on the rest of the world. But, almost everywhere else, it wasn’t like that either.

    One of the few other places it was almost like that was early Unix development from 1976-1984. They really did have something recognizably like today’s open-source culture, though much smaller in scale and with communications links that were very slow and expensive by today’s standards. I was there during the end of that beginning, the last few years before AT&T’s failed attempt to lock down and commercialize Unix in 1984.

    But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!

    Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.

    But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.

    We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
    http://esr.ibiblio.org/?p=5277
  8. Free software is built on a paradox. In order to give freedom to users, free software licences use something that takes away freedom – copyright, which is an intellectual monopoly based on limiting people's freedom to share, not enlarging it. That was a brilliant hack when Richard Stallman first came up with it in 1985, with the GNU Emacs General Public Licence, but maybe now it's time to move on.

    There are signs of that happening already. Eighteen months ago, people started noting the decline of copyleft licences in favour of more "permissive" ones like Apache and BSD. More recently, the rise of GitHub has attracted attention, as has the fact that people there increasingly don't specify a licence at all (which is somewhat problematic).

    I don't think this declining use of copyleft licences is a sign of failure – on the contrary. As I wrote in my previous column, free software has essentially won, taking over most key computing sectors. Similarly, the move to "permissive" licences has only been possible because of the success of copyleft: the ideas behind collaborative creation and contributing back to a project are now so pervasive that we don't require "strong" copyleft licences to enforce them – it's part of coders' mental DNA.
    http://www.h-online.com/open/features...ing-open-source-licences-1802140.html
  9. Stallman had on offer something far more precise and revolutionary: a way to think about the freedoms of individual users in specific contexts, as if the well-being of the mega-platform were of secondary importance. But that vision never came to pass. Instead, public advocacy efforts were channeled into preserving an abstract and reified configuration of digital technologies—“the Internet”—so that Silicon Valley could continue making money by hoovering up our private data.

    What unites most of these papers is a shared background assumption that, thanks to the coming of Web 2.0, we are living through unique historical circumstances. Except that there was no coming of Web 2.0—it was just a way to sell a technology conference to a public badly burned by the dotcom crash. Why anyone dealing with stress management or Wittgenstein would be moved by the logistics of conference organizing is a mystery.

    O’Reilly himself pioneered this 2.0-ification of public discourse, aggressively reinterpreting trends that had been happening for decades through the prism of Internet history—a move that presented all those trends as just a logical consequence of the Web 2.0 revolution.

    There was way too much craziness and bad science in Korzybski’s theories for him to be treated as a serious thinker, but his basic question—as Postman put it, “What are the characteristics of language which lead people into making false evaluations of the world around them?”—still remains relevant today.

    Tim O’Reilly is, perhaps, the most high-profile follower of Korzybski’s theories today. O’Reilly was introduced to Korzybski’s thought as a teenager while working with a strange man called George Simon in the midst of California’s counterculture of the early 1970s. O’Reilly and Simon were coteaching workshops at the Esalen Institute—then a hotbed of the “human potential movement” that sought to tap the hidden potential of its followers and increase their happiness. Bridging Korzybski’s philosophy with Sri Aurobindo’s integral yoga, Simon had an immense influence on the young O’Reilly. Simon’s rereading of general semantics, noted O’Reilly in 2004, “gave me a grounding in how to see people, and to acknowledge what I saw, that is the bedrock of my personal philosophy to this day.”

    O’Reilly, of course, sees his role differently, claiming that all he wants is to make us aware of what earlier commentators may have overlooked. “A metaphor is just that: a way of framing the issues such that people can see something they might otherwise miss,” he wrote in response to a critic who accused him of linguistic incontinence. But Korzybski’s point, if fully absorbed, is that a metaphor is primarily a way of framing issues such that we don’t see something we might otherwise see.

    In a fascinating essay published in 2000, O’Reilly sheds some light on his modus operandi. The thinker who emerges there is very much at odds with the spirit of objectivity that O’Reilly seeks to cultivate in public. That essay, in fact, is a revealing ode to what O’Reilly dubs “meme-engineering”: “Just as gene engineering allows us to artificially shape genes, meme-engineering lets us organize and shape ideas so that they can be transmitted more effectively, and have the desired effect once they are transmitted.” In a move worthy of Frank Luntz, O’Reilly meme-engineers a nice euphemism—“meme-engineering”—to describe what has previously been known as “propaganda.”

    “A big part of meme engineering is giving a name that creates a big tent that a lot of people want to be under, a train that takes a lot of people where they want to go,” writes O’Reilly.

    So what are we to make of O’Reilly’s exhortation that “it’s a trap for outsiders to think that Government 2.0 is a way to use new technology to amplify the voices of citizens to influence those in power”? We might think that the hallmark of successful participatory reforms would be enabling citizens to “influence those in power.” There’s a very explicit depoliticization of participation at work here. O’Reilly wants to redefine participation from something that arises from shared grievances and aims at structural reforms to something that arises from individual frustration with bureaucracies and usually ends with citizens using or building apps to solve their own problems.

    There is nothing “collective” about Amazon’s distributed intelligence; it’s just a bunch of individual users acting on their own.

    As a result, once-lively debates about the content and meaning of specific reforms and institutions are replaced by governments calling on their citizens to help find spelling mistakes in patent applications or use their phones to report potholes. If Participation 1.0 was about the use of public reason to push for political reforms, with groups of concerned citizens coalescing around some vague notion of the shared public good, Participation 2.0 is about atomized individuals finding or contributing the right data to solve some problem without creating any disturbances in the system itself.

    The real question is not whether developers should be able to submit apps to the App Store, but whether citizens should be paying for the apps or counting on the government to provide these services. To push for the platform metaphor as the primary way of thinking about the distribution of responsibilities between the private and the public sectors is to push for the economic-innovative dimension of Gov 2.0—and ensure that the private sector always emerges victorious.


    Once we follow O’Reilly’s exhortation not to treat the government as “the deus ex machina that we’ve paid to do for us what we could be doing for ourselves,” such questions are hard to avoid. In all of O’Reilly’s theorizing, there’s not a hint as to what political and moral principles should guide us in applying the platform model. Whatever those principles are, they are certainly not exhausted by appeals to innovation and efficiency—which is the language that O’Reilly wants us to speak.

    At least O’Reilly is perfectly clear about how people can succeed in the future. Toward the end of his Long Now Foundation talk, he admits that

    the future of collective intelligence applications is a future in which the individual that we prize so highly actually has less power—except to the extent that that individual is able to create new mind storms. . . . How will we influence this global brain? The way we’ll influence it is seen in the way that people create these viral storms . . . . We’re going to start getting good at that. People will be able to command vast amounts of attention and direct large groups of people through new mechanisms.

    Yes, let that thought sink in: our Mindstormer-in-Chief is telling us that the only way to succeed in this brave new world is to become a Tim O’Reilly. Anyone fancy an O’Reilly manual on meme hustling?
    http://thebaffler.com/past/the_meme_hustler
  10. It is clear that the larger companies get, the harder it is to enforce antitrust laws against them. Yet, a business-friendly government can vitiate the law simply by launching no antitrust cases – as the Bush administration did.

    When the government wins such a suit, the court splits up the company to remedy the specific anti-competitive behavior proved. It can’t split the company into 50 parts just to ensure they are all small enough. We can’t fix the problem of too-big-to-fail companies this way.

    I propose another method – one that can be applied to all companies. It works through taxes. There will be no need to sue companies and split them up – because they will split themselves up.

    The method is simple: a progressive tax on businesses. We tax a company’s gross income, with a tax rate that increases as the company gets bigger. Companies would be able to reduce their tax rates by splitting themselves up.

    With this incentive, over time many companies will likely get smaller. They could subdivide in ways they consider most efficient – rather than as decided by a court. We can adjust the strength of the incentive by adjusting the tax rates. If too few companies split, we can turn up the heat.
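
    A toy calculation makes the incentive concrete. The bracket thresholds and rates below are entirely hypothetical (the proposal names no numbers); the point is only that when the marginal rate rises with gross income, one large company pays more tax than the same revenue split across several independent firms.

        # Hypothetical progressive tax on gross income: the marginal rate
        # rises with size, so splitting up lowers the total tax bill.
        BRACKETS = [  # (gross-income threshold in $, marginal rate)
            (0,      0.02),
            (1e9,    0.05),
            (10e9,   0.10),
            (100e9,  0.20),
        ]

        def tax(gross: float) -> float:
            """Tax owed on `gross` under the illustrative brackets above."""
            thresholds = [lo for lo, _ in BRACKETS] + [float("inf")]
            total = 0.0
            for (lo, rate), hi in zip(BRACKETS, thresholds[1:]):
                if gross > lo:
                    total += (min(gross, hi) - lo) * rate
            return total

        whole = tax(200e9)      # one company with $200B gross income
        split = 4 * tax(50e9)   # the same revenue as four $50B companies
        assert split < whole    # splitting by itself reduces the tax owed
        print(f"combined: ${whole / 1e9:.2f}B vs split: ${split / 1e9:.2f}B")
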
    http://blogs.reuters.com/great-debate/2013/02/04/fixing-too-big-to-fail


Page 1 of 3 - Online Bookmarks of M. Fioretti: Tags: rms
