mfioretti: algorithms* + solutionism*


17 bookmark(s)

  1. The point is not that making a world to accommodate oneself is bad, but that when one has as much power over the rest of the world as the tech sector does, over folks who don’t naturally share its worldview, then there is a risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible—do the math, and there’s the future.

    We’ve gotten used to service personnel and staff who have no interest or participation in the businesses where they work. They have no incentive to make the products or the services better. This is a long legacy of the assembly line, standardising, franchising and other practices that increase efficiency and lower costs. It’s a small step then from a worker who doesn’t care to a robot. To consumers, it doesn’t seem like a big loss.

    Those who oversee the AI and robots will, not coincidentally, make a lot of money as this trend towards less human interaction continues and accelerates—as many of the products produced above are hugely and addictively convenient. Google, Facebook and other companies are powerful and yes, innovative, but the innovation curiously seems to have had an invisible trajectory. Our imaginations are constrained by who and what we are. We are biased in our drives, which in some ways is good, but maybe some diversity in what influences the world might be reasonable and may be beneficial to all.

    To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that our responses, often prodded by an emotion, will more likely than not offer the best way to deal with a situation.

    Neuroscientist Antonio Damasio wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that though we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

    With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents and unexpected connections and intuitions. Interaction, cooperation and collaboration with others multiplies those opportunities.

    We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book Sapiens, Yuval Noah Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by our capacity to believe in “fictions” such as nations, money, religions and legal institutions. Machines don’t believe in fictions, or not yet anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. If less human interaction enables us to forget how to cooperate, then we lose our advantage.

    Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we as individuals are inhabitants of networks, we are relationships. That is how we prosper and thrive.
    http://davidbyrne.com/journal/eliminating-the-human
  2. That idea of efficiency through speed promoted by the tech industry has consequences for society. First, the immediacy of communications creates moments of intense information overload and distraction. As in other major revolutions in information technology, people are racing to adapt to the increasing pace of information exchange. In the Big Now, the pool of instantaneous information has dramatically increased; the pool of available understanding of what that information means has not. People and organizations are still seeking practices and means to filter, categorize and prioritize information in a world obsessed with the production and consumption of the freshest data points (see Social media at human pace). In doing so, they exercise almost exclusively their capacity to fast-check status updates and leave their ability for reflection unstimulated (see, in French, L’écologie de l’attention). The Big Now is not designed for people to step back and understand information in a bigger context (e.g. the poor debates in the recent US elections, or the inability to foresee the 2008 economic crisis). It is only recently that alternatives have started to emerge. For example, the recent strategic changes at Medium propose to reverse the tendency:

    “We believe people who write and share ideas should be rewarded on their ability to enlighten and inform, not simply their ability to attract a few seconds of attention”.

    Secondly, the asynchronous Internet has blurred the boundaries between work, family and leisure. In response, the tech world proposes to ‘hack’ time and to remove frictions (e.g. the Soylent diet) to free up time. Flourishing personal-productivity books and apps promise peace of mind with time-management advice tailored to the era of connected devices (see The global village and its discomfort). However, just as building bigger roads makes traffic worse, many of these solutions only provide a quick fix that induces even busier and more stressed lifestyles (see Why time management is ruining our lives). In the Big Now and its cybernetic loops, the more efficient we get at doing things and the more data we generate, the faster the Internet gets back to us, keeps us busy and grabs our limited amount of attention. Despite the promises of time-compression technologies to save us valuable time and free us for life’s important things, leisure time has remained roughly constant over the past half-century (see Fast-world values).

    Try to imagine another version of the Internet in which the sense of simultaneity that Adam Greenfield described moves to the background of our lives and leaves the stage to temporal depth and quality. Connecting people to share and collaborate has been a wonderful thing. Today, I believe that giving us the time to think will be even better (see The collaboration curse). As an illustration, regardless of current methodological trends, creativity rarely emerges rapidly. Many ideas need time to mature; they need different contexts or mindsets to get stronger. This does not often happen when teams are in ‘sprints’ or a young start-up feels under the gun in its ‘incubator’. I participated in ‘start-up accelerator’ mentoring sessions in which I advised young entrepreneurs to step back and consider whether their objectives were really about speed and scale. Many of them were lured by the Silicon Valley unicorn fantasy. Not surprisingly, the first startup decelerator program has now been created, and Socratic design workshops are becoming a thing for tech executives to reconsider what’s important.
    https://medium.com/@girardin/after-the-big-now-f0a3f1857294
  3. At the end of the twentieth century, the long predicted convergence of the media, computing and telecommunications into hypermedia is finally happening. 2 » Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labour is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts. When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. New industries will be born and current stock market favourites will be swept away. At such moments of profound social change, anyone who can offer a simple explanation of what is happening will be listened to with great interest. At this crucial juncture, a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA have succeeded in defining a heterogeneous orthodoxy for the coming information age: the Californian Ideology.

    This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. As usual, Europeans have not been slow in copying the latest fad from America. While a recent EU Commission report recommends following the Californian free market model for building the information superhighway, cutting-edge artists and academics eagerly imitate the posthuman philosophers of the West Coast’s Extropian cult. 3 » With no obvious rivals, the triumph of the Californian Ideology appears to be complete.

    The widespread appeal of these West Coast ideologues isn’t simply the result of their infectious optimism. Above all, they are passionate advocates of what appears to be an impeccably libertarian form of politics – they want information technologies to be used to create a new ‘Jeffersonian democracy’ where all individuals will be able to express themselves freely within cyberspace. 4 » However, by championing this seemingly admirable ideal, these techno-boosters are at the same time reproducing some of the most atavistic features of American society, especially those derived from the bitter legacy of slavery. Their utopian vision of California depends upon a wilful blindness towards the other – much less positive – features of life on the West Coast: racism, poverty and environmental degradation. 5 » Ironically, in the not too distant past, the intellectuals and artists of the Bay Area were passionately concerned about these issues.
    http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2
  4. The reason I bring this up: first of all, it’s a great way of understanding how machine learning algorithms can give us stuff we absolutely don’t want, even though they fundamentally lack prior agendas. Happens all the time, in ways similar to the Donald.
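    The point that an algorithm with no prior agenda can still reproduce unwanted outcomes can be made concrete with a toy model. The sketch below is hypothetical: the hiring records, group names and numbers are all invented, and the "model" is just the observed hiring rate per group. A perfectly "rational" learner that optimizes against biased history simply reproduces the bias.

```python
# Hypothetical sketch: a "neutral" algorithm trained on biased history
# reproduces that history's bias. All data below is invented.

# Past hiring records as (group, hired) pairs. Group B was rarely
# hired, for reasons that had nothing to do with merit.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def train(records):
    """Learn the historical hiring rate P(hired | group)."""
    stats = {}
    for group, hired in records:
        total, yes = stats.get(group, (0, 0))
        stats[group] = (total + 1, yes + hired)
    return {g: yes / total for g, (total, yes) in stats.items()}

def predict(model, group, threshold=0.5):
    """'Purely rational' decision: follow the learned rate."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True  -- bias in, bias out
print(predict(model, "B"))  # False
```

    Nothing in the code "wants" to discriminate; the skew lives entirely in the training data, which is the mechanism the excerpt is pointing at.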

    Second, some people actually think there will soon be algorithms that control us, operating “through sound decisions of pure rationality” and that we will no longer have use for politicians at all.

    And look, I can understand why people are sick of politicians, and would love them to be replaced with rational decision-making robots. But that scenario means one of three things:

    1. Controlling robots simply get trained by the people’s will and do whatever people want at the moment. Maybe that looks like people voting with their phones or via the chips in their heads. This is akin to direct democracy, and the problems are varied – I was in Occupy after all – but it means in particular that people are constantly weighing in on things they don’t actually understand. That leaves them vulnerable to misinformation and propaganda.

    2. Controlling robots ignore people’s will and just follow their inner agendas. Then the question becomes, who sets that agenda? And how does it change as the world and as culture changes? Imagine if we were controlled by someone from 1000 years ago with the social mores from that time. Someone’s gonna be in charge of “fixing” things.

    3. Finally, it’s possible that the controlling robot would act within a political framework to be somewhat but not completely influenced by a democratic process. Something like our current president. But then getting a robot in charge would be a lot like voting for a president. Some people would agree with it, some wouldn’t. Maybe every four years we’d have another vote, and the candidates would be both people and robots, and sometimes a robot would win, sometimes a person. I’m not saying it’s impossible, but it’s not utopian. There’s no such thing as pure rationality in politics, it’s much more about picking sides and appealing to some people’s desires while ignoring others.
    https://mathbabe.org/2016/08/11/donal...e-a-biased-machine-learning-algorithm
  5. We’re now operating in a world where automated algorithms make impactful decisions that can and do amplify the power of business and government. I’ve argued in this paper that we need to do better in deciphering the contours of that power. As algorithms come to regulate society and perhaps even implement law directly, we should proceed with caution and think carefully about how we choose to regulate them back. Journalists might productively offer themselves as a check and balance on algorithmic power while the legislative regulation of algorithms takes shape over a longer time horizon. In this paper I’ve offered a basis for understanding algorithmic power in terms of the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information. Understanding those wellsprings of algorithmic power suggests a number of diagnostic questions that further inform a more critical stance toward algorithms. Given the challenges to effectively employing transparency for algorithms, namely trade secrets, the consequences of manipulation, and the cognitive overhead of complexity, I propose that journalists might effectively engage with algorithms through a process of reverse engineering. By understanding the input-output relationships of an algorithm we can start to develop stories about how that algorithm operates. Sure, there are challenges here too: legal, ethical, and technical, but reverse engineering is another tactic for the tool belt—a technique that has already shown it can be useful at times. Next time you hear about software or an algorithm being used to help make a decision, you might get critical and start asking questions about how that software could be affecting outcomes. Try to FOIA it, try to understand whether you can reverse engineer it, and when you’re finished, write up your method for how you got there. By method-sharing we’ll expand our ability to replicate these types of stories, and, over time, perhaps even develop enough expertise to suggest standards for algorithmic transparency that acknowledge business concerns while still surfacing useful information for the public.
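    The input-output probing the excerpt describes can be sketched in a few lines. Everything here is an invented stand-in: `black_box_score` plays the role of an opaque scoring system a journalist cannot inspect, and `probe` varies one input at a time while holding the rest fixed to see which features drive the outcome.

```python
# Hypothetical sketch of reverse engineering by input-output probing.
# The scorer below is invented; in practice it would be an opaque
# system observed only through its outputs.

def black_box_score(application):
    # Stand-in for the system under investigation.
    score = 50
    score += 2 * application["years_experience"]
    if application["zip_code"] in {"10452", "60624"}:
        score -= 15  # hidden geographic penalty
    return score

def probe(base, field, values):
    """Hold every input fixed except `field`, and record the outputs."""
    results = {}
    for v in values:
        candidate = dict(base, **{field: v})
        results[v] = black_box_score(candidate)
    return results

base = {"years_experience": 5, "zip_code": "94110"}
print(probe(base, "years_experience", [0, 5, 10]))
# {0: 50, 5: 60, 10: 70}  -> a steady, linear effect of experience
print(probe(base, "zip_code", ["94110", "10452"]))
# {'94110': 60, '10452': 45} -> a zip-code penalty worth a story
```

    The method only reveals correlations between inputs and outputs, not the algorithm's internals, which is why the excerpt pairs it with FOIA requests and method-sharing.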
    http://towcenter.org/research/algorit...on-the-investigation-of-black-boxes-2
  6. We can’t create a brilliant citizen-facing interface, which then invites the public into an outdated, broken, rigid, or unable-to-adapt process.

    A lack of citizen engagement or a lack of transparency is a symptom of the problem, not the problem itself.

    The people I know in government want to do the best that they can for the citizens of their community and would gladly adopt a new technology if it meant they could do their jobs better and more efficiently.

    But when citizen input and open data are thrown at a perceived problem, with no direction on how to help government officials analyze that data or how to easily get it into the hands of decision makers (i.e. elected officials) in a way that improves their workload, it creates frustration on both sides.

    This is why so few governments allow comments on their Facebook pages. There is no clean way to incorporate that feedback into their processes without added staff time, so many only use it as a billboard.

    This is the next and necessary evolution of open data and civic tech, and the real measurement of how to improve civic engagement in total: applying technology to government processes in such a way that the processes improve, decisions are more well-informed, and government staff and officials can more easily do their jobs. Then citizens will be invited into a better experience and have a vastly improved interaction with government. Not the other way around.
    http://opensource.com/government/15/6/next-frontier-civic-tech
  7. They want to replace politicians with engineers and our modern financial system with one backed by the laws of science. They dream of a world without scarcity, where the miracles of technology can easily meet the needs of everyone in the nation.

    No, we’re not talking about today’s Bitcoin-hawking Silicon Valley techno-utopians. We’re talking about Technocracy Inc., an organization founded in 1931 to promote the ideas of a man named Howard Scott.

    Scott saw government and industry as wasteful and unfair. He believed that a new economy run by engineers would be more efficient and equitable. His core idea was that what he called the “price system”—essentially the capitalist economy and the fiat currencies it uses—should be replaced with a new economic system based on how much energy it takes to produce specific goods. Under Scott’s plan, engineers would run a new continent-wide government called the Technate and optimize the use of energy to assure abundance.

    It would be an exaggeration to say that modern Silicon Valley is self-consciously carrying on the legacy of Howard Scott and Technocracy Inc. itself. But it’s hard not to hear echoes of his ideas today when tech moguls propose floating city-states and pitch idealistic high-tech solutions as answers to deep-seated social issues such as homelessness. The group influenced inventor and utopian thinker Buckminster Fuller, whose ideas in turn shaped the thinking of Stewart Brand, the founder of the Whole Earth Catalog—the DIY tome that shaped the early personal computing and networked computing era and the thinking of everyone from Steve Jobs to the founders of WIRED.
    http://www.wired.com/2015/06/technocracy-inc/?mbid=social_twitter
  8. If only Andreessen weren’t so rare in this respect. It’s not hyperbole to say that we are living through one of the most dysfunctional, polarized, and narrow-minded eras in US political history. But that makes 2015 the perfect time for what Silicon Valley calls disruption. For example, did you know that the US Senate doesn’t allow the use of cloud-based services by its members? Or that those same senators are prevented from using social media analytics tools—even to measure constituent sentiment? Crazy, right? The attempt to drag our esteemed representatives into the 21st century is what got senior staff writer Jessi Hempel interested in talking to US senator Cory Booker about some of the reforms he and fellow senator Claire McCaskill—a social media ace—have proposed.
    http://www.wired.com/2015/05/editors-letter-june-2015
  9. The Digital Savior Complex openly embraces technological determinism, the narrow idea that technology determines the progress of societies as well as the progress of their moral and cultural values. Ideas of progress, modernization and civilization are projected as universal goals to aspire to without taking into account simple questions such as: Who controls technology? Who stands to benefit from technology? Who profits from it? How do governments deploy technology in order to manipulate geopolitical interests?

    The particular kind of neutrality associated with digital humanitarianism is quite dangerous. To be detached from the cause you claim to support, to spread information without quite asking what this does and, more importantly, to never quite find out where and how your money is traveling and to what end it is being used is an utter travesty.

    While Patrick Meier’s work claims the origins of digital humanitarianism with Haiti’s 2010 earthquake, I would argue that its origins are much older, and that sometimes the worst and best things always begin in Africa.

    Digital modes were first implemented during the creation, dissemination and promotion of the Save Darfur campaign, often projected as one of the most dire humanitarian crises, and even termed the “21st Century’s Genocide.”

    Reading Mahmood Mamdani’s singularly brilliant book on the subject, Saviors and Survivors: Darfur, Politics and the War on Terror, allows the reader to cut through the frenzy around Save Darfur with great rigor and precision. But Mamdani does not expressly speak of the ways in which the digital impacted activism around the conflict in Sudan. A significant portion of his work centers on his frustration with the highly curated commodification of Darfur, coupled with the way in which media and technology were used in this campaign. I believe there are three factors that turned this campaign into a specifically digital phenomenon: the strategic use of numbers; the image-centric nature of the entire campaign; and the targeting of youth. This is an indispensable triad if the intention is to popularize the cause or, actually, if you want it to “go viral.” Which it did.

    The Save Darfur campaign was marked by aestheticized images, the constant tick-tock of body counts and numbers of dead, and the incredibly successful mobilization of university students. The use of Google Earth, the world’s largest Facebook campaign and video games such as Darfur is Dying were only pieces in the extraordinary digital machinery.
    http://www.warscapes.com/opinion/digital-savior-complex
  10. A broad survey of on-demand workers found that many encountered lower pay than they expected and hours tied tightly to periods of peak demand. They discovered they had to work earlier or later than they expected, and longer hours in general, because the systems weren’t as flexible as they assumed. The upshot: people are leaving on-demand work after finding out the promised advantages over traditional jobs don’t hold up.

    And this dissatisfaction could wind up being a big problem for Silicon Valley.

    Half of the respondents said they planned to stop working for on-demand companies within the year.

    An overwhelming majority of respondents—75 percent—said their top reason for doing on-demand work was because they thought it offered “greater schedule flexibility.” Yet nearly half of respondents said “peak hours and demand” was the most significant factor dictating when they worked. (“Family” came in second at 35 percent). Many workers didn’t end up straying too far from a traditional 9-to-5 schedule because this was when demand tended to be high. Inflexible schedules were a particular problem in ride service work.

    Still, insufficient pay, not scheduling, turned out to be the most common reason workers left their jobs. In fact, the likelihood of respondents staying in the job or leaving was directly tied to their earnings, which varied depending on the type of on-demand job.
    http://www.wired.com/2015/05/demand-s...promises-workers/?mbid=social_twitter



About - Propulsed by SemanticScuttle