mfioretti: algorithms* + solutionism*

Bookmarks on this page are managed by an admin user.

22 bookmark(s) - sorted by date ↓

  1. Which brings us back to Facebook, which to this day seems at best to dimly understand how the news business works, as is evident in its longstanding insistence that it's not a media company. Wired was even inspired to publish a sarcastic self-help quiz for Facebook execs on "How to tell if you're a media company." It included such questions as "Are you the country's largest source of news?"

    The answer is a resounding yes. An astonishing 45 percent of Americans get their news from this single source. Add Google, and more than 70 percent of Americans get their news from a pair of outlets. The two firms also ate up about 89 percent of the digital-advertising growth last year, underscoring their monopolistic power in this industry.

    Facebook's cluelessness on this front makes the ease with which it took over the press that much more bizarre to contemplate. Of course, the entire history of Facebook is pretty weird, even by Silicon Valley standards, beginning with the fact that the firm thinks of itself as a movement and not a giant money-sucking machine.


    That Facebook saw meteoric rises without ever experiencing a big dip in users might have something to do with the fact that the site was consciously designed to be addictive, as Sean Parker, the company's founding president, recently noted at a conference in Philadelphia.

    Facebook is full of features such as "likes" that dot your surfing experience with neuro-rushes of micro-approval – a "little dopamine hit," as Parker put it. The hits might come with getting a like when you post a picture of yourself thumbs-upping the world's third-largest cheese wheel, or flashing the "Live Long and Prosper" sign on International Star Trek day, or whatever the hell it is you do in your cyber-time. "It's a social-validation feedback loop," Parker explained. "Exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology."
    https://www.rollingstone.com/politics...e-be-saved-social-media-giant-w518655
  2. Stratumseind in Eindhoven is one of the busiest nightlife streets in the Netherlands. On a Saturday night, bars are packed, music blares through the street, laughter and drunken shouting bounces off the walls. As the night progresses, the ground becomes littered with empty shot bottles, energy drink cans, cigarette butts and broken glass.

    It’s no surprise that the place is also known for its frequent fights. To change that image, Stratumseind has become one of the “smartest” streets in the Netherlands. Lamp-posts have been fitted with wifi-trackers, cameras and 64 microphones that can detect aggressive behaviour and alert police officers to altercations. An experiment that changed light intensity to alter the mood has already failed. The next plan, starting this spring, is to diffuse the smell of oranges to calm people down. The aim? To make Stratumseind a safer place.


    All the while, data is being collected and stored. “Visitors do not realise they are entering a living laboratory,” says Maša Galic, a researcher on privacy in public space at the Tilburg Institute for Law, Technology and Society. Since the data on Stratumseind is used to profile, nudge or actively target people, this “smart city” experiment is subject to privacy law. According to the Dutch Personal Data Protection Act, people should be notified in advance of data collection and the purpose should be specified – but in Stratumseind, as in many other “smart cities”, this is not the case.

    Peter van de Crommert is involved at Stratumseind as project manager with the Dutch Institute for Technology, Safety and Security. He says visitors do not have to worry about their privacy: the data is about crowds, not individuals. “We often get that comment – ‘Big brother is watching you’ – but I prefer to say, ‘Big brother is helping you’. We want safe nightlife, but not a soldier on every street corner.”
    Revellers in Eindhoven’s Stratumseind celebrate King’s Day. Photograph: Filippo Manaresi/Moment Editorial/Getty Images

    When we think of smart cities, we usually think of big projects: Songdo in South Korea, the IBM control centre in Rio de Janeiro or the hundreds of new smart cities in India. More recent developments include Toronto, where Google will build an entirely new smart neighbourhood, and Arizona, where Bill Gates plans to build his own smart city. But the reality of the smart city is that it has stretched into the everyday fabric of urban life – particularly so in the Netherlands.

    In the eastern city of Enschede, city traffic sensors pick up your phone’s wifi signal even if you are not connected to the wifi network. The trackers register your MAC address, the unique network card number in a smartphone. The city council wants to know how often people visit Enschede, and what their routes and preferred spots are. Dave Borghuis, an Enschede resident, was not impressed and filed an official complaint. “I don’t think it’s okay for the municipality to track its citizens in this way,” he said. “If you walk around the city, you have to be able to imagine yourself unwatched.”
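As a rough illustration of the kind of aggregation the Enschede sensors perform, consider counting repeat visits from a log of captured wifi probe requests. Everything here (the log format, the field names, the sample MAC addresses) is invented for the sketch; it is not the city's actual system, only the general shape of the technique:

```python
from collections import Counter

# Hypothetical log of wifi probe requests captured by street sensors:
# each record is (day, MAC address). A real deployment would record
# timestamps, signal strengths and sensor locations as well.
probe_log = [
    ("2018-03-01", "aa:bb:cc:11:22:33"),
    ("2018-03-01", "de:ad:be:ef:00:01"),
    ("2018-03-02", "aa:bb:cc:11:22:33"),
    ("2018-03-03", "aa:bb:cc:11:22:33"),
]

# Count the distinct days each device was seen: a crude
# "visit frequency" profile per phone.
visits = Counter(mac for _day, mac in set(probe_log))

print(visits["aa:bb:cc:11:22:33"])  # seen on 3 separate days
print(visits["de:ad:be:ef:00:01"])  # seen on 1 day
```

Note that even hashing the MAC address before counting would leave the profile linkable across days, which is part of why regulators treat this kind of tracking as processing personal data.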

    Enschede is enthusiastic about the advantages of the smart city. The municipality says it is saving €36m in infrastructure investments by launching a smart traffic app that rewards people for good behaviour like cycling, walking and using public transport. (Ironically, one of the rewards is a free day of private parking.) Only those who read the small print will discover that the app creates “personal mobility profiles”, and that the collected personal data belongs to the company Mobidot.
    https://www.theguardian.com/cities/20...-privacy-eindhoven-utrecht?CMP=twt_gu
  3. Rome and London are two huge, sluggish beasts of cities that have outlived millennia of eager reformers. They share a world where half the people already live in cities and another couple billion are on their way into town. The population is aging quickly, the current infrastructure must crumble and be replaced by its very nature, and climate disaster is taking the place of the past’s great urban fires, wars, and epidemics. Those are the truly important, dull but worthy urban issues.

    However, the cities of the future won’t be “smart,” or well-engineered, cleverly designed, just, clean, fair, green, sustainable, safe, healthy, affordable, or resilient. They won’t have any particularly higher ethical values of liberty, equality, or fraternity, either. The future smart city will be the internet, the mobile cloud, and a lot of weird paste-on gadgetry, deployed by City Hall, mostly for the sake of making towns more attractive to capital.


    Whenever that’s done right, it will increase the soft power of the more alert and ambitious towns and make the mayors look more electable. When it’s done wrong, it’ll much resemble the ragged downsides of the previous waves of urban innovation, such as railways, electrification, freeways, and oil pipelines. There will also be a host of boozy side effects and toxic blowback that even the wisest urban planner could never possibly expect.

    “information about you wants to be free to us.”

    This year, a host of American cities vilely prostrated themselves to Amazon in the hopes of winning its promised, new second headquarters. They’d do anything for the scraps of Amazon’s shipping business (although, nobody knows what kind of jobs Amazon is really promising). This also made it clear, though, that the flat-world internet game was up, and it’s still about location, location, and location.

    Smart cities will use the techniques of “smartness” to leverage their regional competitive advantages. Instead of being speed-of-light flat-world platforms, all global and multicultural, they’ll be digitally gated communities, with “code as law” that is as crooked, complex, and deceitful as a Facebook privacy chart.


    You still see this upbeat notion remaining in the current smart-city rhetoric, mostly because it suits the institutional interests of the left.

    The “bad part of town” will be full of algorithms that shuffle you straight from high-school detention into the prison system. The rich part of town will get mirror-glassed limos that breeze through the smart red lights to seamlessly deliver the aristocracy from curb into penthouse.

    These aren’t the “best practices” beloved by software engineers; they’re just the standard urban practices, with software layered over. It’s urban design as the barbarian’s varnish on urbanism.

    If you look at where the money goes (always a good idea), it’s not clear that the “smart city” is really about digitizing cities. Smart cities are a generational civil war within an urban world that’s already digitized.

    It’s a land grab for the command and control systems that were mostly already there.
    https://www.theatlantic.com/technology/archive/2018/02/stupid-cities/553052
  4. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated—passing through and massaged by layer upon layer of machinery carefully hidden from view. The upshot is that it’s becoming increasingly difficult to determine what in our interactions is simply human and what is machine-generated. It is becoming difficult to know what is real.

    Before the agents of this new unreality finish this first phase of their work and then disappear completely from view to complete it, we have a brief opportunity to identify and catalogue the processes shaping our drift to a new world in which reality is both relative and carefully constructed by others, for their ends. Any catalogue must include at least these four items:

    the monetisation of propaganda as ‘fake news’;
    the use of machine learning to develop user profiles accurately measuring and modelling our emotional states;
    the rise of neuromarketing, targeting highly tailored messages that nudge us to act in ways serving the ends of others;
    a new technology, ‘augmented reality’, which will push us to sever all links with the evidence of our senses.



    The fake news stories floated past as jetsam on Facebook’s ‘newsfeed’, that continuous stream of shared content drawn from a user’s Facebook contacts, a stream generated by everything everyone else posts or shares. A decade ago that newsfeed had a raw, unfiltered quality, the notion that everyone was doing everything, but as Facebook has matured it has engaged increasingly opaque ‘algorithms’ to curate (or censor) the newsfeed, producing something that feels much more comfortable and familiar.

    This seems like a useful feature to have, but the taming of the newsfeed comes with a consequence: Facebook’s billions of users compose their world view from what flows through their feeds. Consider the number of people on public transport—or any public place—staring into their smartphones, reviewing their feeds, marvelling at the doings of their friends, reading articles posted by family members, sharing video clips or the latest celebrity outrages. It’s an activity now so routine we ignore its omnipresence.

    Curating that newsfeed shapes what Facebook’s users learn about the world. Some of that content is controlled by the user’s ‘likes’, but a larger part is derived from Facebook’s deep analysis of a user’s behaviour. Facebook uses ‘cookies’ (invisible bits of data hidden within a user’s web browser) to track the behaviour of its users even when they’re not on the Facebook site—and even when they’re not users of Facebook. Facebook knows where its users spend time on the web, and how much time they spend there. All of that allows Facebook to tailor a newsfeed to echo the interests of each user. There’s no magic to it, beyond endless surveillance.
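The mechanism described above can be caricatured in a few lines. Everything below (the site names, the topic mapping, the ranking rule) is invented for illustration; it shows only the general shape of interest-profiling from cross-site browsing records, not Facebook's actual pipeline:

```python
from collections import defaultdict

# Invented browsing log: (site, seconds spent). A tracking cookie lets
# the same pseudonymous ID be attached to visits across unrelated sites.
browsing_log = [
    ("cheese-enthusiast.example", 300),
    ("star-trek-forum.example", 120),
    ("cheese-enthusiast.example", 450),
]

# Invented mapping from sites to interest categories.
site_topics = {
    "cheese-enthusiast.example": "food",
    "star-trek-forum.example": "scifi",
}

# Accumulate time per topic: the crude "interest profile" a feed
# could then be ranked against.
profile = defaultdict(int)
for site, seconds in browsing_log:
    profile[site_topics[site]] += seconds

# Rank candidate posts by how well their topic matches the profile.
posts = [("new cheddar review", "food"), ("warp drive rumours", "scifi")]
feed = sorted(posts, key=lambda p: profile[p[1]], reverse=True)
print(feed[0][0])  # the food post outranks the sci-fi post
```

As the excerpt says, there is no magic to it beyond endless surveillance: the ranking is just accumulated observation fed back as a sort key.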

    What is clear is that Facebook has the power to sway the moods of billions of users. Feed people a steady diet of playful puppy videos and they’re likely to be in a happier mood than people fed images of war. Over the last two years, that capacity to manage mood has been monetised through the sharing of fake news and political feeds attuned to reader preference: you can also make people happy by confirming their biases.

    We all like to believe we’re in the right, and when we get some sign from the universe at large that we are correct, we feel better about ourselves. That’s how the curated newsfeed became wedded to the world of profitable propaganda.

    Adding a little art to brighten an otherwise dull wall seems like an unalloyed good, but only if one completely ignores bad actors. What if that blank canvas gets painted with hate speech? What if, perchance, the homes of ‘undesirables’ are singled out with graffiti that only bad actors can see? What happens when every gathering place for any oppressed community gets invisibly ‘tagged’? In short, what happens when bad actors use Facebook’s augmented reality to amplify their own capacity to act badly?

    But that’s Zuckerberg: he seems to believe his creations will only be used to bring out the best in people. He seems to believe his gigantic sharing network would never be used to incite mob violence. Just as he seems to claim that Facebook’s capacity to collect and profile the moods of its users should never be monetised—but, given that presentation unearthed by The Australian, Facebook tells a different story to advertisers.

    Regulating Facebook enshrines its position as the data-gathering and profile-building organisation, while keeping it plugged into and responsive to the needs of national powers. Before anyone takes steps that would cement Facebook in our social lives for the foreseeable future, it may be better to consider how this situation arose, and whether—given what we now know—there might be an opportunity to do things differently.
    https://meanjin.com.au/essays/the-last-days-of-reality
    When we look at digital technology and platforms, it’s always instructive to remember that they exist to extract data. The longer you are on the platform, the more you produce and the more can be extracted from you. Polarization keys engagement, and engagement/attention are what keep us on platforms. In the words of Tristan Harris, the former Google Design Ethicist, and one of the earliest SV folks to have the scales fall from his eyes, “What people don’t know about or see about Facebook is that polarization is built into the business model,” Harris told NBC News. “Polarization is profitable.”

    David Golumbia’s description of the scholarly concept of Cyberlibertarianism is useful here (emphasis mine) :

    In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions. In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.

    So the cui bono of digital polarization are the wealthy, the powerful, the people with so much to gain promoting systems that maintain the status quo, despite the language of freedom, democratization, and community that are featured so prominently when people like Facebook co-founder Mark Zuckerberg or Twitter co-founder and CEO Jack Dorsey talk about technology. Digital technology in general, and platforms like Facebook, YouTube, and Twitter specifically, exist to promote polarization and maintain the existing concentration of power.

    To the extent that Silicon Valley is the seat of technological power, it’s useful to note that the very ground of what we now call Silicon Valley is built on the foundation of segregating black and white workers. Richard Rothstein’s The Color of Law talks about auto workers in 1950’s California:

    So in 1953 the company (Ford) announced it would close its Richmond plant and reestablish operations in a larger facility fifty miles south in Milpitas, a suburb of San Jose, rural at the time. (Milpitas is a part of what we now call Silicon Valley.)

    Because Milpitas had no apartments, and houses in the area were off limits to black workers—though their incomes and economic circumstances were like those of whites on the assembly line—African Americans at Ford had to choose between giving up their good industrial jobs, moving to apartments in a segregated neighborhood of San Jose, or enduring lengthy commutes between North Richmond and Milpitas.
    https://hypervisible.com/polarization/power-technology
  6. The point is not that making a world to accommodate oneself is bad, but that when one has as much power over the rest of the world as the tech sector does, over folks who don’t naturally share its worldview, then there is a risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible—do the math, and there’s the future.

    We’ve gotten used to service personnel and staff who have no interest or participation in the businesses where they work. They have no incentive to make the products or the services better. This is a long legacy of the assembly line, standardising, franchising and other practices that increase efficiency and lower costs. It’s a small step then from a worker who doesn’t care to a robot. To consumers, it doesn’t seem like a big loss.

    Those who oversee the AI and robots will, not coincidentally, make a lot of money as this trend towards less human interaction continues and accelerates—as many of the products produced above are hugely and addictively convenient. Google, Facebook and other companies are powerful and yes, innovative, but the innovation curiously seems to have had an invisible trajectory. Our imaginations are constrained by who and what we are. We are biased in our drives, which in some ways is good, but some diversity in what influences the world might be reasonable, and beneficial to all.

    To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they are based on the probability that our responses, often prodded by an emotion, will more likely than not offer the best way to deal with a situation.

    Neuroscientist Antonio Damasio wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that though we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.

    With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents and unexpected connections and intuitions. Interaction, cooperation and collaboration with others multiplies those opportunities.

    We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot alone. In his book, Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by a possibility to believe in “fictions” such as nations, money, religions and legal institutions. Machines don’t believe in fictions, or not yet anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. If less human interaction enables us to forget how to cooperate, then we lose our advantage.

    Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we as individuals are inhabitants of networks, we are relationships. That is how we prosper and thrive.
    http://davidbyrne.com/journal/eliminating-the-human
    That idea of efficiency through speed brought by the tech industry has consequences for society. First, the immediacy of the communications creates moments of intense information overload and distractions. Like other moments of major revolution in information technology, people are racing to keep up with the increasing pace of information exchange. In the Big Now, the pool of instantaneous information has dramatically increased, however the pool of available understanding of what that information means has not. People and organizations are still seeking new practices and means to filter, categorize and prioritize information in a world obsessed with the production and consumption of the freshest data points (see Social media at human pace). In doing so, they exercise almost exclusively their capacity to fast-check status updates and leave their ability for reflection unstimulated (see, in French, L’écologie de l’attention). The Big Now is not designed for people to step back and understand information in a bigger context (e.g. poor debates in the recent US elections, inability to foresee the 2008 economic crisis). It is only recently that alternatives have started to emerge. For example, the recent strategic changes at Medium propose to reverse the tendency:

    “We believe people who write and share ideas should be rewarded on their ability to enlighten and inform, not simply their ability to attract a few seconds of attention”.

    Secondly, the asynchronous Internet diminished the frontiers between work, family and leisure. In response, the tech world proposes to ‘hack’ time and to remove frictions (e.g. Soylent diet) to free up time. The flourishing personal productivity books and apps promise peace of mind with time-management advice tailored to the era of connected devices (see The global village and its discomfort). However, like building bigger roads makes traffic worse, many of these solutions only provide a quick fix that induces even busier and more stressed lifestyles (see Why time management is ruining our lives). In the Big Now and its cybernetic loops, the more efficient we get at doing things and the more data we generate, the faster the Internet gets back to us, keeps us busy and grabs our limited amount of attention. Despite the promises of time-compression technologies to save us valuable time and free us for life’s important things, in the past half-century, leisure time has remained overall about the same (see Fast-world values).

    Try to imagine another version of the Internet in which the sense of simultaneity that Adam Greenfield described moves to the background of our lives and leaves stage for temporal depth and quality. Connecting people to share and collaborate has been a wonderful thing. Today, I believe that giving us the time to think will be even better (see The collaboration curse). As an illustration, regardless of current methodological trends, creativity rarely emerges rapidly. Many ideas need time to mature, they need different contexts or mindsets to get stronger. This does not often happen when teams are in ‘sprints’ or a young start-up feels under the gun in its ‘incubator’. I participated in ‘start-up accelerator’ mentoring sessions in which I advised young entrepreneurs to step back and consider whether their objectives were about speed and scale. Many of them were lured by that Silicon Valley unicorn fantasy. Not surprisingly, the first startup decelerator program has now been created, and Socratic design workshops are becoming a thing for tech executives to reconsider what’s important.
    https://medium.com/@girardin/after-the-big-now-f0a3f1857294
  8. At the end of the twentieth century, the long predicted convergence of the media, computing and telecommunications into hypermedia is finally happening. Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labour is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts. When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. New industries will be born and current stock market favourites will be swept away. At such moments of profound social change, anyone who can offer a simple explanation of what is happening will be listened to with great interest. At this crucial juncture, a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA have succeeded in defining a heterogeneous orthodoxy for the coming information age: the Californian Ideology.

    This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. As usual, Europeans have not been slow in copying the latest fad from America. While a recent EU Commission report recommends following the Californian free market model for building the information superhighway, cutting-edge artists and academics eagerly imitate the posthuman philosophers of the West Coast’s Extropian cult. With no obvious rivals, the triumph of the Californian Ideology appears to be complete.

    The widespread appeal of these West Coast ideologues isn’t simply the result of their infectious optimism. Above all, they are passionate advocates of what appears to be an impeccably libertarian form of politics – they want information technologies to be used to create a new ‘Jeffersonian democracy’ where all individuals will be able to express themselves freely within cyberspace. However, by championing this seemingly admirable ideal, these techno-boosters are at the same time reproducing some of the most atavistic features of American society, especially those derived from the bitter legacy of slavery. Their utopian vision of California depends upon a wilful blindness towards the other – much less positive – features of life on the West Coast: racism, poverty and environmental degradation. Ironically, in the not too distant past, the intellectuals and artists of the Bay Area were passionately concerned about these issues.
    http://www.imaginaryfutures.net/2007/04/17/the-californian-ideology-2
  9. The reason I bring this up: first of all, it’s a great way of understanding how machine learning algorithms can give us stuff we absolutely don’t want, even though they fundamentally lack prior agendas. Happens all the time, in ways similar to the Donald.

    Second, some people actually think there will soon be algorithms that control us, operating “through sound decisions of pure rationality” and that we will no longer have use for politicians at all.

    And look, I can understand why people are sick of politicians, and would love them to be replaced with rational decision-making robots. But that scenario means one of three things:

    1. Controlling robots simply get trained by the people’s will and do whatever people want at the moment. Maybe that looks like people voting with their phones or via the chips in their heads. This is akin to direct democracy, and the problems are varied – I was in Occupy after all – but they mean, in particular, that people are constantly weighing in on things they don’t actually understand. That leaves them vulnerable to misinformation and propaganda.

    2. Controlling robots ignore people’s will and just follow their inner agendas. Then the question becomes, who sets that agenda? And how does it change as the world and as culture changes? Imagine if we were controlled by someone from 1000 years ago with the social mores from that time. Someone’s gonna be in charge of “fixing” things.

    3. Finally, it’s possible that the controlling robot would act within a political framework to be somewhat but not completely influenced by a democratic process. Something like our current president. But then getting a robot in charge would be a lot like voting for a president. Some people would agree with it, some wouldn’t. Maybe every four years we’d have another vote, and the candidates would be both people and robots, and sometimes a robot would win, sometimes a person. I’m not saying it’s impossible, but it’s not utopian. There’s no such thing as pure rationality in politics, it’s much more about picking sides and appealing to some people’s desires while ignoring others.
    https://mathbabe.org/2016/08/11/donal...e-a-biased-machine-learning-algorithm
  10. We’re now operating in a world where automated algorithms make impactful decisions that can and do amplify the power of business and government. I’ve argued in this paper that we need to do better in deciphering the contours of that power. As algorithms come to regulate society and perhaps even implement law directly,[47] we should proceed with caution and think carefully about how we choose to regulate them back.[48] Journalists might productively offer themselves as a check and balance on algorithmic power while the legislative regulation of algorithms takes shape over a longer time horizon. In this paper I’ve offered a basis for understanding algorithmic power in terms of the types of decisions algorithms make in prioritizing, classifying, associating, and filtering information. Understanding those wellsprings of algorithmic power suggests a number of diagnostic questions that further inform a more critical stance toward algorithms. Given the challenges to effectively employing transparency for algorithms, namely trade secrets, the consequences of manipulation, and the cognitive overhead of complexity, I propose that journalists might effectively engage with algorithms through a process of reverse engineering. By understanding the input-output relationships of an algorithm we can start to develop stories about how that algorithm operates. Sure, there are challenges here too: legal, ethical, and technical, but reverse engineering is another tactic for the tool belt—a technique that has already shown it can be useful at times.

    Next time you hear about software or an algorithm being used to help make a decision, you might get critical and start asking questions about how that software could be affecting outcomes. Try to FOIA it, try to understand whether you can reverse engineer it, and when you’re finished, write up your method for how you got there. By method-sharing we’ll expand our ability to replicate these types of stories, and, over time, perhaps even develop enough expertise to suggest standards for algorithmic transparency that acknowledge business concerns while still surfacing useful information for the public.
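The input-output approach described in the excerpt can be sketched concretely. Suppose a newsroom gains query access to some opaque scoring tool; the `risk_score` function below is a stand-in invented for this example, not any real system. By holding all inputs fixed and varying one at a time, a reporter can measure which features actually move the output:

```python
# Stand-in for an opaque system we can only query, not inspect.
# The coefficients are invented; a reporter would never see them.
def risk_score(age, prior_arrests, zip_code):
    return 0.1 * prior_arrests + (5 if zip_code == "00001" else 0)

# Reverse-engineering probe: vary one input, hold the rest fixed,
# and record how the output responds.
baseline = risk_score(age=30, prior_arrests=0, zip_code="99999")
effect_of_arrests = risk_score(30, 10, "99999") - baseline
effect_of_zip = risk_score(30, 0, "00001") - baseline

print(effect_of_arrests)  # arrests shift the score only modestly
print(effect_of_zip)      # zip code dominates: a possible story lead
```

In this contrived setup the probe reveals that where you live matters far more than your record, exactly the kind of finding a reporter could then verify and write up, sharing the probing method so others can replicate it.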
    http://towcenter.org/research/algorit...on-the-investigation-of-black-boxes-2


Page 1 of 3 - Online Bookmarks of M. Fioretti: Tags: algorithms + solutionism

About - Propulsed by SemanticScuttle