2018/10/10: There is a "machine learning is hard" angle to this: while the flawed outcomes from the flawed training data were totally predictable, the system's self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing "women's" -- it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.
Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
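The mechanism described above -- a model that was never told about gender, yet learned to penalize a token like "women's" purely from its correlation with past hiring outcomes -- can be illustrated with a minimal sketch. The toy resumes, labels, and the use of plain logistic regression here are all hypothetical assumptions for illustration, not Amazon's actual 500 models or data:

```python
# Minimal sketch (hypothetical toy data): how a bag-of-words classifier
# trained on a biased hiring history learns, on its own, to downrank a
# token correlated with past rejections.
import math

def train_logreg(docs, labels, epochs=200, lr=0.5):
    """Tiny bag-of-words logistic regression trained by gradient descent."""
    vocab = sorted({t for d in docs for t in d})
    w = {t: 0.0 for t in vocab}
    b = 0.0
    for _ in range(epochs):
        for d, y in zip(docs, labels):
            z = b + sum(w[t] for t in d)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(hired)
            g = p - y                        # gradient of the log-loss
            b -= lr * g
            for t in d:
                w[t] -= lr * g
    return w, b

# Hypothetical training set mirroring a male-dominated hiring history:
# "python" appears on every resume, while "women's" appears only on
# resumes that were not hired.
hired     = [["python", "executed"], ["python", "captured"], ["python", "executed"]]
not_hired = [["python", "women's"], ["python", "women's"], ["python", "teaching"]]
docs   = hired + not_hired
labels = [1, 1, 1, 0, 0, 0]

w, b = train_logreg(docs, labels)
# "women's" ends up with a negative weight purely from its correlation with
# past rejections, while the skill common to all applicants ("python")
# carries little signal -- the two dynamics the Reuters sources describe.
```

Nothing in this sketch mentions gender; the discriminatory weight emerges entirely from the skew in the historical labels, which is why such bias is predictable in kind but surprising in its specific form.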
2018/10/03: If anything, rich countries are leapfrogging ahead of the poor by benefiting from the expanded markets and lower labour costs that poorer countries provide.
The latest technologies are almost always designed for advanced markets and the rich who live in them, and are well beyond the means of the poorest. Hence, if these technologies do indeed have benefits associated with them, these will accrue disproportionately to the rich. Poor countries and people are either left to pick up the scraps of remaining older technologies, or have to purchase inferior products at the lower end of the market. The Internet of Things and Artificial Intelligence are going to be used in the so-called Smart Cities of the developed world long before they are used at all widely in remote rural villages in Africa or Asia; big data are going to be used by large corporations with the expertise to analyse them, long before they are understood, let alone used, by people in the poorest countries of the world.
This is why terms such as “bridging the digital divide” or “digital leapfrogging”, although widely used, are so inappropriate. When the rich are designing and implementing technologies in their own interests, to move them further ahead of their competitors, the gap or divide between rich and poor becomes yet more difficult to reduce, or bridge; the horizon is always moving further and further into the distance… Moreover, the notion of a “divide” generally implies a binary divide, as in the gender divide, whereas in reality it is complex and multifaceted; it is not one divide, but many. The notion of leapfrogging is also problematic, since it implies benefiting from someone else; using a person’s back to lever an advantage ahead of them.
2018/09/17: Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.
If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.
Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.
The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.
From the inside, this wasn’t far from the truth. The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional.
2018/04/05: Teachers can take a break from menial tasks. AI can help set more realistic student goals. Gaps in the curriculum are clearly identified.
2018/09/18: In recent decades, China and India have presented the world with two different models for how such countries can climb the development ladder. In the China model, a nation leverages its large population and low costs to build a base of blue-collar manufacturing. It then steadily works its way up the value chain by producing better and more technology-intensive goods. In the India model, a country combines a large English-speaking population with low costs to become a hub for outsourcing of low-end, white-collar jobs in fields such as business-process outsourcing and software testing. If successful, these relatively low-skilled jobs can be slowly upgraded to more advanced white-collar industries. Both models are based on a country's cost advantages in the performance of repetitive, non-social and largely uncreative work -- whether manual labor in factories or cognitive labor in call centers. Unfortunately for emerging economies, AI thrives at performing precisely this kind of work.
Without a cost incentive to locate in the developing world, corporations will bring many of these functions back to the countries where they're based. That will leave emerging economies, unable to grasp the bottom rungs of the development ladder, in a dangerous position.
the best thing emerging economies can do is to "recognize that the traditional paths to economic development -- the China and India models -- are no longer viable." Countries with "less-educated workers" are advised to build up human-centered service industries.
2018/09/13: Facebook is a two-billion-strong democratic community and the personal plaything of an unaccountable thirty-something billionaire.
If it comes down to a contest between the membership and the ownership of Facebook, Zuckerberg will probably win, as he gets to set the rules. In the end it is only the regulatory power of the state that can make Facebook safe for democracy.
there were two big risks with turning the state into a giant automaton. The first was that it wouldn’t be powerful enough.
The second was that it would too closely resemble the things it was designed to regulate. In a world of machines, the state might go native. It could become entirely artificial.
This is the original fear of the modern age: not what happens when the machines become too much like us, but what happens if we become too much like machines.
The machines that most frightened Hobbes were corporations.
Many of the things that we fret about when we imagine a future world of AIs are the same worries that have been harboured about corporations for centuries.
September 12’s vote will be about whether startups, SMEs and the broader research community will receive a workable legal basis to conduct TDM. Without it, only those who already possess — or can leverage existing users to access — data points will be able to train superior algorithms. Without an effective data mining policy, startups and innovators in Europe will run dry. It’s not only that Europe’s AI landscape will stumble along like a 16-bit system, but also that the rest of the world will be running on 64-bits.
A bit like in the run-up to the Ariane-5 launch, the European Commission and many in the European Parliament don’t see the bug: Two points need fixing, according to Vice President Ansip, but he makes no mention of the crucial link between data mining and AI. A copyright directive which does not grant the possibility to conduct data mining on lawfully accessible content will leave Europe with a 16-bit version of artificial intelligence. In that version, only some researchers will be allowed to innovate — not startups, SMEs, journalists, libraries, or the wider research community.
2018/04/11: Those statements from Zuckerberg, together with his inability to define “hate speech”, are nothing less than an official announcement that Newspeak is coming. The paragraphs above are straight from Orwell’s novel “1984”. Do go and read that whole chapter, while you can still understand it. It contains the only constraints that could actually reduce discourse inside Facebook to something manageable without false flags by “artificial intelligence”. That is what Zuckerberg was talking about, whether he realizes it or not.
More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on.
The article suggests re-thinking our legal system to assign blame for any badly malfunctioning algorithms... Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines.
I did see many things almost as tragic that no one could miss -- AI being squeezed into almost every conceivable bit of consumer electronics. But none were convincing. If ever there was a solution looking for a problem, it's ramming AI into gadgets to show off a company's machine learning prowess. For the consumer it adds unreliability, cost and complexity, and the annoyance of being prompted.
A border is being drawn in the Middle East for a "new civilization." Spanning Saudi Arabia, Jordan and Egypt, it will house Crown Prince Mohammed bin Salman’s $500 billion vision for the future of living: a fully-automated megacity run on artificial intelligence (AI) called Neom. Here, there will be more robots than people, so residents will be free to spend time on what matters to them and lead happier lives. That is, of course, if the AI is friendly.
Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.
2018/06/10: Eric Schmidt, former Google CEO, says that Elon Musk, CEO of Tesla, is “exactly wrong” about Artificial Intelligence (AI). I dare suggest that Schmidt's vision may not be “exactly complete”.
More than 1 million jobs will be lost to AI by 2030, according to one estimate. But new jobs are also being created. Are banks and their employees ready?
2018/05/39: In April 2018, the “Finnish Non-Discrimination and Equality Tribunal” prohibited “discriminatory use” of artificial intelligence. I am not sure they did the right thing.
Two articles about a great issue of our time just made me a bit sad. One is a great piece in which Joi Ito explains how and why “we need social advocates, lawyers, artists, philosophers, and other citizens to engage in designing extended [artificial] intelligence from the outset”. I completely agree with Ito when he says that doing what he proposes may be “the only way to reduce the social costs and increase the benefits of Artificial Intelligence as it becomes embedded in our culture”.
Do mere human beings stand a chance against software that claims to reveal what a real-life face-to-face chat can't?
Killer robots remain a thing of futuristic nightmare. The real threat from artificial intelligence is far more immediate.
GIS set the foundation for businesses to begin collecting and visualizing geographic information. But in order to stay competitive, businesses know they need to focus on the intelligence from their location data, not just on the geographic information itself.