2019/01/01: The next revolution will be the ascent of analog systems over which the dominion of digital programming comes to an end. Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.
2018/12/06: There’s nothing artificial about AI. It’s inspired by people, it’s created by people and more importantly, it impacts people.
The term “AI” is a mystification! The term that describes the reality is “Human-Trained Machine Learning.”
Why did training these algorithms go so wrong? The systems subconsciously mimic their mostly male, misogynist, often white entrepreneurs and techies, with their money-making monopolistic biases and often adolescent, libertarian fantasies.
2018/11/26: Among the IoT trends Gartner has identified, social issues and user experience are the most intriguing.
At Gartner’s Symposium/ITExpo in Barcelona, Spain, earlier this month, the research firm shared a report on 10 strategic trends affecting the Internet of Things (IoT) from 2019 to 2023. In the report, titled Top Strategic IoT Trends and Technologies Through 2023, the firm identified the following as the most impactful IoT trends, according to multiple published reports:
Artificial intelligence (AI)
Social, legal, and ethical IoT
Infonomics and data broking
The shift from intelligent edge to intelligent mesh
Trusted hardware and operating systems
New IoT user experiences
Innovation on the chip
New wireless networking technologies for IoT
That’s an intriguing and comprehensive list, but not all the points carry equal certainty or importance, and some — AI, wireless networking, edge computing and mesh computing — are already on the radar of many industry observers. So, let’s take a closer look at a couple of the most interesting and under-appreciated factors affecting the future of the IoT: social concerns and user experience.
2018/11/07: Despite their obvious differences, the men shared one thing in common: They were polymaths who disrupted the status quo during their respective time periods. Their names? Aristotle, Leonardo Da Vinci, and Benjamin Franklin.
The world’s most intriguing individuals have always been “deep generalists.”
Continued success of today’s organizations, companies, and communities depends on polymaths who think outside the box.
The most innovative developments of the future — in business, science and the arts — will come from creative generalists who blend unique disciplines with technological skill sets.
Despite the world’s immense need for polymaths, these individuals seem to be quite rare.
That’s because society promotes specialization over generalization, based on a long-standing assumption: The more deeply you specialize, the more easily you can find employment.
Ironically, the majority of mankind’s biggest breakthroughs haven’t come from specialists; they have come from multifaceted individuals.
2018/11/02: A new free website spearheaded by the Library Innovation Lab at the Harvard Law School makes available nearly 6.5 million state and federal cases dating from the 1600s to earlier this year, in an initiative that could alter and inform the future availability of similar areas of public-sector big data.
Led by the Lab, which was founded in 2010 as an arena for experimentation and exploration into expanding the role of libraries in the online era, the Caselaw Access Project went live Oct. 29 after five years of discussions, planning and digitization of roughly 100,000 pages per day over two years.
The effort was inspired by the Google Books Project; the Free Law Project, a California 501(c)(3) that provides free, public online access to primary legal sources, including so-called “slip opinions,” or early but nearly final versions of legal opinions; and the Legal Information Institute, a nonprofit service of Cornell University that provides free online access to key legal materials.
The conversion, done in-house at the Harvard Law School Library to preserve the chain of custody of the millions of cases it had collected, used a hydraulic cutter to trim the binding from thousands of volumes, and a machine similar to those employed in the meatpacking industry to vacuum-seal them after scanning. Scanning costs were in the millions of dollars. Scanned, resealed volumes were shipped out-of-state for long-term storage underground at a former limestone mine in Louisville, Ky. Pages were subsequently uploaded to an optical character recognition (OCR) vendor for extraction into text files.
The project, which was funded by venture capital-backed startup Ravel Law and the Harvard Law School, doesn’t aggregate every court battle. Its legal trove primarily focuses on supreme court and appellate decisions, but is limited, the Lab’s director said, by the extent to which bygone officials “cared enough at the time” to compile decisions. Director Adam Ziegler said the project has a high concentration of federal trial opinions and lots of trial opinions from the state of New York, an early legal center, but fewer from some other states.
In standing up the project website, Ziegler said the Lab hopes to provide “anyone and everyone” with easy access to the law via court opinions, but noted that concept will have different meanings to different groups and “definitely means things we don’t even envision ourselves.”
2018/10/10: There is a "machine learning is hard" angle to this: while the flawed outcomes from the flawed training data were totally predictable, the system's self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing "women's" -- it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.
Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
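The mechanism described above can be sketched in a few lines. This is a toy illustration, not Amazon's actual system: the corpus, the terms, and the smoothed log-odds scoring are all invented for the example. It shows how a model that merely weights terms by their correlation with past hiring decisions will penalize an innocuous term like "women's" if it happened to be rare on previously hired candidates' resumes, while terms common to all applicants carry no signal at all.

```python
import math
from collections import Counter

# Hypothetical training data: resumes of past "hired" and "rejected"
# candidates, reflecting a historically male-skewed hiring record.
hired = [
    "executed project captured metrics python",
    "executed pipeline captured revenue java",
    "captured market executed launch python",
]
rejected = [
    "women's chess club captain python",
    "women's soccer team lead java",
    "volunteer women's shelter python",
]

def doc_freq(docs):
    """Count how many documents each term appears in."""
    c = Counter()
    for d in docs:
        c.update(set(d.split()))
    return c

h, r = doc_freq(hired), doc_freq(rejected)
vocab = set(h) | set(r)

def weight(term, alpha=1.0):
    # Smoothed log-odds of the term appearing among hired vs. rejected
    # resumes -- positive favors the candidate, negative penalizes.
    p_h = (h[term] + alpha) / (len(hired) + 2 * alpha)
    p_r = (r[term] + alpha) / (len(rejected) + 2 * alpha)
    return math.log(p_h / p_r)

weights = {t: weight(t) for t in vocab}
assert weights["women's"] < 0        # penalized purely by correlation
assert weights["executed"] > 0       # rewarded the same way
assert abs(weights["python"]) < 1e-9 # common skills carry no signal
```

No one hard-codes the penalty; it emerges entirely from the historical correlations in the training data, which is exactly why it went unnoticed until the outputs were audited.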
2018/10/03: If anything, rich countries are leapfrogging ahead of the poor, by benefiting from the expanded market and lower labour costs that they provide.
The latest technologies are almost always designed for advanced markets and the rich who live in them, and are well beyond the means of the poorest. Hence, if these technologies do indeed have benefits associated with them, these will accrue disproportionately to the rich. Poor countries and people are either left to pick up the scraps of remaining older technologies, or have to purchase inferior products at the lower end of the market. The Internet of Things and Artificial Intelligence are going to be used in the so-called Smart Cities of the developed world long before they are used at all widely in remote rural villages in Africa or Asia; big data are going to be used by large corporations with the expertise to analyse them, long before they are understood, let alone, used by people in the poorest countries of the world.
This is why terms such as “bridging the digital divide” or “digital leapfrogging”, although widely used, are so inappropriate. When the rich are designing and implementing technologies in their own interests, to move them further ahead of their competitors, the gap or divide between rich and poor becomes yet more difficult to reduce, or bridge; the horizon is always moving further and further into the distance… Moreover, the notion of a “divide” generally implies a binary divide, as in the gender divide, whereas in reality it is complex and multifaceted; it is not one divide, but many. The notion of leapfrogging is also problematic, since it implies benefiting from someone else; using a person’s back to lever an advantage ahead of them.
2018/09/17: Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.
If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.
Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.
The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.
From the inside, this wasn’t far from the truth. The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional.
2018/04/05: Teachers can take a break from menial tasks. AI can help set more realistic student goals. Gaps in the curriculum are clearly identified.
2018/09/18: In recent decades, China and India have presented the world with two different models for how such countries can climb the development ladder. In the China model, a nation leverages its large population and low costs to build a base of blue-collar manufacturing. It then steadily works its way up the value chain by producing better and more technology-intensive goods. In the India model, a country combines a large English-speaking population with low costs to become a hub for outsourcing of low-end, white-collar jobs in fields such as business-process outsourcing and software testing. If successful, these relatively low-skilled jobs can be slowly upgraded to more advanced white-collar industries. Both models are based on a country's cost advantages in the performance of repetitive, non-social and largely uncreative work -- whether manual labor in factories or cognitive labor in call centers. Unfortunately for emerging economies, AI thrives at performing precisely this kind of work.
Without a cost incentive to locate in the developing world, corporations will bring many of these functions back to the countries where they're based. That will leave emerging economies, unable to grasp the bottom rungs of the development ladder, in a dangerous position.
the best thing emerging economies can do is to "recognize that the traditional paths to economic development -- the China and India models -- are no longer viable." Countries with "less-educated workers" are advised to build up human-centered service industries.
2018/09/13: Facebook is a two-billion-strong democratic community and the personal plaything of an unaccountable thirty-something billionaire.
If it comes down to a contest between the membership and the ownership of Facebook, Zuckerberg will probably win, as he gets to set the rules. In the end it is only the regulatory power of the state that can make Facebook safe for democracy.
there were two big risks with turning the state into a giant automaton. The first was that it wouldn’t be powerful enough.
The second was that it would too closely resemble the things it was designed to regulate. In a world of machines, the state might go native. It could become entirely artificial.
This is the original fear of the modern age: not what happens when the machines become too much like us, but what happens if we become too much like machines.
The machines that most frightened Hobbes were corporations.
Many of the things that we fret about when we imagine a future world of AIs are the same worries that have been harboured about corporations for centuries.
September 12’s vote will be about whether startups, SMEs and the broader research community will receive a workable legal basis to conduct TDM. Without it, only those who already possess — or can leverage existing users to access — data points will be able to train superior algorithms. Without an effective data mining policy, startups and innovators in Europe will run dry. It’s not only that Europe’s AI landscape will stumble along like a 16-bit system, but also that the rest of the world will be running on 64-bits.
A bit like in the run-up to the Ariane-5 launch, the European Commission and many in the European Parliament don’t see the bug: Two points need fixing, according to Vice President Ansip, but he makes no mention of the crucial link between data mining and AI. A copyright directive which does not grant the possibility to conduct data mining on lawfully accessible content will leave Europe with a 16-bit version of artificial intelligence. In that version, only some researchers will be allowed to innovate — not startups, SMEs, journalists, libraries, or the wider research community.
More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on.
The article suggests re-thinking our legal system to assign blame for any badly malfunctioning algorithms... Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines.
I did see many things almost as tragic that no one could miss -- AI being squeezed into almost every conceivable bit of consumer electronics. But none were convincing. If ever there was a solution looking for a problem, it's ramming AI into gadgets to show off a company's machine learning prowess. For the consumer it adds unreliability, cost, complexity, and the annoyance of being prompted.
A border is being drawn in the Middle East for a "new civilization." Spanning Saudi Arabia, Jordan and Egypt, it will house Crown Prince Mohammed bin Salman’s $500 billion vision for the future of living: a fully automated megacity run on artificial intelligence (AI) called Neom. Here, there will be more robots than people, so residents will be free to spend time on what matters to them and lead happier lives. That is, of course, if the AI is friendly.
Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.
More than 1 million jobs will be lost to AI by 2030, according to one estimate. But new jobs are also being created. Are banks and their employees ready?
Do mere human beings stand a chance against software that claims to reveal what a real-life face-to-face chat can't?
Killer robots remain the stuff of futuristic nightmares. The real threat from artificial intelligence is far more immediate.