2019/01/01: The next revolution will be the ascent of analog systems over which the dominion of digital programming comes to an end. Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.
2018/12/06: There’s nothing artificial about AI. It’s inspired by people, it’s created by people and more importantly, it impacts people.
The term “AI” is a mystification! The term that describes the reality is “human-trained machine learning”.
Why training these algorithms went so wrong: they subconsciously mimic their mostly male, misogynist, often white entrepreneurs and techies, with their money-making monopolistic biases and often adolescent, libertarian fantasies.
2017/10/22: Parent company Alphabet would provide services in response to the data harvested: a city “where buildings have no static use”. Like biomass.
Alphabet’s long-term goal is to remove barriers to the accumulation and circulation of capital in urban settings – mostly by replacing formal rules and restrictions with softer, feedback-based floating targets. It claims that in the past “prescriptive measures were necessary to protect human health, ensure safe buildings, and manage negative externalities”. Today, however, everything has changed and “cities can achieve those same goals without the inefficiency that comes with inflexible zoning and static building codes”.
This is a remarkable statement. Even neoliberal luminaries such as Friedrich Hayek and Wilhelm Röpke allowed for some non-market forms of social organisation in the urban domain. They saw planning – as opposed to market signals – as a practical necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion.
For Alphabet, these constraints are no more: ubiquitous and continuous data flows can finally replace government rules with market signals. Now, everything is permitted – unless somebody complains.
Google Urbanism means the end of politics, as it assumes the impossibility of wider systemic transformations, such as limits on capital mobility and foreign ownership of land and housing. Instead it wants to mobilise the power of technology to help residents “adjust” to seemingly immutable global trends such as rising inequality and constantly rising housing costs (Alphabet wants us to believe that they are driven by costs of production, not by the seemingly endless supply of cheap credit).
2018/08/27: the best parts of travel are precisely the things that technology cannot touch.
Algorithms are great at giving you something you like, but terrible at giving you something you love. Worse, by promoting familiarity, algorithms punish culture.
One caveat: Avoiding algorithms doesn’t apply to traveling in beautiful places. I depend on algorithms in expansive natural parks. When I’m in Patagonia, I want to do the best hike. In Alaska, I want to see the prettiest glacier. The focus is on nature, not people. Depending on algorithms, however, doesn’t work as well in cities, where culture is more important than geography. City travel works best when we put down our phones, seek serendipity, and lean into another culture.
Distinct, foreign experiences in cities — which revolve around people — cannot be bought or sold. They can’t be found in guidebooks and you’ll find no reviews on Yelp. What’s more, as I scroll through these algorithmic websites, I find bland experience after bland experience. When we rely too much on algorithms, we travel around the world and end up with the same experiences we’d find in our own backyard.
As tourism increases, I wonder if we’re actually traveling less. To travel is to escape familiarity and lean into the head-scratching quirks of another culture.
Dan Wang said it best:
“We’re all traveling to more places now, but I wonder if their novelty is limited by our tendency to travel to them in all the same ways. We use online booking to find hotels close to the city center, Yelp for restaurants nearby, and grab coffee in cafés that frankly all feel the same at this point.”
Unfortunately, algorithms discourage this kind of culturally rich travel. In turn, they destroy difference and encourage similarity — a “globalized sameness-as-a-service”.
2018/10/10: There is a "machine learning is hard" angle to this: while the flawed outcomes from the flawed training data were totally predictable, the system's self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing "women's" -- it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.
Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.
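The dynamic described above can be sketched in a few lines: a model trained only on which past resumes led to hires will assign a negative weight to any token that is rare among past hires, with no one ever telling it to. This is a minimal, hypothetical sketch — toy data and simple per-token log-odds scoring, not Amazon's actual 500 models:

```python
import math
from collections import Counter

def token_weights(hired, rejected):
    """Score each token by its log-odds of appearing in hired vs.
    rejected resumes (add-one smoothing). Nobody hand-picks the
    features: any token correlated with past hiring gets a weight."""
    h, r = Counter(), Counter()
    for doc in hired:
        h.update(doc.lower().split())
    for doc in rejected:
        r.update(doc.lower().split())
    vocab = set(h) | set(r)
    return {t: math.log((h[t] + 1) / (r[t] + 1)) for t in vocab}

# Toy history: past hires skew male, so gendered tokens pick up signal.
hired = [
    "executed migration captured requirements chess club captain",
    "executed deployment java python",
    "captured metrics executed rollout",
]
rejected = [
    "women's chess club captain java python",
    "women's coding society lead java",
]
w = token_weights(hired, rejected)
print(w["women's"] < 0 < w["executed"])  # True
```

Nothing in `token_weights` mentions gender; the penalty on "women's" — and the bonus on verbs like "executed" — emerges purely from the skew in the historical labels.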
2018/10/10: Can the system scale? (“To scale” means to change the level of scale, i.e. to be extended from one particular domain to a whole domain, or to any number of domains.) For example, can my school test scores be used as a variable in assessing my reliability for a mortgage? It is precisely scalability that makes ADM (algorithmic decision-making) a terrible weapon. As long as a negative assessment, right or wrong, stays within a narrow domain, the damage is limited; but if it “scales” to a wider context, the damage can be terrible. Try finding yourself classified by the system as a “bad payer” and living a normal life anyway: this already happens today, since missing the instalments on a single debt is enough to be labelled one. And what would happen if we built a model that identifies a likely “bad payer” by relating this variable to other statistically detected personal characteristics, until it discovered a correlation between “bad payers” and black people or southerners? Anyone who thinks this is futurology or Minority Report-style science fiction does not know the early anti-cheating systems of Invalsi, which automatically corrected test results on the basis of regional origin.
Cathy O’Neil’s book is not an invitation to renounce the descriptive and modelling power of mathematics, but to recognise its enormous power and its nefarious, even fraudulent, uses, so that we can demand its correct and legitimate use.
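The “scaling” mechanism in the passage above can be illustrated with a toy example (all data hypothetical): a credit model that never sees region or ethnicity can still reproduce a group-level bias through a correlated proxy such as a postcode.

```python
# Hypothetical records: (postcode, missed_payment). The model never
# sees region or ethnicity, yet postcode acts as a proxy for them.
records = [
    ("north", 0), ("north", 0), ("north", 0), ("north", 1),
    ("south", 1), ("south", 1), ("south", 0), ("south", 1),
]

def default_rate(postcode):
    """Observed missed-payment rate for one postcode."""
    group = [missed for p, missed in records if p == postcode]
    return sum(group) / len(group)

# A model trained on this data will flag *everyone* from the high-rate
# postcode as a likely "bad payer" -- a credit signal leaking into an
# area-level judgment.
score = {p: default_rate(p) for p in ("north", "south")}
print(score)  # {'north': 0.25, 'south': 0.75}
```

Once that per-postcode rate feeds a decision threshold, every resident of the high-rate area inherits the “bad payer” label — exactly the leap from narrow assessment to broad harm the passage warns about.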
2018/10/01: When an algorithm convicts the defendant in a murder trial… that too is life or death.
Some child protective services use an algorithm to decide which kids to take. The algorithm assigns a risk score based on inputs like how many calls the department has received, and if the parents are hostile towards caseworkers.
Other courts already use an algorithm to figure out the recidivism risk—if a criminal is likely to re-offend. The higher the risk, the longer the sentence.
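As a rough illustration of how such tools work, here is a toy additive risk score. The inputs mirror those mentioned above (calls received, hostility toward caseworkers, prior offenses), but the weights and cap are invented for this sketch and do not correspond to any real tool’s formula:

```python
def risk_score(prior_calls, hostile_to_caseworker, prior_offenses):
    """Toy additive risk score (hypothetical weights): crude counts
    are collapsed into one number that drives the decision."""
    score = 10 * prior_calls              # each report adds risk
    score += 25 if hostile_to_caseworker else 0
    score += 15 * prior_offenses          # any prior offense counts
    return min(score, 100)                # capped at 100

# Two minor, possibly unfair inputs push a family over a threshold:
print(risk_score(prior_calls=3, hostile_to_caseworker=True,
                 prior_offenses=1))  # 70
```

Note how the sketch makes the objection above concrete: a single high-school fight (`prior_offenses=1`) moves the score as much as an extra report and a half, and nothing in the formula asks whether any input was fair or accurate.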
If I smoked a joint or got in a fight in high school, am I 40% more likely to commit a crime?
Will I be assumed guilty based on probability alone? How can I be sure the algorithm isn’t biased?
When ProPublica analyzed the recidivism algorithm, they found that it was racially biased.
O’Neil Risk Consulting & Algorithmic Auditing is another organization looking into how fair the algorithms used by the justice system are. But they admit that even the audits are subjective to a degree.
It’s pretty scary to have algorithms control the criminal justice system. Due process might be replaced by computer processors.
2018/09/17: Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.
If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.
Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.
The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.
From the inside, this wasn’t far from the truth. The algorithm was junk. The data was riddled with errors. The calculations were so bad that the court would eventually rule its determinations unconstitutional.
2018/06/27: We are witnessing a massive transition in Value Creation from the means of production to the means of Market Production and Curation.
Take for example Uber – here the taxi driver is a mere transitory, interchangeable commodity. The real value-creation instrument is Uber itself, which creates and curates the market – a process that now extends from retail (Amazon) to manufacturing (Alibaba). This reality signals a great transfer of value creation from the relatively distributed means of production to the massively globally centralised & privatised means of market making & market curation. The implications of this are massive for inequality and the scaling of precarious citizenship.
what is being disrupted is not the plumber or craftsmen but the middle classes – the management, administrative and intermediary skills.
Our Governance model is broken, we live in a ‘systemocracy’ – a world of massive inter-dependency yet we are holding on to 19th century versions of governance. This creates the illusion of sovereignty & supremacy – acting as a denial of the complexity we must confront.
2018/09/22: Dating takes resources and focus away from problems Facebook should actually be fixing
Facebook getting into dating looks very much like a mid-life crisis — as a veteran social network desperately seeks a new strategy to stay relevant in an age when app users have largely moved on from social network ‘lifecasting’ to more bounded forms of sharing, via private messaging and/or friend groups inside dedicated messaging and sharing apps.
Zuckerberg is not trying to compete with online dating behemoth Tinder, though, which Facebook dismisses as a mere ‘hook up’ app — a sub-category it claims it wants nothing to do with.
Facebook Dating has been carefully positioned to avoid sounding like a sex app, presenting itself instead as a tasteful take on the online dating game.
Here are just a few reasons why we think you should stay as far away from Facebook’s dalliance with dating as you possibly can.
Algorithmic dating is both empty promise and cynical attempt to humanize Facebook surveillance
Facebook typically counters the charge that it is in the surveillance business because it tracks people to target them with ads by claiming that people-tracking benefits humanity because it can serve you “relevant ads”. Of course that’s a paper-thin argument, since display advertising is something no one has chosen to see, and is therefore necessarily a distraction from whatever a person was actually engaged with.
A space that’s always been sold — and traded — as a platonic place for people to forge ‘friendships’ is suddenly having sexual opportunity injected into it.
2018/09/13: Racist bridges aren’t the only inanimate objects that have had quiet, clandestine control over people.
the residents of Scunthorpe, in the north of England, who were blocked from opening AOL accounts after the internet giant created a new profanity filter that objected to the name of their town.
an automatic hand-soap dispenser that reliably released soap whenever white hands were placed under it failed to recognize the hands of a Nigerian man as hands at all.
they discovered that home cooks were less likely to make claims on their home insurance and were therefore more profitable. The single item that most marked you out as a responsible, house-proud person was fresh fennel.
there are concerns about this kind of data profiling being used in an exclusionary way: motorbike enthusiasts being deemed to have a risky hobby or people who eat sugar-free sweets being flagged as diabetic and turned down for insurance as a result. A study from 2015 demonstrated that Google was serving far fewer ads for high-paying executive jobs to women who were surfing the web than to men.
searches for “black-sounding names” were more likely to be linked to ads containing the word “arrest” (for example, “Have you been arrested?”) than searches for “white-sounding names.”
More serious in the long term is growing conjecture that current programming methods are no longer fit for purpose given the size, complexity and interdependency of the algorithmic systems we increasingly rely on.
The article suggests re-thinking our legal system to assign blame for any badly malfunctioning algorithms... Solutions exist or can be found for most of the problems described here, but not without incentivizing big tech to place the health of society on a par with their bottom lines.
Trump – like so many other politicians and pundits – has found search and social media companies to be convenient targets in the debate over free speech and censorship online. “This is a very serious situation – will be addressed!”

But in this moment, the conversation we should be having – how can we fix the algorithms? – is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing.

The algorithms don’t understand what is propaganda and what isn’t, or what is “fake news” and what is fact-checked. Their job is to surface relevant content (relevant to the user, of course), and they do it exceedingly well. So well, in fact, that the engineers who built these algorithms are sometimes baffled: “Even the creators don’t always understand why it recommends one video instead of another,” says Guillaume Chaslot, an ex-YouTube engineer who worked on the site’s algorithm.

YouTube’s algorithms can also radicalize by suggesting “white supremacist rants, Holocaust denials, and other disturbing content,” Zeynep Tufekci recently wrote in the Times. “YouTube may be one of the most powerful radicalizing instruments of the 21st century.”

The problem extends beyond YouTube, though. On Google search, dangerous anti-vaccine misinformation can commandeer the top results. And on Facebook, hate speech can thrive and fuel genocide.

So what can we do about it? The solution isn’t to outlaw algorithmic ranking or make noise about legislating what results Google can return. Algorithms are an invaluable tool for making sense of the immense universe of information online. There’s an overwhelming amount of content available to fill any given person’s feed or search query; sorting and ranking is a necessity, and there has never been evidence indicating that the results display systemic partisan bias. It’s imperative that we focus on solutions, not politics.
If our supersmart tech leaders knew a bit more about history or philosophy we wouldn't be in the mess we're in now
2015/05/14: If a fraudster puts out a ridiculously deceptive piece of information and nobody falls for it, is it still fraud?
Probably yes, but today’s attempted manipulation of Avon’s stock price by somebody who slipped a false takeover offer on the Securities and Exchange Commission’s EDGAR system raises the question of whether anybody – any human, that is – could have been dumb enough to believe it. Maybe that's the point: It was designed to fool word-scanning, dumb computer trading systems.
This was a fraud designed for algorithmic traders. It was not designed to fool anybody who’d actually read it. It was designed to fool some system that scans SEC filings for certain words but doesn’t actually read them.
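A minimal sketch of the kind of word-scanning system described above (the trigger phrases and filing text are hypothetical, not any real trading system’s rules): it reacts to takeover vocabulary with no plausibility check at all, which is all a hoax filing needs to exploit.

```python
BUY_TRIGGERS = {"tender offer", "acquisition", "premium to market"}

def scan_filing(text):
    """Naive keyword scanner: fires on takeover language without
    reading the filing or checking whether it is remotely plausible."""
    t = text.lower()
    return any(phrase in t for phrase in BUY_TRIGGERS)

# A human reader would spot the absurdities in a hoax filing;
# the scanner sees only the trigger words.
fake_filing = (
    "Form TO: a TENDER OFFER for all outstanding shares of "
    "Avon Products at a substantial premium to market."
)
print(scan_filing(fake_filing))  # True
```

The asymmetry is the whole fraud: the fake only has to clear the bar set by the dumbest automated reader in the market, because that reader trades first.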
2014/06/01: The vision of a free-floating digital cryptocurrency economy, divorced from the politics of colossal banks and aggressive governments, is under threat. Take, for example, the purists at Dark Wallet, accusing the Bitcoin Foundation of selling out to the regulators and the likes of the Winklevoss Twins.
Bitcoin sometimes appears akin to an illegal immigrant, trying to decide whether to seek out a rebellious existence in the black-market economy, or whether to don the slick clothes of the Silicon Valley establishment. The latter position – involving publicly accepting regulation and tax whilst privately lobbying against it – is obviously more acceptable and familiar to authorities.
Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.
By their very definition, data and algorithms reduce a complex reality to a simpler view of the world. Only the parts of the world that are easily measurable can be used.
The videos it recommends seem to get more and more extreme.