2018/11/28: the Internet Archive's Wayback Machine, which many people assume keeps a permanent record of the trail and origin of web content, has little practical choice but to comply with DMCA takedown notices. As a result, portions of what people submit to the archive continue to quietly fade away. Gizmodo:
Over the last few years, there has been a change in how the Wayback Machine is viewed, one inspired by the general political mood. What had long been a useful tool when you came across broken links online is now, more than ever before, seen as an arbiter of the truth and a bulwark against erasing history. Trusted to show the digital trail and origin of content, the archive is not just a must-use tool for journalists, but effective for just about anyone trying to track down vanishing web pages. With that in mind, that the Internet Archive doesn't really fight takedown requests becomes a problem. And that's not the only recourse: when a site admin elects to block the Wayback crawler using a robots.txt file, the crawling doesn't just stop. Instead, the Wayback Machine's entire history of a given site is removed from public view.
In other words, if you deal in a certain bottom-dwelling brand of controversial content and want to avoid accountability, there are at least two different, standardized ways of erasing it from the most reliable third-party web archive on the public internet. For the Internet Archive, as with quickly complying with takedown notices challenging its seemingly fair-use archival copies of old websites, the robots.txt strategy in practice does little more than mitigate its risk while going against the spirit of the protocol. And if someone were to sue over non-compliance with a DMCA takedown request, even with a ready-made, valid defense in the Archive's pocket, copyright litigation is still incredibly expensive. It doesn't matter that the use is not really a violation by any metric: if a rightsholder makes the effort, you still have to defend the lawsuit.
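For the record, the robots.txt mechanism described above requires no special arrangement with the Archive. Under the policy described in the article, a site-wide exclusion aimed at the Archive's crawler (which identifies itself as `ia_archiver`) was enough to hide all past snapshots as well:

```
# robots.txt at the site root — blocks the Internet Archive's crawler.
# Under the policy described above, this also removed the site's
# entire snapshot history from public view on the Wayback Machine.
User-agent: ia_archiver
Disallow: /
```

Two lines of plain text, and years of archived history disappear from public view.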
2018/11/25: Lawyers and engineers noticed early in this decade that lawmaking and software engineering have a lot in common when it comes to tracking changes to their code, whether legal code or software code. The District has managed to take a practice of modern software development and apply it to its legal code by putting that code on GitHub at https://github.com/DCCouncil/dc-law-xml.
This isn’t a copy of the DC law. It is an authoritative source. It is where the DC Council stores the digital versions of enacted laws, and this source feeds directly into the Council’s DC Code website at https://code.dccouncil.us/dc/council/code/
Last week, I opened the file on GitHub that had the typo, edited the file, and submitted my edit using GitHub’s “pull request” feature.
2018/10/01: When an algorithm helps convict the defendant in a murder trial… that too is life or death.
Some child protective services use an algorithm to decide which children to remove from their families. The algorithm assigns a risk score based on inputs such as how many calls the department has received and whether the parents are hostile toward caseworkers.
Other courts already use an algorithm to estimate recidivism risk: whether a criminal is likely to re-offend. The higher the risk, the longer the sentence.
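At bottom, such a score is usually some weighted combination of inputs. A toy sketch of the idea, where every feature name and weight is invented for illustration and bears no relation to any deployed system:

```python
# Hypothetical risk score: a weighted sum of inputs, of the kind
# real risk-assessment tools compute in far more elaborate form.
# All feature names and weights below are invented for illustration.

def risk_score(prior_calls: int, hostile_to_caseworkers: bool,
               prior_offenses: int) -> float:
    """Return a score in [0, 1]; higher is treated as 'higher risk'."""
    score = 0.05 * prior_calls + 0.1 * prior_offenses
    if hostile_to_caseworkers:
        score += 0.2
    return min(score, 1.0)

# A family with 3 prior calls, hostile interactions, no offenses:
print(risk_score(3, True, 0))  # roughly 0.35
```

Notice what the weights smuggle in: whoever picks them decides, for instance, that hostility toward a caseworker counts as much as four prior calls. That judgment is policy, not math.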
If I smoked a joint or got in a fight in high school, am I 40% more likely to commit a crime?
Will I be assumed guilty based on probability alone? How can I be sure the algorithm isn’t biased?
When ProPublica analyzed one widely used recidivism algorithm, COMPAS, they found that it was racially biased.
O’Neil Risk Consulting & Algorithmic Auditing is another organization examining the fairness of the algorithms used by the justice system. But they admit that even the audits are subjective to a degree.
It’s pretty scary to have algorithms controlling the criminal justice system. Due process might be replaced by computer processors.
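The core of ProPublica's finding can be stated in miniature: among people who did *not* go on to re-offend, one group was flagged high-risk far more often than another. A sketch of that false-positive-rate comparison, using invented data (the real analysis covered COMPAS scores for thousands of Broward County defendants):

```python
# Sketch of the fairness check ProPublica applied to COMPAS:
# among people who did NOT re-offend, how often was each group
# labeled high-risk? All data below is invented for illustration.

def false_positive_rate(records):
    """records: list of (labeled_high_risk, reoffended) pairs."""
    non_reoffenders = [r for r in records if not r[1]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r[0])
    return flagged / len(non_reoffenders)

group_a = [(True, False), (True, False), (False, False), (False, True)]
group_b = [(True, False), (False, False), (False, False), (False, True)]

fpr_a = false_positive_rate(group_a)  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(group_b)  # 1 of 3 non-reoffenders flagged
print(f"false positive rates: {fpr_a:.2f} vs {fpr_b:.2f}")
```

The subjectivity the auditors admit to starts right here: false positive rate is only one of several reasonable fairness metrics, and it is mathematically impossible to satisfy all of them at once when base rates differ between groups.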
2018/10/02: Do airlines use these cost factors to calculate a rational price for my ticket? No. That is determined by Rudy the Fare Chicken, who decides the price of each ticket individually by pecking on a computer keyboard sprinkled with corn.
How might we best research this from our side—the one where humans use browsers and actually buy stuff? Is it possible to figure out how we're being profiled, if at all—and how might we do that? Are there shortcuts to finding the cheapest Amazon price for a given product, among all the different prices it presents at different times, in different ways, on different browsers, to persons logged in or not? Is this whole thing so opaque that we'll never know a damn thing, and we're simply at the mercy of machines probing and manipulating us constantly?
We have just released the Santa Clara Principles (PDF), calling on platforms to provide better information about how they moderate content online. The principles articulate a minimum set of standards.
Lokniti's PILs are often taken up with alacrity by the central government.
The open data movement is growing apace. What better demonstration of this than news that the UK coalition government is making its Combine...
Greece has been much in the news recently as the Syriza government tries to deal with the country's massive economic problems. We hear plenty about its high-level negotiations with the EU; what we don't hear about is the Greek government's innovative use of openness to tackle key issues in everyday life.
"Mining" is the engine that keeps the Bitcoin network working, but it has swelled into a resource-hungry, capital-intensive, centralized syndicate. Is the Internet's native currency worth all the effort?
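The "work" behind that resource hunger is brute-force hashing: trying nonces until a block's hash falls below a network-set target. A minimal sketch of the proof-of-work idea (real Bitcoin uses double SHA-256 over an 80-byte block header and a vastly harder target; the data and difficulty here are toy values):

```python
# Toy proof-of-work: find a nonce so that SHA-256(data + nonce)
# starts with `difficulty` zero hex digits. Bitcoin's real scheme
# is the same search, just with double SHA-256 over a block header
# and a far lower target.

import hashlib

def mine(block_data: bytes, difficulty: int = 2) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"example block", difficulty=2)
print(f"found nonce {nonce}")
```

Each extra zero digit multiplies the expected number of hash attempts by sixteen. That exponential scaling, with the difficulty adjusted upward as more hardware joins, is exactly why mining has become the capital-intensive arms race the article describes.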
Sunlight is said to be the best of disinfectants
Check out Greenhouse, the hot new browser extension for Chrome, Safari and Firefox that exposes the influence of money in Congress. See the story behind the story on any webpage.
The last couple of months have been quite an interesting time for those advocating for open government in Italy.
Today two key committees of the European Parliament voted in favour of creating public registries of who ultimately owns and controls companies and trusts registered in the EU, in a move welcomed by Global Witness.
Two years ago, then US Trade Representative Ron Kirk told Reuters, effectively, that he would not release the negotiating texts of the Trans Pacific Partnership (TPP) agreement, because if the public knew what was in TPP, it would not allow the...
The following guest post is by Duncan Edwards from the Institute of Development Studies. I've had a lingering feeling of unease that things were not quite right in the world of open development and
Legislation is difficult to read and understand. So difficult that it largely goes unread. This is something I learned when I first started building bill drafting systems over a decade ago. It was quite a letdown. The people you would expect to read legislation don't actually do that. Instead they must rely on analyses,
Looks at the value of information collected by public sector organisations and how this information could be further utilised.