mfioretti: digital dark ages

18 bookmark(s), sorted by date (newest first)

  1. Facebook’s content – say the two experts – is not indexed by search engines, so any relevant discussion that happens there stays confined to its members. Linking to such content from outside is nearly impossible, especially if you don’t have a Facebook profile.

    And what would happen tomorrow if Facebook were to shut down? Our words would slip away like tears in rain, the exact opposite of what the Internet has always envisioned. Already today Facebook forbids the Internet Archive, the documentary soul of the Internet, from saving relevant snapshots to preserve for posterity.

    A few years ago the Library of Congress launched a project to archive every tweet produced in the world. Were American librarians interested in the irrelevant trivia we write from the couch while watching the game? Obviously not. They had simply understood that historical memory now travels hidden in the small fragments of networked communication. That is why Facebook’s sin of pride has become a public issue of gigantic proportions.
    http://www.pagina99.it/2017/06/08/fac...-chiude-indicizzazione-oblio-internet
  2. What about the actual functioning of the application: What tweets are displayed to whom in what order? Every major social-networking service uses opaque algorithms to shape what data people see. Why does Facebook show you this story and not that one? No one knows, possibly not even the company’s engineers. Outsiders know basically nothing about the specific choices these algorithms make. Journalists and scholars have built up some inferences about the general features of these systems, but our understanding is severely limited. So, even if the LOC has the database of tweets, they still wouldn’t have Twitter.

    In a new paper, “Stewardship in the ‘Age of Algorithms,’” Clifford Lynch, the director of the Coalition for Networked Information, argues that the paradigm for preserving digital artifacts is not up to the challenge of preserving what happens on social networks.

    Over the last 40 years, archivists have begun to gather more digital objects—web pages, PDFs, databases, all kinds of software. There is more data about more people than ever before; yet the cultural institutions dedicated to preserving the memory of what it was to be alive in our time, including our hours on the internet, may actually be capturing less usable information than in previous eras.

    “We always used to think for historians working 100 years from now: We need to preserve the bits (the files) and emulate the computing environment to show what people saw a hundred years ago,” said Dan Cohen, a professor at Northeastern University and the former head of the Digital Public Library of America. “Save the HTML and save what a browser was and what Windows 98 was and what an Intel chip was. That was the model for preservation for a decade or more.”

    Which makes sense: If you want to understand how WordPerfect, an old word processor, functioned, then you just need that software and some way of running it.
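
    The “save the bits” half of that model is still workable for the open web: the Internet Archive exposes a public availability API that returns the saved snapshot closest to a requested date. A minimal sketch in Python (the URL and date here are arbitrary examples, not from the article):

        import json
        import urllib.parse
        import urllib.request

        def closest_snapshot(url, timestamp):
            # Ask the Wayback Machine availability API for the snapshot
            # closest to `timestamp` (YYYYMMDD); return its URL or None.
            api = ("https://archive.org/wayback/available?url="
                   + urllib.parse.quote(url) + "&timestamp=" + timestamp)
            with urllib.request.urlopen(api) as resp:
                data = json.load(resp)
            closest = data.get("archived_snapshots", {}).get("closest")
            return closest["url"] if closest and closest.get("available") else None

        print(closest_snapshot("example.com", "20060101"))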

    But if you want to document the experience of using Facebook five years ago or even two weeks ago ... how do you do it?

    The truth is, right now, you can’t. No one (outside Facebook, at least) has preserved the functioning of the application. And worse, there is no thing that can be squirreled away for future historians to figure out. “The existing models and conceptual frameworks of preserving some kind of ‘canonical’ digital artifacts are increasingly inapplicable in a world of pervasive, unique, personalized, non-repeatable performances,” Lynch writes.

    Nick Seaver of Tufts University, a researcher in the emerging field of “algorithm studies,” wrote a broader summary of the issues with trying to figure out what is happening on the internet. He ticks off the problems of trying to pin down—or in our case, archive—how these web services work. One, they’re always testing out new versions. So there isn’t one Google or one Bing, but “10 million different permutations of Bing.” Two, as a result of that testing and their own internal decision-making, “You can’t log into the same Facebook twice.” It’s constantly changing in big and small ways. Three, the number of inputs and complex interactions between them simply makes these large-scale systems very difficult to understand, even if we have access to outputs and some knowledge of inputs.
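
    Seaver’s first two points follow from how large services run experiments: each user is deterministically hashed into a bucket per experiment, so every visitor sees a different combination of live variants. A toy sketch of that assignment scheme (the experiment names and bucket counts below are invented for illustration):

        import hashlib

        def variant(user_id, experiment, n_variants):
            # Deterministically map a (user, experiment) pair to a bucket --
            # the standard hash-based A/B assignment trick.
            digest = hashlib.sha256(f"{user_id}:{experiment}".encode()).hexdigest()
            return int(digest, 16) % n_variants

        # No two users see quite the same product: each lands in a
        # different mix of variants across all running experiments.
        experiments = {"ranking_model": 5, "new_feed_ui": 2, "ad_load": 3}
        for user in ("alice", "bob"):
            print(user, {e: variant(user, e, n) for e, n in experiments.items()})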

    “What we recognize or ‘discover’ when critically approaching algorithms from the outside is often partial, temporary, and contingent,” Seaver concludes.

    The world as we experience it seems to be growing more opaque. More of life now takes place on digital platforms that are different for everyone, closed to inspection, and massively technically complex. Whatever we fail to understand about our current experience will resound through time: historians of the future will know less, too. Maybe this era will be a new dark age, as resistant to analysis then as it is now.

    If we do want our era to be legible to future generations, our “memory organizations,” as Lynch calls them, must take radical steps to probe and document social networks like Facebook. Lynch suggests creating persistent, socially embedded bots that exist to capture a realistic and demographically broad set of experiences on these platforms. Or, alternatively, archivists could recruit actual humans to opt in to having their experiences recorded, as ProPublica has done with political advertising on Facebook.
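
    Neither kind of capture exists off the shelf. As a very rough sketch of what Lynch’s bots would record, the snippet below fetches one “performance” of a page and stores it with provenance metadata; the URL and observer label are hypothetical, and a real social feed would require an authenticated, headless-browser session rather than a plain HTTP fetch:

        import hashlib
        import json
        import time
        import urllib.request

        def capture(url, observer):
            # Record one rendering of a page: who saw it, when, and a
            # content hash so later "performances" can be compared.
            with urllib.request.urlopen(url) as resp:
                body = resp.read()
            record = {
                "url": url,
                "observer": observer,
                "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "sha256": hashlib.sha256(body).hexdigest(),
            }
            with open("capture-" + record["sha256"][:12] + ".html", "wb") as f:
                f.write(body)
            return record

        print(json.dumps(capture("https://example.com/", "bot-01"), indent=2))
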
    https://www.theatlantic.com/technolog...ans-to-understand-our-internet/547463
  3. And it isn’t just about preserving all the bits. “The more critical question is, no matter what the medium is in which digital bits are recorded, how long will we be able to read them, and how long will we make sense out of them…? Even pretending you could read the disk again, do you have the software that knows what the bits mean?”
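
    Cerf’s point is easy to demonstrate: raw bytes mean nothing without software that knows the format. Even the crude trick below – sniffing a file’s leading “magic number” – only works for formats someone has documented; the signature table is a small illustrative subset and the file name is hypothetical:

        MAGIC = {
            b"%PDF": "PDF document",
            b"PK\x03\x04": "ZIP container (also .docx, .odt, .epub)",
            b"\x89PNG": "PNG image",
            b"GIF8": "GIF image",
        }

        def sniff(path):
            # Compare the file's first bytes against known signatures.
            with open(path, "rb") as f:
                head = f.read(8)
            for sig, name in MAGIC.items():
                if head.startswith(sig):
                    return name
            return "unknown bits -- no software knows what they mean"

        print(sniff("recovered_disk_image.bin"))  # hypothetical recovered file
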
    http://thenewstack.io/vint-cerf-warns...anity-can-data-survive-longer-century
    by M. Fioretti (2016-11-19)
  4. Greer’s archive includes floppy disks, tape cassettes and CD-ROMs, once cutting-edge technologies that are now obsolete. They are vulnerable to decay and disintegration, leftovers from the unrelenting tide of technological advancement, and will last mere decades, unlike the paper records, which could survive for hundreds of years.

    Buchanan and her team are now working out how to access, catalogue and preserve the thousands of files on these disks, some of them last opened in the 1980s. “We don’t really know what’s going to unfold,” Buchanan says.

    The Greer archivists are facing a challenge that extends far beyond the scope of their collection. Out of this process come enormous questions about the fate of records that are “born digital”, meaning they didn’t start out in paper form. Record-keepers around the world are worried about information born of zeroes and ones – binary code, the building blocks of any digital file.

    “Archives are the paydirt of history. Everything else is opinion.” – Germaine Greer

    Like floppy disks of the past, information stored on USB sticks, on shared drives or in the cloud is so easily lost, changed or corrupted that we risk losing decades of knowledge if we do not figure out how to manage it properly.

    Though the problem applies to everyone – from classic video-game enthusiasts to people who keep photos on smartphones – it is particularly pressing for universities and other institutions responsible for the creation and preservation of knowledge.
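
    In archival practice, “managing it properly” starts with fixity checks: checksum every file, then re-verify on a schedule so silent corruption on a USB stick, shared drive or cloud store is at least detectable. A minimal sketch (the directory name is hypothetical):

        import hashlib
        import pathlib

        def fixity_manifest(root):
            # SHA-256 every file under `root`; diff two runs of this
            # manifest to detect bit rot or silent modification.
            return {
                str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(pathlib.Path(root).rglob("*"))
                if p.is_file()
            }

        before = fixity_manifest("greer_archive/")  # hypothetical directory
        # ... years pass, media ages ...
        after = fixity_manifest("greer_archive/")
        changed = [f for f in before if after.get(f) != before[f]]
        print("files that no longer verify:", changed or "none")
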
    https://www.theguardian.com/books/201...archive-digital-treasure-floppy-disks
  5. "We save it as a picture as it's longer life than a file. You don't rely on PowerPoint or Word. In 50 years they can still just look at it,"
    http://www.theinquirer.net/inquirer/n...preserve-human-history-argues-vatican
  6. While the new technical rules for the preservation of electronic documents and for electronic records registration (protocollo informatico) were published in the Official Gazette on 12 March, a fundamental piece is still missing before electronic documents can be managed and preserved in a truly correct and secure way: the technical rules on document creation and management, and the technical rules on the security of data, systems and infrastructure.

    How, indeed, can we proceed with the digital preservation of electronic documents, which the law requires, if we do not first have all the technical rules that form the ABC of creating and managing those documents correctly and securely?
    To guarantee the new digital archives – and with them the priceless heritage of our memory – security, authenticity and reliability over time, the Stati Generali della Memoria Digitale therefore urgently call on the competent institutions to issue the still-missing technical rules, whose approval process has been stalled for far too long, in the conviction that only with a complete regulatory framework can digital innovation in our country proceed on solid foundations.
    http://anorc.it/notizia/612_Petizione..._Generali_inviano_le_firme_alle_.html
  7. ‘All of it’ turned out to be 25 boxes full of tins containing several thousand 60-metre rolls of photos, and quickly-deteriorating magnetic film with infrared imagery – unopened, and labeled with useless information on orbit numbers rather than locations. But the prize was too great, and he was running out of time: with the surviving NASA scientists who had taken the original images well into their 80s, he knew it wasn’t long before the knowledge he needed to decipher the data would be gone forever.

    Gallaher started the process of sifting through roll after roll of film. The visible-light images he was scanning weren’t the originals: using the best technology of the time, they had played back the images from the satellites on a TV monitor, then snapped photographs of the TV. What he had were those images, sporadically placed along rolls of film as long as the wingspan of a Boeing 787.

    Gallaher sent the film containing infrared data off to a Montreal, Quebec-based company, JBI, which rescued the data for $10 a spool. By the end, Gallaher had over 200,000 images – a remarkable 99 percent of the data – amounting to several gigabytes of data. A truckload of film canisters fit on a thumb drive.

    “That was an incredible amount of data,” he says. In the sixties, when the images were recorded, “that was more storage than there was available on the planet.”

    It was worth the wait. What Gallaher and his NSIDC colleague Garrett Campbell had discovered was both the largest and the smallest Antarctic sea ice extent ever recorded, one year apart, as well as the earliest sea ice maximum ever just three years later; it was an inexplicable hole in the Arctic sea ice even while the overall extent agreed with modern trends; it was the earliest known picture of Europe from space; it was a picture of the Aral Sea with water still in it.

    It was, as Gallaher puts it, like looking at “the Precambrian of satellite data.”


    The team at the National Snow and Ice Data Center made the images available online in the searchable, standardized format that Gallaher originally wished they had been in. The images hadn’t been intended for use in sea ice research, or in long-term study of trends, but they have been repurposed to serve a multitude of purposes.

    The data dump is currently facilitating a flurry of activity as scientists around the world use the images to answer questions about deforestation, weather patterns, and any other line of inquiry that can benefit from an answer to the question, “what happened before that?”

    And there is more of it out there – more canisters awaiting liberation in dusty back rooms of storage centres, and even data from entirely different sources besides satellites. Taken together, it’s called “dark data,” potentially valuable information locked away in unusable formats, unknown to most of the world, some of it never even seen by human eyes.

    Spurred on by his discovery, Gallaher is in the process of starting an organization to recover more dark data, if he can raise enough money.

    The original cost to American taxpayers to gather the images was in the billions (in today’s dollars). “For a few hundred thousand dollars I could get it all back,” he says.

    “Before this stuff gets lost, let’s keep it.”
    http://barentsobserver.com/en/2014/10...h-billions-14-10#.VEJgOe1jtAR.twitter
  8. Long ago, before even the first dot-com bubble, Sumner Redstone opined, “Content is king.” We all cherish content, like a lost lunar Earthrise image or a newly discovered Warhol. Indeed, I would submit that this community has gotten pretty good at preserving at least certain kinds of digital content. But is content enough? What counts as context for all our digital content? Is it the crushed Atari box retrieved from a landfill? Or is it software, actual executables, the processes and procedures, the interfaces and environments that create and sustain content? Perhaps it is indeed time to revive discussion about something like a National Software Registry or Repository; not necessarily as a centralized facility like Culpeper, dug into the side of a mountain, but as a network of allied efforts and stakeholders. In software, we sometimes call such networks of aggregate reusable resources libraries. Software, folks: it’s a thing. Thank you.
    https://medium.com/@mkirschenbaum/software-its-a-thing-a550448d0ed3
  9. I buy many more books this way than I did when I went to the bookstore: as I said, the low prices, the ease and speed of finding titles, the handiness, the minimal shelf space, and the ability to tweak the display settings to make reading easier all encourage it. As an author, though, I still find plenty of flaws in the business model, because the market does not yet believe in it. Publishers lean heavily on the logic that eventually leads to the drift of self-publishing: I’ll publish you, keep you happy and “invest” in you, since it costs me little and leaves a considerable margin in case the next Fifty Shades-style phenomenon explodes – which leads to the final step, the paper book. Then, yes, I really invest.

    With paper, the higher hard costs mean a different selection process and a different treatment of the author. There is a market whose mechanics are fairly well understood, a surrounding ecosystem that can help promote the product, and a track record to build on. In digital we are still searching for that model; we understand its advantages and potential but have not yet figured out how to exploit them, or perhaps we are not really interested. For example, sooner or later the problem of inheriting an account will surface: I still have my father’s and my grandfather’s entire libraries, while all trace of my hundreds of ebook copies will be lost, even though I paid for them and should therefore have full ownership of those goods.
    http://officinamasterpiece.corriere.i...i-spinge-i-lettori-forti-verso-lebook
  10. Fenella France, chief of preservation research and testing at the Library of Congress, is trying to figure out how CDs age so that we can better understand how to save them. But it’s a tricky business, in large part because manufacturers have changed their processes over the years, and even CDs made by the same company in the same year and wrapped in identical packaging might have totally different lifespans. “We’re trying to predict, in terms of collections, which of the types of CDs are the discs most at risk,” says France. “The problem is, different manufacturers have different formulations, so it’s quite complex in trying to figure out what exactly is happening, because they’ve changed the formulation along the way and it’s proprietary information.” There are all kinds of forces that accelerate CD aging in real time. Eventually, many discs show signs of edge rot, which happens as oxygen seeps through a disc’s layers. Some CDs begin a deterioration process called bronzing, which is corrosion that worsens with exposure to various pollutants. The lasers in devices used to burn or even play a CD can also affect its longevity. “The ubiquity of a once dominant media is again receding. Like most of the technology we leave behind, CDs are being forgotten slowly,” concludes LaFrance. “We stop using old formats little by little. They stop working. We stop replacing them. And, before long, they’re gone.”
    http://beta.slashdot.org/story/202019

Page 1 of 2 · Online Bookmarks of M. Fioretti · Tags: digital dark ages
