Dodging the Memory Hole – Saving online news

The fifth edition of the ‘Dodging the Memory Hole’ conference took place this year at the Internet Archive in San Francisco (15-16 November 2017). The conference brought together journalists, librarians, archivists and entrepreneurs exploring solutions to the loss of online news content. The sessions were recorded in full and are available on YouTube.

Several speakers talked about their own initiatives or projects. The approach of the Swedish Royal Library, for example, was presented. In addition to performing broad crawls of the Swedish top-level domain for web archiving, the Swedish Royal Library currently archives about 140 daily newspapers via RSS feeds. The feeds are read by the library’s systems every hour, but the plan is to increase the frequency of capture to once every minute or every five minutes. Even though a lot of content is already captured, the library is convinced it would be worthwhile to capture even more, such as social media comments, which would allow researchers to analyse how news items change in reaction to those comments. Unfortunately, access to the web archive is currently not permitted.
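As an aside, the RSS-based capture approach can be illustrated with a short sketch. The snippet below is purely illustrative and is not the Swedish Royal Library’s actual implementation: the feed URL, polling interval and output file are assumptions. It simply polls a newspaper’s RSS feed at a fixed interval and records newly published article URLs, which a separate crawler could then capture.

```python
import time
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical values -- not the Swedish Royal Library's actual configuration.
FEED_URL = "https://example-newspaper.se/rss"   # assumed feed URL
POLL_INTERVAL = 300                             # seconds, i.e. every five minutes
SEED_LIST = "seeds.txt"                         # URLs handed over to the web crawler

def fetch_item_links(feed_url):
    """Download the RSS feed and return the article URLs it lists."""
    with urllib.request.urlopen(feed_url, timeout=30) as response:
        tree = ET.parse(response)
    # Standard RSS 2.0 layout: /rss/channel/item/link
    return [link.text.strip()
            for link in tree.getroot().iterfind("./channel/item/link")
            if link.text]

def poll_forever():
    """Poll the feed at a fixed interval and append new article URLs as crawl seeds."""
    seen = set()
    while True:
        try:
            for url in fetch_item_links(FEED_URL):
                if url not in seen:
                    seen.add(url)
                    with open(SEED_LIST, "a", encoding="utf-8") as f:
                        f.write(url + "\n")
        except Exception as exc:
            print("Polling failed:", exc)
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    poll_forever()
```

The interesting design choice lies in the polling interval: moving from hourly to minute-level polling mainly changes this one parameter, but it multiplies the load on both the feed and the archiving infrastructure, which is presumably why the library is weighing one-minute against five-minute captures.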

Several employees of the Internet Archive also talked about the different collections held by the institution, the Wayback Machine, and their ‘Community Webs’ initiative, which allows local libraries to create their own local news web archives by means of the Archive-It service. Other initiatives also focus on local news: the Boston TV News Digital Library and the American Archive of Public Broadcasting, for example, both aim to centralise access to local televised news.

A number of digital preservation initiatives by universities were also discussed: the Stanford Digital Repository, the Kentucky Digital Newspaper Program at the University of Kentucky, the PA news archive portal of the Pennsylvania State University Libraries, the Texas Digital Newspaper Program at the University of North Texas and, lastly, Cobweb, a centralised catalogue of collection- and seed-level metadata of web archives (a collaboration between Harvard University, UCLA and the California Digital Library). This clearly shows that universities are aware of the need to preserve news and research data and to facilitate the discovery of the different collections, so that users can easily gain access to them and reuse the data or information for their own research.

A representative of the New York Times talked about the move of their archives to the cloud. Legacy issues were also discussed, for example their efforts to ensure that Flash content is preserved and remains usable, as Flash will no longer be supported after 2020.

A number of tools were also presented. ReproZip, for example, is a tool for packaging news applications and other dynamic websites for preservation. Webrecorder, on the other hand, is a high-fidelity web archiving tool. Another tool, currently under development, is called Esper. It can automatically extract complex metadata from videos, for example through facial recognition or through skeleton detection, which identifies the posture of people appearing in the footage. More information was also provided on LOCKSS (Lots Of Copies Keep Stuff Safe), a preservation network that provides permanent access to web materials and is based on a multi-copy model.

Other speakers focused on general problems that still need to be solved, such as reference rot, overly restrictive copyright and the underrepresentation of minority groups in preserved news content. There was also a general call to integrate a number of subjects into the curriculum of journalism programs, such as metadata and information organisation, the importance of tagging, and effective information retrieval.

The Dodging the Memory Hole conference provided a good overview of the myriad problems that currently impede the preservation of online news. More importantly, however, it underlined that a number of individuals and organisations are highly committed to preserving this ephemeral content.

Many aspects raised during the conference need to be kept in mind as we further develop the PROMISE project: metadata management, the complexities of preserving specific types of content such as Flash elements, the tools available, the inevitability of bias in selective collections, the possibility of harvesting news via RSS feeds, … The conference has definitely given us food for thought.

The W3C Publishing Summit

The first ever W3C Publishing Summit took place in San Francisco on 9-10 November 2017. Different players in the field of publishing discussed the central theme of the conference: “how web technologies are shaping publishing today, tomorrow and beyond”.

Sessions spanned a number of topics. Several speakers introduced their respective publishing companies and their vision of the future (Adobe, Rakuten Kobo, Media Do, Penguin Random House, O’Reilly Media), while others talked about the market for e-books in their respective countries or regions (South America, Europe, Japan).

More details were also provided on the work being done by the W3C: the different groups (the Publishing Working Group, the Publishing Business Group and the EPUB 3 Community Group), their progress on making EPUB more accessible for people with disabilities, and the need for web standards in order to achieve ‘frictionless publishing’. Frictionless publishing means avoiding the problems caused by actors in the publishing sector using different apps, devices, platforms, etc., which results in each technology being applicable to only a portion of the overall workflow.

Representatives of Google, Mozilla and Microsoft focused on the role that browsers and web developers play as the publishing sector evolves towards the open web. The future of reading systems and the role that virtual reality could play were also discussed (for example, reading about pharaohs and being able to explore a tomb in an immersive way). Artificial intelligence was another point that was touched upon, especially in relation to educational publishing, where a trend towards adaptive, AI-driven content has set in. Such adaptive platforms could, for example, build knowledge graphs, student profiles, pedagogical models of how best to teach the material, … Analytics could also be provided to interested parties: an overview of how a student or class is progressing through a book, how effective the material is, etc.

Another theme was the tools that are changing publishing. CSS Grid, for example, allows ‘traditional typeset material’ to be brought to the web by migrating layouts without resorting to absolute positioning. A number of experiments with CSS Grid can be found on Jen Simmons’ website. Another example is Vivliostyle, a publishing workflow tool that is essentially an HTML and CSS layout engine written in JavaScript and that can be used to create e-publications. It allows the table of contents, page numbers, footnotes and indices to adapt automatically to the changing size of the page. Another tool that was presented is geared towards consumers: the New York Public Library has developed an e-reading app called SimplyE that allows library patrons to easily gain access to e-books. The open-source app is currently used by more than 60 libraries across the USA and Canada.

Changes in publishing workflows were also addressed by a number of speakers, who underlined the need to automate workflows and to move away from simply adapting existing, ‘paper mindset’ workflows to electronic publications, towards digital-first workflows.

The main message of the summit was that publishers are either preparing to move to the web or are already in the process of doing so. These developments clearly have implications for heritage institutions such as the Royal Library of Belgium, especially in the light of the broadening of legal deposit to electronic publications regardless of their material carrier. It is therefore indispensable to anticipate these changes in publishing in order to accommodate the preservation of a large variety of content.