“Creative with Digital Heritage” – E-Space MOOC is again accepting enrollments for October 2017

For the second academic year, the E-Space MOOC “Creative with Digital Heritage” is accepting enrollments. Europeana Space was an EC-funded project about the creative reuse of digital cultural heritage, which concluded in March 2017 with an “excellent” evaluation from the EC reviewers. In the framework of the E-Space network’s continued activities beyond the end of the funding period, and after the great success of the first run in 2016-2017, the MOOC is being repeated.

The educational idea behind the E-Space MOOC is to lower barriers to the access and reuse of cultural heritage content on Europeana and similar sources. Whether you are a student or teacher with an interest in cultural heritage, a GLAM professional, a developer or simply a cultural heritage enthusiast without prior technical knowledge, this MOOC is for you: how can you engage with and reuse the wealth of digital cultural heritage available online in many repositories such as Europeana? How can you become an active user of this content, using, remixing and reinventing it for your research, lessons, and development?

The course is free and requires an effort of 2-4 hours per week over its 8-week length. It is available in the KU Leuven section of the edX platform.


What you’ll learn:

  • How to become creative with digital cultural heritage
  • What repositories, tools and APIs are available online
  • How to access and use them
  • How digital cultural heritage can be effectively and successfully reused
  • How to deal with Intellectual Property Rights in the context of reuse of digital cultural heritage

As the online availability of digital cultural heritage continues to grow, it becomes more and more important that users learn to move from being passive readers to active re-users. The mission of this course is to share the creative ways in which people use and re-use Europeana and digital cultural content, to demonstrate what Europeana can bring to the learning community, and to convey the essential idea that cultural content is not just to be contemplated, but to be lived and engaged with.

More details and information, and the enrollment link, are available here: http://www.europeana-space.eu/education/mooc/


Heritage, Tourism and Hospitality conference


The Heritage, Tourism and Hospitality conferences focus on the question: “How can tourism destinations succeed in attracting tourists while simultaneously engaging all stakeholders in contributing to the preservation of natural and cultural heritage?”

A special theme this year will be: “Narratives for a World in Transition“. The organisers welcome contributions advancing the understanding of the role of storytelling and narrative approaches and techniques.

Storytelling is a multi-layered and multi-purpose phenomenon. Geographical destinations and tourism dynamics need people to tell and share stories to co-create heritage values and induce valuable tourist experiences.

Storytelling can serve as a strategy and tool for branding, marketing, stakeholder and visitor engagement, sustainable management and innovation. This requires knowledge of critical success factors and skills in narrative management.


In addition, in this world in transition, characterised by globalisation and continuous growth in tourism and mobility based on migrant citizenship, researchers and practitioners alike need to explore the possibilities of reframing tourism beyond “the tourist gaze”. They must study the interaction, dialogues and conflicts that arise between visitors, hosts and cultural institutions in the presentation and re-use of the past for touristic purposes.

HTHIC2017 will take place in Pori, Finland, on 27-29 September.

For more information: www.heritagetourismhospitality.org



Interview with Marion Jaks

Marion at work


Hey Marion! Introduce yourself please.

Hey Ashley! I am a video archivist at the Austrian Mediathek (Österreichische Mediathek), the Austrian video and sound archive. My main area of work is the digitization of analogue videos and quality control of video files entering our digital archive. Since more and more digital content has been added to our collection in recent years, dealing with digital files from various sources is becoming a growing part of my job.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

The Austrian Mediathek started digitizing its audio collection in the year 2000 and its video collection in 2010. Since our analogue collection is still growing, there is an ongoing digitization demand at the Mediathek, and we will continue our in-house digitization efforts. Therefore, the biggest share of files ingested into our digital archive is produced by the in-house digitization department. The digitization systems for audio and video both have quality control steps implemented. One main goal of quality control in our institution is to detect artifacts and to find out which artifacts are due to problems during the digitization process and can be undone through certain actions. Both digitization workflows have steps where file metadata is read out, so that the operator can check whether the right settings were used. In DVA-Profession, which we use for our video digitization workflow, ffprobe is used to provide the metadata. This is in my opinion an essential check because it can prevent human error. In digitization we all work with presets that determine the settings of the capture procedure, but still it can happen that someone presses the wrong button…

So in my opinion, for quality control of digitization processes the main questions are: is the digital file a proper representation of the analogue original? And does the produced file meet all requirements of a proper archive master file? For the latter, MediaConch is a great help for archivists.
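
For readers who want to try a similar check themselves: ffprobe can emit stream metadata as JSON, which a short script can compare against the expected capture settings. A minimal sketch in Python follows; the file name and expected values are illustrative placeholders, not DVA-Profession’s actual presets.

```python
import json
import subprocess

# Illustrative expected capture settings -- placeholders, not real presets.
EXPECTED = {"codec_name": "ffv1", "width": 720, "height": 576, "field_order": "tt"}

# Ask ffprobe for the first video stream's metadata as JSON.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_streams", "-of", "json", "capture.mkv"],
    capture_output=True, text=True, check=True,
)
stream = json.loads(probe.stdout)["streams"][0]

# Flag any field that deviates from the preset, to catch human error.
for key, want in EXPECTED.items():
    got = stream.get(key)
    print(f"{key}: expected {want!r}, got {got!r}",
          "(OK)" if got == want else "(CHECK SETTINGS)")
```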

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

In our institution the main use case for MediaConch is files that were produced outside of the default workflow procedure. For example, when we edit clips from already digitized videos, I use MediaConch to easily find out whether the files meet the policy for our archival master.

At what point in the archival process do you use MediaConch?

I use MediaConch when receiving files that were produced outside of our regular workflows, where other checks are already implemented in the system. At the moment this is the case for files that were processed using editing software. At the Austrian Mediathek we are aiming to make as much of our digital (digitized) collection accessible to the public as possible. Due to rights issues, often only parts of the videos are allowed to be published online – that’s where we need to produce special video clips. Those clips are exported as an archive master in FFV1 so that we can easily produce further copies in the future. When we are planning the launch of a new project website, those clips can easily number a few hundred. MediaConch is very helpful when there are a lot of files to check whether all the export settings were set correctly – it saves a lot of time to check the files before any further work is done with them.
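
For those curious how such a batch check can look in practice: MediaConch’s command-line interface accepts a policy file via the -p option, so a folder of exported clips can be verified in a loop. A minimal sketch, with hypothetical folder and policy names:

```python
import pathlib
import subprocess

CLIPS = pathlib.Path("exported_clips")    # hypothetical folder of FFV1 clips
POLICY = "archival_master_policy.xml"     # hypothetical local policy file

for mkv in sorted(CLIPS.glob("*.mkv")):
    result = subprocess.run(
        ["mediaconch", "-p", POLICY, str(mkv)],
        capture_output=True, text=True,
    )
    # MediaConch's text output reports pass/fail per file.
    print(result.stdout.strip())
```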

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

We use MediaConch to check if produced video files meet the criteria of our archive master settings. In 2010 our decision for an archival master was to produce FFV1/PCM in an AVI-container. In the near future we are thinking of changing the container to MKV. Peter Bubestinger published the policy of the Austrian Mediathek’s archival master copies: https://mediaarea.net/MediaConchOnline/publicPolicies

Why do you think file validation is important?

I think the most central point in archiving is being in control of your collection. For a digital collection this means knowing what kinds of files you have and what state they are in. With a growing digital collection, you will get all kinds of files in different codecs and containers. At the Austrian Mediathek we collect video and audio files from different institutions, professionals as well as private people – so our collection is very diverse and there is no way that we can prescribe any delivery conditions for files entering our institution. Most donors of content do not know much about file formats, codecs and containers. At first I found it very surprising that even film/video professionals often cannot tell you what codec they use for their copies – after some years it is the other way round and I am impressed when filmmakers can tell me specifics about their files.

With this in mind, the first step when receiving a digital collection is to find out what kind of files they are. Files that do not have a long-term perspective (e.g. proprietary codecs) should be normalized using a lossless codec like FFV1. With a very diverse collection, file transcoding can be a demanding task when your goal is to make sure that all features of the original are transferred to the archival master copy. Since none of us is immune to human error, there must be checks implemented to make sure newly produced archival masters meet the defined criteria. Otherwise you can never be sure about your digital collection and have therefore lost control over your archive: every archivist’s nightmare.
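
Normalization of the kind Marion describes is often done with FFmpeg. The sketch below transcodes a source file to FFV1 (version 3, with per-slice CRCs so decoders can detect corruption) and PCM audio in Matroska; the file names and exact settings are illustrative, not the Mediathek’s published policy. A policy check with MediaConch, as above, would then confirm the new master meets the defined criteria.

```python
import subprocess

SOURCE = "donated_tape.mov"    # hypothetical incoming file
MASTER = "archival_master.mkv"

subprocess.run(
    [
        "ffmpeg", "-i", SOURCE,
        # FFV1 version 3, sliced, with slice CRCs for corruption detection.
        "-c:v", "ffv1", "-level", "3", "-slices", "16", "-slicecrc", "1",
        # Uncompressed PCM audio for the archival master.
        "-c:a", "pcm_s16le",
        MASTER,
    ],
    check=True,
)
```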

Anything else you’d like to add?

I think that MediaConch fully unfolds its abilities in the quality control of files produced by digitization vendors. A few years ago we outsourced the digitization of 16mm film material, and I was involved in the quality control of the produced files. At that time MediaConch was not around, and even simple checks of whether the digitized files met the criteria for, e.g., codec/container were very time consuming. Nowadays MediaConch makes tasks like that so much easier and faster. I also like very much that MediaConch has a great, easy-to-use GUI, so that colleagues not used to the command line can easily do file validation checks. Thank you very much for the great work you are doing at MediaConch!

Thanks Marion! For more from the Mediathek, check out their audio/video web portal!


veraPDF 1.8 released

The latest version of veraPDF is now available to download on the PREFORMA Open Source Portal. This release includes new PDF information in the features report and automatic configuration of the features extractor when applying custom policy profiles. There are also a number of low-level PDF fixes and improvements, which are documented in the latest release notes: https://github.com/veraPDF/veraPDF-library/releases/latest

This is the final release of veraPDF under the PREFORMA project. From September, veraPDF will make the transition from a funded project to a standalone open source project.


Download veraPDF

http://www.preforma-project.eu/verapdf-download.html


Help improve veraPDF

Most of the changes in this release are in response to the constructive feedback we’ve received from our users. Testing and user feedback are key to improving the software. Please download and use the latest release. If you experience problems, or wish to suggest improvements, please add them to the project’s GitHub issue tracker: https://github.com/veraPDF/veraPDF-library/issues, or post them to our user mailing list: http://lists.verapdf.org/listinfo/users.


Getting started

User guides and documentation are published at: http://docs.verapdf.org.
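
As a quick start, the veraPDF command-line interface can validate a file against a chosen PDF/A flavour and write a machine-readable report. A minimal sketch (the PDF and report names are placeholders):

```python
import subprocess

# Validate against PDF/A-1b and capture the XML report from stdout.
result = subprocess.run(
    ["verapdf", "--flavour", "1b", "--format", "xml", "document.pdf"],
    capture_output=True, text=True,
)

# Keep the report alongside the file for later inspection.
with open("verapdf_report.xml", "w") as f:
    f.write(result.stdout)
```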


Save the date!

Join us for our next webinar, ‘Life after PREFORMA: the future of veraPDF’, on 30 August. Registration opens soon.


About

The veraPDF consortium (http://verapdf.org/) is funded by the PREFORMA project (http://www.preforma-project.eu/). PREFORMA (PREservation FORMAts for culture information/e-archives) is a Pre-Commercial Procurement (PCP) project co-funded by the European Commission under its FP7-ICT Programme. The project’s main aim is to address the challenge of implementing standardised file formats for preserving digital objects in the long term, giving memory institutions full control over the acceptance and management of preservation files into digital repositories.


DPF Manager 3.5 available to download



A new version of the DPF Manager has been released!

There are several improvements in this update. The most significant one is the redesign of the reports section: it is now easier (and much faster) to browse the analyzed reports, transform them between formats, and convert a quick check into a full check. The global report has also been improved with a more visual and interactive panel.

Other interesting new functionalities and enhancements are the following:

  • The METS report is now generated in all cases (previously it was only generated if there were no errors at all).
  • Reports can be downloaded from the GUI.
  • Added pagination to improve efficiency (especially when thousands of files are checked).
  • Included execution duration in global reports.
  • Fixed bugs related to the validation of zipped TIFF files.
  • Reports can be deleted from the global reports page.
  • Sorting reports by the “passed” column now shows first the ones that have warnings.
  • Fixed a bug when trying to send feedback with non-XML reports.
  • Compatibility with old reports (prior to version 3.3).

See the full list of new functionalities on the GitHub tag page, along with the list of solved issues.


CEPROQHA project: Preservation and Restoration of Qatar Cultural Heritage through Advanced Holoscopic 3D Imaging

CEPROQHA (NPRP9-181-1-036) is a cooperative research project between Qatar University and Brunel University (UK), funded by the Qatar National Research Fund (QNRF) under the National Priorities Research Program (NPRP). As the State of Qatar transitions to a knowledge-based economy, it seeks to ensure that the nation’s path to modernity is rooted in its values and traditions, considering digitization technologies and tools for the preservation of Qatar’s culture, traditions and heritage. As part of the Digital Content Program, the national plan aims to provide incentives for the development of a vibrant digital ecosystem through which future generations can tap into their past and create new expressions of Qatari culture on the global stage.


The global aim of this project is to develop a new framework for cost-effective cultural heritage preservation using cutting-edge 3D holoscopic imaging technology and archival tools. To this end, CEPROQHA leverages its partners’ research and innovation expertise.

The documentation of cultural assets is inherently a multimedia process, addressed through digital representation of the shape, appearance and preservation condition of the Cultural Heritage (CH) object. CH assets cannot be physically cloned or perfectly restored, and hence their curation as well as long-term preservation require leveraging advanced 3D modelling technologies. However, this poses serious challenges, since the generation of high-quality 3D models is still very time-consuming and expensive, not least because the modelling is carried out for individual objects rather than for entire collections. Furthermore, the outcome of digital reconstructions is frequently provided in formats that are not interoperable, and therefore cannot be easily accessed, re-used or understood by scholars, curators or those working in cultural heritage industries, posing serious risks to the sustainability of the reconstructions. Indeed, the digital model is progressively becoming the actual representation of a cultural heritage asset for anybody, anywhere, anytime; this project therefore intends to acknowledge the changing role that reconstruction, visualisation and management now play in the curation of heritage and its analysis.

Project website: http://www.ceproqha.qa/


BIENNIAL EVENT / SINOPALE 2017

The 6th edition of Sinopale – International Sinop Biennial, under the common title “Transposition”, will take place in Sinop, Turkey, from 1 August to 17 September 2017.

Affirming the periphery, Sinopale brings international contemporary art to the city of Sinop by the Black Sea in Northern Turkey. Sinopale is a biennial event investigating various forms of resistance and adaptation of local movements and initiatives from civil society, ecological activism and nongovernmental politics. The summer of 2017 will align artistic processes of sharing, commonality and difference. The artists will be working in situ. A horizontal collaborative structure will bring the invited artists together with the citizens of Sinop to create spaces for aesthetic, social and political practice.


Sinopale 6 will revolve around the notion of Transposition. Transposition is a word with several meanings, which all signify a shift between values. Playing with the gap in-between, it becomes possible to open room for process and transference, to open a conceptual space as well as a scope for action and imagination. Sinopale 6 furthermore taps into dialogues on the history of material and cultural memory in order to create an associative space full of cross-references.

Sinopale is a young biennial for contemporary art on the periphery, consisting of a multitude of events in different formats. As the long-term organizer of Sinopale, the European Cultural Association emphasizes its sustainable micro-political and emancipatory efforts. The organization works in close co-operation with international curators, who are responsible for the selection of the artists and the program. We take a pragmatic and functional collective approach to the exhibition, the events and their thematic and discursive direction in order to generate a format of cross-cultural exchange with the local context.

Download press release (PDF, 206 Kb)

Website: http://sinopale.org/


Future Minds – Art and Technology in the future

Kyoto University and Goldsmiths, University of London are organizing an international symposium entitled “Future Mind – Art and Technology in the Future”.

Location: the Stuart Hall Building, Goldsmiths, University of London


Program:

Registration and Coffee (9:00 – 9:35)

Welcoming talk (9:35 – 9:45)
Ambassador Extraordinary and Plenipotentiary of Japan, Embassy of Japan in the United Kingdom

Talk by Kyoto University’s President and Goldsmiths College’s Warden (9:45 – 10:15)
Dr. Juichi Yamagiwa, President, Kyoto University
Mr. Patrick Loughrey, Warden (President), Goldsmiths College, University of London

Break (10:15 – 10:30)

Session 1 (10:30 – 12:00): Art of Future, Future City and Looking for Japan
Presenter: Mr. Conrad Bodman
Presenter: Prof. Sherry Dobbin
Presenter: Prof. Naoko Tosa

Lunch (12:00-13:30)
Greeting Talk (13:30 – 13:40)
By Mark D’Inverno, Professor of Computer Science and Pro-Warden (International) at Goldsmiths

Session 2 (13:40 – 15:00): Communication of the Future, Vision and Mind
Presenter: Prof. Ryohei Nakatsu
Presenter: Prof. Frederic Fol Leymarie

Break (15:00 – 15:20)

Session 3 (15:20 – 16:20): VR Art and Imaging of the Future
Presenter: Prof. William Latham
Presenter: Prof. Koji Koyamada

Session 4 (16:20 – 17:40): AI, Art Critic of the Future Mind
Presenter: Prof. Liang Zhao
Presenter: Dr. Guido Orgs
Presenter: Prof. Koji Yoshioka

Social Gathering (17:40 – 19:30)

For more information, please visit the event website at http://at.kokoro.kyoto-u.ac.jp/



Interview with Kieran O’Leary

Kieran’s workstation


Hey Kieran! Introduce yourself please.

Hi! I’m Kieran O’Leary, originally from a relatively rural part of County Cork in Ireland, now living in Dublin City, working in the Irish Film Archive within the Irish Film Institute. I’ve been fascinated by digital video since about 2002, when I got my first home PC (I was around 17 years old, so relatively late to the party). I started encoding my DVD collection to MPEG-4 ASP AVI files and became fascinated with the process. A few years later, I got into some terrible amateur filmmaking and photography, which ultimately landed me an internship with the Irish Film Institute and I’m still here, mostly working on code, workflows, metadata, digitisation/migration, facilitating access to our collections and other good stuff.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

My colleagues Eoin O’Donohoe and Anja Mahler handle deliveries of new, contemporary content, such as AS-11 broadcast files and DCPs/DCDMs that are delivered as part of overarching agreements with The Irish Film Board, the Broadcasting Authority of Ireland, and The Arts Council. They are considering implementing MediaConch to confirm that the deliveries are as agreed, but I mostly deal with reformatted tapes or scanned film. A lot of the tests that we do are based around figuring out whether we did everything as well as we could – could we have migrated that tape better, were the correct settings definitely used, etc. As we transcode our uncompressed video to FFV1/MKV, the main checks that we do involve framemd5s to ensure that the FFV1/MKV file produces the exact same content when decoded. Occasionally, we are able to convince some vendors to provide us with checksums, so we perform fixity checks upon arrival. We recently had to use FCP7 for some tape capture, and the v210/MOV files in the Capture Scratch had no interlacing or aspect ratio information recorded in the container. I wrote a simple mediainfo check to see whether these conditions were true: check if the pixel aspect ratio is square (it should be 16/15) and check if no interlacing info is specified (it should be interlaced, top field first).
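
For readers unfamiliar with framemd5: FFmpeg can write per-frame checksums of the decoded streams, so a source capture and its FFV1/MKV derivative can be compared content-for-content. A rough sketch of such a comparison follows (file names hypothetical; this is not the IFI’s actual script). The mediainfo check Kieran mentions could be scripted in the same spirit, e.g. by querying mediainfo’s --Inform output for the PixelAspectRatio and ScanType fields.

```python
import subprocess

def framemd5(path: str, out: str) -> None:
    # Decode the file and write per-frame MD5s of the raw content.
    subprocess.run(["ffmpeg", "-i", path, "-f", "framemd5", out], check=True)

framemd5("capture_v210.mov", "source.framemd5")
framemd5("master_ffv1.mkv", "master.framemd5")

def digests(path: str) -> list[str]:
    # Ignore '#' comment lines, which describe streams and software versions.
    with open(path) as f:
        return [line for line in f if not line.startswith("#")]

if digests("source.framemd5") == digests("master.framemd5"):
    print("Decoded content is identical: lossless transcode confirmed.")
else:
    print("MISMATCH: investigate before discarding the source.")
```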

I am currently working on a project where we received about 25 hard drives full of production material from a production company, Loopline Film. We will need a lot of checks here, as the content is quite heterogeneous – a large mix of P2 cards, various XDCAM flavours, DSLR, migrated tape, FCP projects, subtitle files and many more. There are a lot of duplicate files, so we will have to perform various tests and checks to figure out which are the best candidates to move forward into our ingest workflows.

Generally, our ingest process involves registration, metadata extraction, packaging into a consistent folder structure, fixity, descriptive and technical cataloguing, and attempting to log every step in the process as best we can.

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

MediaConch always makes me feel guilty because I know that I should be using it in more of our workflows. Currently, we mostly use the GUI when files are delivered from vendors. We are supposed to get a hard drive full of files with various attributes, and MediaConch automates a lot of the checking via local policies, usually created from an ‘ideal file’.

As we have a lot of FFV1/Matroska, it would be an important preservation event to perform an actual validation against the FFV1/Matroska standard via MediaConch in order to ensure that we have valid, standards-compliant files.

I recently started experimenting with using MediaConch’s implementation checker for our FFV1/Matroska files. The pull request is here: https://github.com/kieranjol/IFIscripts/pull/201. It is designed around the package structure for our existing FFV1/MKV files. It finds an MKV file, launches MediaConch’s command-line interface, creates an implementation report in XML format and stores it in our metadata folder; the script then parses the XML to check whether all tests were a success. These preservation events are logged in a growing log file in our logs folder, and then the checksum manifest for the package is updated to reflect the new/changed files. It really was lovely to be able to quickly integrate MediaConch into our workflows, as well as enrich existing packages. I think I’ll probably get this process to run as part of our FFV1/Matroska normalisation script as well, so we have instant validation.
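
The flow described above can be sketched roughly as follows (paths hypothetical; the real logic lives in the IFIscripts pull request linked above):

```python
import subprocess
import xml.etree.ElementTree as ET

MKV = "package/objects/video.mkv"                 # hypothetical package layout
REPORT = "package/metadata/video_mediaconch.xml"

# -fx asks the MediaConch CLI for its XML report format.
xml_text = subprocess.run(
    ["mediaconch", "-fx", MKV], capture_output=True, text=True, check=True
).stdout
with open(REPORT, "w") as f:
    f.write(xml_text)

# Hedged assumption: the implementation report marks each check with an
# outcome attribute; anything other than "pass" counts as a failure.
root = ET.fromstring(xml_text)
outcomes = [el.get("outcome") for el in root.iter() if el.get("outcome")]
print("valid" if outcomes and all(o == "pass" for o in outcomes)
      else "failures found")
```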

These events will ultimately be logged as ‘eventType=validation’ PREMIS events as well. Just in case you were curious, I ran the process on 290 files and all passed 🙂 Each validation only took a few seconds, though one stubborn file took a lot longer. Dave Rice got to the bottom of the issue eventually though.

It has also been used intermittently for in-house quality control. I recently had an epiphany when I realised that MediaConch HAD to be used for our in-house quality control, because one of my scripts had a stupid bug that would have been caught much more quickly with a MediaConch policy.

At what point in the archival process do you use MediaConch?

It’s usually at an early stage. We use the Spectrum Collections Management standard here in the IFI, so the phase is ‘Object Entry’. This is a pre-accession period where the files are still undergoing quality control measures, and they may be rejected.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

It’s actually been mostly used for vendor-supplied files, which are usually v210/MOV, sometimes ProRes/MOV, but occasionally DPX or AS-11. As mentioned, it is in the process of being integrated into our FFV1/Matroska workflow.

Why do you think file validation is important?

File validation is an important step in ensuring that we are putting the best possible files forward for ingest. In terms of the material that we receive from vendors, validation of the content ensures that we are getting what we pay for, and that there are no issues that could end up being a preservation risk. In terms of file format validation, it’s really important that we can verify and document (as a validation Event in PREMIS!) that we have created or received files that comply with the file format specification. It’s one of the reasons that I like FFV1/Matroska so much. I know a lot less about Matroska than FFV1, but it’s good to know that we can figure out if we have valid files which ultimately should have the greatest level of interoperability going into the future.

Anything else you’d like to add?

Not much, just that I wish I’d engaged with the command line interface for MediaConch sooner. It is super flexible and I wish that there were more examples of use out there.

Thanks to Kieran! Check out all of Irish Film Archive’s scripts here on GitHub.


CHNT22 Urban Archaeology and Integration


This year’s programme of the Cultural Heritage and New Technologies (CHNT) conference includes the following sessions:

  • Integrating historical maps and archaeological data using digital technologies
  • Adding life to written sources by studying the dead
  • New realities 3: virtual, augmented reality and other techniques in Cultural and historical Heritage for the general public
  • 3D digital reconstruction and related documentation sources
  • 3D Documentation in Underwater Archaeology: Photogrammetry, Georeferencing, Monitoring, and Surveying
  • New Approaches to Medieval Structures and Spaces
  • Reflections and research on archaeological practices in the digital era
  • The Employment of Mobile Applications for Survey, Documentation and Information

The programme also includes a special session on Cultural Heritage and Armed Conflict and a series of roundtables, trainings, poster sessions, and more.

The conference will be opened by Dr. Brigitte RIGELE (Head of the municipal and provincial archives of Vienna).
This year’s keynote speech will be held by Martin SCHAICH on “Vianden Castle 3D – ‘linked’ in space and time for historical building research, visualization and presentation”.

The full programme is available here: http://www.chnt.at/program-2017/

This year we are organizing for the first time a special APP-Award for Young Scientists: the Vienna City Award for Innovative Apps in Cultural Heritage for young researchers.

This award will be sponsored by the Vienna Municipal Department of Cultural Affairs with a donation of EUR 1000.

Specific terms and conditions:

  • Age under 35
  • No commercial product
  • The app should be produced in English
  • The app presenter(s) must be on site
  • The app should be available to interested users in any appropriate form (including the stores – Play Store and Apple App Store; free download is required)

For more information, visit our homepage: www.chnt.at