Remix public domain artworks with GIF IT UP 2017 – the international competition for history nuts and culture lovers

The competition encourages people to create new, fun and unique artworks from digitized cultural heritage material. A GIF is an image, video or text that has been digitally manipulated to become animated. Throughout the competition month, participants can create and submit their own GIFs, using copyright-free digital video, images or text from Europeana Collections, the Digital Public Library of America (DPLA), Trove, or DigitalNZ.

All entries help promote public domain and openly licensed collections to a wider audience, and increase the reuse of material from these four international digital libraries. The contest is supported by GIPHY, the world’s largest library of animated GIFs.

The 2017 competition will have a special focus on first-time GIF-makers and introduce them to openly licensed content. A GIF-making workshop, providing tools and tutorials to help visitors create their first artworks, will be held on 14-15 October in cooperation with THE ARTS+, the creative business festival at the Frankfurt Book Fair.

The jury, made up of representatives from GIPHY, DailyArt and Public Domain Review, will be awarding one grand prize winner with an Electric Object – a digital photo frame especially for GIFs – sponsored by GIPHY. Prizes of online gift cards will go to three runners-up as well as winners in a first-time GIF-makers category. Special prizes will be allocated in thematic categories: transport, holidays, animals and Christmas cards.

People are also invited to take part in the People’s Choice Award and vote on the competition website for their favourite GIF, which will receive a Giphoscope. All eligible entries will be showcased on the GIPHY channel dedicated to the competition, and promoted on social media with the hashtag #GIFITUP2017.

GIF IT UP started in 2014 as an initiative of the Digital Public Library of America (DPLA) and DigitalNZ, and has since become a cultural highlight: 368 entries from 33 countries are featured on the GIF IT UP Tumblr. In 2016, the grand prize went to ‘The State Caterpillar’, created by Kristen Carter and Jeff Gill from Los Angeles, California, using source material from the National Library of France via Europeana. Nono Burling, who won the 2016 People’s Choice Award for ‘Butterflies’, said: “I adore animated GIFs made from historic materials and have for many years. The first contest in 2014 inspired me to make them myself, and every year I try to improve my skills.”

Results of the 2017 competition will be announced in November on the GIF IT UP website and related social media.

 


Europeana Research Grants Programme 2017: call for submissions and guidelines for applicants

Europeana Research is delighted to launch the second Europeana Research Grants Programme. Applicants are encouraged to propose individual research projects that make use of Europeana Collections content to showcase and explore the theme of intercultural dialogue. Applicants should employ state-of-the-art tools and methods in digital humanities to address a specific research question, and are expected to use as much of the Europeana platform (e.g. the API and metadata) as possible.


Funding of up to 8,000 euros is available per successful project, and up to three successful proposals will be funded. Funding can be used for buyout of researcher time, travel, meetings, or developer costs; it does not include overheads.

Read the entire call on Europeana at: https://pro.europeana.eu/post/europeana-research-grants-programme-2017-call-for-submissions-and-guidelines-for-applicants


The 20th International Conference on Cultural Economics

RMIT University is pleased to host the 20th International Conference on Cultural Economics, presented by the Association for Cultural Economics International (ACEI).

The Conference will be held in Melbourne, Australia, from Tuesday June 26th to Friday June 29th, 2018. The program chair is Prof. Alan Collins (University of Portsmouth, UK), ACEI president-elect.

Conference themes include: creative industries, technological disruption in the arts, international trade in art and culture, cultural festivals, network structures in the arts, culture and sustainable development, digital participation, big data in the arts and culture, sport economics, artistic labour markets, arts and cultural organisations, creative cities, funding the arts, cultural heritage, art markets, the economics of food and wine, Indigenous art and culture, performing arts, valuing the arts… and more!


A Call for Papers is currently open until 31 January 2018.

ACEI2018 aims to provide a forum for discussion on a range of issues impacting the arts and culture, and for the first time the conference will also address issues related to sport. The conference brings together academics from a number of disciplines who share an interest in empirically motivated research on topics related to the arts and culture, such as creative industries, creative cities, art markets and artistic labour, to name a few. The conference also welcomes insights and contributions from professionals, arts practitioners, policy makers and arts administrators, developing a fruitful dialogue that connects theory with practice.

With conference host RMIT based in the Melbourne CBD, the location of ACEI2018, combined with the social and cultural programme that accompanies the conference, will give delegates an ideal opportunity to experience Australian culture and explore the city of Melbourne.

Website: http://sites.rmit.edu.au/acei2018/

Conference presentation (PDF, 1.49 Mb)


Interview with Brendan Coates


 

Hey Brendan! Introduce yourself please.

Hey everybody, I’m Brendan and at my day job I’m the AudioVisual Digitization Technician at the University of California, Santa Barbara. I run three labs here in the Performing Arts Department of Special Research Collections where we basically take care of all the AV migration, preservation, and access requests for the department and occasionally the wider library. I’m a UMSI alum – I got a music production degree too – so working with AV materials in a library setting is really what I’m all about.

And I get to work on lots of cool stuff here too. We’re probably most famous for our cylinder program: we have the largest collection of “wax” cylinders outside of the Library of Congress, at roughly 17,000 items, some 12,000 of which you can listen to online. I’m particularly fond of all the Cuban recordings from the Lynn Andersen Collection that we recently put up. We’re also doing a pilot project with the Packard Humanities Institute to digitize all of our disc holdings – almost half a million commercial recordings on 78rpm – over the next 5 years.

And we’re building out our video program, too. We can do most of the major cartridge formats. We’re only doing 1:1 digitization though, so a lot of my work these days is figuring out how to speed up the back-end – we have 5,000 or so videotapes at the moment, but I know that number is only going to go up.

Outside of work, Morgan Morel (of BAVC) and I have a thing called Future Days where we’re trying to expand our skills while working with smaller institutions. Last year we made a neat tool called QCT-Parse, which runs through a QCTools Report and tells you, for example, how many frames have a luma bit-value above 235 or below 16 (for 8-bit video), outside of the broadcast range. You can make your own rules, too. We had envisioned it as like MediaConch for your QCTools reports and sorta got there… we’re both excited to be involved with SignalServer, though, which will actually get there (and much, much further).
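The kind of check Brendan describes can be sketched in a few lines. This is not the actual QCT-Parse code, just an illustration: QCTools reports are ffprobe XML in which each frame carries `lavfi.signalstats` tags, so the sample report and threshold logic below follow that convention.

```python
# Toy sketch of a QCT-Parse-style check: scan a QCTools report
# (ffprobe XML with signalstats tags) and count frames whose luma
# falls outside 8-bit broadcast range (Y above 235 or below 16).
import xml.etree.ElementTree as ET

SAMPLE_REPORT = """<ffprobe>
  <frames>
    <frame media_type="video" pkt_pts_time="0.00">
      <tag key="lavfi.signalstats.YMAX" value="235"/>
      <tag key="lavfi.signalstats.YMIN" value="16"/>
    </frame>
    <frame media_type="video" pkt_pts_time="0.04">
      <tag key="lavfi.signalstats.YMAX" value="254"/>
      <tag key="lavfi.signalstats.YMIN" value="4"/>
    </frame>
  </frames>
</ffprobe>"""

def count_out_of_range(report_xml, y_max=235, y_min=16):
    """Return the number of frames outside the given luma range."""
    root = ET.fromstring(report_xml)
    bad = 0
    for frame in root.iter("frame"):
        tags = {t.get("key"): float(t.get("value"))
                for t in frame.iter("tag")}
        if (tags.get("lavfi.signalstats.YMAX", 0) > y_max
                or tags.get("lavfi.signalstats.YMIN", 255) < y_min):
            bad += 1
    return bad

print(count_out_of_range(SAMPLE_REPORT))  # 1 frame breaks broadcast range
```

As in QCT-Parse, the thresholds are parameters, so you can "make your own rules" by passing different limits.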

Today, though, I’m going to be talking about work I did with one of our clients, revising their automated ingest workflow.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

Videos come in as raw captures off an XDCAM; each individual video is almost 10 minutes long. They’re concatenated into 30-minute segments, and the segments are linked to an accession/interview number. They chose this route to maintain consistency with their tape-based workflow: the organization has been active since the ’90s, so they’re digitizing and bringing in new material simultaneously. I was only working on the new stuff, but keeping that consistency made things organizationally easier for them.

After the raw captures are concatenated, we make FLV, MPEG, and MP4 derivatives; these are hashed and sent to a combination of spinning discs and LTO, and all of their info lives in a PBCore FileMaker database. Derivatives are then sent out to teams of transcribers/indexers/editors to make features and send to their partners.
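The concatenation step here relies on ffmpeg’s concat demuxer, which reads a plain-text list of input files. A minimal sketch of preparing such a job is below; the filenames and helper are hypothetical, and only the list-file syntax and command-line shape come from ffmpeg’s documented interface.

```python
# Sketch of driving ffmpeg's concat demuxer: write the list file it
# expects (one "file '<path>'" line per input), then build the argv.
# Filenames are hypothetical stand-ins for raw XDCAM captures.
from pathlib import Path

def build_concat_job(captures, list_path, output):
    """Write an ffmpeg concat list file and return the argv to run."""
    lines = "".join(f"file '{c}'\n" for c in captures)
    Path(list_path).write_text(lines)
    # -c copy concatenates without re-encoding the essence
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(list_path), "-c", "copy", str(output)]

argv = build_concat_job(
    ["capture_001.mxf", "capture_002.mxf", "capture_003.mxf"],
    "segments.txt", "interview_0042_part1.mxf")
print(" ".join(argv))
```

Running the returned command (e.g. via `subprocess.run(argv, check=True)`) would produce the 30-minute segment.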

When I started this project, there was no in-house conformance checking to speak of. Their previous automated workflow used Java to control Compressor for the transcodes and, whatever else might be said about that setup, they were satisfied with the consistency of the results.

Looking back on it now, I ~should~ have used MediaConch right at the start to generate format policies from that process and then evaluated my new scripts/ outputs against them, sort of a “test driven development” approach.

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

We use MediaConch in two places: first, on the raw XDCAM captures to make sure that they’re appropriate inputs to the ingest script (the ol’ “garbage in, garbage out”); and second, on the outputs, just to make sure the script is functioning correctly. Anything that doesn’t pass gets the dreaded human intervention.

At what point in the archival process do you use MediaConch?

Pre-ingest – we don’t want to ingest stuff that wasn’t made correctly, which you’ll hear more about later.

I think that this area is one where the MediaConch/ MediaInfo/ QCTools/ SignalServer apparatus can help AV people in archives to contextualize their work and make it more visible. These tools really shine a light on our practice and, where possible, we should use them to advocate for resources. Lots of people either think that a video comes off the tape and is done or that it’s only through some kind of incantation and luck that the miracle of digitization is achieved.

Which, you know, tape is magical, computers are basically rocks that think and that is kind of a miracle. But, to the extent that we can open the black box of our work, we should be doing that. We need to set those expectations for others that a lot of stuff has to happen to a video file before it’s ready for its forever home, similar to regular archival processing, and that that work needs support. We’re not just trying to get some software running, we’re implementing policy.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

Each filetype has its own policy, 5 in total. The XDCAM and preservation masters are both mpeg2video, 1080i, dual-mono raw pcm audio, NDF timecode. Each derivative has its own policy as well, derived from files from the previous generation of processing.
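What such a policy asserts can be pictured as a handful of checks over MediaInfo-style track fields. The sketch below is a toy stand-in, not MediaConch itself, and the dictionary field names are illustrative; the actual policies run against real MediaInfo output.

```python
# Toy stand-in for a MediaConch policy covering the master spec
# described above: MPEG-2 video, 1080i, two mono raw PCM audio
# streams. Field names are illustrative, not MediaConch's schema.
def check_master_policy(info):
    """Return a list of human-readable policy failures (empty = pass)."""
    failures = []
    video = info["video"]
    if video.get("format") != "MPEG Video":
        failures.append("video format is not MPEG-2")
    if video.get("height") != 1080 or video.get("scan_type") != "Interlaced":
        failures.append("video is not 1080i")
    audio = info["audio"]
    if len(audio) != 2 or any(a.get("channels") != 1 for a in audio):
        failures.append("expected two mono audio streams")
    if any(a.get("format") != "PCM" for a in audio):
        failures.append("audio is not raw PCM")
    return failures

good = {"video": {"format": "MPEG Video", "height": 1080,
                  "scan_type": "Interlaced"},
        "audio": [{"format": "PCM", "channels": 1},
                  {"format": "PCM", "channels": 1}]}
print(check_master_policy(good))  # [] -> passes
```

Anything returning a non-empty list would be the equivalent of a failed MediaConch check – the cue for "the dreaded human intervention".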

Why do you think file validation is important?

Because it’ll save you a lot of heartache.

So, rather than start this project with MediaConch, I just ran ffprobe on some files from the older generation of processing and used that to make the ffmpeg strings for the new files. As a team, we then reviewed the test outputs manually and moved ahead.

The problems with that are 1) ffprobe doesn’t tell you as much as MediaConch/MediaInfo do (they tell you crucially different stuff), and 2) manual testing only works if you know what to look for, and because we were implementing something new, we didn’t.

It turns out that the ffmpeg concat demuxer messes with the Language, Default and Alternate Group tags of audio streams. Those tags control the default behavior of decoders and how they handle the various audio streams in a file, showing/hiding them or allowing users to choose between them.

What that bug did in practice was hide the second mono audio stream from my client’s NLE. For a while nobody thought anything of it (I didn’t even have a version of their NLE that I could test on), so we processed files incorrectly for like three months. The streams were still in the preservation masters (WHEW) but, at best, they could only be listened to individually in VLC. If you want to know more about that issue, you can check out my bug report.
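A symptom like this is mechanically detectable: ffprobe reports a per-stream `default` disposition flag, and an audio stream that has lost it may be hidden by players that honour the flag. The sketch below illustrates that kind of check over ffprobe-style stream records; the data is a hypothetical reconstruction of the bug’s effect, not taken from the actual files.

```python
# Sketch: flag files where a dual-mono master's second audio stream
# has lost its "default" disposition (the symptom of the concat
# demuxer bug described above). Input mimics ffprobe's per-stream
# JSON; the example data is illustrative.
def hidden_audio_streams(streams):
    """Return indexes of audio streams a default-honouring player hides."""
    return [s["index"] for s in streams
            if s["codec_type"] == "audio"
            and s.get("disposition", {}).get("default", 0) == 0]

# After the buggy concat, only the first mono stream kept default=1:
streams = [
    {"index": 0, "codec_type": "video", "disposition": {"default": 1}},
    {"index": 1, "codec_type": "audio", "disposition": {"default": 1}},
    {"index": 2, "codec_type": "audio", "disposition": {"default": 0}},
]
print(hidden_audio_streams(streams))  # [2] -> stream 2 needs fixing
```

Run against every output as part of conformance checking, a check like this would have surfaced the problem on day one instead of month three.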

If we had used MediaConch from the beginning, we would have caught it right away. Instead, a year’s worth of videos had to be re-done: over 500 hours of raw footage in total, over 4,000 individual files.

It’s important to verify that the things you think you have are the things you actually have. If you don’t build in ways to check that throughout your process, it will get messy, and it’s extremely costly and time-consuming to fix.
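One routine way to build that verification in is fixity checking: recompute each file’s hash and compare it with the value recorded at ingest. A minimal sketch follows; the manifest format is hypothetical (the workflow described above keeps its hashes in a PBCore FileMaker database).

```python
# Minimal fixity check: recompute each file's MD5 and compare with
# the value recorded at ingest. The manifest dict is a hypothetical
# stand-in for hashes stored in a database.
import hashlib
from pathlib import Path

def verify_fixity(manifest):
    """manifest: {path: expected_md5_hex}. Return paths that fail."""
    failures = []
    for path, expected in manifest.items():
        digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
        if digest != expected:
            failures.append(path)
    return failures

# Demo with a throwaway file:
Path("demo.bin").write_bytes(b"preservation master")
good_md5 = hashlib.md5(b"preservation master").hexdigest()
print(verify_fixity({"demo.bin": good_md5}))  # [] -> fixity intact
```

Scheduled across spinning disc and LTO copies, the same loop catches silent corruption before it propagates.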

Anything else you’d like to add?

I’m really digging this series and learning how other organizations are grappling with this stuff. It’s a rich time to be working on the QC side of things and I’m just excited to see what new tools and skills and people get involved with it.


New Europeana Pro: the Beta version is out for your input

In September 2017, the Europeana Pro website went through a major redesign, integrating Europeana Labs and Europeana Research, which previously lived on separate websites. One of the main reasons for the redesign was to put people first: the new site is more ‘people’ oriented, highlights Europeana’s close relationships with institutions, and reinforces the work done by all the communities in the Europeana ecosystem to make a difference in the digital cultural heritage world.

Europeana looks forward to users’ feedback.

Check the new website at https://pro.europeana.eu/post/new-europeana-pro-the-beta-version-is-out-for-your-input


Europeana: we transform the world with culture.

 


BAMit! Buy, Sell and Discover Art

In July of this year Baxters International launched an exciting new app – BAM!, a mobile and online marketplace for buying and selling art pieces from all over the world. Conceived as a social enterprise, and born out of a desire to support and promote global art entrepreneurs, the sole objective of BAM! is to bring artists to light, enabling more and more of them to forge a sustainable career.

BAM! is a perfect platform for artistic individuals and students, both those already established in their field as well as those on the cusp of their careers; a tool to enable them to establish their brand, promote their works and grow and develop as creative innovators.

It’s free and it’s easy: in just a few simple steps artists can upload their work and start connecting with art lovers around the world, giving them access to a much wider audience and the potential opportunity to forge a sustainable career.

More info: https://www.baxters-art.com/



The National Gallery predicts the future with artificial intelligence

August 16, 2017

The National Gallery, London, is working in collaboration with museum analytics firm, Dexibit, to use big data for predictive analytics.

For decades, directors at the helms of the world’s cultural institutions have faced the challenge of balancing the historical and cultural objectives of telling curatorial stories with the economic needs of a museum dependent on a visiting public paying to visit temporary exhibitions and use its other commercial services. One of the most difficult challenges is the ability to accurately predict visitorship both to the museum, and to temporary exhibitions.

The National Gallery, which houses one of the greatest collections of paintings in the world and has more than 6 million visitors a year, is taking a new approach to tackle this problem, together with Dexibit. Using big data, Dexibit helps cultural institutions increase visitation, harness social outcomes and deliver efficiencies. With machine learning, the Gallery will explore how to move beyond simply analysing past visitor experiences in the museum, to employing innovative predictive analytics in forecasting future attendance and visitor engagement.


Chris Michaels, Digital Director, The National Gallery said:
“The National Gallery has put big data and analytics at the core of our digital strategy. We are delighted to be working with Dexibit to explore the potential of predictive analytics for better understanding on how we can serve our audiences. Machine learning and artificial intelligence have huge potential value for helping museums build better insight and develop new kinds of financial sustainability. We believe these new models can help us create better value for our visitors, and that the learnings we generate can help not only us but the wider sector. We look forward to working with Dexibit to unlock this exciting new area.”

Angie Judge, Chief Executive Officer, Dexibit said:
“Big data brings crucial innovation to the cultural sector at a time when the ground is shifting underneath museums and galleries. The National Gallery’s digital vision leads the way for the cultural sector – as museum analytics transition from retrospectively reporting the institutions’ own history, to using artificial intelligence in predicting our cultural future.”

With nearly 100 years of data with up to a thousand data points for every one of the millions of visitors the Gallery sees each year, this combination of art and science puts The National Gallery and Dexibit at the frontier of big data analytics.

ABOUT THE NATIONAL GALLERY

The National Gallery houses one of the greatest collections of paintings in the world. Located in London’s Trafalgar Square, the Gallery is free to visit and open 361 days a year. The National Gallery Collection comprises over 2,300 paintings in the Western European tradition from late medieval times to the early 20th century by artists including Botticelli, Leonardo, Titian, Rembrandt, Velázquez, Monet, and Van Gogh. The Gallery is also a world centre of excellence for the scientific study, art historical research, and care of paintings from this period. More at www.nationalgallery.org.uk.

ABOUT DEXIBIT

Dexibit is the global market leader for museum analytics. Dexibit’s software as a service includes personalised dashboards, automated reporting and intelligent insights specifically designed for cultural institutions. More at www.dexibit.com.


DI4R 2017 – connecting the building blocks for Open Science

Once again this year, EUDAT is co-organising the Digital Infrastructures for Research (DI4R) event together with RDA Europe, PRACE, EGI, OpenAIRE and GÉANT. The event takes place in the heart of the European Union in Brussels (Belgium) on 30th November and 1st December 2017, hosted at the stunning Square in the city centre and co-located with the first EOSCpilot Stakeholder event (28-29 November 2017).

DI4R2017

What’s new?

This year the conference will revolve around the theme “Connecting the building blocks for Open Science” with the overarching goal of demonstrating how open science, higher education and innovators can benefit from these building blocks, and ultimately to advance integration and cooperation between initiatives.

This is why EUDAT encourages all researchers, developers and service providers to have their say at the conference by submitting an abstract for a 5-minute lightning talk, a 15-minute presentation, an interactive session (90 minutes), a poster or a demo (see the call for abstracts). The call is now open and closes on 13th October 2017.

Registration to the conference is also open: make sure you register by 31st October to benefit from the early-bird rate!

Website: https://www.digitalinfrastructures.eu/


A new EU project to “ROCK” historic city centres

In May 2017 a new EC-funded H2020 project named ROCK was launched by its coordinator, the Municipality of Bologna. Supported by a large international consortium of universities, municipalities, development and consulting groups, dissemination networks, SMEs and developers, and industry-driven associations, ROCK aims to support the transformation of historic city centres afflicted by physical decay, social conflicts and poor quality of life into creative and sustainable districts through the shared generation of new sustainable environmental, social and economic processes.

rock head

ROCK – Regeneration and Optimization of Cultural heritage in creative and Knowledge cities – focuses on historic city centres as extraordinary laboratories for demonstrating how cultural heritage can be a unique and powerful engine of regeneration, sustainable development and economic growth for the whole city. The scope of the project is to develop an innovative, collaborative and systemic approach for promoting effective regeneration and adaptive reuse in historic city centres.

ROCK will therefore implement a repertoire of successful heritage-led regeneration initiatives drawn from 7 selected Role Model cities: Athens, Cluj-Napoca, Eindhoven, Liverpool, Lyon, Turin and Vilnius. The replicability and effectiveness of the approach, and of the related models, in addressing the specific needs of historic city centres and in integrating site management plans with associated financing mechanisms will be tested in 3 Replicator Cities: Bologna, Lisbon and Skopje.

rock cities

ROCK’s actions have three drivers:

  • Organizational and technological innovation at local level, to boost city spaces by improving safety, mitigating social conflicts, attracting visitors and tourists
  • Social innovation and educational programmes to bridge generational gaps of the citizens and to value and empower the elderly population
  • Innovative training solutions including incubation actions, workshops and events to stimulate business creation

More information about ROCK: https://www.rockproject.eu/

pic from the Bologna kick-off meeting



TECHNOLOGY for ALL Forum, 4th edition

The fourth edition of the TECHNOLOGYforALL Forum will be held in Rome from 17 to 19 October 2017.

Italy’s role in the development and conservation of world heritage provides the framework in which the Forum will analyse the contribution of technologies that have moved beyond the first wave of innovative enthusiasm and can now enter a production cycle governed by shared standards, supporting sustainable socio-economic development in which intelligent innovation plays a key role for the territory, cultural heritage and cities.

The program will emphasize, as much as possible, the Forum’s emerging themes in the international context and the work of Italian companies in sectors where Italy plays a leading role worldwide. The aim is not only the integration and interactive impact of the technology, but also its sustainable socio-economic contribution across the production cycle, through to the final destination.

The day before the conference, a workshop in the field will be organized inside one of Rome’s archaeological areas, where manufacturers of instruments and service providers will be dynamically involved in the acquisition of data with advanced solutions, from the production phase to the publication of data and metadata.

In parallel with the conference, training events will be organized on the development, structuring and organization of information, and on web and mobile vertical applications. The production processes described affect a broad range of users, from government agencies to private companies, professional researchers, students and citizens.

The Conference aims to collect experiences through presentations on the results of the workshop in the field, giving participants the possibility to retrace the process of acquisition and processing, enriched with the expertise and presentations of keynote experts, best practices, achievements and projects.

Three days of information and training, socialization and sharing, discussion and debate.

Website: https://www.technologyforall.it/en/