Call: ideation workshop in Helsinki – How can we create new interfaces to heritage collections of social digital photography

The Collecting Social Photo project is happy to invite creative participants to a workshop in Helsinki in November 2017, where we will develop prototype interfaces for social digital photography collections. The purpose of the workshop is to ideate together around new ways of disseminating photo collections in the age of social media photography. The project will cover travel inside Europe.

Collecting Social Photo is a three-year Nordic research project. Its main goal is to develop new work practices for collecting and disseminating ephemeral everyday pictures in heritage institutions. The project is hosted by Nordiska museet/the Nordic Museum in Sweden; the other partners are Aalborg City Archives, Denmark; The Finnish Museum of Photography, Helsinki; and Stockholm Regional Museum, Sweden.

About the workshop(s)

Prototyping collections interfaces for social digital photography – I

The research project Collecting Social Photo will host a workshop series exploring new ways for heritage institutions to display the networked, assembled and volatile social digital photograph to their stakeholders. The first ideation workshop will take place on November 23–24, 2017, at The Finnish Museum of Photography, Helsinki, Finland. It will be followed by a second workshop, a hackathon, in March 2018 in Stockholm, Sweden. Our facilitator for both workshops is Risto Sarvas of Futurice. Both workshops will bring together technicians as well as people from the arts, heritage and commercial sectors.

Workshop 1: Open call for participation – let’s do this together!

We are inviting 3–4 individuals, professionals or students from the heritage, arts, commercial and academic sectors around Europe. Join us for two days in Helsinki, November 23–24. We start with a half day introducing the project, the social digital photograph and its challenges for memory institutions, along with presentations by participants. We will then host a full day of ideation workshop. If your application is accepted, the project will pay your airfare and hotel, as well as a speaker’s fee.


Problem

The problem we will attempt to solve together: How can we create new interfaces to heritage collections of social digital photography that consider:

  • The character of the social digital photograph, i.e. volatile, assembled, networked, etc.
  • The mission of the institutions
  • The need to invite stakeholders to dialog, participation and co-creation
  • The need to merge collecting and disseminating
  • Legal and ethical issues
  • The ecosystem of the social digital photograph
  • Existing databases and interfaces

We are looking at producing creative prototypes that innovate around user involvement and the way we collect and disseminate social digital photography. The prototypes will be openly shared with the international museum/archival communities, along with a project report from the Collecting Social Photo project.

Who should join?

If you’re interested in…

  • Contributing to an active and important research project and learning about social digital photography as cultural heritage
  • Gaining new experiences and insights in a rapidly developing area
  • Sharing ideas about the best ways to use digital technologies in humanities research
  • Working collaboratively in diverse teams to tackle research challenges
  • Making new friends and connections

Contact

Please address questions to:
Kajsa Hartig, Nordiska museet/Nordic Museum, kajsa.hartig@nordiskamuseet.se
Anni Wallenius, The Finnish Museum of Photography, anni.wallenius@fmp.fi

Apply

Deadline for application: October 15, 2017. 

Application form: see http://collectingsocialphoto.nordiskamuseet.se/workshop-call-for-participation/

Terms and conditions
By accepting the invitation you agree to actively join and contribute to the two workshop days. You are expected to give a 15-minute presentation about your current work. You agree that the outcomes of the project will be openly shared with all participants for further development.

The project will book your flights and accommodation. Expenses for airport transfers, and the speaker’s fee, will be reimbursed only against invoice.

About us

Collecting Social Photo is a three-year research project aiming to develop new recommendations for Nordic heritage organisations around collecting and disseminating social digital photography. The project will also develop prototype interfaces for collecting and disseminating this type of photography. The project has now teamed up with Futurice, Helsinki, to organize a series of workshops where we, in cross-disciplinary groups, will explore innovative interfaces for social digital photography collecting and collections.

About the Collecting Social Photo project: http://collectingsocialphoto.nordiskamuseet.se/about
About Futurice: www.futurice.com

 

Images: Visualizations: Left, Kencf0618, source Wikipedia, https://en.wikipedia.org/wiki/File:Kencf0618FacebookNetwork.jpg. Right: Martin Grandjean, source Wikipedia: https://en.wikipedia.org/wiki/File:Social_Network_Analysis_Visualization.png


European Culture Forum 2017 – registrations open

The European Culture Forum is a biennial flagship event organised by the European Commission to raise the profile of European cultural cooperation, to bring together cultural sectors’ key players and to debate on EU culture policy and initiatives. Its 2017 edition will also mark the official launch of the European Year of Cultural Heritage 2018, the thematic EU year devoted to our common cultural assets and all their aspects.


The event, held for the first time outside Brussels, will take place at Superstudio in the booming creative neighbourhood of Tortona in Milan. This extraordinary venue, once a bicycle factory and later a home for fashion publishing and art created by the renowned Italian art director Flavio Lucchini in the 1980s, will provide an inspiring and thought-provoking setting for lively discussions, unexpected meetings and fruitful exchanges.

Can culture help to tackle European and global challenges? Does cultural heritage matter to Europeans? How can culture in cities and regions help to shape more cohesive and inclusive societies? Come to Milan to discuss, listen, think and get inspired.

More info, registration: https://ec.europa.eu/culture/event/forum-2017_en

A Game Jam is also organized in conjunction with the Forum. Organised in cooperation with the Video Game Museum of Rome, the Game Jam welcomes developers and students of every competence and skill level: we are looking for passion and new ideas. If you are interested in participating, please send an email to direzione@vigamus.com with your details and area of expertise.


ALCHEMIC BODY | FIRE . AIR . WATER . EARTH

ALCHEMIC BODY | FIRE . AIR . WATER . EARTH is an exhibition of contemporary art, an opportunity for international artists to exhibit their works in the blooming Bogotá arts scene, a destination for curators and collectors from all over the world. The exhibition focuses on the concept of transformation, starting from the alchemical processes that aim to purify the natural elements: earth, air, water and fire. The alchemists, recognized as the ancestors of modern chemists, worked to discover how the world is made and to modify its structure. The mysterious practice of alchemy has influenced art and artists through the centuries. Alchemists and artists are fascinated by the concept of transformation, and both work to discover and create new possible worlds. We invite artists to share their research on the natural elements and the creative processes of mixing and changing, using the body as a field of artistic transformation.


Image courtesy of Elena Anrgrill.

ALCHEMIC BODY | FIRE . AIR . WATER . EARTH, an international photography, painting, installation, video art and performance art festival, was hosted in Bogotá (Colombia) at Jorge Jurado Gallery from November 01 to November 30, 2017. A second edition will be organized from March 07 to March 30, 2018.

The events are open to photography, video art, painting, installation/sculpture and performing art.

More:  http://www.itsliquid.com/call-alchemic-body-fire-air-water-earth-bogota-2017.html

 


EUDAT conference 2018: Putting the EOSC vision into practice

After the success of the previous EUDAT conference in Amsterdam, which brought together more than 300 participants, we are ready to kick off 2018 with the EUDAT conference “Putting the EOSC vision into practice”.

The conference takes place in the stunning location of Porto, Portugal from the 23rd to the 25th of January 2018.

Why join?
The conference will:

  • Discuss the progress of the European Open Science Cloud (EOSC) and the European Data Infrastructure (EDI) and the contribution of EUDAT as well as other infrastructures to these two initiatives;
  • Showcase the latest trends in data infrastructure and data management solutions for research;
  • Demo the solutions developed by EUDAT to address researchers’ and research communities’ data management needs, through concrete pilots;
  • Inform about the concrete opportunities offered by the newly established EUDAT Collaborative Data Infrastructure (CDI) to service providers and research communities.

Additionally, it will give you the opportunity to showcase your work to an audience of community decision-makers and data managers, scientists and research communities, policy-makers, e-Infrastructure projects, research infrastructures and ESFRI projects during a poster & demo session (Submissions open until 30 November 2017).

Finally, it will host a set of co-located workshops covering different topics and disciplines to complement the main programme. The co-located workshops will be announced soon!

Curious about the programme? A sneak preview is available here. More information will be available in the upcoming weeks.

Mark the dates in your agenda and start planning your trip to Porto. Registration is open: make sure you register by 17 November 2017 to benefit from the early-bird rate!


First meet-up of European GLAM Hackathon organisers

Invitation: First meet-up of European GLAM Hackathon organisers in Berlin
Date: 4 December 2017, from 10:00 to 15:00
Venue: Wikimedia Deutschland e.V., Tempelhofer Ufer 23-24, 10963 Berlin, Germany
Host: the organisers of Coding da Vinci (DDB, digiS and WMDE)

After many years of immense effort to digitize cultural heritage throughout Europe and make it both visible and accessible, it is about time to celebrate the resulting treasures in a Union-wide range of events, projects and conferences. The EU Commission has thus decided to declare a European Year of Cultural Heritage in 2018. As the European Year of Cultural Heritage approaches, it becomes more and more important to show the potential of big cultural heritage data for science, art and the creative industries. Hackathons are events where coders gather over a weekend to “hack” (i.e. develop code around) ideas based on the available data.

Coding da Vinci, the art & culture hackathon, was founded in 2014 by Deutsche Digitale Bibliothek, Servicestelle Digitalisierung Berlin, Open Knowledge Foundation Deutschland & Wikimedia Deutschland. It has produced a wide range of hands-on products revealing the potential of cultural heritage in ready-to-use applications. Since 2014, Coding da Vinci has been held three times in Germany. At this point we would like to share experience with other hackathon organizers and improve our format; for this reason the first meet-up is being organized on 4 December.


Right now we are running the fourth year of Coding da Vinci: the kick-off is on 21 October and the award ceremony on 2 December. The event will scale up in 2018 with up to four parallel editions: one in the Rhein/Main area, one in Munich, one in Leipzig and one in Hamburg, empowered by the national European Year of Cultural Heritage coordinator, the Deutsches Nationalkomitee für Denkmalschutz.

The meet-up is organized to follow the hackathon’s award ceremony (which will take place on 2 December at the same location).

An idea for 2018

For the coming year we aim, together with Europeana, to forge a pan-European network of existing hackathons focusing on GLAM: the Coding da Vinci for Europe network (working title). The network will invite its members to develop common quality standards and a toolkit for holding hackathons on cultural heritage data that engage diverse target groups. The goal is to enhance the reuse of cultural heritage data both for leisure and for economic purposes. Europeana is already offering big cultural heritage data. But we need to convince more institutions, their stakeholders and the wider audience that the digitized treasures are valuable and sought-after raw material for science, citizen science, art, the creative industries and joyful encounters for everybody, by showing them the awesome range of what can be created out of it.
Some information on Coding da Vinci, as it differs from regular hackathons:

1. Coding da Vinci focuses on freely licensed data from cultural heritage institutions (GLAM).
2. Coding da Vinci fosters the production of openly licensed art & culture applications, many of them aimed at young people.
3. Coding da Vinci invites GLAMs to present their data and challenges to the coders present.
4. Coding da Vinci invites coders to develop applications based on the presented data, in response to the presented challenges. The majority of the 100–150 participating coders and designers per event are students, with some professional developers and minors.
5. Coding da Vinci has a threefold structure: a kick-off to form teams and ideas, a sprint to develop finished products such as apps and websites, and an award ceremony to present the products under open licenses and assign prizes to the winners.
6. Coding da Vinci is run by a mixed team representing the GLAM and coding communities alike.
7. Coding da Vinci is regionalizing its events to foster deeper relationship building between the GLAM and coder communities in each region.
8. Coding da Vinci presents all data (more than 100 datasets within three years) and all product projects (more than 50 projects) on its website.

Please read more in detail on Coding da Vinci: https://codingdavinci.de/about/

Get in touch to join the meet-up:

Barbara Fischer / Kuratorin für Kulturpartnerschaften

via Twitter: @fischerdata

Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin

Tel. (030) 219 158 26-(0)44

http://wikimedia.de

barbara.fischer@wikimedia.de

 


Remix public domain artworks with GIF IT UP 2017 – the international competition for history nuts and culture lovers

The competition encourages people to create new, fun and unique artworks from digitized cultural heritage material. A GIF is an image, video or text that has been digitally manipulated to become animated. Throughout the month, participants can create and submit their own GIFs, using copyright-free digital video, images or text from Europeana Collections, the Digital Public Library of America (DPLA), Trove, or DigitalNZ.

All entries help promote public domain and openly licensed collections to a wider audience, and increase the reuse of material from these four international digital libraries, including Europeana Collections. The contest is supported by GIPHY, the world’s largest library of animated GIFs.

The 2017 competition will have a special focus on first-time GIF-makers and introduce them to openly licensed content. A GIF-making workshop, providing tools and tutorials to help visitors create their first artworks, will be held on 14-15 October in cooperation with THE ARTS+, the creative business festival at the Frankfurt Book Fair.

The jury, made up of representatives from GIPHY, DailyArt and Public Domain Review, will be awarding one grand prize winner with an Electric Object – a digital photo frame especially for GIFs – sponsored by GIPHY. Prizes of online gift cards will go to three runners-up as well as winners in a first-time GIF-makers category. Special prizes will be allocated in thematic categories: transport, holidays, animals and Christmas cards.

People are also invited to take part in the People’s Choice Award and vote on the competition website for their favourite GIF, which will receive a Giphoscope. All eligible entries will be showcased on the GIPHY channel dedicated to the competition, and promoted on social media with the hashtag #GIFITUP2017.

GIF IT UP started in 2014 as an initiative by the Digital Public Library of America (DPLA) and DigitalNZ, and has since become a cultural highlight. 368 entries from 33 countries are featured on the GIF IT UP Tumblr. In 2016, the grand prize was awarded to ‘The State Caterpillar’, created by Kristen Carter and Jeff Gill from Los Angeles, California, using source material from the National Library of France via Europeana. Nono Burling, who was awarded the 2016 People’s Choice Award for ‘Butterflies’, said: “I adore animated GIFs made from historic materials and have for many years. The first contest in 2014 inspired me to make them myself, and every year I try to improve my skills.”

Results of the 2017 competition will be announced in November on the GIF IT UP website and related social media.

 


Europeana Research Grants Programme 2017: call for submissions and guidelines for applicants

Europeana Research is delighted to launch the second Europeana Research Grants Programme. Applicants are encouraged to propose individual research projects that use Europeana Collections content to showcase and explore the theme of intercultural dialogue. Applicants should employ state-of-the-art tools and methods in digital humanities to address a specific research question. Applications are expected to employ as much of the Europeana platform (e.g. the API and metadata) as possible.


Funding of up to 8,000 euros is available per successful project. Up to three successful proposals will be funded. Funding can be used to buy out researcher time or to cover travel, meetings, or developer costs. This funding does not include overheads.

Read the entire call on Europeana at: https://pro.europeana.eu/post/europeana-research-grants-programme-2017-call-for-submissions-and-guidelines-for-applicants


The 20th International Conference on Cultural Economics

RMIT University is pleased to host the 20th International Conference on Cultural Economics, presented by the Association for Cultural Economics International (ACEI).

The Conference will be held in Melbourne, Australia, from Tuesday, June 26th to Friday, June 29th, 2018. The program chair is Prof. Alan Collins (University of Portsmouth, UK), ACEI president-elect.

Conference themes include: creative industries, technological disruption in the arts, international trade in art and culture, cultural festivals, network structures in the arts, culture and sustainable development, digital participation, big data in the arts and culture, sport economics, artistic labour markets, arts and cultural organisations, creative cities, funding the arts, cultural heritage, art markets, the economics of food and wine, Indigenous art and culture, performing arts, valuing the arts… and more!


A Call for Papers is currently open until 31 January 2018.

ACEI2018 aims to provide a forum for discussion on a range of issues impacting the arts and culture and for the first time the conference will also address issues related to sport. The conference brings together a range of academics from a number of disciplines that share an interest in empirically motivated research on topics related to the arts and culture such as creative industries, creative cities, art markets and artistic labour to name a few. The conference also welcomes the insights and contributions from professionals, arts practitioners, policy makers and arts administrators in developing a fruitful dialogue that connects theory with practice.

With conference host RMIT based in the Melbourne CBD, the location of ACEI2018 and the accompanying social and cultural programme will give delegates an ideal opportunity to experience Australian culture and explore the city of Melbourne.

Website: http://sites.rmit.edu.au/acei2018/

Conference presentation (PDF, 1.49 Mb)


Interview with Brendan Coates


 

Hey Brendan! Introduce yourself please.

Hey everybody, I’m Brendan and at my day job I’m the AudioVisual Digitization Technician at the University of California, Santa Barbara. I run three labs here in the Performing Arts Department of Special Research Collections, where we basically take care of all the AV migration, preservation, and access requests for the department and occasionally the wider library. I’m a UMSI alum, I got a music production degree there too, so working with AV materials in a library setting is really what I’m all about.

And, I get to work on lots of cool stuff here too. We’re probably most famous for our cylinder program: we have the largest collection of “wax” cylinders outside of the Library of Congress, at roughly 17,000 items, some 12,000 of which you can listen to online. I’m particularly fond of all the Cuban recordings from the Lynn Andersen Collection that we recently put up. We’re also doing a pilot project with the Packard Humanities Institute to digitize all of our disc holdings, almost half a million commercial recordings on 78rpm, over the next 5 years.

And we’re building out our video program, too. We can do most of the major cartridge formats. We’re only doing 1:1 digitization though; a lot of my work these days is figuring out how to speed up the back-end – we have 5,000 or so videotapes at the moment, but I know that number is only going to go up.

Outside of work, Morgan Morel (of BAVC) and I have a thing called Future Days where we’re trying to expand our skills while working with smaller institutions. Last year we made a neat tool called QCT-Parse, which runs through a QCTools Report and tells you, for example, how many frames have a luma bit-value above 235 or below 16 (for 8-bit video), outside of the broadcast range. You can make your own rules, too. We had envisioned it as like MediaConch for your QCTools reports and sorta got there… we’re both excited to be involved with SignalServer, though, which will actually get there (and much, much further).
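The broadcast-range rule described above is simple enough to sketch. As a rough illustration (not QCT-Parse itself, which parses QCTools XML reports; the function name and sample values here are made up), counting 8-bit luma samples outside the legal broadcast range looks like this:

```python
# Illustrative sketch of the broadcast-range check: for 8-bit video,
# legal broadcast luma runs from 16 (black) to 235 (white).
BROADCAST_MIN = 16
BROADCAST_MAX = 235

def count_out_of_range(luma_values, lo=BROADCAST_MIN, hi=BROADCAST_MAX):
    """Return how many luma samples fall outside [lo, hi]."""
    return sum(1 for y in luma_values if y < lo or y > hi)

# Hypothetical per-frame luma samples: 0 and 255 are full-swing extremes.
frame_luma = [0, 16, 128, 235, 240, 255]
print(count_out_of_range(frame_luma))  # 3 samples are out of range
```

QCT-Parse lets you define your own thresholds in the same spirit, which is what the "make your own rules" comment above refers to.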

Today, though, I’m going to be talking about work I did with one of our clients, revising their automated ingest workflow.

What does your media ingest process look like? Does your media ingest process include any tests (manual or automated) on the incoming content? If so, what are the goals of those tests?

Videos come in as raw captures off an XDCAM; each individual video is almost 10 minutes long, they’re concatenated into 30-minute segments, and the segments are linked to an accession/interview number. They chose this route to maintain consistency with their tape-based workflow. The organization has been active since the ’90s, so they’re digitizing and bringing in new material simultaneously. I was only working on the new stuff, but it made it organizationally easier for them to keep that consistency.

After the raw captures are concatenated, we make FLV, MPEG, and MP4 derivatives; they’re hashed and sent to a combination of spinning discs and LTO, and all of their info lives in a PBCore FileMaker database. Derivatives are then sent out to teams of transcribers/indexers/editors to make features and send to their partners.

When I started this project, there was no in-house conformance checking to speak of. Their previous automated workflow used Java to control Compressor for the transcodes and, whatever else might be said about that setup, they were satisfied with the consistency of the results.

Looking back on it now, I ~should~ have used MediaConch right at the start to generate format policies from that process and then evaluated my new scripts/ outputs against them, sort of a “test driven development” approach.

Where do you use MediaConch? Do you use MediaConch primarily for file validation, for local policy checking, for in-house quality control, for quality testing for vendor files?

We use MediaConch in two places: first, on the raw XDCAM captures to make sure that they’re appropriate inputs to the ingest script (the ol’ “garbage in, garbage out”); and second, on the outputs, just to make sure the script is functioning correctly. Anything that doesn’t pass gets the dreaded human intervention.

At what point in the archival process do you use MediaConch?

Pre-ingest, we don’t want to ingest stuff that wasn’t made correctly, which you’ll find out more about later.

I think that this area is one where the MediaConch/MediaInfo/QCTools/SignalServer apparatus can help AV people in archives to contextualize their work and make it more visible. These tools really shine a light on our practice and, where possible, we should use them to advocate for resources. Lots of people either think that a video comes off the tape and is done or that it’s only through some kind of incantation and luck that the miracle of digitization is achieved.

Which, you know, tape is magical, computers are basically rocks that think and that is kind of a miracle. But, to the extent that we can open the black box of our work, we should be doing that. We need to set those expectations for others that a lot of stuff has to happen to a video file before it’s ready for its forever home, similar to regular archival processing, and that that work needs support. We’re not just trying to get some software running, we’re implementing policy.

Do you use MediaConch for MKV/FFV1/LPCM video files, for other video files, for non-video files, or something else?

Each filetype has its own policy, five in total. The XDCAM captures and preservation masters are both mpeg2video, 1080i, with dual-mono raw PCM audio and NDF timecode. Each derivative has its own policy as well, derived from files from the previous generation of processing.
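For readers unfamiliar with what such a policy looks like: MediaConch policies are XML files of rules checked against MediaInfo track fields. The fragment below is a hypothetical sketch in that format, loosely matching the master characteristics described above; it is not one of the project's actual five policies, and the rule names and values are illustrative.

```xml
<?xml version="1.0"?>
<policy type="and" name="Preservation master (illustrative sketch)">
  <!-- Hypothetical rules; field names follow MediaInfo track fields. -->
  <rule name="Video is MPEG-2" value="Format" tracktype="Video" occurrence="*" operator="=">MPEG Video</rule>
  <rule name="Height is 1080" value="Height" tracktype="Video" occurrence="*" operator="=">1080</rule>
  <rule name="Scan type is interlaced" value="ScanType" tracktype="Video" occurrence="*" operator="=">Interlaced</rule>
  <rule name="Audio is PCM" value="Format" tracktype="Audio" occurrence="*" operator="=">PCM</rule>
</policy>
```

A file is then checked against the policy on the command line (e.g. `mediaconch -p policy.xml input.mxf`), and anything that fails gets, as Brendan puts it, the dreaded human intervention.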

Why do you think file validation is important?

Because it’ll save you a lot of heartache.

So, rather than start this project with MediaConch, I just ran ffprobe on some files from the older generation of processing and used that to make the ffmpeg strings for the new files. As a team, we then reviewed the test outputs manually and moved ahead.

The problems with that are 1) ffprobe doesn’t tell you as much as MediaConch/MediaInfo do (they tell you crucially different stuff), and 2) manual testing only works if you know what to look for, and because we were implementing something new, we didn’t.

It turns out that the ffmpeg concat demuxer messes with the [language] Default and Alternate Group tags of audio streams. Those tags control the default behavior of decoders and how they handle the various audio streams in a file, showing/ hiding them or allowing users to choose between them.

What that bug did in practice was hide the second mono audio stream from my client’s NLE. For a while, nobody thought anything of it (I didn’t even have a version of their NLE that I could test on), so we processed files incorrectly for like three months. The streams were still the preservation masters (WHEW) but, at best, they could only be listened to individually in VLC. If you want to know more about that issue you can check out my bug report.

If we had used MediaConch from the beginning, we would have caught it right away. Instead, a year’s worth of videos had to be re-done: over 500 hours of raw footage in total, over 4,000 individual files.

It’s important to verify that the things that you think you have are the things that you actually have. If you don’t build in ways to check that throughout your process, it will get messy and it’s extremely costly and time consuming to fix.

Anything else you’d like to add?

I’m really digging this series and learning how other organizations are grappling with this stuff. It’s a rich time to be working on the QC side of things and I’m just excited to see what new tools and skills and people get involved with it.


New Europeana Pro: the Beta version is out for your input

In September 2017, the Europeana Pro website went through a major redesign, also integrating Europeana Labs and Europeana Research, which previously lived on separate websites. One of the main reasons for this redesign was to put people first: the new site is more ‘people’ oriented, highlighting Europeana’s close relationships with institutions and reinforcing the work done by all the communities in the Europeana ecosystem to make a difference in the digital cultural heritage world.

Europeana looks forward to users’ feedback.

Check the new website at https://pro.europeana.eu/post/new-europeana-pro-the-beta-version-is-out-for-your-input

The top 4 pages to start exploring the new site:

Europeana: we transform the world with culture.