PREFORMA presented at the Belgian Federal Scientific Institutions

On 18 December 2015, Erik Buelinckx from the Royal Institute for Cultural Heritage presented the project at the monthly meeting of the follow-up committee of the Belgian Federal Science Policy (BELSPO) digitisation programme.

These meetings are organised in the framework of a large digitisation project (see below). As this project is working to set up a long-term preservation platform for memory institutions, the results of PREFORMA, and particularly the conformance checker, would fit in nicely as a tool to be added to this platform.

A general presentation of PREFORMA was delivered to representatives of the Federal Scientific Institutions. The group includes, among others, the Royal Library of Belgium, the National Archives, several large museums, and the royal cinematheque.

Reactions were very positive and many questions were raised. The archives were particularly interested, so a more specific follow-up meeting will be organised with them.

About the Digitisation Plan

Belgium's federal scientific institutions (FSIs) hold an exceptional scientific, cultural, historical and artistic heritage, whose value was estimated in 2002 at 6.2 billion euros. Their collections make up an important part of the world's heritage in many areas and provide valuable support for scientific research.

At the beginning of this millennium, and despite the separate efforts of several FSIs, this heritage remained largely inaccessible and little exploited, particularly in digital form. The most vulnerable or most damaged pieces were threatening to disappear. It was deemed appropriate to respond to the structural dynamic in Europe to digitise cultural and scientific content, a dynamic that led, in particular, to the building of a virtual and multilingual European library (Europeana).

On 19 July 2001 the Council of Ministers took note of these problems and gave its agreement for a thorough study aimed at finding structural funding for the digitisation and electronic distribution of the collections.

Therefore, a ten-year Digitisation Plan was launched in April 2004 with the financing of a first phase (2005-2008) of nine targeted priority projects. The aim of these nine projects was better dissemination and use of the FSIs' collections and an increase in digitisation efforts within the FSIs, seeking synergies and complementarity while taking into account the specific nature of the various institutions. That first phase of the Digitisation Plan was extended until the end of March 2012.

One problem encountered was addressing large-scale digitisation needs (especially in terms of overall vision, resources and deadlines). Lessons were drawn from this first experience for the start of a more ambitious second phase.

The new programme (2013-2018) is designed so that the digitisation activities of the various institutions involved, and the associated resources, do not remain fragmented. The aim is genuine inter-institutional collaboration: establishing, on the basis of common investment, an industrial-scale infrastructure and joint management of digitised data and activities, allowing synergies to be realised, always with the specific needs of each institution taken into account.

The plan provides for the creation of three different platforms, with the necessary interfaces between them:

  1. a common platform for storage and long-term preservation (LTP)
  2. a digitisation platform (in-situ and external digitisation)
  3. a valorisation platform.

The common platform for long-term preservation (LTP) needs to provide a solution to the huge archiving problem of storing, conserving and managing the digitised objects, which the FSIs cannot solve alone.

Through the LTP platform, the FSIs wish to ensure that existing digital objects and newly digitised objects will remain accessible in the future and stay intact for a very long time (over 10 years), clearly longer than the lifetime of current storage systems and technologies.

The formats and sizes of the digital objects to be preserved are very diverse (notably PDF, TIFF, WAV, JPEG, JPEG 2000, MOV, AVI, MXF, TAR and DPX, ranging from a few KB to 4 TB). The total data volume today is one PB and could exceed 12 PB by the end of 2018.
This platform should ensure at all times the integrity, authenticity and availability of the data. The LTP platform should rely as much as possible on industry-accepted open standards, protocols and components. The data and metadata have to be stored in such a way that they can be recovered through open standard protocols. This protects users against eventual vendor lock-in and against the disappearance of a specific piece of software or vendor. The media on which the data are stored should be as standard as possible and accessible through open systems. Under no circumstances may the data and metadata leave Belgian territory.

The platform will be installed in the data centres of the federal government. All FSIs have direct high-bandwidth access via the Belnet network.

The 10 Federal Scientific Institutions are:

An additional Belgian federal institution with a rich cultural heritage was added to this project:


EUDAT News bulletin – November / December 2014

In this issue we introduce the EUDAT License Wizard and ask what researchers think about open data, before having a look back at some of the achievements over the course of the year. On behalf of the EUDAT team, we’d like to wish you the very best for the holiday season and we look forward to working with you in 2015.


Get B2Sharing during the season of goodwill with EUDAT’s License Wizard

We’re delighted to announce that a new version of the EUDAT service B2SHARE has just been released. If you’d like to use B2SHARE to deposit data or software but are unsure which licence to use, the EUDAT License Wizard is the answer. This user-friendly tool guides you step by step through the bewildering array of licences, helping you choose the right one for your dataset or software. Try out the demo version, which is available online now, and let us know what you think.

Open access: What communities really think

Trailed at the September Conference, EUDAT’s study of our communities’ attitudes to open access to data will be published early in the New Year. In the meantime, a summary of the findings will appear shortly in the December/January edition of European Research Consortium for Informatics and Mathematics (ERCIM) News.



INNOVA Master’s Degree in Virtual Cultural Heritage

How do you prepare a professional in Cultural Heritage for the digital age? How can the university bridge the gap between the sciences and the humanities? Is the current university ready to deal with education in Cultural Heritage? Are the humanities, then, a good professional option?

The INNOVA Master’s Degree in Virtual Heritage: The Cultural Heritage in the Digital Era is a degree from INNOVA (Virtual Archaeology International Network) and SEAV (Spanish Society of Virtual Archaeology). Specialists from 36 institutions, research centres and international companies across 12 countries have developed the degree, which consists of 60 ECTS credits. The degree has three mandatory modules to be undertaken by the student over 62 teaching weeks, with corresponding holiday periods over Christmas, Easter and summer.
The programme is taught entirely online through the virtual classroom of the International Campus SEAV TRAINING, with personalised access and weekly advice from expert tutors.
The course covers theoretical topics and practical activities, and includes graded objective tests for each chapter.
Student literature, resources and certain software will be provided for the completion of the proposed activities. Three assessments will be conducted during the course, resulting in a final grade.
Upon completion, the student will receive the title of INNOVA Master’s Degree in Virtual Heritage: The Cultural Heritage in the Digital Era.

For more information visit www.seavtraining.com/campus


RICHES International Conference concluded successfully!

The first international conference of the RICHES project took place in Pisa on 4-5 December 2014, preceded by the plenary meeting of the consortium. The whole event was organised by Italian partner Promoter Srl in the aristocratic venue of Palazzo Lanfranchi, a patrician palace on the banks of the Arno river that hosts the collection of the city's Museum of Graphics.


The conference opened in the afternoon of 4 December, when the attendees, after registration and a welcome coffee, took their seats in the large room on the second floor, fully dedicated to the conference. Welcome speeches by Antonella Fresa (Promoter), Dario Danti (Councillor for Culture, representing the Pisa municipality), Alessandro Tosi (scientific director of the Museum of Graphics) and Mauro Fazio (Italian Ministry of Economic Development) introduced the day. Then came speeches by two associate partners of the RICHES project: Francesca Lanz from Politecnico di Milano, presenting “MeLa Project: European Museums in an age of migrations”, and José María Martín Civantos from Universidad de Granada, presenting “MEMOLA Project: Mediterranean Mountainous Landscapes”. Finally, Neil Forbes from Coventry University spoke about the vision, the research areas and the outcomes of the RICHES project.

The evening concluded with two pleasant cultural activities: a guided tour of the exhibition of Tullio Pericoli, renowned Italian painter and illustrator, on show on the first floor of the Museum of Graphics, and a visit to the crowdsourced photographic exhibition of the All Our Yesterdays series, showing digitised images of vintage photographs collected from the citizens of Pisa during the main Europeana Photography exhibition All Our Yesterdays (1839-1939): Scene di vita in Europa attraverso gli occhi dei primi fotografi (11 April – 2 June 2014).

On the second day, the conference began with Neil Forbes, who again took the microphone for the first keynote speech, entitled “Assessing value in cultural heritage”. It is widely recognised that European cultural heritage is an important component of collective and individual identity and that it contributes to the cohesion of Europe and to the creation of links between citizens. At the same time, a number of challenges and pressures threaten to undermine this immeasurably rich endowment. The overriding need, it is said, is to promote cultural heritage’s intrinsic value. But what is meant by “value” in this context? The speech by Prof. Forbes illustrated a few of the issues involved by drawing on selected examples of contested values around cultural heritage.

The second keynote speech, entitled “Digital art and digital cultural heritage in China”, was delivered by Xiaochun Situ, who sought both to describe how Chinese artists, art critics and media think about the “digital” and to investigate the status of the digitisation of cultural heritage in China, with a focus on libraries, museums and galleries. The issue was discussed in light of the introduction and implementation of the Chinese government’s directives, showing how governmental organisations are working and giving some indications of what is planned.


Unable to be present on site, the third keynote speaker of the conference, Bill Thompson from the BBC, sent a virtual greeting to the public via Skype; his speech was then shown to the attendees through a video presentation. Thompson’s intervention, entitled “Broadcast Archives as Cultural Heritage: can the BBC engage as well as it informs, educates and entertains?”, sought to investigate how the BBC, as a store and source of cultural heritage, can actively engage its public; whether it is possible for the BBC to permit access to the cultural assets it creates without mediation and control; and what impact technological innovation will have on the BBC’s future role.

Last but not least, Karol Jan Borowiecki delivered the fourth keynote, a lecture on “Personal relationships and the formation of cultural heritage: The case of music composers in history”. Using data on the lives of 522 prominent music composers born in the 18th and 19th centuries, Borowiecki showed how creative clusters formed in Paris, Vienna and London and how locating in a musical city greatly increased each composer’s productivity. Borowiecki’s research underlines the importance of personal relationships in the formation of cultural heritage.

After lunch and a visit to the conference’s poster session, the second part of the day began, introduced and moderated by Dick van Dijk. The afternoon was devoted to presenting the co-creation sessions carried out as part of the RICHES project in the Netherlands. A co-creation session can be defined as an experimental activity aimed at demonstrating how the public can be a creator (and so a co-creator, together with heritage professionals) as well as a user of cultural content. In other words, it is a practical example of “relationship recalibration”.

This session of the conference included presentations by: Janine Prins (Waag Society), Douwe-Sjoerd Boschman (Waag Society), Ilias Zian (National Museum of World Cultures, Leiden) & Emma Waslander (Stedelijk Museum, Amsterdam), Hodan Warsame, Simone Zeefuik & Tirza Balk (collective Redmond Amsterdam) and Laura van Broekhoven (Stichting Rijksmuseum voor Volkenkunde, Leiden).

The central question underlying the Netherlands activities is how young people relate to heritage and heritage practices; this conversation is directed through design thinking and co-creation with young adults, museum staff and designers from Waag Society. The aim of the co-creation activities is to help identify novel strategic directions for museums. Their results can contribute to (re)thinking what it means for a museum to relate to contemporary society, fostering recognition of the identity, history and contemporary life of young adults with multicultural backgrounds.

The conference ended with final conclusions and remarks, and with the sense of having created a really interesting event: not only a milestone for the RICHES project, but also a wealth of reflections and input around the theme of reducing the distance between people and culture.

The overall topic of these two days was indeed recalibrating the relationship between heritage professionals and heritage users in order to maximise cultural creativity and ensure that the whole European community can benefit from the social and economic potential of Cultural Heritage.

For more information visit the RICHES website and the RICHES International Conference website!


Kick-off of the Swedish Working Group

Fifteen archivists met on 15 December 2015 at the City Archives in the centre of Uppsala for the first preparatory meeting of the PREFORMA National Working Group in Sweden. Six of them were from archival authorities or memory institutions; the rest were archivists at governmental agencies.

Magnus Geber from Riksarkivet, responsible for the networking Work Package in PREFORMA and National Referee of the Swedish Working Group, delivered a general presentation of the project and of its networking activities.

Joining the PREFORMA network means receiving direct information about the project, having the means to influence it, testing parts of the software during the development process, and providing feedback and advice.

The group showed interest and participated actively, reporting their problems in producing correct PDF/A files and looking forward to what PREFORMA and its suppliers can provide to help solve them.

Some of the agencies offered to make test files available for the demonstration and testing phase.

After the successful meeting at the City Archives in Uppsala, networking activities also started in other countries, including Belgium, the Netherlands, Greece, Ireland, Germany, Spain and Estonia.

Join the PREFORMA network at www.preforma-project.eu/community.html!


Riga Summit 2015 on the Multilingual Digital Single Market

The Riga Summit will gather government officials, business leaders, technology developers, and language researchers, who will forge a unified vision for the multilingual digital single market.

At the event, stakeholders will work together to develop a combined strategy, identify goals, establish partnerships, and initiate concrete actions to bring about the vision of a digital single market without language barriers.

Besides a high-level plenary, the Riga Summit will consist of multiple workshops, roundtables, and technology showcases. The event will be hosted in Riga, Latvia, as part of the 2015 Presidency of the Council of the European Union.


Day 1 – META-FORUM 2015

META-FORUM 2015 is an international conference on powerful and innovative language technologies for the multilingual information society, the data value chain and the information marketplace. The two special themes of META-FORUM 2015 are Multilingual Technologies for the Digital Single Market and Language Technologies for the Big Data Challenge. A brief summary of the programme is available at http://www.meta-forum.eu. The online registration for META-FORUM 2015 is open; as usual, participation is free of charge.

Day 2 – Main Conference

  • Presentations from industry and public sector
  • Keynote speeches
  • Plenary session
  • Start-up pitch event
  • Technology exhibitions
  • Roundtable discussions

Day 3 – Main Conference – MultilingualWeb

W3C has announced the 8th MultilingualWeb workshop, the latest in a series of events exploring the mechanisms and processes needed to ensure that the World Wide Web lives up to its potential around the world and across barriers of language and culture. The workshop brings together participants interested in the best practices and standards needed to help content creators, localizers, language tool developers, and others meet the challenges of the multilingual Web. It also provides further opportunities for networking across communities.

To sign up for news and registration information, please visit the Riga Summit website: www.rigasummit2015.eu.


Creative Enterprise PIE Conference

by Rosamaria Cisneros, Coventry University


The CREATIVE ENTERPRISE PIE Conference 2014 was held at the Belgrade Theatre Conference Venue in Coventry on 12 November 2014. The objectives of participating in this interesting event were to: (a) disseminate Dance Pilot information and tools; (b) encourage people to learn more about E-Space and visit the project website; (c) follow the project on Twitter and other social media outlets; (d) identify local test users; and (e) gather feedback on the E-Space Dance Pilot ideas.

Coordinator Sarah Whatley spoke about dance annotation and digital technologies in a pop-up discussion dedicated to E-Space and the Dance Pilot; there was also an informal discussion during the PIE Conference which gathered information on digital technologies as well as disseminating E-Space. Jonathan Shaw from the Open and Hybrid Publishing Pilot also held a pop-up discussion about open and disruptive media.

The audience comprised creative enterprise business leaders, entrepreneurs, artists, graduate students, academics and other cultural heritage professionals of different nationalities: English, Romanian, American and Irish. Contacts were made with individuals in the creative enterprise and cultural heritage sectors, freelance artists and university students studying Performing Arts.

Attendees were interested and eager to learn more. The dialogue generated was constructive and useful both for us as a pilot and for them as participants. We gathered information on the digital technologies they are familiar with or currently using. We also shared that early next year we will be testing our apps, and we hope to include them in some capacity.

Website of the event: http://www.creativeenterprisecoventry.wordpress.com


“The Digitization Age. Mass Culture is Quality Culture” by Promoter Srl

Comparing Formats for Video Digitization

Author: Carl Fleischhauer, a Digital Initiatives Project Manager in the Office of Strategic Initiatives

Source: http://blogs.loc.gov/digitalpreservation/2014/12/comparing-formats-for-video-digitization/


FADGI format comparison projects. The Audio-Visual Working Group within the Federal Agencies Digitization Guidelines Initiative recently posted a comparison of a few selected digital file formats for consideration when reformatting videotapes. We sometimes call these target formats: they are the output format that you reformat to.

This video-oriented activity runs in parallel with an effort in the Still Image Working Group to compare target formats suitable for the digitization of historical and cultural materials that can be reproduced as still images, such as books and periodicals, maps, and photographic prints and negatives. Meanwhile, there is a third activity pertaining to preservation strategies for born-digital video, as described in a blog post that will run on this site tomorrow. The findings and reports from all three efforts are linked from this page.


Courtney Egan, photo courtesy of NARA.

Comparing video formats for reformatting. The focus for this project was the reformatting of videotapes with preservation in mind, and it was led by Courtney Egan, an Audio-Video Preservation Specialist at the National Archives. Like its still-image parallel, the for-reformatting video comparison used matrix-based tables to compare forty-odd features that are relevant to preservation planning, grouped under the following general headings:

  • Sustainability Factors
  • Cost Factors
  • System Implementation Factors (Full Lifecycle)
  • Settings and Capabilities (Quality and Functionality Factors)

The online report offers separate comparisons of file wrappers and video-signal encodings. As explained in the report’s narrative section, the term wrapper is “often used by digital content specialists to name a file format that encapsulates its constituent bitstreams and includes metadata that describes the content within. A wrapper provides a way to store and, at a high level, structure the data; it usually provides a mechanism to store technical and descriptive information (metadata) about the bitstream as well.” The report compares the following wrappers: AVI, QuickTime (MOV), Matroska, MXF and the MPEG ad hoc wrapper.

In contrast, the report tells us, an encoding “defines the way the picture and sound data is structured at the lowest level (i.e., will the data be RGB or YUV, what is the chroma subsampling?). The encoding also determines how much data will be captured: in abstract terms, what the sampling rate will be and how much information will be captured at each sample and in video-specific terms, what the frame rate will be and what will the bit depth be at each pixel or macropixel.” The report compares the following encodings: Uncompressed 4:2:2, JPEG 2000 lossless, ffv1, and MPEG-2 encoding.
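
The wrapper/encoding distinction is easy to see in practice, because stream inspectors report the container and the per-stream codecs separately. As a small illustration (not part of the FADGI report), here is a minimal sketch using ffprobe from the FFmpeg project, assumed to be installed; the file name is hypothetical:

```python
#!/usr/bin/env python3
"""Minimal sketch: show a file's wrapper and its encodings separately.

Assumes ffprobe (part of FFmpeg) is installed; the file name below is
hypothetical.
"""
import json
import subprocess

def probe(path: str) -> dict:
    # One ffprobe call returns container-level ("format") and
    # per-stream ("streams") metadata as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams",
         "-of", "json", path],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

info = probe("master_tape.mkv")                   # hypothetical file
print("wrapper:", info["format"]["format_name"])  # e.g. "matroska,webm"
for stream in info["streams"]:
    # codec_name is the encoding, e.g. "ffv1" or "pcm_s24le"
    print("encoding:", stream["codec_type"], stream.get("codec_name"))
```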

S-VHS tape box, one of the formats likely to be reformatted. Photo courtesy of NARA.

Courtney’s team identified three main concepts that guided the analysis. First, the group sought formats that could be used to produce an authentic and complete copy of the original. An authentic and complete copy was understood to mean retaining specialized elements that may be present in the original videotape, e.g., multiple instances of timecode or multiple audio tracks, and metadata about the aspect ratio. Second, the group sought formats that maximized the quality of reproduction for both picture and sound. In general, this prejudiced the team against encodings that apply lossy compression to the signal.

Third, the group sought formats with features that support research and access. Central to this–especially for collections of broadcast materials–is the retention of closed captions or subtitles. These textual elements can be embedded in the file that results from the reformatting process and the text can later be extracted by an archive to, say, support word-based searching.
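
Where captions travel as a text track inside the wrapper, that later extraction can be as simple as demuxing the subtitle stream to a sidecar file. A minimal sketch, assuming the ffmpeg CLI is installed and that the (hypothetical) input file carries a text subtitle stream, as a Matroska master might:

```python
"""Sketch: demux an embedded subtitle/caption track to a sidecar SRT
so the text can be indexed for word-based searching.

Assumes the ffmpeg CLI is installed and that the (hypothetical) input
carries a text subtitle stream, e.g. in a Matroska wrapper."""
import subprocess

subprocess.run(
    ["ffmpeg", "-v", "error", "-i", "program_master.mkv",
     "-map", "0:s:0",   # first subtitle stream in the input
     "captions.srt"],   # SRT output chosen by the file extension
    check=True)
print("captions written to captions.srt")
```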

The desiderata of authentic copies and maximal support for research led Courtney’s team to pay special attention to some fairly arcane technical factors. I’m not going to do much explaining in this blog (there’s lots of good information online) but I will offer the following checklist to provide a sense of some techy elements that the team tracked as they made their comparisons:

  • Bit Depth. This is a feature of encoding and, in the interest of quality, the team looked to see if higher-resolution 10-bit sampling was supported.
  • Chroma Subsampling. For encodings, the team asked which forms of subsampling are supported (some provide higher quality than others). For wrappers, the team asked, “Is the type of subsampling in ‘this file’ declared in embedded metadata?”
  • Audio Channels. How many channels? Declared and tagged in metadata?
  • Video Range. Does this format carry the “rule-bound” broadcast range of luma and chroma data, or an unregulated “wide range” signal that may have come from computer graphics? Is the range declared in embedded metadata?
  • Timecode. Can multiple timecodes be stored?
  • Closed-captioning and Subtitles. Is there a specified location for captions in the file? Or must users employ sidecar files to retain this data?
  • Scan Type and Field Order. Does this format support both interlaced-scan and progressive imagery? Is that fact (and also the field order for interlaced picture) declared in embedded metadata?
  • Display Aspect Ratio. Is aspect ratio declared, specifically display and pixel aspect ratio?
  • Multipart Essences. Is there support for segmentation and multipart essences?
  • Fixity Checks. Does the format support within-file fixity data? Many specialists wish to carry a fixity value for each video frame.
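
Many of these checklist items can be spot-checked in an existing file from its embedded metadata. Continuing the hedged ffprobe sketch from above (file name hypothetical; a missing key usually means the wrapper does not declare that property):

```python
"""Sketch: spot-check some checklist items from embedded metadata.
Assumes ffprobe (FFmpeg) is installed; the file name is hypothetical."""
import json
import subprocess

out = subprocess.run(
    ["ffprobe", "-v", "error", "-show_streams", "-of", "json",
     "master_tape.mkv"],
    capture_output=True, text=True, check=True).stdout

for s in json.loads(out)["streams"]:
    if s["codec_type"] == "video":
        # pix_fmt covers two checklist items at once: "yuv422p10le"
        # means 4:2:2 chroma subsampling at 10-bit depth.
        print("pixel format:        ", s.get("pix_fmt"))
        print("field order:         ", s.get("field_order"))  # scan type
        print("display aspect ratio:", s.get("display_aspect_ratio"))
    elif s["codec_type"] == "audio":
        print("audio channels:      ", s.get("channels"))
```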

1-inch open reel tape. Photo courtesy of NARA.

Out of all of the comparisons, is there a single winning format? The team said no. Practices and technology for video reformatting are still emergent, and there are many schools of thought. Beyond the variation in practice, an archive’s choice may also depend on the types of video they wish to reformat. The narrative section of the report indicates that certain classes of items–say, VHS tapes that contain video oral history footage–can be successfully reproduced in a number of the formats that were compared. In contrast, a tape of a finished television program that contains multiple timecodes, closed captioning, and four audio tracks will only be reproduced with full success in one or two of the formats being compared.

It is also the case that practical matters like an organization’s current infrastructure, technical expertise and/or budget constraints will influence format selection. One of the descriptive examples in the narrative section notes, for example, that at one federal agency, the move to a better format awaits the acquisition of “additional storage space and different hardware and software.”


Sidebar: some preference statements suggest the existence of two communities. The team talked to a variety of people as the work progressed and, in addition, sent copies of the final draft to experts for comment. As I reflected on the various contributions and comments the team received, I found myself pondering remarks about two lossless-compressed encodings: ffv1 and the “reversible” variant of JPEG 2000. As far as I can tell, the two encodings work equally well: after you decode the compressed bitstream, you get back exactly what you started with, i.e., in both cases, the encoded data is “mathematically lossless.” But each encoding had its own set of boosters. At great risk of oversimplification, I wondered if we were hearing from two different (albeit overlapping) communities, each with its own ethos.
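
That “you get back exactly what you started with” claim is directly testable. A minimal sketch, assuming the ffmpeg CLI is installed and using hypothetical file names: re-encode the video stream to ffv1, then compare per-frame MD5 checksums of the decoded pictures using FFmpeg’s framemd5 muxer (which is also one way to carry the per-frame fixity values mentioned in the checklist above):

```python
"""Sketch: verify an ffv1 round trip is mathematically lossless by
comparing per-frame MD5s of the decoded pictures.
Assumes the ffmpeg CLI is installed; file names are hypothetical."""
import subprocess

def frame_md5s(path: str) -> list[str]:
    # framemd5 decodes the stream and emits one MD5 per decoded frame,
    # so identical lists mean identical pictures, not identical bytes.
    out = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-map", "0:v:0",
         "-f", "framemd5", "-"],
        capture_output=True, text=True, check=True).stdout
    # Keep only the hash field; timestamps can differ between wrappers
    # even when the pictures are identical.
    return [line.rsplit(",", 1)[-1].strip()
            for line in out.splitlines() if not line.startswith("#")]

# Losslessly re-encode the video to FFV1 (version 3) in Matroska,
# copying the audio unchanged.
subprocess.run(
    ["ffmpeg", "-v", "error", "-i", "capture.mov",
     "-c:v", "ffv1", "-level", "3", "-c:a", "copy", "capture_ffv1.mkv"],
    check=True)

assert frame_md5s("capture.mov") == frame_md5s("capture_ffv1.mkv")
print("round trip verified: decoded frames are bit-identical")
```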

One community is well represented by national libraries and archives, including the Library of Congress. When members of this community (I’m one of them!) select formats for video mastering and preservation, we are strongly drawn to “capital-S” (official) standards. (When we select video “access” formats for, say, dissemination on the Web, different factors come into play, more like those embraced by the open source advocates described below.)

We participate in or follow the work of standards developing organizations like the Society of Motion Picture and Television Engineers and the European Broadcasting Union. Our collections include significant holdings produced by broadcasters, with complex added elements like captions and subtitles and multiple timecodes. Although our standards-oriented community has moved vigorously toward file-based, digital approaches, its members are more likely to build production and archiving systems from the “top down,” and employ commercial solutions. Now: how did this standards-oriented community vote on the lossless encodings? They favored lossless JPEG 2000, a standard from the International Organization for Standardization and the International Electrotechnical Commission.

3/4″ U-matic tape case. Photo courtesy of NARA.

And the other community? These were specialists–several in Europe–who are strongly drawn to open source specifications and tools. My sense is that members of this group are eager to embrace formats and tools that “just work,” and they are less firmly committed to capital-S standards. (I can imagine one of them saying, “Let’s just do it — we have no time to wait for lengthy standard-development and approval processes.”) Many open source advocates are bona fide experts, skilled in coding and capable of developing systems “from the bottom up.” Meanwhile, some of them work in or on behalf of archives where the collections do not feature extensive broadcast materials but rather consist of, say, oral history or ethnographic recordings, or other content made by faculty or students in a university setting, absent added elements like closed captions. In their communications with the FADGI team, several from this community favored the lossless ffv1 encoding. The published specification for ffv1 is authored by Michael Niedermayer and disseminated via FFmpeg. Wikipedia describes FFmpeg as “a free software project that produces libraries and programs for handling multimedia data.” Worth saying: the FFmpeg project commands considerable respect in video circles.

The simplified picture in this sidebar is, um, good fodder for a blog. But I’ll be interested to hear if any readers also sense community-based preferences like the ones I sketched, which extend well beyond the matter of lossless encodings.


Back to the FADGI comparison: no silver bullet. Although no single format warranted an unqualified recommendation, our experience in comparing formats has been instructive, highlighting trends and selection factors, and winnowing the number of leading contenders down to a handful. We found that format preferences for the reformatting of video remain emergent, especially when compared to the better-established practices and preferences associated with still imaging and audio.


Towards a new social contract between publishers and editors

NeDiMAH, the Network for Digital Methods in the Arts and Humanities, is delighted to announce this one-day seminar, which will bring together publishers and scholarly editors to discuss how best to produce digital editions that are both economically viable and in keeping with scholarly standards.

In the pre-digital world, publishers and editors normally collaborated: the editors would produce the edition, following the guidelines provided by the publishing house, which for its part would take care of marketing and distribution, as well as essential scholarly services such as peer review.

Digital scholarly editions, on the other hand, tend to be self-published by scholars within their own universities, most often without any connection to a publishing house – an arrangement which is hardly sustainable, for various reasons, and often not available to younger researchers producing their first editions without access to suitable funding. At the same time, publishers are increasingly engaging with the digital, in particular in connection with tablet distribution. But such eBooks are generally not up to the standards expected by the scholarly community: in many EPUBs, for instance, basic features such as footnotes are a luxury – to say nothing of a proper critical apparatus.

How can we best address these issues, to the mutual benefit of all parties involved – editors, publishers and the scholarly public?

Organizers:
Elena Pierazzo (University “Stendhal” Grenoble 3, France)
Matthew Driscoll (University of Copenhagen, Denmark)

Confirmed speakers:
Editors
Marjorie Burghart (EHESS, Lyon, France)
Caroline Macé (Goethe Universität Frankfurt am Main, Germany)
Hilde Bøe (Munch Museum, Oslo, Norway)
Espen Ore (Oslo University, Norway)
Gabriella Ravenni (University of Pisa, Italy)
Manuel Portela (University of Coimbra, Portugal)

Publishers
Brad Schott, Brambletye Publishing
Pierre-Yves Buard, Presses Universitaires de Caen
Rupert Gatti, Open Book Publishers, Cambridge
Pierre Mounier, OpenEdition

Venue: Maison des Sciences de l’Homme – Alpes, Grenoble

If you are interested in participating, please send an email to Andrea Penso: andrea.penso@u-grenoble3.fr
Registration is free of charge but obligatory (deadline 16 January 2015).

More information: http://www.nedimah.eu/