Author Archives: Annika Rockenberger

About Annika Rockenberger

Humanities Researcher & Digital Textologist

Another Copy of the 1678 Danish Ethica Complementoria and Trancheerbok in Bergen!

While looking for something entirely different (Norwegian Ex Libris), I discovered another copy of the 1678 first edition of the Danish translation of the Ethica Complementoria and Food Carving Book (Tranchierbuch/Trancheerbok)!

The Bergen Public Library has a copy in its special collections, kept in “The Vault” with the shelfmark h 194. Judging by the number of pages (322), the book seems complete. What is interesting about this copy, apart from it being only the fourth one known so far (the others are in the Royal Danish Library in Copenhagen, the University of Oslo Library, and the Norwegian University of Science and Technology Library in Trondheim), is its provenance. The catalogue metadata read:

Har tilhørt Niels Tønsberg (“Has belonged to Niels Tønsberg”)

I don’t yet know who this person was or when the Ethica became part of the Bergen Public Library’s collection. I will have to find out during a visit, which is tentatively planned for summer 2024.

A request to digitise the Bergen copy has been declined due to a lack of capacity, and there is already a digitised Ethica: the Trondheim copy. I will investigate if I can get a few pictures of the title page and the owner’s mark for my bibliography.

Automated Text Recognition with ChatGPT 4

During a presentation for the Digital Meetup at the Department of Archaeology, Conservation and History at the University of Oslo yesterday, the historian Ian Milligan (University of Waterloo) briefly mentioned the stunning results he got when using ChatGPT for automated text recognition of historical documents. I don’t remember which materials he had used, but I was keen to try it out and see how the recognition performs compared to Transkribus’ public models, since Transkribus is my text recognition platform of choice.

There were a couple of obstacles, though. Firstly, I tried the University of Oslo’s ChatGPT, which currently runs GPT-3.5 Turbo and is approved for processing data that I, as a researcher employed by UiO, cannot share with third parties unless they are vetted and approved by the IT Department. This version, though, does not allow uploading images or accessing image sources via a URL. I tried my private, free subscription to OpenAI’s ChatGPT with the GPT-3.5 model and was equally unsuccessful. What was I doing wrong?

In my confusion, I turned to my social network on Mastodon to ask for advice. My colleague Christof Schöch from the University of Trier responded and pointed me towards a test he had performed with a historical image containing some text, where ChatGPT was able to recognise the text and transcribe it correctly. However, there was no info on which version was used or how this was done. I began doubting my ability to see an upload button or any information on how I would get an image into ChatGPT. A quick Google search pointed me mainly toward business applications using the OpenAI API, not something I felt like diving into right now. Then, I guess it was on Reddit that someone pointed out that there is an image upload function in ChatGPT-4 with the Plus subscription. Okay, so I first checked the UiO version of ChatGPT-4, where this feature was unavailable. The OpenAI platform led me to a subscription, and I decided to try out Plus since it gives access to beta features and other extras. $25 later, I finally had access. After some tweaking of settings, I managed to get the little image upload button to show up and voilà!

First test: Letter from Christopher Hansteen, 1828

I decided to go all in and test the automated text recognition abilities of ChatGPT-4 with a letter by the Norwegian astronomer Christopher Hansteen to his wife.1 The letter is part of the collection of private correspondence held by the National Library of Norway and can be viewed and downloaded from their website. Following Christof’s example, I only prompted ChatGPT with the following image and instructions:

Digital image of a letter written by Christopher Hansteen to his wife Johanne Cathrine Andrea Hansteen on October 18th, 1828.

Source: National Library of Norway, Brevs. 291 Brev fra Hansteen, Christopher (1784-1873) til Hansteen, Johanne Cathrine Andrea (1787-1840) | url = http://urn.nb.no/URN:NBN:no-nb_digimanus_175851

What text do you recognize in the image?

ChatGPT answered:

The image you’ve provided appears to be a handwritten manuscript or letter, dated “Tübingen 15 October 1818.” The writing is in cursive and may be in German, given the date and location mentioned. However, due to the complexity of the handwriting and the possibility of old Germanic script, I cannot provide a full and accurate transcription of the content.

If you need information or interpretation regarding specific parts of the document, please point them out, and I will do my best to assist you.

Hmm, so that was anticlimactic. I tried with a slightly better prompt:

The image that I provided is a letter from Christopher Hansteen, astronomer, to his wife. Please transcribe the first two lines of text, below the date.

This time, the response was more promising:

I apologize for the oversight earlier. Let me take another look.

From the image you provided and the context you’ve given, the first two lines below the date appear to read:

“Geliebte Christe! Ich schreibe Dir in aller Eile einen …”

However, it’s worth noting that handwritten manuscripts, especially in cursive and from historical periods, can be challenging to transcribe accurately without expertise in the specific handwriting style and language. If this transcription is crucial, I recommend consulting with a historian or archivist who specializes in 19th-century German manuscripts.

Still, I couldn’t get more out of it without investing in significantly better prompt engineering and better knowledge of the text at hand!

Second test: Ethica Complementoria, 1674

So, instead, I turned to more familiar materials: the version of the Ethica Complementoria print I am currently working on, the German print from 1674, printed in Copenhagen. I have already transcribed and proofread the text several times and have trained and improved an HTR model for it in Transkribus. Here, I could use the results from the latest version of my model and the validation set to see whether ChatGPT-4 performs worse than, as well as, or better than my model, ethica1674_v2.

The State and University Library Hamburg, Germany, provides the image. It is page 46 of the print and shows a combination of German (printed in Fraktur) and Latin (printed in Roman cursive type).

I uploaded the image and gave ChatGPT the following context and instructions:

Ok, let’s try another image. The image I provide is of a 1674 print of the Ethica Complementoria. It’s in early new high German. The font is a Fraktur. Please take your time and transcribe the first 10 lines of text. Here is the image:

ChatGPT provided the output – funnily enough – as code (i.e. in a code box, in plain text):

Certainly. Given the characteristics you mentioned (early new high German, Fraktur font), I’ll do my best to transcribe the first 10 lines from the image you provided:

Ethica

und seyn will, hierinn nicht bewandert
wird er nicht unbillig veringert
wie jener Professor, welcher von einem
hochgelahrten Fürsten unterthänig
bat, die panaceam contra Peste ihm
gnädigst zu offenbahren; denn der selbige
Prinz ließ andeuten, daß hiero-
glyphicum ex Metamorphosi von Æsculapio
in Draconem converso, wie derselbe habe
die Pest curiret; also sollte es auch an-

Please note that while I’ve done my best to provide an accurate transcription, the nuances of early modern German and the Fraktur font can be tricky. It’s always a good idea to cross-check with an expert in historical German texts for a definitive transcription.

Okay, so that was impressive! There are a couple of mistakes, mainly small things like wrongly recognized characters and a couple of entirely made-up words, like “hochgelahrten” instead of “hocherleuchten” and the apparent normalization of historical typography to fit into a modern script, but impressive nonetheless.

I wondered whether I could improve its text recognition abilities by providing a good example of a properly-recognized text based on the image. I copied the recognized text from the Transkribus validation set (so it’s NOT 100% accurate, but almost) and gave it this prompt:

Look at your transcription and compare it to the transcription done with an expert tool, Transkribus. I have provided it below. Can you improve your transcription by learning from its transcription?

Ethica
und ſeyn will / hierinn nicht bewandert
iſt / wird er nicht unbillich verringert /
wie jener Profeſſor, welcher von einem
hocherleuchten Fuͤrſten unterthånig
bate / die Panaceam contra Peſtem ihm
gnådigſt zu offenbahren / dem der loͤb-
liche Printz ließ andeuten / das hiero-
glyphicum ex Metamorphoſi von Æſculapio
in Draconem converſo, wie derſelbe habe
die Peſt curiret / alſo ſolte er es auch an-

ChatGPT delivered and gave the following output:

Given the expert transcription from Transkribus, here’s an improved version of the first 10 lines:

Ethica
und ſeyn will / hierinn nicht bewandert
iſt / wird er nicht unbillich verringert /
wie jener Profeſſor, welcher von einem
hocherleuchten Fuͤrſten unterthånig
bate / die Panaceam contra Peſtem ihm
gnådigſt zu offenbahren / dem der loͤb-
liche Printz ließ andeuten / das hiero-
glyphicum ex Metamorphoſi von Æſculapio
in Draconem converſo, wie derſelbe habe
die Peſt curiret / alſo ſolte er es auch an-

Thank you for sharing the transcription from Transkribus. It’s always beneficial to compare and refine transcriptions for historical documents.

I realized my prompt was a bit misleading, so I reformulated what I actually wanted it to do:

Now that you know how an expert transcription looks like, can you transcribe lines 10 to 20 from the previous image in the same way as the expert?

Obediently, ChatGPT delivered:

Certainly. Using the provided expert transcription as a guide, I’ll transcribe lines 10 to 20 from the image:

ſchlagen / das iſt : Er ſolte in ſeiner
Kunſt wohl geuͤbt und erfahren ſeyn/
ſeilig ſtudiren / dabei ſtilſchweigen und
wohl nachdencken / ſapienti ſat dicium. Sie
lehren / iſt gut predigen. Es giebet
viel / mancher fuͤr einen groſſen Mann
auſſ / mit prächtigen Kleidern/ und hoch
trabenden Worten herprangen/ und
iſt doch ein muͤßiggänger Ignorant: phi-
loſophus nihil debet eſſe, qua eſſe ipſum

This transcription takes into account the nuances of the Fraktur font and the early new high German language.

Now I was really impressed! This is a stunningly accurate transcription of the page, especially given the change of font and language in the short passage. It has neatly applied the special characters and punctuation, too.

In my excitement, I posted a thread about my experiment to Mastodon, which, I guess, is the closest I can get to “going viral” (at the time of writing, I have 27 boosts/replies), given my network and reach. Christopher Pollin suggested chaining the text recognition with a transformation into XML/TEI, as he has done using proper prompt engineering, thus creating a workflow from image to marked-up text using ChatGPT-4. I haven’t tried this part yet, but I will investigate it.

In Conclusion

Unexpectedly, ChatGPT-4 performed quite outstandingly on a task I had not even anticipated it being able to do. I guess that with sound prompt engineering, we can achieve high enough performance for many use cases, such as searches in historical documents. It will, additionally, make the work of the textual scholar easier by taking away time-consuming and rather dull tasks like manual transcription and encoding. It is, however, questionable how well it scales up: I used one page of text and let it transcribe ten lines at a time.

I will contact the University of Oslo ChatGPT team and ask if and when they plan on integrating the image upload function into their service. This could be an option for researchers who need historical documents transcribed and post-processed (e.g. translated) quickly, without needing to set up specialised software or train HTR models.

1 Note: ChatGPT’s transcription and subsequent attempt to read the letter are complete bollocks. The first line reads “Tobolsk 18. Oktober 1828”. The text is not written in German but in Dano-Norwegian and says: “Tak mangfoldige Gange for Dine 2 trøstende kjerlige Breve. Ja vist har jeg været mere uroelig, end jeg har ladet mig mærke med;” (“Thank you so many times for your two comforting, loving letters. Yes, indeed I have been more worried than I have let on;”). Cited after the transcription by Kari Høgvold, accessible on the University of Oslo Library’s website as a PDF. What ChatGPT probably does here is assume that a letter written by a husband to his wife in that period will open with something like “Geliebte Christe! Ich schreibe Dir in aller Eile einen […]”. By the way: Hansteen’s wife’s name is Johanne Cathrine Andrea.

Ethica Complementoria Digital Scholarly Edition – Revisit Pt. 1

Establishing Ground Truth

Yesterday and today, I spent some hours manually transcribing pages from the 1674 print of the Ethica Complementoria. The edition, printed in Copenhagen by Christian Wering and published by Wolf Lambrecht, survives in only one copy, held by the State and University Library Hamburg. The copy was digitised a while ago and can be accessed and downloaded freely from the library’s website. I have argued in my book about the print and transmission history of the Ethica Complementoria that this edition is the source for the Danish translation published in 1678 in Copenhagen. I’m preparing a bilingual digital scholarly edition of the German and the Danish Ethica.

However, with the print being digitised and available, you might wonder why I would manually transcribe it! Actually, I don’t plan to transcribe the entire print manually. I am using Transkribus to do the work for me. I have already established a transcription for the Danish print using the Transkribus app and the NorFraktur HTR model provided by the National Library of Norway. The performance was OK; however, there’s no way around a manual quality check when preparing a transcription for a scholarly edition. The NorFraktur model was trained on multilingual materials: 77 Danish-Norwegian and 23 German small blackletter prints from the mid-16th to the mid-17th century. With a CER of 2.0%, it’s quite good. However, with our print from the late 17th century in Danish, we found it underperformed a bit. We decided to improve the model by training it on manually transcribed pages from the digitised copy held by the Norwegian University of Science and Technology library. This improved the quality slightly, but a few issues remained. Since we were going for a quality check of the entire transcription anyway, we corrected all ‘misreadings’. A couple of pages are left; the first round of manual quality checks will be done by the end of September!
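
For readers who haven’t met the metric before: CER is simply the character-level edit distance between the model’s output and the ground truth, divided by the length of the ground truth. Here is a minimal sketch of that calculation (not how Transkribus implements it internally); the ground-truth line is taken from the 1674 Ethica transcription quoted further up this page, and the “model output” is an invented example, not real HTR output.

```python
# Minimal sketch of a character error rate (CER) calculation:
# Levenshtein edit distance divided by the length of the ground-truth line.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

ground_truth = "wie jener Profeſſor, welcher von einem"   # from the 1674 transcription above
model_output = "wie jener Professor, welcher von einen"   # invented output with 3 character errors
cer = levenshtein(ground_truth, model_output) / len(ground_truth)
print(f"CER: {cer:.1%}")  # roughly 7.9% for this single line
```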

For the German print, we initially thought of using NorFraktur, too. However, there are now models for German print which perform significantly better than NorFraktur. For what it’s worth, I will compare two approaches: first, testing the quality of an existing German model, like the Transkribus Print Multi-Language model with its CER of 1.6%; second, training my own model on manually transcribed pages to see if it can outperform the Transkribus Print model for this particular print.

Once I have the results, I will report back!

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 9: Slowing Down

Day 9 of my summer research leave

After yesterday’s proofreading marathon, I felt pretty depleted today. On top of that, I got a migraine which stopped things for a while. So I decided to do a slow day instead of working through the remaining unchecked and unsorted files in my temporary transcriptions folder. I meandered from here to there, leafing through the materials in my binders and physical folders, tidying the Zotero project library, updating information about the project online, etc.

I was thinking about how I can best create a post-factum data management plan for the Ethica project, including the digital scholarly edition and all the other related research: something that would give me a guideline for the future and help me make informed decisions about what data I keep where, which files are active and how to version control them, which platforms I use for which materials, and where data might be out of sight, like the XML file in the DTAQ environment, which I cannot change directly but only via their ticket system and email. And what to do with the things that are not digital.

I suddenly felt the urge to take a picture of the spread-out documents on the table in my office: because they are data, too! Some of them are printouts which I created for easier readability, so they could safely be thrown away without any loss, but many have handwritten notes on them, contain sketches of textual relations or corrections of all sorts, or have no digital counterpart: they only exist physically.

An example of paper-bound data: printed and hand-annotated lists of errors and their corrections and Latin phrases from the Ethica Complementoria print from 1643.

I’m not a fan of keeping a lot of paper around, but in this case, I had to admit that I was not ready to throw them all out. Sensibly re-digitising them will be a bit of work and might not be worthwhile unless I work on something concrete for which I will need the paper-bound data. So back they go into their binders and onto the shelves. But into the data management plan, too!

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 8: Proofreading

Day 8 of my summer research leave

After yesterday’s distress, I decided to shift my attention from cleaning up the remaining files with transcriptions to finishing the proofreading of the editio princeps of the Ethica Complementoria, printed in Nürnberg in 1643 and held at the Bavarian State Library in Bamberg. I made it: I finished proofreading and am now waiting for the results.

Screenshot of the DTAQ proofreading modus with a page from the 1643 Ethica and its transcription

The 1643 print, or A1 as I baptized it in my reconstruction of the print history and genealogy, was the second Ethica text for me to transcribe. I started working on this transcription as early as 2013 when I received the digitised copy in PDF, for which I paid what felt like a small fortune. I strongly believed in creating my transcription in XML, working in the markup as I went, and using the full TEI P5 tag set without customisation. This was because I had not yet found a repository where I felt I could put my digital edition and make it accessible to digital humanities colleagues and the community of early modern German studies, which was still very far from being digitally savvy at that time. I had previously experimented with TextGrid as a virtual research environment and as a repository, but the user interface made it hard for me to commit fully. So, in the end, I gave up and used MS Word for my transcriptions: it was more convenient for my eyes and allowed a more intuitive form of applying markup to the text.

As time went by, I contacted the German Text Archive, and they agreed to host my edited texts and integrate them into their corpus of German texts. The condition was that I had to use the DTA base model, a customized version of the XML TEI P5 standard they had developed meticulously. Now, the problem was to convert the transcription of the text and the markup from a DOCX file into XML and map it onto the DTA base model. I felt that time was not on my side. During these years, I was severely ill and could not use a keyboard and mouse for writing; I had to use a microphone and dictation software. Using these to convert the files, make the necessary adjustments, and write and tweak scripts was out of the question. Instead, I contracted a fellow digital humanist who specialized in creating transcriptions using double-keying and converting them into XML. We agreed to process not just the transcription of the 1643 Ethica but also A2 (between 1643 and 1646) and B6 (1660), which I had transcribed in the meantime. I felt fortunate that someone else could help me with what was an insurmountable task at the time. When I received the files, I didn’t check them as properly as I should have, something I only learned much later. In 2017, I was back full-time working on my doctoral dissertation, and the Ethica project lay dormant for a while.

In 2018, I delivered the XML files of A1, A2, and B6 to the DTA, and quite a few issues became apparent; the markup didn’t quite fit the DTA base model, and there were some other oddities, which I thought I could address quickly, but instead, my life got turned upside down, and other things were more important. It took me until early fall 2019 to return to the texts and try to fix some of the issues, but the attempt was half-hearted. I had just started a new job after being on an extended sick leave, and this job had nothing to do with edition philology, early modern book history, literature, or digital humanities. It wasn’t even a research position, but administrative, and there was no room for using work time on a project like this. So it lay dormant again.

In 2020, I changed jobs again, this time to a permanent academic position. But life happened again; not only did Covid-19 massively shift my professional and private focus, but other family-related developments took priority. It wasn’t until after the summer of 2021 that I took another look at the edition that was waiting for me to make the final checks and fixes on the DTAQ servers.

Today, another two years later, I have completed proofreading the version of my transcription and markup used for the DTAQ. It wasn’t the ‘final’, many-times-proofread version I had as a DOCX file with handwritten annotations. I filed 85 tickets with requests for changes: some mistakes I made transcribing, some markup issues, and some HTML presentation issues, which probably stem from my XML file not entirely conforming to the DTA base model.

So, roughly ten years passed from receiving the PDF of the digital copy of the 1643 print to finally checking the transcription, encoding and presentation on the DTA repository. It didn’t feel that long, but it was too long. In the meantime, someone else has published their study edition with commentary based on the editio princeps. It is something I will now have to refer to, making very clear how far my digital edition differs from their print/PDF edition. If not for the ethical reasons of academic honesty and transparency, then to protect my work from being perceived as a copyright infringement. Editions are a bit of a grey zone regarding copyright, or German Urheberrecht. I do not believe this will be considered an issue between my edition and the study edition, but I must mentally prepare for whatever will come. Fingers crossed.

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 7: Demotivation

Day 7 of my summer research leave

I am not happy. Quite the opposite.

In the morning, after a friendly chat with a colleague about a copy of the 1648 Löfflerey-Kunst, I searched for “Ethica Complementoria” in the catalogue of the Bavarian State Library and what popped up as the first hit was the book:

Knape, Joachim. 2020. Rhetorik als Komplementärethik: Georg Greflingers Ethica Complementoria 1643. Text und Untersuchung. Wiesbaden: Harrassowitz Verlag.
 

My heart dropped; I was shocked.

Since 2009, I have been working on the digital scholarly edition of the works and writings of Georg Greflinger, and since 2012, in particular, on the digital scholarly edition of the Ethica Complementoria. I have published blog posts, presented at national and international conferences, written an entire book about its print history, delivered three transcriptions to the German Text Archive and drafted the scholarly edition.

It is a non-funded side project on which I cannot work full-time, often only in short sprints and bursts, and my poor health and life-changing events in the past years have further delayed the publication. However, from early on, I have written publicly online about the project and explicitly and expressly invited fellow researchers to contact me. There was – and is – only a tiny research community around the Ethica and similar texts. The community for digital scholarly editions of early modern printed books is small. One would think that publishing barrier-free on the World Wide Web would ensure being seen, read, and contacted in an event like this: that another researcher is planning, preparing and publishing a scholarly edition.

I sincerely believe in collaboration: working together will improve research. It will help to address biases and blind spots. It will distribute labour and make research projects less vulnerable when people have to move on, pause, or redefine their involvement. It will increase the probability of receiving funding or covering costs that occur during a project because they can be divided or carried by those with access to funds. I believe it will also increase honesty, because decisions that can hurt others can be raised earlier and perhaps avoided altogether, with the peer pressure of people looking at what you’re doing already at an early stage. But that might be just me and my beliefs.

I will now have to update websites, project descriptions, data management plans, proposals, and repositories because someone else has already published an edition of the 1643 editio princeps of the Ethica Complementoria. It will be much harder, if not impossible, for me now to find a publisher willing to publish an accompanying print version of my digital edition with a too similar book already on the market.

From what I have read of the book, it provides a solid introduction to the content of the Ethica, its function, and exemplarity as the first original German Komplimentierlehre. With the commentary, it will be beneficial for students. It’s available both as a printed book and as PDF, so it will serve those who want to close-read the text in a modern yet authoritative version and want to search for specific terms.

I would have loved to contribute to this edition with all the work I had already put into transcribing the text(s), describing the different issues and copies, and my vast overview of variants between the first and most of the later prints.

Now, I will redirect my efforts and make the XML/TEI P5-encoded transcription of the 1643 print available, and then work on the systematic reconstruction of revisions and edits made to the Ethica in the different genealogical groups.

And, if I may, as moral suasion: We don’t have to, but we can collaborate. We don’t have to, but we can ask about the status of a project. And we don’t have to, but we can give a fellow scholar a heads-up about a significant publication that will affect their work.

 


Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 6: Transcriptions

Day 6 of my summer research leave

Today, I’m midway through my research leave: I have until Friday midday to finish this project overview and data management sprint and to see that the transcription and HTR-training mini-project gets off to a good start.

Admittedly, I felt a bit hesitant to start the day: it’s still mainly going through old files, checking their contents, comparing them to the files on the GitHub repositories and deciding whether or not to keep them and, if so: where to move them and what to do with them next. Most of the morning was spent bringing order to the chaos of miscellaneous files in a documents folder. I moved most of them either into the private admin repository or the Ethica and the Greflingeriana repos. I even found some old teaching materials from the master seminar I taught at my alma mater, Freie Universität Berlin, in the summer of 2011; I archived them on Zenodo for whatever it is good for.

Another thing I did was click through the overview of posts on this blog. There were several drafts, some as old as 2015. Two had no content yet, only a title, and I decided to delete them: farewell! I published the other two (one about crowd-funding and sourcing the edition, the other about the Nordischer Mercurius, Greflinger’s newspaper project), adding a short note at the beginning about when they were created and what their status is. This is mainly for myself to have a reference: have I already written something about a particular aspect or sub-project? Now I can refer to it more officially; perhaps they even find an audience!

Lastly, I investigated the dreaded transcriptions folders. The one on the Ethica repository contained only XML files of transcriptions, without any documentation. The folders on my GoogleDrive, on the other hand, only contained DOCX or other proprietary text files. If I remember correctly, when I decided to move some contents of the Ethica project into a GitHub repository, I didn’t want my MS Word files and other stuff there that could not be meaningfully version controlled by Git. But it creates an undesirable situation where I don’t know what is where, and finding the actual ‘latest version’ of a transcription will be a costly search, time-wise. And if I ever want to share a file with a collaborator, it will be much easier to do so via the repository than having some files on a private GoogleDrive. However, the sheer amount of transcriptions and versions of files felt paralysing. So I started with the Löfflerey-Kunst instead of the Ethica.

The Löfflerey-Kunst (The Art of Spooning, i.e. making out) is an odd little book. It appeared in 1648, a short while after the first prints of the Ethica circulated, right at the time of the negotiations of the Peace of Westphalia. I have reconstructed the genealogical relations between this text and the Ethica in my book and have pointed out that the Löfflerey-Kunst itself is unlikely to have anything to do with the alleged compiler-author Georg Greflinger, but that its content lends itself to a combination with the Ethica and thematically related materials. The Löfflerey-Kunst is best described as a satirical adaptation of the Ethica: both with regard to the initiation of romantic relations between young men and women of status and with regard to the political and religious tensions of the times, when especially the Catholics were the target of insults and jokes in Protestant regions.

I have prepared an edition of the Löfflerey-Kunst based on the printer syntheses B2 and C1, that is, where the Löfflerey-Kunst was printed together with the Ethica Complementoria and texts such as the Bettelstab der Liebe, Alamodische Damensprichwörter and a selection of poems from Seladons Weltliche Lieder. This edition will be published with the Deutsches Textarchiv, but I cannot say when that will happen. My investigation of the many files containing transcriptions revealed that I do have a complete transcription of C1 as a DOCX file – done from the digital copy of the Bavarian State Library in Munich and collated with the original print on-site in Munich in December 2016. But I do not know whether I have collated this file with the XML transcription. I also have a partial transcription of B2, circa half of the text, from my research stay at the Herzog August Library in Wolfenbüttel in 2013. Since it’s incomplete and I have no plans of travelling to Wolfenbüttel anytime soon, I have ordered a digital reproduction of the book to run it through Transkribus and complete the transcription. More importantly, I would like to get a full transcription of what I have called L1, the most likely first edition of the Löfflerey. The Berlin State Library holds the only known copy and has a digital copy of it, so I don’t have to travel there for the transcription job!

A fool points a wooden spoon at a couple that is making out on a chair.

Detail of the title illustration of the 1648 Löfflerey-Kunst print. Berlin State Library, Yz 1551.

To round off my day: post-factum data organisation takes a lot of time and is mentally draining. And it has become evident that good data management would have prevented much of the frustration and saved me time that I could have used to finish parts of the edition.

The image shows a screenshot with a detail from GoogleMaps. Red pins and green dots mark places where news items in the January 1672 issue of the Nordischer Mercurius were sent from and to.

Looking Through the Historian’s Macroscope: The January 1672 Issue of the Nordischer Mercurius

The outline for this post was initially drafted on 2016/03/29 at 1:46 pm; it has never been fleshed out, and I have since not worked with the Nordischer Mercurius. Many things have changed, especially the availability of tools that automatically recognise places and other names and tools for visualising data on (historical) maps. At this point in time, I cannot promise that I will continue in the direction outlined below. But I at least want to publish my idea for future reference.

  1. Introduction / Context
    1. NTNU Enlightenment News Symposion
    2. Large Scale Digitization of C17th German newspapers
      http://brema.suub.uni-bremen.de/zeitungen17/
    3. NM as case study
      1. Selection of an Issue (criteria)
      2. http://brema.suub.uni-bremen.de/zeitungen17/periodical/titleinfo/1007987
  2. Data: Places
    1. Geo-political units as structuring elements of the NM
    2. Places of correspondence (news source 1)
    3. Places of news origin (news source 2)
    4. Places named in news (“where”)
  3. Collecting the data
    1. Manual extraction
      1. Time & effort
      2. Automatic named-entity recognition and information retrieval
        1. Pre-existing tool from Europeana: https://github.com/KBNLresearch/europeananp-ner
          http://blog.kbresearch.nl/2014/03/03/ner-newspapers/
          https://github.com/kbnlresearch/europeananp-dbpedia-disambiguation
        2. From plain text (strings, not-annotated)
        3. From XML/TEI files (no semantic annotation)
        4. The case of German (C17th) language
    2. What is a ‘place’? What counts as a ‘place’ (named entities)
    3. Identifying places
      1. Difficult cases 1: Regions
      2. Difficult cases 2: Historical place names
  4. Visualizations (Google Maps)
    1. Creating an easy data input for Google Maps (Google Docs spreadsheet)
    2. Choosing information for display and annotation
    3. Linking to Wikipedia (?Is this useful?)
    4. (Re-)Using the open map
    5. Alternatives: Neatline and Omeka

  5. Conclusion, main findings
  6. Materials from the original source

Frankreich.

Paris vom 1. Januarii.

DEr Herr Graf de Molina / Spanischer Ambaßadeur in Engeland / wird den 15. dises in diser Qvalität auch alhier ankommen. Am vergangenen Dinstage gaben Monsieur und Madame / als der Hertzog von Orleans und seine Gemahlin / in ihrem Königl. Palast / an 9. Ambaßadeurs und fremde Ministers Audience / welche alle Ordre hatten im Nahmen ihrer Principalen Jhren Hoheiten zu derer glücklichen Heurath zu gratuliren. Der Spanische blib / wegen seiner Unpäßlichkeit / allein aus. Jhre Hoheiten waren sehr köstlich bekleidet / vorauß Madame / welche fast gantz mit Edelgesteinen bedeckt war / so war auch dero Kammer und ganzer Hof auff das prächtigste außgeschmückt. Der König hat am vergangenen Samstage 20. Compagnien von seinen Gardes / so wol Frantzosen / als Schweitzer / nach Chalon marchiren laßen / und folgen S. Majestät übermorgen nach. Also sind auch schon im Auffbruch der Hertzog von Rohannes / la Feuillade / der Graf von Soisons / und vil andere Obristen ihre Troupen mit den marchirenden Troupen des Gardes zu conjugiren. Es hat auch die Cavallerie in Lothringen und Champagne Ordre sich nach dem Rendevous der Königl. Armee zu begeben. Es ist alhier eine Rede / daß unser König die Vestung Jülich von dem Hertzogen von Neuburg kauffen wolle. Der Englische Ambaßadeur verreisete am vergangenen Sonntage frü um 5. Uhr von hinnen nach Engeland / seinen König zu sprächen.

Crowd-funding, crowd-sourcing – Möglichkeiten & Grenzen für ein DH-Projekt

Originally written and saved as a draft on 2015/09/14 at 6:15 pm

This post is a thought-in-progress.

I am a patron on Patreon. And I support projects on Kickstarter: those that reach their funding goals, but also those that (on the first attempt) don’t make it. I donate regularly to Wikipedia. Every now and then, I edit articles for it. I subscribe to YouTube channels. I am a member of non-profit associations and societies. I take part in crowd-sourcing projects.

I support initiatives, projects, and services that I like and consider right and important. Some of them are pure entertainment for me, some are political, others scholarly or knowledge-disseminating. I take pleasure in what others create, I reuse it, I extend and rework it, I share it, criticise it, and try to make it better.

For some time now, I have been toying with the idea of restructuring the Greflinger edition and opening it up both to crowd-funding (that is, raising funds for the work on the edition, among other things for the purchase of digital reproductions, library trips, and contracts for proofreaders) and to crowd-sourcing (that is, collaborative, cumulative work on the edition: capturing, transcribing, describing, tagging, and commenting on the primary texts; quality assurance; correction). Not only because working on an edition is a time- and labour-intensive activity that can only be carried out on the side with great effort, but also because I am far from being expert enough in all the fields relevant to a project like this, and in all the technologies it employs, to work competently and reliably on my own.

I have been working on the Greflinger edition since April 2009. Alone, on afternoons and weekends or during holidays, on train and plane journeys. So far, a good dozen texts have been captured in XML-TEI encoding, but they are still far from being publishable. Because they are not finished, that is: not citable.

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 5: Sources

Day 5 of my summer research leave

On Day 5 of my summer research leave, I continued working on organising folders and files and getting a general overview of what I have on my local drive, what is already on the GitHub repository for the Ethica project and what should go into related repositories. I managed to move some files around and normalise filenames, but then I got stuck on a particular file. This, too, was very old: I last touched it in late August 2013 – ten years ago. It’s a file I called “Quellen” (sources). It contains several phrases, sentences, poems, or concrete references from the Ethica to other texts and – where possible – their verification with contemporary sources or more modern scholarly editions of the referenced texts. It’s an MS Word file; the formatting was horrific, and the document was really hard to read. I converted the DOCX file into a plain text format, Markdown, using Pandoc. I hadn’t planned on working on the file’s content, only trying to tackle the layout and readability, but I got stuck on the references. Many contained links, and clicking on the first one already revealed a problem I had hoped not to have: link rot. So I spent the rest of the morning checking all links and references, which turned into checking many sources, adding them to the list, and ordering some older prints from the library to check references against not yet digitised material.
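
The conversion itself is a one-liner. Roughly like the following, assuming Pandoc is installed and on the PATH; the filenames are placeholders, not my actual files:

```python
# Sketch: convert a DOCX file to Markdown with Pandoc.
# Placeholder filenames; assumes the pandoc executable is installed and on the PATH.
import subprocess

subprocess.run(
    ["pandoc", "Quellen.docx", "--from=docx", "--to=markdown", "--wrap=none", "-o", "Quellen.md"],
    check=True,  # raise an error if Pandoc exits with a failure code
)
```

The --wrap=none option keeps each paragraph on a single line, which makes later diffing and link-checking a bit easier.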

I really love this kind of detective work: trying to find the source of an odd-sounding phrase or a cryptic reference, verifying citations, and revealing the literary, theological, and philosophical foundation of this book on good manners and easy conversation. We can also see the frame of reference of a contemporary reader: what would they have known when using the book? Were they familiar with the works of Plato, the classical authors like Sallust, Horace and Cicero, early medieval philosophers like Boethius, the church fathers, and more contemporary writers like John Owen? Probably. Studying the intertextual references in the Ethica can help us re-create the target audience, perhaps even the model reader/user of the guidebook.

The overview of sources and references is not complete by far. My work many years ago was discontinued after chapter 2, most likely because I decided to prioritise another part of the scholarly edition. The original edition plan did not include a commentary, so this is ‘side work’, which I thought would be nice but not mandatory. But now I’m intrigued again! I want to dig deeper into it and learn more about the intertextual references and what they might tell us about the Ethica and its appeal. I have uploaded the revised document to the GitHub repository, so I can add things on the way – and have a public reference in case I will not finish this side-work and someone else wants to pick it up instead.

After a much-needed iced coffee break in the sun with my colleagues from the Digital Scholarship Center at the University of Oslo Library, I returned to my office to find one of the youngest prints of the Ethica Complementoria waiting for me on my desk! This second edition of the Danish translation was printed in Copenhagen in 1708. The book, together with the first Danish edition from 1678, was a gift from the Royal Danish Library in Copenhagen to the newly founded University of Oslo in 1811. Together with 45,000 books from the Royal Library’s collection, it formed the rootstock of the University Library – and the National Library of Norway – today. An incredible feeling for a researcher interested in the Ethica: to have two rare editions “in-house”. I was unaware of this when I moved to Norway 11 years ago, so I guess we found each other at the right time and place.

A detail photograph of the 1708 edition of the Ethica Complementoria. The book is half opened, and a printed double page is visible. On the right side, the second chapter begins and the page is framed with ornaments.
Detail from the 1708 print of the Ethica Complementoria, UBO UHS Sikring 97

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 4: Digressions & Contemplations

Day 4 of my summer research leave

Today has been a day of digressions and contemplations. The goal was to go through all the remaining files on my local drive and see which ones contain information that needs to be kept and isn’t stored on the GitHub repository for the Ethica Complementoria project yet. I reviewed almost all files, identified them, checked their content, modified the filenames to be more descriptive and standardised, and added the date of creation/last modification, which sent me down memory lane. One file in particular, a synopsis I made of all Ethica prints, hit especially hard: it was last modified precisely on this day, eight years ago! Such a long time ago – I remember that I was working on the edition, recording variants between the different prints and sorting out their relationship. I was so sure I would be able to finish the manuscript of the edition and send it to a publisher in 2016. I did not. I haven’t done it yet, and I am less and less sure whether I will ever “finish” this project as I had initially intended. Don’t get me wrong: I have already published a big chunk of my research on the Ethica in a monograph in 2017. What was planned as an article for editio. Internationales Jahrbuch für Editionswissenschaft turned into a book. It gives a detailed overview of all surviving copies we know of, the relation between the different editions, their chronology, extensions, and revisions. It also discusses the authorship question and argues for a more conservative approach when attributing the Ethica to Georg Greflinger: he was likely not the author but could have acted as a later redactor. I also have three complete transcriptions of prints: the editio princeps from 1643, the 1645 edition, and the youngest Ethica print from 1728. Transcription of the critical 1660 print is about 70% done. However, I have never had the time to do the final touches. Both the 1643 and 1645 prints are already uploaded to the German Text Archive but have not been “released” by me because some manual fixing of the encoding remains, which takes forever. I can still aim to do as much as possible next week and perhaps get the 1643 edition publishable. We’ll see.

Aside from all the feelings, I had a couple of productive digressions today. The first was a search for who hides behind “Ex libris bibl. erot. Krenneri”. For some Greflinger prints, catalogues record provenance, i.e. who owned the book before it made it into the library’s collection. It can be interesting to find out how certain books ended up where they did, both geographically and collection-wise, but also who they might have been made for or who in the past found them worth buying, keeping and collecting. A straightforward Google search didn’t bring many concrete results, but I was led to the notorious bibliography of erotic literature by Hayn and Gotendorf from the early C20th. In the digitized tomes, the ex libris popped up several times, with the remark “München” (Munich). Aha! An indicator. I modified my search to include the last name “Krenner” (the nominative form of the latinised “Krenneri”) and München and found a newspaper article from 2007. Christina Hoffmann writes about “forbidden books” in the Bavarian State Library in Munich, and the name “Franz Krenner”, his dates, and his occupation are revealed: Turns out, the man was a fiscal officer in Munich in the late 18th and early 19th century and had an extraordinary collection of erotica, which was bought by the library right after his death and kept locked away from public access. You can read about the kinky official on Wikipedia.

The other digressions were mainly software-related. I had some files in an odd format, and I could not check their contents without the proper software. Which, of course, was proprietary and for purchase, but I managed to get a test version to open the files with. With some distance in time, I could now trace my search for a comfortable tool to help me visualize the print history, genealogy and stemma of the Ethica. By looking at the time stamps, the filenames and the file formats, I could see a journey of testing out, discarding, re-visiting, and moving from one tool to another. I made the published visualization of the stemma in Scapple, so I guess at that point in time, I had found something that was good enough for the job and worked well for me. But times have changed, and I no longer want to use this type of software. I cannot export an interactive version of my stemma from Scapple, only static versions as .png or .pdf. But that’s not helpful for edits and further work. I turned to my social network on Mastodon and asked for input, and I got a nice tip from Till Grallert about a file format converter from the OPML format, which Scapple can export, to SVG, which is much more common and easier to import and export between drawing tools. I will try it out tomorrow and hopefully end the week with a nice stack of files in more durable, sustainable, and standard formats so I can still use them eight years from now!

Enjoy this visualisation of the Ethica, Tranchierbuch, and Löfflereikunst prints on a timeline, made with Zotero eight years ago tomorrow 😅!

Timeline of prints. Ethica prints are coloured yellow, while prints of the Tranchierbuch are red, and prints of the Löfflereikunst are orange. Observe the density of prints around 1650.

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 3: Filenames and Co-workers

Day 3 of my summer research leave

Filenames

After a bit of a hiccup yesterday, I’m back on track. It’s day 3 of my research leave, and I still haven’t gotten any of the “big tasks” started. If there is anything to be learned from previous iterations of this and other projects, it’s that without proper data management, the big tasks will never be finished. All of the available time in a side project like this will eventually be eaten by trying to understand where I left off last time, what needs to be done and where the files are. So the first step – in a way, the first “big task” – is to get an overview of what I have, where I left off, and what needs to be done to tackle the milestones.

A refreshing insight this morning was that I had already weeded out many superfluous files and manuscript versions of the book on the print and transmission history of the Ethica Complementoria. The most crucial documents are neatly stored in a public GitHub repository: the final version of the manuscript as published on the epub-server at the Herzog-August Library in Wolfenbüttel. The transcriptions of several prints in XML. Preliminary collation results and the code I used to create them. An ok README file describing the status quo and plans. One of these plans was to find a secure place for the documents which cannot be stored publicly, like scans I purchased for the publication and other admin stuff. I will put them in a private repository so I have access remotely.

While preparing the files for transfer into their new home, I witnessed a common problem of practical data management: creating meaningful yet short and machine-friendly filenames! I used to teach data organisation and documentation in my previous job, and since then, I have made it a daily practice to use only informative, machine-friendly filenames. For scans and digital photos of archival materials, manuscripts, or old books, I find it helpful to embed information about the source in the filename, e.g. which library holds this book? What is its shelfmark or reference number? What part of the book is the scan of? Is there any additional information relevant at first glance? With this in mind, and adhering to a few principles of good data management (shallow directory structures, short names for files and folders, and no special characters apart from underscore _ and hyphen -, plus ASCII letters and numbers), I made a little infographic about filenames for scans of archival materials and shared it with the researchers at the Dept. of Archaeology, Conservation, and History and my colleagues at the library. I will now adopt this method for the Ethica project and similar ones and rename all files accordingly. And add this info to the README so I might have a chance of remembering what I did and why next time!

The image shows a visualisation of a useful filename structure for a scan of the title page of a rare book. It has four parts, divided by underscores. The first part is the library information using its standard abbreviation, the second part is the shelfmark, the third part indicates the page number or part of the document, and the fourth part contains additional information in a short form.

Infographic depicting a filename structure for scans of archival materials.
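
To make the pattern concrete, here is a toy example of assembling such a filename; the library abbreviation and the last two parts are my own invented illustrations, and the shelfmark simply echoes the Berlin copy of the Löfflerey-Kunst mentioned in an earlier post above:

```python
# Toy illustration of the four-part filename pattern from the infographic.
# "SBB" and the last two parts are invented; the shelfmark echoes Berlin State Library, Yz 1551.
parts = [
    "SBB",                # holding library, standard abbreviation
    "Yz-1551",            # shelfmark / reference number
    "titlepage",          # page number or part of the document
    "detail-fool",        # additional information in short form
]
filename = "_".join(parts) + ".jpg"
print(filename)  # SBB_Yz-1551_titlepage_detail-fool.jpg
```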

Co-workers

The Ethica project is a single-person project for most of the time. I recently got funding for a research assistant to help me transcribe and prepare the bilingual digital scholarly edition of the Danish print of the Ethica from 1678. The assistant will mainly work on automatic text recognition, using Transkribus and building on the HTR model NorFraktur created by the National Library of Norway. NorFraktur is a multilingual model trained on printed Norwegian, Danish, and German texts from the early modern period. It performs quite well, and I hope to increase the performance slightly.

More interesting than the work itself, though, is collaborating with another researcher. Much of the motivation behind proper data management and documentation stems from the need to work together on something, to share files, folders, and workflows. The Ethica project is old, conceived in 2009, and has grown wildly since. So, for the following weeks to be a smooth and fruitful collaboration, I must create a usable data management plan (DMP) and set up a shared and version-controlled repository. And that’s why I had to go through all the “old stuff”: to know where things are, what is useful at this stage, and which mistakes not to make again!

I will draft a DMP using DMPtool again – if I get lucky, I can do it today and publish it on this site, too. Update: the DMP is public and published here: https://dmphub.cdlib.org/dmps/doi:10.48321/D1RP93.

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 2: Setups and Fails

Day 2 of my summer research leave

The plan for today was to gather all documents which have something to do with the edition, then list an overview of the files and their contents and create a README and a data management plan to continue the Ethica project so I don’t have to redo all these steps again next time.

Due to unforeseen circumstances, I had to work from home today and take care of something else, so this didn’t happen. The little time I had to work on the project was used up by setting up my private laptop for the tasks. In 2019, I transitioned from working locally and occasionally using GoogleDrive or similar cloud-based storage systems to working with Git and GitHub for most of my projects. This helps keep files and stuff synchronised even when I have to work from home – or, in the worst case, have to access my files using the GitHub platform.

However, it turns out that I haven’t been using my private computer for a while, and so it needed the dreaded updates to run Git. This led to a cascade of updates, and when I finally had set up everything I needed, I went online and checked the contents of a project repository which I hadn’t cloned to my local environment yet.

A screenshot of the project repository on GitHub showing Jupyter Notebook as the main type of code.

On GitHub, every repository has an information block displaying the most used programming language. What surprised me was that this repo had “Jupyter Notebook” highlighted. What on earth did I use a Jupyter Notebook for when creating XML transcriptions of early modern books? A quick look revealed that the notebook was created during a workshop at the Huygens Institute in The Netherlands on February 12, 2019. The topic was collating textual witnesses using the CollateX package for Python. Right! I remembered that I had started doing some tests with the transcriptions of the prefaces of several editions of the Ethica. I read the code and installed CollateX on my private computer.
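
The core of such a notebook is only a few lines. The collation step looks roughly like this with the collatex Python package, following the usual tutorial pattern from workshops like that one; the two witness snippets below are invented placeholders, not my actual preface transcriptions:

```python
# Sketch of collating two textual witnesses with the CollateX Python package.
# Sigla follow my print genealogy (A1 = 1643, B6 = 1660); the texts are placeholders.
from collatex import Collation, collate  # as in the CollateX Python tutorials

collation = Collation()
collation.add_plain_witness("A1_1643", "Günstiger lieber Leser, dieses Büchlein ...")
collation.add_plain_witness("B6_1660", "Günstiger Leser, dieses kleine Büchlein ...")

# Produces a plain-text alignment table of the two witnesses.
alignment_table = collate(collation, layout="vertical")
print(alignment_table)
```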

By now, you should have an idea of where this is going: I wasn’t able to get it running. I followed the instructions and updated Java (of course, every time!), but I only ever got this error message: Error: Unable to access jarfile collatex-tools-1.7.1.jar. I spent an hour following suggestions on how to fix the problem, but I could not get it to work.

Having CollateX running is not vital to the project at hand; it would probably have been a major distraction. But this stuff is so frustrating! When I teach the basics of coding, I also talk about how to help oneself when stuck and to build resilience and a higher frustration threshold. But we’re only humans; sometimes, even a relatively high threshold isn’t enough. I give up for today. Java issues are a problem for future me. I need some calming tasks to finish my short workday today, so I will spend the remaining 45 minutes tidying my Zotero project library instead.

Ethica Complementoria Digital Scholarly Edition – Redux. Pt. 1: Recovering files

Day 1 of my summer research leave

I will work on the Ethica Complementoria Digital Scholarly Edition project in the next two weeks. A project started in April 2009, with many – and lengthy – interruptions.

To get started and sorted out, I first set out to gain an overview of where the project files are. When the project was conceived, I knew nothing about project or data management. I think I am now on my ninth computer since 2009, and on at least seven of them, I had instances of the project folders and files. In addition, I have most of the files on my private GoogleDrive. I have also moved from Windows to Macintosh operating systems and have used Mac consistently since 2010. I went through an uncounted number of software programs and tools to create files with: mainly MS Word, but also OpenOffice/LibreOffice files, plain text files, data in spreadsheets (Excel), presentations in PowerPoint and GoogleSlides, or more obscure things like Evernote; piles and piles of PDFs of source texts and secondary literature; other image files, either downloaded from an online source or taken with a digital camera or, later, a smartphone. I have transcriptions in XML format, and I made some drawings that seem to be saved as .scap files, and I have no idea (yet) what to open them with or what I used to create them!

I have collected all folders and files of interest in a temporary folder on my Desktop and will use the day tomorrow to go through them systematically. The aim is to gather all documents which have something to do with the edition: transcriptions, digitised prints/digital photos or scans of books/parts of books, transcription criteria and rules for encoding, manuscripts and drafts of articles connected to the edition, admin documents like invoices and orders, variants/text comparisons, and enter all bibliographic items into Zotero. I will then list an overview of the files and their contents and create a README and a data management plan to continue the Ethica project so I don’t have to redo all these steps again next time.

Accompanying the project, I plan to write a blog post each day about what I have accomplished and to publish as much of the material as possible on this blog and the sub-project page, archive it on Zenodo (presentation slides, manuscripts of talks), or move it into the active project repository on GitHub.

 

Training HTR-Models for a Bilingual Digital Edition of the Ethica Complementoria

In mid-June, I will start a small sub-project and finally get to work with the Ethica Complementoria again! I won a small grant from the University of Oslo’s “Teksthub+DH”, a network to support digital humanities focusing on language and textual sources. The network itself is part of a university-wide endeavour to strengthen IT in research. With the grant money, I was able to hire a research assistant. This is what we are going to do:

We aim to create a dataset to be used as the basis for a bilingual (Danish/German) digital scholarly edition of one of the most popular books on ‘etiquette’ in early modern Germany and Northern Europe: the Ethica Complementoria.

Originally written in German, the book made its way to the Nordic region through the Danish translation from 1678. This first Danish print will be published in parallel with the German version used for the translation.

The transcription project is part of a larger project on the book and revision history of the Ethica Complementoria, led by Annika Rockenberger, and will be conducted by Håvard Loeng. An overview of all editions has been published digitally at the Herzog August Library: http://diglib.hab.de/ebooks/ed000738/start.htm.

Manual transcription of two 300+ page texts is no longer feasible. However, traditional Optical Character Recognition (OCR) yields inferior results for older printed books. Therefore, we aim to test, evaluate, improve, and build upon the NorFraktur model from the National Library of Norway. NorFraktur is a public Handwritten Text Recognition (HTR) model in Transkribus, trained with the HTR technology developed by READ Coop to recognise manuscripts and older prints automatically.

The development project contributes both to a digital scholarly edition with open access (planned as part of the publications by the Norwegian Language and Literature Society at bokselskap.no) and to the improvement and expansion of an open HTR model that the scholarly community can reuse for early modern prints in Norwegian (including Danish and German).