Monday, December 21, 2009

Feature: Related Pages

I've been thinking a lot about page-to-subject links lately as I edit and annotate Julia Brumfield's 1921 diary. While I've been able to exploit the links data structure in editing, printing, analyzing and displaying the texts, I really haven't viewed it as a way to navigate from one manuscript page to another. In fact, the linkages I've made between pages have been pretty boring -- next/previous page links and a table of contents are the limit. I'm using the page-to-subject links to connect subjects to each other, so why not pages?

The obvious answer is that the subjects which page A would have most in common with page B are the same ones it would have in common with nearly every other page in the collection. In the corpus I'm working with, the diarist mentions her son and daughter-in-law in 95% of pages, for the simple reason that she lives with them. If I choose two pages at random, I find that March 12, 1921 and August 12, 1919 both contain Ben and Jim doing agricultural work, Josie doing domestic work, and Julia's near-daily visit to Marvin's. The two pages are connected through those four subjects (as well as the similarly disappointing "dinner"), but not in a way that is at all meaningful. So I decided that a page-to-page relatedness tool couldn't be built from the page-to-subject link data.

All that changed two weeks ago, when I was editing the 1921 diary and came across the mention of a "musick box". In trying to figure out whether or not Julia was referring to a phonograph by the term, I discovered that the string "musick box" occurred only twice: when the phonograph was ordered and the first time Julia heard it played. Each of these mentions shed so much light on the other that I was forced to re-evaluate how pages are connected through subjects. In particular, I was reminded of the "you and one other" recommendations that LibraryThing offers. This is a feature that finds other users with whom you share an obscure book. In this case, obscurity is defined as the book occurring only twice in the system: once in your library, once in the other user's.

This would be a relatively easy feature to implement in FromThePage. When displaying a page, perform this algorithm:
  • For each subject link on the page, calculate how many times that subject is referenced within the collection, then
  • Sort those subjects by reference count,
  • Take the three or four subject links with the lowest reference counts, and
  • Display the pages which link to those subjects.
For a really useful experience, I'd want to display keyword-in-context, showing a few words to explain the context in which that other occurrence of "musick box" appears.
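
Here's a minimal sketch of that algorithm in Ruby, assuming hypothetical ActiveRecord-style models named Page, Article, and PageArticleLink rather than FromThePage's actual schema:

    # Sketch: find "related pages" by way of a page's rarest subjects.
    # Assumes Page has_many :page_article_links, each link belongs_to
    # :article and :page, and Article has_many :page_article_links.
    def related_pages(page, subject_count = 3)
      # Count how often each of this page's subjects appears in the collection.
      ranked = page.page_article_links.map do |link|
        [link.article, link.article.page_article_links.size]
      end

      # Keep the rarest few subjects -- the "you and one other" candidates.
      rarest = ranked.sort_by { |article, count| count }.first(subject_count)

      # Collect the other pages that link to those rare subjects.
      rarest.map { |article, count|
        article.page_article_links.map(&:page)
      }.flatten.uniq.reject { |other| other == page }
    end

The keyword-in-context snippet could then be generated by locating each link's original wording within the related page's transcription and excerpting a few words on either side.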

Friday, June 26, 2009

Connecting With Readers

While editing and annotating Julia Brumfield's 1919 diary, I've tried to do research on the people who appear there. Who was Josie Carr's sister? Sites like FindAGrave.com can help, but the results may still be ambiguous: there are two Alice Woodings buried in the area, and either could be a match.

These questions could be resolved pretty easily through oral interviews -- most of the families mentioned in the diaries are still in the area, and a month spent knocking on doors could probably flesh out the networks of kinship I need for a complete annotation. However, that's really not time I have, and I can't imagine cold-calling strangers to ask nosy questions about their families -- I'm a computer programmer, after all.

It turns out that there might be an easier way. After Sara installed Google Analytics on FromThePage, I've been looking at referral log reports that show how people got to the site. Here's the keyword report for June, showing what keywords people were searching for when they found FromThePage:

Keyword                     Visits   Pages/Visit   Avg. Time on Site
"tup walker"                    21      12.47619            992.8571
"letcher craddock"               7      12.42857            890.1429
julia craddock brumfield         3      28                  624.3333
juliacraddockbrumfield           3      74.33333           1385
"edwin mayhew"                   2       7                  117.5
"eva mae smith"                  2       4                   76.5
"josie carr" virginia            2       6.5                 117
1918 candy                       2       4                    40
clack stone hubbard              2      55.5                1146

These website visitors are fellow researchers, trying to track down the same people that I am. I've got them on my website, they're engaged -- sometimes deeply so -- with the texts and the subjects, but I don't know who they are, and they haven't contacted me. Here are a couple of ideas that might help:
  1. Add an introductory HTML block to the collection homepage. This would allow collection editors to explain their project, solicit help, and provide whatever contact information they'd like to share.
  2. Add a 'contact us' footer to be displayed on every page of the collection, whether the user is viewing a subject article, reading a work, or viewing a manuscript page. Since people are finding the site via search engines, they're navigating directly to pages deep within a collection. We need to display 'about this project', 'contact us', or 'please help' messages on those pages.
One idea I think would not work is to build comment boxes or a 'contact us' form. I'm trying to establish a personal connection to these researchers, since in addition to asking "who is Alice Wooding", I'd like to locate other diaries or hunt down other information about local history. This is really best handled through email, where the barriers to participation are low.

Tuesday, June 2, 2009

Interview with Hugh Cayless

One of the neatest things to happen in the world of transcription technology this year was the award of an NEH ODH Digital Humanities Start-Up Grant to "Image to XML", a project exploring image-based transcription at the line and word level. According to a press release from UNC, this will fund development of "a product that will allow librarians to digitally trace handwriting in an original document, encode the tracings in a language known as Scalable Vector Graphics, and then link the tracings at the line or even word level to files containing transcribed texts and annotations." This is based on the work of Hugh Cayless in developing Img2XML, which he has described in a presentation to Balisage, demonstrated at this static demo, and shared at this github repository.

Hugh was kind enough to answer my questions about the Img2XML project and has allowed me to publish his responses here in interview form:


First, let me congratulate you on img2xml's award of a Digital Humanities Start-Up Grant. What was that experience like?

Thanks! I've been involved in writing grant proposals before, and sat on an NEH review panel a couple of years ago. But this was the first time I've been the primary writer of a grant. Start-Up grants (understandably) are less work than the larger programs, but it was still a pretty intensive process. My colleague at UNC, Natasha Smith, and I worked right down to the wire on it. At research institutions like UNC, the hard part is not the writing of the proposal, but working through the submission and budgeting process with the sponsored research office. That's the part I really couldn't have done in time without help.

The writing part was relatively straightforward. I sent a draft to Jason Rhody, who's one of the ODH program officers, and he gave us some very helpful feedback. NEH does tell you this, but it is absolutely vital that you talk to a program officer before submitting. They are a great resource because they know the process from the inside. Jason gave us great feedback, which helped me refine and focus the narrative.

What's the relationship between img2xml and the other e-text projects you've worked on in the past? How did the idea come about?

At DocSouth, they've been publishing page images and transcriptions for years, so mechanisms for doing that had been on my mind. I did some research on generating structural visualizations of documents using SVG a few years ago, and presented a paper on it at the ACH conference in Victoria, so I'd had some experience with it. There was also a project I worked on while I was at Lulu where I used Inkscape to produce a vector version of a bitmap image for calendars, so I knew it was possible. When I first had the idea, I went looking for tools that could create an SVG tracing of text on a page, and found potrace (which is embedded in Inkscape, in fact). I found that you can produce really nice tracings of text, especially if you do some pre-processing to make sure the text is distinct.

What kind of pre-processing was necessary? Was it all manual, or do you think the tracing step could be automated?

It varies. The big issue so far has been sorting out how to distinguish text from background (since potrace converts the image to black and white before running its tracing algorithm), particularly with materials like papyrus, which is quite dark. If you can eliminate the background color by subtracting it from the image, then you don't have to worry so much about picking a white/black cutover point--the defaults will work. So far it's been manual. One of the agendas of the grant is to figure out how much of this can be automated, or at least streamlined. For example, if you have a book with pages of similar background color, and you wanted to eliminate that background as part of pre-processing, it should be possible to figure out the color range you want to get rid of once, and do it for every page image.

I've read your Balisage presentation and played around with the viewer demonstration. It looks like img2xml was in proof-of-concept stage back in mid 2008. Where does the software stand now, and how far do you hope to take it?

It hasn't progressed much beyond that stage yet. The whole point of the grant was to open up some bandwidth to develop the tooling further, and implement it on a real-world project. We'll be using it to develop a web presentation of the diary of a 19th century Carolina student, James Dusenbery, some excerpts from which can be found on Documenting the American South at http://docsouth.unc.edu/true/mss04-04/mss04-04.html.

This has all been complicated a bit by the fact that I left UNC for NYU in February, so we have to sort out how I'm going to work on it, but it sounds like we'll be able to work something out.

It seems to me that you can automate generating the SVGs pretty easily. In the Dusenbery project, you're working with a pretty small set of pages and a traditional (i.e. institutionally-backed) structure for managing transcription. How well suited do you think img2xml is to larger, bulk digitization projects like the FamilySearch Indexer efforts to digitize US census records? Would the format require substantial software to manipulate the transcription/image links?

It might. Dusenbery gives us a very constrained playground, in which we're pretty sure we can be successful. So one prong of attack in the project is to do something end-to-end and figure out what that takes. The other part of the project will be much more open-ended and will involve experimenting with a wide range of materials. I'd like to figure out what it would take to work with lots of different types of manuscripts, with different workflows. If the method looks useful, then I hope we'll be able to do follow-on work to address some of these issues.

I'm fascinated by the way you've cross-linked lines of text in a transcription to lines of handwritten text in an SVG image. One of the features I've wanted for my own project was the ability to embed a piece of an image as an attribute for the transcribed text -- perhaps illustrating an unclear tag with the unclear handwriting itself. How would SVG make this kind of linking easier?

This is exactly the kind of functionality I want to enable. If you can get close to the actual written text in a referenceable way then all kinds of manipulations like this become feasible. The NEH grant will give us the chance to experiment with this kind of thing in various ways.

Will you be blogging your explorations? What is the best way for those interested in following its development to stay informed?

Absolutely. I'm trying to work out the best way to do this, but I'd like to have as much of the project happen out in the open as possible. Certainly the code will be regularly pushed to the github repo, and I'll either write about it there, or on my blog (http://philomousos.blogspot.com), or both. I'll probably twitter about it too (@hcayless). I expect to start work this week...


Many thanks to Hugh Cayless for spending the time on this interview. We're all wishing him and img2xml the best of luck!

Sunday, May 17, 2009

Review: USGS North American Bird Phenology Program

Who knew you could track climate change through crowdsourced transcription? The smart folks at the U. S. Geological Survey, that's who!

The USGS North American Bird Phenology Program encouraged volunteers to submit bird sightings across North America from the 1880s through the 1970s, recording each sighting on an index card. These cards are now being transcribed into a database for analysis of migratory pattern changes and what they imply about climate change.

There's a really nice DesertUSA NewsBlog article that covers the background of the project:
The cards record more than a century of information about bird migration, a veritable treasure trove for climate-change researchers because they will help them unravel the effects of climate change on bird behavior, said Jessica Zelt, coordinator of the North American Bird Phenology Program at the USGS Patuxent Wildlife Research Center.

That is — once the cards are transcribed and put into a scientific database.

And that’s where citizens across the country come in - the program needs help from birders and others across the nation to transcribe those cards into usable scientific information.

CNN also interviewed a few of the volunteers:
Bird enthusiast and star volunteer Stella Walsh, a 62-year-old retiree, pecks away at her keyboard for about four hours each day. She has already transcribed more than 2,000 entries from her apartment in Yarmouth, Maine.

"It's a lot more fun fondling feathers, but, the whole point is to learn about the data and be able to do something with it that is going to have an impact," Walsh said.

Let's talk about the software behind this effort.

The NABPP is fortunate to have a limited problem domain. A great deal of standardization was imposed on the manuscript sources themselves by the original organizers, so that, for example, each card describes only a single species and location. In addition, the questions the modern researchers are asking of the corpus also limit the problem domain: nobody's going to be doing analysis of spelling variations between the cards. It's important to point out that this narrow scope exists in spite of wide variation in format between the index cards. Some are handwritten on pre-printed cards, some are type-written on blank cards, and some are entirely freeform. Nevertheless, they all describe species sightings in a regular format.

Because of this limited scope, the developers were (probably) able to build a traditional database and data-entry form, with specialized attributes for species, location, date, or other common fields that could be generalized from the corpus and the needs of the project. That meant custom-building an application specifically for the NABPP, which seems like a lot of work, but it avoids having to build the kind of Swiss Army knife that medieval manuscript transcription requires. This presents an interesting parallel with other semi-standardized, hand-written historical documents like military muster rolls or pension applications.

One of the really neat possibilities of subject-specific transcription software is that you can combine training users on the software with training them on difficult handwriting, or variations in the text. NABPP has put together a screencast for this, which walks users through transcribing a few cards from different periods, written in different formats. This screencast explains how to use the software, but it also explains more traditional editorial issues like what the transcription conventions are, or how to process different formats of manuscript material.

This is only one of the innovative ways the NABPP deals with its volunteers. I received a newsletter by email shortly after volunteering, announcing their progress to date (70K cards transcribed) and some changes in the most recent version of the software. This included some potentially-embarrassing details that a less confident organization might not have mentioned, but which really do a lot of good. Users may get used to workarounds for annoying bugs, but in my experience they still remember them and are thrilled when those bugs are finally fixed. So when the newsletter announces that "The Backspace key no longer causes the previous page to be loaded", I know that they're making some of their volunteers very happy.

In addition to the newsletter, the project also posts statistics on the transcription project, broken down both by volunteer and by bird. The top-ten list gives the game-like feedback you'd want in a project like this, although I'd be hesitant to foster competition in a less individually-oriented project. They're also posting preliminary analyses of the data, including the phenology of barn swallows, mapped by location and date of first sighting, and broken down by decade.

Congratulations to the North American Bird Phenology Program for making crowdsourced transcription a reality!

Saturday, May 2, 2009

Open Source vs. Open Access

I've reached a point in my development project at which I'd like to go ahead and release FromThePage as Open Source. There are now only two things holding me back. The first is that I'd really like to find a project willing to work with me to fix any deployment problems, rather than posting my source code on GitHub and leaving users to fend for themselves. The other is a more serious issue that highlights what I think is a conflict between Open Access and Open Source software.

Open Source/Free Software and Rights of Use

Most of the attention paid to Open Source software focuses on the user's right to modify the software to suit their needs and to redistribute that (or derivative) code. However, there is a different, more basic right conferred by Free and Open source licenses: the user's right to use the software for whatever purpose they wish. The Free Software Definition lists "Freedom 0" as:
  • The freedom to run the program, for any purpose.
    Placing restrictions on the use of Free Software, such as time ("30 days trial period", "license expires January 1st, 2004") purpose ("permission granted for research and non-commercial use", "may not be used for benchmarking") or geographic area ("must not be used in country X") makes a program non-free.
Meanwhile, the Open Source Definition's sixth criterion is:
6. No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
Traditionally this has not been a problem for non-commercial software developers like me. Once you decide not to charge for the editor, game, or compiler you've written, who cares how it's used?

However, if your motivation in writing software is to encourage people to share their data, as mine certainly is, then restrictions on use start to sound pretty attractive. I'd love for someone to run FromThePage as a commercial service, hosting the software and guiding users through posting their manuscripts online. It's a valuable service, and is worth paying for. However, I want the resulting transcriptions to be freely accessible on the web, so that we all get to read the documents that have been sitting in the basements and file folders of family archivists around the world.

Unfortunately, if you investigate the current big commercial repositories of this sort of data, you'll find that their pricing/access model is the opposite of what I describe. Both Footnote.com and Ancestry.com allow free hosting of member data, but both lock browsing of that data behind a registration wall. Even if registration is free, that hurdle may doom the user-created content to be inaccessible, unfindable or irrelevant to the general public.

Open Access

The open access movement has defined this problem with regards to scholarly literature, and I see no reason why their call should not be applied to historical primary sources like the 19th/20th century manuscripts FromThePage is designed to host. Here's the Budapest Open Access Initiative's definition:
By "open access" to this literature, we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.
Both the Budapest and Berlin definitions go on to talk about copyright quite a bit; however, since the documents I'm hosting are already out of copyright, I don't think those provisions are relevant here. What I do have control over is my own copyright interest in the FromThePage software, and the ability to specify whatever kind of copyleft license I want.

My quandary is this: none of the existing Free or Open Source licenses allow me to require that FromThePage be used in conformance with Open Access. Obviously, that's because adding such a restriction -- requiring users of FromThePage not to charge for people reading the documents hosted on or produced through the software -- violates the basic principles of Free Software and Open Source. So where do I find such a license?

Have other Open Access developers run into such a problem? Should I hire a lawyer to write me a sui generis license for FromThePage? Or should I just get over the fear that someone, somewhere will be making money off my software by charging people to read the documents I want them to share?

Sunday, April 5, 2009

Feature: Full Text Search/Article Link integration

In the last couple of weeks, I've implemented most of the features in the editorial toolkit. Scribes can identify unannotated pages from the table of contents, readers can peruse all pages in a collection linked to a subject, and users can perform a full text search.

I'd like to describe the full text search in some detail, since there are some really interesting things you can do with the interplay between searching and linking. I also have a few unresolved questions to explore.

Basic Search

There are a lot of technologies for searching, so my first task was research. I decided on simple MySQL fulltext search over Solr, Sphinx, and acts_as_ferret because all the data I wanted to search was located within the PAGES table. As a result, implementing it required only a migration script, a text input, and a new controller action. You can see the result on the right-hand side of the collection homepage.
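
For reference, here's roughly what those pieces might look like. The table and column names below (pages, source_text) are illustrative guesses, not necessarily FromThePage's actual schema:

    # Migration sketch: add a MySQL fulltext index to the pages table.
    # (Fulltext indexes required MyISAM tables in the MySQL of this era.)
    class AddFulltextIndexToPages < ActiveRecord::Migration
      def self.up
        execute "ALTER TABLE pages ADD FULLTEXT INDEX pages_text_index (source_text)"
      end

      def self.down
        execute "ALTER TABLE pages DROP INDEX pages_text_index"
      end
    end

    # Controller action sketch: run the search in natural-language mode.
    def search
      @query = params[:q]
      @pages = Page.find(:all,
        :conditions => ["MATCH(source_text) AGAINST(?)", @query])
    end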

Article-based Search

Once basic search was working, I could start integrating the search capability with subject indexes. Since each subject link contains the wording in the original text that was used to link to a subject, that wording can be used to seed a text search. This allows an editor to double-check pages in a collection to see if any references to a subject have been missed.

For example, Evelyn Brumfield is a grandchild who is mentioned fairly often in Julia's diaries. Julia spells her name variously as "Evylin", "Evelyn", and "Evylin Brumfield". So a link from the article page performs a full text search for "Evylin Brumfield" OR Evelyn OR Evylin.
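
A sketch of how that seeded search might be assembled, assuming each link record keeps its verbatim wording in a display_text field (a guessed column name):

    # Sketch: build a fulltext query from every phrase that has ever been
    # used to link to this subject, plus the subject's title itself.
    # In MySQL boolean mode, terms without a + or - prefix are optional,
    # so listing them side by side behaves like an OR search.
    def seed_query_for(article)
      phrases = article.page_article_links.map { |link| link.display_text }
      terms = (phrases + [article.title]).uniq
      terms.map { |t| t.include?(" ") ? %("#{t}") : t }.join(" ")
    end

    Page.find(:all,
      :conditions => ["MATCH(source_text) AGAINST(? IN BOOLEAN MODE)",
                      seed_query_for(evelyn_brumfield_article)])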

While this is interesting, it doesn't directly address the editor's need to find references they might have missed. Since we can see all the fulltext matches for Evelyn Brumfield, and we can see all the pages that link to the Evelyn Brumfield subject, why not subtract the second set from the first? An additional link on the subject page searches for precisely this set: all references to Evelyn Brumfield within the text that are not on pages linked to the Evelyn Brumfield subject.
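
The "missed mentions" search is then just a set subtraction, reusing the seeded query above (same guessed model names):

    # Sketch: pages whose text matches the subject's phrases but which
    # do not yet carry a link to that subject.
    def unlinked_mentions(article)
      matches = Page.find(:all,
        :conditions => ["MATCH(source_text) AGAINST(? IN BOOLEAN MODE)",
                        seed_query_for(article)])
      linked = article.page_article_links.map(&:page)
      matches - linked
    end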

As I write this post, the results of such a search are pretty interesting. The first two pages in the results matched the first name in "Evylin Edmons", on pages that are already linked to the Evelyn Edmonds subject. Matched pages 4-7 appear to be references to Evelyn Brumfield on pages that have not been annotated at all. But we hit pay dirt with page 3: it's a page that was transcribed and annotated very early in the transcription project, containing a reference to Evelyn Brumfield that should be linked to that subject but is not.

Questions

I originally intended to add links to search for each individual phrase linked to a subject. However, I'm still not sure this would be useful -- what value would separate, pre-populated searches for "Evelyn", "Evylin", and "Evylin Brumfield" add?

A more serious question is what exactly I should be searching on. I adopted a simple approach of searching the annotated XML text for each page. However, this means that subject name expansions will match a search, even if the words don't appear in the text. A search for "Brumfield" will return pages in which Julia never wrote Brumfield, merely because they link to "John", which is expanded to "John Brumfield". This is not a literal text search, and might astonish users. On the other hand, would a user searching for "Evelyn" expect to see the Evelyns in the text, even though they had been spelled "Evylin"?
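
One possible answer would be to index only the verbatim wording by collapsing each link to its display text before indexing. A sketch, assuming links are stored in the wiki-style [[Subject Name|verbatim text]] form used when editing, which may not be how the annotated XML actually represents them:

    # Sketch: reduce wiki-style links to the words Julia actually wrote,
    # so that "John Brumfield" expansions don't match a literal search.
    #   [[John Brumfield|John]]  => "John"
    #   [[bathing babies]]       => "bathing babies"
    def verbatim_text(annotated_text)
      annotated_text.gsub(/\[\[([^\]|]+)\|([^\]]+)\]\]/) { $2 }.
                     gsub(/\[\[([^\]]+)\]\]/) { $1 }
    end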

Monday, March 23, 2009

Feature: Mechanical Turk Integration

At last week's Austin On Rails SXSW party, my friend and compatriot Steve Odom gave me a really neat feature idea. "Why don't you integrate with Amazon's Mechanical Turk?" he asked. This is an intriguing notion, and while it's not on my own road map, it would be pretty easy to modify FromThePage to support that. Here's what I'd do to use FromThePage on a more traditional transcription project, with an experienced documentary editor at the head and funding for transcription work:

Page Forks: I assume that an editor using Mechanical Turk would want double-keyed transcriptions to maintain quality, so the application needs to present the same untranscribed page to multiple people. In the software world, when a project splits, we call this forking, and I think the analogy applies here. This feature needs to track an entirely separate edit history for the different forks of a page. This means a new attribute on the master page record describing whether more than one fork exists, and a separate edit history for each fork of a page that's created. There's no reason to limit these transcriptions to only two forks, even if that's the most common use case, so I'd want to provide a URL that will automatically create a new fork for a new transcriber to work in. The Amazon HIT (Human Intelligence Task) would have a link to that URL, so the transcriber need never track which fork they're working in, or even be aware of the double keying.
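
The bookkeeping for forks might amount to something like the following migration. This is purely a sketch, with guessed table and column names:

    # Sketch of fork support: a flag on the master page record plus a
    # page_forks table, so each fork can carry its own edit history.
    class AddForkSupport < ActiveRecord::Migration
      def self.up
        add_column :pages, :forked, :boolean, :default => false

        create_table :page_forks do |t|
          t.integer :page_id
          t.integer :transcriber_id   # the MTurk worker, once known
          t.timestamps
        end

        # Point each saved version at the fork it belongs to.
        add_column :page_versions, :page_fork_id, :integer
      end

      def self.down
        remove_column :page_versions, :page_fork_id
        drop_table :page_forks
        remove_column :pages, :forked
      end
    end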

Reconciling Page Forks: After a page has been transcribed more than once, the application needs to allow the editor to reconcile the transcriptions. This would involve a screen displaying the most recent versions of the two transcriptions alongside the scanned page image. There's likely already a decent Rails plugin for displaying code diffs, so I could leverage that to highlight differences between the two transcriptions. A fourth pane would allow the editor to paste the reconciled transcription into the master page object.
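
For the comparison itself, the diff-lcs Ruby library could compute line-by-line differences between the two fork transcriptions for side-by-side highlighting. A rough sketch:

    require 'diff/lcs'

    # Sketch: pair up corresponding lines from two fork transcriptions,
    # marking each pair as unchanged, changed, added, or deleted.
    def fork_differences(text_a, text_b)
      Diff::LCS.sdiff(text_a.split("\n"), text_b.split("\n")).map do |change|
        { :action => change.action,     # '=', '!', '+', or '-'
          :left   => change.old_element,
          :right  => change.new_element }
      end
    end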

Publishing MTurk HITs: Since each page is an independent work unit, it should be possible to automatically convert an untranscribed work into MTurk HITs, with a work item for each page. I don't know enough about how MTurk works, but I assume that the editor would need to enter their Amazon account credentials to have the application create and post the HITs. The app also needs to prevent the same user from re-transcribing the same page in multiple forks.

In all, it doesn't sound like more than a month or two worth of work, even performed part-time. This isn't a need I have for the Julia Brumfield diaries, so I don't anticipate building this any time soon. Nevertheless, it's fun to speculate. Thanks, Steve!

Wednesday, March 18, 2009

Progress Report: Page Thumbnails and Sensitive Tags

As anyone reading this blog through the Blogspot website knows, visual design is not one of my strengths. One of the challenges that users have with FromThePage is navigation. It's not apparent from the single-page screen that clicking on a work title will show you a list of pages. It's even less obvious from the multi-page work reading screen that the page images are accessible at all on the website.

Last week, I implemented a suggestion I'd received from my friend Dave McClinton. The work reading screen now includes a thumbnail image of each manuscript page beside the transcription of that page. The thumbnail is a clickable link to the full screen view of the page and its transcription. This should certainly improve the site's navigability, and I think it also increases FromThePage's visual appeal.

I took a different approach to processing these images than the one I'd used before. For transcribable page images, I had modified the images offline through a batch process, then transferred them to the application, which serves them statically. The only dynamic image processing the FromThePage software did for end users was for the zoom feature. This time, I added a hook to the image link code, so that if a thumbnail was requested by the browser, the application would generate it on the fly. This turned out to be no harder to code than a batch process, and the deployment was far easier. I haven't seen a single broken thumbnail image yet, so it looks like it's fairly robust, too.
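
The on-the-fly piece is only a few lines. A sketch using RMagick (not necessarily the image library FromThePage actually uses), with guessed paths and sizes:

    require 'RMagick'

    # Sketch: generate and cache a thumbnail the first time it is requested.
    def thumbnail_path(page_image_path, width = 150)
      thumb_path = page_image_path.sub(/\.jpg$/, "_thumb.jpg")
      unless File.exist?(thumb_path)
        image = Magick::Image.read(page_image_path).first
        image.resize_to_fit(width).write(thumb_path)
      end
      thumb_path
    end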

The other new feature I added last week was support for sensitive tags. The support is still fairly primitive -- enclose text in sensitive tags and it will only be displayed to users authorized to transcribe the work -- but it gets the job done and solves some issues that had come up with Julia Brumfield's 1919 diary. Happily, this took less than 10 minutes to implement.
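
The display logic for sensitive passages can be a simple filter. A sketch, assuming the markup uses a <sensitive> element (my guess at the tag name) and that the app already knows whether the current viewer is authorized to transcribe the work:

    # Sketch: hide <sensitive> passages from unauthorized viewers and
    # strip the wrapper tags for everyone else.
    def filter_sensitive(xml_text, authorized)
      if authorized
        xml_text.gsub(%r{</?sensitive>}, "")
      else
        xml_text.gsub(%r{<sensitive>.*?</sensitive>}m, "[private]")
      end
    end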

Sunday, March 15, 2009

Feature: Editorial Toolkit

I'm pleased to report that my cousin Linda Tucker has finished transcribing the 1919 diary. I've been trying my best to keep up with her speed, but she's able to transcribe two pages in the amount of time it takes me to edit and annotate a single, simple page. If the editing work requires more extensive research, or (worse) reveals the need to re-do several previous pages, there is really no contest. In the course of this intensive editing, I've come up with a few ideas for new features, as well as a few observations on existing features.

Show All Pages Mentioning a Subject

Currently, the article page for each subject shows a list of the pages on which the subject is mentioned. This is pretty useful, but it really doesn't serve the purposes of the reader or editor who wants to read every mention of that subject, in context. In particular, after adding links to 300 diary pages, I realized that "Paul" might be either Paul Bennett, Julia's 20-year-old grandson who is making a crop on the farm, or Paul Smith, Julia's 7-year-old grandson who lives a mile away from her and visits frequently. Determining which Paul was which was pretty easy from the context, but navigating the application to each of those 100-odd pages took several hours.

Based on this experience, I intend to add a new way of filtering the multi-page view, which would display the transcriptions of all pages that mention a subject. I've already partially developed this as a way to filter the pages within a work, but I really need to 1) see mentions across works, and 2) make this accessible from the subject article page. I am embarrassed to admit that the existing work-filtering feature is so hard to find that I'd forgotten it even existed.

Autolink

The Autolink feature has proven invaluable. I originally developed it to save myself the bother of typing [[Benjamin Franklin Brumfield, Sr.|Ben]] every time Julia mentioned "Ben". However, it's proven especially useful as a way of maintaining editorial consistency. If I decided that "bathing babies" was worth an index entry on one page, I may not remember that decision 100 pages later. However, if Autolink suggests [[bathing babies]] when it sees the string "bathed the baby", I'll be reminded of that decision. It doesn't catch every instance, but for subjects that tend to cluster (like occurrences of newborns), it really helps out.
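
The core of a feature like Autolink can be a simple phrase match. A sketch, assuming a hash of previously used link phrases mapped to their subjects (the real matching may well be smarter than this):

    # Sketch: suggest subject links by scanning a page's text for phrases
    # that have already been used as link text elsewhere in the collection.
    def autolink_suggestions(page_text, phrase_to_subject)
      phrase_to_subject.select do |phrase, subject|
        page_text =~ /\b#{Regexp.escape(phrase)}\b/i
      end
    end

    # phrase_to_subject might look like:
    #   { "Ben"             => "Benjamin Franklin Brumfield, Sr.",
    #     "bathed the baby" => "bathing babies" }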

Full Text Search

Currently there is no text search feature. Implementing one would be pretty straightforward, but in addition to that I'd like to hook in the Autolink suggester. In particular, I'd like to scan through pages I've already edited to see if I missed mentions of indexed subjects. This would be especially helpful when I decide that a subject is noteworthy halfway through editing a work.

Unannotated Page List

This is more a matter of work flow management, but I really don't have a good way to find out which pages have been transcribed but not edited or linked. It's really hard to figure out where to resume my editing.

[Update: While this blog post was in draft, I added a status indicator to the table of contents screen to flag pages with transcriptions but no subject links.]

Dual Subject Graphs/Searches

Identifying names is especially difficult when the only evidence is the text itself. In some cases I've been able to use subject graphs to search for relationships between unknown and identified people. This might be much easier if I could filter either my subject graphs or the page display to see all occurrences of subjects X and Y on the same page.
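
The dual-subject filter itself is just a set intersection over the existing link data. A sketch with the same guessed model names used above:

    # Sketch: pages on which both subject X and subject Y are linked.
    def pages_mentioning_both(subject_x, subject_y)
      pages_x = subject_x.page_article_links.map(&:page)
      pages_y = subject_y.page_article_links.map(&:page)
      pages_x & pages_y
    end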

Research Credits

Now that the Julia Brumfield Diaries are public, suggestions, corrections, and research are pouring in. My aunt has telephoned old-timers to ask what "rebulking tobacco" refers to. A great-uncle has emailed with definitions of more terms, and I've had other conversations via email and telephone identifying some of the people mentioned in the text. To my horror, I find that I've got no way to attribute any of this information to those sources. At minimum, I need a large, HTML acknowledgments field at the collection level. Ideally, I'd figure out an easy-to-use way to attribute article comments to individual sources.

Monday, February 9, 2009

GoogleFight Resolves Unclear Handwriting

I've spent the last couple of weeks as a FromThePage user working seriously on annotation. This mainly involves identifying the people and events mentioned in Julia Brumfield's 1918 diary and writing short articles to appear as hyperlinked pages within the website, or be printed as footnotes following the first mention of the subject. Although my primary resource is a descendant chart in a book of family history, I've also found Google to be surprisingly helpful for people who are neighbors or acquaintances.

Here's a problem I ran into in the entry for June 30, 1918:

In this case, I was trying to identify the name in the middle of the photo: Bo__d Dews. The surname is a bit irregular for Julia's hand, but Dews is a common surname and occurs on the line above. In fact, this name is in the same list as another Mr. Dews, so I felt certain about the surname.

But what to make of the first name? The first two and final letters are clear and consistent: BO and D. The third letter is either an A or a U, and the fourth is either N or R. We can eliminate "Bourd" and "Boand" as unlikely phonetic spellings of any English name, leaving "Bound" and "Board". Neither of these are very likely names... or are they?

I thought I might have some luck by comparing the number and quality of Google search results for each of "Board Dews" and "Bound Dews". This is a pretty common practice used by Wikipedia editors to determine the most common title of a subject, and is sometimes known as a "Google fight". Let's look at the results:

"Bound Dews" yields four search results. The first two are archived titles from FromThePage itself, in which I'd retained a transcription of "Bound(?) Dews" in the text. The next two are randomly-generated strings on a spammer's site. We can't really rule out "Bound Dews" as a name based on this, however.

"Board Dews" yields 104 search results. The first page of results contains one person named Board Dews, who is listed on a genealogist's site as living from 1901 to 1957, and residing in nearby Campbell County. Perhaps more intriguing is the other surnames on the site, all from the area 10 miles east of Julia's home. The second page of results contains three links to a Board Dews, born in 1901 in Pittsylvania County.

At this point, I'm certain that the Bo__d Dews in the diary must be the Board Dews who would have been a seventeen-year-old neighbor. But I'm still astonished that I can resolve a legibility problem in a local diary with a Google search.

Thursday, February 5, 2009

Progress Report: Eight Months after THATCamp

It's been more than half a year since I've updated this blog. During that period, due to some events in my personal life, I was only able to spend a month or so on sustained development, but nevertheless made some real progress.

The big news is that I announced the project to some interested family members and have acquired one serious user. My cousin-in-law, Linda Tucker, has transcribed more than 60 pages of Julia Brumfield's 1919 diary since Christmas. In addition to her amazing productivity transcribing, she's explored a number of features of the software, reading most of the previously-transcribed 1918 diary, making notes and asking questions, and fighting with my zoom feature. Her enthusiasm is contagious, and her feedback -- not to mention her actual contributions -- has been invaluable.

During this period of little development, I spent a lot of time as a user. Fewer than 50 pages remain to transcribe in the 1918 diary, and I've started seriously researching the people mentioned in the diary for elaboration in footnotes. It's much easier to sustain work as a user than as a developer, since I don't need an hour or so of uninterrupted concentration to add a few links to a page.

I've also made some strides on printing. I jettisoned DocBook after too many problems and switched over to using Bruce Williamson's RTeX plugin. After some limited success, I will almost certainly craft my own set of ERb templates that generate LaTeX source for PDF generation. RTeX excels in serving up inline PDF files, which is somewhat antithetical to my versioned approach. Nevertheless, without RTeX, I might have never ventured away from DocBook. Thanks go to THATCamper Adam Solove for his willingness to share some of his hard-won LaTeX expertise in this matter.

Although I'm new to LaTeX, I've got footnotes working better than they were in DocBook. I still have many of the logical issues I addressed in the printing post to deal with, but am pretty confident I've found the right technology for printing.

I'm also working on re-implementing zoom in GSIV, rather than my cobbled-together solution. The ability to pan a zoomed image has been consistently requested by all of my alpha testers, the participants at THATCamp, and its lack is a real pain point for Linda, my first Real User. I really like the static-server approach GSIV takes, and will post when the first mock-up is done.