13. Phase II image examination

Image review. In early 1997, a subset of the captured images was made available on the World Wide Web as part of the Library's online Federal Theatre Project collection. In its final form, the online presentation will be framed by a finding aid that describes the content of the entire collection, only a portion of which has been digitized. Links to the digital reproductions retrieve both the document images created by Picture Elements in this demonstration project and a set of pictorial images made under other auspices. The Library invites end-users who consult the online Federal Theatre Project collection to inspect the images and forward their comments to the National Digital Library Program collections information email address ([email protected]).

The images created during this demonstration project received a preliminary examination at the Library upon receipt from Picture Elements. This examination was carried out by staff in the National Digital Library Program and the Music Division, custodians of the Federal Theatre Project collection.

Lack of objective measuring tools. When Library staff evaluated the Picture Elements images, they employed "informed subjective judgment." The examiners were struck by a factor that the consultants had not elucidated during Phase I: the seeming absence of objective measuring tools. Several staff members had experience in microform production and were aware of the use of targets and densitometry to measure microfilm's spatial resolution, tonal resolution, and consistency of color.

The Picture Elements staff explained that three test chart scans were made at the start of each day's scanning, two black and white (the IEEE-167A fax chart and the AIIM MS-44 test chart No. 2) and one color (the RIT Process Ink Gamut chart). Since each set of test scans did not correspond to a single directory of delivered images, but rather to a single day, these test scans were written to a single disk at the end of scanning. Appendix J contains one scan of each of these test charts. The scans of the test charts were intended, however, as a reference available to future researchers into the captured images rather than as an aid to the review process.

In the post-scanning discussion of measurement tools, the consultants described an approach used to measure the performance of scientific and military optical systems: Modulation Transfer Function or MTF. This method is better suited for the characterization of systems used to produce tonal images than methods based on measurements of bitonal grid patterns. Work by the Mitre Corporation originally performed to assist the FBI's characterization of fingerprint scanners has raised the interest of the document imaging community in MTF measurement. The calibrated, continuous-tone target suitable for MTF measurement and the corollary software that would have to be written or adapted to make the measurements were not budgeted for this project.
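
For readers who want a concrete picture of the method, below is a minimal sketch, in Python, of how an MTF curve can be computed from a one-dimensional profile sampled across a sharp edge in a scan of a calibrated target: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform. The target, the extraction of the edge profile, and the windowing choice are all illustrative assumptions; no such software was produced for this project.

    # Minimal sketch: MTF from an edge profile (slanted-edge style analysis).
    # Assumes "esf" holds pixel values sampled along a line crossing a sharp
    # dark/light edge in a scan of a hypothetical calibrated target.
    import numpy as np

    def mtf_from_edge(esf, dpi=300.0):
        lsf = np.gradient(esf.astype(float))           # line spread function
        lsf *= np.hanning(lsf.size)                    # window to reduce truncation ripple
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                  # normalize so MTF(0) = 1
        pitch_mm = 25.4 / dpi                          # sample pitch in millimeters
        freqs = np.fft.rfftfreq(lsf.size, d=pitch_mm)  # cycles per millimeter
        return freqs, mtf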

In a separate consultancy, James Reilly and Franziska Frey of the Image Permanence Institute (IPI) reported to the Library that, indeed, there are no ready means for lay workers to objectively measure tonal digital images. (Properly equipped engineers can measure aspects of images using laboratory equipment.) The IPI report is Recommendations for the Evaluation of Digital Images Produced from Photographic, Micrographic, and Various Paper Formats. In a follow-on consultancy, Reilly and Frey will develop an approach and a toolset for the Library to use in judging grayscale digital images.

Preservation-quality images: spatial and tonal resolution. The preliminary examination indicated that the selection of 300 dpi as the level of spatial resolution was satisfactory. There were no instances in which significant features on original documents were lost. The examination, however, raised some questions about the clarity of some images, and the discussion of this matter highlighted the importance of tonal resolution (the quantity and distinguishability of tones) in the images. For binary images, spatial resolution is the key factor in the capture of fine detail; with tonal images, both spatial and tonal resolution are important.

Preservation-quality images: distribution of tones and image clarity. The following two images are examples of the type that seemed to show loss of clarity when compared to the original documents. In these digital images, for example, the openings in several of the letter a characters were filled or partially filled. In the paper originals, the a was open or relatively open. In addition, the overall appearance of the digital images was dark, in the words of one Library staff member, "as if a color slide had been underexposed by a half stop."

As these and similar images were examined, Library staff asked whether capturing at a setting that yielded dark-looking images might not have contributed to the loss of clarity, just as excessive inking in letterpress printing might have closed up openings in a letter like a. The person who examined the greatest number of images observed that, for relatively "clean" documents, the darker image tone caused a number of stray or irrelevant marks to become visible, leading to the related question of whether these would have benefited from a "lighter touch."

Prompted by these samples, the Library team asked, "Should the contrast stretching applied to the images have produced a lighter value for the paper relative to the density of the strokes?" and "Would lighter values overall have provided a better reproduction of the letter a?"

Picture Elements offered a two-part response. First, an observer should never expect a reproduction to be identical to an original, and the perception of the original can itself be influenced by subtle viewing factors. For example, the consultants noted, changing the angle of the original paper page in the light can make a significant difference in one's ability to read poor-quality information. In addition, it is always possible--as is the case with microform--that some researchers will have to examine some original documents in order to answer all of their questions.

Second, Picture Elements reminded the Library that in Phase I, the discussion of contrast stretching had focused on the rescue of illegible documents, not aesthetic cleanup. The sample that had received the committee's attention was a sheet of dark red paper printed with black ink. The reason to stretch contrast is to avoid leaving significant portions of the tonal range unused when an image's initial representation occupies only a small portion of that range. The algorithm Picture Elements used for contrast stretching makes the lightest pixels in the image the highest value of white but does not raise all of the lighter pixels to this same value. If all lighter values were raised, then the highest values would be "clipped," i.e., rendered the same as lower values, and some information about tonality in the paper texture would be lost. The Picture Elements contrast-stretching algorithm used to capture the preservation-quality images permits a hardware implementation that analyzes the image histogram during scanning and makes adjustments without human intervention. In order to capture faint keystrokes, it is necessary to use dark settings; as a result, the settings may not yield the most aesthetically pleasing tonal archival images.
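
To make the distinction concrete, the sketch below (in Python) performs a contrast stretch of the non-clipping kind described: the single lightest value in an 8-bit grayscale image is mapped to full white and the darkest to full black, with intermediate values remapped linearly so that no two distinct input values are merged. The linear mapping is an assumption for illustration; the histogram analysis in Picture Elements' hardware implementation was not published.

    # Minimal sketch: "semi-reversible" contrast stretching without saturation.
    import numpy as np

    def stretch_contrast(gray):
        """Linearly remap an 8-bit grayscale array so its histogram spans 0-255."""
        lo, hi = int(gray.min()), int(gray.max())
        if hi == lo:                    # flat image: nothing to stretch
            return gray.copy()
        stretched = (gray.astype(float) - lo) * 255.0 / (hi - lo)
        return np.round(stretched).astype(np.uint8)   # no range of values is clipped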

The Library had noted that some aesthetic improvement was seen when the two images were opened in graphic-arts software and the brightness and contrast were increased. Picture Elements responded that, when this was done in a manner that clipped nearly 50 percent of the pixels, distracting dark patches remained on the background of the page. In order to achieve an aesthetically pleasing effect, 90 percent of the pixels had to be clipped, which would be ill-advised during the production of preservation-quality images, although it might be entirely appropriate (and still an option) for access-quality images.
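
The percentages at issue can be made concrete with a small sketch that reports what fraction of pixels a given brightness/contrast adjustment would drive to full white. The gain-and-offset model is an illustrative assumption, not the actual algorithm of the graphic-arts package used.

    # Minimal sketch: what share of pixels would a given adjustment clip to white?
    import numpy as np

    def clipped_fraction(gray, gain, offset):
        adjusted = gray.astype(float) * gain + offset
        return float(np.mean(adjusted >= 255.0))   # fraction of pixels driven to white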

Finally, Picture Elements noted that the contrast stretching provided in these two examples was sufficient to permit the binarization algorithm to work when the access images were produced (see discussion of access images below).

The preservation-quality images were produced in a manner that would minimize the loss of the image content of the original. That is why contrast stretching was limited to a "semi-reversible" amount, with the lightest pixels mapped to full white, but with no saturation (where a range of relatively light pixels would be mapped irretrievably to full white).

Improving the visual quality of the originals (e.g., reducing the visibility of "stray or irrelevant marks") would introduce some loss and therefore should not be done in the preservation image; any lossy processing belongs in the production of some form of access image. The goal of the preservation-quality image, Picture Elements asserted, is to scan the original only once, picking up all of the information that is technically feasible. Any intentional loss of visible information should be incorporated in the production of some class of access images intended for a certain use.

Preservation-quality images: color and color fidelity. The examination of images did not return in a systematic way to the most intriguing aspect of color reproduction: when is it needed? The image reviewers were drawn to color reproduction: having chromatic cues on the page makes it easier to spot the script lines the stage director underlined in red, although no one asserted that this determination could not have been made from a monochrome image (such as the binary access image of the same page). Nor did the reviewers encounter grayscale examples where they wished that capture had been in color. The Library looks forward to hearing comments on this question from researchers who use the collection.

The examination of color images identified some examples with a greenish cast that had not been present in the paper originals. In others, red values did not closely resemble the shades of red in the originals.

After receiving the Library's review, Picture Elements re-examined these images. The consultant first isolated a small portion of the "overscan" at the bottom. This area reproduces the white color of the scanner lid and not the particular shade of the paper. The red, green, and blue (RGB) values for the overscan-area sample were R=220, G=231, and B=215 (average=222, delta +/- 4 percent). For a perfect white, the RGB values should be equal and should approach 255; an evenly weighted 222 would look like a color-free light gray. The difference in RGB values, although slight, favored green and gave the image a slight but noticeable color cast.
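
The arithmetic of this check is simple to reproduce. The sketch below, in Python, averages the red, green, and blue channels over a patch assumed to show only the scanner lid (the file name and patch coordinates are placeholders) and reports each channel's deviation from neutral; equal channel means indicate a color-free gray, while a high green mean relative to red and blue is the cast the re-examination found.

    # Minimal sketch: checking an overscan patch for a color cast.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("scan.tif").convert("RGB"), dtype=float)
    patch = img[-40:, :, :]                    # hypothetical overscan strip at the bottom
    means = patch.reshape(-1, 3).mean(axis=0)  # per-channel averages, e.g. 220/231/215
    neutral = means.mean()
    for name, m in zip("RGB", means):
        print(f"{name}={m:.0f} ({100 * (m - neutral) / neutral:+.1f}% from neutral)")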

This observation prompted Picture Elements to review the color test images made for Phase I of the project. This review revealed that some of the images appeared to have a subtle, increasing green shift as one moved from the top to the bottom of the page.

Picture Elements was unable to determine the cause of the color shift, speculating that the lamp might be changing color very slightly during the scan. In an echo of laundry detergent advertisements, the consultants noted that it is very difficult to obtain white whites when producing color images of documents from a CCD sensor consisting of three separate color channels.

It is worth noting that the challenges associated with the maintenance of color fidelity in digital systems have been reported in literature associated with the printing industry, which today relies heavily on computerized design and production. Several recent trade journal articles have highlighted the need for better color management systems, and the IPI report on objective measures for digital images skirts the issue of color.6 Thus there was no expectation that this demonstration project would wrestle with color fidelity in a conclusive way. Color capture was addressed to explore the question of when the added capture time and storage requirements for color would be warranted in a manuscript digitization project.

Preservation-quality images: black background sheet to reduce print-through. Early in the production phase, Picture Elements staff noted that many documents were on onion-skin or other thin paper and that writing on the backs of these sheets was visible in the scans.

This effect was increased by the scanner's white lid; light passed through the sheet, bounced back from the lid, and passed through the paper again, making the back-side writing more visible. In an attempt to reduce the visibility of the print-through, black paper was placed between the sheet and the scanner lid. This action, however, heightened the visibility of the onion-skin paper's texture: the "valleys" in the paper were darkened where they pressed against the black sheet, increasing the difference in tone from the paper-texture "hills." This increase in visible paper texture was slightly bothersome in the tonal preservation-quality images and led to problems when the binarization algorithm was applied: in some circumstances, the texture could be mistaken for meaningful marks on the sheet. After these effects were observed, use of the black backing sheet ceased.

Preservation-quality images: reproducing the entire sheet. The scanner employed in this project had an active area of 8 1/2x14 inches. This meant that it was not possible to scan beyond the left and right edges of 8 1/2x11-inch typing-paper-size documents or beyond any edge of legal-size documents. Thus the project could not adopt the preservation-microfilming convention of showing that the entire item has been captured by displaying the full sheet of paper.

Access images: clean appearance and printability. In some cases, the lack of clarity in the characters as rendered on onionskin paper carbon copies robs the binary images of legibility. But for many of these images, when they are printed on a laser printer, the paper copies present a very clean appearance. In contrast, the printed copies of the tonal images have an overall gray cast produced by the effect of halftoning at print time; the background tonal values ("the paper") printed with a light pattern of dots. (At the time of these inspections, the Library did not have at hand a special printer capable of rendering tonal images on paper.)

Had a different posture been taken at the outset of the project, the production of a variety of special-purpose access copies could have been planned for. For instance, in addition to a binary print access image, the production of several other special-purpose derivative image types is possible, thanks largely to the high fidelity of the tonal preservation-quality image.

Picture Elements agreed that multiple access images are desirable, and that their particulars will change with time and amongst users. In the future, they will be built instantly on demand by software associated with online delivery, using the high-quality preservation image as a source.

Access images: information loss. Some information was lost in the binarization process. The collection's curator offered the strongest statements on this topic, pointing out that some dim or subtle writings, including erased pencilled notes, were visible in the tonal images and not in the binary derivatives. If he were a researcher consulting the collection online, the curator said, he would wish to have access to the preservation-quality images. A similar benefit would follow from using tonal access-quality images. Thresholding has an inherent tradeoff between producing a clean, speckle-reduced image and losing the faintest markings. Reducing speckles is necessary for two reasons: because the tonal reproduction curve (TRC) of printers gives undue weight to small black specks (due to toner spread), and because the T.6 compression algorithm performs extremely poorly when they are present.

In the post-production discussion of these two examples, Picture Elements described the binarization process and how it made use of "iterative thresholding," the automated process by which the thresholding sensitivity is selected. The writing in sample A (image 0003a.jpg) was lost, the consultant said, because the iterative thresholding chose a sensitivity low enough to clean up the noise in the image. An examination of the log produced when this image was processed shows that the iterative thresholder evaluated four decreasing sensitivities that produced compressed image sizes of 44 KB (kilobytes), 30 KB, 21 KB, and 18 KB respectively. The drop in file size from 30 KB to 21 KB was large enough that the removed content was likely noise, but the drop from 21 KB to 18 KB was small enough that it was not worth the potential information loss associated with the smaller image, so the 21 KB image was automatically chosen. The tradeoff with the automatic iterative thresholding algorithm is between having a larger average file size (and noisier images) and accepting some dropout when an image contains a lot of noise at the same contrast as some of the significant information.
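
A minimal sketch of this selection logic follows, in Python: binarize at a series of decreasing sensitivities, compress each result with Group 4 (T.6), and stop when the reduction in compressed size between steps becomes too small to justify further dropout. The threshold values and the 0.8 ratio cutoff are illustrative assumptions chosen to be consistent with the file sizes reported for samples A and B; Picture Elements' production parameters were not published.

    # Minimal sketch: iterative thresholding driven by compressed file size.
    import io
    from PIL import Image

    def pick_threshold(gray, thresholds=(160, 140, 120, 100), ratio_cutoff=0.8):
        previous_size, previous_img = None, None
        for t in thresholds:                                  # decreasing sensitivity
            bw = gray.point(lambda p, t=t: 255 if p > t else 0).convert("1")
            buf = io.BytesIO()
            bw.save(buf, format="TIFF", compression="group4") # T.6 compression
            size = buf.tell()
            if previous_size is not None and size / previous_size > ratio_cutoff:
                return previous_img   # small gain: further dropout is not worth it
            previous_size, previous_img = size, bw
        return previous_img

Applied to sample A's reported sizes, the 21-to-18 KB step fails the cutoff and the 21 KB image is kept; applied to sample B's sizes (discussed below), the 71-to-60 KB step fails immediately and the 71 KB image is kept, matching the outcomes described.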

Picture Elements pointed out that a key benefit of iterative thresholding is that it is completely automatic. Putting a person in the loop (who could better discriminate between information and noise) would probably have improved this image, but at the expense of more noise, a larger file size, and increased labor costs. In a high-volume production line, however, the differences under discussion here might not have been noticed even in such a manual approach, especially if the operator was less than fully attentive.

Picture Elements reported that sample B (image 0010a.jpg) represented an example in which the iterative thresholding produced an optimum result by showing a hint of the erased writing. This is preferable, the consultant said, to picking up all of the erased writing, which would fail to signal that the erasure had occurred. It is true that if someone needs to investigate the erasure, the tonal image will have to be examined. In this case, the iterative thresholding process evaluated two decreasing sensitivities that produced compressed file sizes of 71 KB and 60 KB. The ratio of compressed file sizes was small enough that the image was unlikely to have excessive noise, so the sensitivity that produced the 71 KB image file was chosen: this picked up all of the writing and the "hint" of erased writing.

Access images: the WWW environment. As this project proceeded and as the time neared for presenting the Federal Theatre Project collection on the World Wide Web, the Library saw that the binary access images would not serve for "screen" or "display" access and navigation. Meanwhile, during 1996, a number of libraries began demonstrating effective presentations of document image sets in WWW browser software; the Library of Congress soon followed suit.7 These presentations exploit tonal images in the GIF format, structured to permit end users to page through a multi-page document. The same approach can be used with JPEG files, including progressive JPEG.

One portion of the Library's online presentation of the Federal Theatre Project collection includes both the preservation-quality and access-quality document images created by Picture Elements during this demonstration project. In order to offer paging navigation, the Picture Elements images have been placed in browser-based paging displays like those cited above. As end-users page through a GIF-image paging set, they can call up the Picture Elements-produced images. In effect, there will be two access images: print-access images in the TIFF format with Group 4 compression (produced by Picture Elements during the demonstration project) and tonal screen-access images in the GIF format (produced by the Library after completion of the demonstration project).

In order to work well on today's display screens and to be small enough for easy transmission on the Internet, the spatial and tonal resolution of the GIF images has been reduced from the originals. The GIF images are at about 60 dpi with a tonality of 4 bits per pixel.
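
A derivative of this kind can be produced in a few lines. The sketch below (file names are placeholders) downsamples a 300 dpi grayscale master by a factor of five, to roughly 60 dpi, and quantizes it to a 16-level (4-bit) palette before saving it as a GIF; it illustrates the stated parameters and is not the Library's actual production procedure.

    # Minimal sketch: screen-access GIF from a tonal preservation master.
    from PIL import Image

    master = Image.open("0003.tif").convert("L")             # tonal preservation image
    w, h = master.size
    screen = master.resize((w // 5, h // 5), Image.LANCZOS)  # 300 dpi -> ~60 dpi
    screen.quantize(colors=16).save("0003.gif")              # 16 gray levels = 4 bits/pixel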

Access images: types will change over time. At the end of the project, the project planners and consultants reflected on the apparent need to produce a set of three images for each Federal Theatre Project collection document. Tonal preservation-quality images provide the most faithful reproduction of the original document but are very large and thus cumbersome to view and print. Binary printing-access images offer clean printouts and are conveniently small but may suffer some information loss. Tonal display-access images open easily in WWW browser software and can be used to navigate through a paging set but their reduced resolution means they offer an imperfect reproduction of the original document.

Archiving the highest-quality image is sensible and necessary. But are the two access image types necessary? Certainly, changes in technology will lead to changed practice. Some online projects have demonstrated ways to create GIF images for end users on the fly; adoption of this practice would eliminate the need to produce and store such files. Greater network bandwidth and faster computers in the future will make it easier to display the preservation-quality images and reduce the need to have separate display-access images on hand. Improved printing options may make it easier for end-users sitting at desktop computers to print a tonal image, thus reducing the need for binary access images. Other more advanced technologies, further in the future, may make it possible for institutions to produce many types of derivative images on the fly to meet the end-user's immediate needs by dynamically processing the archival version of the images.
