Technology Staff Develop New Video Restoration Software
While the vast majority of USC Shoah Foundation’s testimonies can be viewed at 48 sites all over the world, some of the 235,005 tapes that make up the Visual History Archive have been rendered unwatchable – a consequence of faulty recording and 20-year-old technology. But thanks to the efforts of a few of USC Shoah Foundation’s Information Technology Services (ITS) staff, that’s about to change.
In June 2012, ITS completed its Preservation Project, digitizing all 52,000 testimonies, originally recorded on Betacam SP videotapes between 1994 and 1999, into a variety of commonly used formats.
ITS staff then embarked on the Restoration Project, which aims to perform additional repairs on the approximately 5 percent of tapes that have audio or visual problems. The project will be complete around July 2014.
However, some of these tapes were seemingly damaged beyond repair. The original recordings used interlaced video, in which each frame of video is made up of two separate fields recorded with separate heads in the video camera. If one head failed to record properly, leaving only one field, ITS could restore the video. But if both heads failed, the video could not be fixed.
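Conceptually, an interlaced frame can be treated as two half-height images woven together line by line. The sketch below is a simplified illustration in Python with NumPy; the function names and the line-doubling approach are assumptions for clarity, not the tools ITS actually used. It shows how a frame splits into its two fields and how a watchable frame might be approximated from the one surviving field when only a single head failed.

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split an interlaced frame into its two fields (even and odd scanlines)."""
    return frame[0::2], frame[1::2]

def rebuild_from_field(field: np.ndarray) -> np.ndarray:
    """Approximate a full frame from a single surviving field by repeating each
    scanline; half the vertical detail is lost, but the image is watchable."""
    return np.repeat(field, 2, axis=0)
```

When both fields of a frame are damaged, there is no surviving scanline data to work from, which is why those tapes initially appeared to be beyond repair.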
Ryan Fenton-Strauss, video archive and post-production manager at ITS, was tasked with researching video restoration techniques currently being used in the motion picture industry. He found that there were very few existing options for restoring tape-based material.
“It seemed terribly unfortunate that after a survivor had lived through the Holocaust and poured his or her heart into a testimony, that parts of it would be lost due to a technical problem during the recording process,” Fenton-Strauss said.
However, Fenton-Strauss had an epiphany while sorting family photos with Google’s Picasa tool. He noticed that Picasa’s facial recognition software was so powerful it could recognize his six-year-old daughter even in photos taken when she was a baby.
“I realized then that if we could automate the process of identifying the "good" and "bad" images using image recognition software, then we could correct some of our most difficult video problems,” Fenton-Strauss said.
He realized that if he broke up the video fields into a sequence of still images, he might then be able to isolate the "bad" images and replace them with the nearest previous "good" image.
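The article does not describe the code itself, but the idea maps onto a simple pass over the exported images. The sketch below is a hypothetical illustration in Python; `is_good` stands in for whatever classifier flags usable images.

```python
def patch_bad_images(images, is_good):
    """Replace each 'bad' still image with the nearest previous 'good' one.

    images  -- the video fields exported as an ordered sequence of still images
    is_good -- a classifier (hypothetical here) returning True for usable images
    """
    patched = []
    last_good = None
    for img in images:
        if is_good(img):
            last_good = img
            patched.append(img)
        else:
            # Hold the last good image in place of the damaged one; bad images
            # at the very start have no predecessor and are left for manual review.
            patched.append(last_good if last_good is not None else img)
    return patched
```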
Fenton-Strauss worked with ITS intern Sindhu Jagadeesh over the course of a semester to write software that would fill in the missing images and to work through the difficult workflow challenges of manipulating so many images. Together they produced their first restored video and a prototype of the image-recognition technique.
This semester, Fenton-Strauss has continued refining the technique with the help of intern Ivan Alberto Trujillo Priego, a graduate student in biomedical engineering.
“He would stand in our machine room for hours, troubleshooting and correcting videos using the video hardware solution,” Fenton-Strauss said. “He had a really good intuition for the project and he seemed to truly enjoy the process.”
While Picasa’s image recognition software could help correct some of the videos, others still required hours of manually sifting through the images. Priego suggested that they use a more powerful image recognition system, one he had used for his undergraduate thesis, to sort through the images.
So far, Priego has replicated the workflow within the image recognition environment of National Instruments Vision Builder, and he and Fenton-Strauss have now restored three videos using almost no manual labor.
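Vision Builder itself is configured graphically rather than in code, so the following is only a rough stand-in for the kind of automated check such a system might run: flagging fields that are nearly blank or swamped by noise. The heuristic and thresholds are assumptions for illustration, not the project’s actual classifier.

```python
import numpy as np

def looks_bad(image: np.ndarray,
              blank_thresh: float = 5.0,
              noise_thresh: float = 80.0) -> bool:
    """Crude illustrative good/bad check: flag an image as bad if it is nearly
    blank (very low pixel variation) or dominated by noise (very high variation).
    The thresholds would need tuning against real footage."""
    spread = float(image.std())
    return spread < blank_thresh or spread > noise_thresh
```

A check along these lines could serve (negated) as the `is_good` test in the earlier sketch, removing most of the manual sorting.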
Fenton-Strauss said he hopes to go into production with this system early next year.