My role at the DEG was primarily focused on post-capture audio/video
editing and mastering, using a combination of Final Cut Pro for non-linear
editing, Compressor for video encoding and format conversions, and Audacity
for audio tracking, editing, and mastering. Because I had an editing background
in audio, I was hired under the assumption that I could easily transfer that
knowledge to video, which I believe I achieved.
The videos DEG creates vary, but I spent most of my time on internal event videos
and lecture-style videos from NYPL's Live series. Although I worked primarily
in a post-production capacity, there were a few times when I went off-site
and actually surveyed and/or participated in the video shooting process itself.
Unfortunately, none of the videos that I edited myself are available to the
public yet, but they will be very soon. To give you a taste of the type of
videos I edited, check out the following:
These are good examples of the sort of videos James and I produce. You have
simple one-camera interview footage of the narrator, combined with environmental
footage that helps viewers situate themselves within the narrator's ideas.
An A/V bumper and watermark are added during the mastering process.
I worked on the transcript of this video, then suggested the most interesting
segments of the lecture for inclusion in the video.
A more complicated type of video that James and I make involves two-camera
shoots, normally reserved for NYPL Live events, which feature both moderators/
narrators and guest speakers or lecturers. One camera is usually a static shot that
captures the guest, while the other either zooms in on the moderator or frames
both the moderator and guest together on the stage. Editing is all about
rhythm, and knowing when to cut between camera shots is therefore a matter
of creating a seamless, energetic, and natural feel, so that the viewer is
unaware of the edits; instead, the edits function as eyes do during conversation,
flowing between points of attention in an organic way. See below:
The rest of my time was spent helping to vet the new NYPL site for data asset errors.
Each video is assigned to a node, its own web page containing all pertinent metadata
and styling, which then gets pushed out to the front-facing site. I had a spreadsheet of about
500 videos that needed to be QA tested and signed off on. I would open each node and
check whether any metadata was missing, whether the links worked, and whether the HTML
was properly styled (titles italicized, correct header sizes, etc.). I then created a color-coded
system in the spreadsheet that indicated which phase of error testing each node was in, so that
my supervisor James would have a more streamlined approach to fixing the nodes. Some of the
most interesting work I did involved researching metadata encoding standards when considering
how to deal with certain creator/contributor variants that were not easy to identify. I enjoyed the
process of investigating standards and identifying issues.
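To give a sense of what those per-node QA checks look like, here is a minimal Python sketch. The field names, styling rule, and color phases are illustrative assumptions on my part, not the actual schema or process used on the NYPL site:

```python
# Hypothetical sketch of per-node QA checks: missing metadata,
# malformed links, and title-styling problems. Field names and
# the color phases are assumptions for illustration only.

REQUIRED_FIELDS = ["title", "description", "video_url", "creator", "date"]

def qa_check(node):
    """Return a list of issues found in one node's metadata dict."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not node.get(field):
            issues.append(f"missing metadata: {field}")
    url = node.get("video_url", "")
    if url and not url.startswith(("http://", "https://")):
        issues.append(f"malformed link: {url}")
    title_html = node.get("title_html", "")
    if title_html and "<em>" not in title_html and "<i>" not in title_html:
        issues.append("title not italicized")
    return issues

def status_color(issues):
    """Map QA results to a color-coded phase for the tracking spreadsheet."""
    if not issues:
        return "green"   # signed off
    if any(i.startswith("missing") for i in issues):
        return "red"     # needs metadata work first
    return "yellow"      # only styling or link fixes remain
```

Running `qa_check` over each row of the spreadsheet and recording `status_color` would reproduce the kind of streamlined, phase-based overview described above.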