The four sources used in this section contain very different information, which made it difficult to write in a comparative style. Each source offers interesting and important data, so my summary of each is fairly substantial; I have tried to include only the most relevant information from each text.
Susan Lang and Laura Palmer (2017) conducted in-depth research on how job trends call for a new definition of technical editing and on how technical writing programs may not be keeping up with the skills those new definitions require. Lang and Palmer compiled their collective data into a paper titled "Reconceiving Technical Editing Competencies for the 21st Century: Reconciling Employer Needs with Curricular Mandates." They begin by sharing their view of technical editing in the curriculum: "We contend that technical editing receives relatively little attention in the academic world of technical communication, and, as such, the course has remained relatively static-and perhaps has even moved more toward a 'classical editing' course" (p. 298). The authors started by searching current job openings on various career boards and websites. There they found job descriptions that called for knowledge of newer forms of editing for a more technical world, including audio, film, and images.
The authors' next step was to review articles, other media, and ultimately technical editing textbooks. What they found in articles as well as discussion boards was that the information was often dated. They also found, in comments from a 2015 thread of the Association of Teachers of Technical Writing, that "...most still included copyedit marks on paper as a component of their teaching," adding, "This finding was interesting to us. That teachers of technical editing classes in 2015 were still talking about paper and the proofreading marks an editor might place on the page seemed very dated" (p. 300). After their findings on editing texts, they turned their attention to an assortment of technical editing course descriptions from five universities. Only one course, Professional and Technical Editing at the New Jersey Institute of Technology, included information about how students would be taught to "edit print and electronic media from a variety of fields" (p. 302). The other courses covered the basics of editing, such as speaking with subject matter experts, correcting grammar and spelling, and creating style sheets.
Finally, the authors took the data they had gathered and wrote a curriculum for a new course that they would teach. Like most courses, it included a course description, expected learning outcomes, and information on each module. Of the four modules, three were dedicated to audio, video, and editing for the Web (p. 304). At the end of the semester, the authors emailed the students with questions for feedback. Lang and Palmer stated that "the most surprising response was that the majority believed that they would benefit from more time working on grammar and copyediting" (p. 305). The professors chose to re-evaluate the course and teach it once more to see what had changed. Some students again found that going back to the basics would help, but "students in this cohort seemed to value the non-text based work more so than the work involving text-based editing" (p. 306). Overall, Lang and Palmer recognized the need for more up-to-date formal training for students of the 21st century while also noting some potential obstacles to getting there.
Ryan K. Boettger conducted three studies on how hiring managers use editing tests, which are usually private and exclusive to their individual companies. The purpose of these tests varies depending on what the hiring manager is looking for. Some managers score applicants' tests holistically instead of scoring them point by point. Boettger states in his article "The Technical Communication Editing Test: Three Studies on This Assessment Type" that, "According to the hiring managers, holistic assessment allowed them to gauge which errors the applicant fixed in relation to the ones they missed" (p. 218). In addition to how the exams are assessed, Boettger explains how the tests are administered (on-site versus off-site) and how they are formatted (narrative format, sentence format, or a combination of the two with the addition of true/false and multiple-choice questions). "70% of the tests required applicants demonstrate knowledge of one of five style guides including the Chicago Manual of Style (35%) [and] the American Medical Association Manual Style (15%)" (p. 217).
The second study concerned the types of errors found in the editing tests. Two previous studies, Connors & Lunsford (1988) and Lunsford & Lunsford (2008), provided the data for it. The first study compiled a list of the types of errors most frequently made by college students. The number one error was spelling mistakes, by an overwhelming margin of 300%; this data was set aside from the study for separate, independent research. Other rankings included "Missing comma after an introductory element" (occurring 11.5% of the time) and ended with "Its/it's error" (occurring 1% of the time) (p. 218). Lunsford & Lunsford then followed up on the study 20 years later to see what had changed. As expected, the most common kinds of mistakes had shifted over that time: "Due to an increase in argument papers, the new list included errors related to using sources, quotations, and attributions" (p. 218). Errors were also classified into "six broad categories: grammar and mechanics, punctuation, spelling, style, content, and design" (p. 219).
The final study looked into "Professionals' Perceptions of Error Types" (p. 224). The results of this study were mixed. Depending on the individual professional and the company's own goals, there was great variance in which editing errors carried the most weight.
The next article comes from the journal Technical Communication and was written by Wilde, Corbin, Jenkins, and Rouiller. Its purpose is to explain how IBM ensures that it produces the highest quality of information it can through a system of editing checks. IBM uses a refined set of nine characteristics as defined in the book Developing Quality Technical Information (Hargis et al., 2004) (p. 439). These characteristics are imperative to the high quality of information provided to IBM's customers. The use of these characteristics is known as the EFQ ("Editing for Quality") process. EFQ has developed into "a vital process that not only improves the quality and consistency of information, but also increases the skills of both writers and editors" (p. 439).
It is important that companies define for themselves what quality means to them. IBM's definition is "grounded in the user's perspective, but it is based on quality characteristics that our information developers control" (pp. 439-440). The nine qualities are as follows: Task orientation, Accuracy, Completeness, Clarity, Concreteness, Style, Organization, Retrievability, and Visual effectiveness (p. 440). To make use of the information it receives, IBM had to create a set of metrics by which these qualities could be measured. IBM produces a great deal of content and cannot be expected to apply EFQ to all of it; instead, the company focuses on the information that users consult most, as well as information that has received criticism. "Hart (2004) reminds technical communicators that metrics determine where writers will spend their time to improve quality" (p. 441).
Michael J. Albers and John F. Marsella designed a study to analyze the types of editing comments undergraduate students of technical communication would make on a report. Albers and Marsella (2011) gave 11 students a seven-page report. The purpose was to see how well the students commented on "three levels of edits (global, paragraph, and sentence)" (p. 56). Two coders independently looked through the edited papers and then came together to discuss the reasons for their scoring. This interaction helped the researchers develop the codes used in the analysis: "Each comment was coded for three different attributes: level of structure commented on (global, paragraph, or sentence), type of comment (direct or indirect), and quality of comment (good or poor)" (p. 56). Additionally, the researchers used Morae, a screen-recording program made by TechSmith, so that each student's process could be monitored.
The students were given 45 minutes to read the text and make their comments to the author. Across the 11 sample assignments, 132 comments were made. Nine students finished the assignment to their satisfaction within the 45-minute time limit. Two students "...attempted to rewrite the document instead of offering suggestions to the authors. As a result they only worked with the first 25% to 30% of the document before the end of the 45 minutes" (p. 58). Only one student did what most editing instructional texts recommend: read through the whole document first. After coding each completed (or partially completed) assignment, the researchers found that "Of the 132 comments collected, the global level was the least commented on: only 11 comments (8%) were made on this level. Paragraph-level comments were the predominant level, with 81 comments (62%). There were 40 sentence-level comments (30%)" (p. 58).
With the data procured, the researchers were able to make recommendations about what needs to be taught in technical editing classes, such as to "direct students toward effective editing habits, such as top-down style," and that "perhaps the discussion [of top-down style] can be combined with working through specific global-level examples to help the students contextualize its advantages" (p. 62).
While the data was interesting, this writer must comment on the unusual information provided to readers. It concerns how an editor must (or is strongly encouraged to) phrase comments so as not to offend the author. Some examples: "authors may easily become offended and feel they are being criticized rather than helped by the editors" (Lanier, 2004) (p. 53); "Comments should be structured to include the rationale for the change" (Gerich, 1994) (p. 53); "Comments should be phrased as suggestions rather than directions to the author" (Zimmerman & Taylor, 1999); and "The comment's tone can determine the author's opinion of the editor: One harsh-sounding comment could cause the author to ignore the editor's work wholesale" (Mackiewicz & Riley, 2003) (p. 59). The authors stress that the ability to make good comments is vital for the editor and that these skills need to be taught. However, as a personal reflection, this writer would also like to add that the writers shouldn't be such babies.