The Teacher’s Pet: Computers Grade Homework

Visalia Direct: Virtual Valley
July 9, 2012 Deadline
August 2012 Issue

While technology long ago replaced human grading of multiple-choice exams, with the familiar “machine gun” rattle of Scantron machines heard in schools around the globe, few teachers expected software to start grading student essays.

Early this year a research paper was published by Mark Shermis, dean of the University of Akron’s College of Education. Co-authored with two graduate students, the paper went largely unnoticed until it was covered in the New York Times and featured on National Public Radio this summer. This report, “Contrasting State-of-the-Art Automated Scoring of Essays,” compared nine software-based systems for grading student papers.

The researchers found, “By and large, the scoring engines did a good job of replicating the mean scores for all of the data sets.”

Surprisingly, software-calculated grades matched the grades teachers would have assigned in 85 percent of cases. Even when the grades differed, they weren’t as far apart as many would expect. Two teachers grading the same papers are likely to agree at about the same rate, according to the researchers.

While I’m not entirely convinced software can grade complex student essays, discussions among writing teachers have focused on the basic skills that software can grade. A computer cannot grade concepts, but software can detect rudimentary grammar, mechanics and formatting problems. Plus, software can help detect plagiarism and sudden changes in writing styles.

The sad reality is that many students entering our institutions of higher education need remediation in writing and math. The U.S. Department of Education estimates 1.7 million college students are required to enroll in remedial courses. Many of these students graduated from high school unable to write at an eighth-grade level, much less perform college-level work.

Software-assisted feedback can help these students. By marking basic problems, the software can enable instructors to spend more class time on logical reasoning and academic argumentation. At the university level, we hope to teach critical thinking skills. We expect students to arrive on campus with basic grammar skills.

Working on my master’s degree at Fresno State, I began using software tools to help students understand their strengths and weaknesses as writers. I was assigned four “writing lab” sections. These special courses were designed to help students learn the standards of academic writing. I soon noticed patterns in the writing and realized I could use software to mark those issues automatically.

I do not use software to grade papers. Instead, I use software to help me quickly detect and highlight potential problems. After using software to assist me, I still need to review the marks because automation is far from perfect.

Any long-time Microsoft Word user knows its grammar checking is weak, prone to missing obvious errors and incorrectly flagging correct sentences. Instead of relying on Word’s built-in grammar tools, I use Grammarian, by Linguisoft (http://linguisoft.com/). Thanks to Grammarian’s “Statistics Analysis” and other reports, I can show students areas of concern in their papers. Grammarian reports include notes explaining errors, offering writers tips to remember rules and avoid repeating errors in the future.

Where I teach, the university uses two services that check papers for plagiarism and formatting issues. We submit student first drafts to SafeAssign (http://www.safeassign.com/), while final papers are submitted to Turnitin (http://turnitin.com/). In my experience, students rarely intend to omit citations or plagiarize in their papers. What these services allow me to do is teach students about the various academic paper formats and university expectations.

In addition to these three tools, I have created my own AppleScripts and Word macros to mark papers and provide feedback to students. These scripts and macros perform tasks that used to take a great deal of time. For example, I highlight words that new college students overuse. Imprecise words like “very” need to be replaced with precise words and phrases. Manually marking each “very” in some papers takes several minutes, whereas the computer can mark the complete list of problem words in a paper in mere seconds.
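For readers curious what this kind of automated marking involves, here is a minimal sketch in Python rather than AppleScript or a Word macro. The word list and reporting format are illustrative only, not my actual scripts:

```python
import re

# Illustrative sample of imprecise words new college students overuse;
# a real marking script would use a much longer list.
PROBLEM_WORDS = {"very", "really", "thing", "a lot", "nice"}

def mark_problem_words(text):
    """Return (word, offset) pairs for every overused word found in text."""
    hits = []
    for word in PROBLEM_WORDS:
        # \b word boundaries keep "very" from matching inside "every".
        pattern = r"\b" + re.escape(word) + r"\b"
        for match in re.finditer(pattern, text, re.IGNORECASE):
            hits.append((match.group(0), match.start()))
    # Report hits in the order they appear in the paper.
    return sorted(hits, key=lambda pair: pair[1])

sample = "The essay was very good. Every point was really very clear."
for word, offset in mark_problem_words(sample):
    print(f"{word!r} at offset {offset}")
```

The same idea, wrapped in a macro that applies highlighting instead of printing offsets, turns several minutes of manual marking into a few seconds of processing.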

Marking problems and indicating a grade is not teaching. Grades are one tool to help students develop skills, but I don’t consider them the best tool. I allow students to revise papers until the last week of class because they will learn more with each draft. Yes, this means final course grades are higher, but I’m willing to tell any administrator that the students earned those grades.

No software can provide the guidance and assistance emerging writers seek. The software cannot meet with a student and offer support and encouragement. A page of highlighted words and tagged formatting mistakes might demoralize a student without proper guidance and feedback.

Administrators are embracing automated essay grading because they know class sizes are too large. With 40 or more students in each section of a required undergraduate course, I might have 120 papers to review each week. We require four or five papers per semester, along with one or two drafts per paper. It isn’t uncommon for a professor to review 6,000 pages of student writing in a single semester: five papers, five pages per paper, a draft and a final of each, multiplied by 100 to 150 students.

I do worry that administrators will become convinced that software can grade as effectively as teachers, forgetting the important roles educators play as guides and mentors. In the name of efficiency, software will be used to justify larger and larger class sizes.

Some universities now have 50 or more students in general education writing courses. Teachers are being asked to teach “overloads” leading to five or six classes per instructor. How can we ask any instructor to guide 300 students into the world of academic writing?

I love the assistance software provides when I review student papers. There is no doubt the software tools have allowed me to focus on developing critical thinking skills, which matter more to me than formatting perfection.

I do not want the software to grade my students. Grading and guidance are part of my job as a university professor.
