Visalia Direct: Virtual Valley
July 9, 2012 Deadline
August 2012 Issue
The Teacher’s Pet: Computers Grade Homework
While technology long ago replaced human grading of multiple-choice exams, with the familiar “machine gun” rattle of Scantron machines heard in schools around the globe, few teachers expected software to start grading student essays.
Early this year, Mark Shermis, dean of the University of Akron’s College of Education, published a research paper co-authored with two graduate students. The paper went unnoticed until it was reposted in the New York Times and featured on National Public Radio this summer. The report, “Contrasting State-of-the-Art Automated Scoring of Essays,” compared nine software-based systems for grading student papers.
The researchers found, “By and large, the scoring engines did a good job of replicating the mean scores for all of the data sets.”
Surprisingly, software-calculated grades matched the grades teachers would have assigned in 85 percent of cases. Even when the grades differed, they weren’t as far apart as many would expect. Two teachers grading the same papers are likely to agree at about the same rate, according to the researchers.
While I’m not entirely convinced software can grade complex student essays, discussions among writing teachers have focused on the basic skills that software can grade. A computer cannot grade concepts, but software can detect rudimentary grammar, mechanics and formatting problems. Plus, software can help detect plagiarism and sudden changes in writing style.
The sad reality is that many students entering our institutions of higher education need remediation in writing and math. The U.S. Department of Education estimates 1.7 million college students are required to enroll in remedial courses. Many of these students graduated from high school unable to write at an eighth-grade level, much less perform college-level work.
Software-assisted feedback can help these students. By marking basic problems, the software can enable instructors to spend more class time on logical reasoning and academic argumentation. At the university level, we hope to teach critical thinking skills. We expect students to arrive on campus with basic grammar skills.
Working on my master’s degree at Fresno State, I began using software tools to help students understand their strengths and weaknesses as writers. I was assigned four “writing lab” sections. These special courses were designed to help students learn the standards of academic writing. I soon noticed patterns in the writing and realized I could use software to mark those issues automatically.
I do not use software to grade papers. Instead, I use software to help me quickly detect and highlight potential problems. After using software to assist me, I still need to review the marks because automation is far from perfect.
Any longtime Microsoft Word user knows that its grammar checking is weak, prone to missing obvious errors and incorrectly flagging correct sentences. Instead of relying on Word’s built-in grammar tools, I use Grammarian, by Linguisoft (http://linguisoft.com/). Thanks to Grammarian’s “Statistics Analysis” and other reports, I can show students areas of concern in their papers. Grammarian’s reports include notes explaining errors, offering writers tips for remembering rules and avoiding the same mistakes in the future.
Where I teach, the university uses two services that check papers for plagiarism and formatting issues. We submit student first drafts to SafeAssign (http://www.safeassign.com/), while final papers are submitted to Turnitin (http://turnitin.com/). In my experience, students rarely intend to omit citations or plagiarize in their papers. What these services allow me to do is teach students about the various academic paper formats and university expectations.
In addition to these three tools, I have created my own AppleScripts and Word macros to mark papers and provide feedback to students. These scripts and macros perform tasks that used to take a great deal of time. For example, I highlight words that new college students overuse. Imprecise words like “very” need to be replaced with precise words and phrases. Manually marking each “very” in some papers takes several minutes, whereas the computer can mark the complete list of problem words in a paper in mere seconds.
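To give a sense of how simple these helpers are, here is a rough sketch in Python of the same idea. It is not the AppleScript or Word macro I actually use, only an illustration that reads a plain-text draft and marks a list of overused words; the word list below is just an example, with “very” the one word mentioned above.

# A rough sketch only, not my actual AppleScript or Word macro.
# It reads a plain-text draft and marks overused words so a student
# can see the pattern. The word list is an example; only "very"
# comes from the column above.
import re
import sys

OVERUSED_WORDS = ["very", "really", "a lot", "thing"]  # example list

def mark_overused(text, words=OVERUSED_WORDS):
    """Return (word, count) pairs and the text with matches wrapped in [[ ]]."""
    counts = []
    marked = text
    for word in words:
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        hits = pattern.findall(marked)
        if hits:
            counts.append((word, len(hits)))
            marked = pattern.sub(lambda m: "[[" + m.group(0) + "]]", marked)
    return counts, marked

if __name__ == "__main__":
    draft = open(sys.argv[1], encoding="utf-8").read()
    counts, marked_draft = mark_overused(draft)
    for word, count in counts:
        print(word + ": " + str(count))
    print(marked_draft)

Marking every instance by hand used to take minutes per paper; a script like this does the mechanical work in seconds and leaves me free to talk with the student about what the pattern means.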
Marking problems and indicating a grade is not teaching. Grades are one tool to help students develop skills, but I don’t consider them the best tool. I allow students to revise papers until the last week of class because they will learn more with each draft. Yes, this means final course grades are higher, but I’m willing to tell any administrator that the students earned those grades.
No software can provide the guidance and assistance emerging writers seek. The software cannot meet with a student and offer support and encouragement. A page of highlighted words and tagged formatting mistakes might demoralize a student without proper guidance and feedback.
Administrators are embracing automated essay grading because they know class sizes are too large. With 40 or more students in each section of a required undergraduate course, I might have 120 papers to review each week. We require four or five papers per semester, along with one or two drafts per paper. It isn’t uncommon for a professor to review 6,000 pages of student writing in a single semester: five papers, five pages per paper, a draft and final of each, multiplied by 100 to 150 students.
I do worry that administrators will become convinced that software can grade as effectively as teachers, forgetting the important roles educators play as guides and mentors. In the name of efficiency, software will be used to justify larger and larger class sizes.
Some universities now have 50 or more students in general education writing courses. Teachers are being asked to take on “overloads,” leading to five or six classes per instructor. How can we ask any instructor to guide 300 students into the world of academic writing?
I love the assistance software provides when I review student papers. There is no doubt the software tools have allowed me to focus on developing critical thinking skills, which matter more to me than formatting perfection.
I do not want the software to grade my students. Grading and guidance are part of my job as a university professor.