Should Teachers Let Computers Grade Essays?
While the article below argues that new programs for grading papers automatically have many flaws, I wonder whether there could be a way to use such technology in combination with a human teacher.
To read more click here.
Ted, thanks for posting this article. It was a frightening read for me since I was shocked that programs that grade papers automatically, like E-Rater, are actually used on some standardized tests and in secondary schools in four states (Louisiana, North Dakota, Utah and West Virginia). Since computers cannot read, I was very surprised that administrators/educators would give that much credence to these programs.
I think a paragraph from the blog pretty much says it all. In paragraph five the author writes, "Perelman concluded in his critique of automated essay marking that longer writing and bigger words got better grades and that the ways to corrupt the auto-grader are almost limitless. E-rater, the creators of the software that graded his essay, responded by saying that if students were smart enough to deceive the software they deserved good grades" (Kalia, 2013).
There is something morally and ethically wrong when the makers of a program like E-Rater say that someone who is deceptive, and basically cheating, should be awarded good grades because that means they are "smart enough". That response is frightening and should remind us that when we leave grading to computers, we are also in danger of causing further moral decay in our education system. In short, computers cannot read, and if the programmers behind tools like E-Rater have questionable moral and ethical standards, then it could become a case of garbage in, garbage out.
Scott
Kalia, R. (2013, May 24). Automated marking: Bad for essays? The Guardian. http://www.guardian.co.uk/education/mortarboard/2013/may/24/automated-marking-bad-for-essays.
Hello,
Thank you, Scott. I have to agree. Many of these grading programs use readability formulas such as the Flesch-Kincaid and the ARI (Automated Readability Index). These formulas count syllables, words, and sentences. While they could give us an idea of whether a student varies sentence structure or relies on a monosyllabic vocabulary, they can tell us nothing about content.
However, there are some uses for these formulas. They are often used to make sure the language and sentence variety of business and medical documents are simple enough for the public to comprehend. If we could combine the data collected by the formulas with a human evaluation, there may be some benefit. Does anyone have an idea of how we could use these scales to improve writing?
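To make the discussion concrete, here is a rough Python sketch of the two formulas mentioned above (not the official implementations used by any grading product). The Flesch-Kincaid grade level and ARI coefficients are the standard published ones; the syllable counter is a simple vowel-group heuristic of my own, so its counts are only approximate:

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of consecutive vowels.
    # Real readability tools typically use dictionary-based syllable counts.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Treat a trailing silent 'e' as non-syllabic ("mate" -> 1).
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    chars = sum(len(w) for w in words)
    w, s = len(words), len(sentences)
    # Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    fk = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    # ARI: 4.71*(characters/words) + 0.5*(words/sentences) - 21.43
    ari = 4.71 * (chars / w) + 0.5 * (w / s) - 21.43
    return {"flesch_kincaid_grade": round(fk, 2), "ari": round(ari, 2)}

print(readability("The cat sat on the mat. It was a sunny day."))
```

Notice what the scores measure: a short, monosyllabic passage gets a low grade level, while a single sentence stuffed with long words gets a high one, regardless of whether either says anything worthwhile. A teacher-facing tool could report these numbers alongside a human evaluation rather than in place of one, which is the combination suggested above.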
Thanks,
Rebecca