17 Comments
On-line and open-Web exams were a very interesting and relevant subject to explore, and I think the way the topic was approached in this paper was very good. The questions posed and the research used to explore the topic put a spotlight on the concerns some people have about these kinds of exams. I liked the tables because they presented the findings in an easy-to-understand fashion. My only complaint is that the author used contractions in the submission, which I do not think belong in a formal paper. After reading this paper, I feel I have learned more about the pros and cons of these types of assessments, and I enjoyed the paper overall.
With the modern-day student turning more toward online education, it is important for researchers to delve further into the phenomenon of online learning and testing. This paper provides a thorough discussion of the pros and cons of online testing, covering important aspects of web testing including administering the test, grading the test, and academic integrity. I do think some aspects of the article were presented with a negative connotation, which is off-putting to me as an online learner; a more neutral stance on the topic, or a fully developed for-or-against argument, may be more beneficial to the body of knowledge for publication purposes.
When you discussed the material covered on online exams, it made me think about authenticity. When students are asked to solve a problem in real life, they have access to resources such as the web, just as they would during an online exam. I agree with the comment above that the paper carried a somewhat negative connotation; many drawbacks were mentioned that seemed to outweigh the benefits of this type of exam. It may be interesting to discuss the types of material that would be better suited to open-Web assessment. It seems to me that while there may be many cons to open-Web testing in general, it could also be a great way to assess a student's ability to apply what they have learned in some areas.
I think this essay captures the advantages of online exams well. I especially liked the discussion of grading. As an online educator, I find auto-grading definitely convenient, and it leaves less room for human error when applied correctly. Auto-grading frees online educators to assist students with the content of the course rather than spending endless hours grading exams. High student counts are common in online education, so it benefits students when the instructor spends less time grading and more time answering questions. The only downside to auto-grading is when the system runs into technical difficulties; other than that, I do not see a negative aspect to it.
Greetings
This is an interesting paper; however, I have a few comments.
To uncover the implications of online exams, we surveyed instructors on four teaching-related listservs [1] and students in the author's classes from Fall 2009 until Spring 2011. Eighty-five instructors and 315 students responded to the survey. All of them had experience administering or taking online and/or "open-Web" exams (where students were given unfettered access to the Internet during the exam, but forbidden to communicate with others).
1. I would like to know more about the sample. Right now I don't know if 315 respondents was 50% of potential respondents or 3%.
But doing so precludes using the question-scrambling feature, thus forgoing an important defense against cheating.
2. The June 3, 2012 Chronicle seems to dispute some of the above…"In the case of that student, the professor in the course had tried to prevent cheating by using a testing system that pulled questions at random from a bank of possibilities. The online tests could be taken anywhere and were open-book, but students had only a short window each week in which to take them, which was not long enough for most people to look up the answers on the fly. As the students proceeded, they were told whether each answer was right or wrong…Mr. Smith figured out that the actual number of possible questions in the test bank was pretty small. If he and his friends got together to take the test jointly, they could paste the questions they saw into the shared Google Doc, along with the right or wrong answers" http://chronicle.com/article/Cheating-Goes-High-Tech/132093/?sid=at&utm_source=at&utm_medium=en
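To make that risk concrete, here is a rough simulation sketch (Python; the bank size, quiz length, and group size are made-up numbers, not figures from the article or the Chronicle piece) of how quickly a pooling group can map a small question bank:

```python
import random

def weeks_to_expose_bank(bank_size=60, questions_per_quiz=10, students=8, seed=1):
    """Count how many weekly quizzes a group of colluding students needs
    before every question in the bank has landed in their shared document."""
    rng = random.Random(seed)
    seen = set()  # questions already pasted into the shared doc
    weeks = 0
    while len(seen) < bank_size:
        weeks += 1
        for _ in range(students):
            # each student's quiz is an independent random draw from the bank
            seen.update(rng.sample(range(bank_size), questions_per_quiz))
    return weeks

print(weeks_to_expose_bank())  # with these made-up numbers: just a few weeks
```

With any similarly small bank, random question selection delays the pooling attack by only a week or two; it does not defeat it.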
In the Moodle quizzing system, we found that text boxes covered up the questions on certain browsers. Early versions of Google Chrome tended to freeze up, especially when the window was scrolled horizontally.
3. You may want to develop the section on Moodle, Google Chrome, etc. a bit more; not all readers will be familiar with Moodle, and it may help to expand on compatibility. You could also connect this to your cheating discussion: on a timed exam, students know that if they call technical support they can get twice as much time, since the professor usually resets the timer to zero.
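If that concern holds, one illustrative fix is to credit only the time actually lost rather than resetting the clock. A minimal sketch, with hypothetical names and no particular LMS's API in mind:

```python
from datetime import datetime, timedelta

def extend_after_glitch(started_at: datetime, time_limit: timedelta,
                        glitch_at: datetime, resumed_at: datetime) -> datetime:
    """Give back only the time lost to the technical problem, rather than
    restarting the clock from zero."""
    original_deadline = started_at + time_limit
    downtime = resumed_at - glitch_at  # how long the exam was unusable
    return original_deadline + downtime
```

A reset to zero, by contrast, hands back the full allotment on top of whatever time the student had already used.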
…Thus, automatic grading can sometimes take more time than manual grading. It is also more difficult to give partial credit on automatically graded questions; this can negatively impact student scores.
4. This is an interesting observation. You may want to determine if some of the issues you raise are due to poor test questions.
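On the partial-credit issue specifically, tolerance bands are at least feasible for numeric questions. A minimal sketch of what I mean (the tolerances and return values are arbitrary illustrations, not anything from the paper):

```python
from fractions import Fraction

def grade_numeric(answer: str, correct: float, full_tol=0.001, partial_tol=0.05):
    """Full credit within a tight relative tolerance, half credit within a
    looser one; unparseable input is flagged for manual review, not just zeroed."""
    try:
        value = float(Fraction(answer.strip()))  # accepts "0.5" and "1/2" alike
    except (ValueError, ZeroDivisionError):
        return 0.0, "needs manual review"
    error = abs(value - correct) / max(abs(correct), 1e-12)  # relative error
    if error <= full_tol:
        return 1.0, "full credit"
    if error <= partial_tol:
        return 0.5, "partial credit"
    return 0.0, "incorrect"

print(grade_numeric("1/2", 0.5))   # (1.0, 'full credit')
print(grade_numeric("0.52", 0.5))  # (0.5, 'partial credit')
```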
I enjoyed the article very much; thank you for giving me the opportunity to comment on it.
Sincerely
Denver
Endnotes
[1] The Engineering Technology listserv; SIGCSE-members (computer science faculty); the College Board's AP-CS list; and the listserv of the Professional & Organization Development Network in Higher Education.
Greetings Ed, Your research in this area is much needed. I do think it is critical to address whether students really understand the information and whether we can get in-depth answers that require critical thinking in an online testing environment. You have addressed the pros and cons of the testing environment. The real question may be whether enough money will be invested to create the assessments that are needed, with all the variables considered.
Betty
This article is a wonderful look at online testing, and I feel it can be the beginning of a further discussion of this topic. In the "Handwriting vs. Coding" section the author discusses WYSIWYG editors, one of which is FrontPage. This is an outdated, unsupported program that Microsoft has not released a new version of since 2003. I suggest discussing more recent products, such as Microsoft Expression Web or SharePoint Designer, to show currency.
One other suggestion is to briefly discuss the developments currently in progress regarding ways to ensure test security.
Relevance - Your examination of one area of a fast-growing educational sector has good potential. Some work is needed, though, to bring the paper up to its potential for a significant contribution.
Significance - The considerations for each exam type, and some context for each, were helpful. Building more on the foundation of the paper will help strengthen the significance of the study.
Originality - Once the analysis is conducted, the level of originality may be strengthened.
Methodology - A few points in the paper should be addressed. For example, off-line does not equate to on-paper; personal computers, tablets, and smartphones could host exams on- or offline. Additional information could be gathered regarding how the respondents regarded each method. Also, details regarding the data collection seem to be missing. For example, what is the population size? Did the samples contain multiple types of questions? Were graphic responses used, where a test-taker could use a mouse or digitizing tablet?
Generalizability - Additional detail and theoretical background will help in assessing the generalizability. Statements about existing configurations, such as "virtually all learning management …" (p. 1), should be justified so they are not taken as editorial.
Theoretical grounding - Some foundational material was provided. It may be helpful to start with general test-taking and then move to delivery methods. How do other methods compare? The academic integrity material is a good start and should develop into a strong section.
Clarity - I had a hard time following the logic in several areas; the juxtaposition example was just one. How will extra scrutiny of out-of-class online exams determine whether a student is cheating? Comparing scores seems a tenuous method, and perfect scores leave little room for judgment; the rationale does not seem to be present. An instructor may want to structure the flow of questions rather than have them completely randomized; a prescribed flow could be an advantage of paper exams, or it could be programmed into an online delivery. Also, an inherent bias against paper exams seems to be coming through. For example, each student can have the same amount of time with a paper exam, though doing so requires more work; if no time limit is set, as with unlimited time, the paper exam has no apparent disadvantage. Numerical answers can be difficult to auto-grade: ½ and 0.5 could both be correct, but the computer may recognize only one. Partial credit could also be difficult to assess for online essay and mathematical questions.
The grading section had me lost; be sure to be specific. For example: "Mention online exams, and automated grading comes to mind. When it works, it is a godsend, especially in large classes." (p. 3). I did not think of automated grading when thinking of online exams, so softening the absoluteness of the verbiage would be helpful. And to what does "it" refer: automated grading, online exams, or both?
Some of the examples listed were a bit of a stretch as well. Leaving town, different feedback, phrasing of questions, security, and so forth should each be developed more. Where do paper exams with scan sheets fit into the study?
Additional explanations of the data are needed. For example, what are the numbers in the tables? How many exams were lost because of saving versus submitting?
APA - The formatting needs some polishing. A few areas: APA does not allow contractions; the verbiage is too informal; avoid second person; paragraphs should have at least three sentences; watch word usage (e.g., each versus all); and a table splits a paragraph.
Hopefully these comments are taken in the spirit in which they are given. It looks like you have a good start on some interesting work. Thanks for the opportunity to provide some input.
Dr. Al
Research in this area is so necessary for skeptics, educators, and administrators. I administer online tests for secondary students in a computer-lab setting, in designated rooms at designated times, and the results are submitted directly to the state. We (teachers) are not supposed to read the questions. The Arizona state test, AIMS, is a paper test, and no one is supposed to look at the questions or help the students, so the state is more concerned with teachers cheating. You may want to state in your summary what further research is needed.
It sounds like Denver has been involved in online assessment research more than I have. However, I have to agree that we need more information on your sample respondents; educational research must be conducted in a rigorous and systematic way.
Your essay offers a good overview of online testing, which is becoming a relevant topic as instructors work to find viable means of measuring student comprehension and learning. As one who earned a degree online, one area that was problematic for me as a student, and that I hoped you would address, was usable feedback. I know you discussed feedback, but I found that online feedback might inform the student that an answer was right or wrong yet not explain why. This was problematic for me in a statistics course: the computer-generated quiz indicated that my answer was right or wrong but provided no means by which I could learn why, or how to correct my mistake to produce the correct answer. As a result, the concepts were difficult to learn, and learning should be the purpose and result of educational pursuits whether online or on the ground. I believe this to be a shortfall of the online testing systems with which I am familiar. I realize your population did not raise that concern and you are only addressing the results of your survey; however, I thought I would bring this aspect of online testing to your attention.
Regarding the essay itself, I would encourage the adoption of a more formal writing tone, including the avoidance of a conversational register, passive voice, first-person pronouns, and contractions. Additionally, review APA for the formatting requirements and designations for tables. Further, I would encourage avoiding assumptions or generalities, for example, the assumption that the mention of online exams brings automated grading to mind, or that handwriting is a handicap for students and instructors.
I trust these thoughts might assist you going forward with your work.
Dr Piercy
You provided a great deal of information on the different formats for online exams. I have used multiple-choice and essay formats in a few classes. Most of my students preferred essay questions over true/false and multiple-choice questions, primarily because of the time it takes to look up an answer in the textbook or on the web. For essay questions, I do require that students include references. I appreciated reading the pros and cons you considered for online exams, especially across the various system platforms.
Posted on behalf of Joan Chambers:
Experience with Online and Open-Web Exams
Contribution: The study is a wonderful eye-opener and would contribute to the fields of education and technology. The study discusses the advantages and disadvantages of online and open-web exams [did you mean open-web course exams, for clarification?] but has some challenges that can be fixed.
Title: The term "Experience" in the title implies the study is qualitative phenomenological, but the method used in the study is quantitative. Would it be more appropriate to revise the title to reflect the method used? Does the title need to be aligned with the specific problem and purpose of the study?
Abstract: How could summarizing the purpose of the study, research question (specific problem), number and type of population surveyed, design, method, findings, and future implications give readers a clear picture of the direction of the study at first glance?
Introduction: How would using sources (citations) provide a more credible argument? Imagine readers who are not technically savvy and do not understand the difference between an online and an open-web course exam; would explaining the distinction clear the fog and provide understanding?
Distance Learning: Is it feasible to discuss distance learning and connect it to open-web course exams for clarification rather than the general term online?
Objectivity: Is objectivity important in quantitative research? For instance, would using neutral and more positive verbiage enhance the readability and salience of the study?
Hypothesis/Research Question: Have you thought of including null and alternative hypotheses?
Material Covered, Administration: This section provides facts on several exam systems, but how would readers know whether the information provided is valid, or whether the systems are reliable, if they are thinking of purchasing one or more? Would citing seminal sources resolve the challenge of relying on personal information?
Theoretical Framework: The discussion on academic integrity is on point. How would connecting academic integrity to a related theory ground/anchor the study?
Tables: Good graphics but do you need to refer to each table in the content to show clear connection?
Overall, a good start! Have you thought about the fact that online faculty function as test proctors for distance-learning open-web course exams? May I also suggest that including specifics on validity, reliability, and generalizability should boost the robustness/credibility of the study. Once the study is made more robust in light of everyone's input, who knows, the findings may yet provide empirical evidence!
I like the premise of this paper, and as others have commented, I would like to see more specifics on validity and reliability. I would also reconsider your use of conjunctions; many are placed such that the meaning of your sentences is different from what I suspect you were attempting to convey.
The use of contractions and slang ("divvied up," "boon," and "paper shuffling," for example), along with the misuse of some punctuation, diminishes the readability of this formal piece of work.
As for content, your research is well cited but not always current; I would like to see more recent data backing up your study. In several places the results state that "students were quite positive"; does this mean they were content, or confident? Results should be clearly communicated.
This is an interesting topic that I consider daily, as an instructor who uses automated assessment on a regular basis. Thanks!
Exam questions should always tie into the learning objectives of the course.
Academic integrity can be encouraged using exam software, as well as using web-based exams, in some situations.
Character recognition is a problem with respect to control codes, as your research shows; these codes and characters must be standard across all computers and exam programs. Mathematical entries must also be recognizable. Students are encouraged to show their work, but how can an automated system take that into account? Partial credit is important, especially for these types of problems and equations: showing work may be the best window into how a student reached an answer, but there are always different, unique ways of getting to the same answer. Perhaps this is a limitation.
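To make the mathematical-entry problem concrete, an auto-grader would need to compare answers by value rather than by the characters typed. A rough sketch using the sympy library (my choice for illustration; neither the paper nor my comment assumes any particular tool):

```python
import sympy

def answers_equivalent(student: str, reference: str) -> bool:
    """True when two typed answers denote the same value or expression,
    e.g. '1/2' vs '0.5', or '(x+1)**2' vs 'x**2 + 2*x + 1'."""
    try:
        diff = sympy.simplify(sympy.sympify(student) - sympy.sympify(reference))
    except (sympy.SympifyError, TypeError):
        return False  # unparseable entry: route to manual grading instead
    return diff == 0

print(answers_equivalent("1/2", "0.5"))  # True: same value, different notation
print("1/2" == "0.5")                    # False: naive string matching fails
```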
In my opinion, online exams are the best way to test an online student, and the literature review suggests the pros outweigh the cons. It would be helpful to allow students more than one attempt, given the problems that may arise during the testing process, and I believe a student should be able to save their work and return later to complete it.
I think this is a very interesting and timely article. I like the detail, as well as the treatment of both the pros and cons of online and web testing, and I found the article very thought-provoking. Online and web testing is becoming more and more prevalent, and this paper is informative as well as innovative. If I have any constructive criticism, it is that the paper is a bit informal and lacks scholarly polish. Overall, I enjoyed the paper and found it very worthwhile and timely.