The Missing Link: Evaluating Faculty Workload in the Online Modality
Scott Hovater and Michael J. Coplan
Grand Canyon University
Financial viability, effective marketing, and a strong curriculum are seen as indicators of success in distance learning programs. An often-overlooked indicator of success is faculty workload. This study examines the weekly workload of more than 100 full-time online instructors.
It is refreshing to see an interest in the workload of the online instructor! The number of assignments per week and the type of assignment certainly affect the number of hours per day and week spent on the course, as does class size. There must be a class size maximum, as there is a vast difference between instructing 16 students and instructing 25 or more. With online instruction becoming the new platform for advanced learning, it is imperative that the quality of online instruction remains, at the very least, equal to that of ground classes. Sadly, if the quality of an online education is jeopardized, there will be many, many adults entering the workforce unprepared for their chosen profession.
This is an important contribution to the online teaching environment. I enjoy my online teaching but right now it is more of a hobby because of the financial limitations and workload.
What I find most intriguing about the article is the suggestion and recommendation that faculty and curriculum designers collaborate. I wholeheartedly concur. I think faculty teaching a particular course should represent a more important design component in developing courses and evaluating the assessment content.
Thank you for sharing your thoughts with us. We are at a great advantage here at GCU in that we get to have input into the development of curriculum, and this will help us continually provide a quality education. However, not knowing what a course could look like for a faculty member in terms of time can create an array of issues, and the quality of the interactions and feedback could be one of the things to suffer.
Thank you for sharing with us. Having an understanding of what the course design will entail for a faculty member is crucial, but it also helps us to understand the level of rigor that is being put into a class. Not enough work or too much work can create serious problems, and the only way to know where a course stands is to have some level of measure. Knowing this helps faculty in their design, motivates them to become involved in the design of courses, and allows the students the greatest level of opportunity for growth.
Thank goodness you were able to get faculty to participate. It would be helpful to see the data collection tool. You note statistical differences in the text, but please do share the data itself; it would be helpful to see it, along with confidence intervals and the like, as applicable.
A lot of faculty are the curriculum developers (not likely at GCU). A more precise measurement of course development time is called for (I'd like to research that). The trade-offs you mention hold a lot of face validity. As someone who spends hours and hours on course development to have classes run effectively and efficiently, I can attest to those trade-offs.
A potential confounder is the length of time one has taught a particular course. For a course I have taught for a while, I can quickly grade an essay or a paper; I have read so many that my mind is on autopilot, and I can tell what meets the standard and what does not. However, if I am new to the material at hand, reviewing it can take three to four times as long.
Thank you very much for your excellent contribution to the literature.
This is an appealing and timely topic with little current literature. Kudos on selecting this topic and providing data. I would echo Mark's sentiments about needing a bit more information on the tool in your article. It may not hurt to attach it as an appendix, if space allows. Additionally, be sure to discuss what the analysis was in the methodology section; I kept looking for this information but did not find it. I am also curious as to why you chose the two months you did. It would be beneficial to see if you could replicate this with additional months to see if there are ebbs and flows of workload (as we may presume there are). Interesting study, and it seems like a good research track.
Racheal, thanks so much for taking the time to comment on our study. I agree that adding an appendix that contains the actual tool would be helpful. The tool is an Excel document that is automated so that faculty only need to enter (in minutes) the amount of time spent on each task. Faculty were encouraged to enter their data at two-hour intervals (calendar reminders were set to prompt them) to help improve the validity of the data.
In terms of discussing the analysis of our study in the methodology section, could you be more specific? The results section gives a brief analysis of our findings, but I am uncertain what analysis is needed within the methodology section. Any help you can give here would be greatly appreciated.
Finally, you raise a great question about the timing of the study. We selected July and September mostly out of convenience; there was no master plan behind which months were chosen. Later we collected data in April as well, since we wanted to replicate the study and see if a different month had any effect. Unfortunately, our response rate was much lower in April (around 45%). Our assumption is that faculty grew tired of having to track their every move during the course of a week. Results in April did not show any significant difference on the major items discussed in our initial study; we just did not feel comfortable reporting this data due to the lower response rate.
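[Editor's note: to make the collection method concrete, here is a minimal sketch of how time-log entries like the ones described above could be rolled up into mean weekly hours per course. The real tool is an Excel workbook; the course names, tasks, and minute values below are invented for illustration only.]

```python
import pandas as pd

# Hypothetical long-format log: one row per task entry, time in minutes,
# mirroring the kind of data faculty entered into the Excel tool.
log = pd.DataFrame({
    "faculty_id": [1, 1, 1, 2, 2],
    "course":     ["HIS-144", "HIS-144", "HIS-144", "UNV-104", "UNV-104"],
    "task":       ["essays", "forum", "phone", "essays", "forum"],
    "minutes":    [240, 90, 30, 310, 120],
})

# Total weekly hours per faculty member, then the mean hours per course
# (the aggregate figure the study reports).
weekly_hours = log.groupby(["course", "faculty_id"])["minutes"].sum() / 60
mean_per_course = weekly_hours.groupby("course").mean()
print(mean_per_course)
```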
I believe in the methodology section there is a statement about the analysis being done in SPSS, but not what analysis was conducted. It would help clarify and give credibility to the study to mention it in that section. And your response rates for the two months you have are quite strong; I'm not sure it would have detracted to add April in there, but I certainly understand your rationale. It would be interesting to follow a couple of faculty over the course of a year to see how it pans out -- possibly future research ideas. Great information!
I regret to say I see a few important shortcomings with this study:
First, I am questioning whether or not the stated lack of research in this area is correct - i.e., research examining faculty workload in the online venue. I look forward to perusing the research to confirm.
Relatedly, the supporting research dates back to 2006. This information is outdated; these points need to be presented within the context of the year.
As a few of you noted, the Methodology section is a bit vague on the details of the statistical analysis that was conducted. The Results section follows suit with the vague and somewhat hazy manner in which the findings are presented.
Lastly, the recommendation for faculty to be involved with course development per their insights on the time involved with grading assignments implies that courses should be developed to accommodate the time limitations of faculty - rather than facilitating student learning. This sets up a huge red flag. The appropriate recommendation would be for further research to identify the extent and manner in which this workload differs per subject area and grade level to inform staffing needs and/or to gauge appropriate class size.
I hope I am not being too critical with my assessment. I welcome any further thoughts on these items.
I would disagree with the point about the literature dating to 2006. In much of social science research it is perfectly acceptable to pull from all literature on a topic, even literature published 20, 30, or even 50 years ago. It all should build on each other and be connected (even if the results differ).
If there is scant literature on this topic, then perhaps the most recent study is from 2006. I would tend to not worry about the date of the research, but how solid that previous research is. Having research that is dated would raise a concern about the timeliness of a topic. Is it still relevant? Based on the information provided in this study, this topic is still quite relevant.
To follow up on Racheal's post, the subject of the essay is certainly not "time" and "statistics" sensitive, and a date of 2006 for some of the most recent literature would imply that more research needs to be conducted, has not been, and that this study fills a present lacuna. A study of this nature would then build upon previous work, regardless of the date, and I certainly agree with Racheal that it is relevant based on the content alone.
First, let me start by thanking you for some great critical feedback. I believe you touched on some key points that will help us to improve this paper. Keep the feedback coming in! ^_^
Here are my specific replies to your comments:

You wrote: First, I am questioning whether or not the stated lack of research in this area is correct - i.e., research examining faculty workload in the online venue. I look forward to perusing the research to confirm.

My response: If you find more literature on online faculty workload, we would love for you to share it with us. We will also continue to research this topic and amend our literature review where appropriate.
You wrote: Relatedly, the supporting research dates back to 2006. This information is outdated and needs to be presented within the context of the year.

My response: There have already been a few comments on this from Racheal and Stephen (see above). I agree that it is always beneficial to find the most recent literature available, especially with a subject such as online faculty workload. This field is changing constantly, so being as current as possible is important. As mentioned above, we will continue to research the literature in this area.
You wrote: As a few of you noted, the Methodology section is a bit vague with the details on the statistical analysis that was conducted. The Results section follows suit with the vague and somewhat hazy manner in which the findings are presented.

My response: You are absolutely right here. This is an area of weakness that we need to shore up. Our only defense is that we adapted this paper from an as-yet-unpublished paper that contains the actual statistical analysis used (comparison of means, ANOVA) and includes a breakdown of time spent on each item (essays, worksheets, posting, phone calls, responding in the LMS to student questions, etc.). This paper reports only aggregate results based on the mean time spent in each course. We will work on explaining these results better in our methodology section and in the results section.
You wrote: Lastly, the recommendation for faculty to be involved with course development per their insights on the time involved with grading assignments implies that courses should be developed to accommodate the time limitations of faculty - rather than facilitating student learning. This sets up a huge red flag. The appropriate recommendation would be for further research to identify the extent and manner in which this workload differs per subject area and grade level to inform staffing needs and/or to gauge appropriate class size.

My response: Here I would have to respectfully disagree. Within the introduction we mention three pillars of success that institutions examine to determine whether they have a successful online program; one of those is a strong curriculum. Thus our assumption is that student learning is necessary. Our argument is that institutions tend to look only at student learning without considering faculty workload. The premise is that courses which carry a very heavy faculty workload may (1) cause you to lose many of your quality faculty, and (2) force faculty members to spend so much time grading assignments that little time is left for student engagement (thus actually hindering student learning).
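[Editor's note: for readers curious what the "comparison of means, ANOVA" analysis mentioned above looks like in practice, here is a minimal one-way ANOVA sketch. The weekly-hours figures below are invented for illustration and do not come from the study.]

```python
from scipy import stats

# Hypothetical weekly hours logged per faculty member, grouped by course.
course_a = [10.5, 12.0, 11.2, 13.1]
course_b = [15.8, 14.9, 16.3, 15.1]
course_c = [11.0, 10.2, 12.4, 11.8]

# One-way ANOVA: does mean weekly workload differ across courses?
f_stat, p_value = stats.f_oneway(course_a, course_b, course_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here would indicate that at least one course demands a significantly different weekly workload, which is the kind of between-course difference the paper describes.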
Like the others who have weighed in on your article, I also think elaboration on the data collection and analysis would enhance your paper. With that noted, I think you have begun a conversation on a much-needed aspect of online teaching - the demands on faculty time related to the ancillary issues of teaching. Qualitative and quantitative feedback is a part of teaching that is manifest in the grading of assignments, but often the teaching that allows the student to apply course concepts to real-life situations occurs in the Discussion Forum; this is especially evident in courses that deal with conceptual frameworks such as religion and theology. Transferring some of the demands on faculty time from grading to interaction would effectively enhance the critical thinking skills of the students, resulting in an educational process that aims toward the higher levels of Bloom's Taxonomy in practice rather than merely in principle. One additional item of research and analysis that might enhance your project would be an analysis of the EOCS feedback from students whose instructors were required to spend less time grading and more time teaching and interacting with students, in comparison to those whose instructors spent more time grading and less time interacting.
Keep up the great work! Your study has a tremendous upside.
Thank you for sharing and for your encouragement. We could look at the EOCS as you mentioned. I would be intrigued to see where those values land, and as you noted, it could enhance the value of the information presented.
In response to Jennifer regarding the red flag about faculty involvement in course development, I would like to couch my comment in terms of my own experience. I am an online instructor part-time and a school principal full-time. I feel that it is crucial that those teaching the courses are part of designing the course. Research shows that effective teachers feel great ownership toward what they teach. In the K-12 realm, ownership in lesson design is crucial to student success, and there is no reason this ownership would not benefit instruction in college-level courses. Our present system allows online instructors very little ownership of the courses they teach; in some ways ownership is even discouraged.

While I agree that assignment grading time should not be the major factor in seeking faculty involvement in course design, I would also contend that those who design a course should be required to teach it. One must walk a mile in the shoes! I have taught courses where each week the assignment was the same 2,000-word essay, and each week all I did was read essays that did not truly assess the week's learning. Quick, easily graded assessments can measure the learning just as well, and then, when an essay is assigned, quality feedback is easier to provide without repeating the same comments about design, grammar, and mechanical issues. When I end up teaching a course where the assessment is an essay each week, the message I take is that the course designer did not put in the front-load effort, and because of this the students and the instructor suffer.

One of the most effective online lessons I have taught was during one of my school law courses. Because of a small number of students, I was able to substitute one of my own lessons for a CLC assignment. I created a real-life school law scenario where students took on various roles in a simulation. We used the discussion forum, where I posted events and the students, in their roles as lawyers, judges, plaintiffs, etc., had to respond. The times when I can be creative like this are few and far between, and I feel that because of this my expertise is not fully translated to the students. The course evaluations from the students were glowing, and they specifically attributed their positive experience to the simulation. As a group of professionals, we must promote ownership for the good of the students and the proper utilization of our faculty's skills.
This is a very interesting topic and research question, but the article seems very superficial in its treatment and analysis. The recommendations and conclusion follow "common sense" but are not supported by much more. I did a quick Google Scholar search on "online faculty workload," and this is the first item returned:
Mandernach, B. Jean, Swinton Hudson, and Shanna Wise. "Where Has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom." Journal of Educators Online 10.2 (2013): n2.
It seems a rather relevant title to this work and it is rather recent.
Here are a few more comments:
Table 1 mentions months while the text of the article mentions weeks, which is confusing. What about using "Time Period," "Week 1," and "Week 2" as headings for Table 1, and then using the actual time frame rather than the name of the month (e.g., replacing "July" with "7-12 to 7-18")? That would be easier to read and understand at first look, and it would also provide more robust data if somebody wants to repeat the experiment. Also, typing "82+58=140" is a bit condescending; naming the column "Total Participation" is explanatory enough.
How was the student count calculated? Were W's included?
In Table 3 the header "Estimate Student Count" is misleading. It should be replaced with something more representative of what that column contains, such as "Optimal Student Count per 40-Hour Week" - or whatever you come up with that better represents what those numbers mean.
“Finally faculty engagement in the discussion forums was higher in the top three courses in table #3.” This is one of the most important components of online teaching, so it deserves more than nine words. (Also, add a comma after "Finally.")
On the recommendations: what about comparing time on task against the existing literature for both online and ground modalities? These recommendations seem reasonable, but they come a bit out of a vacuum without any comparison to what happens in similar courses at other universities (if this data is available; if not, say so) or in other modalities.
Thank you for your time in reading this and for your effort on the research. I hope you keep this up, and I hope you find these comments helpful in improving your manuscript.
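[Editor's note: as a footnote to the comment above about the "Estimate Student Count" column, here is one purely illustrative way a per-40-hour-week figure could be derived from mean instructor time per student. The function name and the numbers are hypothetical, not taken from the article.]

```python
def optimal_student_count(minutes_per_student_per_week: float,
                          weekly_hours: float = 40.0) -> int:
    """Largest roster size that fits within the weekly hour budget,
    given the average instructor minutes each student requires."""
    return int((weekly_hours * 60) // minutes_per_student_per_week)

# e.g., if each student requires roughly 90 instructor-minutes per week:
print(optimal_student_count(90))  # 2400 // 90 -> 26 students
```

A column labeled this way would make explicit both the hour budget assumed and the direction of the estimate, addressing the ambiguity the commenter raises.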
This is a good topic to address, as many of my students have indicated that I "actually read their assignments," which made me ask other instructors whether I was spending too much time providing feedback. A comment was made that instructors are not paid appropriately for the amount of time spent reading and grading online work.
I have also found that the curriculum design does not always coincide with the material; I just have not figured out how to redesign the curriculum myself. I agree - I believe a lot of the material is created by outside staff who do not understand the changing vision of the organization and how, as professors, we have to incorporate change into our weekly teaching.
To put this issue into perspective, I believe that a workload comparison between instructors in traditional brick-and-mortar settings and online instructors is essential to the discussion of this paper. The discussion should not merely address who works more, but rather how curriculum development and instructor input could enhance student learning and retention and reduce instructor overload across the board. In reality, this is not a simple process, as it requires cooperation from educators, administrators, and designers, along with reliable research. Lastly, what implications does instructor overload have for students? Reduced teaching quality? Poor feedback from instructors? The impact on students is pertinent and should be addressed as well.
As an SME, it is very easy to come up with what seem like creative assignments that students will just love. However, one may not always take into consideration the receiving end in terms of faculty workload. I have recently begun asking myself as I am working on course development if I would want to teach the course!
One consideration for this paper that would be interesting is compensation. Does the time an instructor devotes to their workload correlate with the pay they are receiving? "You get what you pay for" comes to mind. If online instructors feel they are being adequately compensated for their work, they may be inclined to spend more time on the class. Thus, compensation would seem to be an important element to consider in a study of this nature, particularly since adjunct pay is so disproportionate to brick-and-mortar faculty pay.
Mark,
Thank you for sharing. We were very fortunate to get the faculty to willingly participate in this study.
I have attached a copy of the collection tool for you to review. If you have any questions about it, please let me know.
Michael Coplan
Racheal, thanks so much for taking the time to comment on our study. I agree that adding an appendix that contains the actual tool would be helpful. The tool is an excel document that is automatized so that faculty only need to enter (in minutes) the amount of time spent on each task. Faculty were encouraged to enter their data at two hour intervals (calendar reminders were set to remind them) to help improve the validity of the data.
In terms of discussing the analysis of our study in the methodology section could you be more specific? The results section gives a brief analysis of our findings but I am uncertain what analysis is needed within the methodology section. Any help you can give here would be greatly appreciated.
Finally, you raise a great question about the timing of the study. We selected July and September mostly due to convenience. There was no master plan behind which months were chosen. Later we collected data in April as well since we wanted to replicate the study and see if a different month had any affect. Unfortunately, our response rate was much lower in April (around 45%). Our assumption is that faculty grew tired of having to track their every move during the course of a week. Results in April did not show any significant difference on the major items discussed in our initial study. We just did not feel as comfortable reporting this data due to the lower response rate.
Thanks again for sharing your thoughts.
Scott
Hi Scott,
I believe in the methodology section there is a statement about the analysis being done in SPSS, but not what analysis was conducted. It would help clarify and give credibility to the study to mention it in that section. And, your response rates for the two months you have are quite strong, I'm not sure it would have detracted to add April in there, but I certainly understand your rationale. It would be interesting to follow a couple of faculty over the course of year to see how it pans out -- possibly future research ideas. Great information!
I regret to say I see a few important shortcomings with this study:
I hope I am not being too critical with my assessment. I welcome any further thoughts on these items.
Thank you.
Jennifer Keeley
I would disagree with the point about the literature dating to 2006. In much of social science research it is perfectly acceptable to pull from all literature on a topic, even literature that happened 20, 30 or even 50 years ago. It all should build on each other and be connected (even if the results differ).
If there is scant literature on this topic, then perhaps the most recent study is from 2006. I would tend to not worry about the date of the research, but how solid that previous research is. Having research that is dated would raise a concern about the timeliness of a topic. Is it still relevant? Based on the information provided in this study, this topic is still quite relevant.
Jennifer,
First, let me start by thanking you for some great critical feedback. I believe you touched on some key points that will help us to improve this paper. Keep the feedback coming in! ^_^
Here are my specific replies to your comments:
You wrote: First, I am questioning whether or not the stated lack of research in this area is correct - i.e., research examining faculty workload in the online venue. I look forward to perusing the research to confirm.
My Response: If you find more literature on online faculty workload then we would love for you to share it with us. We will also continue to research this topic and amend our literature where appropriate.
You Wrote: Relatedly, the supporting research dates back to 2006. This information is outdated. These points / information needs to be presented within the context of the year.
My Response: There have already been a few comments on this by Racheal and Stephen (see above). I agree that it is always beneficial to find the most recent literature available, especially with a subject such as online faculty workload. This field is changing constantly so being as relevant as possible is important. As mentioned above, we will continue to research the literature in this area.
You Wrote: As a few of you noted, the Methodology section is a bit vague with the details on the statistical analysis that was conducted. The Results section follows suit with the vague and somewhat hazy manner in which the findings are presented.
My Response: You are absolutely right here. This is an area of weakness that we need to shore up. Our only defense is that we adapted this paper from an as yet unpublished paper that contains the actual statistical analysis used (comparison of means, ANOVA) and includes a breakdown for time spent on each item (essays, worksheets, posting, phone calls, responding in the LMS to student questions, etc.). This paper is only reporting on the aggregate results based on the mean average of time spent in each course. We will work on explaining these results better in our methodology section and in the results section.
You Wrote: Lastly, the recommendation for faculty to be involved with course development per their insights on the time involved with grading assignments implies that courses should be developed to accommodate the time limitations of faculty - rather than facilitating student learning. This sets up a huge red flag. The appropriate recommendation would be for further research to identify the extent and manner in which this workload differs per subject area and grade level to inform staffing needs and/or to gauge appropriate class size.
My Response: Here I would have to respectively disagree. Within the introduction we mention three pillars of success that institutions examine to determine whether they have a successful online program. One of those was a strong curriculum. Thus our assumption is that student learning is necessary. Our argument is that institutions tend to only look at student learning without considering faculty workload. The premise is that courses which contain a very heavy faculty workload may (1) cause you to lose a lot of your quality faculty, (2) force the faculty member to spend so much time grading assignments that little time is left for student engagement (thus actually hindering student learning).
Thanks again for your great feedback.
Scott
Hi Scott and Michael,
Like the others who have weighed in on your article I also think elaboration on the data collection and analysis would enhance your paper. With that noted, I think you have begun a conversation on a much needed aspect of online line teaching - the demands on faculty time related to the ancillary issues of teaching. Qualitative and quantitative feedback is a part of teaching that in manifest in the grading of assignments, but often the teaching that allows the student to apply course concepts to real-life situations occurs in the Discussion Forum; this is especially evident in courses that deal with conceptual frameworks such as religion and theology. Transferring some of the demands on faculty time from grading to interaction would effectively enhance the critical thinking skills of the students; resulting in an educational process that aims toward the higher levels on Bloom's Taxonomy in practice rather than merely in principal. One additional item of research and analysis that might enhance your project would be the analysis the EOCS feedback from student's whose instructors were required to spend less time grading and more time teaching and interacting with students in comparison to those whose instructors spend more time grading and less interacting.
Keep up the great work! Your study has a tremendous upside.
Blessings,
Ron
Ron,
Thank you for sharing and for your encouragement. We could look at the EOCS as you mentioned. I would be intrigued to see where those values land and as you mentioned it could enhance the value of the information presented.
Michael Coplan
This is a very interesting topic and research question, but the article seems very superficial in its treatment and analysis. The recommendations and conclusion follow “common sense” but are not supported by much more. I did a quick Google Scholar search on “online faculty workload,” and this is the first result sorted by Scholar:
Mandernach, B. Jean, Swinton Hudson, and Shanna Wise. "Where Has the Time Gone? Faculty Activities and Time Commitments in the Online Classroom." Journal of Educators Online 10.2 (2013): n2.
The title seems quite relevant to this work, and it is rather recent.
Here are a few more comments:
Table 1 mentions months while the text of the article mentions weeks; this is confusing. What about using “Time Period,” “Week 1,” and “Week 2” as headings for Table 1, and then using the actual time frame rather than the name of the month (e.g., replace “July” with “7-12 to 7-18”)? That would be easier to read and understand at first look, and it would also provide more robust data if somebody wants to repeat the experiment. Also, typing 82+58=140 is a bit condescending; naming the column “Total Participation” is explanatory enough.
How was student count calculated? Were W's included?
In Table 3 the header “Estimate Student Count” is misleading. It should be replaced with something more representative of what that column contains, such as “Optimal Student Count per 40-Hour Week,” or whatever you come up with that better conveys what those numbers represent.
“Finally faculty engagement in the discussion forums was higher in the top three courses in table #3.” This is one of the most important components of online teaching, so it deserves more than nine words. (Also, add a comma after “Finally.”)
On the recommendations: what about comparing time on task with the existing literature for online and ground courses? These recommendations seem reasonable, but they come a bit out of a vacuum without any comparison to what happens in similar courses at other universities (if this data is available; if not, say so) or in other modalities.
Thank you for your time reading this and for your effort on the research. I hope you keep this up and that you find these comments helpful in improving your manuscript.
Filippo
Good topic to address, as many of my students have remarked that I "actually read their assignments," which made me ask other instructors if I was spending too much time providing feedback. A comment was made that instructors are not paid appropriately for the amount of time spent reading and grading online work.
I have also found that the curriculum design does not coincide with the material; I just have not figured out how to redesign the curriculum myself. I agree; I believe a lot of the material is created by outside staff who do not understand the changing vision of the organization and how, as professors, we have to incorporate change into our weekly teaching.
I agree with Janice's comments strongly.
Dr. Parker
To put this issue into perspective, I believe that a workload comparison between instructors in the traditional brick-and-mortar setting and instructors online is essential to the discussion of this paper. The discussion should not merely address who works more, but rather how curriculum development and instructor input could enhance student learning and retention and reduce instructor overload across the board. In reality, this is not a simple process, as it requires cooperation from educators, administrators, designers, and reliable research. Lastly, what implications does instructor work overload have for students? A reduction in quality teaching? Poor feedback from instructors? The impact on students is pertinent and should be addressed as well.
As an SME, it is very easy to come up with what seem like creative assignments that students will just love. However, one may not always take into consideration the receiving end in terms of faculty workload. I have recently begun asking myself as I am working on course development if I would want to teach the course!
One consideration for this paper that would be interesting is compensation. Does the time an instructor devotes to their workload correlate with the pay they receive? "You get what you pay for" comes to mind. If online instructors feel they are being adequately compensated for their work, they may be inclined to spend more time on the class. Thus, compensation would seem to be an important element to consider in a study of this nature, particularly since adjunct pay is so disproportionate to brick-and-mortar faculty pay.