Generating a Classroom Pulse from Active Windows on Student Computers
Abstract
With technology embedded in an increasing number of educational contexts, it is prudent to identify ways in which instructors can leverage technology to benefit their pedagogical practices. The purpose of this study was to determine if information about students’ active windows on their personal computers could provide actionable information to inform real-time instructional interventions and post-lecture reflection on practices. The active window approach mitigates issues with prior data collection methods and provides an opportunity to capture complete, real-time student computer usage without the need to install spyware. Based on observations of 68 first-year engineering students and 32 second-year engineering students in large engineering lectures, we estimated error rates of 4.28% with a 95% confidence interval of [2.81%, 6.04%] in a structured computer use course setting and 6.89% with [4.42%, 10.17%] in a semi-structured use setting. To illustrate the type of information active window monitoring could provide, we captured active window data from 135 students every 12 seconds for an entire 75-minute lecture. The data were averaged to generate a timeline that provided insight into how students responded to the instructor’s methods. This research has immediate practical implications for course design, instructional strategies, and engineering education research methods.
Keywords
improving classroom teaching, structured computer use, media in education
Introduction
Imagine standing in a large lecture hall and glancing around to gauge whether students are grasping the lecture concepts. However, rather than observing students nodding in agreement or shaking their heads in confusion, the raised lids of open laptop computers greet you. Instead of garnering a quick comprehension check, you are left wondering, “Are students paying attention or are laptops hurting learning?” As large classes become more prevalent and schools increasingly implement college-wide computing initiatives, this is the reality for numerous instructors. From one-to-one initiatives (Hayhurst, 2018; Richardson et al., 2013) to Bring Your Own Device (BYOD) requirements (Siani, 2017) in both K-12 and higher education, personal computers are embedded into educational contexts. Personal computers have been heralded for enabling interaction and supporting technology-centered instructional activities such as electronic content delivery, interactive polling, course management, and interactive software mentoring (Campbell & Pargas, 2003; Tront, 2007). However, the advantages of laptops in the classroom are often accompanied by the disadvantage of student inattentiveness. Laptops allow students to engage in media multitasking, swapping between a myriad of distracting activities including social media, gaming, and email (Langan et al., 2016; Wammes et al., 2019). Additionally, students who engage in off-task laptop activities distract neighboring students (Hall, Lineweaver, Hogan, & Brien, 2020).
Over the years as technology has become increasingly embedded into classrooms, researchers have tried to provide instructors with data to understand how laptop usage in the classroom impacts learners. From personal digital assistants to laptops, tablets, and cell phones, researchers have documented both positive learning effects (Barak, Lipson, & Lerman, 2006; Doolen, Porter, & Hoag, 2003; Lohani, Castles, Lo, & Griffin, 2007; Roth & Butler, 2016; Samson, 2010; Shaw, Kominko, & Terrion, 2015) and negative learning effects (Carter, Greenberg, & Walker, 2017; Fried, 2008; Hembrooke & Gay, 2003; Junco, 2012; Kraushaar & Novak, 2010; May & Elder, 2018; Wood et al., 2012; Zhang, 2015) related to personal technology usage in the classroom. With no clear consensus regarding the impact of personal technology on learning in the classroom, researchers continue to tease out details of how student computer usage impacts learning by, for example, quantifying the amount of off-task activity in classrooms (Ragan, Jennings, Massey, & Doolittle, 2014), examining how non-academic applications like Facebook are used by students during lectures (Judd, 2014), and investigating how laptop bans impact learning (Elliott-Dorans, 2018). One trend that has emerged is that, for classrooms where instructors structure students’ computer use, learning impacts are typically positive (Downs, Tran, Mcmenemy, & Abegaze, 2015; Kay & Lauricella, 2011). That is, when instructors design the course to incorporate purposeful and deliberate computer usage, the impact of computer usage on learning tends to be positive. When student computer use is unregulated, research has found both positive and negative learning impacts. This finding should motivate instructors to embrace technology in their classrooms and learn how to use it to their advantage.
One powerful tool that would allow instructors to use student computers to their advantage is a system that shows the pulse of large lectures. By capturing data that is similar to, but more accurate than, glancing around the classroom to gauge who is engaged, instructors could react in real time to encourage more participation from students. Instructors could use a learning pulse monitor to time instructional interventions to promote active, engaged learning. We hypothesize that the active window information from student computers could provide the requisite data for determining a real-time classroom learning pulse. This study uses observational research in two types of large engineering lecture courses over one semester to quantify the error associated with using the active window as a proxy for student attention. Then, we capture active window data electronically to illustrate how a learning pulse monitor could provide actionable information to an instructor for both real-time intervention and post-lecture reflection in order to improve instructional practices.
Why Active Window?
In information processing theory, there is a strong, direct link between attention and learning. This direct link is clear in a quote from Mackintosh (1975): “The probability of attending to a stimulus determines the probability of learning about that stimulus” (p. 294). More recent studies have reached similar conclusions: humans learn about items that they attend to (Mitchell & Pelley, 2010). Robert Gagne has been credited with shifting the information processing discussion from the research lab to the practical realm of instructional design with his introduction of the Conditions of Learning (Gredler, 1997). Gagne’s (1965) original theory stipulates that there are nine instructional events that must occur for learning to take place, the first of which involves obtaining the learner’s attention. The instructional events do not guarantee learning will occur, but rather they support the learner’s internal mental processes. That is, each event is a necessary condition for learning to take place. While the theory has evolved somewhat since its introduction (Gagne, 1965; Gagne, 1977), attention has remained an initial event. Fleming (1987) succinctly explains why: “Quite simply, without attention [the first event] there can be no learning” (p. 236). To support computer users, who are students in our contexts, some suggest a need to design attention-aware systems that delay interruption by deferring alerts unrelated to the task at hand (Bailey & Konstan, 2006). Instead, we focus on how attention-aware systems could support instructional design. Specifically, we hypothesize that a student’s top-most, active window can be used to determine that student’s attention and provide real-time data for instructors’ decision making.
Current assessment strategies used to measure student computer use in classrooms are limited. Existing research studies have explored student attention through self-reported survey data, internet activity monitoring, or the installation of spyware. Survey data, by far the most common data collection method, does not provide the data resolution needed to generate a real-time learning pulse. Internet monitoring provides an incomplete characterization because it misses non-internet activity (e.g., local applications). Spyware would provide the requisite data; however, significant privacy concerns and installation issues have plagued studies attempting to utilize it (Kraushaar & Novak, 2010; Kraushaar, Chittenden, & Novak, 2008). By relying on active window data, particularly data captured as a binary on-task/off-task determination, we attempt to balance student expectations for privacy with the need to capture real-time data.
In courses that use classroom learning technology to communicate with student laptops via a server (e.g., DyKnow Vision or Classroom Presenter), active window data could be captured directly through the software. Specifically, if a student’s top-most window contains the course material, the student is paying attention (i.e., on-task) (Figure 1). If any other application is the top-most, active window (e.g., Figure 2, Figure 3), the student is not paying attention (i.e., off-task). The assumption that the active window indicates attention to or distraction from lecture has been previously implied (Hembrooke & Gay, 2003; Kraushaar & Novak, 2010). However, the assumption has not been directly tested for reliability. It is clear that there is error associated with the method. For example, consider the layout in Figure 3, where the classroom software and another application split the screen. While the active window (the window with the mouse focus) is not the course software, it is possible that the student is paying attention to the lecture. Similarly, it is possible that in Figure 2 the student has non-course software as the active window but is viewing the slides on the classroom projector. There is a need to quantify the error of the active window method to understand whether active window data can provide actionable information. This study uses observational data of student computer usage within classrooms to quantify the error associated with the active window method.
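To make the proxy concrete, the sketch below shows one way a binary on-task/off-task flag could be derived from the title of the top-most, active window and sampled on a fixed interval. This is a minimal illustration only: it assumes a Windows machine with the pywin32 package, and the keyword list is a hypothetical stand-in for detecting the course software; in our study the determination was made inside the interactive learning software itself.

```python
# Minimal sketch (not the study's implementation): poll the active window and
# emit a binary on-task flag. Assumes Windows and the pywin32 package; the
# keyword list below is an illustrative assumption, not the software's logic.
import time
import win32gui

ON_TASK_KEYWORDS = ["DyKnow"]  # hypothetical titles that count as course software


def active_window_title() -> str:
    """Return the title of the top-most, active window."""
    hwnd = win32gui.GetForegroundWindow()
    return win32gui.GetWindowText(hwnd)


def is_on_task(title: str) -> int:
    """Return 1 if the active window appears to be the course software, else 0."""
    return int(any(keyword.lower() in title.lower() for keyword in ON_TASK_KEYWORDS))


if __name__ == "__main__":
    # Sample every 12 seconds, mirroring the interval used for the classroom pulse.
    while True:
        print(time.strftime("%H:%M:%S"), is_on_task(active_window_title()))
        time.sleep(12)
```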
Method
Participants
The study was conducted at a large research university located in the Southeast United States. The university’s college of engineering has an established computer requirement resulting in a multitude of personal computers in classrooms. The college of engineering supports interactive learning software that establishes a communication link between instructor and student computers to facilitate the distribution of slides, polls, and other instructional activities. We purposefully selected courses in which the interactive learning software was integrated into instructional activities so that there was a clear on-task software window. Data were collected in six sections of a First-Year Engineering (FYE) course, one section of a Statics course (S), and one section of a Dynamics course (D) in the Fall semester. FYE observations and S&D observations were considered as two separate groups due to differences in the use of technology. In the FYE courses, computer use was strictly structured by the instructors. In the S&D courses, computer use was semi-structured; the instructor passed slides and annotations but otherwise usage was unregulated.
First-Year Engineering (FYE) Sections – Structured Computer Use
The six FYE sections were part of a first-year, first-semester course and consisted primarily of freshman general engineering students. The sections were all large lectures, with enrollments ranging from 120 to 250 students, and met once a week for 50 minutes in large auditoriums. The FYE sections had five different instructors (one instructor taught two sections) but covered identical content. The instructors had weekly coordination meetings during which a common slide deck was distributed. Students were required to bring personal computers to class and used the interactive learning software to receive lecture content and to interact with instructors. Students were given initial software training during the second week of classes. The instructors actively directed student computer use throughout the lecture period with polling questions, active exercises, and student work submission (Mohammadi-Aragh & Williams, 2013).
Statics and Dynamics (S&D) Sections – Semi-Structured Computer Use
The S&D sections included in this study were taught by the same instructor and had the same lecture format. The Statics section was a large lecture with 228 students and met in a large auditorium. The Dynamics section was the smallest lecture, with 86 students, and was taught in a large classroom. Both sections met for 75 minutes twice a week. The instructor used a Tablet PC to distribute slides and lecture notes in real time to students via the interactive learning software. Lecture notes were also projected at the front of the classroom. The lecture usually began with a review of student-selected homework problems, was followed by a short lecture covering new concepts, and concluded with example problems. The instructor used the class roster to create an interactive environment by randomly calling on students to assist him when working problems.
Before enrolling in either S&D section, students completed the college of engineering’s FYE two-course sequence, which used the same interactive learning software described above for the FYE sections. At the beginning of the semester, S&D students were told that they could use the interactive learning software to capture, annotate, and save lecture content. However, students were not required to use a computer, and lecture slides (i.e., the instructor’s DyKnow file) were posted at the conclusion of each class. Only students who brought a computer to class were included in the study.
Observations
We used observations of students’ behavior to collect student attention data and information about active windows on student computers. Direct observation of student behavior is a frequent and recognized method for determining student attention in educational, behavioral, and psychological research studies (Hoge, 1985; Rapport, Kofler, Alderson, Timko, & Dupaul, 2009). In determining attention, observations may focus on general behaviors, such as “on-task”, or specific behaviors, such as “playing with an object”. Focusing on general behaviors is recommended since significant and consistent evidence exists for the validity of general measures (Hoge, 1985). We used in-class, naturalistic observations, which are unobtrusive, covert observations during which the observer blends in with participants and does not affect behavior. Students were not informed that they were being observed in order to capture typical, unchanged student behavior.
Observations were conducted each week of the semester during FYE and S&D lectures. To increase validity of our estimates as a representation of total error rates, we selected students to observe using stratified random sampling. That is, we divided the class into sections (e.g., front, back, middle) and randomly sampled from each of these areas. Prior to the start of lecture, the observer would sit in a random location in the classroom, and select students whose computer screens were visible. To avoid data overlap due to neighboring students interacting, the selected students could not be sitting next to each other. Observations were conducted on a Tablet computer similar to students’ computers and the screen was shielded from nearby students. Throughout the semester, observers reported conversations with neighboring students that indicated their presence remained undetected (e.g., neighbors asked homework questions such as “What did you get for question 3?”).
Observers were trained and used an observation protocol to strengthen reliability. Figure 4 shows the observation protocol with sample data. The protocol guided observers to document student activity (Notes), the observer’s perception of student attention (A?), and the students’ top-most, active window (Window) at every minute during the lecture. Generally, a student was considered attentive if they were looking at course content or the instructor, discussing course content, working on instructor-assigned tasks, or listening to the instructor. In other words, a student was classified as on-task or attentive if they were participating in teacher-sanctioned activities (Hoge, 1985). For validity and reliability purposes, after each observation was completed, the protocol “Notes” field and the judgement of attention columns were reviewed by the research team.
Analysis Technique for Observations
For every observed participant, the observer’s perception of attention (A? column) captured a timeline of observed student attention. The record of a student’s active window (Window) was analyzed to produce a timeline of measured student attention. Following the observations, both timelines were coded with a 1 representing “y” (paying attention) and a 0 representing “n” (not paying attention). As an example, for Student 1 in the observation protocol in Figure 4 , their observed student attention (OSA) would be 1-1-1-0 while their measured student attention (MSA) would be 1-1-1-1.
Every participant’s OSA and MSA were compared for mismatches, which are instances in the timelines where the OSA and MSA are not equal. A mismatch occurs when a student is observed to be attentive, but their active window is not course software (e.g., Figure 4 : Student 2, 9:47am). In this case, the mismatch is a false negative (Type II error) since MSA is 0 but OSA (actual attention) is 1. A mismatch also occurs when a student is observed to be distracted, but their active window is course software (e.g., Figure 4 : Student 1, 9:48am). In this case, the mismatch is a false positive (Type I error) since MSA is 1, but OSA (actual attention) is 0.
Observation notes were analyzed to determine the types of activities that produce error. The degree of validity was calculated as a mismatch error rate. For each student, the error rate (ER) was calculated as the number of mismatched instances (#MI) divided by the total number of observed instances (TOI), ER = #MI/TOI. Using the error rates for each group of students, we created 10,000 bootstrap samples with replacement in order to estimate the true mean error rate for each class type (i.e., structured versus semi-structured computer use).
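A minimal sketch of this analysis is shown below, assuming each student’s OSA and MSA timelines are stored as equal-length 0/1 lists. The function names are ours, and the percentile bootstrap over students (10,000 resamples with replacement) mirrors the procedure described above; it is an illustration, not the exact analysis code used in the study.

```python
# Minimal sketch of the mismatch and bootstrap analysis. Assumes OSA and MSA are
# equal-length 0/1 lists per student (1 = attentive, 0 = not attentive).
import random


def error_rate(osa, msa):
    """ER = (# mismatched instances) / (total observed instances)."""
    mismatches = sum(1 for o, m in zip(osa, msa) if o != m)
    return mismatches / len(osa)


def bootstrap_mean_ci(rates, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean error rate, resampling students with replacement."""
    rng = random.Random(seed)
    means = sorted(sum(rng.choices(rates, k=len(rates))) / len(rates) for _ in range(n_boot))
    lower = means[int((alpha / 2) * n_boot)]
    upper = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(means) / n_boot, (lower, upper)


# Illustrative data: Student 1 from Figure 4 (OSA 1-1-1-0, MSA 1-1-1-1) and one error-free student.
rates = [error_rate([1, 1, 1, 0], [1, 1, 1, 1]), error_rate([1, 0, 1, 1], [1, 0, 1, 1])]
print(bootstrap_mean_ci(rates))
```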
Electronic Active Window Monitoring for Classroom Pulse
There are a variety of ways active window monitoring could be implemented. In our case, we approached the developer of the interactive learning software used at our study site and asked them to generate the data. They incorporated a visual attention widget into the instructor panel. The widget provided instructors with a visual representation of student attention by monitoring students’ active, top-most window (Figure 5). The software assumed that, similar to the measured student attention from the observation protocol, if the active window on a student’s computer was the learning software, then the student was on-task. All other active windows indicated off-task behavior. We created a record of the widget’s output with screen capture software. We then processed the recordings with MATLAB’s Image Processing Toolbox to create a spreadsheet file for analysis.
Active windows were measured every 12 seconds for the entire lecture. At each time point, average class attention was calculated by dividing the number of attentive students (i.e., students with the course software as the top-most, active window) by the total number of students logged into the course software (Equation 1). The average class attention timeline was supplemented with information from observation notes and an audio recording of the lecture in order to create a descriptive class timeline (i.e., start of class, start of homework review, start of new lecture material, start of practice problems related to new lecture material).
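For reference, Equation 1 can be written as follows, using notation of our own: n_t is the number of attentive students (course software as the active window) at sample time t, and N_t is the number of students logged into the course software at that time.

$$\text{Average Class Attention}_t = \frac{n_t}{N_t}$$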
Results
Characteristics of Attentive and Inattentive Students
Thirty-four observation sessions were conducted across eight weeks of FYE lectures. Two students were observed during each session, providing a total of 68 FYE students (6.4% of the total students enrolled in the course). One student was excluded from analysis because their computer battery died before the end of the lecture. The FYE observations averaged 47 instances per observation.
Ten observation sessions were conducted across eight weeks of Statics lectures and six across five weeks of Dynamics lectures. Two students were observed during each session, providing a total of 32 S&D students (10.2% of the total students enrolled in the two courses). Of the 32 S&D students, two were excluded from analysis due to shortened observations (one Statics student left class early and one Dynamics student’s laptop battery died). The S&D observations averaged 70 instances per observation.
The observation “Notes” field was analyzed to determine the characteristics of a student who is paying attention versus a student who is not paying attention. Those characteristics are listed in Table 1. Characteristics are self-explanatory with the exception of Doodling. Doodling (and listening) referred to students drawing simple patterns or sketches while occasionally looking at slides or the instructor. Doodling (and not listening) was used to indicate students who were engaged in intensive or elaborate drawings with body language suggesting deep concentration (e.g., head down and focused on artwork). The characteristics support the reliability of the observer’s determination of attention, as they aligned with instructor expectations and literature-defined protocols for attentive and non-attentive students.
Table 1. Characteristics of attentive and inattentive students

| Paying Attention | Not Paying Attention |
|---|---|
| Listening to the instructor | Installing software |
| Looking at instructor | Texting |
| Taking notes / writing on slide | Talking to neighbor |
| Participating | Spacing out |
| Helping neighbor on assignment | Doodling (and not listening) |
| Submitting a slide | Surfing the web |
| Doodling (and listening) | Working homework |
| Looking at handout | Sleeping |
| Answering poll/question | Flipping through previous slides |
| Looking at projector | Reading newsfeed |
| Copying instructor’s notes | Checking email |
| Asking questions | Writing report |
Participant Error Rates
Sources of error generated from the “Notes” field of the observation protocol are listed in Table 2 and are ordered from most frequent to least frequent overall. Using a second device to surf the web, email, or play a game (Reason B) was considered separate from texting (Reason C) since the length of activity was different. Not participating (Reason D) included activities such as ignoring the discussion, not advancing the slides, and reviewing past slides in order to “catch up”.
The primary and secondary sources of error were different between the two groups. In FYE sections, the primary source of error was using a second device while course software was open on the primary device (Reason B – 33 occurrences), and the secondary reason was texting (Reason C – 24 occurrences). Both these sources of error produce false positives since the active window data indicates that students are paying attention, but in reality they are not. By far the largest source of error for S&D was students leaving a non-course window open (e.g., a browser window) and looking up at the instructor and lecture slides in the front of the room (Reason A – 98 occurrences). This source of error produces false-negatives since the active window data indicates that students are not paying attention, but in reality they are attentive. The secondary source of error was students with their head down or sleeping (Reason G – 12 occurrences).
Table 2. Reasons for mismatches between observed and measured student attention

| Label | Reason for Mismatches | Total | FYE | S&D |
|---|---|---|---|---|
| A. | Student left browser/email open and looked at instructor | 112 | 14 | 98 |
| B. | Using second device (computer/slate/phone) | 34 | 33 | 1 |
| C. | Texting | 30 | 24 | 6 |
| D. | Not participating | 24 | 18 | 6 |
| E. | Student is talking to neighbor with course software open | 23 | 14 | 9 |
| F. | Screensaver on | 20 | 17 | 3 |
| G. | Head is down / appears to be sleeping | 18 | 6 | 12 |
| H. | Student is working homework with course software open | 9 | 5 | 4 |
| I. | Doodling | 4 | 2 | 2 |
| J. | Looking up answers online | 2 | 2 | 0 |
For each participant, the error rate, primary reason for error, and total mismatches attributed to the primary reason are shown in the tables in the Appendix. Reasons reference the labels given in Table 2. The student code in each table indicates the course (F – FYE, S – Statics, D – Dynamics), the observation week (01–11), and then the individual student. For FYE, the final portion of the code represents the observed section (1–6) followed by the student (01 or 02). For S&D, we only observed one section of each course, so there is no section code, and the final portion indicates whether the student was observed on Tuesday (01 and 02) or Thursday (03 and 04). All 97 codes indicate unique participants.
The primary reasons for students’ mismatches are distributed across all observation weeks and all observed sections. For an individual FYE student, the most common source of error was not participating (Reason D – 8 students). However, in many cases this source of error only produced a single mismatch (e.g., F03-201, F05-602). The two students with 10 or more mismatches both had second devices: F02-602 had 11 mismatches, with 9 attributed to using a second computer, and F06-102 had 20 mismatches, with 15 attributed to playing games on a cell phone. For an individual S&D student, the most common source of error was leaving a non-course window open (e.g., a browser window) and looking up to “check in” with the lecture (Reason A – 16 students). The four students with more than 10 mismatches all engaged in this “checking-in” behavior.
Estimate of Mean Error Rates
Based on the bootstrap analysis of the FYE data (Figure 6, left), the mean percent error is 4.28% with an estimated standard error of 0.82. The 95% confidence interval for the FYE percent error is [2.81%, 6.04%]. The bootstrap analysis of the S&D data (Figure 6, right) produced a mean percent error of 6.89% with an estimated standard error of 1.51. The 95% confidence interval for the S&D percent error is [4.42%, 10.17%].
Real-time Electronic Classroom Pulse
Active window records were electronically captured every 12 seconds from 135 students during one 75-minute Statics lecture. The percentage of class time spent in the course software was calculated for each student, and the frequency distribution for all students is shown in Figure 7. The percentage of on-task time varies across the entire frequency range. Twenty-eight students were in the 90–100% category, indicating they remained in the course software for nearly the entire class. Fourteen students were in the 0–10% category, indicating they were logged into, but not using, the course software for nearly the entire class. The remaining 93 students engaged in multitasking (i.e., switching between application windows).
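As a minimal sketch of how these per-student summaries could be computed, the snippet below bins each student’s fraction of on-task samples into 10% intervals. The data structure (a dictionary mapping student IDs to 0/1 samples taken every 12 seconds) and the function names are our assumptions for illustration, not the processing pipeline used in the study.

```python
# Minimal sketch: per-student on-task percentage and a 10%-bin frequency distribution
# (cf. Figure 7). `records` maps student IDs to lists of 0/1 active-window samples.
from collections import Counter


def on_task_percentages(records):
    """Percentage of sampled instants each student spent in the course software."""
    return {sid: 100 * sum(samples) / len(samples) for sid, samples in records.items()}


def frequency_distribution(percentages, bin_width=10):
    """Count students per bin; 100% is folded into the top (90-100%) bin."""
    bins = Counter()
    for pct in percentages.values():
        low = min(int(pct // bin_width) * bin_width, 100 - bin_width)
        bins[f"{low}-{low + bin_width}%"] += 1
    return dict(bins)


# Illustrative (not actual) records for three hypothetical students:
records = {"s1": [1, 1, 1, 1], "s2": [0, 0, 0, 1], "s3": [1, 0, 1, 0]}
print(frequency_distribution(on_task_percentages(records)))
```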
The average class attention for the Statics lecture is plotted in Figure 8. The timeline is annotated based on a recording of the course. The instructor reviewed homework problems starting at 3:43pm and worked practice problems starting at 4:18pm. During the homework review and practice problem sessions, gray shading is used to indicate the start and end of different problems. By annotating the average class attention timeline in Figure 8, we see a clear indication that instructors affect student attention. When new material was presented, there were peaks (i.e., local maxima) in attention. Furthermore, instructor statements such as “Pay attention” also promoted attention, but the effect was short-lived. Randomly calling on students while working practice problems may be a method of returning students to lecture, but another method must be used to prolong the increased engagement, as students returned to off-task activities when they realized they were not selected.
Discussion
The primary purpose of this study was to examine the validity of using students’ top-most, active window as a proxy for attention. With an average error of 4.28% or 6.89%, depending on course type, this study provides strong evidence that the active window can be a valid proxy for average classroom attention. Obviously, the final determination of acceptable error rates for future contexts should be made in consideration of the specific research or pedagogical questions under investigation. However, to give the reader perspective, until now instructors have primarily obtained information about student computer usage with surveys. Even with a high response rate, survey data can be inaccurate because students’ memories may not match reality (Brener, Billy, & Grady, 2003) or grade-oriented students may underreport negative behavior. Kraushaar and Novak’s (2010) investigation directly comparing students’ self-reported computer use to computer use monitored by spyware established that students underreported instant messaging use by 40%. Only 25% of students reported using instant messaging programs during class, but the spyware record captured instant messaging use by 61% of the class. The error rates established in our investigation for the active window method are far lower than the error rate for instant messaging self-reports established by Kraushaar and Novak.
While active window error rates are acceptable for an application producing a general classroom pulse, they may not be acceptable for applications requiring lower error. For example, the active window method may not be appropriate for assigning participation grades. Data collection in this study occurred across two distinct types of computer-infused classrooms. The student characteristics given in Table 1 and the sources of error in Table 2 allow for informed decisions regarding the appropriateness of using the active window technique in classrooms that differ greatly from the study context. For example, if an instructor has observed classroom behavior similar to the activities in Tables 1 and 2, the active window method may be considered appropriate. In the future, researchers and instructors can consider how technology is used in their classroom contexts to estimate the error for their particular situation.
Our decision to treat FYE and S&D as two different groups due to the use of technology was supported by the sources of error observed in the two groups. The primary and secondary reasons for mismatches in one group were not the primary or secondary reasons for the other group. In FYE courses (structured use), students were more likely to leave the course software active and use a secondary device. In S&D (semi-structured use), students were more likely to log into the course software and then switch to off-task activities on the same device. Then, S&D students would “check-in” with lecture by glancing up at the instructor and projector; students appeared to use the projector as a second monitor, and would only switch back to the course software if they decided to re-engage with the lecture by, for example, taking notes on the instructor distributed slides. The distinction in error types between FYE and S&D courses did not appear to be related to content or instructor, as reasons for error were similar across different sections, course timing, instructors, and weeks. Instead, the differences in error appear to be related to the instructor’s use of technology. Our results provide some evidence that instructional methods produce different student behaviors in computer-infused classrooms. Future studies could investigate whether this behavior is a natural change that occurs as students progress through degree programs, or if it is more directly related to technology policies for a course.
To further explore our hypothesis that an active window monitor could serve as a classroom pulse that generates actionable feedback for the instructor, we paired near real-time active window data (i.e., data collected every 12 seconds) with a recording of the lecture. Our results suggest an active-window-based classroom pulse could provide insight for both real-time intervention and post-lecture reflection. For example, a real-time intervention based on the timeline in Figure 8 could address the waning attention during the middle of the lecture, approximately 4:07pm to 4:18pm. The pulse could alert the instructor that an active exercise should be executed to reengage students with lecture content. As another example, upon reflecting on the course at the end of the semester, the instructor could be motivated to reduce homework review time (low attention from approximately 3:43pm to 3:54pm) in future semesters. The extra time could be dedicated to working practice problems, as students appear to be more engaged during that portion of the lecture. Based on our initial analysis conducted as part of the study and reported in this paper, a classroom pulse generated from active window data can show an instructor how their pedagogical techniques directly affect student engagement. We are conducting additional studies to determine how instructors respond to the data and how that subsequently affects student engagement and learning.
There are two primary limitations of the active window method. First, the active window method only provides information on whether a student is in the course software or not. Second, the active window method can only be used in courses where there is a clear on-task application. Essentially, these two limitations combine to mean that a researcher cannot use the active window method without knowing the context of the computer use. As an example, if a portion of the lecture required students to complete an exercise on paper, then the computer active window would not be an indication of the classroom pulse. However, the instructor would be aware of this activity and surely would not consider the pulse an indication of attention during that time.
Conclusion
As evident from the popularity of studies examining student computer use, instructors want to understand how students are using their computers in classes. We examined the appropriateness and application of monitoring the active window on student computers as a means for providing a pulse of classroom attention. To quantify error for the active window method, we observed students in two course types, structured computer use and semi-structured computer use. We quantified both false-positive and false-negative error for active window monitoring through observations of unmanipulated student behavior. The observations provided a listing of behaviors that observers classified as attentive or inattentive and a listing of behaviors that were associated with error. These listings will provide evidence to inform decisions as to whether the active window method is appropriate for alternate contexts.
In courses where students are required to use interactive learning software, electronically captured active window data has the potential to produce a real-time attention record for every student, as well as the average class attention, essentially creating a pulse for the classroom. By implementing data collection through existing interactive learning software, the method was much less invasive than spyware installations – data were only recorded during class times and no additional software was required. Active window monitoring has the potential to inform the timing of real-time instructional intervention and to help instructors improve their practice through post-lecture reflection.
Appendix
FYE participants

| Student | MI | OI | Err. (%) | Reason | Student | MI | OI | Err. (%) | Reason |
|---|---|---|---|---|---|---|---|---|---|
| F02-601 | 1 | 47 | 2.1 | E (1) | F05-101 | 5 | 50 | 10.0 | C (3) |
| F02-602 | 11 | 47 | 23.4 | B (9) | F05-102 | 0 | 50 | 0 | – |
| F02-501 | 2 | 51 | 4.0 | J (2) | F06-601 | 2 | 42 | 4.8 | A (2) |
| F02-502 | 0 | 52 | 0 | – | F06-602 | 0 | 42 | 0 | – |
| F02-201 | 0 | 41 | 0 | – | F06-502 | 2 | 36 | 5.6 | E (2) |
| F02-202 | 5 | 42 | 11.9 | D (4) | F06-401 | 0 | 43 | 0 | – |
| F03-201 | 1 | 39 | 2.6 | D (1) | F06-402 | 1 | 43 | 2.3 | F (1) |
| F03-202 | 1 | 39 | 2.6 | I (1) | F06-301 | 4 | 45 | 8.9 | C (3) |
| F03-401 | 0 | 50 | 0 | – | F06-302 | 4 | 46 | 8.7 | C (4) |
| F03-402 | 3 | 52 | 5.8 | D (3) | F06-101 | 0 | 49 | 0 | – |
| F03-301 | 0 | 47 | 0 | – | F06-102 | 20 | 49 | 40.8 | B (15) |
| F03-302 | 6 | 47 | 12.8 | H (5) | F07-601 | 2 | 48 | 4.2 | F (2) |
| F03-101 | 0 | 51 | 0 | – | F07-602 | 0 | 49 | 0 | – |
| F03-102 | 0 | 52 | 0 | – | F07-501 | 0 | 34 | 0 | – |
| F04-601 | 0 | 49 | 0 | – | F07-502 | 0 | 34 | 0 | – |
| F04-602 | 7 | 49 | 14.3 | G (5) | F07-401 | 2 | 51 | 3.9 | A (2) |
| F04-501 | 0 | 50 | 0 | – | F07-402 | 1 | 50 | 2.0 | A (1) |
| F04-502 | 1 | 50 | 2.0 | D (1) | F07-301 | 2 | 44 | 4.5 | C (1) E (1) |
| F04-201 | 1 | 51 | 2.0 | G (1) | F07-302 | 4 | 44 | 9.1 | F (3) |
| F04-202 | 0 | 51 | 0 | – | F07-101 | 0 | 50 | 0 | – |
| F04-401 | 0 | 52 | 0 | – | F07-102 | 0 | 49 | 0 | – |
| F04-402 | 1 | 52 | 1.9 | F (1) | F08-401 | 1 | 52 | 1.9 | C (1) |
| F04-301 | 3 | 48 | 6.3 | F (3) | F08-402 | 7 | 52 | 13.5 | E (6) |
| F04-302 | 1 | 48 | 2.1 | D (1) | F08-301 | 0 | 49 | 0 | – |
| F04-101 | 0 | 51 | 0 | – | F08-302 | 0 | 49 | 0 | – |
| F04-102 | 3 | 51 | 5.9 | D (3) | F08-101 | 1 | 52 | 1.9 | A (1) |
| F05-601 | 1 | 51 | 2.0 | D (1) | F08-102 | 3 | 52 | 5.8 | A (2) |
| F05-602 | 2 | 51 | 4.0 | D (1) | F09-501 | 0 | 44 | 0 | – |
| F05-501 | 0 | 51 | 0 | – | F09-502 | 0 | 44 | 0 | – |
| F05-502 | 0 | 51 | 0 | – | F09-401 | 1 | 49 | 2.0 | C (1) |
| F05-401 | 1 | 51 | 2.0 | B (1) | F09-402 | 0 | 49 | 0 | – |
| F05-402 | 0 | 51 | 0 | – | F09-301 | 8 | 44 | 18.2 | B (5) |
| F05-301 | 7 | 45 | 15.6 | F (7) | F09-302 | 3 | 43 | 7.0 | C (2) |
| F05-302 | 4 | 45 | 8.9 | B (2) C (2) | | | | | |
* MI = total number of mismatched instances, OI = total number of observed instances
S&D participants

| Student | MI | OI | Err. (%) | Reason | Student | MI | OI | Err. (%) | Reason |
|---|---|---|---|---|---|---|---|---|---|
| S01-01 | 6 | 67 | 9.0 | E (3) | S07-01 | 0 | 72 | 0 | – |
| S01-02 | 3 | 67 | 4.5 | E (2) | S07-02 | 3 | 72 | 4.2 | A (3) |
| S02-01 | 4 | 74 | 5.4 | I (2) | S11-01 | 10 | 75 | 13.3 | A (4) |
| S02-02 | 1 | 75 | 1.3 | A (1) | S11-02 | 11 | 70 | 15.7 | A (11) |
| S03-03 | 6 | 73 | 8.2 | A (6) | D01-03 | 0 | 72 | 0 | – |
| S04-01 | 5 | 72 | 6.9 | H (4) | D01-04 | 2 | 72 | 2.8 | D (2) |
| S04-02 | 7 | 72 | 9.7 | A (7) | D02-01 | 9 | 76 | 11.8 | A (9) |
| S05-01 | 7 | 75 | 9.3 | A (7) | D02-02 | 4 | 76 | 5.3 | C (2) |
| S05-02 | 2 | 75 | 2.7 | A (2) | D03-03 | 1 | 75 | 1.3 | A (1) |
| S05-03 | 4 | 72 | 5.6 | C (2) | D05-01 | 0 | 70 | 0 | – |
| S05-04 | 15 | 72 | 20.8 | A (15) | D05-02 | 5 | 70 | 7.1 | H (4) |
| S06-01 | 0 | 75 | 0 | – | D05-03 | 4 | 64 | 6.3 | A (4) |
| S06-02 | 0 | 75 | 0 | – | D05-04 | 1 | 66 | 1.5 | E (1) |
| S06-03 | 5 | 72 | 6.9 | A (5) | D06-01 | 2 | 66 | 3.0 | A (1) D (1) |
| S06-04 | 20 | 73 | 27.4 | A (17) | D06-02 | 4 | 66 | 6.1 | A (4) |
* MI = total number of mismatched instances, OI = total number of observed instances