Technical Report

A correlational study of MyLab Statistics in a combined math and statistics course


Pearson Global Product Organization

Efficacy & Research


Anne Pier Salverda

Dirk Tempelaar1

Ross Metusalem

Dan Belenky


1Maastricht University


Compiled on 2018-04-03 11:54:15


Executive summary

Overview of MyLab Statistics

MyLab Statistics is an adaptive online tutoring and assessment system for higher education statistics instruction. MyLab Statistics provides a suite of multimedia instructional materials; interactive tutorials and demonstrations; example data sets; Pearson’s StatCrunch software for performing statistical analyses; an adaptive student study plan feature; and an assessment system for homework, quizzing, and testing. The assessment system includes pre-written items of various formats, including multiple choice, fill-in-the-blank, and free response items requiring students to perform statistical calculations. Some calculation problems algorithmically generate new values each time a student attempts them, allowing students to repeat problems to practice their calculation and problem-solving abilities. Assessments provide students with immediate feedback on their performance on a question-by-question basis. Additionally, assessments include multiple Learning Aids that, when utilized, provide step-by-step guidance for solving individual problems.

Intended outcome

The ability to explore and analyze data requires students to acquire a range of skills, such as the ability to understand statistical graphs and the ability to perform statistical analyses on data. In many areas of education and in a wide variety of professions, these skills have become vital as technological developments have resulted in an increasingly strong influence of data on many aspects of everyday life. As a result, more students than ever, with a wide variety of quantitative skills, engage in some form of statistical education. MyLab Statistics is designed to offer students a rich, personalized learning experience that builds a set of strong fundamental statistics skills to help students to perform well in their college statistics courses and their future professions.

Performance on course final exam

The learner outcome of interest in this study is students’ achievement in the course, as measured by their performance on the course’s final exam. The goal of the study was to examine if students’ usage of, and performance in, MyLab Statistics is associated with their performance on the final exam. We examined this relationship while controlling for each student’s gender, their level of prior math education, and their math ability.

Research questions

In this study, we examined the following research questions:

  1. What is the relationship between student homework activity, measured as the percentage of homework questions assigned that the student attempted, and performance on the final exam?

  2. What is the relationship between performance on formative assessments (homework assignments in MyLab Statistics) and performance on the final exam?

  3. What is the relationship between performance on summative assessments (tests in MyLab Statistics) and performance on the final exam?

Key findings

The following key findings adjust for students’ gender, level of prior math education and ability in mathematics.

  1. MyLab Statistics homework activity and performance on the final exam. There is a positive and statistically significant relationship between the percentage of homework questions attempted and performance on the final exam. Higher percentages of homework questions attempted are associated with higher final exam scores.

  2. Performance on MyLab Statistics homework assignments and performance on the final exam. There is a positive and statistically significant relationship between the mean homework score and score on the final exam. Higher mean homework scores are associated with higher final exam scores.

  3. Performance on MyLab Statistics tests and performance on the final exam. There is a positive and statistically significant relationship between the mean test score and score on the final exam. Higher mean test scores are associated with higher final exam scores.

Recommendations

Our findings on MyLab Statistics usage suggest that attempting more homework questions is associated with better performance in the course.

Our findings on student performance in MyLab Statistics suggest that students’ mean homework score and mean test score both have a positive association with their score on the final exam. Both scores can therefore be used as early indicators for student performance on the final exam. However, our findings suggest that test scores have a stronger positive association with final exam performance than homework scores do.

Next steps

Our examination of the relationship between usage of MyLab Statistics and student performance on the final exam was constrained by limitations in the amount of relevant platform data that was available to us at the time of this research, for the course under study. Once more detailed information about student activity is made available (or made accessible) within Pearson’s databases, we will be in a position to examine more detailed aspects of student interactivity with MyLab Statistics than what we focused on in this study. Doing so will allow us to test more specific hypotheses about the relationships between students’ use of MyLab Statistics and desirable learner outcomes. For instance, we would like to examine specifically if the degree to which students improve their performance on homework questions with repeated attempts is associated with better performance on the final exam. Further research could also uncover student usage strategies that are associated with poor performance in the course, and this information could be used to improve the effectiveness of MyLab Statistics for student learning.


Introduction

The ability to critically process, analyze and learn from data is essential to many areas of education. In a wide variety of professions, these skills have become vital as technological developments have resulted in an increasingly strong influence of data on many aspects of everyday life. As a result, more students than ever, with a wide variety of quantitative skills, engage in some form of statistical education. MyLab Statistics is designed to offer students a rich, personalized learning experience that builds a set of strong fundamental statistics skills to help students to perform well in their college statistics courses and their future professions.

Description of MyLab Statistics

The study presented here investigates the efficacy of MyLab Statistics, an adaptive online tutoring and assessment system for higher education statistics instruction. MyLab Statistics provides a suite of multimedia instructional materials; interactive tutorials and demonstrations; example data sets; Pearson’s StatCrunch software for performing statistical analyses; an adaptive student study plan feature; and an assessment system for homework, quizzing, and testing. The assessment system includes pre-written items of various formats, including multiple choice, fill-in-the-blank, and free response items requiring students to perform statistical calculations. Some calculation problems algorithmically generate new values each time a student attempts them, allowing students to repeat problems to practice their calculation and problem-solving abilities. Assessments provide students with immediate feedback on their performance on a question-by-question basis. Additionally, assessments include multiple Learning Aids that, when utilized, provide step-by-step guidance for solving individual problems.

The present study specifically investigates the relationship between students’ use of the MyLab Statistics assessment system and performance on an independently administered written final exam. The data comes from an undergraduate Quantitative Methods course taught in a Business and Economics department at a public university in the Netherlands. MyLab Statistics assessments were used for low-stakes homework and quizzing in the course. For each homework problem, students had access to Learning Aids and were allowed an unlimited number of attempts. In contrast, quiz problems allowed only one attempt and did not allow use of Learning Aids. It is worth noting that while MyLab Statistics has an adaptive assessment feature that assigns individual students with questions that target content and skills that they have yet to master, this adaptive feature was not utilized in this course.

This correlational study seeks to uncover relationships between students’ completion of, and performance on, MyLab Statistics assessment items, and scores on a final exam. MyLab Statistics assessments incorporate numerous learning science principles to help students develop the skills to explore and analyze data, interpret statistics and graphs, and responsibly use and consume statistics in their everyday lives. Several learning science principles specifically relevant in the context of this study will now be reviewed.

Practice testing and repetition

Acquiring new knowledge and skills benefits from engaging in practice testing and problem-solving. For acquisition of declarative knowledge (e.g., the definition of a key concept), being tested on information has been shown to be more effective for learning than rereading that same information. This “testing effect” has been demonstrated in numerous laboratory and educational settings (Roediger & Karpicke, 2006). Additionally, in subjects such as statistics, students are meant to develop problem-solving skills, and solving practice problems is required to fully develop such skills (VanLehn, 1996). MyLab Statistics supports the development of both declarative knowledge and problem-solving skills by providing a variety of question types that target each. In the homework assignments, students could repeat problems an unlimited number of times. Research indicates that such repeated practice further improves learning (Greene, 2008; VanLehn, 1996). The well-established benefit of repeated practice suggests that students who attempt and repeat more assessment items should learn more than students who attempt and repeat fewer items. This is particularly relevant for the homework problems in this study, which allowed unlimited repetitions.

Figure 1: an example MyLab Statistics homework problem that provides students with practice calculating probabilities from frequencies. If a student repeats this problem, new frequencies of green and yellow peas are generated

Worked examples and scaffolding

It is common in education for teachers to demonstrate to students how to solve a particular type of problem, and many instructional materials provide similar guidance. Research shows that providing such demonstrations, or “worked examples”, improves learning, particularly when presented alongside practice problems that students complete on their own (Atkinson, Derry, Renkl, & Wortham, 2000). MyLab Statistics’ assessments provide worked examples through the Learning Aids feature called “View an Example”, which students in the current study could invoke when attempting any homework problem. In addition, students could utilize a Learning Aid called “Help Me Solve This”, which provides the student with a series of questions that guide the student through enacting each step of the problem-solving process on their own. This feature is a form of instructional “scaffolding”, in which a complex problem or task is provided with additional structure to make it more accessible to the student. Such scaffolding techniques are known to enhance learning (Reiser, 2004). Unfortunately, reliable data on students’ use of Learning Aids was not available for this study.

Figure 2: the first screen of the “Help Me Solve This” scaffolding feature for the problem shown in Figure 1. The second panel of text explains in general terms the approach that is required to solve this problem. The student can click ‘Close’ to return to the problem or can click ‘Continue’ to receive step-by-step guidance for using the provided equation to solve the problem

Immediate feedback

MyLab Statistics provides students with immediate feedback on each assessment item. This feedback indicates correctness of the response and, for incorrect responses, provides additional information to help students identify and correct errors. The efficacy of such immediate, informative feedback is supported by research. Studies of computer-based feedback systems have shown that feedback that explains or otherwise elaborates on the correctness of a response is more effective than feedback that only indicates correctness (Van der Kleij, Feskens, & Eggen, 2015). While research on feedback timing (i.e., immediate vs. delayed) has produced a wide range of results, there is research suggesting that immediate feedback improves learning of procedural skills in disciplines like mathematics and programming more so than presenting feedback at a delay (Shute, 2008).

Figure 3: the incorrect response feedback for the question from Figure 1. This feedback explains to students the strategy they should use in solving the problem. Students receive this feedback immediately upon entering an incorrect response

The present study

This study investigated the relationship between students’ use of the MyLab Statistics assessment system and performance on an independently administered written final exam. The first goal was to examine the relationship between the amount of homework activity and performance on the final exam. The second goal was to examine the relationship between performance in MyLab Statistics, in the form of formative and summative assessments, and performance on the final exam.

We examined the following research questions:

  1. What is the relationship between student homework activity, measured as the percentage of homework questions assigned that the student attempted, and performance on the final exam?

  2. What is the relationship between performance on formative assessments (homework assignments in MyLab Statistics) and performance on the final exam?

  3. What is the relationship between performance on summative assessments (tests in MyLab Statistics) and performance on the final exam?

These relationships were examined while controlling for each student’s gender, level of prior math education and math ability.

In order to quantify each relationship as reliably and precisely as possible, given the available data, our statistical analyses adjusted for factors extraneous to MyLab Statistics that may affect student performance. To this end, we used information on student gender, math education and math ability, which had been collected for each student. These variables could reasonably be expected to be associated with a student’s performance in the course. For instance, students with a high level of math education may be expected to perform better, on average, than students with an intermediate level of math education, since knowledge about math is applicable in part to the domain of statistics. Our statistical analyses estimate the relationship between each of these three so-called control measures and performance on the final exam, and take those relationships into account when estimating the relationship between the main variable of interest for each research question (e.g., the percentage of homework questions attempted, as a measure of activity in MyLab Statistics) and performance on the final exam.


Method

This study uses a correlational design and examines observational data to evaluate the association between academic achievement in a combined math and statistics course, and a) usage of MyLab Statistics; b) performance in MyLab Statistics. The study takes into account students’ gender, their level of prior math education and their math ability. Separate analyses examine the relationship between performance on the statistics portion of the course’s final exam, and a) homework activity; b) performance on homework assignments; c) performance on quizzes.

Participants

Data from a total of 1,085 students in the School of Business and Economics at Maastricht University, in the Netherlands, who were enrolled in the first-year course Quantitative Methods in the Fall semester of 2015 were analyzed in this study. This group of students was highly diverse and included 48 different nationalities. Only 24% of the students were Dutch; the largest international groups were German (46%) and Belgian (12%) students.

Course information

Quantitative Methods I is a first-year course that teaches students essential skills in mathematics and statistics. The course uses the book Business Statistics (3rd edition) by Sharpe, De Veaux and Velleman (2015) and covers chapters 1 through 12, with the exception of chapter 4. The course took place in the Fall of 2015 and lasted 8 weeks. It is the first course that students of the School of Business and Economics take in their first year.

The course uses a form of blended learning consisting of an overview lecture at the start of the week; problem-based learning sessions with a content expert tutor (in small groups of 14 students; 19 different tutors teach 3 to 5 sections each); a lab session; and a recap lecture at the end of the week. Use of MyLab Statistics is optional in principle, but typically adopted by more than 99% of the students.

Throughout the course, 7 homework assignments and 3 quizzes are assigned within MyLab Statistics. Items in homework assignments and quizzes are drawn from the same item pools. Each item consists of several questions. None of the items consist exclusively of multiple choice questions. The course concludes with a written final exam consisting of questions that are focused on testing students’ conceptual understanding and application of knowledge. The questions are modeled on questions in AP Statistics exams. The exam does not include any questions from the homework assignments or quizzes.

The statistics part of the final course grade consists of two portions:

  1. 1/6 score on homework assignments and quizzes
    A complex formula is used to determine this score. The average quiz score is weighted more strongly than the average homework score. A perfect quiz score leads to a perfect score on this portion of the final grade. The weight of the average homework score increases as the average quiz score decreases, but by itself the homework score cannot account for more than 50% of this (1/6) portion of the final grade.

  2. 5/6 score on the final exam
    The course’s final exam covers math as well as statistics. For the analyses in this report we used students’ scores on the statistics part of the final exam to measure their achievement in learning statistics. For ease of exposition, we will refer to this score as the score on the final exam throughout this report.

Data collection

Data was collected from three sources: a) an interview with the course instructor; b) final exam scores and student surveys; c) MyLab Statistics activity and performance data retrieved from Pearson’s MyLab database.

Instructor interview

In the summer of 2017, an interview with the instructor was conducted to gather information about the course, the course structure, the type of instruction used, how MyLab Statistics was implemented in the course, and the instructor’s prior experience with MyLab Statistics. Also obtained was information about the data that the instructor had collected at the time of the course and shared as part of the current research, including the data structure and a data dictionary.

Final exam scores and student variables

The instructor provided each student’s score on the statistics part of the final exam. He also provided information on each student’s gender, their prior math education, and their score on a math pretest; each of these variables may plausibly affect student performance in the course. This information was acquired at the beginning of the course, when students fill out a series of mandatory surveys designed by the course instructor; this is an integral part of taking the course. For the course under study, students were required to have achieved an intermediate or high level of prior math education taken as preparation for a four-year university education. ‘Intermediate’ corresponds to a level of math education required for college-level social sciences studies; ‘high’ corresponds to a level required for natural sciences, technology, engineering and math.

MyLab Statistics platform data

MyLab Statistics stores an extensive amount of user data in Pearson databases. Data for the course under study was extracted for each student who took part in the course. This course used two types of assignments: 7 homework assignments and 3 quizzes. Data for these homework assignments and quizzes was extracted for each student, specifically:

  1. whether or not the student attempted the assignment
  2. their assignment grade (the percentage of questions answered correctly, which was 0 if the assignment was not attempted)
  3. the total number of attempts on each question in each homework assignment

Quizzes were administered every two weeks. Students took the quizzes at a pre-specified time in a computer room, together with other students in their tutor group and in the presence of a proctor. Multiple attempts on quizzes or quiz questions were not allowed.

Homework assignments were allocated every week, and students performed them individually, at their own pace. Students were allowed to attempt homework questions as often as they liked. For instance, if a student did not get the maximum score on a question, that question could be repeated by going back to the question and opting for a Similar Question. After they reattempted the question, students had the option to replace the score for that question with their current score, which they very likely did whenever their score had improved.

During the course, the instructor used the gradebook to download each student’s score on each of the homework assignments at the time of the final exam. Importantly, a student can improve their homework score at any point in time, but Pearson platform data stores only their highest homework score. Inspection of homework activity time stamps in the Pearson database indicated that some students’ homework scores had been positively affected by activity after the final course exam. There was a second-chance exam for students who failed the first exam, and some of those students revisited homework assignments to prepare for that exam. We therefore relied on homework scores retrieved by the instructor at the time of the final exam. These scores correspond to the highest homework score achieved, for each homework assignment, at the time of the final exam.


Results

This study examined the relationship between student achievement on the final course exam, and student use of, and performance in, MyLab Statistics.

Below, we first present descriptive statistics of course variables, control variables at the student level, and MyLab Statistics usage and performance measures. We then present the results of hierarchical linear models (HLMs) that examine the relationship between usage or performance measures and student performance on the course’s final exam, while controlling for students’ gender, level of prior math education and math ability.

Descriptive statistics

Missing data

Data for all students who were enrolled in the course for the first time were considered for analysis. We excluded from analysis only those students for whom one or more variables that were used in this study were not available. Data from 957 students remained. (For details on data exclusion, see Appendix A.)

Final exam score

This course taught concepts and skills in mathematics and statistics. The final exam was therefore divided into a section on mathematics and a section on statistics. The statistics section of the final exam consisted of 20 questions.

Figure 4 shows the distribution of final exam scores. The average score on the final exam was 66.4%. A score of 55% or higher was needed to pass the exam; the passing rate was therefore 77%.

(Note that throughout this report, a diamond shape along the axis of a statistical graph marks the mean value of the variable plotted along that axis.)

Figure 4: distribution of final exam scores

Control variables

Our statistical analyses take into account some preexisting differences between students and estimate the relationship between those variables and students’ score on the final exam. The use of such “control variables” typically results in improved sensitivity of the analysis to the variables of interest (e.g., in one of our analyses, performance on quizzes) and in more reliable and more precise quantitative estimates of the relationship between those variables and the dependent variable (here, score on the final exam).

Level of prior math education and ability in mathematics

Table 1 reports students’ level of prior math education. About a third of the students (36%) had a high level of education in mathematics. On average, those students achieved higher scores on the final exam than students with an intermediate level of math education.

Level of prior math education   Percentage of students   Mean final exam score (%)
high   36   70.9
intermediate   64   63.8

Table 1: percentage of students and mean score on the final exam as a function of level of prior math education

Students’ ability in mathematics was assessed through a math pretest that was designed by the instructor, which was administered at the start of the course. This test consisted of 14 questions: five on algebraic skills, five on logarithms, and four on equations.

Figure 5 shows the distribution of math pretest scores. The distribution is broad and suggests substantial variation in students’ math abilities, as measured by the test. The average score on the math pretest was 8.3 points (i.e., 59% correct).

Figure 5: distribution of math pretest scores

Since statistics and mathematics are related, it is reasonable to assume that both prior level of education in mathematics and degree of math ability would enable a student to perform better in the course.

Figure 6 presents the score on the final exam as a function of the score on the math pretest. There appears to be a positive relationship between math ability and course achievement: the higher a student’s score on the math pretest, the better they performed on the final exam.

(Note that the size of each dot represents the number of students with a particular combination of math pretest and final exam score.)

Figure 6: final exam score vs. math pretest score

Gender

Table 2 presents course statistics on gender. The majority of students were men. The average score on the final exam was about the same for men and women.

Gender   Percentage of students   Mean final exam score (%)
Female   45   67.1
Male   55   65.8

Table 2: percentage of students and mean score on the final exam as a function of gender

Homework usage

Throughout the course, students were given 7 homework assignments, which together consisted of a total of 176 questions. For each student, we computed the percentage of these 176 questions that they attempted, and the percentage of the questions they attempted that they repeated at least once.
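The sketch below illustrates how these two usage measures can be derived from question-level attempt counts. It is a minimal illustration only: the data frame question_attempts and its columns (student_id, n_attempts) are hypothetical names, not the actual structure of Pearson’s platform data, and this is not necessarily the code used for this report.

    # Hypothetical input: one row per student x homework question, with the total
    # number of attempts on that question (0 if the question was never attempted).
    library(dplyr)

    total_questions <- 176  # 7 homework assignments, 176 questions in total

    homework_usage <- question_attempts %>%
      group_by(student_id) %>%
      summarise(
        n_attempted   = sum(n_attempts >= 1),
        pct_attempted = 100 * n_attempted / total_questions,
        # share of attempted questions that were attempted more than once
        pct_repeated  = 100 * sum(n_attempts >= 2) / n_attempted
      )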

Due to the structure of Pearson’s platform data for MyLab products, and the implementation of the homework assignments in the course under study, question attempt data for some students includes a small amount of homework activity following the final exam. Those students are included in the analyses below. Importantly, a subset analysis excluding those students showed the same pattern of results; see Appendix B.

Questions attempted

Figure 7 presents the distribution of the percentage of homework questions attempted. This distribution is strongly skewed: most students attempted a large percentage of the 176 homework questions. The median percentage of questions attempted was 95%, and 24% of the students attempted all questions. On average, students attempted 77% of the questions.

Figure 7: distribution of the percentage of homework questions attempted

Figure 8 presents, for each student, the score on the final exam as a function of the percentage of homework questions attempted. The data suggests a positive and close to linear relationship: as the percentage of homework questions attempted increases, so does the score on the final exam.

Figure 8: final exam score vs. percentage of homework questions attempted

Question repetition

Figure 9 presents the distribution of the percentage of attempted questions that a student repeated at least once. Only a small percentage of students (4%) repeated none of the questions they attempted; the vast majority repeated at least some, and on average students repeated 34% of the questions they attempted.

Figure 9: distribution of the percentage of attempted homework questions repeated

This data suggests that students typically used the opportunity to reattempt questions a fair amount. However, a priori, there is no straightforward relationship between degree of question repetition and learning. On the one hand, each repetition reflects a student’s effort to improve their performance, and one might expect this to be reflected in a positive relationship between repetition and final exam score. On the other hand, a student who repeats a large percentage of the questions they attempted most likely provided an incorrect answer on many first attempts, and was probably poorly prepared for the homework. From this perspective, one might expect this to be reflected in a negative relationship between repetition and final exam score.

Unfortunately, the Pearson platform data that were available for the students in the current study did not provide sufficient information to examine if homework score improvement activity is associated with better performance on the final exam. For instance, if improving one’s homework score promotes learning, we would predict a positive relationship between the degree to which a student improved their homework score (expressed as the percentage of those questions they answered incorrectly that they then repeated and answered correctly) and their performance on the final exam. Unfortunately, the platform database provided data aggregated at the question level, but not at the attempt level (that is, we did not have data specific to each time a question was attempted).

Homework performance

Students were allocated one homework assignment each week that covered the material associated with that week of class. The average score across all of these homework assignments counted as partial credit for their final course grade.

In the course under study, homework assignments are used as formative assessment, to encourage students to learn through testing themselves. Students were therefore allowed to attempt homework questions as many times as they wanted, to try to improve their score on homework assignments.

Pearson’s platform data stores homework performance as the highest score obtained for each homework assignment. However, this score can be positively affected by homework activity following the final exam. Time stamp information in the platform data suggested that some students engaged in such activity. For this reason, we analyzed homework scores that the course instructor retrieved from the MyLab Statistics gradebook at the time of the final exam.

Descriptive statistics

Homework participation

Figure 10 presents the percentage of students who attempted each homework assignment. This percentage decreased gradually over time, from nearly all students (97%) for the first homework assignment to 76% of students for the last homework assignment.

Figure 10: percentage of students who attempted each homework assignment

Figure 11 presents the distribution of the number of homework assignments attempted. Most students (72%) attempted all 7 homework assignments, but a substantial number (28%) attempted six or fewer. On average, students completed 6.2 homework assignments.

Figure 11: distribution of number of homework assignments attempted

Average homework score

Figure 12 presents the distribution of average homework scores. The average homework score, computed for each student across all 7 homework assignments, was 74%, and half the students achieved a score of 89% or higher. This high average homework score is due, at least in part, to the fact that students were allowed to repeat homework questions an unlimited number of times, and save their score for the best attempt.

Figure 12: distribution of average homework scores

Score on final exam as a function of average homework score

Figure 13 presents the score on the final exam as a function of each student’s average homework score. The data suggests a positive and close to linear relationship: as the average score on the homework assignments increases, so does the score on the final exam.

Figure 13: final exam score vs. average homework score

Quiz performance

Quiz participation

Figures 14 and 15 show that each quiz was attempted by nearly all of the students, and that the vast majority of students (94.8%) attempted all three quizzes.

Figure 14: percentage of students who attempted each quiz

Figure 15: distribution of number of quizzes attempted

Average quiz score

Figure 16 shows the average score for each quiz (66%, 56%, and 57%, respectively). Quiz scores were substantially lower than the average homework score (74%). Importantly, this difference in performance is likely due, in part, to the fact that students were only allowed one attempt on each quiz, whereas they were allowed multiple attempts on homework assignments.

Figure 16: average quiz scores

Average quiz score across all three quizzes

The average quiz score across all students was 60%. The distribution of the quiz score averaged across all three assigned quizzes (see Figure 17) reveals that there was a large degree of variation in the average quiz score across students. The distribution resembles the distribution of final exam scores (see Figure 4) much more than it resembles the distribution of average homework scores (see Figure 12). The latter shows markedly less variation and is strongly skewed to high scores in the 95–100% range. Taken together, these patterns suggest that performance on quizzes may be a more discriminative measure of learning than performance on homework assignments.

Figure 17: distribution of average quiz scores

Score on final exam as a function of average quiz score

Figure 18 presents each student’s final exam score as a function of their average quiz score. This relationship appears close to linear in the 35–100% range. A different relationship is observed for the relatively small percentage of students (4%) with average quiz scores in the 0–35% range.

Figure 18: final exam score vs. average quiz score

Score on final exam as a function of average quiz score, excluding students who did not do all quizzes

Some students’ low average quiz score may reflect that they did not take all the quizzes. In that case, they got a zero score for each quiz they did not attempt, which had a strong negative effect on their average quiz score.
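As a small illustration of this scoring convention, the sketch below computes each student’s average quiz score with unattempted quizzes counted as 0, which is why missing a quiz sharply lowers the average. The data frame quiz_scores and its columns are hypothetical names, not the actual structure of the data used for this report.

    # Hypothetical input: one row per student x quiz; score is NA for quizzes
    # the student did not attempt.
    library(dplyr)

    quiz_summary <- quiz_scores %>%
      group_by(student_id) %>%
      summarise(
        n_quizzes_taken = sum(!is.na(score)),
        mean_quiz       = mean(ifelse(is.na(score), 0, score))  # missed quiz counts as 0
      )

    # Subset used for Figure 19 and Table 6: students who took all three quizzes
    all_three_quizzes <- filter(quiz_summary, n_quizzes_taken == 3)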

Figure 19 shows that when data for 50 students who did not do all quizzes are excluded (5% of the data), the relationship between average quiz score and score on the final exam is closer to linear.

Figure 19: final exam score vs. average quiz score for students who did all quizzes


Statistical analyses

The hierarchical linear models (HLMs) reported below examine the relationship between students’ achievement in the course and MyLab Statistics usage and performance. Students’ scores on the final exam were analyzed as a function of:

  1. activity in homework assignments
  2. performance on homework assignments
  3. performance on quizzes

Each of these regression models controlled for a possible relationship between score on the final exam and:

  • gender
  • level of prior math education
  • score on the math pretest

Across models, we found a statistically significant relationship between level of prior math education and performance on the final exam: students with a high level of math education achieved higher scores on the final exam than students with an intermediate level of math education. We also found a statistically significant relationship between score on the math pretest and score on the final exam: higher scores on the math pretest were associated with higher scores on the final exam.

Model summary tables

Note that:

  1. p-values are computed using the Kenward-Roger approximation
  2. two measures of the model’s goodness of fit are reported: R², the squared correlation between the observed and fitted values, and Ω₀², an alternative measure of goodness of fit for HLMs proposed by Xu (2003)
  3. variance inflation factors are reported for each model; a low VIF (e.g., < 5) indicates that there were no problems with multicollinearity
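As a concrete illustration of these notes, the following R sketch shows one way to obtain each quantity for a fitted model. It assumes a fitted multilevel model fit (see the specification in the next section) and an analysis data frame d; the object and variable names are placeholders, and this is not necessarily the exact code behind the tables in this report.

    library(lmerTest)   # loads lme4; adds denominator-df methods for p-values

    # Note 1: fixed-effect p-values via the Kenward-Roger approximation
    summary(fit, ddf = "Kenward-Roger")

    # Note 2: R-squared as the squared correlation between observed and fitted
    # values, and Xu's (2003) Omega_0^2 relative to an intercept-only model with
    # the same random effect
    r2     <- cor(d$final_exam, fitted(fit))^2
    fit0   <- lmer(final_exam ~ 1 + (1 | tutor), data = d)
    omega2 <- 1 - sigma(fit)^2 / sigma(fit0)^2

    # Note 3: variance inflation factors, e.g. via the performance package
    performance::check_collinearity(fit)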

Relationship between homework activity and achievement on final exam

HLM specification

The multi-level linear regression model, below, predicts students’ score on the stats portion of the final exam as a function of the percentage of homework questions attempted, while controlling for:

  • gender (male / female)
  • level of prior math education (intermediate / high)
  • score on math pretest (0–15)

With a random intercept for:

  • tutor (19 levels)

Note that all continuous predictor variables were centered at the mean. The model’s intercept therefore represents the predicted score for a female student with an intermediate level of prior math education, an average score on the math pretest, and an average percentage of homework questions attempted.
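A minimal sketch of this specification in R (lme4/lmerTest syntax) is shown below. The variable names (stats_exam, math_edu_high, math_pretest, pct_attempted, tutor) are placeholders rather than the actual column names, and this is not necessarily the exact code used to produce Table 3. The homework-performance and quiz-performance models in the following sections have the same structure, with the centered percentage of questions attempted replaced by the centered mean homework or mean quiz score.

    library(lmerTest)

    # center the continuous predictors at their means
    d$math_pretest_c  <- d$math_pretest  - mean(d$math_pretest)
    d$pct_attempted_c <- d$pct_attempted - mean(d$pct_attempted)

    fit <- lmer(stats_exam ~ gender + math_edu_high + math_pretest_c + pct_attempted_c +
                  (1 | tutor),   # random intercept per tutor (19 levels)
                data = d)
    summary(fit, ddf = "Kenward-Roger")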

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   64.562 0.947 <.001
Gender (male)   -0.165 1.008 .871
Level of prior math education (high)   5.271 1.098 <.001
Math pretest score   0.438 0.168 .012
Percentage questions attempted   0.228 0.016 <.001
Random Parts
σ²   233.502
τ₀₀ (tutor)   4.281
N tutors   19
Observations   957
R² / Ω₀²   .243 / .243

Maximum variance inflation factor: 1.13

Table 3: summary of HLM analysis of relationship between score on final exam and measure of MyLab Statistics homework activity

Model interpretation

For ease of interpretation and effective communication to a broad audience, throughout this report we present effect sizes for a 10% increase of the independent variable (here, the percentage of homework questions attempted). Note that a 10% increase can easily be rescaled to consider effect sizes for increases of 5%, 20%, 30%, etc.
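For example, the coefficient of 0.228 in Table 3 is expressed per single percentage point of homework questions attempted; rescaling gives 10 × 0.228 ≈ 2.3 points of final exam score for a 10% increase, and roughly 4.6 points for a 20% increase.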

The HLM examining the association between homework activity and score on the final exam, while controlling for gender, prior math education and score on a math pretest, shows that:

  • there is a positive association between the percentage of questions attempted, and score on the final exam. An increase of 10% in questions attempted is associated with an increase of 2.3% in final exam score.

Relationship between homework performance and achievement on final exam

HLM specification

The multi-level linear regression model, below, predicts students’ score on the stats portion of their final exam as a function of average homework score, while controlling for:

  • gender (male / female)
  • level of prior math education (intermediate / high)
  • score on math pretest (0–15)

With a random intercept for:

  • tutor (19 levels)

Note that all continuous predictor variables were centered at the mean. The model’s intercept therefore represents the predicted score for a female student with an intermediate level of prior math education, an average score on the math pretest, and an average homework score.

Summary table

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   64.482 0.946 <.001
Gender (male)   -0.016 1.004 .988
Level of prior math education (high)   5.264 1.093 <.001
Math pretest score   0.381 0.167 .027
Mean homework score   0.245 0.017 <.001
Random Parts
σ²   231.437
τ₀₀ (tutor)   4.351
N tutors   19
Observations   957
R² / Ω₀²   .250 / .250

Maximum variance inflation factor: 1.13

Table 4: summary of HLM analysis of relationship between score on final exam and performance on homework assignments

Model interpretation

The HLM examining the association between homework performance and score on the final exam, while controlling for gender, prior math education and score on a math pretest, shows that:

  • there is a positive association between the average homework score and score on the final exam. An increase of 10% in average homework score is associated with an increase of 2.5% in final exam score.

Relationship between quiz performance and achievement on final exam

HLM specification

The multi-level linear regression model, below, predicts students’ score on the stats portion of their final exam as a function of their mean quiz score (out of 3 quizzes), while controlling for:

  • gender (male / female)
  • level of prior math education (intermediate / high)
  • score on math pretest (0–15)

With a random intercept for:

  • tutor (19 levels)

Note that this model includes students who did not do all the quizzes. All continuous predictor variables were centered at the mean. The model’s intercept therefore represents the predicted score for a female student with an intermediate level of prior math education, an average score on the math pretest, and an average quiz score.

Summary table

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   65.422 0.790 <.001
Gender (male)   -0.905 0.850 .291
Level of prior math education (high)   4.039 0.934 <.001
Math pretest score   0.325 0.142 .026
Mean quiz score   0.646 0.025 <.001
Random Parts
σ²   168.180
τ₀₀ (tutor)   2.695
N tutors   19
Observations   957
R² / Ω₀²   .454 / .454

Maximum variance inflation factor: 1.13

Table 5: summary of HLM analysis of relationship between score on final exam and performance on quizzes

Model interpretation

The HLM examining the association between quiz performance and score on the final exam, while controlling for gender, prior math education and score on a math pretest, shows that:

  • there is a positive association between the average quiz score and score on the final exam. An increase of 10% in average quiz score is associated with an increase of 6.5% in final exam score.

Excluding students who did not do all quizzes

The same analysis was performed on data from just those students who did all 3 quizzes.

Summary table, students who did all quizzes

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   66.559 0.735 <.001
Gender (male)   -1.290 0.825 .122
Level of prior math education (high)   3.384 0.908 <.001
Math pretest score   0.314 0.139 .027
Mean quiz score   0.763 0.028 <.001
Random Parts
σ²   151.109
τ₀₀ (tutor)   1.639
N tutors   19
Observations   907
R² / Ω₀²   .488 / .488

Maximum variance inflation factor: 1.14

Table 6: summary of HLM analysis of relationship between score on final exam and performance on quizzes, for students who did all quizzes

Model interpretation, students who did all quizzes

The HLM examining the association between quiz performance and score on the final exam, while controlling for gender, prior math education and score on a math pretest, shows that:

  • there is a positive association between the average quiz score and score on the final exam for students who did all quizzes. An increase of 10% in average quiz score is associated with an increase of 7.6% in final exam score.

Discussion

This study examined the association between academic achievement in a combined math and statistics course, and performance in, and usage of, MyLab Statistics. Participants in this study were students enrolled in the course Quantitative Methods in the School of Business and Economics at Maastricht University in the Fall of 2015. We examined the relationships between a) homework activity and final exam score; b) mean homework scores and final exam score; c) mean quiz scores and final exam scores.

Key findings

The key findings of our analyses, which adjusted for gender, level of prior math education and ability in mathematics, are:

  1. Students who attempted a larger percentage of homework questions tended to achieve higher scores on the final exam. An increase of 10% in percentage of homework questions attempted is associated with an increase of 2.3% in final exam score.

  2. Students with higher average homework scores tended to achieve higher scores on the final exam. An increase of 10% in average homework score is associated with an increase of 2.5% in final exam score.

  3. Students with higher average quiz scores tended to achieve higher scores on the final exam. An increase of 10% in average quiz score is associated with an increase of 6.5% in final exam score. This association is stronger for just those students who did all three quizzes, for whom an increase of 10% in average quiz score is associated with an increase of 7.6% in final exam score.

Homework and quiz performance as measures of learning

Although higher scores on homework assignments and higher scores on quizzes were associated with higher final exam scores, this relationship was substantially stronger for quiz scores than that for homework scores. An increase of 10% in average quiz score was associated with a 6.5% increase in final exam score, while an increase of 10% in average homework score was associated with a 2.5% increase in final exam score. Quiz performance also explained almost twice as much of the variance in final exam scores as homework performance did.

The difference in degree of positive association with final exam scores may suggest that quiz scores are better measures of learning than homework scores are. This is not surprising, since homework scores are likely less valid measures of learning than quiz scores are. The format of homework assignments is such that students are allowed to attempt each question as often as they like, and each further attempt has the potential to improve their homework score (which corresponds to the highest score obtained). This typically results in high levels of performance, but not necessarily in equally high levels of learning, depending on what type of homework activities and strategies students used to improve their performance.

Limitations

Because this study used a correlational design, it is not possible to determine whether use of, and performance in, MyLab Statistics promoted learning and caused changes in performance on the final exam. A more rigorous design would compare the performance of students using MyLab Statistics to that of students not using MyLab Statistics. Students would either be randomly assigned to conditions or be matched to students in the other group on important background characteristics, such as prior achievement and socioeconomic status. Controlling for socioeconomic status would also have strengthened the current study, but unfortunately this information was not available.

To avoid the issue of multicollinearity introduced by including in the same model multiple independent variables that are strongly related to one another (i.e., homework activity, homework performance and quiz performance), we analyzed the relationship between each of these variables and performance on the final exam separately. Importantly, given the overlap between these independent variables, their relationships with performance on the final exam should not be interpreted as independent from one another and thus additive in some way.

An important limitation of our findings is that they may not (fully) generalize, because they were obtained for a specific implementation of MyLab Statistics. Research to replicate and extend this study with a higher education institution in North America began in the Fall of 2017.

Another important limitation is the quality of the Pearson platform data on student activity in MyLab Statistics that was available for the course under study. The data did not include information about individual question attempts. Moreover, reliable platform data on the use of Learning Aids was not available. Such data could provide important insights into the effectiveness (and, possibly, ineffectiveness) of particular student activity, such as question-answering strategies, while using the product.

Future research and implications for product implementation

Our finding that performance on homework assignments and performance on quizzes are both positively associated with course achievement, as measured by final exam score, is not surprising. However, the results on the association between homework activity and final exam score merit further research.

Taken together, these findings indicate that more research is needed to better understand the strategies that students follow when working on homework assignments, and how those strategies promote or hinder learning. Knowledge obtained from such research could be used to incorporate measures into the design of homework assignments in MyLab Statistics, and MyLab products more generally, that encourage effective learning strategies and discourage ineffective ones. For instance, students who use Learning Aids followed by rapid reattempts of homework questions to boost their score could receive a message informing them that “spacing” repeated attempts on the same question leads to the biggest gains in learning (Bjork & Bjork, 2011).

Future research should focus on different types of MyLab Statistics usage patterns and their relationship with student achievement and learning. However, deeper insights from such research are contingent on the availability, accessibility, and quality (e.g., degree of granularity) of platform data. The more detailed, reliable and accessible such information is, the greater the potential for efficacy research to uncover important and novel insights that will contribute to the improvement of MyLab Statistics and its implementation, and to gains in learner achievement.


Appendix A. Data exclusion

The original SPSS data file provided by the course instructor contained data for 1085 students. The table below specifies different reasons why data for some of those students was excluded from the descriptive and inferential statistics presented in this report. For each of these reasons, the table lists the total number of students affected. Exclusions were performed in the order listed in the table.

Table A1: excluded students

Reason for exclusion Number of cases excluded
Had previously taken the course 66
Did not have a score on the final exam 34
Information about prior math education was missing 2
Did not have a score on the math pretest 22
Did not do any homework assignments or quizzes 1
Gender unspecified 1

The following cases were excluded after the instructor’s data was merged with Pearson’s platform data:

Reason for exclusion Number of cases excluded
Number of homework attempts did not match homework score 2

Appendix B. Sub-analysis of homework usage

The analysis on homework usage required information about question attempts to be retrieved from Pearson’s platform data. The database stores information at the question level, but only in the form of the total number of attempts. That is, for each student and each question within each homework assignment, the database contains information about the total number of attempts made for that question.

However, because students had access to the homework assignments throughout the semester, the total number of attempts for a question can include activity that took place following the course’s final exam. The main analysis on homework activity includes data on homework attempts from all students, including those whose attempt data has been influenced to some extent by homework activity following the final exam.

To ensure that the pattern of results found in our statistical analysis of homework activity was not driven by post-exam activity, we also ran the homework activity analysis on a subset of the data that excluded the 5% of students who engaged in any amount of activity on homework assignments following the final exam. The resulting model yields the same pattern of statistical significance and highly similar coefficient estimates as the original model. The pattern of results in our main analysis is therefore not due to the inclusion of post-exam activity that a small percentage of the students engaged in. (For ease of comparison, the main analysis is reproduced below as Table B1.)

HLM summary tables

Table B1: main analysis, all students

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   64.562 0.947 <.001
Gender (male)   -0.165 1.008 .871
Level of prior math education (high)   5.271 1.098 <.001
Math pretest score   0.438 0.168 .012
Percentage questions attempted   0.228 0.016 <.001
Random Parts
σ²   233.502
τ₀₀ (tutor)   4.281
N tutors   19
Observations   957
R² / Ω₀²   .243 / .243

Maximum variance inflation factor: 1.13

Table B2: subset analysis, excluding students with post-exam homework activity

    Stats final exam score
    B SE p
Fixed Parts
(Intercept)   65.206 0.986 <.001
Gender (male)   -0.351 1.029 .734
Level of prior math education (high)   5.030 1.116 <.001
Math pretest score   0.401 0.170 .022
Percentage questions attempted   0.227 0.016 <.001
Random Parts
σ²   230.941
τ₀₀ (tutor)   5.160
N tutors   19
Observations   909
R² / Ω₀²   .244 / .244

Maximum variance inflation factor: 1.13


Appendix C. Robustness of results

Throughout this report, we excluded student data only if there were missing values or if there was clearly something wrong with the data (see Appendix A). The aim of this “inclusive” analysis strategy was to focus on generalizability of our findings. However, to ensure that our findings were not driven by data from a small number of students, we verified that the pattern of statistical significance and the values for coefficient estimates obtained were robust.

For each of the models presented in this paper, we computed Cook’s distance for each student and excluded the 2% of students with the highest values, thus removing data from those students with the highest influence on the model’s coefficient estimates. (Note that this percentile was chosen based on the visual identification of outliers in histograms of Cook’s d values across all models.) We then fitted the same model on just the remaining data.
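One possible way to carry out this procedure in R, for instance with the influence.ME package, is sketched below. It is illustrative only: the fitted model fit and data frame d are placeholders, and this is not necessarily the code used for this report.

    library(lme4)
    library(influence.ME)

    infl   <- influence(fit, obs = TRUE)           # leave-one-observation-out influence
    cd     <- as.numeric(cooks.distance(infl))     # Cook's distance per student
    cutoff <- quantile(cd, probs = 0.98)           # cut at the 98th percentile
    d_sub  <- d[cd <= cutoff, ]                    # drop the 2% most influential students

    # refit the same model on the subset, without re-centering the predictors
    fit_sub <- lmer(formula(fit), data = d_sub)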

Across models, coefficient estimates for the fixed effect of interest in the subset models were found to be very close to those in the original models.

Note:

  1. In the summary tables below, coefficient estimates are rounded to two decimals to facilitate comparisons across models.

  2. Variables that were centered in the original model were not re-centered in the subset model. This was done so that coefficient estimates for both models apply to the same (imaginary) student with average values for those variables in the original data.

Table C1: homework usage

    Main analysis   Subset analysis
    B SE p   B SE p
Fixed Parts
(Intercept)   64.56 0.95 <.001   64.40 0.83 <.001
Gender (male)   -0.16 1.01 .871   0.20 0.96 .837
Level of prior math education (high)   5.27 1.10 <.001   6.14 1.05 <.001
Math pretest score   0.44 0.17 .012   0.45 0.16 .007
Percentage questions attempted   0.23 0.02 <.001   0.24 0.02 <.001
Random Parts
σ²   233.502   208.344
τ₀₀ (tutor)   4.281   1.687
N (tutors)   19   19
Observations   957   937
R² / Ω₀²   .243 / .243   .280 / .280

Maximum variance inflation factor in subset model: 1.14

Table C2: homework performance

    Main analysis   Subset analysis
    B SE p   B SE p
Fixed Parts
(Intercept)   64.48 0.95 <.001   64.29 0.82 <.001
Gender (male)   -0.02 1.00 .988   0.33 0.95 .732
Level of prior math education (high)   5.26 1.09 <.001   6.21 1.05 <.001
Math pretest score   0.38 0.17 .027   0.34 0.16 .039
Mean homework score   0.25 0.02 <.001   0.26 0.02 <.001
Random Parts
σ²   231.437   206.780
τ₀₀ (tutor)   4.351   1.487
N (tutors)   19   19
Observations   957   937
R² / Ω₀²   .250 / .250   .282 / .282

Maximum variance inflation factor in subset model: 1.15

Table C3: quiz performance

    Main analysis   Subset analysis
    B SE p   B SE p
Fixed Parts
(Intercept)   65.42 0.79 <.001   65.21 0.70 <.001
Gender (male)   -0.91 0.85 .291   -0.98 0.80 .223
Level of prior math education (high)   4.04 0.93 <.001   4.28 0.88 <.001
Math pretest score   0.32 0.14 .026   0.29 0.13 .034
Mean quiz score   0.65 0.03 <.001   0.71 0.02 <.001
Random Parts
σ²   168.180   145.402
τ₀₀ (tutor)   2.695   1.268
N (tutors)   19   19
Observations   957   937
R² / Ω₀²   .454 / .454   .511 / .511

Maximum variance inflation factor in subset model: 1.14


References

Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181–214.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). New York: Worth.

Greene, R. L. (2008). Repetition and spacing effects. In J. H. Byrne (Ed.), Learning and memory: A comprehensive reference, Vol. 2 (pp. 66–78). Oxford, UK: Academic Press.

Reiser, B. J. (2004). Scaffolding complex learning: The mechanisms of structuring and problematizing student work. The Journal of the Learning Sciences, 13(3), 273–304.

Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181–210.

Sharpe, N. D., De Veaux, R. D., & Velleman, P. F. (2015). Business Statistics (3rd ed.). Boston, MA: Pearson Education.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

Van der Kleij, F. M., Feskens, R. C., & Eggen, T. J. (2015). Effects of feedback in a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Educational Research, 85(4), 475–511.

VanLehn, K. (1996). Cognitive skill acquisition. Annual Review of Psychology, 47(1), 513–539.

Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527–3541.


R session info

devtools::session_info()
##  setting  value                       
##  version  R version 3.4.3 (2017-11-30)
##  system   x86_64, darwin15.6.0        
##  ui       X11                         
##  language (EN)                        
##  collate  en_US.UTF-8                 
##  tz       America/New_York            
##  date     2018-04-03                  
## 
##  package      * version   date       source        
##  abind          1.4-5     2016-07-21 CRAN (R 3.4.0)
##  arm            1.9-3     2016-11-27 CRAN (R 3.4.0)
##  assertthat     0.2.0     2017-04-11 CRAN (R 3.4.0)
##  backports      1.1.2     2017-12-13 CRAN (R 3.4.3)
##  base         * 3.4.3     2017-12-07 local         
##  bayesplot      1.4.0     2017-09-12 CRAN (R 3.4.2)
##  bindr          0.1       2016-11-13 CRAN (R 3.4.0)
##  bindrcpp     * 0.2       2017-06-17 CRAN (R 3.4.0)
##  blme           1.0-4     2015-06-14 CRAN (R 3.4.0)
##  broom        * 0.4.3     2017-11-20 CRAN (R 3.4.2)
##  carData        3.0-0     2017-08-28 CRAN (R 3.4.1)
##  cellranger     1.1.0     2016-07-27 CRAN (R 3.4.0)
##  cli            1.0.0     2017-11-05 CRAN (R 3.4.2)
##  coda           0.19-1    2016-12-08 CRAN (R 3.4.0)
##  codetools      0.2-15    2016-10-05 CRAN (R 3.4.3)
##  coin           1.2-2     2017-11-28 CRAN (R 3.4.3)
##  colorspace     1.3-2     2016-12-14 CRAN (R 3.4.0)
##  compiler       3.4.3     2017-12-07 local         
##  crayon         1.3.4     2017-09-16 CRAN (R 3.4.1)
##  datasets     * 3.4.3     2017-12-07 local         
##  devtools       1.13.5    2018-02-18 CRAN (R 3.4.3)
##  digest         0.6.15    2018-01-28 CRAN (R 3.4.3)
##  dplyr        * 0.7.4     2017-09-28 CRAN (R 3.4.2)
##  DT             0.4       2018-01-30 CRAN (R 3.4.3)
##  effects        4.0-0     2017-09-15 CRAN (R 3.4.1)
##  emmeans        1.1.2     2018-02-24 CRAN (R 3.4.3)
##  estimability   1.3       2018-02-11 CRAN (R 3.4.3)
##  evaluate       0.10.1    2017-06-24 CRAN (R 3.4.1)
##  forcats      * 0.3.0     2018-02-19 CRAN (R 3.4.3)
##  foreign        0.8-69    2017-06-22 CRAN (R 3.4.3)
##  formatR        1.5       2017-04-25 CRAN (R 3.4.0)
##  ggeffects      0.3.1     2018-01-15 CRAN (R 3.4.3)
##  ggplot2      * 2.2.1     2016-12-30 CRAN (R 3.4.0)
##  glmmTMB        0.2.0     2017-12-11 CRAN (R 3.4.3)
##  glue           1.2.0     2017-10-29 CRAN (R 3.4.2)
##  graphics     * 3.4.3     2017-12-07 local         
##  grDevices    * 3.4.3     2017-12-07 local         
##  grid           3.4.3     2017-12-07 local         
##  gridExtra    * 2.3       2017-09-09 CRAN (R 3.4.1)
##  gtable         0.2.0     2016-02-26 CRAN (R 3.4.0)
##  haven        * 1.1.1     2018-01-18 CRAN (R 3.4.3)
##  highr          0.6       2016-05-09 CRAN (R 3.4.0)
##  HLMdiag      * 0.3.1     2015-12-12 CRAN (R 3.4.0)
##  hms            0.4.1     2018-01-24 CRAN (R 3.4.3)
##  htmltools      0.3.6     2017-04-28 CRAN (R 3.4.0)
##  htmlwidgets    1.0       2018-01-20 CRAN (R 3.4.3)
##  httpuv         1.3.6.2   2018-03-02 CRAN (R 3.4.3)
##  httr           1.3.1     2017-08-20 CRAN (R 3.4.1)
##  jsonlite       1.5       2017-06-01 CRAN (R 3.4.0)
##  kableExtra   * 0.7.0     2018-01-15 CRAN (R 3.4.3)
##  knitr        * 1.20      2018-02-20 CRAN (R 3.4.3)
##  labeling       0.3       2014-08-23 CRAN (R 3.4.0)
##  lattice        0.20-35   2017-03-25 CRAN (R 3.4.3)
##  lazyeval       0.2.1     2017-10-29 CRAN (R 3.4.2)
##  lme4         * 1.1-15    2017-12-21 CRAN (R 3.4.3)
##  lmtest         0.9-35    2017-02-11 CRAN (R 3.4.0)
##  lubridate    * 1.7.3     2018-02-27 CRAN (R 3.4.3)
##  magrittr       1.5       2014-11-22 CRAN (R 3.4.0)
##  MASS           7.3-49    2018-02-23 CRAN (R 3.4.3)
##  Matrix       * 1.2-12    2017-11-20 CRAN (R 3.4.3)
##  memoise        1.1.0     2017-04-21 CRAN (R 3.4.0)
##  merTools       0.3.0     2016-12-12 CRAN (R 3.4.0)
##  methods      * 3.4.3     2017-12-07 local         
##  mgcv           1.8-23    2018-01-15 CRAN (R 3.4.3)
##  mime           0.5       2016-07-07 CRAN (R 3.4.0)
##  minqa          1.2.4     2014-10-09 CRAN (R 3.4.0)
##  mnormt         1.5-5     2016-10-15 CRAN (R 3.4.0)
##  modelr         0.1.1     2017-07-24 CRAN (R 3.4.1)
##  modeltools     0.2-21    2013-09-02 CRAN (R 3.4.0)
##  multcomp       1.4-8     2017-11-08 CRAN (R 3.4.2)
##  munsell        0.4.3     2016-02-13 CRAN (R 3.4.0)
##  mvtnorm        1.0-7     2018-01-25 CRAN (R 3.4.3)
##  nlme           3.1-131.1 2018-02-16 CRAN (R 3.4.3)
##  nloptr         1.0.4     2014-08-04 CRAN (R 3.4.0)
##  nnet           7.3-12    2016-02-02 CRAN (R 3.4.3)
##  parallel       3.4.3     2017-12-07 local         
##  pbkrtest       0.4-7     2017-03-15 CRAN (R 3.4.0)
##  pillar         1.2.1     2018-02-27 CRAN (R 3.4.3)
##  pkgconfig      2.0.1     2017-03-21 CRAN (R 3.4.0)
##  plyr           1.8.4     2016-06-08 CRAN (R 3.4.0)
##  prediction     0.2.0     2017-04-19 CRAN (R 3.4.0)
##  psych          1.7.8     2017-09-09 CRAN (R 3.4.2)
##  purrr        * 0.2.4     2017-10-18 CRAN (R 3.4.2)
##  pwr            1.2-2     2018-03-03 CRAN (R 3.4.3)
##  R6             2.2.2     2017-06-17 CRAN (R 3.4.0)
##  Rcpp           0.12.15   2018-01-20 CRAN (R 3.4.3)
##  readr        * 1.1.1     2017-05-16 CRAN (R 3.4.0)
##  readxl         1.0.0     2017-04-18 CRAN (R 3.4.0)
##  reshape2       1.4.3     2017-12-11 CRAN (R 3.4.3)
##  rlang          0.2.0     2018-02-20 CRAN (R 3.4.3)
##  RLRsim         3.1-3     2016-11-04 CRAN (R 3.4.0)
##  rmarkdown      1.9       2018-03-01 CRAN (R 3.4.3)
##  rprojroot      1.3-2     2018-01-03 CRAN (R 3.4.3)
##  rstudioapi     0.7       2017-09-07 CRAN (R 3.4.1)
##  rvest          0.3.2     2016-06-17 CRAN (R 3.4.0)
##  sandwich       2.4-0     2017-07-26 CRAN (R 3.4.1)
##  scales         0.5.0     2017-08-24 CRAN (R 3.4.1)
##  shiny          1.0.5     2017-08-23 CRAN (R 3.4.1)
##  sjlabelled     1.0.8     2018-02-26 CRAN (R 3.4.3)
##  sjmisc         2.7.0     2018-02-03 CRAN (R 3.4.3)
##  sjPlot       * 2.4.1     2018-02-05 CRAN (R 3.4.3)
##  sjstats        0.14.1    2018-02-04 CRAN (R 3.4.3)
##  snakecase      0.9.0     2018-02-25 CRAN (R 3.4.3)
##  splines        3.4.3     2017-12-07 local         
##  stats        * 3.4.3     2017-12-07 local         
##  stats4         3.4.3     2017-12-07 local         
##  stringdist     0.9.4.6   2017-07-31 CRAN (R 3.4.1)
##  stringi        1.1.6     2017-11-17 CRAN (R 3.4.2)
##  stringr      * 1.3.0     2018-02-19 CRAN (R 3.4.3)
##  survey         3.33      2018-01-22 CRAN (R 3.4.3)
##  survival       2.41-3    2017-04-04 CRAN (R 3.4.3)
##  TH.data        1.0-8     2017-01-23 CRAN (R 3.4.0)
##  tibble       * 1.4.2     2018-01-22 CRAN (R 3.4.3)
##  tidyr        * 0.8.0     2018-01-29 CRAN (R 3.4.3)
##  tidyselect     0.2.4     2018-02-26 CRAN (R 3.4.3)
##  tidyverse    * 1.2.1     2017-11-14 CRAN (R 3.4.2)
##  TMB            1.7.12    2017-12-11 CRAN (R 3.4.3)
##  tools          3.4.3     2017-12-07 local         
##  utils        * 3.4.3     2017-12-07 local         
##  viridisLite    0.3.0     2018-02-01 CRAN (R 3.4.3)
##  withr          2.1.1     2017-12-19 CRAN (R 3.4.3)
##  xml2           1.2.0     2018-01-24 CRAN (R 3.4.3)
##  xtable         1.8-2     2016-02-05 CRAN (R 3.4.0)
##  yaml           2.1.17    2018-02-27 CRAN (R 3.4.3)
##  zoo            1.8-1     2018-01-08 CRAN (R 3.4.3)