Research Overview - Short

Understanding the Impact of Online Assessment Practices in Higher Education

Problem

The Importance of Assessment

Among the most important roles of instructors in higher education is certifying that each learner in their course has achieved a particular standard in relation to the intended outcomes of the course, and that this certification rests on a valid and reliable measurement of the learner’s true ability. The importance of this determination reflects how student achievement data are used not only in summative course assessments, but also in predicting future success, awarding scholarships, and determining acceptance into competitive programs (Baird et al., 2017; Guskey & Link, 2019). Classroom assessment practices in higher education tend to focus on summative exams, usually designed to maximize efficiency in administration and objectivity in scoring (Lipnevich et al., 2020). This approach has well-documented problems, however, including learner anxiety, the tendency of learners to cram for exams and take surface rather than deep approaches to learning (Biggs & Tang, 2011), misalignment between learning outcomes and assessments, and ‘teaching to the test’ (Broadfoot, 2016; Gerritsen-van Leeuwenkamp et al., 2017).

Broadfoot (2016) and Pellegrino and Quellmalz (2010) argue that the goals and intended outcomes of higher education have changed as society has become saturated with digital technologies. Predictably, this has led to a widening gap between traditional assessment structures, which prioritize validity, reliability, fairness, and objectivity through psychometric analyses, and the modern goals of higher education, which prioritize more affective constructs such as cooperation, empathy, creativity, and inquiry. Complicating this problem is the trend, accelerated by the COVID-19 pandemic, towards using digital technologies to create, administer, and score assessments in order to increase the efficiency and objectivity of test administration (Benjamin, 2019), a trend that reinforces traditional assessment structures. Yet, as Shute et al. (2016) argue, digital technologies could instead be used to drive innovation in assessment practice while balancing the need for both quantitative and qualitative approaches to assessment.

Validity and Reliability

Large-scale assessments (LSAs), like the SAT or GRE, undergo thorough vetting and robust statistical analysis to ensure that they are valid and reliable predictors of success in higher education. LSAs are necessarily context independent, meaning that they are intended to provide useful information about an examinee regardless of who they are or where and when they complete the assessment (Baird et al., 2017). Black and Wiliam (1998) and Guskey and Link (2019), however, report that classroom teachers, when given the opportunity, attempt to emulate these summative LSAs by building their own assessment items and instruments or using publisher-created ones. Unfortunately, these instructor- or publisher-created instruments have not been vetted through the same degree of psychometric analysis and refinement as typical LSAs, meaning that much of the assessment practice in higher education may rest on untested assumptions of validity and reliability (Broadfoot, 2016; Lipnevich et al., 2020). Furthermore, classroom assessments should take into account learner contexts, including accessibility accommodations, learner agency and autonomy, and the context of instruction (Brookhart, 2003).
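To make the notion of psychometric vetting concrete, the short Python sketch below computes Cronbach’s alpha, a standard internal-consistency reliability estimate that LSAs routinely undergo but classroom instruments typically do not. The data and function here are illustrative assumptions only, not instruments or analyses from the proposed study.

    import numpy as np

    def cronbachs_alpha(scores: np.ndarray) -> float:
        """Estimate internal-consistency reliability of a test.

        scores: 2-D array, rows = examinees, columns = test items.
        """
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical item scores for six examinees on a four-item quiz.
    quiz = np.array([
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0],
        [1, 0, 1, 1],
        [0, 1, 0, 0],
    ])
    print(f"Cronbach's alpha: {cronbachs_alpha(quiz):.2f}")

Values near 1 suggest the items measure a common construct; instructor-built quizzes are rarely checked against even this basic criterion.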

Technology and Assessment

The use of technology in higher education has been particularly noticeable in assessment practices, with instructors relying on the ease of administering and efficiency of scoring selected-response tests to determine learners’ grades (Broadfoot, 2016). It has been slower, though, to yield significant innovation in how technology might afford novel assessment structures (Pellegrino & Quellmalz, 2010). This does not mean, however, that there are no affordances of technology that may empower new ways of thinking about how grading decisions are made. Researchers caution that the increased use of technology in assessment will require careful thought about the ethical use of data, especially as surveillance tools intended to enforce academic integrity regulations proliferate in the field (Oldfield et al., n.d.).

Possible Questions

  • How do instructors perceive various approaches to assessment in higher education (e.g., summative, formative, ungrading)?
  • What factors influence instructors’ approaches to assessment?
  • What are the effects of assessment practices in digital environments on learners’ experiences of learning in higher education?

Potential Methods

Based on the purposes and questions noted above, a mixed-methods design seems most likely: a survey of instructors and learners in higher education about their views on assessment, with opportunities for open-ended responses to gather further context. Following analysis of the quantitative data, follow-up interviews with selected instructors and learners may be conducted to explore their beliefs and experiences more deeply.
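As a rough illustration of this quantitative-first, explanatory sequential design, the Python sketch below summarizes hypothetical Likert-scale responses and flags the respondents whose answers diverge most from the item means as candidates for follow-up interviews. All column names and data are assumptions for illustration; actual instruments and selection criteria would be determined during the study.

    import pandas as pd

    # Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree).
    responses = pd.DataFrame({
        "respondent": ["r1", "r2", "r3", "r4", "r5"],
        "exams_measure_learning": [4, 2, 5, 1, 3],
        "formative_feedback_helps": [5, 5, 3, 4, 2],
        "open_to_ungrading": [2, 4, 1, 5, 3],
    })

    items = responses.drop(columns="respondent")
    print(items.agg(["mean", "std"]).round(2))   # descriptive summary per item

    # Flag the respondents furthest from the item means as interview candidates.
    distance = (items - items.mean()).abs().sum(axis=1)
    candidates = responses.loc[distance.nlargest(2).index, "respondent"]
    print("Follow-up interview candidates:", list(candidates))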

Significance of the Research

The pivot to remote teaching in the spring of 2020 due to the COVID-19 pandemic created conditions that led many instructors to reconsider the design and structure of their courses, including how they assess learners. Their sudden reliance on technology to administer exams revealed significant gaps in what had become traditional modes of assessment. While there will almost certainly be some post-pandemic reversion to the norm, this presents an opportunity to explore the topic of assessment with both instructors and learners. Understanding both how instructors think about assessment and how learners are impacted by assessment decisions will be critical to informing assessment practices and policies as higher education emerges from the pandemic and moves forward into the 21st century.

About the Researcher

I am in my third year (part-time) of my Ph.D. program in the Educational Technology area of the Department of Curriculum and Instruction at the University of Victoria and a graduate research affiliate of the Technology Integration and Evaluation (TIE) Research Lab. My doctoral coursework has included Advanced Research Methods (UVic), Education Action Research (UBC), and Test Theory (UAlberta). I also took two research methods courses at Athabasca University during my M.Ed. My program supervisor is Dr. Valerie Irvine, Co-director of the Technology Integration and Evaluation Lab at the University of Victoria. In addition, I am Manager of Online Learning and Instructional Technology at a different Western Canadian university, where I support faculty in designing and deploying transformative online learning experiences that focus on rich communities of inquiry. I am also a member of the board of the Open/Technology in Education, Society, and Scholarship Association (OTESSA), a member association of the Federation for the Humanities and Social Sciences and a participating member of the annual Congress of the Humanities and Social Sciences.

References

Baird, J.-A., Andrich, D., Hopfenbeck, T. N., & Stobart, G. (2017). Assessment and learning: Fields apart? Assessment in Education: Principles, Policy & Practice, 24(3), 317–350. https://doi.org/10/gf3brt

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university: What the student does (4th ed.). Society for Research into Higher Education & Open University Press.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10/fpnss4

Broadfoot, P. (2016). Assessment for twenty-first-century learning: The challenges ahead. In M. J. Spector, B. B. Lockee, & M. D. Childress (Eds.), Learning, design, and technology (pp. 1–23). Springer International Publishing. https://doi.org/10.1007/978-3-319-17727-4_64-1

Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5–12. https://doi.org/10/dj7bxr

Gerritsen-van Leeuwenkamp, K. J., Joosten-ten Brinke, D., & Kester, L. (2017). Assessment quality in tertiary education: An integrative literature review. Studies in Educational Evaluation, 55, 94–116. https://doi.org/10/ghjbhx

Guskey, T. R., & Link, L. J. (2019). Exploring the factors teachers consider in determining students’ grades. Assessment in Education: Principles, Policy & Practice, 26(3), 303–320. https://doi.org/10/ghg8j7

Lipnevich, A. A., Guskey, T. R., Murano, D. M., & Smith, J. K. (2020). What do grades mean? Variation in grading criteria in American college and university courses. Assessment in Education: Principles, Policy & Practice, 27(5), 480–500. https://doi.org/10/ghjw3k

Oldfield, A., Broadfoot, P., Sutherland, R., & Timmis, S. (n.d.). Assessment in a digital age: A research review. Graduate School of Education, University of Bristol. Retrieved January 14, 2021, from https://www.bristol.ac.uk/media-library/sites/education/documents/researchreview.pdf

Pellegrino, J. W., & Quellmalz, E. S. (2010). Perspectives on the integration of technology and assessment. Journal of Research on Technology in Education, 43(2), 119–134. https://doi.org/10/ggfh8z

Shute, V. J., Leighton, J. P., Jang, E. E., & Chu, M.-W. (2016). Advances in the science of assessment. Educational Assessment, 21(1), 34–59. https://doi.org/10/gfgtrs