Partial List of Speakers

Vasile Rus
The University of Memphis
Talk: Automated Assessment of Learner-Constructed Responses
Vasile Rus is a professor in the Department of Computer Science at the University of Memphis. He is also a member of the Institute for Intelligent Systems at the University of Memphis. Dr. Rus conducts state-of-the-art research and teaching in the area of language and information processing. He has been exploring fundamental topics such as natural language based knowledge representations, semantic similarity, and question answering, as well as applications such as intelligent tutoring systems and software defect knowledge management. Dr. Rus has received research awards to support his work from the National Science Foundation, the Institute of Education Sciences, the Office of Naval Research, and other federal agencies. He has been a Principal Investigator or co-Principal Investigator on awards totaling more than $6.2 million. For his work on automated methods to handle software defect reports in large-scale software development projects, Dr. Rus was named a Systems Testing Research Fellow of the FedEx Institute of Technology. He has published more than 80 scientific articles in premier peer-reviewed international conferences and journals, as well as book chapters.
Talk Abstract
Assessment is a key element in education in general, and in educational technologies such as Intelligent Tutoring Systems (ITSs) in particular, because fully adaptive tutoring presupposes accurate assessment. Indeed, a necessary step towards adapting instruction is assessing students' knowledge state so that appropriate instructional tasks are selected (macro-adaptation) and appropriate scaffolding is offered while students work on a task (micro-adaptation). Given the early position of the assessment module in the educational processing pipeline, and therefore the positive or negative cascading effects it can have on downstream modules (learner model, feedback, strategies, and outcomes, e.g., learning), the importance of automated assessment cannot be overstated.

We focus in this talk on automated methods for assessing freely constructed textual responses (as opposed to responses to, for instance, multiple-choice questions). Learner-constructed responses fit well with constructivist theories of learning that emphasize learners constructing their own knowledge and with self-explanation theories of learning that emphasize learners self-explaining their understanding of target concepts.

The self-generation process, the key feature of learner-constructed responses, offers unique opportunities and challenges when it comes to automating the assessment process. One effect of the self-generated nature of open-ended responses, which is an advantage and a challenge at the same time, is their diversity along many quantitative and qualitative dimensions. For instance, free responses can vary in size from a single word to a paragraph to a full document. The challenge is that any solution must handle this entire variety of student responses, a tall order.

Another major challenge is that open-ended responses may need to be assessed in different ways depending on the target domain and instructional goals. This makes it difficult to compare assessments. For example, in automated essay scoring the emphasis is more on how learners argue for their position with respect to an essay prompt while in other tasks, such as conceptual Physics problem solving or source code comprehension, the emphasis is more on the content and accuracy of the solution articulated by the learner. We will provide an overview of the opportunities, challenges, and state-of-the-art solutions in the area of automated assessment of learner-generated natural language responses.
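To make the content-focused kind of assessment concrete, the sketch below scores a learner's free-text response against a reference answer using bag-of-words cosine similarity, a simple lexical proxy for the semantic similarity methods mentioned above. This is an illustrative toy, not Dr. Rus's actual system; the function names, the threshold, and the example answers are all assumptions for demonstration.

```python
from collections import Counter
import math


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts (0.0 to 1.0)."""
    vec_a = Counter(text_a.lower().split())
    vec_b = Counter(text_b.lower().split())
    # Dot product over the shared vocabulary.
    dot = sum(vec_a[word] * vec_b[word] for word in vec_a)
    norm_a = math.sqrt(sum(count * count for count in vec_a.values()))
    norm_b = math.sqrt(sum(count * count for count in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def assess_response(student_answer: str, reference_answer: str,
                    threshold: float = 0.5) -> bool:
    """Mark a free-text response correct if it is lexically close enough
    to the reference answer. The threshold is an illustrative assumption."""
    return cosine_similarity(student_answer, reference_answer) >= threshold


reference = "force equals mass times acceleration"
print(assess_response("the force is mass times the acceleration", reference))  # True
print(assess_response("plants use sunlight to make food", reference))          # False
```

Real systems replace the bag-of-words vectors with richer semantic representations precisely because lexical overlap fails on paraphrases, which is one face of the diversity challenge discussed above.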

Furthermore, we will argue that student-generated open responses (be they textual, visual, or in some abstract language such as mathematical expressions) are the only assessment modality that leads to true assessment, because they are the only modality that reveals students' true mental models. As an immediate consequence, future educational technologies should include open-ended assessment items and corresponding facilities that enable the automated assessment of such open-ended student responses.

PPT: PowerPoint 2007 presentation RUS CBS.pptx