Game-based learning environments create rich learning experiences
that are both effective and engaging. Recent years have seen growing interest in
data-driven techniques for tutorial planning, which dynamically personalize
learning experiences by providing hints, feedback, and problem scenarios at
run-time. In game-based learning environments, tutorial planners are designed to
adapt gameplay events in order to achieve multiple objectives, such as enhancing
student learning or student engagement, which may be complementary or
competing aims. In this paper, we introduce a multi-objective reinforcement learning
framework for inducing game-based tutorial planners that balance improvements
in student learning and engagement. We
investigate a model-based, linear-scalarized multi-policy algorithm, Convex Hull
Value Iteration, to induce a tutorial planner from a corpus of student interactions
with a game-based learning environment for middle school science education.
Results indicate that multi-objective reinforcement learning creates policies that
are more effective at balancing multiple reward sources than single-objective
techniques. A qualitative analysis of selected policies and multi-objective
preference vectors shows how a multi-objective reinforcement learning framework
shapes the selection of tutorial actions during students’ game-based learning
experiences to effectively achieve targeted learning and engagement outcomes.
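To illustrate the linear-scalarization idea underlying Convex Hull Value Iteration, the sketch below runs standard value iteration on a toy two-objective MDP after collapsing the vector reward with a preference vector w. This is a minimal illustration, not the paper's implementation: the toy dynamics, reward values, and function names are assumptions introduced here for clarity.

```python
import numpy as np

# Toy MDP (illustrative assumption): 3 states, 2 actions, deterministic
# transitions, and a 2-D reward vector (learning gain, engagement).
n_states, n_actions = 3, 2
gamma = 0.9

# P[s, a] -> next state
P = np.array([[1, 2],
              [2, 0],
              [0, 1]])

# R[s, a] -> (learning, engagement) reward vector
R = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.5, 0.5], [1.0, 0.0]],
              [[0.0, 1.0], [0.5, 0.5]]])

def scalarized_value_iteration(w, tol=1e-8):
    """Value iteration on the linearly scalarized reward w . R(s, a);
    returns the greedy policy for preference vector w."""
    r = R @ w                        # (n_states, n_actions) scalar rewards
    V = np.zeros(n_states)
    while True:
        Q = r + gamma * V[P]         # V[P] gathers successor-state values
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return Q.argmax(axis=1)  # one action per state
        V = V_new

# Sweeping preference vectors over (learning, engagement) can surface
# different policies on the convex hull of achievable value vectors.
for w in [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]:
    print("w =", w, "policy =", scalarized_value_iteration(w))
```

Convex Hull Value Iteration extends this per-weight scheme by maintaining, at each state, the set of value vectors that are optimal for *some* weight vector, so a single pass recovers the policies for all linear preferences at once.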