A partial archive of edxchange.opencraft.com as of Wednesday, July 13, 2022.

Add adaptive learning capabilities to Open edX

yoavca

Adapt the content based on the user’s success and knowledge, in order to bring all students to the same level by the end of each course, but not necessarily via the same path.

cotoha

We’ve helped conduct adaptive learning experiments on Lagunita (Stanford’s Open edX instance), and now Harvard is starting similar experiments.

So my understanding is that adaptive learning will be brought to edX (as a third-party tool) within a year or so. Hopefully :slight_smile:

yoavca

Adding to this thread the link from Andrew Ang’s presentation at the Open edX Con:
https://vpal.harvard.edu/blog/designing-adaptive-learning-and-assessment-harvardx-collaborative-project-harvard

@Colin_Fredericks any new insights from the Harvard side? :slight_smile:

antoviaque

There are also the adaptive learning features being contributed by FUN and OpenCraft:

Summary/next steps of the contribution: https://docs.google.com/document/d/1WmfX3dcXImAXkIwcbUVfNjlJtal927g28dQG5onYQbE/edit

Colin_Fredericks

Since that paper came out, we’ve started implementing this in the next course on our list. There are two big improvements on the platform integration side here:

  1. This professor is a programmer, and he and his grad student have created not just a few problems, but several “problem generators” that allow us to make many variations on a single problem (see the sketch after this list).
  2. We have a better way to “hide” the problems within the edX course, so that we can try a larger variety of experimental approaches. The trick is using a conditional block: we literally have 1200+ auto-generated problems hidden in a single unit, where we can display them to students via the XBlock URL, but the students will never see them otherwise.
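
To give a feel for the generator idea, here is a minimal sketch of what such a tool could look like: one template, many auto-generated CAPA problem variants. This isn’t our actual code (which isn’t public); the template, parameters, and file layout are all illustrative.

    # Minimal sketch of a "problem generator": one template, many
    # auto-generated CAPA problem variants. Template and names are
    # illustrative, not the actual Harvard code.
    import random

    TEMPLATE = """<problem>
      <p>A car travels {distance} km in {hours} hours.
         What is its average speed in km/h?</p>
      <numericalresponse answer="{answer}">
        <responseparam type="tolerance" default="0.5"/>
        <formulaequationinput/>
      </numericalresponse>
    </problem>
    """

    def generate_variants(count, seed=0):
        """Yield `count` CAPA problem XML strings with randomized parameters."""
        rng = random.Random(seed)
        for _ in range(count):
            distance = rng.randint(60, 600)
            hours = rng.randint(1, 6)
            yield TEMPLATE.format(distance=distance, hours=hours,
                                  answer=round(distance / hours, 1))

    # Write the variants out as OLX files, ready for import into a hidden unit.
    for i, xml in enumerate(generate_variants(1200)):
        with open(f"problem_{i:04d}.xml", "w", encoding="utf-8") as fh:
            fh.write(xml)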

There have probably also been improvements in the LTI tool and the SCALE algorithm, but I don’t know as much about those.

antoviaque

@Colin_Fredericks Those seem like interesting experiments. Is there any code and/or a demo available somewhere for this? That course is still in preparation and hasn’t been released yet, right?

@tikr @Braden If we haven’t already, could we make sure to have a look at this, to see whether there is potential for conflicts and/or collaboration in some of the upstream PRs coming out of both projects?

Colin_Fredericks

No public code or demos at the moment, though I’d be glad to walk people through it. And yes, it’s unreleased. We’ve been testing out the methodology on campus while we build up content for the full release.

tikr

@antoviaque @Braden @Colin_Fredericks

Based on the information that is openly available (in particular, the main blog post about Harvard’s approach, as well as the detailed posts on course design and technology), I don’t see potential for conflicts between the two approaches, at least not at the current stage. One aspect that we could collaborate on in the future would be content tagging (note, though, that this is out of scope for the current adaptive learning epic we are working on with FUN) – @Colin_Fredericks, the blog posts mentioned above don’t go into detail about how

all problems in the course were manually tagged with one or several learning objectives. […] all problems in the 4 adaptive assessments were tagged with one of three difficulty levels: advanced, regular and easy.

Do you have any information about that available elsewhere?

Beyond that, to achieve tighter integration with edx-platform, I think it would be possible for Harvard to adopt an approach that is similar to the one we’ve been developing with FUN:

You could move the current LTI implementation to a subclass of the Randomized Content Module with a custom UI. (With this approach, sets of questions to display would be stored in content libraries.) This might help address the main issues you were facing (hiding problems and retrieving data to send to TutorGen) in the following way (sketched in code after this list):

  • You could use arbitrary information about individual users (such as assigned cohort) to determine which questions to make available and/or display, and whether to display adaptive UI elements (toolbar and navigation features). (*)
  • You could reuse the Adaptive Learning backend that we built (and are in the process of upstreaming) to retrieve relevant data about problem submissions in a robust way, subclassing it to enable the exact behavior that you need for TutorGen.
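
To make the subclassing idea more concrete, here is a rough sketch. Treat it as a sketch only: the exact base class name and import path for the Randomized Content Module depend on your platform version, and _user_cohort(), _is_adaptive(), and _adaptive_toolbar_url() are hypothetical helpers standing in for whatever cohort lookup and UI wiring you’d actually use.

    # Rough sketch only: the base class name/import path depends on your
    # platform version, and _user_cohort(), _is_adaptive() and
    # _adaptive_toolbar_url() are hypothetical helpers, not platform APIs.
    from xmodule.library_content_block import LibraryContentBlock

    class AdaptiveContentBlock(LibraryContentBlock):
        """Randomized Content Module variant with adaptive selection and UI."""

        def selected_children(self):
            # Start from the block's normal selection out of the content
            # library, then narrow it down using arbitrary per-user
            # information, such as the user's assigned cohort.
            selected = super().selected_children()
            if self._user_cohort() == "experimental":
                selected = [c for c in selected if self._is_adaptive(c)]
            return selected

        def student_view(self, context):
            # Render as usual, then layer the adaptive UI elements (toolbar
            # and navigation features) on top for the experimental group only.
            fragment = super().student_view(context)
            if self._user_cohort() == "experimental":
                fragment.add_javascript_url(self._adaptive_toolbar_url())
            return fragment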

The only drawback of that approach would be loss of portability (being an LTI tool, Bridge for Adaptivity can be integrated more easily with other learning platforms such as Canvas).

If you’re interested, I recommend having a look at the section called “Technical approach” from the document that summarizes the adaptive learning features we built with FUN.

If moving to a more integrated approach is not an option for you, you could still consider replacing the JavaScript hack for retrieving submission data and sending it to the LTI block with the backend approach described above. You’d still have custom platform code that you’d need to maintain (**), but as our approach shows, it would be possible to keep that code in separate files (aside from a couple of lines for registering the backend). That could help minimize rebasing efforts for you, and in terms of retrieving relevant data, you wouldn’t be limited to scraping the user/problem/grade data that is available on the page.
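
For illustration, a sketch of what such a backend could look like, assuming the eventtracking-style send(event) interface; the endpoint, the event filter, and the module path in the registration snippet are made up for the example, not our actual backend.

    # Illustrative sketch, not the actual FUN/OpenCraft backend: assumes the
    # eventtracking-style send(event) interface; the endpoint, event filter
    # and module path below are made up for the example.
    import requests

    class TutorGenBackend:
        """Tracking backend that forwards problem submissions to TutorGen."""

        def __init__(self, url, api_key):
            self.url = url
            self.api_key = api_key

        def send(self, event):
            # Forward only the submission events the adaptive engine needs,
            # instead of scraping user/problem/grade data off the page.
            if event.get("event_type") == "problem_check":
                requests.post(
                    self.url,
                    json=event,
                    headers={"Authorization": self.api_key},
                    timeout=5,
                )

    # The "couple of lines" for registering the backend (Django settings):
    # EVENT_TRACKING_BACKENDS["tutorgen"] = {
    #     "ENGINE": "your_plugin.backends.TutorGenBackend",
    #     "OPTIONS": {"url": "https://tutorgen.example/api", "api_key": "..."},
    # }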

Footnotes

(*) To allow for an easy way to group questions, you could consider having the new block type draw questions from two different content libraries (one for questions that the control group should see, and another one for questions that the experimental group should see).

(**) From our perspective, getting the JavaScript hack merged upstream would be unlikely, and the same is probably true for a custom event tracking backend. That’s why you’d most likely have to continue maintaining some code on your fork with either approach.