SCORM Engine Integration Architecture

Last Modified: November 14, 2013

Current SCORM Engine Version: 2013.2

Background

The SCORM Engine integrates with many, many systems, over 80 different LMS's at the current count. It provides a vital piece of functionality to these systems and needs to be tightly integrated with them to provide both a seamless user experience and a robust technical implementation.

While a tight integration is vital to the long term success of an LMS powered by the SCORM Engine, it is also important that these systems not be so interwoven that they cannot be maintained separately. Much of the value that Rustici Software provides its clients stems from our ability to maintain and improve the SCORM Engine as the standards evolve and new interpretations of them emerge. Similarly, our clients must be able to evolve and improve their LMS's without being encumbered by the SCORM Engine or relying on Rustici Software to make code changes. Thus the SCORM Engine and the host LMS need to remain logically separate systems. From a technical perspective, we say that the SCORM Engine needs to be loosely coupled with the host LMS.

While many LMS's are quite similar in their basic structure and concepts, each has its own set of functionality that makes it unique. Different sets of business rules, innovative features and even subtle quirks can all affect how SCORM content should best be delivered in a particular LMS. Thus, the SCORM Engine needs to be highly customizable and configurable to handle whatever is thrown at it.

So, the SCORM Engine needs to be tightly integrated, loosely coupled and highly customizable. Those three goals are often at odds with one another. It took some innovative design work to craft an architecture for the SCORM Engine that meets these requirements, but the effort has paid off. The "integration layer" architecture described in this document has enabled us to integrate the SCORM Engine with dozens of different LMS systems without ever having to make a major change to the core SCORM Engine code.

The Integration Layer

The integration layer is the interface between the SCORM Engine and the system with which it is integrating (the "host LMS"). It is also the boundary between the two systems, acting as a buffer to keep the core systems separate.

(This document will refer to the "system with which the SCORM Engine is integrating" as the "host LMS". We use this term for convenience, but note that while most integrations are with an LMS, the SCORM Engine has been integrated with a number of systems not directly related to learning.)

Integration Layer

Loosely Coupled

The diagram above depicts the conceptual architecture of the SCORM Engine integrated with an LMS through the integration layer. Notice that the SCORM Engine does not directly communicate with the host LMS. Instead, all communication is routed through the integration layer. The common interface of the integration layer provides a level of indirection that isolates the host LMS from changes in the SCORM Engine and vice versa.

The integration layer is different for every integration of the SCORM Engine. The integration layer is also the only thing that is different for each integration of the SCORM Engine. The integration layer can be thought of as the "stuff we change" or the "stuff you are allowed to touch" when integrating.

Tightly Integrated

Notice that there is a very tight integration between the SCORM Engine and the integration layer. The integration layer is essentially a component of the SCORM Engine that can be swapped out for each integration. The interface between the integration layer and the host LMS can be very tight or very loose. The integration layer can communicate very loosely with the host LMS via web services (or even URL redirections). Or, the integration layer can invoke an LMS-provided API for a tighter integration. The integration layer can even make direct calls into the host LMS's database to achieve an extremely tight integration. All of these solutions are perfectly viable and it is up to the client to decide how tight a particular integration should be.

Highly Customizable

The integration layer is also where we can customize and configure the SCORM Engine. For anything that ever has been, or conceivably could be, customized in the SCORM Engine, there is an integration function that lets the SCORM Engine "ask" the integration layer how the action should be performed. For instance, before displaying the user interface, the SCORM Engine will ask the integration layer "which skin should I show?". Or, before writing a log message, the SCORM Engine will ask "where should I write this message to?". There are well over 100 integration functions that let us precisely customize the SCORM Engine for a particular host LMS.
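
As a rough sketch of this "question asking" pattern (the class and method names below are hypothetical stand-ins, not the actual SCORM Engine API), a couple of these questions might look like the following in Java:

    // A minimal sketch, not the real SCORM Engine API: two hypothetical
    // "questions" the engine might ask the integration layer before acting.
    public class IntegrationQuestionsSketch {
        public static void main(String[] args) {
            IntegrationLayerSketch integration = new IntegrationLayerSketch();

            // "Which skin should I show?" -- asked before rendering the player UI.
            String skin = integration.getPlayerSkin("PKG-1001");
            System.out.println("Rendering player with skin: " + skin);

            // "Where should I write this message to?" -- asked before logging.
            integration.writeLogMessage("INFO", "Player launched for PKG-1001");
        }
    }

    // Stand-in for one client's integration layer implementation.
    class IntegrationLayerSketch {
        String getPlayerSkin(String externalPackageId) {
            return "default-skin";
        }

        void writeLogMessage(String severity, String message) {
            System.out.println(severity + ": " + message);
        }
    }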

How It Works

The core of the integration layer is an abstract class called the integration interface. The integration interface defines all the operations the SCORM Engine needs to perform which might vary based on the particular integration. For each integration, we create a unique class that implements all the methods defined in the integration interface. The method implementations in this subclass are specific to the host LMS and implement the functionality the client needs. At runtime, the SCORM Engine uses a factory class to instantiate the appropriate integration implementation.
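
In Java terms, a stripped-down sketch of this arrangement might look like the following (class and method names are illustrative, not the actual SCORM Engine types):

    // Illustrative sketch of the pattern described above, not the real classes.
    public abstract class IntegrationInterface {
        // Every operation that may vary per integration is declared here.
        public abstract String getPlayerSkin(String externalPackageId);
        public abstract void logError(String message, Exception cause);
    }

    // One concrete implementation per client, specific to that host LMS.
    class ClientXIntegration extends IntegrationInterface {
        @Override
        public String getPlayerSkin(String externalPackageId) {
            // e.g., look the skin up in Client X's own database
            return "clientx-default";
        }

        @Override
        public void logError(String message, Exception cause) {
            // e.g., forward the error to Client X's logging system
            System.err.println("[ClientX] " + message);
        }
    }

    // At runtime, a factory decides which implementation to instantiate.
    class IntegrationFactory {
        // A real factory would read the implementation class from configuration;
        // it is hard-coded here to keep the sketch short.
        public static IntegrationInterface getIntegration() {
            return new ClientXIntegration();
        }
    }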

UML Diagram

The UML diagram above depicts the class hierarchy for the integration classes. The boxes represent classes and the arrows indicate inheritance. At the top is the abstract IntegrationInterface class. This class is where all of the integration methods are defined (but not implemented). At the bottom are the many concrete implementations of the IntegrationInterface. Each client has its own unique implementation with all of the required methods implemented.

In the middle, there is a DefaultIntegration class. The purpose of this intermediate class is to provide default implementations of the many methods in the IntegrationInterface that usually do not change from client to client. There are over 100 integration methods defined in the IntegrationInterface. Of these, only about a dozen are required to change from client to client. The rest of the methods are there to allow things to change between clients, but in most cases there is a default implementation that is perfectly acceptable. For example, in most cases, it is perfectly acceptable to log error messages to the standard event log. On the other hand, some LMS's have their own built-in event tracking system, in which case it would be appropriate to override the default logging mechanism.
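
Continuing the sketch above, the intermediate class and a client override of the logging behavior might look roughly like this (again, hypothetical names rather than real SCORM Engine code):

    // DefaultIntegration supplies sensible defaults so that a client subclass
    // only overrides the handful of methods that truly differ.
    public abstract class DefaultIntegration extends IntegrationInterface {
        @Override
        public void logError(String message, Exception cause) {
            // Default behavior: write to the standard event log.
            System.err.println("ERROR: " + message);
        }
        // ...default implementations for most of the other methods...
    }

    // An LMS with its own built-in event tracking overrides just this method.
    class ClientYIntegration extends DefaultIntegration {
        @Override
        public String getPlayerSkin(String externalPackageId) {
            return "clienty-branded";
        }

        @Override
        public void logError(String message, Exception cause) {
            // Hypothetical call into the host LMS's event tracking system.
            ClientYEventTracker.record("SCORM_ENGINE_ERROR", message);
        }
    }

    // Stand-in for the host LMS's event tracking API.
    class ClientYEventTracker {
        static void record(String eventType, String detail) {
            System.out.println(eventType + ": " + detail);
        }
    }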

Data Relations

One of the core principles of the SCORM Engine integration design is that the host LMS should not need to know anything of the internals of the SCORM Engine. This separation helps to maintain the loose coupling between systems. Yet, many of the integration functions require that the systems communicate about a specific package or a specific registration. Rather than requiring the host LMS to know of the SCORM Engine's internal identifiers, the SCORM Engine defines a set of external identifier classes that allow the host LMS to use its existing identifiers no matter their structure and type.

Data Relations

Every LMS uses a different set of identifiers (each with different data types) to represent packages and registrations. Some use integers, some use strings, some use GUIDs, and some use a combination of all of these and more. Some LMS's refer to packages as courses, others as lessons or tasks or items or classes. The integration layer defines an abstract way to represent these complex and varying objects in a consistent manner through the ExternalPackageId and ExternalRegistrationId classes.

When we create an integration, we generate concrete implementations of ExternalPackageId and ExternalRegistrationId that are unique to the host LMS. These integration objects have a set of properties that mirrors the keys used by the host LMS for the identified package and registration entities. These objects give the host LMS the ability to communicate with the integration layer in its own language.

Methods With ExternalId Args

These classes are each subclasses of the abstract ExternalId class, which defines serialization methods for them. The serialization behavior common to all external identifier objects allows LMS's to manipulate their identifiers as plain strings when convenient, instead of instantiating actual ExternalId objects.
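
To make this concrete, a hypothetical Client X package identifier might look like the sketch below; this is illustrative only, not the actual generated classes:

    // Illustrative sketch of the external identifier pattern; these are not
    // the actual SCORM Engine class definitions.
    public abstract class ExternalId {
        // Serialize the identifier to a plain string the host LMS can pass around.
        public abstract String serialize();
    }

    // Hypothetical Client X package identifier: its properties mirror the keys
    // Client X already uses for a "course" in its own system.
    class ClientXExternalPackageId extends ExternalId {
        final int courseId;
        final String catalogCode;

        ClientXExternalPackageId(int courseId, String catalogCode) {
            this.courseId = courseId;
            this.catalogCode = catalogCode;
        }

        @Override
        public String serialize() {
            return courseId + "|" + catalogCode;
        }

        // Reconstruct the identifier from its string form.
        static ClientXExternalPackageId deserialize(String value) {
            String[] parts = value.split("\\|");
            return new ClientXExternalPackageId(Integer.parseInt(parts[0]), parts[1]);
        }
    }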

The relationships between the SCORM Engine's internal concepts of package and registration and the host LMS's associated concepts are also reflected in the database. In keeping with the goal of tight integration with loose coupling, we allow two of the SCORM Engine's database tables to be modified during the integration. Both the ScormPackage table and the ScormRegistration table will have additional foreign key fields added to them to reflect their relationships with tables in the host LMS (tight integration). The other SCORM Engine tables remain untouched (loose coupling).

SCORM Engine database tables before integration

SCORM Engine database tables after integration

External Configuration

There is one more integration object that follows the same pattern of composition as the external package id and external registration id. The external configuration object is a "tunnel" for passing information from the host LMS to the integration layer. The external configuration object is perhaps best explained with an example.

Take the SCORM Engine integration function LogError that was previously mentioned. This function is invoked by the SCORM Engine in the event of an unexpected runtime error so that diagnostic information about the error can be recorded for later analysis. Say Client X has service level agreements (SLAs) with a few select customers that impose financial penalties for any system downtime. Because of these SLAs, Client X wants all of its support and development staff to be immediately notified by cell phone, pager, email, text message, singing telegram and carrier pigeon whenever an error affects a customer with an SLA (note, we do not endorse inhumane treatment of pigeons). For all other customers, there's no need to interrupt anybody's sleep, so the error should just be recorded to a system log for later analysis.

To implement the LogError function in the integration layer, we need to have some information about the current customer available to us to know what actions to take. This need poses a problem because it is the SCORM Engine that invokes the LogError function, not the host LMS. The SCORM Engine only knows about packages and registrations, not the host LMS's customer SLAs. Expanding this problem out further to all clients and all integration functions, there are innumerable data points upon which the integration functions might rely to make decisions. The SCORM Engine can't possibly be aware of all of these options, so another solution is needed.

Enter the external configuration object. When the host LMS passes control to the SCORM Engine, it has the opportunity to pass along an external configuration object. Just like the other external ids (external package and external registration), the external configuration may contain any arbitrary set of properties. In other words, it can contain whatever information the integration layer might need. Anytime the SCORM Engine calls an integration function (i.e., anytime it might be calling back to the host LMS), it passes that same external configuration object to the integration layer. In this way, the external configuration object is like a tunnel that allows the host LMS to pass information through the SCORM Engine to the integration layer.

In our example above, the integration would define a property called IsSLACustomer in the ClientXExternalConfiguration class. Then, when launching the course, the host LMS would set this flag appropriately before handing control over to the SCORM Engine. The SCORM Engine would then save this configuration information and pass it to the integration layer every time an integration call is made. The integration function can then examine this flag and take the appropriate course of action in the event of an error.
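
Sketched in Java, with hypothetical names following the Client X example above rather than actual SCORM Engine code, the pieces might fit together like this:

    // Sketch of the external configuration "tunnel" described above.
    public abstract class ExternalConfiguration {
        public abstract String serialize();
    }

    // Arbitrary per-launch data the host LMS wants the integration layer to see.
    class ClientXExternalConfiguration extends ExternalConfiguration {
        final boolean isSLACustomer;

        ClientXExternalConfiguration(boolean isSLACustomer) {
            this.isSLACustomer = isSLACustomer;
        }

        @Override
        public String serialize() {
            return "IsSLACustomer=" + isSLACustomer;
        }
    }

    // How the integration layer's LogError implementation might branch on the flag.
    class ClientXErrorHandling {
        static void logError(String message, ClientXExternalConfiguration config) {
            if (config.isSLACustomer) {
                // Hypothetical escalation path: notify the on-call staff immediately.
                System.out.println("NOTIFY ON-CALL: " + message);
            } else {
                // Otherwise just record to the system log for later analysis.
                System.err.println("LOG: " + message);
            }
        }
    }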