Lightweight Product Line AUTOSAR BSW configuration with physical files and splitables

During AUTOSAR development, the configuration of the basic software is a major task. Usually, some of the parameters will be the same for all your projects (company-wide, product line) and some will be specific to a given project / ECU. And you might have a combination of these within the same BSW configuration container.

One of the approaches uses the AUTOSAR Split(t)able concept, which allows you to split the contents of a model element over more than one physical ARXML-file.

So let us assume we are going to configure the DemGeneral container, that a company-wide policy for all projects requires DemAvailabilitySupport to be set to 1, and that the DemBswErrorBufferSize within the same container should be set specifically for each ECU.

So let us create an .arxml for the company-wide settings. That might be distributed through some general channel, e.g. when setting up a new project.

Now we can place an ECU-specific .arxml next to it in the same project.

Now any tool that supports splitables (e.g. the COMASSO BSWDT tooling) will be able to create a merged model. The following screenshots are based on our own prototype of an AspectJ-based splitable support for Artop. By simply using the context menu


we will see the merged view:

You can see a few things in that screenshot:

  • First, and most important, the two physically separated DemGenerals are now actually one. The tooling (code generators etc.) will now only see one model, one CONF package and one DemGeneral.

Our demonstrator shows additional information as decorators after the element (decorators can be deactivated through preferences in Eclipse).

  • The “[x slices]” decorator shows how many elements are actually used to make up the joined element. We have two physical DemGeneral containers, so it says “2 slices”. We have one DemAvailabilitySupport, so it shows “1 slices”.
  • The decorator at the end shows the physical files that are actually involved for the merging of a given element.

Now if, for some reason, a new version of the company-wide .arxml is provided, we only need to exchange / update that file – there is no need for a complex diff and merge.

There are some use cases that cannot be covered with this scenario, but it opens a lot of possibilities for managing variations in BSW configuration on a file level.

Writing Fast tests for uniqueness in AUTOSAR (and other models) with Java 8

One of the recurring tasks when writing model checks for AUTOSAR (but also for other models) is to check for uniqueness according to some criterion. In AUTOSAR, this could be that all elements in a collection must be unique with respect to their shortName, or, e.g., that all ComMChannels must be unique with respect to their channel IDs.

Such a test is very easily written, but many of the ad-hoc implementations that we see perform badly when the model grows – which is easily missed when testing with small models.

Assume that we want to test ComMChannels for uniqueness based on their channel IDs. To illustrate the problem, here are two slides from our COMASSO slides. A straightforward implementation with Xpand Check (in the COMASSO framework) checks, for each ComMChannel, whether any of the other ComMChannels in the same ComM has the same ID. BUT:

Each channel is compared against all other channels, so the effort grows quadratically with the number of channels – this costs a lot of performance. The trick is to loop only once and use a Set (or Map) to store the data seen so far. This can be done in Xpand and other languages. Java 8’s lambdas make it especially nice, since we can have a single method that collects the duplicates.
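The original snippet is not reproduced here; a minimal Java 8 sketch of such a generic duplicate collector might look like this (the class name, method name and signature are illustrative, not the original code):

    import java.util.Collection;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public final class DuplicateChecks {

        // Groups the elements by the given key in a single pass and keeps only
        // those groups that contain more than one element, i.e. the duplicates.
        public static <T, K> Map<K, List<T>> findDuplicates(Collection<T> elements,
                Function<? super T, ? extends K> keyExtractor) {
            return elements.stream()
                    .collect(Collectors.groupingBy(keyExtractor))
                    .entrySet().stream()
                    .filter(entry -> entry.getValue().size() > 1)
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }

        private DuplicateChecks() {
        }
    }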

This code uses both generic (template) methods and functions passed as arguments to provide a very generic interface. We can use it anywhere to search for duplicates in a collection.
Now looking for duplicates in ComMChannel IDs would look like this:
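For example (getter names like getComMChannels() and getComMChannelId() are assumptions about the generated model API):

    // All channel IDs that occur more than once, together with the offending channels:
    Map<Long, List<ComMChannel>> duplicateIds =
            DuplicateChecks.findDuplicates(comM.getComMChannels(), ComMChannel::getComMChannelId);

    duplicateIds.forEach((id, channels) ->
            System.err.println("ComMChannel id " + id + " is used by " + channels.size() + " channels"));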

And thanks to generic methods and Java 8 we can use the same grouping helper to look for duplicate shortNames:
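Again as a sketch, reusing the same helper with a different key extractor (getter names are assumptions):

    // Elements within one package that share the same shortName:
    Map<String, List<Identifiable>> duplicateShortNames =
            DuplicateChecks.findDuplicates(arPackage.getElements(), Identifiable::getShortName);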

Similar code could also be written in the Xtend2 language.

With the new Sphinx Check framework, a check in an Artop-based tool might look like this:
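A rough sketch, reusing the findDuplicates helper from above and assuming an @Check-annotated validator method as described in the Check framework post below (the exact error(…) signature and the AUTOSAR getter names are assumptions):

    public class ComMUniquenessValidator extends AbstractCheckValidator {

        @Check
        void checkUniqueChannelIds(ComM comM) {
            DuplicateChecks.findDuplicates(comM.getComMChannels(), ComMChannel::getComMChannelId)
                    .forEach((id, channels) -> channels.forEach(channel ->
                            // assumed (message, object, feature) convention, analogous to Xtext
                            error("ComMChannel id " + id + " is not unique", channel, null)));
        }
    }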

Towards a generic splitable framework implementation – Post #3

In the previous blog posts I have laid out the basic thoughts about a framework for splitables. As indicated, we are using AspectJ to intercept the method calls to the EMF model’s classes and change their behavior to present a merged model to the clients. If the AspectJ plugins are used in the client’s runtime, the client will see merged models.

So for a model org.mymodel we intercept all calls to the getters by declaring a pointcut.
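The original declaration is not shown here; in annotation-style AspectJ it might look roughly like this (the package pattern org.mymodel comes from the text, everything else is an assumption):

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Pointcut;

    @Aspect
    public class SplitableGetterAspect {

        // Matches the execution of all public getters of the generated model classes.
        @Pointcut("execution(public * org.mymodel..*.get*(..))")
        public void modelGetter() {
        }
    }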

and in the actual code for the execution we want to do the following:

  • Check if the return type (which could be the direct return type or the type parameter of a parameterized type) is splitable. If so, we need to replace the returned object with the primary slice.
  • If the object that the method is invoked on is splitable, we need to find all slices and return the joined results.

This is the preliminary code for illustration (not yet finished for all cases, since it is just used for a specific meta-model right now):
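Continuing the sketch above, the around advice could roughly implement the two steps like this (ISplittingConfiguration, ISliceFinder, IPrimarySliceFinder and IFeatureCalculator are the components described in Post #2; the method signatures used here are assumptions, not the original code):

    // Additional members of SplitableGetterAspect (wiring of the fields is omitted).
    private ISplittingConfiguration splittingConfiguration;
    private ISliceFinder sliceFinder;
    private IPrimarySliceFinder primarySliceFinder;
    private IFeatureCalculator featureCalculator;

    @Around("modelGetter()")
    public Object presentJoinedView(ProceedingJoinPoint pjp) throws Throwable {
        EObject target = (EObject) pjp.getThis();

        // The object the getter is invoked on is splitable -> join the feature over all slices.
        if (target != null && splittingConfiguration.isSplitable(target)) {
            String getterName = pjp.getSignature().getName();
            return featureCalculator.calculate(target, getterName, sliceFinder.findSlices(target));
        }

        Object result = pjp.proceed();

        // The returned object is splitable -> replace it with its primary slice.
        if (result instanceof EObject && splittingConfiguration.isSplitable((EObject) result)) {
            return primarySliceFinder.findPrimarySlice((EObject) result);
        }
        return result;
    }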

If we look at the implementation of FeatureCalculator, we see that internally we call the same method on all the slices:
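Sketched in the same spirit (a simplified version that only covers many-valued features; all names and signatures are assumptions):

    import java.beans.Introspector;
    import java.util.Collection;
    import java.util.List;

    import org.eclipse.emf.common.util.BasicEList;
    import org.eclipse.emf.common.util.EList;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;

    public class FeatureCalculator implements IFeatureCalculator {

        @Override
        public Object calculate(EObject object, String getterName, List<EObject> slices) {
            // "getElements" -> feature name "elements"
            String featureName = Introspector.decapitalize(getterName.substring("get".length()));
            EList<Object> joined = new BasicEList<Object>();
            for (EObject slice : slices) {
                // Evaluate the same feature on every slice and concatenate the results.
                EStructuralFeature feature = slice.eClass().getEStructuralFeature(featureName);
                Object value = slice.eGet(feature);
                if (value instanceof Collection<?>) {
                    joined.addAll((Collection<?>) value);
                } else if (value != null) {
                    joined.add(value);
                }
            }
            return joined;
        }
    }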

But why does this not cause an infinite loop, since the get methods are intercepted? Because through the definition of the join points, we make sure that our aspect is not executed when called from an aspect or from the splitable framework’s classes. That means that our implementation of the framework can safely operate on the source models.

We are almost set. But when we start a client with that AspectJ configuration, we will see that EMF calls are already intercepted while loading a model. But obviously the models should first be loaded as source models; only later accesses should be intercepted to present the joined model.

We can configure that through additional join points: the following definitions are used to disable the aspect if the current call stack (control flow) comes from certain EMF / Sphinx infrastructure classes.
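A hedged sketch of what such guards could look like, using cflow() (the concrete package names are assumptions):

    // Additional pointcuts of SplitableGetterAspect; the package names are assumptions.

    // Anything called (directly or indirectly) from the splitable framework or from advice itself.
    @Pointcut("cflow(within(org.example.splitable..*)) || cflow(adviceexecution())")
    public void withinSplitableFramework() {
    }

    // Anything called while EMF / Sphinx is loading resources.
    @Pointcut("cflow(execution(* org.eclipse.emf.ecore.resource.Resource+.load(..)))"
            + " || cflow(within(org.eclipse.sphinx.emf.resource..*))")
    public void withinLoadingInfrastructure() {
    }

    // Getters are only intercepted outside of those control flows.
    @Pointcut("modelGetter() && !withinSplitableFramework() && !withinLoadingInfrastructure()")
    public void interceptedModelGetter() {
    }

The @Around advice shown earlier would then be bound to interceptedModelGetter() instead of modelGetter().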

Result

With this configuration, our models load fine and the Sphinx model explorer shows the joined elements wherever a slice is in the model. Right now our implementation supports the joining of models and packages (i.e. it supports containment references) but can be extended to do more.

Upcoming

The following features will be added to the framework:

  • Intercepting more of the necessary methods (e.g. eContainer)
  • Support modification of models (i.e., the intercepted set-methods will modify the source models). The component “Arbiter” will be introduced to decide which source models to modify.
  • (De-)Activation of the joined view based on transactions. A settable flag to indicate whether the client operates on the joined view or on the source view.
  • Integration with IncQuery

Sphinx

We hope to release the framework as part of the Sphinx project.

Towards a generic splitable framework implementation – Post #2

This blog post is the second in a series detailing some concepts for a generic framework for the support of splitables. For this concept we introduce the term “slice”: a “slice” is one of the partial model elements that together form the merged splitable.

Effects of being splittable

If a model element (EObject) is splitable or has a splitable feature, this has two kinds of impact:

  1. Impact on the “behavior” of the class itself.
  2. Impact on other classes that have features of a splitable type

Impact on the Class ITSELF

  • any feature call on such an object needs to join the features of all of this object’s slices. This includes primitive as well as more complex features.

IMPACT ON OTHER CLASSES

  • By feature: any call to another class’s feature that could contain a splittable object needs to filter the result of the feature so that the slices are merged.
    This affects the entire model: all classes could be affected, since any of them could have a feature of a splitable type.
    It also affects any feature whose type is a super-class of a splittable type.
  • By containment/aggregation: If a class is splittable, then all possible classes in all containment trees that lead to a splittable class must also be splittable. Otherwise it would not really be possible to create a splittable tree, since the splittable element could only be in one file when the containment hierarchy is not splittable.
  • By inheritance: That also implies that if a class gains “splittable” through this containment rule, all of its subclasses must be splittable too, since they could contain the specific instance of the containment feature.

Components of a splittable tooling

  • ISplitKeyProvider: For a given object, identifies a “key” that is used to identify the slices that constitute a splitable. E.g., this could be the fully qualified name.
  • ISliceFinder: Given an object, finds all objects that constitute that splittable.
  • ISplittingConfiguration: Given an object or structural feature, decides whether it is splittable.
  • IPrimarySliceFinder: Given an object, finds the object that represents the joined splitable.
  • IFeatureCalculator: Creates a joint view of the splittables (see the interface sketch further below).
  • Arbiter: Decides where to put new elements / how to move elements.

ISplitKeyProvider

For an object, returns the key that is used to identify the slices that together make up a merged element. In a simple implementation, this could e.g. be the fully qualified name. For some AUTOSAR elements, this is the FQN plus additional variant information.

Slice Finder

In a simple implementation, all the slices can be identified through their key, and the slice finder finds all objects with the same key. A more advanced implementation could cache the slices by key.

Primary Slice Finder

Returns an object that represents the fused object for a given key. That could be a new object, in case the model is copied, or, if we inject behavior with AspectJ, one of the slices. Important: this must be stable, i.e. for a given split key, it must always return the same object.

IFeatureCalculator

Must be able to create a joint view of all the slices. This can happen e.g. on copy, when creating a shadow model, or it could be dynamic, creating a common view e.g. with AspectJ.
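Put together as code, the components described above could be sketched as the following interfaces (only the component names come from this post; all method signatures are assumptions for illustration):

    import java.util.List;

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;

    // Returns the key (e.g. the fully qualified name) identifying the slices of an element.
    interface ISplitKeyProvider {
        Object getSplitKey(EObject object);
    }

    // Finds all slices that share the split key of the given object.
    interface ISliceFinder {
        List<EObject> findSlices(EObject object);
    }

    // Decides whether an object or a structural feature is splitable at all.
    interface ISplittingConfiguration {
        boolean isSplitable(EObject object);
        boolean isSplitable(EStructuralFeature feature);
    }

    // Returns the stable object that represents the joined splitable for a given split key.
    interface IPrimarySliceFinder {
        EObject findPrimarySlice(EObject object);
    }

    // Creates the joint view of a feature over all slices (see the AspectJ sketch in Post #3).
    interface IFeatureCalculator {
        Object calculate(EObject object, String getterName, List<EObject> slices);
    }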

Joining Structural Features

  • For references / containment that could contain a splittable, only return the primary slice within the feature.
    • 0..1: Intercept the call and replace the result with the primary slice.
    • 0..n: Return a new list that replaces the contents with the primary slices.
  • Primitives / all features with 0..1 multiplicity (including references and containment):
    • Return the value when all slices have the same value or it is unset (0).
    • Throw an exception otherwise (merge conflict).
  • This needs a list implementation that delegates to the contents of the original lists. It should always return the same list; the feature calculator is responsible for this and could maintain its own cache or attach the value by means of AspectJ.

Towards a generic splitable framework implementation – Post #1

Splitables are a concept introduced by AUTOSAR. In a nutshell, splitables allow model elements to be split over a number of AUTOSAR files. E.g., two AUTOSAR files could contain the same package, and the full model would actually consist of the merged contents of these two files. Some information can be found in the previous blog posts.

Since the introduction of splitables, we have been thinking about different ways of implementation. Now that the concept has also found its way into the EAST-ADL standard as well as into customer models, it is of interest to shed some light on this issue from different angles and to introduce the current activities towards a framework for splitable models. Since Artop and EATOP (EAST-ADL) are both EMF-based, the considerations relate to EMF.

Basic Requirements

For the discussion, we assume that we have three basic meta-model classes:
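The class diagram is not reproduced here; for the discussion, think of something like the following generated EMF interfaces (only Package.getElements() is referenced in the text, the rest is an assumption):

    import org.eclipse.emf.common.util.EList;
    import org.eclipse.emf.ecore.EObject;

    // Root element of the model, containing any number of packages.
    interface Model extends EObject {
        EList<Package> getPackages();
    }

    // A named package, containing any number of elements; packages may be split over several files.
    interface Package extends EObject {
        String getName();
        EList<Element> getElements();
    }

    // A simple named element.
    interface Element extends EObject {
        String getName();
    }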

If we have a split model, then we might have two Packages with the same name in different Resources from the same ResourceSet. For such a package p, p.getElements() should return the joined list of the elements of all packages with that name in the ResourceSet.

There could be a number of approaches to actually do that.

A dedicated API

We could introduce helper functions in addition to the actual setters/getters etc., either directly in the model or as an additional API. That would duplicate a lot of the model API and would also be a source of errors, because clients must make sure that they use the correct API calls.

Design Decision
The framework for Splitable support should not introduce a dedicated API for accessing the Split objects. Clients should be able to work on the standard API.

Putting the logic into the metamodel (ecore, getters / setters)

Since EMF generates the Java code for the model out of the .ecore, we could actually put the additional logic required to “join” the models into the operations of the ecore. This could be done manually, which would imply a lot of work. Another approach would be to modify the code generators that generate the Java code to include the required functionality.

But that would actually mean that we would have to coordinate with all the projects that we would like to support splitables for, and in the meantime reproduce their build environments, etc.

Requirement: Non Intrusiveness
The framework for Splitable support should support working with the binary builds of the supported meta-models.

In addition, we will see later on that we would like to (de-)activate the splitable mechanism depending on who actually calls the framework. That would add additional overhead with this approach.

The (shadow) copy.

The simplest approach, of course, is to create a new model out of the source models that contains the merged view on the model. That means that any code that processes a model needs to invoke the copying before accessing the model.

Requirement: Agnostic Clients
The client code for loading the meta-model should not need special initialisation.

In addition, a pure copy would imply that we cannot modify the model. When modifying a joined model, any model modifications must be reflected in the source models.

Requirement: Model Modification
The framework for Splitable support should be extendable so that it can also support model modification.

With a copy of the model, a framework that works with a shadow copy could work like this:

  • A map that keeps the tracing between source model elements and joined model elements is maintained.
  • When the joined model is modified, listen to the changes on the model and apply them to the source model.

This could be a feasible approach, however, there is also a further idea:

Intercepting and modifying calls to the model

If we could intercept calls to the model (e.g. getters and setters), we could change the behavior of those methods to either reflect the “real” model structure or the actual “merged” view depending on the actual need. Such a different behavior could be considered a cross-cutting concern which directly leads to aspect-oriented programming. Further blog posts will detail the current thoughts/concepts/status of an Aspect-Oriented approach for a splitable framework for EMF models.

P.S.: Notes on Artop

Artop already provides a splitable implementation. But since it is part of Artop, it cannot directly be re-used for non-AUTOSAR customers. Documentation is a bit sparse, but some of the usage can be found in the test cases:

And the Javadoc from SplitableMerge.java shows that a clone is created,

whereas the SplitableEObjectsProvider follows the “intercept idea”. In the default implementation we see

and the comment of the SplitableEObject shows:

Dynamic Workflows

In this guest post, Amine Lajmi introduces the workflow framework that he and itemis Paris designed and implemented for Sphinx.


Still today, MWE is used for the definition of workflows in several model-driven applications (Xtext workflows, legacy Xpand and Check files, etc.), mainly to perform tasks like model loading, running transformations, batch validation, etc. Even though MWE workflows with Xpand and Check are less used, component-based approaches for defining tool chains are still needed for new developments in Xtend and Java.
Workflow components, such as code generators, are often executed in a separate Eclipse instance. In practice, the way a developer usually works is: write some transformations, launch a runtime instance, test the transformations, go back to the IDE and fix some bugs, re-launch the runtime, etc.
The Dynamic Workflows framework newly introduced in Sphinx is a contribution that solves these issues. It offers complete support for developing workflows in Xtend and Java and for running workflows within the same Eclipse instance, which is more convenient for the developer, in a transactional environment. This blog post introduces how to use the framework, based on the Hummingbird example shipped with Sphinx.

A SIMPLE WORKFLOW EXAMPLE

To define a dynamic workflow, subclass org.eclipse.sphinx.emf.mwe.dynamic.WorkspaceWorkflow. The workflow can be written in Xtend, in Java, or in both. Here is an example in Xtend:
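The Xtend example itself is not reproduced here; a rough Java sketch of the same idea could look like this (how child components are registered and the component’s invoke(…) method are assumptions about the API):

    import org.eclipse.emf.mwe2.runtime.workflow.IWorkflowContext;
    import org.eclipse.sphinx.emf.mwe.dynamic.WorkspaceWorkflow;
    import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractWorkspaceWorkflowComponent;

    public class HelloWorldWorkflow extends WorkspaceWorkflow {

        public HelloWorldWorkflow() {
            // Assumption: child components are registered via the inherited children list.
            getChildren().add(new HelloWorldComponent());
        }

        public static class HelloWorldComponent extends AbstractWorkspaceWorkflowComponent {
            @Override
            public void invoke(IWorkflowContext ctx) {
                // Print a message to the console when the workflow is run.
                System.out.println("Hello from a dynamic workflow!");
            }
        }
    }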

To run the workflow, navigate to the Model Explorer, right-click on the workflow file then select Run Workflow > Run. The IDE console should display the message.

WORKSPACE WORKFLOW COMPONENTS

A workflow aggregates a set of workflow components as children. To define a workflow component, subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractWorkspaceWorkflowComponent, for example:

MODEL WORKFLOW COMPONENTS

Model Workflow Components are workflow components which need to read and/or write in models. To define a model workflow component, you need to subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent.
In a transactional context, read and write accesses to models are handled differently: in one case the workflow component is encapsulated into a read transaction, while in the other case, the workflow component is encapsulated into a write transaction.
The Dynamic Workflows framework allows defining both types of model workflow components.

READ-ONLY MODEL WORKFLOW COMPONENTS

By default, all model workflow components are executed in a read transaction unless specified otherwise in their constructors. The following component reads the model slot from the context and prints its content to the console.

READ-WRITE MODEL WORKFLOW COMPONENTS

As stated before, if the model workflow component needs to modify any model, it should be encapsulated into a write transaction. This is handled in the constructor of the component; the only thing to care about is to call the parent constructor with the flag modifiesModel set to true, see example below.
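The example referred to above is not included; a hedged sketch might look like this (the modifiesModel flag is described in the text, while the exact super constructor, the “model” slot name and the invoke(…) signature are assumptions):

    import java.util.List;

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.mwe2.runtime.workflow.IWorkflowContext;
    import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent;

    public class RenameElementsComponent extends AbstractModelWorkflowComponent {

        public RenameElementsComponent() {
            // modifiesModel = true -> the component is executed inside a write transaction.
            super(true);
        }

        @Override
        @SuppressWarnings("unchecked")
        public void invoke(IWorkflowContext ctx) {
            // Assumption: the model objects to operate on are available in the "model" slot.
            List<EObject> modelObjects = (List<EObject>) ctx.get("model");
            for (EObject object : modelObjects) {
                // ... modify the objects here; the changes happen within the write transaction ...
            }
        }
    }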

Finally, to add a workflow component to a workflow, simply add the component to the children list.

To run the workflow, select the model to validate from the Model Explorer, and select Run Workflow > Run. A dialog window invites you to select the workflow to execute (see Fig1); use the search field to filter the available classes, and then click OK. The console output should now display the content of the model slot.

Fig1. Running a dynamic workflow on a model input

You can also embed components written with Xtend and Java in the same workflow.

HEADLESS EXECUTION SUPPORT

The Dynamic Workflows framework allows executing workflows outside the UI. The general syntax of the headless application is the following:
eclipse -noSplash
  -data <Eclipse workspace location>
  -application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
  -workflow <workspace-relative path of a Java or Xtend file, or fully qualified name of a binary class>
  -model <workspace-relative path to a model, a resource URI (possibly with a fragment), or a file URI>
Some examples:
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.java
-model org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird#//@components.0

New Sphinx Validation/Check Framework

In this guest post, Amine Lajmi introduces the new check framework that he and itemis Paris designed and implemented for Sphinx.

VALIDATION/CHECK FRAMEWORK

The standard EMF validation seems tightly coupled to the EMF Ecore metamodel, and difficult to extend without modifying the metamodel. As a consequence, many users are seeking a less intrusive framework, with a more flexible/declarative mechanism.

Therefore, a new annotation-based validation framework has been introduced in Sphinx. The framework follows the same principle as the validation framework of Xtext, as it uses @Check-annotated methods for validating model objects. Moreover, the framework allows extracting validation constants such as error messages and severities into a separate model called a catalog, making the maintenance of the validators much more comfortable.

In this post, we will show how to set up a validator based on the Check framework. We will use the Hummingbird example shipped with Sphinx as a running example.

UNDERSTANDING CHECK VALIDATORS

A check validator is a Java class extending org.eclipse.sphinx.emf.check.AbstractCheckValidator. It contains a set of methods annotated with @Check. The method parameter indicates the type of the object to be validated. At runtime, the validator dispatches the calls to the appropriate methods, based on the type of the current object.

A validator optionally makes use of a Catalog. When a catalog is used, @Check annotations reference constraints by their unique Id in the catalog, so there is a logical mapping from methods to constraints. For a constraint to be applicable within the scope of a validator, the set of categories specified in its @Check annotation should be a subset of the set of categories referenced by the constraint in the check catalog. Roughly speaking, categories are used to narrow down the scope of methods, i.e. their applicability.

CONTRIBUTING CHECK VALIDATORS

A check validator is contributed through the extension point org.eclipse.sphinx.emf.check.checkvalidators.

It is possible to contribute multiple check validators for the same EPackage. It is also possible for a catalog to be shared between multiple check validators: simply use the same plug-in URI of the check catalog in the contribution to the extension point.

DEFINING CONSTRAINTS

Let’s start with something trivial. The following method is called on objects of type Application; it raises an error stating that the application name is invalid. Notice that the API provides three methods, error(…), info(…), and warning(…), to explicitly force the severity of the issue at design time.
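The snippet is not included above; a hedged sketch of such a method, assuming the Hummingbird Application type and an Xtext-like error(message, object, feature) signature:

    @Check
    void checkApplicationName(Application application) {
        // Hypothetical rule: the application name must not be empty.
        if (application.getName() == null || application.getName().isEmpty()) {
            error("The application name is not valid", application,
                    application.eClass().getEStructuralFeature("name"));
        }
    }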

The following @Check annotation points to a catalog constraint with an id equal to “ApplicationNameNotValid”. Instead of using explicit severities, a more general method called issue(…) is used. While error(…) systematically raises an error, issue(…) gets its severity at runtime from the catalog.
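Again as a sketch (the annotation attribute names and the exact issue(…) signature are assumptions; the constraint id and the {0} placeholder are taken from the catalog shown in Fig1):

    @Check(constraint = "ApplicationNameNotValid", categories = { "Category 1" })
    void checkApplicationNameAgainstCatalog(Application application) {
        if (application.getName() == null || application.getName().isEmpty()) {
            // Severity and message text come from the check catalog at runtime;
            // the last argument fills the {0} placeholder of the catalog message.
            issue(application, application.eClass().getEStructuralFeature("name"),
                    "for all categories");
        }
    }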

The severity type is set directly on the check catalog (cf. Fig1), which gives the user more flexibility at design time.

Fig1. Configuring validation constraints and categories from the catalog

Each constraint has a unique identifier and refers to a possibly empty set of categories. In Fig1, the constraint with the id ApplicationNameNotValid is associated with Category 1, its severity is Error, and its message expects a placeholder (see {0} in the message value). This additional parameter is provided as the last argument of the issue(…) method call; in this example it has the value “for all categories”.

UNDERSTANDING CATEGORIES

A constraint can be applicable for one or more categories. This means the constraint will be evaluated only when a category referenced in the @Check annotation is selected by the user. For example, the following constraint is called when Category 1, Category 2, or both categories are selected.

Convention: if no categories are explicitly specified, the constraint applies for all categories.
The following example checks the Component name and raises an info issue if the component name contains an illegal character. In the validator class, the constraint is referenced as follows:

The annotation specifies the constraint ComponentNameNotValid should be checked only for Category 1, which means it is applied only when the user selects Category 1 from the UI.

RUNNING CHECK VALIDATORS

The easiest way to launch a check validation is to use the contextual menu (see Fig2). Right click on the object to be validated, then select Validate > Check-based Validation. Validation is performed on the selected object and all its direct and indirect children.

Fig2. Running a check validation from the UI

The user is then asked for which categories the validation should be performed; let’s select Category 1, for which we know a constraint is defined. The categories displayed in the UI are the ones defined in the check catalog.


Fig3. Selecting the categories for which validation should apply

The Check Validation problems view displays an error as expected, as the constraint ApplicationNameNotValid is violated. Note that the error markers are displayed in both the Check Validation view and the classical Problems view. The Check Validation view gives additional information about the errors, such as the validator which raised the issue, and allows navigating to the error in the model under validation.


Fig4. Validation markers in the check validation view


Sphinx – how to access your models

When you work with Sphinx as a user in an Eclipse runtime, e.g. with the Sphinx model explorer, Sphinx does a lot of work in the background to update models, provide a single shared model to all editors, etc. But what do you do when you want to access Sphinx models from your own code?

EcorePlatformUtil

EcorePlatformUtil is one of the important classes with a lot of methods that help you access your models. Two important methods are

  • getResource(…)
  • loadResource(…)

They come in a variety of parameter variations. The important thing is that getResource(…) will not load your resource if it is not yet loaded. That is a little bit different from the standard ResourceSet.getResource(…) with its loadOnDemand parameter.
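For illustration, client code might use these helpers roughly like this (getResource(IFile) is discussed below; the loadResource overload and the load options shown here are assumptions):

    IFile file = ResourcesPlugin.getWorkspace().getRoot()
            .getFile(new Path("/org.example/model/MyModelFile.hummingbird"));

    // Does not load the resource if it is not loaded yet:
    Resource resource = EcorePlatformUtil.getResource(file);

    // Loads the resource only if it is not loaded yet (assumed overload taking file and load options):
    Resource loaded = EcorePlatformUtil.loadResource(file, null);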

On the other hand, loadResource(…) will only load your resource if it is not loaded yet. If it is already loaded, there will be no runtime overhead. Let’s have a look at the code:

Sphinx uses its internal registries to find the TransactionalEditingDomain that the file belongs to and then calls loadResource(…).

So we have to look at org.eclipse.sphinx.emf.util.EcoreResourceUtil to see what happens next. There is just a little fragment

that leads us to

  • First, the standard EMF ResourceSet.getResource() is used to see if the resource is already there. Note that loadOnDemand is false.
  • Otherwise the resource is actually created and loaded. If it does not have any content, it is immediately removed again.
  • Information about loading errors / warnings is stored at the resource.

EcorePlatformUtil.getResource(IFile)

This method will not load the resource as can be seen from the code:

ModelLoadManager

In addition, the ModelLoadManager provides more integration with the workspace framework (Jobs, ProgressMonitor, etc.):

ModelLoadManager.loadFiles(Collection, IMetaModelDescriptor, boolean, IProgressMonitor) supports an “async” parameter. If that is true, the model loading will be executed within a org.eclipse.core.runtime.jobs.Job. The method in turn calls

ModelLoadManager.runDetectAndLoadModelFiles(Collection, IMetaModelDescriptor, IProgressMonitor), which sets up the progress monitor and performance statistics, itself calling detectFilesToLoad and runLoadModelFiles. detectFilesToLoad will be discussed in a dedicated posting.

runLoadModelFiles sets up the progress monitor and performance statistics and calls loadModelFilesInEditingDomain, finally delegating down to EcorePlatformUtil.loadResource(editingDomain, file, loadOptions), which we discussed above.

Sphinx is blacklisting your proxies

EMF proxy resolving is one area where Sphinx adds functionality. Sphinx was designed to support, amongst others, AUTOSAR models, and with AUTOSAR models references have some special traits:

  • References are based on fully qualified names
  • Any number of .arxml files can be combined to form a model
  • Models can be merged based on their names. So any number of resources can contain a package with a fully qualified name of “/AUTOSAR/p1/p2/p3/…pn”

Blacklisting

Obviously, in scenarios like this it would be highly inefficient to try to resolve a proxy each time it is encountered in code. So Sphinx can “blacklist” proxies that could not be resolved.

The information about the proxy blacklist is used in org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl. At the end of the getEObject() methods we see that the proxy is added to the blacklist if it could not be resolved:

And of course, at the beginning of the method, a null is returned immediately if the proxy has been blacklisted:
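The two fragments are not reproduced here, but the pattern is roughly the following (a simplified sketch of a resource set extending ResourceSetImpl, not the actual Sphinx source; the blacklist field is a hypothetical stand-in for Sphinx’s ModelIndex):

    // Simplified sketch of the getEObject() pattern described above.
    private final Set<URI> proxyBlacklist = new HashSet<>();

    @Override
    public EObject getEObject(URI uri, boolean loadOnDemand) {
        // Bail out early if this proxy URI is already known to be unresolvable.
        if (proxyBlacklist.contains(uri)) {
            return null;
        }
        EObject resolved = super.getEObject(uri, loadOnDemand);
        // Remember unresolvable proxies so that we do not try again and again.
        if (resolved == null) {
            proxyBlacklist.add(uri);
        }
        return resolved;
    }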

Getting de-listed

Now Sphinx has to find out when it makes sense to remove a proxy from the blacklist and try to resolve it again. So it registers an org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndexUpdater via the extension point described in a previous post.

So the ModelIndexUpdater will react to changes:

The class that actually implements the blacklist is org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndex, which mostly delegates to org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.MapModelIndex

When a resource has changed or is being loaded, the MapModelIndex tries to see if the blacklisted proxies can be resolved against that resource:

When the resource is actually removed from the model, all proxies from that resource are removed from the index:

So Sphinx avoids repeatedly trying to re-resolve proxies that cannot be resolved.

Sphinx is listening – editingDomainFactoryListeners

One of the mechanisms that Sphinx uses to get notified about changes in your Sphinx-based models is registering listeners when the editing domain for your model is created. This mechanism can also be used for your own listeners.

Sphinx defines the extension point org.eclipse.sphinx.emf.editingDomainFactoryListeners and in org.eclipse.sphinx.emf it registers a number of its own listeners:

Listeners registered by Sphinx

The registry entries for that extension point are processed in EditingDomainFactoryListenerRegistry.readContributedEditingDomainFactoryListeners(). It creates an EditingDomainFactoryListenerRegistry.ListenerDescriptor which stores the class name and registers it internally for the meta-models that are specified in the extension. The ListenerDescriptor also contains code to actually load the specified class and instantiate it (base type is ITransactionalEditingDomainFactoryListener).

The method EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) can be used to get all the ITransactionalEditingDomainFactoryListener that are registered for a given IMetaModelDescriptor.

These are in turn invoked from ExtendedWorkspaceEditingDomainFactory.firePostCreateEditingDomain(Collection, TransactionalEditingDomain). ExtendedWorkspaceEditingDomainFactory is responsible for creating the editing domain and through this feature we have a nice mechanism to register custom listeners each time an editing domain is created by Sphinx (e.g., after models are being loaded).