Sphinx – how to access your models

When you work with Sphinx as a user in an Eclipse runtime, e.g. with the Sphinx model explorer, Sphinx does a lot of work in the background to update models, provide a single shared model to all editors, etc. But what do you do when you want to access Sphinx models from your own code?


EcorePlatformUtil is one of the central classes, with a lot of methods that help you access your models. Two important methods are

  • getResource(…)
  • loadResource(…)

They come in a variety of parameter variations. The important thing is that getResource(…) will not load your resource if it is not yet loaded. That is slightly different from the standard ResourceSet.getResource(…) with its loadOnDemand parameter.

On the other hand, loadResource(…) will only load your resource if it is not loaded yet. If it is, there will be no runtime overhead. Let’s have a look at the code:
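The difference between the two lookups can be sketched with a toy resource set in plain Java. The class and method names below are illustrative only, not the actual Sphinx or EMF implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the two lookup semantics; not the real Sphinx/EMF code.
class ToyResourceSet {
    private final Map<String, String> loaded = new HashMap<>(); // URI -> contents

    // Like EcorePlatformUtil.getResource(...): never triggers loading.
    String getResource(String uri) {
        return loaded.get(uri); // null if the resource is not loaded yet
    }

    // Like EcorePlatformUtil.loadResource(...): loads only when absent.
    String loadResource(String uri) {
        return loaded.computeIfAbsent(uri, u -> "contents of " + u);
    }
}
```

A second loadResource(…) call for the same URI simply returns the already loaded resource, which is where the "no runtime overhead" claim comes from.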


Sphinx uses its internal registries to find the TransactionalEditingDomain that the file belongs to and then calls loadResource(…).

So we have to look at org.eclipse.sphinx.emf.util.EcoreResourceUtil to see what happens next. A small code fragment there leads us to the following:

  • In line 25, the standard EMF ResourceSet.getResource() is used to see if the resource is already there. Note that loadOnDemand is false.
  • Otherwise the resource is actually created and loaded. If it does not have any content, it is immediately removed.
  • Information about loading errors / warnings is stored at the resource.
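The steps above can be sketched roughly like this. These are toy types, not the actual EcoreResourceUtil code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of the loadResource flow described above; the real
// implementation lives in org.eclipse.sphinx.emf.util.EcoreResourceUtil.
class ToyResource {
    final String uri;
    List<String> contents = new ArrayList<>();
    List<String> errors = new ArrayList<>(); // loading diagnostics stay on the resource
    ToyResource(String uri) { this.uri = uri; }
}

class ToyLoader {
    final Map<String, ToyResource> resourceSet = new HashMap<>();

    ToyResource loadResource(String uri, Function<String, List<String>> parser) {
        // Step 1: like ResourceSet.getResource(uri, false) - lookup only, no loading.
        ToyResource existing = resourceSet.get(uri);
        if (existing != null) {
            return existing;
        }
        // Step 2: otherwise the resource is actually created and loaded.
        ToyResource resource = new ToyResource(uri);
        resourceSet.put(uri, resource);
        try {
            resource.contents = parser.apply(uri);
        } catch (RuntimeException e) {
            resource.errors.add(e.getMessage()); // record errors at the resource
        }
        // Step 3: if it has no content, it is immediately removed again.
        if (resource.contents.isEmpty()) {
            resourceSet.remove(uri);
            return null;
        }
        return resource;
    }
}
```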


This method will not load the resource, as can be seen from the code:


In addition, the ModelLoadManager provides more integration with the workspace framework (Jobs, ProgressMonitor, etc.):

ModelLoadManager.loadFiles(Collection, IMetaModelDescriptor, boolean, IProgressMonitor) supports an “async” parameter. If that is true, the model loading will be executed within a org.eclipse.core.runtime.jobs.Job. The method in turn calls

ModelLoadManager.runDetectAndLoadModelFiles(Collection, IMetaModelDescriptor, IProgressMonitor), which sets up progress monitor and performance statistics, itself calling detectFilesToLoad and runLoadModelFiles. detectFilesToLoad will be discussed in a dedicated posting.

runLoadModelFiles sets up the progress monitor and performance statistics and calls loadModelFilesInEditingDomain, finally delegating to EcorePlatformUtil.loadResource(editingDomain, file, loadOptions), which we discussed above.
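The "async" switch can be sketched as follows. This is plain Java; the real code schedules an org.eclipse.core.runtime.jobs.Job with a progress monitor, both omitted here:

```java
import java.util.Collection;
import java.util.concurrent.CompletableFuture;

// Rough sketch of the async switch in ModelLoadManager.loadFiles(...);
// illustrative only, not the actual Sphinx implementation.
class ToyModelLoadManager {
    void loadFiles(Collection<String> files, boolean async, Runnable loadOperation) {
        if (async) {
            // In Sphinx this would be wrapped in a workspace Job;
            // joined here only to keep the sketch deterministic.
            CompletableFuture.runAsync(loadOperation).join();
        } else {
            loadOperation.run();
        }
    }
}
```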

Sphinx is blacklisting your proxies

EMF proxy resolving is one area where Sphinx adds functionality. Sphinx was designed to support, amongst others, AUTOSAR models, and with AUTOSAR models, references have some special traits:

  • References are based on fully qualified names
  • Any number of .arxml files can be combined to form a model
  • Models can be merged based on their names. So any number of resources can contain a package with a fully qualified name of “/AUTOSAR/p1/p2/p3/…pn”


Obviously, it would be highly inefficient to try to resolve a proxy each time it is encountered in scenarios like this. So Sphinx can “blacklist” the proxies.

The information about the proxy blacklist is used in org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl. At the end of the getEObject() methods we see that the proxy is added to the blacklist if it could not be resolved:

And of course, at the beginning of the method, null is returned immediately if the proxy has been blacklisted:

Getting de-listed

Now Sphinx has to find out when it makes sense to stop blacklisting a proxy and try to resolve it again. So it registers an org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndexUpdater via the extension point described in a previous post.

So the ModelIndexUpdater will react to changes:

The class that actually implements the blacklist is org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndex, which mostly delegates to org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.MapModelIndex.

When a resource has changed or is being loaded, the MapModelIndex checks whether the blacklisted proxies can now be resolved against the resource:

When the resource is actually removed from the model, all proxies from that resource are removed from the index:
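The overall blacklist lifecycle might be sketched like this. This is a simplified toy class, not the actual ModelIndex/MapModelIndex implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the blacklist idea behind ModelIndex/MapModelIndex; names and
// data structures are simplified, not the actual Sphinx classes.
class ToyProxyBlackList {
    // proxy URI -> URI of the resource the proxy was encountered in
    private final Map<String, String> unresolvable = new HashMap<>();

    boolean isBlackListed(String proxyUri) {
        return unresolvable.containsKey(proxyUri);
    }

    // Called when a resolution attempt failed (end of getEObject()).
    void add(String proxyUri, String owningResourceUri) {
        unresolvable.put(proxyUri, owningResourceUri);
    }

    // A resource was loaded or changed: de-list the proxies that now resolve.
    void resourceChanged(Set<String> resolvableNow) {
        unresolvable.keySet().removeIf(resolvableNow::contains);
    }

    // A resource was removed from the model: drop all proxies recorded for it.
    void resourceRemoved(String resourceUri) {
        unresolvable.values().removeIf(resourceUri::equals);
    }
}
```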

So Sphinx avoids repeatedly trying to resolve proxies that are known to fail.

Sphinx is listening – editingDomainFactoryListeners

One of the mechanisms that Sphinx uses to get notified about changes in your Sphinx-based models is registering listeners when the editing domain for your model is being created. This mechanism can also be used for your own listeners.

Sphinx defines the extension point org.eclipse.sphinx.emf.editingDomainFactoryListeners and in org.eclipse.sphinx.emf it registers a number of its own listeners:

Listeners registered by Sphinx


The registry entries for that extension point are processed in EditingDomainFactoryListenerRegistry.readContributedEditingDomainFactoryListeners(). It creates an EditingDomainFactoryListenerRegistry.ListenerDescriptor which stores the class name and registers it internally for the meta-models that are specified in the extension. The ListenerDescriptor also contains code to actually load the specified class and instantiate it (base type is ITransactionalEditingDomainFactoryListener).

The method EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) can be used to get all the ITransactionalEditingDomainFactoryListener that are registered for a given IMetaModelDescriptor.

These are in turn invoked from ExtendedWorkspaceEditingDomainFactory.firePostCreateEditingDomain(Collection, TransactionalEditingDomain). ExtendedWorkspaceEditingDomainFactory is responsible for creating the editing domain, and through this feature we have a nice mechanism to register custom listeners each time an editing domain is created by Sphinx (e.g., when models are being loaded).
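The registry mechanism boils down to something like the following sketch, with metamodel descriptors and listeners reduced to plain strings and a toy interface (none of these names are the actual Sphinx types):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy listener interface standing in for ITransactionalEditingDomainFactoryListener.
interface ToyEditingDomainListener {
    void postCreateEditingDomain(String editingDomainId);
}

// Simplified sketch of the EditingDomainFactoryListenerRegistry mechanism.
class ToyListenerRegistry {
    private final Map<String, List<ToyEditingDomainListener>> byMetaModel = new HashMap<>();

    // Corresponds to processing the contributed extension registry entries.
    void register(String metaModelDescriptor, ToyEditingDomainListener listener) {
        byMetaModel.computeIfAbsent(metaModelDescriptor, k -> new ArrayList<>()).add(listener);
    }

    // Corresponds to getListeners(IMetaModelDescriptor).
    List<ToyEditingDomainListener> getListeners(String metaModelDescriptor) {
        return byMetaModel.getOrDefault(metaModelDescriptor, List.of());
    }

    // Corresponds to firePostCreateEditingDomain(...) in the factory.
    void firePostCreateEditingDomain(String metaModelDescriptor, String editingDomainId) {
        for (ToyEditingDomainListener l : getListeners(metaModelDescriptor)) {
            l.postCreateEditingDomain(editingDomainId);
        }
    }
}
```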

Sphinx is listening to Resources – Synchronizing Model Descriptors

The Eclipse Sphinx project provides a number of useful features for (EMF-based) model management. One of these features is reloading the model when the underlying resources change. Sphinx uses so-called ModelDescriptors to define which files belong to a model, and on a change these have to be updated as well.

One of the relevant code pieces is org.eclipse.sphinx.emf.Activator:

The ModelDescriptorSynchronizer registers itself with standard means at the Eclipse workspace to listen for resource changes and adds a BasicModelDescriptorSynchronizerDelegate delegate:


The major task of that class is to map the IResourceChangeEvent from the Eclipse framework to Sphinx’s IModelDescriptorSyncRequests, which specify what to update in the model descriptors:

But it also does some handling of the model registry caches:

org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer itself extends org.eclipse.sphinx.platform.resources.syncing.AbstractResourceSynchronizer, which implements IResourceChangeListener from Eclipse and is a base class for handling resource changes. It basically executes the syncRequests:

In our case, the sync requests are of type ModelDescriptorSyncRequest, because that is how createSyncRequest is overridden in org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer.
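The create-and-execute pattern for sync requests can be sketched roughly like this. These are toy types, heavily simplified compared to the Sphinx classes:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a ModelDescriptorSyncRequest: it records what to update.
class ToySyncRequest {
    final List<String> filesToAdd = new ArrayList<>();
    final List<String> filesToRemove = new ArrayList<>();

    void perform(List<String> modelDescriptors) {
        modelDescriptors.addAll(filesToAdd);
        modelDescriptors.removeAll(filesToRemove);
    }
}

// Toy stand-in for AbstractResourceSynchronizer: map change events to a
// sync request, then execute it.
class ToyResourceSynchronizer {
    // Corresponds to the overridable createSyncRequest().
    ToySyncRequest createSyncRequest() { return new ToySyncRequest(); }

    void resourceChanged(String kind, String file, List<String> modelDescriptors) {
        ToySyncRequest request = createSyncRequest();
        if ("ADDED".equals(kind)) {
            request.filesToAdd.add(file);
        } else if ("REMOVED".equals(kind)) {
            request.filesToRemove.add(file);
        }
        // "CHANGED" would trigger descriptor updates in the real implementation.
        request.perform(modelDescriptors); // "executes the sync requests"
    }
}
```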

Now the Model Descriptors are actually processed in this class:

And we are finally at the org.eclipse.sphinx.emf.model.ModelDescriptorRegistry, where the information about all the loaded models is actually kept (to be detailed in a further blog post).

Supporting Model-To-Model Transformation in Java with AspectJ

There are a number of Model-To-Model transformation frameworks provided under the Eclipse umbrella (e.g. QVTO). Some of these provide a lot of support. However, in some scenarios, you need to either adapt them or implement your own M2M in Java. For us, some of the cases are:

  • The framework does not support model loading/saving with Sphinx
  • The framework does not support EMF unsettable attributes (eIsSet)
  • Framework performance

However, one of the most annoying problems in writing transformations is caching already created elements. Consider the following abstract example:

  • Our source model has an element of type Package ‘p’
  • In the package, there are sub elements ‘a’ and ‘b’
  • ‘a’ is a typed element; its type is ‘b’

Usually, when mapping the contents of p, we would iterate over its contents and transform each element. However, when transforming ‘a’, the transformed element ‘A’ should have its type set to the transformed ‘b’ (‘B’). But we have not transformed it yet. If we transform it at this point in time, it will be transformed again when iterating further through the package, resulting in two different ‘B’s in the target model.

Obviously, we should keep a cache of mapping results and not create a new target instance if a source element has been transformed before. However, managing those caches will clutter our transformation rules.

But it is quite easy to factor this out by using AspectJ. First, we define an annotation for methods that should be cached:


Then we define an aspect that intercepts all calls to methods annotated with @Cached:

  • In line 14, we build a cache key out of the current object, the method called, and its arguments
  • If this combination is already cached, we return the cached entry
  • Otherwise, we proceed to the original method and cache its result
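If you prefer to stay in plain Java, the same interception idea can be sketched with a JDK dynamic proxy instead of AspectJ. This is toy code, not the actual aspect; the ToyTransformer interface and all names are made up for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy transformation interface; stands in for the @Cached transformation rules.
interface ToyTransformer {
    String transform(String source);
}

// Dynamic-proxy sketch of the @Cached aspect: intercept calls, build a key
// from the method and its arguments, and only delegate on a cache miss.
class CachingProxy {
    @SuppressWarnings("unchecked")
    static <T> T cached(Class<T> iface, T target) {
        Map<List<Object>, Object> cache = new HashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            // Cache key: method plus arguments (the aspect also includes 'this').
            List<Object> key = new ArrayList<>();
            key.add(method.getName());
            if (args != null) key.addAll(Arrays.asList(args));
            if (cache.containsKey(key)) {
                return cache.get(key);               // already transformed: reuse result
            }
            Object result = method.invoke(target, args); // proceed to the original method
            cache.put(key, result);
            return result;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```

With this, a source element transformed twice still yields a single target instance, which is exactly the property the aspect provides.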

The transformator then looks like this:

(OK, so this is Xtend2 rather than Java, because the polymorphic dispatch comes in handy.)

Note that we are transforming EAST-ADL models here. The package contains Units and Quantities. Units refer to Quantities. Quantity transformations are called twice: from the package transformation and from the Unit transformation. However, by using the caching, we get the correct structure.

In addition, we also solve a little annoying problem: elements have to be put into the containment hierarchy as well. But that would require additional code in the transformators and passing the containers around.

In line 25 of the aspect, we check if the target object implements the AutoAdd interface and call the method in that case. So the transformation can take care of the addition to the containment hierarchy at a defined place.

The code above is an implementation prototype to show initial feasibility.



Travel Tips for EclipseCon France

EclipseCon France is one of my favourite conferences. If you are going, here are some tips:

  • I prefer to book a hotel in the city centre, e.g. near Jean Jaures (https://goo.gl/maps/RKkQJ). A lot of the restaurants are within walking distance and there are a lot of places you can use to meet other participants.
  • There is a bus shuttle between the airport and the city centre (Jean Jaures) that also stops right in front of the EclipseCon venue. And it is reasonably priced.
  • Toulouse has a dense network of public bicycle rental stations. So instead of using the metro, you might want to pedal your way to EclipseCon – it is within reasonable distance and there are bike stations in front of the venue and nearby.
  • Although you will find WiFi at the venue and at hotels, I prefer a prepaid internet SIM from SFR or Orange. If you have a mobile WLAN hotspot or can use SIM cards in your notebook, that is my preferred option. Orange and SFR both have shops right in the city centre. You can get a prepaid SIM easily – but bring a passport / ID card for registration.

Enjoy an interesting conference in a beautiful city in early summer!


Open Collaborations for Automotive Software – and Eclipse

On June 4th itemis is hosting a conference on Open Collaborations for Automotive Software (in German) in Esslingen near Stuttgart. There are a lot of interesting talks and – while not immediately obvious – all of them do have some relation to Eclipse.

  • Open Source OSEK: Erika Enterprise is the first certified Open Source OSEK. And it uses Eclipse-based tooling for the configuration / development of OSEK-based software
  • COMASSO is a community-based AUTOSAR BSW implementation. It comes with BSWDT, an Eclipse-based AUTOSAR configuration tool that leverages technologies like EMF and Xpand
  • SAFE RTP Open Source Platform for Safety-Modeling and -Analysis: This tooling is completely based on Eclipse and features the EAST-ADL implementation EATOP, which is published within the Eclipse Automotive Industry Working Group
  • openMDM: Open Measured Data Management: Again, a community-based tooling for the management of measured data, based on Eclipse
  • Strategic Relevance of Open Source at ETAS GmbH: ETAS GmbH builds a lot of automotive tools on Eclipse and will talk about the business relevance of open source / Eclipse
  • Advanced Technology for SW Sharing – Steps towards an open source platform for automotive SW Build Systems: A new build system framework for automotive ECUs – based on Eclipse

Eclipse is a key technology in automotive software engineering and the conference will introduce many interesting aspects.

AUTOSAR reports with Artop and BIRT

Even in times of model-based development, artefacts like PDF and Word documents are still important for documentation. With Eclipse-based technologies, it is easy to produce documents in various formats from your AUTOSAR XML.

One approach would be to use Eclipse M2T (model-to-text) technologies to generate your own reports. Technologies for that would be Xtend2 or Xpand. An article can be found here.

However, with the Eclipse BIRT project, there is also a more WYSIWYG approach. Taking a very simple AUTOSAR model as an input, we can easily create reports in most formats.


After creating a BIRT project, we have to configure a so-called “Data Source” that simply takes our ARXML as input:

We then configure one or more “Data Sets” that actually pull the data from the data source:

From there it is mostly drag and drop and maybe a little scripting to create the report in the WYSIWYG editor – it also supports a lot of styling and templating, including images, custom headers, etc.:

As you can see, we create a table with a row for each SWC-Type with the shortname and a subtable for all the ports of that SWC-Type. The BIRT Report Viewer shows the report in HTML.


By clicking on “Export Report” we can export to a number of formats:





BIRT and Artop provide powerful reporting functionality.



Support for vendor specific parameter / module definitions in COMASSO basic software configuration tool

During the 6th AUTOSAR Open Conference, one of the presenters pointed out that the integration of vendor-specific / custom parameter definitions into the tool chain can be problematic because of insufficient tool support. Some tools seem to have hard-coded user interfaces for the configuration or proprietary formats for customization.

The COMASSO basic software tooling chose a different approach: the user interface and code generator framework are dynamically derived from the .arxml files in the workspace, and all the configuration data is directly stored in .arxml (so no import/export of models is required, just drag and drop).
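Conceptually, the workspace scan can be sketched like this. Note this is a toy name-based filter for illustration; the real tool inspects the .arxml parameter-definition contents, not file names:

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy sketch: derive the available modules from param-definition files found
// in the workspace. Illustrative only; the actual COMASSO tooling parses the
// .arxml contents rather than matching file names.
class ToyParamDefScanner {
    static List<String> modulesFrom(List<String> workspaceFiles) {
        return workspaceFiles.stream()
                .filter(f -> f.endsWith("_ParamDef.arxml"))
                .map(f -> f.substring(f.lastIndexOf('/') + 1).replace("_ParamDef.arxml", ""))
                .collect(Collectors.toList());
    }
}
```

Adding a new param def file to the workspace is then enough for the module to show up, which matches the behavior described below.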

NVRAM Example

In one minimal example that we use for training purposes, we have a parameter definition for an NvRam module (stripped from the official AUTOSAR definition). In the “file system” view, the param def can be seen in the workspace.

This results in the “Model view”:

Introducing a vendor specific module

Now assume the contrived scenario that we would like to provide a BSW/MCAL module for a Graphics Processing Unit (GPU). For the configuration tool to work, all we need is a param definition for that.

Although COMASSO does not require a specific project structure, for convenience we copy the NvM file structure and create a new param def called GpU_ParamDef.arxml. In XML, we define a new module:

After saving the new param def .arxml, the tool scans the workspace and rebuilds the user interface.

Without a single line of additional code, the tool knows about the vendor-specific parameter definition. The icon in front of the GPU is grey because we have not configured any data yet. Opening the editor shows that it knows about the VSMD now as well:

Any configuration here will be directly stored as values in a .arxml file.

Code generation

But not only the user interface knows immediately about the vendor-specific modules – so does the generation and validation framework. As you can see below, we can now directly access the GPU module configuration from the AUTOSAR root (line 11) and then loop through the new GPU descriptors (line 12).

And the content assist system is working with the information as well. No need to remember what we defined. In line 16 we invoke content assist on “d” (which is a GPUBlockDescriptor) and get suggestions on all the attributes.

Open Language

The code generation and validation framework is Eclipse Xtend/Xpand. It is publicly available and not a proprietary solution.

Parameter Description

In addition, when clicking on an element in the configuration editor, there is a documentation view that shows the important information from the .arxml in a nicer form. Any documentation that is stored in the .arxml will be easily accessible.









Functional Architectures (EAST-ADL) and Managing Behavior Models

In the early phases of systems engineering, functional architectures are used to create a functional description of the system without specifying details about the implementation (e.g. the decision of implementation in HW or SW). In EAST-ADL, the building blocks of functional architectures (Function Types) can be associated with so-called Function Behaviors that support the addition of formal notations for the specification of the behavior of a logical function.

In the research project IMES, we are using the EAST-ADL functional architecture and combine it with block diagrams. Let’s consider the following simplified functional model of a Start-Stop system:

In EAST-ADL, you can add any number of behaviors to a function. We already have a behavior activationControlBehavior1 attached to the function type activationControl, and we are adding another behavior called newBehavior.

A behavior itself does not specify anything yet in EAST-ADL. It is more like a container that refers to other artefacts that actually contain the behavior specification. You can refer to any specification (like ML/SL or other tools). For our example, we are using the block diagram design and simulation tool DAMOS, which is available as open source at www.eclipse.org.

We can now invoke “Create Block Diagram” on the behavior.

The tool will then create a block diagram with in- and output ports that are taken from the function type (you can see in the diagram that we have two input ports and one output port). At this point, the architects and developers can specify the actual logic. Of course, it is also possible to use existing models and derive the function type from them (“reverse from existing”). That would work on a variety of existing specifications, like state machines, ML/SL, etc.

A DAMOS model could look like this:

At this point of the engineering process, we would have:

  • A functional architecture
  • one or more function behaviors with behavior models attached to the function types

Obviously, the next thing that comes to mind is to combine all the individual behavior models and create a simulation model for a (sub-)system. Since we might have more than one model attached to a function, we have to pick a specific model for each of the function types.

This is done by creating a system configuration. That will create a configuration file that associates a function type (on the left side) with a behavior model (on the right side). Of course, since this uses Eclipse Xtext, it is not only a text file but a full model with validation (are the associated models correct?) and full type-aware content assist. After the configuration has been defined, a simulation model is generated that can then be run to combine the set of function models.
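The idea of such a system configuration can be sketched in plain Java. These are toy classes for illustration; the real tooling expresses the mapping as an Xtext model with validation and content assist:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of a system configuration: each function type is mapped to one
// of its attached behaviors, and the selection is validated.
class ToySystemConfiguration {
    // function type -> behaviors attached to it
    private final Map<String, List<String>> attachedBehaviors = new HashMap<>();
    // function type -> the behavior chosen for the simulation model
    private final Map<String, String> selection = new HashMap<>();

    void attach(String functionType, String behavior) {
        attachedBehaviors.computeIfAbsent(functionType, k -> new ArrayList<>()).add(behavior);
    }

    // Returns false if the chosen behavior is not attached to that function
    // type - the "are the associated models correct?" validation.
    boolean select(String functionType, String behavior) {
        if (!attachedBehaviors.getOrDefault(functionType, List.of()).contains(behavior)) {
            return false;
        }
        selection.put(functionType, behavior);
        return true;
    }

    Map<String, String> selection() { return selection; }
}
```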