New Sphinx Validation/Check Framework

In this guest post, Amine Lajmi introduces the new check framework that he and itemis Paris designed and implemented for Sphinx.

VALIDATION/CHECK FRAMEWORK

The standard EMF validation seems tightly coupled to the EMF Ecore metamodel, and difficult to extend without modifying the metamodel. As a consequence, many users are seeking a less intrusive framework, with a more flexible/declarative mechanism.

Therefore, a new annotation-based validation framework has been introduced in Sphinx. The framework follows the same principle as the Xtext validation framework, in that it uses @Check annotated methods for validating model objects. Moreover, the framework allows extracting validation constants such as error messages and severities into a separate model called a catalog, which makes maintaining the validators much more comfortable.

In this post, we will show how to set up a validator based on the Check framework. We will use the Hummingbird example shipped with Sphinx as a running example.

UNDERSTANDING CHECK VALIDATORS

A check validator is a Java class extending org.eclipse.sphinx.emf.check.AbstractCheckValidator. It contains a set of methods annotated with @Check. The method parameter indicates the type of the object to be validated. At runtime, the validator dispatches the calls to the appropriate methods, based on the type of the current object.
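
A minimal skeleton of such a validator looks roughly like this (class and method names are illustrative; Application is a type from the Hummingbird metamodel whose imports are omitted here, and the @Check annotation is assumed to live next to AbstractCheckValidator in org.eclipse.sphinx.emf.check):

    import org.eclipse.sphinx.emf.check.AbstractCheckValidator;
    import org.eclipse.sphinx.emf.check.Check;

    // Skeleton of a check validator for the Hummingbird metamodel
    public class HummingbirdCheckValidator extends AbstractCheckValidator {

        // Invoked for every object of type Application encountered during validation;
        // the parameter type is what drives the dispatching
        @Check
        void checkApplication(Application application) {
            // validation logic for Application objects goes here
        }
    }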

A validator optionally makes use of a Catalog. When a catalog is used, @Check annotations reference constraints by their unique Id in the catalog, so there is a logical mapping from methods to constraints. For a constraint to be applicable within the scope of a validator, the set of categories specified in its @Check annotation should be a subset of the set of categories referenced by the constraint in the check catalog. Roughly speaking, categories are used to narrow down the scope of methods, i.e. their applicability.

CONTRIBUTING CHECK VALIDATORS

A check validator is contributed through the extension point org.eclipse.sphinx.emf.check.checkvalidators.

It is possible to contribute multiple check validators for the same EPackage. It is also possible for a catalog to be shared between multiple check validators: simply use the same plug-in URI of the check catalog in the contributions to the extension point.

DEFINING CONSTRAINTS

Let’s start with something trivial. The following method is called on objects of type Application; it raises an error stating that the application name is invalid. Notice that the API provides three methods, error(…), info(…), and warning(…), to explicitly force the type of the issue at design time.
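
Inside such a validator, the method might look like the following sketch (isValidName(…) is a hypothetical helper, and the exact argument list of error(…) is an assumption made for illustration):

    @Check
    void checkApplicationName(Application application) {
        if (!isValidName(application.getName())) {  // isValidName(...) is a hypothetical helper
            // error(...) always raises an error, regardless of any catalog settings;
            // the argument list shown here is an assumption
            error("The application name '" + application.getName() + "' is not valid", application);
        }
    }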

The following @Check annotation points to a catalog constraint whose id is “ApplicationNameNotValid”. Instead of using explicit severities, a more general method called issue(…) is used. While error(…) systematically raises an error, issue(…) obtains its severity at runtime from the catalog.
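
A sketch of the catalog-backed variant (the constraint attribute of @Check and the parameter list of issue(…) are assumptions; the constraint id and the “for all categories” placeholder value are the ones discussed with Fig1 below):

    @Check(constraint = "ApplicationNameNotValid")
    void checkApplicationNameAgainstCatalog(Application application) {
        if (!isValidName(application.getName())) {
            // issue(...) looks up severity and message in the check catalog at runtime;
            // the trailing argument fills the {0} placeholder of the catalog message
            // (the exact signature is assumed for illustration)
            issue(application, "for all categories");
        }
    }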

The severity type is set directly on the check catalog (cf. Fig1), which gives the user more flexibility at design time.

 

Fig1. Configuring validation constraints and categories from the catalog


Each constraint has a unique identifier and refers to a possibly empty set of categories. In Fig1, the constraint with the id ApplicationNameNotValid is associated with Category 1, its severity is Error, and its message expects a placeholder (see {0} in the message value). This additional parameter is provided as the last argument of the issue(…) call; in this example its value is “for all categories”.

UNDERSTANDING CATEGORIES

A constraint can be applicable to one or more categories. This means the constraint is evaluated only when a category referenced in the @Check annotation is selected by the user. For example, the following constraint is called when Category 1, Category 2, or both categories are selected.

Convention: if no categories are explicitly specified, the constraint applies to all categories.

The following example checks the Component name and raises an info issue if the component name contains an illegal character. In the validator class, the constraint is referenced as follows:
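
A sketch of this category-scoped constraint (again, the annotation attributes and the issue(…) signature are assumptions; containsIllegalCharacter(…) is a hypothetical helper):

    @Check(constraint = "ComponentNameNotValid", categories = { "Category 1" })
    void checkComponentName(Component component) {
        if (containsIllegalCharacter(component.getName())) {
            // assuming the catalog defines this constraint with severity Info,
            // issue(...) will report an info marker at runtime
            issue(component, component.getName());
        }
    }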

The annotation specifies the constraint ComponentNameNotValid should be checked only for Category 1, which means it is applied only when the user selects Category 1 from the UI.

RUNNING CHECK VALIDATORS

The easiest way to launch a check validation is to use the contextual menu (see Fig2). Right click on the object to be validated, then select Validate > Check-based Validation. Validation is performed on the selected object and all its direct and indirect children.

 

Fig2. Running a check validation from the UI


The user is then asked for which categories the validation should be performed; let’s select Category 1, for which we know a constraint is defined. The categories displayed in the UI are the ones defined in the check catalog.


Fig3. Selecting the categories for which validation should apply

 

The Check Validation view displays an error as expected, since the constraint ApplicationNameNotValid is violated. Note that the error markers are displayed in both the Check Validation view and the classic Problems view. The Check Validation view gives additional information about the errors, such as the validator that raised the issue, and allows navigating to the offending element in the model under validation.


Fig4. Validation markers in the check validation view

 

Sphinx – how to access your models

When you work with Sphinx as a user in an Eclipse runtime, e.g. with the Sphinx model explorer, Sphinx does a lot of work in the background to update models, provide a single shared model to all editors, etc. But what do you do when you want to access Sphinx models from your own code?

EcorePlatformUtil

EcorePlatformUtil is one of the most important classes here, with a lot of methods that help you access your models. Two important methods are:

  • getResource(…)
  • loadResource(…)

They come in a variety of parameter variations. The important thing is that getResource(…) will not load your resource if it is not yet loaded. That is a little bit different from the standard ResourceSet.getResource(…) with its loadOnDemand parameter.

On the other hand, loadResource(…) will only load your resource if it is not loaded yet. If it is, there will be no runtime overhead. Let’s have a look at the code:
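
The gist of it is roughly the following (an approximation, not the verbatim Sphinx source; in particular the WorkspaceEditingDomainUtil lookup is an assumption about how the editing domain is obtained):

    // Approximate sketch of EcorePlatformUtil.loadResource(IFile, Map<?, ?>)
    public static Resource loadResource(IFile file, Map<?, ?> options) {
        // ask Sphinx' registries for the TransactionalEditingDomain the file belongs to
        TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getEditingDomain(file);
        // delegate to the editing-domain-aware variant discussed below
        return loadResource(editingDomain, file, options);
    }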

 

Sphinx uses its internal registries to find the TransactionalEditingDomain that the file belongs to and then calls loadResource(…).

So we have to look at org.eclipse.sphinx.emf.util.EcoreResourceUtil to see what happens next. A small fragment there leads us to the actual loading logic:

  • First, the standard EMF ResourceSet.getResource() is used to see whether the resource is already there. Note that loadOnDemand is false.
  • Otherwise the resource is actually created and loaded. If it does not have any content, it is immediately removed again.
  • Information about loading errors / warnings is stored on the resource.
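
In code, that behaviour amounts to something like this sketch (simplified, assuming a resourceSet, uri and options in scope; not the verbatim EcoreResourceUtil source):

    Resource resource = resourceSet.getResource(uri, false);   // do not load on demand
    if (resource == null) {
        resource = resourceSet.createResource(uri);
        try {
            resource.load(options);
        } catch (IOException ex) {
            // information about load errors/warnings ends up in
            // resource.getErrors() / resource.getWarnings()
        }
        if (resource.getContents().isEmpty()) {
            // empty resources are removed again right away
            resourceSet.getResources().remove(resource);
        }
    }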

EcorePlatformUtil.getResource(IFile)

This method will not load the resource as can be seen from the code:
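
An approximate sketch of what it does (again not the verbatim source, and the editing domain lookup is an assumption):

    public static Resource getResource(IFile file) {
        TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getEditingDomain(file);
        if (editingDomain != null) {
            URI uri = URI.createPlatformResourceURI(file.getFullPath().toString(), true);
            // loadOnDemand is false: only an already loaded resource is returned
            return editingDomain.getResourceSet().getResource(uri, false);
        }
        return null;
    }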

ModelLoadManager

In addition, the ModelLoadManager provides more integration with the workspace framework (Jobs, ProgressMonitor, etc.):

ModelLoadManager.loadFiles(Collection, IMetaModelDescriptor, boolean, IProgressMonitor) supports an “async” parameter. If that is true, the model loading will be executed within an org.eclipse.core.runtime.jobs.Job. The method in turn calls

ModelLoadManager.runDetectAndLoadModelFiles(Collection, IMetaModelDescriptor, IProgressMonitor), which sets up the progress monitor and performance statistics and itself calls detectFilesToLoad and runLoadModelFiles. detectFilesToLoad will be discussed in a dedicated posting.

runLoadModelFiles again sets up the progress monitor and performance statistics and calls loadModelFilesInEditingDomain, finally delegating down to the EcorePlatformUtil.loadResource(editingDomain, file, loadOptions) variant that we discussed above.
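
From client code, a call could look like this sketch (the INSTANCE singleton and the Hummingbird metamodel descriptor are assumptions used for illustration):

    Collection<IFile> files = collectHummingbirdFiles(project);  // hypothetical helper
    ModelLoadManager.INSTANCE.loadFiles(files, Hummingbird20MMDescriptor.INSTANCE,
            true /* async: run inside a workspace Job */, new NullProgressMonitor());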

Sphinx is blacklisting your proxies

EMF proxy resolving is one area where Sphinx adds functionality. Sphinx was designed to support, amongst others, AUTOSAR models, and in AUTOSAR models references have some special traits:

  • References are based on fully qualified names
  • Any number of .arxml files can be combined to form a model
  • Models can be merged based on their names. So any number of resources can contain a package with a fully qualified name of “/AUTOSAR/p1/p2/p3/…pn”

Blacklisting

Obviously, in scenarios like this it would be highly inefficient to re-attempt resolution of an unresolvable proxy every time it is encountered in code. So Sphinx can "blacklist" such proxies.

The information about the proxy blacklist is used in org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl. At the end of the getEObject() methods we see that the proxy is added to the blacklist if it could not be resolved:

And of course, at the beginning of the method, a null is returned immediately if the proxy has been blacklisted:
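
Conceptually, the pattern is the following sketch (a paraphrase; in the real implementation the blacklist lives in the ModelIndex/MapModelIndex classes described below, not in a plain set):

    import java.util.HashSet;
    import java.util.Set;
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;

    public class BlacklistingResourceSet extends ResourceSetImpl {

        private final Set<URI> proxyBlacklist = new HashSet<URI>(); // simplified stand-in

        @Override
        public EObject getEObject(URI uri, boolean loadOnDemand) {
            // at the beginning of the method: blacklisted proxies are not retried
            if (proxyBlacklist.contains(uri)) {
                return null;
            }
            EObject resolved = super.getEObject(uri, loadOnDemand);
            // at the end of the method: remember proxies that could not be resolved
            if (resolved == null) {
                proxyBlacklist.add(uri);
            }
            return resolved;
        }
    }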

Getting de-listed

Now Sphinx has to find out when it makes sense to no longer blacklist a proxy but try to resolve it again. So it registers an org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndexUpdater via the extension point described in the previous post.

So the ModelIndexUpdater will react to changes:

The class that actually implements the blacklist is org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndex, which mostly delegates to org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.MapModelIndex

When a resource has changed or is being loaded, the MapModelIndex checks whether the blacklisted proxies can now be resolved against that resource:

When the resource is actually removed from the model, all proxies from that resource are removed from the index:

So Sphinx avoids repeatedly re-resolving proxies that are known to be unresolvable.

Sphinx is listening – editingDomainFactoryListeners

One of the mechanisms that Sphinx uses to get notified about changes in your Sphinx-based models is registering listeners when the editing domain for your model is being created. This mechanism can also be used for your own listeners.

Sphinx defines the extension point org.eclipse.sphinx.emf.editingDomainFactoryListeners and in org.eclipse.sphinx.emf it registers a number of its own listeners:

Listeners registered by Sphinx


The registry entries for that extension point are processed in EditingDomainFactoryListenerRegistry.readContributedEditingDomainFactoryListeners(). It creates an EditingDomainFactoryListenerRegistry.ListenerDescriptor which stores the class name and registers it internally for the meta-models that are specified in the extension. The ListenerDescriptor also contains code to actually load the specified class and instantiate it (base type is ITransactionalEditingDomainFactoryListener).

The method EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) can be used to get all the ITransactionalEditingDomainFactoryListener that are registered for a given IMetaModelDescriptor.

These are in turn invoked from ExtendedWorkspaceEditingDomainFactory.firePostCreateEditingDomain(Collection, TransactionalEditingDomain). ExtendedWorkspaceEditingDomainFactory is responsible for creating the editing domain, so through this feature we have a nice mechanism to register custom listeners each time an editing domain is created by Sphinx (e.g. when models are being loaded).
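
A minimal custom listener could look like this sketch (assuming the two callbacks of ITransactionalEditingDomainFactoryListener); it would then be contributed via the org.eclipse.sphinx.emf.editingDomainFactoryListeners extension point for the metamodel descriptors of interest:

    public class MyEditingDomainListener implements ITransactionalEditingDomainFactoryListener {

        @Override
        public void postCreateEditingDomain(TransactionalEditingDomain editingDomain) {
            // called right after Sphinx has created the editing domain,
            // e.g. a good place to attach a ResourceSetListener
        }

        @Override
        public void preDisposeEditingDomain(TransactionalEditingDomain editingDomain) {
            // undo whatever was set up in postCreateEditingDomain(...)
        }
    }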

Sphinx is listening to Resources – Synchronizing Model Descriptors

The Eclipse Sphinx project provides a number of useful features for (EMF-based) model management. One of these features is reloading the model when the underlying resources change. Sphinx uses so-called ModelDescriptors to define which files belong to a model, and on a change these have to be updated as well.

One of the relevant code pieces is org.eclipse.sphinx.emf.Activator:

 
The ModelDescriptorSynchronizer registers itself with the Eclipse workspace by standard means to listen for resource changes and adds a BasicModelDescriptorSynchronizerDelegate:
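
The "standard means" is the plain Eclipse resource change listener mechanism, roughly like this (generic workspace API, not the Sphinx-internal code):

    IResourceChangeListener listener = new IResourceChangeListener() {
        @Override
        public void resourceChanged(IResourceChangeEvent event) {
            // translate the resource delta into model descriptor sync requests (see below)
        }
    };
    ResourcesPlugin.getWorkspace().addResourceChangeListener(listener,
            IResourceChangeEvent.POST_CHANGE | IResourceChangeEvent.PRE_DELETE);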

 

The major task of that class is to map the IResourceChangeEvents from the Eclipse framework to Sphinx’ IModelDescriptorSyncRequests, which specify what to update in the model descriptors:

But it also does some handling of the model registry caches:

org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer itself extends org.eclipse.sphinx.platform.resources.syncing.AbstractResourceSynchronizer, which in turn implements Eclipse’s IResourceChangeListener and is a base class for handling resource changes. It basically executes the syncRequests:

In our case, the sync requests are of type ModelDescriptorSyncRequest, because that is how createSyncRequest is overridden in org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer.

Now the Model Descriptors are actually processed in this class:

And we are finally at the org.eclipse.sphinx.emf.model.ModelDescriptorRegistry, where the information about all the loaded models is actually kept (to be detailed in a further blog post).

Supporting Model-To-Model Transformation in Java with AspectJ

There are a number of model-to-model transformation frameworks provided under the Eclipse umbrella (e.g. QVTO). Some of these provide a lot of support. However, in some scenarios you need to either adapt them or implement your own M2M transformation in Java. For us, some of those cases are:

  • The framework does not support model loading/saving with Sphinx
  • The framework does not support EMF unsettable attributes (eIsSet)
  • Framework performance.

However, one of the most annoying problems in writing transformations is caching already created elements. Consider the following abstract example:

  • Our source model has an element of type Package ‘p’
  • In the package, there are sub elements ‘a’ and ‘b’
  • ‘a’ is a typed element; its type is ‘b’

Usually, when mapping the contents of p, we would iterate over its contents and transform each element. However, when transforming ‘a’, the transformed element ‘A’ should have its type set to the transformed ‘b’ (‘B’). But we have not transformed ‘b’ yet. If we transform it at this point in time, it will be transformed again when iterating further through the package, resulting in two different ‘B’ elements in the target model.

Obviously we should keep a Cache of mapping results and not create a new target instance if a source element has been transformed before. However, managing those caches will clutter our transformation rules.

But it is quite easy to factor this out by using AspectJ. First, we define an annotation for methods whose results should be cached:
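
A minimal sketch of such an annotation (a plain Java marker annotation with RUNTIME retention so the aspect can see it):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Marks transformation methods whose results should be cached by the aspect
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Cached {
    }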

 

Then we define an aspect that intercepts all calls to methods annotated with @Cached:
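
A simplified sketch of such an aspect in AspectJ’s annotation style (a reconstruction of the idea, not the original listing; the AutoAdd handling discussed further below is only indicated by a comment):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class TransformationCacheAspect {

        private final Map<List<Object>, Object> cache = new HashMap<List<Object>, Object>();

        @Around("call(@Cached * *(..))")
        public Object cacheResult(ProceedingJoinPoint joinPoint) throws Throwable {
            // build a cache key out of the current object, the called method and its arguments
            List<Object> key = Arrays.asList(joinPoint.getTarget(),
                    joinPoint.getSignature().toLongString(), Arrays.asList(joinPoint.getArgs()));
            if (cache.containsKey(key)) {
                // this combination has been transformed before: return the cached result
                return cache.get(key);
            }
            // otherwise proceed to the original method and cache its result
            Object result = joinPoint.proceed();
            cache.put(key, result);
            // here the aspect could also check for the AutoAdd interface and let the
            // result add itself to the containment hierarchy (see below)
            return result;
        }
    }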

  • We build a cache key out of the current object, the called method, and its arguments.
  • If this combination is already cached, we return the cached entry.
  • Otherwise we proceed to the original method and cache its result.

The transformation code then looks like this:

(OK, so this is Xtend2 rather than Java, because the polymorphic dispatch comes in handy.)

Note that we are transforming EAST-ADL models here. The package contains Units and Quantities, and Units refer to Quantities. The Quantity transformation is called twice: from the package transformation and from the Unit transformation. However, thanks to the caching, we still get the correct structure.

In addition, we also solve a little annoying problem: elements have to be put into the containment hierarchy as well. Doing that in the transformation methods would require additional code and passing the containers around.

In the aspect, we also check whether the target object implements the AutoAdd interface and call its method in that case. So the transformation can take care of the addition to the containment hierarchy at a defined place.

The code above is an implementation prototype to show initial feasibility.

 

 

Travel Tips for EclipseCon France

EclipseCon France is one of my favourite conferences. If you are going, here are some tips:

  • I prefer to book a hotel in the city centre, e.g. near Jean Jaures (https://goo.gl/maps/RKkQJ). A lot of the restaurants are within walking distance, and there are a lot of locations that you can use to meet with other participants.
  • There is a bus shuttle between the airport and the city centre (Jean Jaures) that also stops right in front of the EclipseCon venue. And it is reasonably priced.
  • Toulouse has a dense network of public bicycle rental stations. So instead of using the metro, you might choose to pedal your way to EclipseCon – it is within reasonable distance, and there are bike stations in front of the venue and in the near vicinity.
  • Although you will find Wi-Fi at the venue and at hotels, I prefer a prepaid internet SIM from SFR or Orange. If you have a mobile WLAN hotspot or can use SIM cards in your notebook, that is my preferred option. Orange and SFR both have shops right in the city centre. You can get a prepaid SIM easily – but bring a passport / ID card for registration.

Enjoy an interesting conference in a beautiful city in early summer!

 

Open Collaborations for Automotive Software – and Eclipse

On June 4th itemis is hosting a conference on Open Collaborations for Automotive Software (in German) in Esslingen near Stuttgart. There are a lot of interesting talks and – while not immediately obvious – all of them do have some relation to Eclipse.

  • Open Source OSEK: Erika Enterprise is the first certified open source OSEK. And it uses Eclipse-based tooling for the configuration / development of OSEK-based software
  • COMASSO is a community based AUTOSAR BSW implementation. It comes with BSWDT, an Eclipse-based AUTOSAR configuration tool that leverages technologies like EMF and Xpand
  • SAFE RTP Open Source Platform for Safety-Modeling and -Analysis: This tooling is completely based on Eclipse and features the EAST-ADL implementation EATOP, which is published within the Eclipse Automotive Industry Working Group
  • openMDM: Open Measured Data Management: Again, community-based tooling for the management of measured data, based on Eclipse.
  • Strategic Relevance of Open Source at ETAS GmbH: ETAS GmbH builds a lot of automotive tools on Eclipse and they will talk about the business relevance of open source / Eclipse.
  • Advanced Technology for SW Sharing – Steps towards an open source platform for automotive SW Build Systems: A new build system framework for automotive ECU software – based on Eclipse

Eclipse is a key technology in automotive software engineering and the conference will introduce many interesting aspects.

AUTOSAR reports with Artop and BIRT

Even in times of model-based development, artifacts like PDF and Word documents are still important for documentation. With Eclipse-based technologies, it is easy to produce documents in various formats from your AUTOSAR XML.

One approach would be to use Eclipse M2T (model-to-text) technologies to generate your own reports. Technologies for that would be Xtend2 or Xpand. An article can be found here.

However, with the Eclipse BIRT project, there is also a more WYSIWYG approach. Taking a very simple AUTOSAR model as an input, we can easily create reports in most formats.


After creating a BIRT project, we have to configure a so-called "Data Source" that simply takes our ARXML as input:

We then configure one or more "Data Sets" that actually pull the data from the data source:

From there it is mostly drag and drop, and maybe a little scripting, to create the report in the WYSIWYG editor – it also supports a lot of styling and templating, including images, custom headers, etc.:

As you can see, we create a table with a row for each SWC type, showing the short name, and a sub-table for all the ports of that SWC type. The BIRT Report Viewer shows the report in HTML.


By clicking on “Export Report” we can export to a number of formats:


PDF:


Excel:

BIRT and Artop provide powerful reporting functionality.

 

 

Support for vendor specific parameter / module definitions in COMASSO basic software configuration tool

During the 6th AUTOSAR Open Conference, one of the presenters pointed out that the integration of vendor-specific / custom parameter definitions into the tool chain can be problematic because of insufficient tool support. Some tools seem to have a hard-coded user interface for the configuration, or proprietary formats for customization.

The COMASSO basic software tooling chose a different approach: the user interface and the code generator framework are dynamically derived from the .arxml files in the workspace, and all configuration data is stored directly in .arxml (so no import/export of models is required, just drag and drop).

NVRAM Example

In one minimal example that we use for training purposes, we have a parameter definition for an NvRam module (stripped down from the official AUTOSAR definition). In the "file system" view, the param def can be seen in the workspace.

This results in the following "Model view":

Introducing a vendor specific module

Now, assume the contrived scenario that we would like to provide a BSW/MCAL module for a Graphics Processing Unit (GPU). For the configuration tool to work, all we need is a parameter definition for it.

Although COMASSO does not require a specific project structure, for convenience we copy the NvM file structure and create a new param def called GpU_ParamDef.arxml. In XML, we define a new module:

After saving the new param def .arxml, the tool scans the workspace and rebuilds the user interface.

Without a single line of additional code, the tool knows about the vendor-specific parameter definition. The icon in front of the GPU is grey because we have not configured any data yet. Opening the editor shows that it knows about the VSMD now as well:

Any configuration here will be directly stored as values in a .arxml file.

 Code generation

But not only does the user interface know about the vendor-specific modules immediately – so do the generation and validation framework. As you can see below, we can now directly access the GPU module configuration from the AUTOSAR root and then loop through the new GPU descriptors.

And the content assist system works with this information as well. No need to remember what we defined: invoking content assist on "d" (which is a GPUBlockDescriptor) yields suggestions for all of its attributes.

Open Language

The code generation and validation framework is Eclipse Xtend/Xpand. It is publicly available and not a proprietary solution.

Parameter Description

In addition, when clicking on an element in the configuration editor, there is a documentation view that shows the important information from the .arxml in a nicer form. Any documentation stored in the .arxml is thus easily accessible.
