Managing AUTOSAR Complexity with (Abstract) Models

A number of developers in the automotive domain complain that the introduction of AUTOSAR and its tooling increases the development effort because of the complexity of the AUTOSAR standard. There are a number of approaches to address this problem – one of the most widely used is the introduction of additional models.

The complexity of AUTOSAR

One often-heard comment about AUTOSAR is that the meta-model and the basic software configuration are rather complex and that it takes quite some time to actually get acquainted with them. One of the reasons is that AUTOSAR aspires to be an interchange standard for all the software engineering related artefacts in automotive software design and implementation. As such, it has to address all potentially needed data. For the single project or developer, however, that also means being confronted with all the aspects that they never had to consider before. Overall, one might argue that AUTOSAR does not increase the complexity, but exposes the inherent complexity of the industry for all to see. That, however, implies that a user might need more manageable views on AUTOSAR.

Example: Datatypes

The broad applicability of AUTOSAR leads to a meta-model that takes some analysis to understand. For application data types, the following meta-model extract shows the relevant elements for “specifying lower and upper ranges that constrain the applicable value interval.”

Extract from AUTOSAR SW Component Template

This is quite a flexible meta-model. However, all the compositions make it complex from the perspective of an ECU developer. The meta-model for enumerations is even more complex.


Extract from AUTOSAR SW Component Template: Enumerations

If you come from C, where an enumeration is declared with a single lean enum statement, it seems quite obvious that the AUTOSAR model might be a bit scary. So a lot of projects and companies are looking for custom ways to deal with AUTOSAR.

Custom Tooling or COTS

In the definition of a software engineering tool chain, there will always be trade-offs between implementing custom tooling and deploying commercial off-the-shelf (COTS) tools. Especially for innovative projects, the teams will use innovative modeling and design methods and as such will need tooling specifically designed for that. The design and implementation of custom tooling is a core competency of tooling / methodology departments.

A more abstract view on AUTOSAR

The general approach will be to provide a view on AUTOSAR (or a custom model) that is suited to the special needs of a given user group and then generate AUTOSAR from that. Some technical ideas would be:

“Customize” the AUTOSAR model

In this approach, the AUTOSAR model’s features are used to annotate / add custom data that is more manageable. This case can be subdivided into the BSW configuration and the other parts of the model.

BSW: Vendor specific parameter definitions

The ECUC configuration supports the definition of custom parameter definitions that can be used to abstract the complexity. Take the following example:

  • The standard DemEventParameter has various attributes (EventAvailable, CounterThreshold, PrestorageSupported).
  • Let’s assume that in our setting only 3 different combinations of these are valid. So it would be nice if we could just say “combination 1” instead of setting them manually.
  • We could therefore define a VSMD that is based on the official DEM and contains a QXQYEventParameter that has only one attribute, “combination”.
  • From that we could then generate the DemEventParameter’s values based on the combination attribute.

This is actually done in real-life projects (a small sketch of the derivation logic follows the list below). Technologies to use could be:

  • COMASSO: Create an Xpand script and use the “update model” step
  • Artop: Use the new workflow and ECUC accessors mechanism
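To make this concrete, here is a small Java sketch of the derivation logic. The enum literals and the concrete parameter values are invented for illustration; in a real project this mapping would be implemented in the Xpand script or in the ECUC-accessor-based workflow mentioned above:

    // Illustrative only: the combinations and their parameter values are made up.
    public class DemEventParameterDerivation {

        /** The single "combination" attribute exposed by the vendor-specific QXQYEventParameter. */
        public enum Combination { COMBINATION_1, COMBINATION_2, COMBINATION_3 }

        /** Plain value holder standing in for the derived DemEventParameter configuration. */
        public static class DemEventParameterValues {
            public boolean eventAvailable;
            public int counterThreshold;
            public boolean prestorageSupported;
        }

        /** Derives the standard DEM attributes from the chosen combination. */
        public static DemEventParameterValues derive(Combination combination) {
            DemEventParameterValues values = new DemEventParameterValues();
            switch (combination) {
                case COMBINATION_1:
                    values.eventAvailable = true;
                    values.counterThreshold = 1;
                    values.prestorageSupported = false;
                    break;
                case COMBINATION_2:
                    values.eventAvailable = true;
                    values.counterThreshold = 3;
                    values.prestorageSupported = true;
                    break;
                default:
                    values.eventAvailable = false;
                    values.counterThreshold = 0;
                    values.prestorageSupported = false;
            }
            return values;
        }
    }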

System / SW Templates

The generic structure template of AUTOSAR allows us to add “special data” to model elements. According to the standard, “Special data groups (Sdgs) provide a standardized mechanism to store arbitrary data for which no other element exists of the data model.” (from the Generic Structure specification). Similar to the BSW approach, we could add simple annotations to model elements and then derive more complex configuration from that (e.g. port communication attributes).

Adapt standard modeling languages

Another approach is to adapt standard (flexible) modeling languages like UML. Tools like Enterprise Architect provide powerful modeling features and the UML diagrams with their classes, ports and connections have striking similarities to the AUTOSAR component models (well, component models all have similarities, obviously). So a frequently seen approach is to customize the models through stereotypes / tagged values and maybe customize the tool.

The benefit of this approach is that it provides more flexibility in designing a custom meta-model by not being restricted by the AUTOSAR extension features. This approach is also useful for projects that already had a component-based modeling methodology prior to AUTOSAR and want to be able to continue using these models.

A transformation to AUTOSAR is easily written with Artop and Xtend.

Defining custom (i.e. domain-specific) models

With modern frameworks like Xtext or Sirius, it has become very easy to implement your own meta-models and tooling around them. Xtext is well suited for all use cases where a textual representation of the model is required. Amongst others, that addresses a technical audience that is used to programming and wants fast and comfortable editing of the models with the keyboard.

German readers will find a good industry success story in the 2014 special edition of “heise Developer Embedded” explaining the use of Xtext at ZF Friedrichshafen.

The future?

AUTOSAR is a huge standard that covers a broad spectrum. Specific user groups will need specific views on the standard to be able to work efficiently. Not all of those use cases will be covered by commercial tools and innovation will drive new approaches.

Custom models with transformations to and from AUTOSAR are one solution, and the infrastructure required for that is provided by the community project Artop and the ecosystem around the Eclipse Modeling Framework (EMF).

Maven, Tycho, Surefire, AspectJ and Equinox Weaving

This is a very short technical post. But since it took us some time to find the solution (there is little information on the Web), we wanted to add another possible search hit for others.

JUnit tests from within Eclipse work well with AspectJ runtime weaving in Equinox. Finding information on how to activate the same in a Maven Tycho Surefire build was more difficult.

You have to configure the tycho-surefire-plugin to use the Equinox weaving hook and to start the AspectJ weaving bundle.

In addition, you can pass debug information for the weaving via the command line configuration. We have defined a property for this and use it later on in the build configuration.

Visualizing Build Action Manifests with COMASSO BSWDT

The AUTOSAR Build Action Manifest (BAMF) specifies a standardized exchange format / meta-model for the build steps required in building AUTOSAR software. In contrast to other build dependency tools such as make, the BAMFs support the modeling of dependencies on a model level. The dependencies can reference Ecuc Parameter definitions to indicate which parts of the BSW are going to be updated or consumed by a given build step. However, even for the standard basic software these dependencies can be very complex.

The COMASSO BSWDT tool provides a feature for visualizing these dependencies. In the example that we use for training the Xpand framework with the BSW code generators, we see that we have three build steps for updating the model, validation and generation.

The order of the build steps is inferred by the build framework from the modeled dependencies. But in the build control view you cannot really see the network of dependencies, only the final flat list.

With File -> Export… we can export the BAMF dependencies as a GML graph file.

The generated file can then be opened in any tool that can deal with the GML format. We use the free edition of yEd and see the dependencies:


To optimize the visualization, you should choose the hierarchical layout.

The solid lines indicate model dependencies, dotted lines explicit dependencies. If a real-life project yields a graph that is too complex, you can easily delete nodes from the list on the bottom left.

Lightweight Product Line AUTOSAR BSW configuration with physical files and splitables

During AUTOSAR development, the configuration of the basic software is a major task.  Usually, some of the parameters are going to be the same for all your projects (company-wide, product line) and some of them will be specific to a given project / ECU. And you might have a combination of these within the same BSW configuration container.

One of the approaches uses the AUTOSAR Split(t)able concept, which allows you to split the contents of a model element over more than one physical ARXML-file.

So let us assume that we are going to configure the DemGeneral container, that we have a company-wide policy for all projects to set DemAvailabilitySupport to 1, and that the DemBswErrorBufferSize within the same container should be set specifically for each ECU.

So let us create an .arxml for the company-wide settings. That might be distributed through some general channel, e.g. when setting up a new project.

Now we can place an ECU-specific .arxml next to it in the same project.

Now any tool that supports splitables (e.g. the COMASSO BSWDT tooling) will be able to create a merged model. The following screenshots are based on our own prototype of an AspectJ-based splitable support for Artop. By simply using the context menu


we will see the merged view:

You can see a few things in that screenshot:

  • First, and most important, the two physically separated DemGenerals are now actually one. The tooling (code generators etc.) will now only see one model, one CONF package and one DemGeneral.

Our demonstrator shows additional information as decorators after the element (decorators can be deactivated through preferences in Eclipse).

  • The “[x slices]” decorator shows how many elements are actually used to make up the joined element. We have 2 physical DemGeneral, so it says “2 slices”. We have one DemAvailabilitySupport, so it shows “1 slices”
  • The decorator at the end shows the physical files that are actually involved for the merging of a given element.

Now if for some reason, a new version of the company-wide .arxml is provided, we only need to exchange / update that file – there is no need for a complex diff and merge.

There are some use cases that cannot be covered with this approach, but it opens a lot of possibilities for managing variations in BSW configuration on a file level.

Writing Fast tests for uniqueness in AUTOSAR (and other models) with Java 8

One of the repeating tasks when writing model checks for AUTOSAR (but also for other models) is to check for uniqueness according to some criterion. In AUTOSAR, this could be that all elements in a collection must be unique with respect to their shortName, or e.g., that all ComMChannels must be unique with respect to their channel ids.

Such a test is very easily written, but many of the ad-hoc implementations that we see perform badly when the model grows – which is easily missed when testing with small models.

Assume that we want to test ComMChannels for uniqueness based on their channel ids. A straightforward implementation with Xpand Check (in the COMASSO framework) checks, for each ComMChannel, whether any of the other ComMChannels in the same ComM has the same ID.

Since every channel is compared against every other channel, the effort grows quadratically with the number of channels – and this costs a lot of performance. The trick is to loop only once and use a set or map to store the data seen so far. This can be done in Xpand and other languages. Java 8’s lambdas make it especially nice, since we can have a single method that collects the duplicates:
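A sketch of such a method; the class and method names here are only illustrative. It groups the elements by a key in a single pass and keeps only the keys that occur more than once:

    import java.util.Collection;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class Duplicates {

        /** Groups the elements by the given key in one pass and returns only the keys with more than one element. */
        public static <T, K> Map<K, List<T>> findDuplicates(Collection<T> elements, Function<T, K> keyExtractor) {
            return elements.stream()
                    .collect(Collectors.groupingBy(keyExtractor))
                    .entrySet().stream()
                    .filter(entry -> entry.getValue().size() > 1)
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }
    }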

This code uses both generic type parameters and functions passed as arguments to provide a very generic interface. We can use it anywhere to search for duplicates in a collection.
Now looking for duplicates in ComMChannel ids would look like this:
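Assuming the helper above and hypothetical accessors (comM.getComMChannels() and ComMChannel::getComMChannelId, whose real names depend on the generated model code):

    Map<Integer, List<ComMChannel>> duplicateIds =
            Duplicates.findDuplicates(comM.getComMChannels(), ComMChannel::getComMChannelId);
    duplicateIds.forEach((id, channels) ->
            System.err.println("ComMChannel id " + id + " is used " + channels.size() + " times"));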

And thanks to the generic signature and Java 8, we can use the same grouping to look for duplicate shortNames:
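Here Identifiable and getShortName() refer to the AUTOSAR base type of named elements, and arPackage.getElements() stands in for whatever collection is being checked:

    Map<String, List<Identifiable>> duplicateNames =
            Duplicates.findDuplicates(arPackage.getElements(), Identifiable::getShortName);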

Similar code could also be written in the Xtend2 language.

With the new Sphinx Check framework (introduced in a separate post below), the same kind of check can also be implemented in an Artop-based tool.

Towards a generic splitable framework implementation – Post #3

In the previous blog posts I have laid out the basic thoughts about a framework for splitable support. As indicated, we are using AspectJ to intercept the method calls to the EMF model’s classes and change their behavior to present a merged model to the clients. If the AspectJ plugins are used in the client’s executable, it will see merged models.

So for a model org.mymodel, we intercept all calls to the getters by declaring a corresponding join point (a pointcut on the generated getter methods),

and in the advice that is executed for these getters we want to do the following:

  • Check if the return type (could be the direct return type or the parameter of a parameterized type) is splitable. If so, we need to replace the returned object with the primary slice.
  • If the object that the method is invoked on is splitable, we need to find all slices and return the joined results.

The preliminary implementation is for illustration only and not yet finished for all cases, since it is currently used for one specific meta-model only.
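In outline, such an aspect can look like the following sketch (annotation-style AspectJ). The package name org.example.splitable and the private helper methods are placeholders standing in for the framework components described in the next posts, not the actual implementation:

    package org.example.splitable;

    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Pointcut;
    import org.aspectj.lang.reflect.MethodSignature;
    import org.eclipse.emf.ecore.EObject;

    @Aspect
    public class SplitableGetterAspect {

        // every public getter of the generated model classes
        @Pointcut("execution(public * org.mymodel..*.get*(..)) && target(target)")
        public void modelGetter(EObject target) {}

        // ... but not when the getter is reached from advice or from the splitable framework itself,
        // otherwise the advice below would recurse while joining the slices
        @Pointcut("!cflowbelow(adviceexecution()) && !cflow(within(org.example.splitable..*))")
        public void notFromSplitableFramework() {}

        @Around("modelGetter(target) && notFromSplitableFramework()")
        public Object presentJoinedView(ProceedingJoinPoint joinPoint, EObject target) throws Throwable {
            Object result = joinPoint.proceed();

            if (isSplitable(target)) {
                // the object the getter was invoked on is splitable:
                // invoke the same getter on every slice and return the joined result
                // (a real implementation would return a suitable EList implementation here)
                List<Object> joined = new ArrayList<>();
                for (EObject slice : findSlices(target)) {
                    joined.addAll(invokeSameGetter(joinPoint, slice));
                }
                return joined;
            }
            if (result instanceof EObject && isSplitable((EObject) result)) {
                // the returned object is splitable: hand out the primary slice instead
                return findPrimarySlice((EObject) result);
            }
            return result;
        }

        // ---- placeholders for the framework components (see the next posts) ----

        private static boolean isSplitable(EObject object) {
            return false; // placeholder: a real implementation consults the splitting configuration
        }

        private static List<EObject> findSlices(EObject object) {
            return Collections.singletonList(object); // placeholder: a real implementation uses the slice finder
        }

        private static EObject findPrimarySlice(EObject object) {
            return object; // placeholder: a real implementation uses the primary slice finder
        }

        private static List<Object> invokeSameGetter(ProceedingJoinPoint joinPoint, EObject slice) throws Exception {
            // re-invoke the intercepted getter on the given slice via plain reflection;
            // a real implementation would delegate to the feature calculator
            Method getter = ((MethodSignature) joinPoint.getSignature()).getMethod();
            Object value = getter.invoke(slice);
            if (value instanceof List) {
                return new ArrayList<Object>((List<?>) value);
            }
            return Collections.singletonList(value);
        }
    }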

If we look at the implementation of the FeatureCalculator, we see that internally it calls the same getter on all the slices.

But why does this not cause an infinite loop, given that the get methods are intercepted? Because through the definition of the join points we make sure that our aspect is not executed when called from an aspect or from the splitable framework’s classes. That means that our implementation of the framework can safely operate on the source models.

We are almost set. But when we start a client with that AspectJ configuration, we will see that EMF calls are already intercepted while a model is being loaded. Obviously, the models should first be loaded as source models; only later accesses should be intercepted to present the joined model.

We can configure that through additional Join Points:

The following additional definitions are being used to disable the Aspect if the current call stack (control flow) is from certain EMF / Sphinx infrastructure classes:
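A sketch of such an exclusion, meant to be added to the aspect sketched above; the infrastructure classes listed here are only examples, the real definitions enumerate the relevant EMF and Sphinx classes:

    // do not present the joined view while the resources are still being loaded
    @Pointcut("cflow(within(org.eclipse.emf.ecore.resource.impl.ResourceImpl+))"
            + " || cflow(within(org.eclipse.emf.ecore.util.EcoreUtil))")
    public void calledFromLoadingInfrastructure() {}

    // combined with the other pointcuts in the advice:
    // @Around("modelGetter(target) && notFromSplitableFramework() && !calledFromLoadingInfrastructure()")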

 Result

With this configuration, our models load fine and the Sphinx model explorer shows the joined elements wherever a slice is in the model. Right now our implementation supports the joining of models and packages (i.e. it supports containment references) but can be extended to do more.

Upcoming

The following features will be added to the framework:

  • Intercepting more of the necessary methods (e.g. eContainer)
  • Support modification of models (i.e., the intercepted set methods will modify the source models). The component “Arbiter” will be introduced to decide which source models to modify.
  • (De-)Activation of the joined view based on transactions: a settable flag will indicate whether the client operates on the joined view or on the source view.
  • Integration with IncQuery

Sphinx

We hope to release the framework as part of the Sphinx project.

Towards a generic splitable framework implementation – Post #2

This blog post is the 2nd in a series of posts detailing some concepts for a generic framework for the support of splitables. For this concept we will introduce the term “slice”: a “slice” is one of the partial model elements that together form the merged splitable.

Effects of being splittable

If a model element (EObject) is splitable / has a splitable feature, this has two kinds of impacts:

  1. Impact on the “behavior” of the class itself.
  2. Impact on other classes that have features of a splitable type

IMPACT ON THE CLASS ITSELF

  • any feature call on such an object needs to join the features of all of this object’s slices. This includes primitive as well as more complex features.

IMPACT ON OTHER CLASSES

  • By Feature: any feature call on another class’ feature that could contain a splittable object needs to filter the result of the feature to return a result where the slices are merged.
    This affects the entire model – all classes could be affected, since all could have a feature of a splitable type.
    This also affects any features whose type is a super-class of a splittable type.
  • By containment/aggregation: If a class is splittable, then all possible classes in all containment trees that lead to a splittable class must also be splittable. Otherwise it would not really be possible to create a splittable tree, since the splittable element could only be in one file when the containment hierarchy is not splittable.
  • By inheritance: That also implies that if a class gains “splittable” through this containment rule, all of its subclasses must be splittable too, since it could be that they have the specific instance of the containment feature.

Components of a splittable tooling

  • ISplitKeyProvider: For a given object, identifies a “key” that is used to identify the slices that constitute a splitable. E.g., this could be the fully qualified name.
  • ISliceFinder: Given an object, finds all objects that constitute that splittable.
  • ISplittingConfiguration: Given an object or structural feature, decides whether it is splittable.
  • IPrimarySliceFinder: Given an object, finds the object that represents the joined splitable.
  • IFeatureCalculator: Creates a joined view of the splittables.
  • Arbiter: Decides where to put new elements / how to move elements.

ISplitKeyProvider

For an object, returns the key that is used to identify the slices that together make up a merged element. In a simple implementation, this could e.g. be the fully qualified name. For some AUTOSAR elements, this is the FQN plus additional variant information.

Slice Finder

In a simple implementation, all the slices can be identified through their key, and the slice finder finds all objects with the same key. A more advanced implementation could cache the slices by key.

Primary Slice Finder

Returns an object that represents the merged object for a given key. That could be a new object, in case the model is copied, or, if we inject behavior with AspectJ, one of the slices. Important: for a given split key, this must always return the same object.

IFeatureCalculator

Must be able to create a joined view of all the slices. This can happen e.g. on copy, when creating a shadow model, or it can be dynamic, creating a common view e.g. with AspectJ.
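Put together as code, the components could be sketched as a set of interfaces like the following; the method names and signatures are assumptions derived from the responsibilities described above, not the final framework API:

    import java.util.Collection;
    import java.util.List;

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;

    interface ISplitKeyProvider {
        /** Key that identifies all slices belonging to one logical element, e.g. the fully qualified name. */
        Object getSplitKey(EObject object);
    }

    interface ISliceFinder {
        /** All slices that together constitute the splittable the given object belongs to. */
        Collection<EObject> findSlices(EObject object);
    }

    interface ISplittingConfiguration {
        boolean isSplittable(EObject object);
        boolean isSplittable(EStructuralFeature feature);
    }

    interface IPrimarySliceFinder {
        /** Must always return the same object for a given split key. */
        EObject findPrimarySlice(EObject object);
    }

    interface IFeatureCalculator {
        /** Joined view of a feature over all slices of the given object. */
        List<Object> join(EObject object, EStructuralFeature feature);
    }

    interface Arbiter {
        /** Decides into which source model (slice) a newly created or moved element is put. */
        EObject chooseTargetSlice(EObject element);
    }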

Joining Structural Features

  • For reference / containment features that could contain a splittable, only return the primary slice within the feature.
    • 0..1: Intercept the call and replace the result with the primary slice
    • 0..n: Return a new list that replaces the contents with the primary slices
  • Primitives / all features with 0..1 multiplicity (including references and containment):
    • Return the value when all slices have the same value (or are unset)
    • Throw an exception otherwise (merge conflict) – a small sketch of this rule follows below
  • Needs a list implementation that delegates to the contents of the original lists. It should always return the same list -> the feature calculator is responsible for this. It could maintain its own cache or add new values by means of AspectJ.
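As a small Java sketch of the rule for single-valued features (class and method names are only illustrative): all slices must agree or leave the value unset, otherwise the merge is reported as a conflict.

    import java.util.Collection;
    import java.util.Objects;

    public final class SingleValueJoiner {

        /** Joins a 0..1 feature: returns the common value, null if unset everywhere, or fails on a conflict. */
        public static <T> T joinSingleValued(Collection<T> valuesFromSlices) {
            T joined = null;
            for (T value : valuesFromSlices) {
                if (value == null) {
                    continue;                 // unset in this slice, nothing to merge
                }
                if (joined == null) {
                    joined = value;           // first slice that defines the value
                } else if (!Objects.equals(joined, value)) {
                    throw new IllegalStateException("Merge conflict: " + joined + " vs. " + value);
                }
            }
            return joined;
        }

        private SingleValueJoiner() {}
    }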


Towards a generic splitable framework implementation – Post #1

Splitables are a concept introduced by AUTOSAR. In a nutshell, splitables allow model elements to be split over a number of AUTOSAR files. E.g., two AUTOSAR files could contain the same package, and the full model would then consist of the merged contents of these two files. Some information can be found in the previous blog posts.

Since the introduction of splitables, we have been thinking about different ways of implementing them. Now that the concept has also found its way into the EAST-ADL standard as well as into customer models, it is worth shedding some light on this issue from different angles and introducing the current activities towards a framework for splitable models. Since Artop and EATOP (EAST-ADL) are EMF-based, the considerations relate to EMF.

Basic Requirements

For the discussion, we assume that we have three basic meta-model classes:
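For illustration, something roughly like the following plain Java stand-ins for the Ecore classes (the original example may define them slightly differently):

    import java.util.ArrayList;
    import java.util.List;

    class Model {
        final List<Package> packages = new ArrayList<>();
    }

    class Package {
        String name;
        final List<Element> elements = new ArrayList<>();
        List<Element> getElements() { return elements; }
    }

    class Element {
        String name;
    }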

If we have a split model, then we might have two Packages with the same name in different Resources from the same ResourceSet. For such a package p, p.getElements() should return the joined list of the elements of all those packages in the ResourceSet.

There could be a number of approaches to actually do that.

A dedicated API

We could introduce helper functions in addition to the actual setters/getters etc., either directly in the model or as an additional API. That would duplicate a lot of the model API and would also be a source of errors, because clients must make sure that they use the correct API calls.

Design Decision
The framework for splitable support should not introduce a dedicated API for accessing the split objects. Clients should be able to work with the standard API.

Putting the logic into the metamodel (ecore, getters / setters)

Since EMF generates the Java code for the model out of the .ecore, we could actually put the additional logic required to “join” the models into the operations of the ecore. This could be done manually, which would imply a lot of work. Another approach would be to modify the code generators that generate the Java code to include the required functionality.

But that would actually mean that we would have to coordinate with all the projects that we would like to support splitables for, and that we would have to reproduce their build environments etc. in the meantime.

Requirement: Non-Intrusiveness
The framework for splitable support should support working with the binary builds of the supported meta-models.

In addition, we will see later on that we would like to (de-)activate the splitable mechanism depending on who actually calls the framework. That would add additional overhead with this approach.

The (shadow) copy.

The simplest approach of course is to create a new model out of the source models that contains the merged view on the model. That means that any code that processes a model needs to invoke the copying before accessing the model.

Requirement: Agnostic Clients
The client code for loading the meta-model should not need special initialisation.

In addition, a pure copy would imply that we cannot modify the model. When modifying a joined model, any model modifications must be reflected in the source models.

Requirement: Model Modification
The framework for Splitable support should be extendable so that it can also support model modification.

With a copy of the model, a framework that works with a shadow copy could work like this:

  • A map that keeps the tracing between source model elements and joined model elements is maintained.
  • When the joined model is modified, listen to the changes on the model and apply them to the source models.

This could be a feasible approach, however, there is also a further idea:

Intercepting and modifying calls to the model

If we could intercept calls to the model (e.g. getters and setters), we could change the behavior of those methods to either reflect the “real” model structure or the actual “merged” view depending on the actual need. Such a different behavior could be considered a cross-cutting concern which directly leads to aspect-oriented programming. Further blog posts will detail the current thoughts/concepts/status of an Aspect-Oriented approach for a splitable framework for EMF models.

P.S.: Notes on Artop

Artop already provides a splitable implementation. But since it is part of Artop, it cannot directly be re-used for non-AUTOSAR customers. Documentation is a bit sparse, but some of the usage can be found in the test cases.

The Javadoc of SplitableMerge.java shows that a clone is created, whereas the SplitableEObjectsProvider follows the “intercept” idea, as can be seen in its default implementation and in the comment of the SplitableEObject.

Dynamic Workflows

In this guest post, Amine Lajmi introduces the workflow framework that he and itemis Paris designed and implemented for Sphinx.

Still today, MWE is used for the definition of workflows in several model-driven applications (Xtext workflows, legacy Xpand and Check files, etc.), mainly to perform tasks like model loading, running transformations, batch validation, etc. Even though MWE workflows with Xpand and Check are less used, component-based approaches for defining tool chains are still needed for new developments in Xtend and Java.
Workflow components, such as code generators, are often executed in a separate Eclipse instance. In practice, the way a developer usually works is: write some transformations, launch a runtime instance, test the transformations, go back to the IDE and fix some bugs, re-launch the runtime, etc.
The Dynamic Workflows framework newly introduced in Sphinx is a contribution that solves these issues. It offers complete support for developing workflows in Xtend and Java and for running workflows within the same Eclipse instance, in a transactional environment, which is more convenient for the developer. This blog post introduces how to use the framework, based on the Hummingbird example shipped with Sphinx.

A SIMPLE WORKFLOW EXAMPLE

To define a dynamic workflow, subclass org.eclipse.sphinx.emf.mwe.dynamic.WorkspaceWorkflow. The workflow can be written in Xtend, in Java, or in a mix of both; a minimal example simply adds one component that prints a message to the console.

To run the workflow, navigate to the Model Explorer, right-click on the workflow file then select Run Workflow > Run. The IDE console should display the message.

WORKSPACE WORKFLOW COMPONENTS

A workflow aggregates a set of workflow components as children. To define a workflow component, subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractWorkspaceWorkflowComponent.

MODEL WORKFLOW COMPONENTS

Model Workflow Components are workflow components which need to read and/or write in models. To define a model workflow component, you need to subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent.
In a transactional context, read and write accesses to models are handled differently: in one case the workflow component is encapsulated into a read transaction, while in the other case, the workflow component is encapsulated into a write transaction.
The Dynamic Workflows framework allows defining both types of model workflow components.

READ-ONLY MODEL WORKFLOW COMPONENTS

By default, all model workflow components are executed in a read transaction unless specified otherwise in their constructors. A typical read-only component reads the model slot from the context and prints its content to the console.

READ-WRITE MODEL WORKFLOW COMPONENTS

As stated before, if the model workflow component needs to modify any model, it should be encapsulated in a write transaction. This is handled in the constructor of the component; the only thing to care about is to call the parent constructor with the flag modifiesModel set to true, see the example below.
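A minimal Java sketch: only the modifiesModel flag in the constructor call is taken from the description above. It assumes the parent class offers a constructor taking just that flag; the name of the execution callback depends on the framework version and is therefore omitted, which is why the class is kept abstract here:

    import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent;

    // sketch of a component that will run inside a write transaction
    public abstract class MyModelModifyingComponent extends AbstractModelWorkflowComponent {

        public MyModelModifyingComponent() {
            super(true); // modifiesModel = true, so the component is executed in a write transaction
        }

        // the actual work (reading the model slot, changing elements) goes into the
        // execution callback defined by the base class
    }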

Finally, to add a workflow component to a workflow, simply add the component to the list children.

To run the workflow, select the model to validate from the Model Explorer, and select Run Workflow > Run. A dialog window invites you to select the workflow to execute (see Fig1); use the search field to filter the available classes, and then click OK. The console output should now display the content of the model slot.

Fig1. Running a dynamic workflow on a model input

You can also embed components written with Xtend and Java in the same workflow.

HEADLESS EXECUTION SUPPORT

The Dynamic Workflows framework allows executing workflows outside the UI. The general syntax of the headless application is the following:
eclipse -noSplash
-data <an Eclipse workspace location>
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow <a workspace-relative path of a Java or Xtend file, or the fully qualified name of a binary class>
-model <a workspace-relative path to a model, a resource URI (possibly with a fragment), or a file URI>
Some examples:
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.java
-model org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird#//@components.0

New Sphinx Validation/Check Framework

In this guest post, Amine Lajmi introduces the new check framework that he and itemis Paris designed and implemented for Sphinx.

VALIDATION/CHECK FRAMEWORK

The standard EMF validation seems tightly coupled to the EMF Ecore metamodel, and difficult to extend without modifying the metamodel. As a consequence, many users are seeking a less intrusive framework, with a more flexible/declarative mechanism.

Therefore, a new annotation-based validation framework has been introduced in Sphinx. The framework follows the same principle as the validation framework of Xtext, as it uses @Check annotated methods for validating model objects. Moreover, the framework allows extracting validation constants such as error messages and severities into a separate model called a catalog, making the maintenance of the validators much more comfortable.

In this post, we will show how to set up a validator based on the Check framework. We will use the Hummingbird example shipped with Sphinx as a running example.

UNDERSTANDING CHECK VALIDATORS

A check validator is a Java class extending org.eclipse.sphinx.emf.check.AbstractCheckValidator. It contains a set of methods annotated with @Check. The method parameter indicates the type of the object to be validated. At runtime, the validator dispatches the calls to the appropriate methods, based on the type of the current object.

A validator optionally makes use of a Catalog. When a catalog is used, @Check annotations reference constraints by their unique Id in the catalog, so there is a logical mapping from methods to constraints. For a constraint to be applicable within the scope of a validator, the set of categories specified in its @Check annotation should be a subset of the set of categories referenced by the constraint in the check catalog. Roughly speaking, categories are used to narrow down the scope of methods, i.e. their applicability.

CONTRIBUTING CHECK VALIDATORS

A check validator is contributed through the extension point org.eclipse.sphinx.emf.check.checkvalidators.

It is possible to contribute multiple check validators for the same EPackage. It is also possible for a catalog to be shared between multiple check validators: simply use the same plug-in URI of the check catalog in the contributions to the extension point.

DEFINING CONSTRAINTS

Let’s start with something trivial. The following method is called on objects of type Application; it raises an error stating that the application name is invalid. Notice that the API provides three methods, error(…), info(…), and warning(…), to explicitly fix the type of the issue at design time.
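In outline, a validator with such a method could look like the following sketch. The Application type is the one from the Hummingbird example (import not shown); the import path of the @Check annotation and the exact parameters of error(…) are assumptions, so the reporting call is only indicated in a comment:

    import org.eclipse.sphinx.emf.check.AbstractCheckValidator;
    import org.eclipse.sphinx.emf.check.Check;

    public class HummingbirdCheckValidator extends AbstractCheckValidator {

        // the parameter type (Application from the Hummingbird metamodel) determines
        // which objects this method is dispatched for
        @Check
        void checkApplicationNaming(Application application) {
            // report the problem with the explicit-severity API, e.g.
            // error("The application name is invalid", ...);
            // the exact parameters of error(...) are defined by AbstractCheckValidator
        }
    }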

The following @Check annotation points to a catalog constraint with an id equal to “ApplicationNameNotValid”. Instead of using explicit severities, a more general method called issue(…) is used. While error(…) systematically raises an error, issue(…) gets its severity type at runtime from the catalog.

The severity type is set directly on the check catalog (cf. Fig1), which gives the user more flexibility at design time.

 

Fig1. Configuring validation constraints and categories from the catalog

Each constraint has a unique identifier and refers to a possibly empty set of categories. In Fig1, the constraint with the id ApplicationNameNotValid is associated with Category 1, its severity is Error, and its message expects a placeholder (see {0} in the message value). This additional parameter is provided in the issue(…) method call as the last argument; in this example it has the value “for all categories”.

UNDERSTANDING CATEGORIES

A constraint can be applicable for one or more categories. This means the constraint will be evaluated only when a category referenced in the @Check annotation is selected by the user. For example, a constraint referencing Category 1 and Category 2 is called when either Category 1, Category 2, or both categories are selected.

Convention: if no categories are explicitly specified, the constraint applies for all categories.
The following example checks the Component name and raises an info issue if the component name has an illegal character. In the validator class, the constraint is referenced as follows:
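In outline, and with the annotation attribute names (constraint, categories) assumed from the description above, the method in the validator could look like this:

    @Check(constraint = "ComponentNameNotValid", categories = "Category 1")
    void checkComponentName(Component component) {
        // if the name contains an illegal character, report it via issue(...);
        // the severity (here: Info) is taken from the corresponding catalog entry at runtime
    }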

The annotation specifies that the constraint ComponentNameNotValid should be checked only for Category 1, which means it is applied only when the user selects Category 1 in the UI.

RUNNING CHECK VALIDATORS

The easiest way to launch a check validation is to use the contextual menu (see Fig2). Right click on the object to be validated, then select Validate > Check-based Validation. Validation is performed on the selected object and all its direct and indirect children.

 

Fig2. Running a check validation from the UI

The user is then asked for which categories the validation should be performed; let’s select Category 1, for which we know a constraint is defined. The categories displayed in the UI are the ones defined in the check catalog.

Fig3. Selecting the categories for which validation should apply

 

The Check Validation problems view displays an error as expected, since the constraint ApplicationNameNotValid is violated. Note that the error markers are displayed in both the Check Validation view and the classical Problems view. The Check Validation view gives additional information about the errors, such as the validator which raised the issue, and allows navigating to the corresponding element in the model under validation.

Fig4. Validation markers in the check validation view