Requirements and Product Lines (Variants / Feature Models) with ReqIF/RMF

Product lines and handling the complexity of variants are among the current challenges in many industries. In the engineering process, it is important to find the correct views on the engineering data for product lines and variants. Requirements engineering is no exception.

Requirements and Variants

In the publicly funded research project IMES, itemis, as one of the partners, is investigating new tool platforms that seamlessly integrate the software engineering artifacts of OEMs and 1st-tier suppliers. This blog article describes our current implementation of variant handling for requirements. The solution allows the specification of presence conditions (i.e. specifying which requirements are applicable to which features) and the tailoring of the requirements models. We are using the Eclipse technologies Xtext, the Eclipse Feature Model (EFM) and RMF – but everything is usable in standalone mode as a Java program from the command line, so it can generally be applied with any tool that handles ReqIF.

Eclipse Feature Model

For the modeling of features, we are using the Eclipse Feature Model project, which contains meta-models for feature and variant models contributed by pure-systems, a company specialised in tools for product line and variant management. The project provides basic EMF editors, and we added Xtext-based textual editors for features (they are not covered in this article).

Presence Conditions

For each requirement, we want to specify for which features the requirement is applicable. A simple selection box is not sufficient, since we might want more powerful expressions (e.g. a requirement applies to a specific car if it is "sold in the Asian market and has more than 200 hp"). Since version 2.x of Xtext, it is easy to create a language for such expressions. Our Xtext solution examines the feature model and makes the features available in the expression language.
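
To illustrate the idea, here is a minimal stdlib-only sketch of evaluating such a presence condition against a variant. The `Variant` type and the feature names are hypothetical stand-ins, not the EFM/Xtext implementation:

```java
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of presence-condition evaluation; types and names
// are hypothetical, not the IMES implementation.
public class PresenceConditionDemo {

    // A variant is modeled here simply as feature names mapped to values.
    record Variant(Map<String, Object> features) {
        Object get(String name) { return features.get(name); }
    }

    public static void main(String[] args) {
        // Presence condition: "sold in the Asian market and more than 200 hp".
        Predicate<Variant> presenceCondition = v ->
                "asia".equals(v.get("market"))
                && (Integer) v.get("horsepower") > 200;

        Variant asianSportsCar = new Variant(Map.of("market", "asia", "horsepower", 250));
        Variant europeanCompact = new Variant(Map.of("market", "europe", "horsepower", 90));

        System.out.println(presenceCondition.test(asianSportsCar));
        System.out.println(presenceCondition.test(europeanCompact));
    }
}
```

In the real tooling, the predicate is of course not hand-written Java but parsed from the textual presence-condition expression.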

RMF

The Eclipse Requirements Modeling Framework (RMF) is the Eclipse/EMF-based implementation of the ReqIF standard meta-model for requirements. It is expected that most requirements tools will support the standard within one of their next releases.

Combining technologies

To support product line / variant management with requirements, we have the following steps:

  1. Model the features in the feature model
  2. Model one or more variants in a variant model
  3. Add an additional attribute for the presence condition to your requirements
  4. Define the presence conditions
  5. Run the tool

You will now have a tailored version of your requirements that provides a specific view on the variant. We make use of a feature of the ReqIF standard: the hierarchy structure of requirements is not stored in the requirements themselves. Instead, you can have one or more so-called "specification hierarchies". The elements of these hierarchies are only used to build up the structure; they do not contain any data but point to the actual requirements. So a requirement can be referenced by zero or more hierarchies.

We create a second hierarchy that is a copy of the first, with the presence conditions applied, so the original data is preserved.
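
The tailoring step can be sketched as a filtered copy of the hierarchy. The types below are illustrative stand-ins, not the RMF API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Sketch of tailoring: copy a specification hierarchy, dropping nodes
// whose requirement's presence condition does not hold for the selected
// variant. Types and names are illustrative, not RMF classes.
public class TailoringDemo {

    record Requirement(String id, Set<String> requiredFeatures) {}

    static class HierarchyNode {
        final Requirement requirement;        // nodes only point to requirements
        final List<HierarchyNode> children = new ArrayList<>();
        HierarchyNode(Requirement r) { this.requirement = r; }
    }

    // Build a filtered copy; the original hierarchy stays untouched.
    static HierarchyNode tailor(HierarchyNode node, Predicate<Requirement> applies) {
        if (node.requirement != null && !applies.test(node.requirement)) {
            return null; // requirement filtered out for this variant
        }
        HierarchyNode copy = new HierarchyNode(node.requirement);
        for (HierarchyNode child : node.children) {
            HierarchyNode c = tailor(child, applies);
            if (c != null) copy.children.add(c);
        }
        return copy;
    }

    public static void main(String[] args) {
        Set<String> variant = Set.of("asia");
        HierarchyNode root = new HierarchyNode(null);
        root.children.add(new HierarchyNode(new Requirement("R1", Set.of("asia"))));
        root.children.add(new HierarchyNode(new Requirement("R2", Set.of("europe"))));

        Predicate<Requirement> applies = r -> variant.containsAll(r.requiredFeatures());
        HierarchyNode tailored = tailor(root, applies);
        System.out.println(tailored.children.size());
        System.out.println(tailored.children.get(0).requirement.id());
    }
}
```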

Command Line and GUI

For the standalone version, only the variant model, the feature model and a ReqIF file with presence condition attributes are required. So you could basically enter the presence conditions in any tool with text input.

For Eclipse-based tooling, we provide additional support for the presence condition. The Xtext editors for the language can be embedded into the ProR editor of RMF, so you will get full content assist, error checking and navigation for your feature models.

ReqIF – Towards a scripting and constraint language

ReqIF is a very flexible meta-model and exchange format definition. The possible attributes of requirements are not predefined but are defined within each ReqIF file itself. This makes the implementation of queries and constraints somewhat more complex. But with Xtext, we can easily create a language that knows about the data model of a given ReqIF file.

A simple definition of a string attribute “Description” for a requirement type “XType” would look like this in XML:
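
A sketch of such a definition, with placeholder identifiers (mandatory ReqIF attributes such as LAST-CHANGE are omitted for brevity):

```xml
<SPEC-OBJECT-TYPE IDENTIFIER="_xtype-id" LONG-NAME="XType">
  <SPEC-ATTRIBUTES>
    <ATTRIBUTE-DEFINITION-STRING IDENTIFIER="_description-id" LONG-NAME="Description">
      <TYPE>
        <DATATYPE-DEFINITION-STRING-REF>_string-datatype-id</DATATYPE-DEFINITION-STRING-REF>
      </TYPE>
    </ATTRIBUTE-DEFINITION-STRING>
  </SPEC-ATTRIBUTES>
</SPEC-OBJECT-TYPE>
```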

Because of this flexibility, you will not find just an attribute with the name "Description" in your requirements, but so-called AttributeValues that reference their attribute definitions, which in turn define the attribute names.

To retrieve the value of an attribute, you'd have to traverse a requirement's attribute values, compare the names and find the right object. There is a helper class for that (ReqIF10Util) that comes with RMF.

So, in your code you would have something like ReqIF10Util.getAttributeValueForLabel(element, "Description"). But since AttributeValue is an abstract class, the following code would need quite a few case statements depending on the actual type of the AttributeValue, since the ReqIF meta-model does not define an abstract method for retrieving the actual value.
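
The resulting pattern can be sketched with simplified stand-in classes (not the real RMF types):

```java
import java.util.List;

// Sketch of the lookup pattern described above, with stand-in classes.
// In real ReqIF the definition is a reference to an AttributeDefinition
// object; here it is reduced to a name for brevity.
public class AttributeLookupDemo {

    abstract static class AttributeValue {
        final String definitionName;
        AttributeValue(String definitionName) { this.definitionName = definitionName; }
    }
    static class AttributeValueString extends AttributeValue {
        final String value;
        AttributeValueString(String name, String value) { super(name); this.value = value; }
    }
    static class AttributeValueInteger extends AttributeValue {
        final int value;
        AttributeValueInteger(String name, int value) { super(name); this.value = value; }
    }

    // The case analysis the text mentions: since AttributeValue has no
    // common getter, client code must dispatch on the concrete type.
    static Object getValue(List<AttributeValue> values, String label) {
        for (AttributeValue av : values) {
            if (!av.definitionName.equals(label)) continue;
            if (av instanceof AttributeValueString s) return s.value;
            if (av instanceof AttributeValueInteger i) return i.value;
            // ... more cases for Boolean, Real, XHTML, enumerations
        }
        return null;
    }

    public static void main(String[] args) {
        List<AttributeValue> values = List.of(
                new AttributeValueString("Description", "The engine shall start."),
                new AttributeValueInteger("Priority", 1));
        System.out.println(getValue(values, "Description"));
    }
}
```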

A user friendly language

We don't want that. What we'd like is a language where we can just write requirement.Description. Actually, this is very easy to do in Xtext. As an example, we create an Xtext language that allows specifying expressions on requirement objects. It simply references the ReqIF object SpecObjectType, followed by an expression. All that is needed is a JvmModelInferrer that creates JvmTypes from the referenced ReqIF model. And voilà: a complete editor that knows about the types of the referenced ReqIF file and provides their attributes directly.

Description is a defined attribute in the ReqIF file and can now be accessed directly.

A constraint interpreter

Specifying expressions is nice, but executing them is what is really interesting. As you can see in the screenshot above, there is an error marker at the expression. This is because we simply added an interpreter for the language that is executed on the ReqIF model and is integrated into the validator: if a requirement causes the expression to evaluate to false, an error is flagged.

Further uses

Of course, the constraint evaluation can be integrated into the tabular editor that comes with RMF, called ProR. In addition, there are further uses of the approach:

  • It could be used as a query language to filter requirements.
  • In combination with Xtend2 expressions, it might be used for a reporting infrastructure.
  • It is not only possible to query models, but also to modify them.

Since the Xbase expressions can be both interpreted and compiled to Java, this is a fast and flexible infrastructure.

ReqIF – user friendly editing of data types and spec object types

ReqIF is a powerful and flexible format for specifying schemas for requirements. A ReqIF model contains both the definition of data types/requirement types and the actual requirement content. Both are stored in an XML format that is not designed for human readability, so a dedicated tool for the type specification increases usability. One kind of tool for that purpose is the form-based editor, like Eclipse RMF/ProR. However, these do not present the whole type model at a glance.

Textual formats

The AUTOSAR standard faces a similar challenge. One approach there is the ARText tool, which allows you to specify AUTOSAR models in a textual way. One of the many benefits is that textual definitions can easily be conveyed in e-mails, articles, presentations etc. So a similar approach can be used for ReqIF. With the Eclipse projects RMF and Xtext, the tooling described in this post was created in less than three hours.

Eclipse Frameworks

The Eclipse RMF project provides the meta-model of ReqIF in Eclipse's de-facto standard EMF (Eclipse Modeling Framework), so it can easily be used with the many EMF-based Eclipse projects. One of these projects is Xtext, which allows the user to define textual domain specific languages (DSLs) and generates a user-friendly editing environment, including highlighting, content assist, validation etc.

Xtext for ReqIF

The main step is to create an Xtext grammar that references the ReqIF meta-model (first highlighted line below). Xtext can then suggest a concrete textual syntax for that meta-model, but we prefer to define it manually (second highlighted block), since that is usually more concise. The highlighted block contains the necessary code for boolean types – the full grammar contains more code for all the types ReqIF supports (strings, integers, XHTML, etc.).
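
As a rough sketch of the two parts mentioned (the nsURI is the ReqIF namespace used by RMF; the rule name and keywords are illustrative choices, not the exact grammar):

```xtext
grammar org.example.reqif.Types with org.eclipse.xtext.common.Terminals

// reference the ReqIF meta-model provided by RMF
import "http://www.omg.org/spec/ReqIF/20110401/reqif.xsd" as reqif

// hand-written, concise syntax for boolean datatypes
DatatypeDefinitionBoolean returns reqif::DatatypeDefinitionBoolean:
    'datatype' 'boolean' longName=STRING;
```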

The Runtime Editor

That's basically it. From the definition above, Xtext generates the full editor with comfortable editing features. The screenshot shows the ReqIF type editor with content assist. This textual representation is more user friendly than the XML.

ReqIF-XML

However, Xtext files are stored as plain text, and we need a ReqIF-XML conforming representation. Since both the Xtext model and RMF are based on EMF, it is only a matter of saving the model with a different file extension – EMF does all the work! It knows how to save models depending on the file extension. Xtext also provides a so-called "builder" that is called every time the Xtext model changes and can be used for model-to-model or model-to-text transformations. So this can easily be used to save the model in ReqIF format each time it changes.

That's it. Each time the Xtext model is saved, Eclipse will automatically create a ReqIF file that can be opened e.g. with ProR/RMF (or any other ReqIF-enabled tool).
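
The extension-based dispatch that EMF performs can be illustrated with a stdlib-only analog; the Serializer interface and the registry below are illustrative stand-ins for EMF's resource factory registry, not EMF API:

```java
import java.util.HashMap;
import java.util.Map;

// Stdlib-only analog of extension-based serialization: the output format
// is chosen purely by file extension, as EMF does when a model is saved
// under a different extension. All names here are illustrative.
public class ExtensionRegistryDemo {

    interface Serializer { String serialize(Map<String, String> model); }

    static final Map<String, Serializer> REGISTRY = new HashMap<>();
    static {
        // the Xtext resource keeps its plain-text syntax ...
        REGISTRY.put("types", m -> "datatype " + m.get("name"));
        // ... while a ".reqif" resource is written as (grossly simplified) XML
        REGISTRY.put("reqif", m -> "<REQ-IF><!-- " + m.get("name") + " --></REQ-IF>");
    }

    static String save(String fileName, Map<String, String> model) {
        String ext = fileName.substring(fileName.lastIndexOf('.') + 1);
        return REGISTRY.get(ext).serialize(model);
    }

    public static void main(String[] args) {
        Map<String, String> model = Map.of("name", "MyBoolean");
        System.out.println(save("model.types", model));
        System.out.println(save("model.reqif", model));
    }
}
```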

Existing ReqIF

Existing ReqIF XML can be transformed into the textual representation in the same way; the code is basically the same as for saving in XML format.

Since ReqIF supports object identification through UUIDs, it is possible to update existing ReqIF files after editing their type definitions in the textual representation.

Xtext End User / Domain Experts Cheat Sheet

The basic idea of domain specific languages is to provide domain experts with an easy-to-use language for their modeling needs. However, a domain specific language is always tied to a tool for editing the model. With Xtext, this is obviously Eclipse. Eclipse provides some rich editing features, especially for navigating models.

Most existing cheat sheets refer to the Java IDE of Eclipse, which you cannot give to domain experts because it contains elements that are not relevant to them (e.g. debugging) or that have a slightly different meaning.

So here is an attempt of a small cheat sheet for Xtext end users (domain experts):

Editing

  • ALT-Left and ALT-Right (Back / Forward): go back and forward in the history of editor locations
  • CTRL-PgUp and CTRL-PgDown (Cycle tabs): cycle through the tabs of the open editors
  • CTRL-Up and CTRL-Down (Scroll): scroll one line up or down
  • CTRL-M (Maximize): maximize / restore the current editor window
  • CTRL-W (Close): close the current editor
  • CTRL-D (Delete line): delete the current line
  • CTRL-/ (Toggle comment): toggle the comment for the current line / selection
  • CTRL-SHIFT-F (Format): auto-format the file
  • ALT-Up and ALT-Down (Move): move the current line / selection one line up or down
  • CTRL-Q (Last edit): go to the last edit location
  • CTRL-L (Go to line): go to a specific line

Model Editing

  • CTRL-SHIFT-G (Find references): find all elements that refer to the current element
  • F3 or CTRL-Click (Follow link): follow the reference under the cursor
  • CTRL-O (Quick outline): pop up an outline for easy navigation / filtering
  • CTRL-1 (Quick fix): quick-fix errors (where provided by the DSL designer)
  • CTRL-SPACE (Content assist): get suggestions for possible values
  • ALT-SHIFT-R (Rename): rename the current element (other occurrences are renamed as well)
  • CTRL-SHIFT-F3 (Open model element): locate a model element in your workspace (only exported elements are listed)
  • ALT-SHIFT-Up / Down (Expand selection): expand the selection to the containing element

Xtext Global and Local Scoping – overview

Just recently I posted a small article with an overview of the Xtext scoping architecture. One of the things that took me a while to understand is the difference between global and local scoping. Several people I discussed this with had their own ways of explaining it. Here is my current way of explaining some types of scoping in Xtext.

Assume that we have a language that supports defining classes, much like in UML: A class can have a name, attributes and it can inherit from another class. And we want to support models that consist of several Xtext files.

Suppose we have the following two files:
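
In a hypothetical concrete syntax (the file names and keywords are illustrative), the two files could read:

```
// File A.lang
class A {
  attr a
}

// File B.lang
class B inherits A {
  attr b
}
```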

A simple definition of two classes, with B inheriting from A. So we have a reference from B to A (the inheritance). In the grammar, that could look like this:
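
A sketch of such a grammar rule (rule names and keywords are illustrative, not the exact grammar):

```xtext
Class:
    'class' name=ID ('inherits' superClass=[Class])? '{'
        attributes+=Attribute*
    '}';

Attribute:
    'attr' name=ID;
```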

Xtext supports us by providing the automatic linking when it sees "inherits". But how does it know what to link? If all our models were restricted to a single file / resource, it would be easy: just get the current resource, traverse all the contained model elements to find one that matches "A" and set up the link.

Global Scoping

But now, with models split over several files, what do we have to do:

  1. First, find all the model file candidates that could be relevant for our search
  2. Define a strategy to decide if a model file is really considered for traversal
  3. Traverse the model file and find all the candidates that could match our reference to “A”

We wouldn't want to do this manually. Luckily, Xtext does it for you. Actually, it does it a little differently, since the approach above would be inefficient: traversing all the model files for each reference would be slow.

So, every time a file changes, Xtext collects the objects and the names they are known by and puts this information into the index. So instead of traversing all models in all candidate files, Xtext just has to check whether there is a matching entry in the index. Assuming the classes above are contained in a package called "P", the index could contain:

  Name    Object
  P       Package P
  P.A     Class A
  P.A.a   Attribute a
  P.B     Class B
  P.B.b   Attribute b
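
Conceptually, building such an index is a single traversal that records qualified names. A stdlib-only sketch with simplified stand-in types (not Xtext's IResourceDescription machinery):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of what the index conceptually stores: qualified names mapped
// to objects, collected once per file change instead of traversing every
// model for every reference.
public class IndexDemo {

    record Element(String name, List<Element> children) {}

    static Map<String, Element> index(Element root, String prefix, Map<String, Element> out) {
        String qualifiedName = prefix.isEmpty() ? root.name() : prefix + "." + root.name();
        out.put(qualifiedName, root);
        for (Element child : root.children()) {
            index(child, qualifiedName, out);
        }
        return out;
    }

    public static void main(String[] args) {
        Element a = new Element("A", List.of(new Element("a", List.of())));
        Element b = new Element("B", List.of(new Element("b", List.of())));
        Element p = new Element("P", List.of(a, b));

        Map<String, Element> idx = index(p, "", new LinkedHashMap<>());
        System.out.println(idx.keySet());
        // resolving a reference to "P.A" is now a single lookup:
        System.out.println(idx.get("P.A").name());
    }
}
```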

But would we want to see all the possible candidate elements for our reference (types are checked automatically)? In most cases, no. So Xtext provides several mechanisms to restrict the search. Please see the Xtext documentation for details:

  • URI imports: you can have import statements that reference the full URIs of the other files to be considered.
  • Namespace imports: you can define a namespace import. If you then provide something like "import P.*", the search is restricted to those elements whose fully qualified name starts with "P.".
  • Java classpath based scoping.

There are additional ways to tweak and modify Xtext's behavior, and there is more data stored in the index (references etc.). But I hope this gives you the general idea. However, the story is not finished.

Local Scoping

Now suppose we want to have a second grammar, where we define instances of the classes and values for their attributes, just like this:
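
In a hypothetical instance syntax matching the class files above, this could read:

```
instance a : A {
  a = 1
}
instance b : B {
  a = 2
  b = 3
}
```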

Since the classes are in a separate file, it is obvious that the link from an instance to its defining class is resolved in the way described above. But the attribute values are also linked to the attribute definitions in the class model, so how is this done?

Obviously, we could also just use the index. But we have to restrict the possible references as well, since in instance “a”, a reference to attribute “b” would not be valid. So what we could do is to get the list of possible references from the index, traverse the class hierarchy and filter out all attributes that are not valid.

But, for finding all valid attributes, we are traversing all attributes in the class hierarchy anyway. So why not forget about the index, and just build up the list of possible references while we traverse the model? This is done by local scoping, which you can easily customize to modify the list of reference candidates. See the Xtext documentation on how to do that.
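
A sketch of that traversal with simplified stand-in types (not the real EMF classes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of local scoping: the valid attributes for an instance are
// collected by walking up the class hierarchy, no index involved.
public class LocalScopeDemo {

    record Clazz(String name, Clazz superClass, List<String> attributes) {}

    // Build the list of reference candidates while traversing the model.
    static List<String> visibleAttributes(Clazz c) {
        List<String> scope = new ArrayList<>();
        for (Clazz current = c; current != null; current = current.superClass()) {
            scope.addAll(current.attributes());
        }
        return scope;
    }

    public static void main(String[] args) {
        Clazz a = new Clazz("A", null, List.of("a"));
        Clazz b = new Clazz("B", a, List.of("b"));

        System.out.println(visibleAttributes(a)); // "b" is not valid in an instance of A
        System.out.println(visibleAttributes(b)); // inherited attributes are included
    }
}
```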

So why do we need the attributes in the index then? Simple: we don't. The list of possible attribute values is calculated by us, so we don't need them there. For our own Xtext grammars, we can override the strategy of what is put into the index and what is not. So we decide not to put attributes into the index. By default, Xtext cannot know what you want to put into the index, so it puts in everything from your model that has a "name" attribute. It might be worthwhile to customize this behavior, since it can save a lot of CPU and memory.

Summary

For me, it is useful to divide scoping into two different aspects:

  1. I can find the possible candidates of a link by traversing references in my model. The traversal can easily span multiple resources/files, as long as it can be done by standard EMF means –> local scoping.
  2. To find the possible candidates, I would have to think about a strategy for which model files to load etc. I cannot access these candidates by traversing the references that I already have, because I might not even reach those models –> global scoping.

Compiling Xtext Expressions and Google Collections’ Function

In our small DSL for Graphiti editors, you can specify simple code with Xtext's expressions. E.g., in the specification of a connection, the color should be green if the domain object has the attribute "positive" set, and red otherwise.

So in the editor, this can be specified by means of an XBlockExpression. The Xtend2 builder also provides support for compiling the expressions in your DSL to Java. However, the standard implementation compiles them into a method body: the result of the expression is always passed back by a return statement.

Since modifying the compiler was too complex and generating an extra class for the calculation would clutter the class structure too much, a simple solution is to wrap the generated code in a Google Collections Function.
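
The wrapping idea can be sketched with java.util.function.Function as a stdlib stand-in for Google Collections' Function; the Connection type and the "generated" body are illustrative:

```java
import java.util.function.Function;

// Sketch of the wrapping idea: the compiled Xbase expression body ends
// with a return statement, so it fits naturally into the body of apply().
// The domain type and the expression body here are hypothetical.
public class FunctionWrapperDemo {

    record Connection(boolean positive) {}

    public static void main(String[] args) {
        // what the generator conceptually emits around the compiled body:
        Function<Connection, String> color = connection -> {
            // --- begin compiled expression body ---
            if (connection.positive()) {
                return "green";
            }
            return "red";
            // --- end compiled expression body ---
        };

        System.out.println(color.apply(new Connection(true)));
        System.out.println(color.apply(new Connection(false)));
    }
}
```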

The generated code then wraps the compiled expression body in the Function's apply method.

On a side note, be aware that you have to work with injections to set up the compiler framework in your generator: the compiler must be obtained via injection in the generator class, and the generator itself must be created through the injector. A compiler instance created manually with new would not work.

A DSL for Graphiti Editors

Funded by the BMBF

For the publicly funded research project IMES we are building graphical editors for functional nets and other systems. Since Graphiti is drawing a lot of interest, it was a good occasion to use the new technology. However, it turns out that you have to write a lot of code. The Graphiti team seems to address this with a "Pattern Framework", but it is hard to find documentation on it.

So what do you do to avoid writing boilerplate code? Of course, MDD with textual DSLs and M2T (i.e. code generation).

A textual DSL for Graphiti Editors

So the first draft of a language to specify editors was quickly implemented. Here are some core features as seen above:

  1. The DSL references the Ecore of the domain model, giving full content assist. This is used to e.g. generate the CreateFeature in Graphiti
  2. The "label" references an attribute in the ecore. This attribute is displayed as a label in the diagram. Moreover, the DirectEditingFeature and the UpdateFeature for the attribute are generated.
  3. In this example, a logical function can be at the root of the model, but it could also be a child of another logical function. So we have two contexts here. When the shape is created on / dragged to the diagram, the domain object is added to the functionNetworkProject.logicalFunction collection. On the other hand, if its container is another logical function, then the collection used is LogicalFunction.subFunction.
    This generates code for the Create and Add features. In addition, it also creates a move feature that rewires the containment if the shape is dragged from one container to another. The default behavior of Graphiti is to disallow moving an object out of its container.
    With LogicalPorts, there is only the LogicalFunction context, so they cannot be placed on the diagram background, only within logical functions.
  4. Ports are usually placed on the border of the parent shape. In Graphiti, this requires a “MoveFeature”, that checks if the new position is valid. The “OnBorder” keyword creates such a feature that allows moving the ports on the border shape only.
  5. In the meta-model, a connection (Logical Channel) between two ports is not a reference, but a model element with references to the endpoints. The ObjectMapping specifies which element to create and which references to set for the endpoints.
  6. The channel is owned by the function that owns the functions connected by the channel (so this is two levels up). To allow for complex operations, the AddToModelFunc generates a stub that delegates the adding of the domain object to the model to a method that can be implemented manually.
  7. And finally, the connection has to know which elements / shapes it can connect.

And here is a screenshot of the generated editor:

The DSL and generator are in an early phase and not designed to cover all use cases. But they will definitely reduce our implementation effort for editors.

There is also another initiative to create a DSL for Graphiti editors, called "Spray". However, no code has been published under "Spray" yet, and since we could not wait for initial contributions, we just had to start anyway…

Augmenting UML with textual DSLs

Funded by the BMBF

For the IMES research project we are investigating the introduction of more formal notations into the requirements specification of automotive software systems. We are currently in an early stage, where we define the criteria that the notations should fulfill for several artifacts in the software development process. One of those stages will be the definition of more formal notations for functional nets. To show a prototyping scenario, I am using UML (classes with ports) as model elements for functions, and Papyrus and Xtext to show how to adapt the tool to specify timing constraints with formal notations.

The Papyrus team has been using the Xtext integration to create a nice user interface for UML modeling. My scenario differs a little in that I am not using Xtext to edit parts of the UML model (like transition labels, property definitions etc.), but I augment the UML with additional models. The screenshot shows a domain specific language for the definition of timing constraints of a port, with full syntax highlighting and error detection. As you can see in the screenshot, the content assist is fully aware of the UML model, suggesting a list of ports that could be peers of the selected port.

Integration of the Xtext editor took about 4 hours, mostly because the tutorial by the Papyrus team is very good for the first half, but then I needed some serious debugging to find out about the inner workings. Once you have done it, though, integrating such a grammar is very fast.

Right now the timing constraints are simply stored as plain text in a UML comment that belongs to the port. This makes code generation and model validation a little inconvenient, but I already have an idea on how to use a resource set that would make life simpler.

All in all, it is great fun extending the Papyrus tool and impressive what nice features you can implement with Eclipse based tooling in very short time!

Validating Xtext models with OCL

Installing the OCL editor

OCL and Xtext work together quite well on Eclipse: there are even OCL editors implemented with Xtext. But if you are familiar with OCL, could you also use it to write validations for Xtext models? Of course: Xtext generates an .ecore for each grammar, and you can use OCL constraints on an .ecore. How to do this is described in the Eclipse OCL help, but it is a little hard to find, and if you are mainly an Xtext user you might not see all the necessary steps.

It is necessary to install the OCL editor and examples. Then, in your Xtext project, in the src-gen directory, you will find the .ecore that has been generated for your Xtext grammar. In its context menu, you will find an "Open with OCLinEcore" entry that opens a textual editor.

In this editor, you can edit the OCL constraints for the model elements.
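
For illustration, an invariant in OCLinEcore notation might look like this (the class and the constraint are made-up examples, not taken from an actual grammar):

```
class State {
    attribute name : String;
    invariant NameIsSet: name <> null and name.size() > 0;
}
```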

Editing OCL constraints

After that, you have to tweak the genmodel a bit by opening it, setting the property Model/Operation Reflection to true and regenerating the code from the genmodel. Caution: if you change your grammar and rerun the .mwe2 workflow, the .ecore might be overwritten. So keep a copy of all your OCL constraints to be able to paste them back in.

Now start your Xtext editor and you will see the OCL constraint errors show up alongside with the normal Xtext errors:

OCL constraints in Xtext

Other useful things:

You can also open Xtext models with the built-in tree-like editor (Open With… <GrammarName> Model Editor). Open the OCL console, and you can type away OCL queries on the element selected in the tree view (which is your Xtext model).

Opening the OCL console
1: Type the query, 2: query result

Xtext and existing .ecore: Making use of the AtlanMod Model Zoo

Xtext is a great tool for defining your own languages and generating editors for them. A lot of the work involved is in the definition of the language. So obviously, if a similar language or meta-model is already available, you might want to reuse it. With Xtext, you can use existing .ecore files and create a grammar and editor out of those.

The AtlanMod Meta-Model Zoo

One source to start looking for existing meta-models is the AtlanMod Meta-Model Zoo, which provides a variety of meta-models. The meta-models are originally in KM3 format, but automatic conversions into other meta-meta-models exist, so you can download meta-models in UML or .ecore format. As an example, we will use the FiniteStateMachine.ecore from the AtlanMod Meta-Model Zoo.

The EMF Project

Adding the .ecore to your project

To start, simply download your .ecore from the website. Create an empty EMF Project in your Eclipse workspace and copy the .ecore into your “model” sub-directory. The Xtext project wizard does actually require a .genmodel, so we need to create this first. If you open your .ecore and have a look at the properties of the FSM and PrimitiveTypes package, you will see that the properties “Ns Prefix” and “Ns URI” are unset. We need to set these values to be able to create a genmodel.

In addition, the PrimitiveTypes have unset Instance Type Names and we need those for the genmodel too, so we set them (we will get rid of most of the Primitive Types package later, but at this stage we still need to keep it).

Setting the model properties

At this point, make sure that the .ecore is saved, then create a genmodel for it. After the genmodel is created, create all the necessary EMF projects.

Creating the Xtext Project

Use the Create New Project wizard and select “Xtext from existing ecore”. Add the FSM genmodel by clicking “add” and you are almost ready to go.

A small but important selection

There is a very important but small detail: In the dialog-box you have to select the model element that should be the root rule of your grammar. Usually this is not set to a good default, so you need to pick it by yourself (State Machine in our case). Create the project and find the default grammar.

Cleaning the Xtext Project

You will find some errors in the project. The first is a duplicate import. Since the AtlanMod .ecores have more than one root package, we need to do some tweaking by adding a fragment to the import URI. If you are familiar with the URI syntax, you might have tried something like appending "#/FSM".

Problem with import

However, this does not work for root packages. As Sebastian Zarnekow points out, the names don't work at the root level; you have to use the numeric references to the packages.

Import fixed

In addition there is a number of optimizations that you can do:

  • References in the grammar often use EString as the parser rule; you might want to replace the EStrings with ID.
  • The model zoo uses String for names etc.; check whether you really want strings (replace them with the terminal STRING) or identifiers (replace them with ID).
  • Similarly with integers (replace by INT) and other data types. At this point, you might have replaced all data types by built-ins, so you might want to remove the PrimitiveTypes import and rules completely.
  • The generated grammar is verbose. Often you would want the "name" rule of an element to just be an ID, not preceded by the keyword "name". So you might want to remove some of the keywords (have a look at the generated editor to see where the keywords are too annoying for you).

At this point, you are ready to go and try your new editor.