Visualizing Build Action Manifests with COMASSO BSWDT

The AUTOSAR Build Action Manifest (BAMF) specifies a standardized exchange format / meta-model for the build steps required to build AUTOSAR software. In contrast to other build dependency tools such as make, BAMFs support the modeling of dependencies on the model level. The dependencies can reference Ecuc parameter definitions to indicate which parts of the BSW are going to be updated or consumed by a given build step. However, even for the standard basic software these dependencies can be very complex.

The COMASSO BSWDT tool provides a feature for visualizing these dependencies. In the example that we use for training on the Xpand framework with the BSW code generators, there are three build steps: model update, validation, and generation.

The order of the build steps is inferred by the build framework from the modeled dependencies. But in the build control view we cannot really see the network of dependencies, only the final flat list.

With File -> Export… we can export the BAMF dependencies as a GML graph file.

The generated file can then be opened in any tool that can deal with the GML format. We use the free edition of yEd and see the dependencies:


To optimize the visualization, you should choose the hierarchical layout.

The solid lines indicate model dependencies, dotted lines indicate explicit dependencies. If a real-life project yields a graph that is too complex, you can easily delete nodes from the list on the bottom left.

Lightweight Product Line AUTOSAR BSW configuration with physical files and splitables

During AUTOSAR development, the configuration of the basic software is a major task.  Usually, some of the parameters are going to be the same for all your projects (company-wide, product line) and some of them will be specific to a given project / ECU. And you might have a combination of these within the same BSW configuration container.

One of the approaches uses the AUTOSAR Split(t)able concept, which allows you to split the contents of a model element over more than one physical ARXML-file.

So let us assume we are going to configure the DemGeneral container, that we have a company-wide policy for all projects to set DemAvailabilitySupport to 1, and that the DemBswErrorBufferSize within the same container should be set specifically for each ECU.

So let us create an .arxml for the company-wide settings. That might be distributed through some general channel, e.g. when setting up a new project.

Now we can place an ECU-specific .arxml next to it in the same project.

Now any tool that supports splitables (e.g. the COMASSO BSWDT tooling) will be able to create a merged model. The following screenshots are based on our own prototype of an AspectJ-based splitable support for Artop. By simply using the context menu


we will see the merged view:

You can see a few things in that screenshot:

  • First, and most important, the two physically separated DemGenerals are now actually one. The tooling (code generators etc.) will now only see one model, one CONF package and one DemGeneral.

Our demonstrator shows additional information as decorators after the element (decorators can be deactivated through preferences in Eclipse).

  • The “[x slices]” decorator shows how many elements are actually used to make up the joined element. We have two physical DemGeneral containers, so it says “2 slices”. We have one DemAvailabilitySupport, so it shows “1 slices”.
  • The decorator at the end shows the physical files that are actually involved for the merging of a given element.

Now if for some reason, a new version of the company-wide .arxml is provided, we only need to exchange / update that file – there is no need for a complex diff and merge.

There are some use cases that cannot be covered with this scenario, but it opens up a lot of possibilities for managing variations in BSW configuration on a file level.

Towards a generic splitable framework implementation – Post #2

This blog post is the second in a series of posts detailing some concepts for a generic framework for the support of splitables. For this concept we introduce the term “slice”: a “slice” is one of the partial model elements that together form the merged splitable.

Effects of being splittable

If a model element (EObject) is splitable or has a splitable feature, this has two kinds of impact:

  1. Impact on the “behavior” of the class itself.
  2. Impact on other classes that have features of a splitable type.

Impact on the Class ITSELF

  • any feature call on such an object needs to join the features of all of this object’s slices. This includes primitive and more complex features.

IMPACT ON OTHER CLASSES

  • By Feature: any feature call on another class’ feature that could contain a splittable object needs to filter the result of the feature and return a result where the slices are merged.
    This affects the entire model: all classes could be affected, since all could have a feature of a splitable type.
    It also affects any feature whose type is a super-class of a splittable type.
  • By containment/aggregation: If a class is splittable, then all classes in all containment trees that lead to a splittable class must also be splittable. Otherwise it would not really be possible to create a splittable tree, since the splittable element could only be in one file when the containment hierarchy is not splittable.
  • By inheritance: That also implies that if a class gains “splittable” through this containment rule, all of its subclasses must be splittable too, since they could contain the specific instance of the containment feature.

Components of a splittable tooling

  • ISplitKeyProvider: For a given object, identifies a “key” that is used to identify the slices that constitute a splitable. E.g., this could be the fully qualified name.
  • ISliceFinder: Given an object, finds all objects that constitute that splittable.
  • ISplittingConfiguration: Given an object or structural feature, decides whether it is splittable.
  • IPrimarySliceFinder: Given an object, finds the object that represents the joined splitable.
  • IFeatureCalculator: Creates a joint view of the splittables.
  • Arbiter: Decides where to put new elements / how to move elements. A sketch of the first few components as Java interfaces is shown below.
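Sketched in Java, the first four components could be declared roughly as follows (hypothetical signatures for illustration only; the individual components are described in more detail below):

  import java.util.List;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;

  // Hypothetical interface sketch for the components listed above.
  interface ISplitKeyProvider {
      // e.g. the fully qualified name, possibly plus variant information
      Object getSplitKey(EObject object);
  }

  interface ISliceFinder {
      // all slices that together constitute the splittable element
      List<EObject> findSlices(EObject object);
  }

  interface ISplittingConfiguration {
      boolean isSplittable(EObject object);
      boolean isSplittable(EStructuralFeature feature);
  }

  interface IPrimarySliceFinder {
      // must always return the same object for a given split key
      EObject findPrimarySlice(EObject object);
  }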

ISplitKeyProvider

For an object, returns the key that is used to identify the slices that together make up a model element. In a simple implementation, this could e.g. be the fully qualified name. For some AUTOSAR elements, this is the FQN plus additional variant information.

Slice Finder

In a simple implementation, all the slices can be identified through their key and the slice finder finds all objects with the same key. In a more advanced implementation, the results could be cached by key.

Primary Slice Finder

Returns an object that represents the fused object for a given split key. That could be a new object, in case the model is copied, or, if we inject behavior with AspectJ, one of the slices. Important: the result must always be unique, i.e. for a given split key this must _always_ return the same object.

IFeatureCalculator

Must be able to create a joint view of all the slices. This can happen e.g. on copy when creating a shadow model, or it could be dynamic, creating a common view e.g. with AspectJ.

Joining Structural Features

  • For references / containments that could contain a splittable, only return the primary slice within the feature.
    • 0..1: Intercept the call and replace the value with the primary slice
    • 0..n: Return a new list that replaces the contents with the primary slices
  • Primitives / all features with 0..1 multiplicity (including references, containment) – see the sketch below
    • Return the value when all slices have the same value or no value set
    • Throw an exception otherwise (merge conflict)
  • Needs a list implementation that delegates to the contents of the original list. Should always return the same list –> the feature calculator is responsible. It could maintain its own cache or add the value by means of AspectJ.
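As an illustration of the 0..1 case, the joint value calculation could look roughly like this (a Java sketch only; a real feature calculator would also have to handle the list cases and the primary-slice replacement described above):

  import java.util.List;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;

  // Sketch: join a single-valued feature over all slices of a splittable element.
  public class SingleValuedFeatureJoiner {

      public Object getJoinedValue(List<EObject> slices, EStructuralFeature feature) {
          Object joined = null;
          for (EObject slice : slices) {
              Object value = slice.eIsSet(feature) ? slice.eGet(feature) : null;
              if (value == null) {
                  continue; // this slice does not contribute a value
              }
              if (joined == null) {
                  joined = value;
              } else if (!joined.equals(value)) {
                  // two slices define different values for the same 0..1 feature
                  throw new IllegalStateException("Merge conflict on feature " + feature.getName());
              }
          }
          return joined; // may be null if no slice sets the feature
      }
  }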

Towards a generic splitable framework implementation – Post #1

Splitables are a concept introduced by AUTOSAR. In a nutshell, splitables allow model elements to be split over a number of AUTOSAR files. E.g., two AUTOSAR files could contain the same package, and the full model would actually consist of the merged contents of these two files. Some information can be found in the previous blog posts.

Since the introduction of splitables, we have been thinking about different ways of implementation. Now that the concept has also found its way into the EAST-ADL standard as well as into customer models, it is of interest to shed some light on this issue from different angles and to introduce the current activities towards a framework for splitable models. Since Artop and EATOP (EAST-ADL) are both EMF-based, the considerations relate to EMF.

Basic Requirements

For the discussion, we assume that we have three basic meta-model classes:
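The original post illustrates these classes with a small class diagram. For the discussion below, a stand-in structure along the following lines is sufficient (a sketch; the names are illustrative only):

  import java.util.List;

  // Illustrative stand-in for the three basic meta-model classes.
  interface Model {
      List<Package> getPackages();
  }

  interface Package {
      String getName();
      List<Element> getElements();
  }

  interface Element {
      String getName();
  }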

If we have a split model, then we might have two Packages with the same name in different Resources from the same ResourceSet. For such a package p, p.getElements() should return the joined list of the elements of all those packages in the ResourceSet.

There could be a number of approaches to actually do that.

A dedicated API

We could introduce helper functions in addition to the actual setters/getters etc., either directly in the model or as an additional API. That would duplicate a lot of the model API and would also be a source of errors, because clients must make sure that they use the correct API calls.
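For illustration, such a dedicated API could look like this (a hypothetical helper, using the stand-in classes sketched above), which already shows the duplication problem:

  import java.util.List;

  // Hypothetical helper API duplicating the generated model API.
  // Clients would have to know when to call which variant.
  interface SplitablePackageHelper {
      // Package.getElements() returns only the elements of one slice;
      // this additional call would return the joined elements of all slices.
      List<Element> getMergedElements(Package p);
  }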

Design Decision
The framework for Splitable support should not introduce a dedicated API for accessing the Split objects. Clients should be able to work on the standard API.

Putting the logic into the metamodel (ecore, getters / setters)

Since EMF generates the Java code for the model out of the .ecore, we could actually put the additional logic required to “join” the models into the operations of the ecore. This could be done manually, which would imply a lot of work. Another approach would be to modify the code generators that generate the Java code to include the required functionality.

But that would actually mean that we would have to coordinate with all the projects that we would like to support splitables for, and also reproduce their build environments etc. in the meantime.

Requirement: Non Intrusiveness
The framework for Splitable support should support working with the binary builds of the supported meta-models.

In addition, we will see later on that we would like to (de-)activate the splitable mechanism depending on who actually calls the framework. That would add additional overhead with this approach.

The (shadow) copy.

The simplest approach is of course to create a new model out of the source models that contains the merged view on the model. That means that any code that processes a model needs to invoke the copying before accessing the model.

Requirement: Agnostic Clients
The client code for loading the meta-model should not need special initialisation.

In addition, a pure copy would imply that we cannot modify the model. When modifying a joined model, any model modifications must be reflected in the source models.

Requirement: Model Modification
The framework for Splitable support should be extendable so that it can also support model modification.

With a copy of the model, a framework that works with a shadow copy could work like this:

  • A map is maintained that keeps the tracing between source model elements and the joined model.
  • When the joined model is modified, listen to the changes on the model and apply them to the source model (see the sketch below).
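A rough sketch of that idea based on standard EMF notification could look like this (the map handling and the write-back are simplified; a real implementation would also have to handle adds, removes and moves):

  import java.util.HashMap;
  import java.util.Map;
  import org.eclipse.emf.common.notify.Notification;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;
  import org.eclipse.emf.ecore.util.EContentAdapter;

  // Sketch: keep joined (shadow) elements in sync with their source elements.
  public class ShadowCopySynchronizer extends EContentAdapter {

      // tracing from joined model elements back to the source element that owns the value
      private final Map<EObject, EObject> joinedToSource = new HashMap<>();

      public void trace(EObject joined, EObject source) {
          joinedToSource.put(joined, source);
      }

      @Override
      public void notifyChanged(Notification notification) {
          super.notifyChanged(notification);
          Object notifier = notification.getNotifier();
          if (notification.getEventType() == Notification.SET && notifier instanceof EObject) {
              EObject source = joinedToSource.get(notifier);
              if (source != null && notification.getFeature() instanceof EStructuralFeature) {
                  // apply the change made on the joined model to the source model
                  source.eSet((EStructuralFeature) notification.getFeature(), notification.getNewValue());
              }
          }
      }
  }

The adapter would be attached to the joined model via its eAdapters() list, so that it receives all change notifications.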

This could be a feasible approach, however, there is also a further idea:

Intercepting and modifying calls to the model

If we could intercept calls to the model (e.g. getters and setters), we could change the behavior of those methods to either reflect the “real” model structure or the actual “merged” view depending on the actual need. Such a different behavior could be considered a cross-cutting concern which directly leads to aspect-oriented programming. Further blog posts will detail the current thoughts/concepts/status of an Aspect-Oriented approach for a splitable framework for EMF models.

P.S.: Notes on Artop

Artop already provides a splitable implementation. But since it is part of Artop, it cannot directly be re-used for non-AUTOSAR customers. Documentation is a bit sparse, but some of the usage can be found in the test cases:

And the Javadoc from SplitableMerge.java shows that a clone is created.

whereas the SplitableEObjectsProvider follows the “intercept idea”. In the default implementation we see

and the comment of the SplitableEObject shows:

Dynamic Workflows

In this guest post, Amine Lajmi introduces the workflow framework that he and itemis Paris designed and implemented for Sphinx.

 Dynamic Workflows

Still today, MWE is used for the definition of workflows in several model-driven applications (Xtext workflows, legacy Xpand and Check files, etc.), mainly to perform tasks like model loading, running transformations, batch validation, etc. Even though MWE workflows with Xpand and Check are less used, component-based approaches for defining tool chains are still needed for new developments in Xtend and Java.
Workflow components, such as code generators, are often executed in a separate Eclipse instance. In practice, the way a developer usually works is: write some transformations, launch a runtime instance, test the transformations, go back to the IDE and fix some bugs, re-launch the runtime, etc.
The Dynamic Workflows framework newly introduced in Sphinx is a contribution that solves these issues. It offers complete support for developing workflows in Xtend and Java and for running them within the same Eclipse instance, in a transactional environment, which is more convenient for the developer. This blog post shows how to use the framework on the Hummingbird example shipped with Sphinx.

A SIMPLE WORKFLOW EXAMPLE

To define a dynamic workflow, subclass org.eclipse.sphinx.emf.mwe.dynamic.WorkspaceWorkflow. The workflow can be written in Xtend, in Java, or in a mix of both.
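A minimal sketch in Java could look like this (the registration of the component via the children list is an assumption based on the description later in this post; PrintMessageComponent is the hypothetical component sketched in the next section):

  import org.eclipse.sphinx.emf.mwe.dynamic.WorkspaceWorkflow;

  // Sketch of a minimal dynamic workflow with a single component.
  public class HelloWorkflow extends WorkspaceWorkflow {

      public HelloWorkflow() {
          // components are registered in the workflow's children list
          // (accessor assumed; in Xtend this would read: children += new PrintMessageComponent)
          getChildren().add(new PrintMessageComponent());
      }
  }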

To run the workflow, navigate to the Model Explorer, right-click on the workflow file then select Run Workflow > Run. The IDE console should display the message.

WORKSPACE WORKFLOW COMPONENTS

A workflow aggregates a set of workflow components as children. To define a workflow component, subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractWorkspaceWorkflowComponent, for example:
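A sketch of such a component in Java (the invoke hook shown here follows the MWE2 IWorkflowComponent contract; the exact method to override in the actual base class may differ):

  import org.eclipse.emf.mwe2.runtime.workflow.IWorkflowContext;
  import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractWorkspaceWorkflowComponent;

  // Sketch: a workflow component that just writes a message to the console.
  public class PrintMessageComponent extends AbstractWorkspaceWorkflowComponent {

      @Override
      public void invoke(IWorkflowContext ctx) {
          // no model access needed here, just a plain console message
          System.out.println("Hello from the dynamic workflow!");
      }
  }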

MODEL WORKFLOW COMPONENTS

Model Workflow Components are workflow components which need to read and/or write in models. To define a model workflow component, you need to subclass org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent.
In a transactional context, read and write accesses to models are handled differently: in one case the workflow component is encapsulated into a read transaction, while in the other case, the workflow component is encapsulated into a write transaction.
The Dynamic Workflows framework allows defining both types of model workflow components.

READ-ONLY MODEL WORKFLOW COMPONENTS

By default all model workflow components are executed in a read transaction, unless specified otherwise in their constructors. The following component reads the model slot from the context and prints its content to the console.
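A sketch of such a read-only component (the slot name and the invoke hook are assumptions):

  import org.eclipse.emf.mwe2.runtime.workflow.IWorkflowContext;
  import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent;

  // Sketch: a model workflow component that only reads the model
  // (executed in a read transaction by default).
  public class PrintModelComponent extends AbstractModelWorkflowComponent {

      @Override
      public void invoke(IWorkflowContext ctx) {
          Object model = ctx.get("model"); // slot name assumed
          System.out.println("Model slot content: " + model);
      }
  }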

READ-WRITE MODEL WORKFLOW COMPONENTS

As stated before, if the model workflow component needs to modify any model, it should be encapsulated into a write transaction. This is handled in the constructor of the component; the only thing to care about is to call the parent constructor with the flag modifiesModel set to true, see example below.
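A sketch of a modifying component; the essential part is the call to the parent constructor, while the rest (hook method, slot name) is assumed as before:

  import java.util.List;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.mwe2.runtime.workflow.IWorkflowContext;
  import org.eclipse.sphinx.emf.mwe.dynamic.components.AbstractModelWorkflowComponent;

  // Sketch: a model workflow component that modifies the model,
  // so it has to be executed inside a write transaction.
  public class RenameComponent extends AbstractModelWorkflowComponent {

      public RenameComponent() {
          super(true); // modifiesModel = true -> executed in a write transaction
      }

      @Override
      @SuppressWarnings("unchecked")
      public void invoke(IWorkflowContext ctx) {
          List<EObject> modelObjects = (List<EObject>) ctx.get("model"); // slot name assumed
          // ... modify the model objects here ...
      }
  }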

Finally, to add a workflow component to a workflow, simply add the component to the list children.

To run the workflow, select the model to validate from the Model Explorer, and select Run Workflow > Run. A dialog window invites you to select the workflow to execute (see Fig1); use the search field to filter the available classes, and then click OK. The console output should now display the content of the model slot.

Fig1. Running a dynamic workflow on a model input


You can also embed components written with Xtend and Java in the same workflow.

HEADLESS EXECUTION SUPPORT

The Dynamic Workflows framework allows executing workflows outside the UI. The general syntax of the headless application is the following:
eclipse -noSplash
-data <Eclipse workspace location>
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow <workspace-relative path of a Java or Xtend file, or fully qualified name of a binary class>
-model <workspace-relative path to a model, a resource URI (possibly with fragment), or a file URI>
Some examples:
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.java
-model org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird
• eclipse
-noSplash
-data /my/workspace/location
-application org.eclipse.sphinx.emf.mwe.dynamic.headless.WorkflowRunner
-workflow org.example/src/org/example/MyWorkflowFile.xtend
-model platform:/resource/org.example/model/MyModelFile.hummingbird#//@components.0

New Sphinx Validation/Check Framework

In this guest post, Amine Lajmi introduces the new check framework that he and itemis Paris designed and implemented for Sphinx.

VALIDATION/CHECK FRAMEWORK

The standard EMF validation seems tightly coupled to the EMF Ecore metamodel, and difficult to extend without modifying the metamodel. As a consequence, many users are seeking a less intrusive framework, with a more flexible/declarative mechanism.

Therefore, a new annotation-based validation framework has been introduced in Sphinx. The framework follows the same principle as the validation framework of Xtext, as it uses @Check annotated methods for validating model objects. Moreover, the framework allows extracting validation constants such as error messages and severities into a separate model called a catalog, making the maintenance of the validators much more comfortable.

In this post, we will show how to set up a validator based on the Check framework. We will use the Hummingbird example shipped with Sphinx as a running example.

UNDERSTANDING CHECK VALIDATORS

A check validator is a Java class extending org.eclipse.sphinx.emf.check.AbstractCheckValidator. It contains a set of methods annotated with @Check. The method parameter indicates the type of the object to be validated. At runtime, the validator dispatches the calls to the appropriate methods, based on the type of the current object.

A validator optionally makes use of a Catalog. When a catalog is used, @Check annotations reference constraints by their unique Id in the catalog, so there is a logical mapping from methods to constraints. For a constraint to be applicable within the scope of a validator, the set of categories specified in its @Check annotation should be a subset of the set of categories referenced by the constraint in the check catalog. Roughly speaking, categories are used to narrow down the scope of methods, i.e. their applicability.

CONTRIBUTING CHECK VALIDATORS

A check validator is contributed through the extension point org.eclipse.sphinx.emf.check.checkvalidators.

It is possible to contribute multiple check validators for the same EPackage. It is also possible for a catalog to be shared between multiple check validators; simply use the same plugin URI of the check catalog in the contribution to the extension point.

DEFINING CONSTRAINTS

Let’s start with something trivial. The following method is called on objects of type Application; it raises an error stating that the application name is invalid. Notice that the API provides three methods: error(…), info(…), and warning(…) to explicitly force the type of the issue at design time.
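A sketch of such a method (Application is a type from the Hummingbird metamodel, import omitted; the location of the @Check annotation and the signature of error(…) are assumptions, shown here in the style of Xtext’s validation API):

  import org.eclipse.sphinx.emf.check.AbstractCheckValidator;
  import org.eclipse.sphinx.emf.check.Check;

  // Sketch: a trivial check method that raises an error on every Application.
  public class HummingbirdCheckValidator extends AbstractCheckValidator {

      @Check
      void checkApplicationName(Application application) {
          // severity chosen explicitly at design time
          error("Invalid application name", application, null); // signature assumed: message, object, feature
      }
  }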

The following @Check annotation points to a catalog constraint with an id equal to “ApplicationNameNotValid”. Instead of using explicit severities, a more general method called issue(…) is used. While error(…) systematically raises an error, issue(…) gets its severity at runtime from the catalog.

The severity type is set directly on the check catalog (cf. Fig1), which gives the user more flexibility at design time.

 

Fig1. Configuring validation constraints and categories from the catalog


Each constraint has a unique identifier and refers to a possibly empty set of categories. In Fig1, the constraint with the id ApplicationNameNotValid is associated with Category 1, its severity is Error, and its message expects a placeholder (see {0} in the message value). This additional parameter is provided in the issue(…) method call as the last parameter; in this example it has the value “for all categories”.

UNDERSTANDING CATEGORIES

A constraint can be applicable for one or more categories. This means the constraint will be evaluated only when the category referenced in the @Check annotation is selected by the user. For example, the following constraint is called either when Category 1, or Category 2, or both categories are selected.

Convention: if no categories are explicitly specified, the constraint applies for all categories.
The following example checks the Component name and raises an info issue if the component name contains an illegal character. In the validator class, the constraint is referenced as follows:
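In the validator this could look roughly like the following sketch (the annotation attribute names, the issue(…) signature and the containsIllegalCharacter helper are assumptions; Component is the Hummingbird type mentioned above, and the method would be added to the validator class sketched earlier):

  @Check(constraint = "ComponentNameNotValid", categories = "Category 1")
  void checkComponentName(Component component) {
      if (containsIllegalCharacter(component.getName())) {
          // the severity (here: Info) is taken from the check catalog at runtime
          issue(component, null); // signature assumed: object, feature
      }
  }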

The annotation specifies the constraint ComponentNameNotValid should be checked only for Category 1, which means it is applied only when the user selects Category 1 from the UI.

RUNNING CHECK VALIDATORS

The easiest way to launch a check validation is to use the contextual menu (see Fig2). Right click on the object to be validated, then select Validate > Check-based Validation. Validation is performed on the selected object and all its direct and indirect children.

 

Fig2. Running a check validation from the UI


The user is then asked for what categories the validation should be performed; let’s select Category 1 for which we know a constraint is defined. The categories displayed in the UI are the ones defined in the check catalog.


Fig3. Selecting the categories for which validation should apply

 

The Check Validation problems view displays an error as expected, as the constraint ApplicationNameNotValid is violated. Note that the error markers are displayed in both the Check Validation View and the classical Problems View. The Check Validation View gives additional information about the errors, such as the validator which raised the issue, and allows navigating to the error in the model under validation.


Fig4. Validation markers in the check validation view

 

Sphinx is listening to Resources – Synchronizing Model Descriptors

The Eclipse Sphinx project provides a number of useful features for (EMF-based) model management. One of these features is reloading the model when the underlying resources change. Sphinx uses so-called ModelDescriptors to define which files belong to a model, and on a change these have to be updated as well.

One of the relevant code pieces is org.eclipse.sphinx.emf.Activator:

 
The ModelDescriptorSynchronizer registers itself with standard means at the Eclipse workspace to listen for resource changes and adds a BasicModelDescriptorSynchronizerDelegate delegate:
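The “standard means” is the workspace listener mechanism of the Eclipse resources API; a generic sketch of such a registration (not the actual Sphinx code) looks like this:

  import org.eclipse.core.resources.IResourceChangeEvent;
  import org.eclipse.core.resources.IResourceChangeListener;
  import org.eclipse.core.resources.ResourcesPlugin;

  // Generic sketch of how a listener is registered with the Eclipse workspace.
  public class RegisterListenerExample {

      public static void register(IResourceChangeListener listener) {
          // from now on the listener is notified about resource changes in the workspace
          ResourcesPlugin.getWorkspace().addResourceChangeListener(
                  listener, IResourceChangeEvent.POST_CHANGE);
      }
  }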

 

The major task of that class is to map the IResourceChangeEvents from the Eclipse framework to Sphinx’ IModelDescriptorSyncRequests, which specify what to update in the model descriptors:

But it also does some handling of the model registry caches:

org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer itself extends org.eclipse.sphinx.platform.resources.syncing.AbstractResourceSynchronizer, which in turn implements IResourceChangeListener from Eclipse and is a base class for handling resource changes. It basically executes the sync requests:

In our case, the sync requests are of type ModelDescriptorSyncRequest, because that is how createSyncRequest is overridden in org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer.

Now the Model Descriptors are actually processed in this class:

And we are finally at the org.eclipse.sphinx.emf.model.ModelDescriptorRegistry, where the information about all the loaded models is actually kept (to be detailed in a further blog post).

AUTOSAR reports with Artop and BIRT

Even in times of model-based development, artefacts like PDF and Word documents are still important for documentation. With Eclipse-based technologies, it is easy to produce documents in various formats from your AUTOSAR XML.

One approach would be to use Eclipse M2T (model-to-text) technologies to generate your own reports. Technologies for that would be Xtend2 or Xpand. An article can be found here.

However, with the Eclipse BIRT project, there is also a more WYSIWYG approach. Taking a very simple AUTOSAR model as an input, we can easily create reports in most formats.


After creating a BIRT project, we have to configure a so-called “Data Source” that simply takes our ARXML as input:

We then configure one or more “Data Sets” that actually pull the data from the data source:

From there it is mostly drag and drop and maybe a little scripting to create the report in the WYSIWYG editor – it also supports a lot of styling and templating, including images, custom headers etc.:

As you can see, we create a table with a row for each SWC-Type with the shortname and a subtable for all the ports of that SWC-Type. The BIRT Report Viewer shows the report in HTML.


By clicking on “Export Report” we can export to a number of formats:


PDF:


Excel:

BIRT and Artop provide powerful reporting functionality.

Support for vendor specific parameter / module definitions in COMASSO basic software configuration tool

During the 6th AUTOSAR Open Conference one of the presenters pointed out that the integration of vendor-specific / custom parameter definitions into the tool chain can be problematic because of insufficient tool support. Some tools seem to have a hard-coded user interface for the configuration or use proprietary formats for customization.

The COMASSO basic software tooling chose a different approach: User interface and code generator framework are dynamically derived from the .arxml in the workspace and all the configuration data is directly stored in .arxml (so no Import/Export of models is required, just drag and drop).

NVRAM Example

In one minimal example that we use for training purposes, we have a parameter definition for an NvRam module (stripped from the official AUTOSAR definition). In the “file system” view, the param def can be seen in the workspace.

This results in the “Model view”:

Introducing a vendor specific module

Now, assume the constructed scenario that we would like to provide a BSW/MCAL module for a Graphics Processing Unit (GPU). For the configuration tool to work, all we need is a param definition for that.

Although COMASSO does not require a specific project structure, for convenience we copy the NvM file structure and create a new param def called GpU_ParamDef.arxml. In XML, we define a new module:

After saving the new param def .arxml, the tool scans the workspace and rebuilds the user interface.

Without a single line of additional code, the tool knows about the vendor specific parameter definition. The icon in front of the GPU is grey because we have not configured any data yet. Opening the editor shows that it knows about the VSMD now as well:

Any configuration here will be directly stored as values in a .arxml file.

 Code generation

But not only the user interface knows immediately about the vendor specific modules – so does the generation and validation framework. As you can see below, we can now directly access the GPU module configuration from the AUTOSAR root (line 11) and then loop through the new GPU descriptors (line 12).

And the content assist system is working with the information as well. No need to remember what we defined. In line 16 we invoke content assist on “d” (which is a GPUBlockDescriptor) and get suggestions on all the attributes.

Open Language

The code generation and validation framework is Eclipse Xtend/Xpand. It is publicly available and not a proprietary solution.

Parameter Description

In addition, when clicking on an element in the configuration editor, there is a documentation view that shows the important information from the .arxml in a nicer view. Any documentation that is stored in the .arxml will be easily accessible.


Functional Architectures (EAST-ADL) and Managing Behavior Models

In the early phases of systems engineering, functional architectures are used to create a functional description of the system without specifying details about the implementation (e.g. the decision whether to implement in HW or SW). In EAST-ADL, the building blocks of functional architectures (Function Types) can be associated with so-called Function Behaviors, which support the addition of formal notations for the specification of the behavior of a logical function.

In the research project IMES, we are using the EAST-ADL functional architecture and combine it with block diagrams. Let’s consider the following simplified functional model of a Start-Stop system:

In EAST-ADL, you can add any number of behaviors to a function. We already have a behavior activationControlBehavior1 attached to the function type activationControl and we are adding another behavior called newBehavior.

A behavior itself does not specify anything yet in EAST-ADL. It is more like a container that allows referring to other artefacts actually containing the behavior specification. You can refer to any kind of specification (like ML/SL or other tools). For our example, we are using the block diagram design and simulation tool DAMOS, which is available as open source at www.eclipse.org.

We can now invoke “Create Block Diagram” on the behavior.

The tool will then create a block diagram with in- and output ports that are taken from the function type (you can see in the diagram that we have two input ports and one output port). At this point, the architects and developers can specify the actual logic. Of course it is also possible to use existing models and derive the function type from them (“reverse from existing”). That would work on a variety of existing specifications, like state machines, ML/SL etc.

A DAMOS model could look like this:

At this point of the engineering process, we would have:

  • A functional architecture
  • one or more function behaviors with behavior models attached to the function types

Obviously, the next thing that comes to mind is  to combine all the single behavior models and to create a simulation model for a (sub-) system. Since we might have more than one model attached to a function, we have to pick a specific model for each of the function types.

This is done by creating a system configuration. That will create a configuration file that associates a function type (on the left side) with a behavior model (on the right side). Of course, using Eclipse Xtext, this is not only a text file, but a full model with validation (are the associated models correct?) and full type-aware content assist. After the configuration has been defined, a simulation model is generated that can then be run to combine the set of function models.