Visualizing Build Action Manifests with COMASSO BSWDT

The AUTOSAR Build Action Manifest (BAMF) specifies a standardized exchange format / meta-model for the build steps required in building AUTOSAR software. In contrast to other build dependency tools such as make, the BAMFs support the modeling of dependencies on a model level. The dependencies can reference Ecuc Parameter definitions to indicate which parts of the BSW are going to be updated or consumed by a given build step. However, even for the standard basic software these dependencies can be very complex.

The COMASSO BSWDT tool provides a feature for visualizing these dependencies. In the example that we use for training with the Xpand-based BSW code generators, we have three build steps: model update, validation and generation.

The order of the build steps is inferred by the build framework from the modeled dependencies. But in the build control view you cannot really see the network of dependencies, only the final flat list.

With File -> Export… we can export the BAMF dependencies as a GML graph file.
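For illustration, a GML file for three build steps and their ordering edges might look roughly like this (a hand-written sketch, not actual tool output; node labels are made up):

```
graph [
  directed 1
  node [ id 0 label "update model" ]
  node [ id 1 label "validate" ]
  node [ id 2 label "generate" ]
  edge [ source 0 target 1 ]
  edge [ source 1 target 2 ]
]
```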

The generated file can then be opened in any tool that can deal with the GML format. We use the free edition of yEd and see the dependencies:


To optimize the visualization, you should choose the hierarchical layout.

The solid lines indicate model dependencies, dotted lines indicate explicit dependencies. If a real-life project yields a graph that is too complex, you can easily delete nodes from the list on the bottom left.


Towards a generic splitable framework implementation – Post #3

In the previous blog posts I have laid out the basic thoughts about a framework for splitable models. As indicated, we are using AspectJ to intercept the method calls to the EMF model’s classes and change their behavior to present a merged model to the clients. If the AspectJ plugins are used in the client’s executable, the client will see merged models.

So for a model org.mymodel we intercept all calls to the getters by declaring the join point,

and in the actual code for the execution we want to do the following :

  • Check if the return type (which could be the direct return type or the type argument of a parameterized type) is splitable. If so, we need to replace the returned object with the primary slice.
  • If the object that the method is invoked on is splitable, we need to find all slices and return the joined results.
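The two rules above can be sketched in plain Java. This is an illustration only: it uses a JDK dynamic proxy where the actual framework uses AspectJ advice on the generated EMF classes, and the `Pkg`/`PkgImpl` types are hypothetical stand-ins for a generated metamodel class:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a generated metamodel interface.
interface Pkg {
    String getName();
    List<String> getElements();
}

class PkgImpl implements Pkg {
    private final String name;
    private final List<String> elements;
    PkgImpl(String name, List<String> elements) {
        this.name = name;
        this.elements = elements;
    }
    public String getName() { return name; }
    public List<String> getElements() { return elements; }
}

// Plays the role of the "around" advice: list-valued getters are joined
// over all slices, everything else is delegated to the primary slice.
class SplitableHandler implements InvocationHandler {
    private final List<Pkg> slices; // all slices with the same qualified name

    SplitableHandler(List<Pkg> slices) {
        this.slices = slices;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        if (List.class.isAssignableFrom(method.getReturnType())) {
            List<Object> joined = new ArrayList<>();
            for (Pkg slice : slices) {
                joined.addAll((List<?>) method.invoke(slice, args));
            }
            return joined;
        }
        return method.invoke(slices.get(0), args); // primary slice
    }
}

public class SplitableSketch {
    public static Pkg merged(List<Pkg> slices) {
        return (Pkg) Proxy.newProxyInstance(
                Pkg.class.getClassLoader(),
                new Class<?>[] { Pkg.class },
                new SplitableHandler(slices));
    }
}
```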

This is the preliminary code for illustration (not yet finished for all cases, since it is just used for a specific meta-model right now):

If we look at the implementation of FeatureCalculator, we see that internally we call the same method on all the slices:

But why does this not cause an infinite loop, given that the get methods are intercepted? Because through the definition of the join points, we make sure that our aspect is not executed when called from an aspect or from the splitable framework’s own classes. That means that the implementation of the framework can safely operate on the source models.

We are almost set. But when we start a client with that AspectJ configuration, we will see that EMF calls are already intercepted when loading a model. But obviously the models should first be loaded as source models; only later accesses should be intercepted to present the joined model.

We can configure that through additional join points:

The following additional definitions are being used to disable the Aspect if the current call stack (control flow) is from certain EMF / Sphinx infrastructure classes:

Result

With this configuration, our models load fine and the Sphinx model explorer shows the joined elements wherever a slice is in the model. Right now our implementation supports the joining of models and packages (i.e. it supports containment references) but can be extended to do more.

Upcoming

The following features will be added to the framework:

  • Intercepting more of the necessary methods (e.g. eContainer)
  • Support modification of models (i.e., the intercepted set-methods will modify the source models). A component “Arbiter” will be introduced to decide which source models to modify.
  • (De-)Activation of the joined view based on transactions. A settable flag to indicate whether the client operates on the joined view or on the source view.
  • Integration with IncQuery

Sphinx

We hope to release the framework as part of the Sphinx project.

Towards a generic splitable framework implementation – Post #1

Splitables are a concept introduced by AUTOSAR. In a nutshell, splitables allow model elements to be split over a number of AUTOSAR files. E.g., two AUTOSAR files could contain the same package, and the full model would actually consist of the merged contents of these two files. Some information can be found in the previous blog posts.

Since the introduction of splitables, we have been thinking about different ways of implementation. Now that the concept has also found its way into the EAST-ADL standard as well as into customer models, it is of interest to shed some light on this issue from different angles and to introduce the current activities towards a framework for splitable models. Since Artop and EATOP (EAST-ADL) are both EMF-based, the considerations relate to EMF.

Basic Requirements

For the discussion, we assume that we have three basic meta-model classes:

If we have a split model, then we might have two Packages with the same name in different Resources from the same ResourceSet. For such a package p, p.getElements() should return the joined list of the elements of all those packages in the ResourceSet.
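The expected joined behavior can be pinned down with a minimal sketch (plain Java; `ModelPackage` is a hypothetical stand-in for the generated Package class, and the resource set is reduced to a plain list):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the generated Package class.
class ModelPackage {
    final String name;
    final List<String> elements = new ArrayList<>();

    ModelPackage(String name, String... elems) {
        this.name = name;
        for (String e : elems) {
            elements.add(e);
        }
    }
}

public class JoinSketch {
    // Joined view: concatenate the elements of every package in the
    // resource set that has the same name as 'p'.
    public static List<String> joinedElements(List<ModelPackage> resourceSet, ModelPackage p) {
        List<String> joined = new ArrayList<>();
        for (ModelPackage candidate : resourceSet) {
            if (candidate.name.equals(p.name)) {
                joined.addAll(candidate.elements);
            }
        }
        return joined;
    }
}
```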

There could be a number of approaches to actually do that.

A dedicated API

We could introduce helper functions in addition to the actual setters/getters etc., either directly in the model or as an additional API. That would duplicate a lot of the model API and would also be a source of errors, because clients must make sure that they use the correct API calls.

Design Decision
The framework for Splitable support should not introduce a dedicated API for accessing the split objects. Clients should be able to work with the standard API.

Putting the logic into the metamodel (ecore, getters / setters)

Since EMF generates the Java code for the model out of the .ecore, we could actually put the additional logic required to “join” the models into the operations of the ecore. This could be done manually, which would imply a lot of work. Another approach would be to modify the code generators that generate the Java code to include the required functionality.

But that would actually mean that we would have to coordinate with all the projects that we would like to support splitables for, and also reproduce their build environments in the meantime.

Requirement: Non Intrusiveness
The framework for Splitable support should support working with the binary builds of the supported meta-models.

In addition, we will see later on that we would like to (de-)activate the splitable mechanism depending on who actually calls the framework. That would add additional overhead with this approach.

The (shadow) copy

The simplest approach, of course, is to create a new model out of the source models that actually contains the merged view on the model. That means that any code that processes a model needs to invoke the copying before accessing the model.

Requirement: Agnostic Clients
The client code for loading the meta-model should not need special initialisation.

In addition, a pure copy would imply that we cannot modify the model. When modifying a joined model, any model modifications must be reflected in the source models.

Requirement: Model Modification
The framework for Splitable support should be extendable so that it can also support model modification.

A framework that works with such a shadow copy could work like this:

  • A map that keeps the tracing between source model elements and joined model is maintained.
  • When the joined model is modified, listen to the changes on the model and apply them to the source model.

This could be a feasible approach; however, there is also a further idea:

Intercepting and modifying calls to the model

If we could intercept calls to the model (e.g. getters and setters), we could change the behavior of those methods to either reflect the “real” model structure or the “merged” view, depending on the actual need. Such a different behavior can be considered a cross-cutting concern, which directly leads to aspect-oriented programming. Further blog posts will detail the current thoughts/concepts/status of an aspect-oriented approach for a splitable framework for EMF models.

P.S.: Notes on Artop

Artop already provides a splitable implementation. But since it is part of Artop, it cannot directly be re-used for non-AUTOSAR customers. Documentation is a bit sparse, but some of the usage can be found in the test cases:

And the Javadoc from SplitableMerge.java shows that a clone is created,

whereas the SplitableEObjectsProvider follows the “intercept idea”. In the default implementation we see

and the comment of the SplitableEObject shows:


Supporting Model-To-Transformation in Java with AspectJ

There are a number of model-to-model transformation frameworks provided under the Eclipse umbrella (e.g. QVTO). Some of these provide a lot of support. However, in some scenarios, you need to either adapt them or implement your own M2M transformation in Java. For us, some of these cases are:

  • The framework does not support model loading/saving with Sphinx
  • The framework does not support EMF unsettable attributes (eIsSet)
  • Framework performance

However, one of the most annoying problems in writing transformations is caching already created elements. Consider the following abstract example:

  • Our source model has an element of type Package, ‘p’
  • In the package, there are the sub-elements ‘a’ and ‘b’
  • ‘a’ is a typed element; its type is ‘b’

Usually, when mapping the contents of p, we would iterate over its contents and transform each element. However, when transforming ‘a’, the transformed element ‘A’ should have its type set to the transformed ‘b’ (‘B’). But we have not transformed ‘b’ yet. If we transform it at this point in time, it will be transformed again when iterating further through the package, resulting in two different ‘B’s in the target model.

Obviously, we should keep a cache of mapping results and not create a new target instance if a source element has been transformed before. However, managing those caches would clutter our transformation rules.

But it is quite easy to factor this out by using AspectJ. First, we define an annotation for methods that should be cached:
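Such an annotation might look like this (a minimal sketch; the actual annotation from the original post is not reproduced here):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation for mapping methods whose results should be cached.
// Runtime retention so that the aspect can inspect it on the intercepted method.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Cached {
}
```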


Then we define an aspect that intercepts all calls to methods annotated with @Cached:

  • In line 14, we build a cache key out of the current object, the called method and its arguments
  • If this combination is already cached, we return the cached entry
  • Otherwise, we proceed to the original method and cache its result
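The caching logic itself can be illustrated without AspectJ as a small memoizing helper (a sketch only: the real aspect builds its key from the intercepted object, the method and the arguments, while this stand-in keys on the source element alone):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Memoizing helper standing in for the @Cached advice.
public class TransformationCache {
    private final Map<Object, Object> cache = new HashMap<>();
    public int misses = 0; // counts how often the "real" rule was executed

    @SuppressWarnings("unchecked")
    public <S, T> T transform(S source, Function<S, T> rule) {
        if (cache.containsKey(source)) {
            return (T) cache.get(source); // already transformed before
        }
        misses++;
        T target = rule.apply(source); // proceed to the original rule
        cache.put(source, target);
        return target;
    }
}
```

Calling the same transformation twice for the same source element then yields the identical target instance instead of a duplicate.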

The transformation code then looks like this:

(OK, so this is Xtend2 rather than Java, because the polymorphic dispatch comes in handy.)

Note that we are transforming EAST-ADL models here. The package contains Units and Quantities, and Units refer to Quantities. Quantity transformations are called twice: from the package transformation and from the Unit transformation. However, by using the caching, we get the correct structure.

In addition, we also solve a little annoying problem: elements have to be put into the containment hierarchy as well. But that would require additional code in the transformations and passing around the containers.

In line 25 in the aspect, we check if the target object implements the AutoAdd interface and call the method in that case. So the transformation can take care of the addition to the containment hierarchy at a defined place.

The code above is an implementation prototype to show initial feasibility.


Functional Architectures (EAST-ADL) and Managing Behavior Models

In the early phases of systems engineering, functional architectures are used to create a functional description of the system without specifying details about the implementation (e.g. the decision whether to implement in HW or SW). In EAST-ADL, the building blocks of functional architectures (Function Types) can be associated with so-called Function Behaviors, which support the addition of formal notations for the specification of the behavior of a logical function.

In the research project IMES, we are using the EAST-ADL functional architecture and combining it with block diagrams. Let’s consider the following simplified functional model of a start-stop system:

In EAST-ADL, you can add any number of behaviors to a function. We already have a behavior activationControlBehavior1 attached to the function type activationControl, and we are adding another behavior called newBehavior.

A behavior itself does not specify anything yet in EAST-ADL. It is more like a container that refers to other artefacts actually containing the behavior specification. You can refer to any specification (like ML/SL or other tools). For our example, we are using the block diagram design and simulation tool DAMOS, which is available as open source at www.eclipse.org.

We can now invoke “Create Block Diagram” on the behavior.

The tool will then create a block diagram with input and output ports that are taken from the function type (you can see in the diagram that we have two input ports and one output port). At this point, the architects and developers can specify the actual logic. Of course, it is also possible to use existing models and derive the function type from them (“reverse from existing”). That would work on a variety of existing specifications, like state machines, ML/SL etc.

A DAMOS model could look like this:

At this point of the engineering process, we would have:

  • A functional architecture
  • One or more function behaviors with behavior models attached to the function types

Obviously, the next thing that comes to mind is to combine all the single behavior models and to create a simulation model for a (sub-)system. Since we might have more than one model attached to a function, we have to pick a specific model for each of the function types.

This is done by creating a system configuration. That will create a configuration file that associates a function type (on the left side) with a behavior model (on the right side). Of course, using Eclipse Xtext, this is not only a text file, but a full model with validation (are the associated models correct?) and fully type-aware content assist. After the configuration has been defined, a simulation model is generated that can then be run to combine the set of function models.


A common component model for communication matrices and architecture metrics for EAST-ADL, AUTOSAR and others

In the engineering process, it is often of interest to see communication matrices or metrics for your architecture. Depending on the meta-model (EAST-ADL, AUTOSAR, SysML) and the tool, that kind of information might or might not be available. In the IMES research project, we are using the Eclipse adapter mechanism to provide a lightweight common component model that adapts the different models on the fly.

The Eclipse Adapter Mechanism is used to provide a common model for different meta-models

The tool implements different views/editors and metrics that work on the common component model, providing the mechanism for EAST-ADL analysis and design functions, error models and AUTOSAR software architectures.

The following EAST-ADL model

A small EAST-ADL analysis function model.


can be shown in the communication matrix editor, which uses the same code for EAST-ADL and AUTOSAR.



Authorization / Authentication with EMF and Graphiti for Automotive models

The Eclipse ecosystem provides a lot of projects that make the design and implementation of modeling tools very efficient. For many products, the models have to be secured against unauthorized access or modification. This is not often seen in standard EMF-based tooling. In the research project IMES, we are using a number of technologies to achieve that goal:

  • Shiro provides a framework for authentication / authorization. It can be used with a number of authentication mechanisms, from plain text files to LDAP servers.
  • Graphiti for the graphical editors
  • Xtext and model-to-model transformation to generate Graphiti-based editors. The code generation introduces access authorization into the editors. The user will get feedback that an operation is not allowed before it is actually executed.

A text-based authorization configuration could look like this (a text file would probably not be used in production, but it is easy to demonstrate):


We specify the roles oem and tier1. oem can edit everything except the model in the file Motor.eaxml. tier1 cannot edit anything, but can view everything except for the model element “activate”.
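Since the screenshot of the configuration is not reproduced here, a purely hypothetical sketch of such a text-based rule set could read (made-up syntax for illustration only, not the actual file format):

```
# hypothetical syntax for illustration only
role oem:
    allow edit *
    deny  edit Motor.eaxml
role tier1:
    deny  edit *
    allow view *
    deny  view element:activate
```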

Now we can supply credentials in a preference page:


Editing

Since oem_user has the role oem, she is allowed to modify everything except for the data in one specific model resource (Motor.eaxml). The graphical editor queries the framework for access rights and provides visual feedback.

Viewing

There is additional support for viewing rights. In our configuration, the OEM can see everything, but there might be a specific function that should not be shown to 3rd parties. For the OEM, the diagram shows all the labels. But the tier1 role has been denied read access, so for tier1_user the port label is rendered differently:



CDO’ification of Sphinx: Database Storage for AUTOSAR, EAST-ADL and ReqIF

The Eclipse Sphinx project is an important but little-known infrastructure project that supports workspace management for tool platforms such as Artop (AUTOSAR), EATOP (EAST-ADL) and – in the future – RMF (ReqIF). Right now it is closely tied to EMF XML-based resources.

3-tier for information security

The models that are held in these tools often contain sensitive data. Many companies require a 3-tier architecture with a database backend for these (including authorization and authentication). In the research project IMES we are extending Sphinx so that it supports CDO as a backend. CDO in turn supports storage in a number of databases.

Refactoring Sphinx

Right now we have refactored Sphinx so that the strong coupling to EMF XMI resources is replaced by an abstraction layer. The connection to CDO is currently in progress.

Branching, Merging, Versioning

In addition, CDO will provide its branching, merging and versioning functionality.

Assessing architecture quality: Metrics for AUTOSAR software architecture and EAST-ADL functional architectures

Both in AUTOSAR and EAST-ADL we are defining architectures – be it on the software level or on the function level. But when is an architecture a good architecture? How do we find flaws? One approach to assessing the quality is architecture metrics. In the research project IMES we have implemented a set of metrics for both EAST-ADL and AUTOSAR models. The metrics calculation is based on the Eclipse projects Artop and EATOP.

One of the often-mentioned metrics in the literature is the “Provided Service Utilization” (PSU) and its counterpart, the “Required Service Utilization” (RSU). PSU measures the degree of utilization of a “service provider” by dividing the number of actually used services by the total number of offered services:

PSU(s)=PS_used(s)/PS_total(s)

A metric of 1.0 indicates that all offered services are being used, a metric of 0.0 indicates that none of the services is being used. Values below 1.0 are indicators for potential architectural issues.

To apply the PSU concept to AUTOSAR or EAST-ADL, the elements “service provider” and “service” have to be identified in the meta-models. In AUTOSAR, obvious candidates for providers are the SwComponentTypes and SwComponentPrototypes. Candidates for services are the ports (it is also possible to work with the data elements of a port, resulting in a finer granularity of the metric). Picking SwComponentTypes or SwComponentPrototypes as “provider” results in subtle differences:


In the above model, the metrics for the prototypes are PSU(p1) = 1.0 (port op is connected) and PSU(p2) = 0.0 (not connected). For the component types, PSU(ACI1O1_A) = 1.0, since it is only relevant whether the port is connected at all.
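The metric itself is a simple ratio. A minimal sketch in plain Java, with a provider reduced to the connected/unconnected flags of its ports (an illustration only, not the actual IMES implementation):

```java
import java.util.List;

// PSU(s) = PS_used(s) / PS_total(s), with a "provider" reduced to the
// connected/unconnected flags of its ports.
public class Psu {
    public static double psu(List<Boolean> portConnected) {
        if (portConnected.isEmpty()) {
            return 0.0; // no offered services; 0 by convention in this sketch
        }
        int used = 0;
        for (boolean connected : portConnected) {
            if (connected) {
                used++;
            }
        }
        return (double) used / portConnected.size();
    }
}
```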

Connected ports

Deciding whether a port is actually connected is more complex than in the simple example above. Delegation connectors just “forward” ports without functionality – that has to be taken into account. A port being connected only through a delegation does not count as connected for the metrics calculation. Starting from a port, all transitive connections have to be followed to see whether the port is finally connected to a software component that actually consumes data (i.e., not a SwCompositionType), whether it is delegated up to the highest level, or whether it is not delegated at all.

Since we also want to calculate metrics for partial architectures, we define a delegation to the root level as “connected”, but there are other approaches that define such a delegation as “unconnected”.
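The transitive check could be sketched like this (plain Java; ports are reduced to strings, the `root.` prefix marking a root-level port is a made-up convention for the illustration, and delegation chains are assumed to be acyclic):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "is this port really connected?" check. 'delegation' maps a
// port to the outer port it is delegated to; 'consumed' marks ports that end
// at a component which actually consumes data. Delegation up to the root
// level counts as connected, matching the convention chosen above for
// partial architectures.
public class ConnectedPorts {
    public final Map<String, String> delegation = new HashMap<>();
    public final Map<String, Boolean> consumed = new HashMap<>();

    public boolean isConnected(String port) {
        String current = port;
        while (delegation.containsKey(current)) {
            current = delegation.get(current); // follow delegations transitively
        }
        // Either the chain ends at a consuming component, or it reaches
        // the root boundary of the (partial) architecture.
        return consumed.getOrDefault(current, false) || isRootPort(current);
    }

    // Made-up convention for this sketch: root-level ports use a "root." prefix.
    private boolean isRootPort(String port) {
        return port.startsWith("root.");
    }
}
```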

Side Effect for Usability

The algorithm for finding connected ports has a nice side effect: it can be offered to the user as extra functionality. In complex models, it is often difficult to track the actual destination of data elements through a port. The algorithm makes that easy.

PSU and CPSU

Now the PSU of SwComponentTypes and SwComponentPrototypes can easily be calculated.

In the model above, PSU(In0Out3) = 0.666. The same algorithms can be used to calculate the CPSU, which indicates the ratio of connected ports to total ports over all ports of all SwComponentPrototypes of a subsystem. A metric < 1.0 indicates that the system has unconnected ports. In the above model, CPSU = 0.5. It is important to identify prototypes that are composition types, since they are only structural and should not contribute to the metrics calculation by counting ports more than once.

RSU, EAST-ADL and variants

The calculation of RSU is analogous to the PSU calculation. And the metrics calculation is not restricted to software architectures: metrics for EAST-ADL analysis functions and design functions can be calculated in the same way. In fact, our implementation works on an abstract component/service model that maps to the Artop and EATOP implementations, so that we can actually use the same code for both meta-models.

In addition, it can be combined with the variant/feature models, so that metrics for entire product lines can be calculated and “loose ends” can be found.


Software Tool Chain for Automotive – the IMES research project

At itemis, we are currently participating in a publicly funded research project (IMES) that deals with the software development tool chain for automotive software, especially in early phases. We have two main focus points in the project:

  • An integrated tool chain for logical functions and software architecture, including traceability to requirements and variant management.
  • A prototype for a trans-company model repository that avoids the current gaps in the tool chain.

The project is a good example of how you can leverage existing Eclipse technologies and combine them:

IMES Architecture


  • The development process starts with requirements. We use the Eclipse RMF project to import ReqIF and to display/edit requirements. This is also the basis for traceability. Eclipse RMF is an open source project contributed by itemis and the University of Düsseldorf / Formal Mind.
  • The specification of feature models and model variants is done with tooling based on the Eclipse Feature Model project. The basis of the feature model project was contributed by pure-systems, and recently itemis has released the textual editors for the features/variants as open source and will put them under the Eclipse umbrella. The editors are based on Xtext, another open source project.
  • Functional modeling (logical architecture) is based on the just recently published EAST-ADL meta-model and its implementation, published under the name “EATOP”. The meta-model has been contributed by Continental AG as part of a publicly funded research project. The editors are implemented by itemis and rely heavily on the Eclipse Graphiti framework.
  • In early phases, logical architecture is often done in combination with functional prototypes. These are provided in terms of state machines or, more often, data flow / block diagrams. While the OEM will still use commercial modeling tools for this purpose, we use Eclipse DAMOS to show the integration.
  • Logical Functions are mapped to software architecture. The software architecture editor is developed in the scope of IMES and is based on Artop – an AUTOSAR meta-model implementation by the Artop user group.
  • The same applies to the deployment to real ECUs. Both editors again are based on the Graphiti framework.
  • All of the artifacts can be traced on model element level. This is done by using Yakindu CReMa, which has been developed within the VERDE research project.
  • Again, change management tickets can be assigned at a fine-grained level to individual model elements. Eclipse Mylyn provides the infrastructure, and the connection to the editors is developed on the IMES side.

On the repository side, we are bringing together two of Eclipse’s interesting projects:

  • Sphinx provides a lot of functionality for workspace management and helps several editing tools from different sources to work together on the same models.
  • CDO is a persistency framework with branching / versioning support (think “Git” for models). It will allow for a number of different collaboration scenarios between OEM and 1st tier, including working together on the same server, working on offline repositories with sync etc.

The Eclipse eco-system is truly marvelous.