AUTOSAR: OCL, Xtend, oAW for validation

In a recent post, I had written about Model-to-Model-transformation with Xtend. In addition to M2M-transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other frameworks, such as OCL, which are also candidates. Xpand (formerly known as oAW) is used in COMASSO. This blog post sketches some questions / issues to consider when choosing a framework for model validation.

Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing such an attribute with the standard getter method, you cannot distinguish whether the attribute has been explicitly set to a value that happens to equal the default or whether it has never been touched.

If this kind of check is relevant, the validation technology should provide access to the EMF methods (eIsSet() and the generated isSet predicates) that explicitly tell whether a value has been set.
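A minimal sketch of the difference in Xtend, assuming a model element “frame” whose “length” attribute is declared unsettable in the meta-model with a default value of 8:

    // Both of these return 8, whether the value was defaulted or set explicitly:
    val length = frame.length
    // The generated predicate for unsettable features makes the distinction:
    val explicitlySet = frame.isSetLength
    // Reflective alternative that works for any structural feature:
    val alsoSet = frame.eIsSet(frame.eClass.getEStructuralFeature("length"))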

Inverse References

With AUTOSAR, a large number of checks involve logic to see if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually these references are uni-directional, and a traversal of the model is required to find the referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A direct solution is to traverse the entire model or to use the utility functions of EMF. If the technology allows access to frameworks like IncQuery, however, a large number of queries / checks can be sped up significantly.
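For illustration, a sketch of a plain-EMF helper for this in Xtend. Note that the standard cross-referencer utility traverses the whole scope on every call, which is exactly why an index-based framework scales better when many such checks run:

    // Collects all elements that reference "target" anywhere inside "scope".
    def static referrersOf(EObject target, EObject scope) {
        EcoreUtil.UsageCrossReferencer.find(target, scope).map[EObject]
    }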

Error Reporting

Good error reporting is central to the usability of a validation tool. It involves a few aspects that can be explained with a simple example: consider that we want to check that each PDU has a unique id.

Context

In Xpand’s Check language (and similarly in OCL), a check could look like the following sketch, where “pdus()” stands for an assumed extension returning all PDUs of the model (Pdu and id are illustrative names):
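    context Pdu ERROR "Duplicate PDU id " + this.id :
        pdus().select(p | p.id == this.id).size == 1;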

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

  1. Keep the context on the PDU level, but allow some efficient caching so that the code is not executed as often. However, this involves additional effort to make sure that the caches are created / torn down at the right time (e.g. after model modification).
  2. Move the context up to package or model level and use a framework that allows errors/warnings to be reported not only for the elements in the context, but for any other (sub-)elements. The Sphinx Check framework supports this (see the sketch below).
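For illustration, a minimal sketch of such a single-traversal check in Xtend, assuming illustrative Pdu/ARPackage types and an “error” reporting method provided by the surrounding framework:

    // One pass over the package: duplicates are detected via a map lookup
    // instead of re-scanning the PDU list for every element.
    def checkUniquePduIds(ARPackage pkg) {
        val seen = newHashMap
        for (pdu : pkg.eAllContents.filter(Pdu).toIterable) {
            val previous = seen.put(pdu.id, pdu)
            if (previous !== null) {
                error('''Duplicate PDU id «pdu.id»''', pdu)
            }
        }
    }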

Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider an example similar to the one in the OCL documentation (an illustrative sketch, not the literal example):
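    -- Illustrative sketch; Eclipse OCL allows a message expression in
    -- parentheses after the invariant name.
    context Pdu
    inv UniqueId('PDU id ' + self.id.toString() + ' occurs ' +
            Pdu.allInstances()->select(p | p.id = self.id)->size().toString() + ' times'):
        Pdu.allInstances()->select(p | p.id = self.id)->size() = 1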

The fragment

    Pdu.allInstances()->select(p | p.id = self.id)

is used both in the error message and in the calculation of the predicate. In this example that is no big deal, but when the calculation gets more complex, the duplication becomes annoying. Possible approaches are:

  1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, this should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
  2. In Xpand, arbitrary information can be attached to any model element. So in the check body, the result of the complex calculation can be attached to the model element, and that information is retrieved again when the error message is calculated.
  3. In the Sphinx Check framework, error messages can be calculated from within the check body.

User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for users to be able to work with and fix check results. For the development process, it is beneficial if this kind of description is stored close to the actual checks. This could be achieved by analysing comments near the validations, tools like Javadoc, etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP application, the dynamic help functionality of the Eclipse help system can be used to generate documentation for all registered checks on the fly.


Here are some additional features of the Xtend language that come in handy when writing validations:

Comfort: Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax for navigating over models, which helps to avoid the loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"
    ]

Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your checks.
Long-Term Support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
Test Support: Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases that are documented in nice reports generated with standard Java tooling.
Code Coverage: Xtend compiles to plain Java. You can use any code coverage tool (such as JaCoCo).
Debugging: Debugger integration is fully supported to step through your code.
Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
Documentation: You can use standard Javadoc in your Xtend code and use the standard tooling to get reports.
Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of checks.
Active Annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the system to custom requirements.
Full EMF Support: Xtend code operates on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
IDE Integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.


Meanderings on AUTOSAR model repositories (and other models)

When working with AUTOSAR models, model storage and management is a topic that has to be solved. In this blog post, I coarsely discuss some thoughts. It is intended as a collection of topics that projects using AUTOSAR models have to address; it is in no way complete, and each of the issues introduced would fill a lot of technical discussion on its own.

Storage Approaches

Databases

Types of Databases

  1. Relational databases are often the first thing that comes to mind when discussing technologies for model repositories. However, AUTOSAR models (like many other meta-models for engineering data) consist of model elements with a lot of relationships – effectively a complex graph. Relational databases per se do not match this structure well (see the object-relational impedance mismatch, e.g. in Philip Hauer’s post).
  2. NoSQL databases: there are a number of alternatives to relational databases, known under the term “NoSQL”; this includes key-value stores, document stores etc. But since the structure of AUTOSAR models is an element graph, the most natural candidate seems to be a graph database.
  3. Graph databases treat nodes and their relationships as first-class citizens and have a more natural correspondence to the engineering models.

Database-EMF integration

Eclipse provides technologies to store EMF-based models in database repositories of various kinds. With CDO, it is quite easy to store EMF-based models in a relational backend (H2, Oracle), and the combination is very powerful. In the IMES project, we run an embedded H2 relational database in Eclipse with CDO to store AUTOSAR models, as well as a centralized server to show the replication mechanisms of such a setup.
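For illustration, a hedged sketch of how a model ends up in a CDO repository (CDO 4.x API names; the container setup and the “autosar” repository name are assumptions of this sketch):

    val connector = Net4jUtil.getConnector(IPluginContainer.INSTANCE, "tcp", "localhost:2036")
    val config = CDONet4jUtil.createNet4jSessionConfiguration()
    config.connector = connector
    config.repositoryName = "autosar"
    val transaction = config.openNet4jSession().openTransaction()
    transaction.getOrCreateResource("/ecus/myEcu").contents += autosarModel
    transaction.commit()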

However, there are drawbacks (schema integration) and advantages (relational database products are well understood by IT departments). There are also projects that provide other CDO backends or more direct integrations with NoSQL databases (such as Neo4j, OrientDB).

Flat files

AUTOSAR defines an exchange format for AUTOSAR models based on XML. Using that format for storage is an obvious idea, and Artop does exactly that: it uses the AUTOSAR XML format as the storage format for its EMF models.

Source code control Systems

Since the models are stored as text files, they can be managed by putting them into source code control systems such as SVN or Git.

Performance

One criterion for the selection of a specific technology is performance.

Memory

While working with AUTOSAR BSW configurations for ECUs in the days of 32-bit machines and JVMs, memory limitations actually posed a problem for the file-based approach. In settings like this, setting up a database server with the full models and clients that access them is one approach. For Robert Bosch GmbH, we did a study on CDO performance that was presented at EclipseCon. In the end, for this particular setting, it was considered more economic to provide 64-bit systems to the developers than to set up a more complex infrastructure with a database backend. 64-bit systems can easily hold the data for the BSW configuration of an ECU (and more).

Speed

It is often argued that with a remote database server, availability of the model is “instant”, since no local loading of models is required. When looking at BSW configurations with tools like COMASSO BSWDT, however, it turns out that loading times are still reasonable even with a file-based approach.

Specifics of AUTOSAR models

Dangling References / References by fully qualified name

In an AUTOSAR XML file, the model need not be self-contained. A reference to another model element is specified by a fully qualified name (FQN), and the referenced element need not be in the same .arxml. This is useful for model management, since you can combine partial models into a larger model in a very flexible way. However, it means that these references need to be resolved to find the target element. And there is a technical issue: the referring element specifies the reference as a fully qualified name such as “/A/B/C”, but to calculate the FQN of a given element, you have to take its entire containment hierarchy into account. That means that if the short name of any element changes, this affects the fully qualified names of all its descendants.
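A sketch of the FQN calculation in Xtend (Referrable and shortName as in the AUTOSAR meta-model; a production implementation additionally needs caching and invalidation on model changes, which is what Sphinx provides):

    def static String fqnOf(Referrable element) {
        val segments = newLinkedList
        var EObject current = element
        while (current instanceof Referrable) {
            segments.addFirst((current as Referrable).shortName)
            current = current.eContainer
        }
        "/" + segments.join("/")
    }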

File-based / Sphinx / Artop

Coming from Artop, the basic infrastructure Sphinx has support for this reference resolving. The proxy resolution is based on the fully qualified names and has some optimizations. Unresolved references are cached in a blacklist and will not be traversed again until the model changes in a way that makes a retry sensible. New files are automatically detected and added to the model. New versions of Sphinx even support improved proxy resolving with the IncQuery project.

Database

While it was mentioned above that it is easy to store AUTOSAR models in a database with CDO, this mainly refers to fully resolved models. Supporting the flexible fully-qualified-name mechanism needs extra design by the repository developers.

Splitables

In AUTOSAR, the contents of an element can be split over different physical files. That means that one part of your SoftwareComponent A can be specified in file x.arxml, while another part is specified in y.arxml. This is also very handy for model management. E.g., as a tier-1, you can keep your parts of a DEM (Diagnostic Event Manager) configuration in specific files and then drop in the DEM.arxml from the OEM, creating a new combined model. And when the DEM.arxml from the OEM changes, it is convenient to just overwrite the old one.

However, that implies an additional mechanism in all persistence technologies to create the merged view / model.

File-based / Sphinx / Artop / COMASSO

Different tools use various mechanisms for this:

  • Artop has an approach that uses Java Proxies to create the merged view dynamically
  • We have an implementation that is more generic and uses AspectJ on EMF models.
  • COMASSO uses a specific meta-model variation that is EMF-like (but slightly different) and allows the merging of splitables based on a dedicated infrastructure.

Databases

Additional design considerations are necessary for splitables. Possible approaches could be:

  • Store the fragments of the split objects in the database and merge them while querying / updating
  • Store only merged objects in the database and provide some kind of update mechanism (e.g. by remembering the origin of elements)

BSW meta-model

For the basic software (BSW) configuration, the AUTOSAR meta-model provides elements to define custom parameter containers and types. That means that at this level, we have meta-modeling mechanisms within the AUTOSAR standard itself.

This has some impact on the persistence. The repository could just decide to store the AUTOSAR meta-model elements (i.e. parameter definitions and parameter values), thus requiring additional logic to find the values for a given parameter definition – a similar problem exists when accessing those parts of AUTOSAR from code, and solutions exist for it.

Representing the parameter definitions directly in the database schema might not be simple – such dynamic schema definitions are difficult to realize in relational databases. Schema-free databases from the NoSQL family seem better suited for this job.

Workflow

Offline Work

The persistence technology also affects the ability to work with the data without a network connection to the repository. Although world-wide connectivity is continuously improving, you might not want to rely on a stable network connection while doing winter testing in a car on a frozen lake in the northern hemisphere. In such cases, at least read access might be required.

Some repository technologies (e.g. CDO with some DB backends) provide cloning and offline access; so do file-based solutions.

Long running transactions

Work on engineering models is often conceptual work that is not done within (milli-)seconds; work on the model often takes days, weeks or more. Consider, e.g., a developer who wants to introduce some new signals into a car’s network. He will first get a working copy of the current network database. Then he will work on his task until he has a concept that he thinks is feasible. Often he cannot change or add signals on his own, but has to discuss the change with a company-wide CCB. Only after every affected partner has agreed to the change (and that will involve a lot of amendments) can he finalize his changes and publish them as an official change.

The infrastructure must support this kind of “working branch”. CDO supports this for some database backends and it works very nicely – but it is also obvious that the workflow above is very similar to the workflow used in software development. Storing the file-based models in code repositories such as SVN or Git is a feasible approach as well. The merging of conflicting changes can then be comfortably supported with EMF Compare (the same technology as used by infrastructure such as CDO).

Inconsistent state

These offline copies / working branches must support saving an inconsistent state of the model(!). As the development of such a change takes quite some time, it must be possible to save and restore intermediate increments, even when they are not consistent. A user must be able to save his incomplete work when leaving in the evening and resume the next morning. Anything else will immediately kill user acceptance.

Authorization / Authentication

Information protection is a central feature of modeling tools. Even if a malevolent person does not get access to the full engineering data, information can be inferred from small bits of data, such as the names of new signals introduced into the on-board network. If you find a signal name like SIG_NUCLEAR_REACTOR_ENGINE_OFF, that would be very telling in itself. Let’s look at two aspects of information protection:

  • Prevent industrial espionage (control who sees what)
  • Prevent sabotage (invalid modification of data)

Authentication is a prerequisite for both, but it is supported by most technologies anyway.

Control read-access

If models are served to the user only “partially” (i.e. through filtered queries in a 3-tier architecture), it seems easier to grant or deny access to fine-grained elements. But it also involves a lot of additional considerations and tool adaptations. E.g., if a certain user is not allowed to learn about a specific signal, how do we deal with the PDUs that refer to that signal? We should not just filter that specific signal out, since that would actually cause misinformation. We could create an anonymous representation (we have done this with CDO’s access control mechanism, showing “ACCESS-DENIED” everywhere such a signal is used), but that also has a lot of side effects, e.g. for validation.

CDO has fine-grained access control on the elements mapped to the database (and I am proud to say that this was in part sponsored by the company I work for within the IMES project – Eike Stepper has done some very nice work on that).

In addition, if you support offline work, you must make sure that this does not undermine the access control, since a copy of the data is then on a local disk – off-the-shelf replication copies the data and would include the original, unfiltered content (nowadays you would only store it on encrypted disks anyway).

One approach would be to define access control as coarse-grained as possible, e.g. by introducing (sub-)model branches to which access can be granted as a whole. This would also be the approach for file-based storage, where access to the model files can be controlled through the authorization features of the infrastructure.

Control data modification

Another risk is that someone actively modifies the data to sabotage the system, e.g. by changing a signal timing. The motivation for such actions can vary from negligence over paid sabotage to a disgruntled employee. Similar considerations apply to this use case as to the read access described above. In the three-tier architecture, access validation can be done by the backend.

But what about the file-based approach – can the user not manipulate the files before submitting them? Yes, he could, but a verification is still possible when the model is submitted back into the repository. A verification hook can check what has been modified. And with the API of EMF Compare, this verification can be done on the actual meta-model API (not on some file change set) to see whether any invalid changes have been made to the models.
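A hedged sketch of such a hook based on the EMF Compare API (resource loading and the actual validation rules are omitted):

    val scope = new DefaultComparisonScope(submittedResource, repositoryResource, null)
    val comparison = EMFCompare.builder().build().compare(scope)
    for (diff : comparison.differences) {
        // inspect each difference against the meta-model API and
        // reject the submission if a forbidden change is detected
    }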

Conclusion

AUTOSAR models can be stored in a number of ways. Using Artop and the Eclipse-based frameworks, a number of options are available. Choosing the right approach depends on the specific use cases and the functional and non-functional requirements.

Considering Agile for your tool chain development

Developing and integrating complex toolchains in automotive companies is a challenging task. A number of these challenges are directly addressed by the principles behind the “Agile Manifesto”. So it is worthwhile to see what “Agile” has to offer for them, independent of a specific framework (Scrum, Kanban, …).

Keeping the (internal) customer happy

Tool departments and ECU development projects often have a special kind of relationship. The ECU projects often feel that the tooling they are provided with does not fulfill their requirements with respect to functionality, availability or support. Frequently the two sides sit in very different organizational units.

Principles behind the Agile Manifesto: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

So if Agile lives up to its promises (and is correctly implemented!), then it should address these issues.

Dealing with a changing environment

For any project that is not short-term, your environment (and thus your requirements) is sure to change. Especially tooling for new technologies (like AUTOSAR Ethernet, autonomous driving, etc.) involves a lot of feedback loops based on experience gained in the project. And opportunity for change is abundant beyond that:

  • Your customer’s timeline: the ECU needs to be finished earlier, later, not at all – wait, now it’s back on the original schedule.
  • Shifting priorities in functionality needed by the customer
  • Availability of new technologies: we have seen planned work packages rendered completely redundant by the same functionality becoming available in open source.


Principles behind the Agile Manifesto: “Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”

Getting customer commitment

In some settings, a tool or tool chain is discussed with the customer, then implemented independently by a team and provided to the customer upon completion – only to find the customer frustrated by what he has been delivered. This might lead to finger-pointing and dissatisfaction. A good way to get customer commitment is to integrate the customer into the development and get early feedback and improvement suggestions. Because then it is “our” tool, not “their” tool. And everybody likes “our” tool more than “their” tool.


Principles behind the Agile Manifesto: “Business people and developers must work together daily throughout the project.”

Agile is not simple

Agile methodologies and frameworks address some of the typical challenges of tool departments. However, “Agile” is no panacea, and doing it right is not necessarily an easy task. It forces change and it needs management support – otherwise you might just spin the buzzwords. But for many projects it is well worth the journey.


Integrating Rhapsody in your AUTOSAR toolchain

UML tools such as Enterprise Architect or Rhapsody (and others) are well established in the software development process. Often the modeling guidelines follow a custom modeling style, e.g. with specific profiles. So when you are modeling AUTOSAR systems, at some point you are faced with the problem of transforming your model to AUTOSAR.

For customer projects, we have analyzed and implemented different strategies.

Artop as an integration tool

First of all, if you are transforming to AUTOSAR, the recommendation is to transform to an Artop model and let Artop do all the serialization. Directly creating the AUTOSAR XML (.arxml) is cumbersome, error-prone and generally “not fun”.

Getting data out: Files or API

To access the data in Rhapsody, you can either read the stored files or access the data through the API of Rhapsody. This post describes aspects of the second approach.

Scenario 1: Accessing directly without intermediate storage

In this scenario, the transformation uses the “live” data of a running Rhapsody instance as its data source. Rhapsody provides a Java-based API (basically a wrapper around the Windows COM API), so it is very easy to write a transformation from “Rhapsody Java” to “Artop Java”. A recommended technology is the open-source Xtend language, since it provides a lot of useful features for this use case (see the description in this blog post).
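A hedged sketch of the entry point (getActiveRhapsodyApplication() and activeProject() follow the documented Rhapsody Java API; the actual mapping code is omitted):

    val app = RhapsodyAppServer.getActiveRhapsodyApplication()
    val project = app.activeProject()
    // navigate the IRP* elements of "project" here and create the
    // corresponding Artop objects via the Artop factories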

Scenario 2: Storing the data from Rhapsody locally, transforming from that local representation

In this scenario, the data from Rhapsody is extracted via the Java API and stored locally. Further transformation steps can then work on that stored copy. A feasible approach is to store the copied data in EMF. With reflection and other approaches, you can create the required .ecore definitions from the Java classes provided by Rhapsody. After that, you can also use transformation technologies that require an .ecore definition as the basis for the transformation (but you can still use Xtend). The stored data will be very close to the Rhapsody representation of UML.

Scenario 3: Storing the data in “Eclipse UML” ecore, transforming from that local representation

In this scenario, the data is stored in the format of the Eclipse-provided UML .ecore files, which represent a UML meta-model that is true to the standard. That means that your outgoing transformation conforms more closely to the standard UML meta-model, and you can use other integrations based on that meta-model. However, you have to map to that UML meta-model first.

There are several technical approaches to this. You can even do the conversion “on the fly”, implementing a variant of Scenario 1.

Technology as Open Source

The base technologies for these scenarios are available as open source / community source:

  • Eclipse EMF
  • Eclipse Xtend, QVTo (or other transformation languages)
  • Artop (available to AUTOSAR members)


Using the Xtend language for M2M transformation

In the last few months, we have been developing a customer project that centers around model-to-model transformation, with the target model being AUTOSAR.

In the initial concept phase, we had two major candidates for the M2M transformation language: Xtend and QVTo. After doing some evaluations, we decided that for this specific use case, Xtend was the technology of choice.


Comfort: Xtend has a number of features that make writing model-to-model transformations very concise and comfortable. The most important is the concise syntax for navigating over models, which helps to avoid the loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"
    ]

Traceability / One-Pass Transformation: Xtend provides so-called “create” methods for creating new target model elements in your transformation (see the sketch after this table). The main usage is to be able to write efficient code without having to implement a multi-pass transformation. This is solved by an internal cache that returns the same target object if the method is invoked for the same input objects more than once. However, the internally used caches can also be used to generate tracing information about the relationship between source and target model. We use that both for

  • writing out trace information to a log file
  • adding trace information about the source elements to the target elements

Both features have been added on top of “plain” Xtend, because we can use standard Java mechanisms to access the caches. In addition, we can also run a static analysis to see which source/target metaclass combinations exist in our codebase.

Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.
Long-Term Support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
Test Support: Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation that are documented in nice reports generated with standard Java tooling.
Code Coverage: Xtend compiles to plain Java. You can use any code coverage tool (such as JaCoCo).
Debugging: Debugger integration is fully supported to step through your code.
Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
Documentation: You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports.
Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
Active Annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
Full EMF Support: The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
IDE Integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.
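As mentioned in the table, a create method caches its result per input object. A minimal sketch (the factory and type names are illustrative, not taken from a concrete Artop build):

    // Returns the same ISignal instance whenever it is called for the same
    // source object; the internal cache doubles as source-to-target trace data.
    def create result: AutosarFactory.eINSTANCE.createISignal() toISignal(Signal source) {
        result.shortName = source.name
    }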

The Xtend syntax, on the other hand, is not based on any standard. But its performance, modularity and maintenance features are a strong argument for adding it as a candidate for model transformations.

Managing AUTOSAR Complexity with (abstract) Models

A certain number of developers in the automotive domain complain that the introduction of AUTOSAR and its tooling increases the development effort because of the complexity of the AUTOSAR standard. There are a number of approaches to solve this problem – one of the most common is the introduction of additional models.

The complexity of AUTOSAR

One often-heard comment about AUTOSAR is that the meta-model and the basic software configuration are rather complex and that it takes quite some time to get acquainted with them. One of the reasons is that AUTOSAR aspires to be an interchange standard for all software-engineering-related artifacts in automotive software design and implementation. As such, it has to address all potentially needed data. For the single project or developer, however, that also means being confronted with aspects they never had to consider before. Overall, one might argue that AUTOSAR does not increase the complexity, but exposes the inherent complexity of the industry for all to see. That, however, implies that a user might need more manageable views on AUTOSAR.

Example: Datatypes

The broad applicability of AUTOSAR leads to a meta-model that takes some analysis to understand. For application data types, the following meta-model extract shows the relevant elements for “specifying lower and upper ranges that constrain the applicable value interval”.

Extract from AUTOSAR SW Component Template

This is quite a flexible meta-model. However, all the compositions make it complex from the perspective of an ECU developer. The meta-model for enumerations is even more complex.

Extract from AUTOSAR SW Component Template: Enumerations

If you come from C with its lean notation, e.g. for an enumeration (illustrative sketch):
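    typedef enum {
        TRAFFIC_LIGHT_RED,
        TRAFFIC_LIGHT_YELLOW,
        TRAFFIC_LIGHT_GREEN
    } TrafficLight_t;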

then it seems quite obvious that the AUTOSAR model might be a bit scary. So a lot of projects and companies are looking for custom ways to deal with AUTOSAR.

Custom Tooling or COTS

In the definition of a software engineering tool chain, there will always be trade-offs between implementing custom tooling and deploying commercial tools. Especially in innovative projects, the teams will use innovative modeling and design methods and as such will need tooling specifically designed for that. The design and implementation of custom tooling is a core competency of tooling / methodology departments.

A more abstract view on AUTOSAR

The general approach is to provide a view on AUTOSAR (or a custom model) that is suited to the specific needs of a given user group and then generate AUTOSAR from that. Some technical ideas:

“Customize” the AUTOSAR model

In this approach, the AUTOSAR model’s own extension features are used to annotate / add custom data that is more manageable. This case can be subdivided into the BSW configuration and the other parts of the model.

BSW: Vendor specific parameter definitions

The ECUC configuration supports the definition of custom parameter definitions that can be used to abstract the complexity. Take the following example:

  • The standard DemEventParameter has various attributes (EventAvailable, CounterThreshold, PrestorageSupported).
  • Let’s assume that in our setting, only 3 different combinations are valid. So it would be nice if we could just say “combination 1” instead of setting these manually.
  • So we could define a VSMD that is based on the official DEM definition and has a QXQYEventParameter with only one attribute, “combination”.
  • From that, we could then generate the values of the DemEventParameter based on the combination attribute.

This is actually done in real life projects. Technologies to use could be:

  • COMASSO: create an Xpand script and use the “update model” feature
  • Artop: use the new workflow and ECUC accessors mechanism

System / SW Templates

The generic structure template of AUTOSAR allows us to add “special data” to model elements. According to the standard, “Special data groups (Sdgs) provide a standardized mechanism to store arbitrary data for which no other element exists of the data model” (from the Generic Structure specification). Similar to the BSW approach, we could add simple annotations to model elements and then derive more complex configuration from them (e.g. port communication attributes).

Adapt standard modeling languages

Another approach is to adapt standard (flexible) modeling languages like UML. Tools like Enterprise Architect provide powerful modeling features, and UML diagrams with their classes, ports and connections have striking similarities to the AUTOSAR component models (well, component models all have similarities, obviously). So a frequently seen approach is to customize the models through stereotypes / tagged values and maybe customize the tool itself.

The benefit of this approach is that it provides more flexibility in designing a custom meta-model, since you are not restricted by the AUTOSAR extension features. It is also useful for projects that already had a component-based modeling methodology prior to AUTOSAR and want to continue using these models.

A transformation to AUTOSAR is easily written with Artop and Xtend.

Defining custom (i.e. domain-specific) models

With modern frameworks like Xtext or Sirius, it has become very easy to implement your own meta-models and tooling around them. Xtext is well suited for all use cases where a textual representation of the model is desirable. Amongst others, that addresses a technical audience that is used to programming and wants fast and comfortable keyboard-based editing of models.
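To give an impression, a minimal sketch of an Xtext grammar for a small component DSL (all names illustrative):

    grammar org.example.components.ComponentsDsl with org.eclipse.xtext.common.Terminals

    generate componentsDsl "http://www.example.org/componentsDsl"

    Model:
        components+=Component*;

    Component:
        'component' name=ID '{' ports+=Port* '}';

    Port:
        ('provided' | 'required') name=ID ':' interface=ID;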

German readers will find a good industry success story in the 2014 special edition of “heise Developer Embedded” explaining the use of Xtext at ZF Friedrichshafen.

The future?

AUTOSAR is a huge standard that covers a broad spectrum. Specific user groups will need specific views on the standard to be able to work efficiently. Not all of these use cases will be covered by commercial tools, and innovation will drive new approaches.

Custom models with transformations to and from AUTOSAR are one solution, and the infrastructure required for that is provided by the community project Artop and the ecosystem around the Eclipse Modeling Framework (EMF).


Maven, Tycho, Surefire, AspectJ and Equinox Weaving

This is a very short technical post. But since it took us some time to find the solution (there is little information on the web), we wanted to add another possible search hit for others.

JUnit tests with AspectJ runtime weaving in Equinox work well from within Eclipse. Finding information on how to activate the same in Maven Tycho Surefire was more difficult.

You have to configure the tycho-surefire-plugin to use the Equinox weaving hook and to start the AspectJ weaving bundle. A hedged sketch of such a configuration (adapt the bundle version to your target platform):
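    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-surefire-plugin</artifactId>
      <configuration>
        <!-- Install the Equinox weaving hook as a framework extension -->
        <frameworkExtensions>
          <frameworkExtension>
            <groupId>p2.osgi.bundle</groupId>
            <artifactId>org.eclipse.equinox.weaving.hook</artifactId>
            <version>0.0.0</version> <!-- resolved from the target platform -->
          </frameworkExtension>
        </frameworkExtensions>
        <!-- Start the AspectJ weaving bundle early -->
        <bundleStartLevel>
          <bundle>
            <id>org.eclipse.equinox.weaving.aspectj</id>
            <level>2</level>
            <autoStart>true</autoStart>
          </bundle>
        </bundleStartLevel>
      </configuration>
    </plugin>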


In addition, you can pass debug information as a system property on the command line and evaluate it later on in the test setup.

Visualizing Build Action Manifests with COMASSO BSWDT

The AUTOSAR Build Action Manifest (BAMF) specifies a standardized exchange format / meta-model for the build steps required to build AUTOSAR software. In contrast to other build dependency tools such as make, BAMFs support the modeling of dependencies on model level. The dependencies can reference ECUC parameter definitions to indicate which parts of the BSW are going to be updated or consumed by a given build step. However, even for the standard basic software these dependencies can be very complex.

The COMASSO BSWDT tool provides a feature for visualizing these dependencies. In the example that we use for training the Xpand framework with the BSW code generators, we have three build steps for updating the model, validation and generation.

The order of the build steps is inferred by the build framework from the modeled dependencies. But in the build control view you cannot really see the network of dependencies, only the final flat list.

With File -> Export… we can export the BAMF dependencies as a GML graph file.

The generated file can then be opened in any tool that can deal with the GML format. We use the free edition of yEd to inspect the dependencies.


To optimize the visualization, you should choose the hierarchical layout.

The solid lines indicate model dependencies, dotted lines explicit dependencies. If a real-life project yields a graph that is too complex, you can easily delete nodes from the list on the bottom left.


Lightweight Product Line AUTOSAR BSW configuration with physical files and splitables

During AUTOSAR development, the configuration of the basic software is a major task. Usually, some of the parameters are going to be the same for all your projects (company-wide or product-line-wide) and some of them will be specific to a given project / ECU. And you might have a combination of both within the same BSW configuration container.

One of the approaches uses the AUTOSAR Split(t)able concept, which allows you to split the contents of a model element over more than one physical ARXML file.

So let us assume we are going to configure the DemGeneral container, that we have a company-wide policy for all projects to set DemAvailabilitySupport to 1, and that the DemBswErrorBufferSize within the same container should be set specifically for each ECU.

So let us create an .arxml for the company-wide settings. That might be distributed through some general channel, e.g. when setting up a new project.
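A hedged sketch of what this file might contain (AUTOSAR 4.x ECUC value elements; the surrounding package structure, schema attributes and the parameter kind are abbreviated / illustrative):

    <!-- company_wide.arxml -->
    <ECUC-MODULE-CONFIGURATION-VALUES>
      <SHORT-NAME>Dem</SHORT-NAME>
      <CONTAINERS>
        <ECUC-CONTAINER-VALUE>
          <SHORT-NAME>DemGeneral</SHORT-NAME>
          <PARAMETER-VALUES>
            <ECUC-NUMERICAL-PARAM-VALUE>
              <DEFINITION-REF DEST="ECUC-INTEGER-PARAM-DEF">/AUTOSAR/EcucDefs/Dem/DemGeneral/DemAvailabilitySupport</DEFINITION-REF>
              <VALUE>1</VALUE>
            </ECUC-NUMERICAL-PARAM-VALUE>
          </PARAMETER-VALUES>
        </ECUC-CONTAINER-VALUE>
      </CONTAINERS>
    </ECUC-MODULE-CONFIGURATION-VALUES>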

Now we can place an ECU-specific .arxml next to it in the same project:
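Again a hedged sketch; the package, module and container short names have to match the company-wide file so that the slices merge (the buffer size value is illustrative):

    <!-- ecu_specific.arxml: same /Dem/DemGeneral path as above -->
    <ECUC-CONTAINER-VALUE>
      <SHORT-NAME>DemGeneral</SHORT-NAME>
      <PARAMETER-VALUES>
        <ECUC-NUMERICAL-PARAM-VALUE>
          <DEFINITION-REF DEST="ECUC-INTEGER-PARAM-DEF">/AUTOSAR/EcucDefs/Dem/DemGeneral/DemBswErrorBufferSize</DEFINITION-REF>
          <VALUE>64</VALUE>
        </ECUC-NUMERICAL-PARAM-VALUE>
      </PARAMETER-VALUES>
    </ECUC-CONTAINER-VALUE>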

Now any tool that supports splitables (e.g. the COMASSO BSWDT tooling) will be able to create a merged model. The following observations are based on our own prototype of AspectJ-based splitable support for Artop. By simply using the context menu,


we will see the merged view.

You can see a few things in that merged view:

  • First, and most important, the two physically separated DemGeneral containers are now actually one. The tooling (code generators etc.) will only see one model, one CONF package and one DemGeneral.

Our demonstrator shows additional information as decorators after each element (decorators can be deactivated through the Eclipse preferences).

  • The “[x slices]” decorator shows how many elements are actually used to make up the joined element. We have two physical DemGeneral containers, so it says “2 slices”; for the single DemAvailabilitySupport it shows “1 slices”.
  • The decorator at the end shows the physical files that are involved in the merging of a given element.

Now if, for some reason, a new version of the company-wide .arxml is provided, we only need to exchange / update that file – there is no need for a complex diff and merge.

There are some use cases that cannot be covered with this approach, but it opens a lot of possibilities for managing variations in BSW configuration on file level.


Writing Fast tests for uniqueness in AUTOSAR (and other models) with Java 8

One of the recurring tasks when writing model checks for AUTOSAR (but also for other models) is to check for uniqueness according to some criterion. In AUTOSAR, this could be that all elements in a collection must be unique with respect to their shortName, or that all ComMChannels must be unique with respect to their channel ids.

Such a test is very easily written, but many of the ad-hoc implementations that we see perform badly when the model grows – which is easily missed when testing with small models.

Assume that we want to test ComMChannels for uniqueness based on their channel ids. A straightforward implementation with Xpand Check (in the COMASSO framework) checks, for each ComMChannel, whether any other ComMChannel in the same ComM has the same id. But this costs a lot of performance: the full channel list is traversed once for every channel, giving quadratic runtime. The trick is to loop only once and use a set or map to store what has already been seen. This can be done in Xpand and other languages, but Java 8’s lambdas make it especially nice, since we can have a single method that collects the duplicates. A hedged reconstruction of such a helper (names are illustrative):
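    import java.util.Collection;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class UniquenessUtil {

        /**
         * Groups the elements by the given key and keeps only the groups with
         * more than one member, i.e. the duplicates. The collection itself is
         * traversed only once.
         */
        public static <T, K> Map<K, List<T>> duplicatesBy(
                Collection<T> elements, Function<? super T, ? extends K> key) {
            return elements.stream()
                    .collect(Collectors.groupingBy(key))
                    .entrySet().stream()
                    .filter(e -> e.getValue().size() > 1)
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }
    }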

This code uses both generic type parameters and functions passed as arguments to provide a very generic interface. We can use it anywhere to search for duplicates in a collection.
Now looking for duplicates in ComMChannel ids would look like this:
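    // Hedged usage sketch; getComMChannels() / getComMChannelId() stand for the
    // accessors of the generated AUTOSAR model code, reportError for whatever
    // reporting API the surrounding framework offers.
    Map<Long, List<ComMChannel>> duplicates =
            UniquenessUtil.duplicatesBy(comM.getComMChannels(), ComMChannel::getComMChannelId);
    duplicates.forEach((id, channels) ->
            channels.forEach(c -> reportError("Duplicate ComM channel id " + id, c)));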

And thanks to the generic signature, we can use the same method to look for duplicate shortNames:
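    // Same helper, different key extractor:
    Map<String, List<ComMChannel>> byShortName =
            UniquenessUtil.duplicatesBy(comM.getComMChannels(), ComMChannel::getShortName);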

Similar code could also be written in the Xtend2 language.

With the new Sphinx Check framework, a check in an Artop-based tool might look like this (a hedged sketch; the check framework API and the AUTOSAR accessors are illustrative):
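    // Hedged sketch: validator and annotation names follow
    // org.eclipse.sphinx.emf.check; the issue(...) call and the AUTOSAR
    // accessors are illustrative, not a verified Artop API.
    public class ComMCheckValidator extends AbstractCheckValidator {

        @Check(constraint = "DuplicateComMChannelId")
        public void checkUniqueChannelIds(ComM comM) {
            UniquenessUtil.duplicatesBy(comM.getComMChannels(), ComMChannel::getComMChannelId)
                    .values().stream()
                    .flatMap(List::stream)
                    .forEach(channel -> issue(channel, null));
        }
    }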