Using AspectJ to analyze exceptions in production systems

For several automotive customers, we are building (Eclipse-based) systems that process engineering data – not only interactively, but also in nightly jobs. From time to time, a problem may occur in a test or even production system and throw an exception.

By default, Java obviously only shows the stack trace. For a quick analysis, you might also like to know which arguments were passed to the method in which the exception was thrown.

We are using AspectJ to “weave” relevant plugins (we use runtime weaving). The aspect looks like this:
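The original listing did not survive; a minimal sketch of such an aspect (the pointcut package `com.example..*` is a placeholder, and the real aspect may differ) could look like:

```aspectj
public aspect MethodArgumentLogger {

    // Placeholder pointcut: match method executions in the woven plugins.
    pointcut tracedMethods() : execution(* com.example..*.*(..));

    // Runs after a matched method exits by throwing; prints the arguments
    // that were passed to it. As the exception propagates, this fires for
    // every method up the call stack, printing its arguments in sequence.
    after() throwing (Throwable t) : tracedMethods() {
        System.err.println(thisJoinPoint.getSignature().toLongString());
        for (Object arg : thisJoinPoint.getArgs()) {
            System.err.println("  arg: " + String.valueOf(arg));
        }
    }
}
```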

This adds an aspect that is executed after an exception has been thrown. It will print a string representation of the method’s arguments. If the exception is propagated, the arguments of all methods up the call stack will be printed in sequence.

Tycho 0.24, POM-less builds and Eclipse m2e Maven integration

A very short technical article which might be useful in the next few days.

Tycho 0.24 has been released, and one of its most interesting features is the support for POM-less builds. The Tycho 0.24 release notes explain the use of the extension. However, there is a problem with the .mvn/extensions.xml not being found when using Eclipse variables in the “Base Directory” field of the run configuration.

I have created an m2e Bugzilla entry https://bugs.eclipse.org/bugs/show_bug.cgi?id=481173 showing the problem and a workaround.

Update: The m2e team fixed the bug within 48 hours. Very nice. Should be in one of the next builds.

Automotive MDSD – What pickle?

Over at the Modeling Languages website, Scott Finnie has started a series of posts detailing his view on the status of model-driven approaches. He concludes his first post with the statement “No wonder we’re in a pickle.” Reading that, I found myself thinking that I don’t feel in a pickle at all.

Since the comment section of modeling-languages.com has limited formatting for a longish post, I’d like to put forward my 2c here.

Yes, model-driven development is not as visible and hyped as many other current technologies and, compared to the entire software industry, its percentage of use might not be impressive. But looking at my focus industry, automotive, I think we have come a long way in the last 10 years.

Model-driven development is an important and accepted part of the automotive engineer’s toolbox. Most of the industry thinks of ML/SL and Ascet when hearing the MD* terms, but there are actually many other modeling techniques in use.

Architecture models are one of the strongest trends, not least driven by the AUTOSAR standard. But even aside from (or complementary to) AUTOSAR, the industry is investing in architecture models in UML or DSLs, often in connection with feature modeling and functional modeling. Code is generated from those architectural models, with the implementation being generated from ML/SL, Ascet or state machines, or written manually. The quality of the engineering data is improving noticeably through this approach.

Companies introduce custom DSLs to model architecture, network communication, test cases and variants, and generate a huge set of artefacts from them. Models and transformations are also used to connect the AUTOSAR domain to the infotainment domain, by transforming to and from Genivi’s Franca, which is itself used to generate code for the infotainment platform.

Model-driven approaches are popping up everywhere, and a lot has happened in the past few years. One of the key factors in that development, from my point of view, is the availability of low-cost, accessible tooling. I would consider two tools to be most influential in this regard:

  • Enterprise Architect made UML modeling available at a reasonable price, opening it up to a much wider market. That made it possible for more projects and individuals to build their MDSD approach on UML and (custom) generators.
  • Xtext made a great framework and tool for domain-specific languages available at no cost to companies, projects and enthusiasts. It is always fun and amazing to come into an existing project and see what teams have already built on Xtext. Xtext itself, of course, stands on the shoulders of giants: Eclipse and the Eclipse Modeling Framework (EMF).

Maybe Eclipse Sirius will be able to do for graphical DSLs what Xtext has done for textual modeling. 15 years ago I was working for a company that offered UML modelling and one of the first template-based code generation technologies at a high price. The situation is completely different now – and very enjoyable.

EclipseCon Europe 2015 from an Automotive Perspective

As Eclipse is established as a tooling platform in the automotive industry, the EclipseCon Europe conference in Ludwigsburg is an invaluable source of information. This year’s EclipseCon is full of interesting talks. Here is a selection from my “automotive tooling / methodology” perspective.

  • The Car – Just Another Thing in the Internet of Things?

    Michael Würtenberger is the head of BMW CarIT. In addition to this interesting talk, BMW CarIT was also one of the strong drivers of the Eclipse/EMF-based AUTOSAR tooling platform “Artop”, and it is very interesting to learn about their topics.

  • openETCS – Eclipse in the Rail Domain
  • Rover Use Case, Specification, design and implementation using Polarsys Tools
  • Scenarios@run.time – Modeling, Analyzing, and Executing Specifications of Distributed Systems
  • The multiple facets of the PBS (Product Breakdown Structure)
  • openMDM5: From a fat client to a scalable, omni-channel architecture
  • Enhanced Project Management for Embedded C/C++ Programming using Software Components

    All industries are also looking into what the “other” industries are doing, to learn about strengths and weaknesses.

  • Brace Yourself! With Long Term Support: Long-term support is an important issue when choosing a technology as a strategic platform. This talk should provide information to use in discussions when arguing for/against Eclipse and Open Source.

  • The Eclipse Way: Learn and understand how Eclipse manages to develop such a strong ecosystem

  • My experience as an Eclipse contributor:
    As more and more companies from the automotive domain actively participate in Open Source, you can learn what that means from a contributor’s view.

On the technical side, a lot of talks are of interest for automotive tooling, including:

  • Ecore Editor – Reloaded
  • CDO’s New Clothes
  • GEF4 – Sightseeing Mars
  • All Sirius talks
  • All Xtext talks
  • Modeling Symposium
  • Customizable Automatic Layout for Complex Diagrams Is Coming to Eclipse

    Since EMF is a major factor in Eclipse in automotive, these talks provide interesting information on modeling automotive data.

  • EMF Compare + EGit = Seamless Collaborative Modeling
  • News from Git in Eclipse
  • Tailor-made model comparison: how to customize EMF Compare for your modeling language

    Storing models in files has many advantages over storing them in a database. These technologies help in setting up a model management infrastructure that satisfies a lot of requirements.

  • 40 features/400 plugins: Operating a build pipeline with high-frequently updating P2 repositories
  • POM-less Tycho builds
  • Oomph: Eclipse the Way You Want It
  • High productivity development with Eclipse and Java 8
  • Docker Beginners Tutorial

    These talks will introduce the latest information on build management and development tools.

Of course there are a lot more talks at EclipseCon that are also of interest. Check out the program. It is very interesting this year.

Sphinx’ ResourceSetListener, Notification Processing and URI change detection

The Sphinx framework adds model management functionality to the base EMF tooling. Since it was developed within the Artop/AUTOSAR activities, it also includes code to make sure that the name-based references of AUTOSAR models are always correct. This includes some post-processing after model changes, working on the notifications that are generated within a transaction.

Detecting changes in reference URIs and dirty state of resources

Assume that we have two resources, SR.arxml (source) and TR.arxml (target), in the same ResourceSet (and on disk). TR contains an element TE with the qualified name /AUTOSAR/P/P2/Target, which is referenced from a source element SE contained in SR. That means the string “/AUTOSAR/P/P2/Target” is found somewhere in SR.arxml.

Now what happens if some code changes TE’s name to NewName and saves the resource TR.arxml? If only TR.arxml were saved, we would have an inconsistent model on disk: SR.arxml would still contain “/AUTOSAR/P/P2/Target” as a reference, which could not be resolved the next time the model is loaded.

We see that some model modifications affect not only the resource of the modified elements, but also referencing resources. Sphinx determines the “affected” other resources and marks them as “dirty”, so that they are serialized the next time the model is written, making sure that name-based references stay correct.

But obviously, only changes that affect the URIs of referencing elements should cause other resources to be set to dirty. Features that are not involved should not have that effect. Sphinx offers the possibility to specify detection strategies per metamodel; the interface is IURIChangeDetectorDelegate.

The default implementation for XMI based resources is:
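The listing itself is missing from this post; conceptually (the method signature below is an assumption modeled on the interface name, not actual Sphinx source), the default delegate reports every modified EObject as a potential URI change:

```java
// Conceptual sketch only, not actual Sphinx code: for XMI-based resources
// every modification may affect an element's URI, so no filtering is done.
public List<URIChangeNotification> detectChangedURIs(Notification notification) {
    Object notifier = notification.getNotifier();
    if (notifier instanceof EObject) {
        // No filtering: any modified EObject yields a change notification
        return Collections.singletonList(new URIChangeNotification((EObject) notifier));
    }
    return Collections.emptyList();
}
```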


This will cause a change notification to be generated for any EObject that is modified; of course, we do not want that. In contrast, the beginning of the Artop implementation of AutosarURIChangeDetectorDelegate looks like this:
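That listing is missing here as well; its effect, as described below, can be sketched like this (the feature literal name is an assumption, not the actual Artop source):

```java
// Sketch, not the actual Artop source: only shortName changes on
// Identifiables are treated as URI changes, since shortName drives
// AUTOSAR's name-based reference URIs.
public List<URIChangeNotification> detectChangedURIs(Notification notification) {
    Object notifier = notification.getNotifier();
    if (notifier instanceof Identifiable
            && notification.getFeature() == AutosarPackage.Literals.REFERRABLE__SHORT_NAME) {
        return Collections.singletonList(new URIChangeNotification((EObject) notifier));
    }
    return Collections.emptyList();
}
```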


A notification on a feature is only processed if the modified object is an Identifiable and the modified feature is the one used for URI calculation (shortName in AUTOSAR). There is additional code in this fragment to detect changes in containment hierarchy, which is not shown here.

So if you use Sphinx for your own metamodel with name-based referencing, have a look at AutosarURIChangeDetectorDelegate and create your own custom implementation for efficiency.

LocalProxyChangeListener

In addition, Sphinx detects objects that have been removed from the containment tree and updates references by turning the removed objects into proxies! That might be unexpected if you work with a removed object later on. The rationale is well explained in the Javadoc of LocalProxyChangeListener:
Detects {@link EObject model object}s that have been removed from their
{@link org.eclipse.emf.ecore.EObject#eResource() containing resource} and/or their {@link EObject#eContainer
containing object} and turns them as well as all their directly and indirectly contained objects into proxies. This
offers the following benefits:

  • After removal of an {@link EObject model object} from its container, all other {@link EObject model object}s that
    are still referencing the removed {@link EObject model object} will know that the latter is no longer available but
    can still figure out its type and {@link URI}.
  • If a new {@link EObject model object} with the same type as the removed one is added to the same container again
    later on, the proxies which the other {@link EObject model object}s are still referencing will be resolved as usual
    and therefore get automatically replaced by the newly added {@link EObject model object}.
  • In big models, this approach can yield significant advantages in terms of performance because it helps avoiding
    full deletions of {@link EObject model object}s involving expensive searches for their cross-references and those of
    all their directly and indirectly contained objects. It does all the same not lead to references pointing at
    “floating” {@link EObject model object}s, i.e., {@link EObject model object}s that are not directly or indirectly
    contained in a resource.


AUTOSAR: OCL, Xtend, oAW for validation

In a recent post, I wrote about model-to-model transformation with Xtend. In addition to M2M transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other frameworks, such as OCL, which are also candidates; Xpand (formerly known as oAW) is used in COMASSO. This blog post sketches some questions and issues to consider when choosing a framework for model validation.

Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing such an attribute with the standard getter method, you cannot distinguish whether it has been explicitly set to a value equal to the default or never been touched at all.

If this kind of check is relevant, the validation technology should support access to the EMF methods that provide the explicit “is set” predicate for a value.
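In EMF, that predicate is available via eIsSet on the reflective API (or the generated isSetXyz() methods); a small helper might look like this, where the feature name is whatever attribute your check targets:

```java
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;

// True only if the attribute was explicitly set, even if its current
// value happens to equal the default value.
static boolean isExplicitlySet(EObject element, String featureName) {
    EStructuralFeature feature = element.eClass().getEStructuralFeature(featureName);
    return feature != null && element.eIsSet(feature);
}
```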

Inverse References

With AUTOSAR, a large number of checks involve some logic to see if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually these references are unidirectional, and traversal of the model is required to find referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A direct solution is to traverse the entire model or use the utility functions of EMF. However, if the technology allows access to frameworks like IncQuery, a large number of queries / checks can be sped up significantly.
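As a baseline, EMF's own utility for inverse lookups can be used; it performs an exhaustive search on each call, which is exactly the cost that index-based frameworks like IncQuery avoid:

```java
import java.util.Collection;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature.Setting;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.util.EcoreUtil;

// Finds every setting that references 'target' anywhere in the resource set.
// This traverses the whole model, so repeated calls are expensive.
static Collection<Setting> findReferencesTo(EObject target, ResourceSet resourceSet) {
    return EcoreUtil.UsageCrossReferencer.find(target, resourceSet);
}
```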

Error Reporting

Error reporting is central to the usability of the generated error messages. This involves a few aspects that can be explained with a simple example: consider that we want to check that each PDU has a unique id.

Context

In Xpand (and similar in OCL), a check could look like:
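The original listing is lost; schematically, such a uniqueness check in the oAW Check language might read like this (both the syntax details and the navigation to all PDUs are approximations, not taken from a real metamodel):

```
// Schematic only: scans all PDUs for every single PDU that is checked
context PDU ERROR "PDU id " + id + " is not unique" :
    ((Model) this.eRootContainer).pdus.select(p | p.id == this.id).size == 1;
```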

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

  1. Keep the context on the PDU level, but allow some efficient caching so that the code is not executed as often. However, that involves some additional effort to make sure the caches are created / torn down at the right time (e.g. after model modification).
  2. Move the context up to package or model level and use a framework that allows generating errors/warnings not only for the elements in the context, but for any other (sub-)elements. The Sphinx Check framework supports this.
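Option 1, the caching idea, can be illustrated in plain Java (the Pdu class here is a stand-in for illustration, not an AUTOSAR type): compute the set of duplicate ids once, then check each PDU in constant time.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

class Pdu {
    final String id;
    Pdu(String id) { this.id = id; }
}

public class UniqueIdCheck {
    // Returns the ids that occur more than once. Built in a single pass,
    // so the per-PDU lookup below is O(1) instead of O(n).
    static Set<String> duplicateIds(List<Pdu> pdus) {
        Map<String, Long> counts = pdus.stream()
                .collect(Collectors.groupingBy(p -> p.id, Collectors.counting()));
        return counts.entrySet().stream()
                .filter(e -> e.getValue() > 1)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Pdu> pdus = List.of(new Pdu("a"), new Pdu("b"), new Pdu("a"));
        Set<String> dup = duplicateIds(pdus);
        for (Pdu p : pdus) {
            if (dup.contains(p.id)) {
                System.out.println("Duplicate PDU id: " + p.id);
            }
        }
    }
}
```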

Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider e.g. this example from the OCL documentation:
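The quoted listing is missing here; the library example in the Eclipse OCL documentation uses an invariant along these lines (reproduced from memory, so details may differ):

```ocl
context Book
inv SufficientCopies('There are insufficient copies for loans of ' + name):
  library.loans->select(book = self)->size() <= copies
```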

The fragment that computes the actual count is used both in the error message and in the calculation of the predicate. In this case, that is no big deal, but when the calculation gets more complex, it can become annoying. Possible approaches are:

  1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, that should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
  2. In Xpand, any information can be attached to any model element. So in the check body, the result of the complex calculation can be attached to the model element, and that information is retrieved when calculating the error message.
  3. In the Sphinx Check framework, error messages can be calculated from within the check body.

User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for users to be able to work with and fix check results. For the development process, it is beneficial if that kind of description is stored close to the actual tests. This could be achieved by analysing comments near the validations, tools like Javadoc, etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP, the dynamic help functionality of Eclipse Help can be used to generate documentation for all registered checks on the fly.


Here are some additional features of the Xtend language that come in handy when writing validations:

  • Comfort: Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax to navigate over models. This helps to avoid the loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"
    ]

  • Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks.
  • Long-term support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
  • Test support: Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases, documented in reports generated with standard Java tooling.
  • Code coverage: Xtend compiles to plain Java. You can use any code coverage tool (such as JaCoCo).
  • Debugging: Debugger integration is fully supported, so you can step through your code.
  • Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
  • Documentation: You can use standard Javadoc in your Xtend code and use the standard tooling to get reports.
  • Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
  • Active annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the system to custom requirements.
  • Full EMF support: Xtend code operates on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
  • IDE integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.

 

Considering Agile for your tool chain development

Developing and integrating complex toolchains in automotive companies is a challenging task. A number of those challenges are directly addressed by the principles behind the “Agile Manifesto”. So it is worthwhile to see what “Agile” has to offer for them, independent of a specific framework (Scrum, Kanban, …).

Keeping the (internal) customer happy

Tool departments and ECU development projects often have a special kind of relationship. The ECU projects often feel that the tooling they are provided with does not fulfill their requirements with respect to functionality, availability or support. Frequently, the two sides are in very different organizational units.

Principles behind the Agile Manifesto
Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.
So if agile lives up to its promises (and is correctly implemented!), then it should address these issues.

Dealing with a changing environment

For any project that is not short-term, your environment (and thus your requirements) is sure to change. Especially tooling for new technologies (like AUTOSAR Ethernet, autonomous driving, etc.) involves a lot of feedback loops based on experience in the project. And opportunity for change is abundant beyond that:

  • Your customer’s timeline: the ECU needs to be finished earlier, later, not at all – wait, now it’s back to the original schedule
  • Shifting priorities in the functionality needed by the customer
  • Availability of new technologies: we have seen planned work packages rendered completely redundant by the same functionality becoming available in open source

 

Principles behind the Agile Manifesto
Welcome changing requirements, even late in
development. Agile processes harness change for
the customer’s competitive advantage.

Getting customer commitment

In some settings, a tool or tool chain is discussed with the customer, then implemented independently by a team and delivered upon completion, only to find the customer frustrated with what they have been given. This can lead to finger-pointing and dissatisfaction. A good way to get customer commitment is to integrate customers into the development, enabling early feedback and improvement suggestions. Because then it is “our” tool, not “their” tool. And everybody likes “our” tool more than “their” tool.

 

Principles behind the Agile Manifesto
Business people and developers must work
together daily throughout the project.

Agile is not simple

Agile methodologies and frameworks address some of the typical challenges of tool departments. However, “Agile” is no panacea, and doing it right is not necessarily an easy task. It forces change and it needs management support – otherwise you might just be spinning buzzwords. But for many projects it is well worth the journey.


Integrating Rhapsody in your AUTOSAR toolchain

UML tools such as Enterprise Architect or Rhapsody (and others) are well established in the software development process. Sometimes the modeling guidelines follow a custom modeling style, e.g. with specific profiles. So when you are modeling for AUTOSAR systems, at some point you are faced with the problem of transforming your model to AUTOSAR.

For customer projects, we have analyzed and implemented different strategies.

Artop as an integration tool

First of all, if you are transforming to AUTOSAR, the recommendation is to transform to an Artop model and let Artop do all the serialization. Directly creating the AUTOSAR XML (.arxml) is cumbersome, error-prone and generally “not fun”.

Getting data out: Files or API

To access the data in Rhapsody, you could either read the stored files or access the data through the API of Rhapsody. This post describes aspects of the second approach.

Scenario 1: Accessing directly without intermediate storage

In this scenario, the transformation uses the “live” data from a running Rhapsody instance as its data source. Rhapsody provides a Java-based API (basically a wrapper around the Windows COM API), so it is very easy to write a transformation from “Rhapsody Java” to “Artop Java”. A recommended technology is the open source Xtend language, since it provides a lot of useful features for this use case (see the description in this blog post).

Scenario 2: Storing the data from Rhapsody locally, transforming from that local representation

In this scenario, the data from Rhapsody is extracted via the Java API and stored locally. Further transformation steps can work on that stored copy. A feasible approach is to store the copied data in EMF. With reflection and other approaches, you can create the required .ecore definitions from the Java classes provided by Rhapsody. After that, you can also use transformation technologies that require an .ecore definition as the basis for the transformation (but you can still use Xtend). The stored data will be very close to the Rhapsody representation of UML.

Scenario 3: Storing the data in “Eclipse UML” ecore, transforming from that local representation

In this scenario, the data is stored in the format of the Eclipse-provided UML .ecore files, which represent a UML metamodel that is true to the standard. That means your outgoing transformation conforms more closely to the standard UML metamodel, and you can use other integrations based on that metamodel. However, you have to map to that UML metamodel first.

There are several technical approaches to this. You can even do the conversion “on the fly”, implementing a variant of Scenario 1 with on-the-fly conversion.

Technology as Open Source

The base technologies for the scenarios are available as open source / community source:

  • Eclipse EMF
  • Eclipse Xtend, QVTO (or other transformation languages)
  • Artop (available to AUTOSAR members)

 

Using the Xtend language for M2M transformation

In the last few months, we have been developing a customer project that centers around model-to-model transformation, with the target model being AUTOSAR.

In the initial concept phase, we had two major candidates for the M2M transformation language: Xtend and QVTO. After some evaluation, we decided that for this specific use case, Xtend was the technology of choice.

 

  • Comfort: Xtend has a number of features that make writing model-to-model transformations very concise and comfortable. The most important is the concise syntax to navigate over models. This helps to avoid the loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"
    ]

  • Traceability / one-pass transformation: Xtend provides so-called “create” methods for creating new target model elements in your transformation. Their main use is to write efficient code without having to implement a multi-pass transformation: an internal cache returns the same target object if the method is invoked more than once for the same input objects. The internally used caches can also be used to generate tracing information about the relationship between source and target model. We use that both for writing trace information to a log file and for adding trace information about the source elements to the target elements. Both features could be implemented on top of “plain” Xtend, because standard Java mechanisms give access to the caches. In addition, we can run a static analysis to see which source/target metaclass combinations exist in our codebase.
  • Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.
  • Long-term support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
  • Test support: Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation, documented in reports generated with standard Java tooling.
  • Code coverage: Xtend compiles to plain Java. You can use any code coverage tool (such as JaCoCo).
  • Debugging: Debugger integration is fully supported, so you can step through your code.
  • Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
  • Documentation: You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports.
  • Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
  • Active annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
  • Full EMF support: The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
  • IDE integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.
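The create-method mechanism can be sketched as follows (the metamodel and factory names are invented for illustration, not taken from a real project):

```xtend
// A 'create' method caches its result per input object: calling
// transform(s) twice for the same source returns the identical target,
// which enables one-pass transformation and source-to-target tracing.
def create target: TargetFactory.eINSTANCE.createSwComponent
transform(SourceComponent s) {
    target.shortName = s.name
}
```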

The Xtend syntax, on the other hand, is not based on any standard. But its performance, modularity and maintenance features are a strong argument for adding it as a candidate for model transformations.

Managing AUTOSAR Complexity with (abstract) Models

A certain number of developers in the automotive domain complain that the introduction of AUTOSAR and its tooling increases development effort because of the complexity of the AUTOSAR standard. There are a number of approaches to this problem – one of the most common is the introduction of additional models.

The complexity of AUTOSAR

One often-heard comment about AUTOSAR is that the meta-model and the basic software configuration are rather complex and that it takes quite some time to get acquainted with them. One of the reasons is that AUTOSAR aspires to be an interchange standard for all software-engineering-related artefacts in automotive software design and implementation. As such, it has to address all potentially needed data. For the individual project or developer, however, that also means being confronted with many aspects they never had to consider before. Overall, one might argue that AUTOSAR does not increase the complexity, but exposes the inherent complexity of the industry for all to see. That, however, implies that a user might need more manageable views on AUTOSAR.

Example: Datatypes

The broad applicability of AUTOSAR leads to a meta-model that takes some analysis to understand. For application data types, the following meta-model extract shows the relevant elements for “specifying lower and upper ranges that constrain the applicable value interval”.

Extract from AUTOSAR SW Component Template


This is quite a flexible meta-model. However, all the compositions make it complex from the perspective of an ECU developer. The meta-model for enumerations is even more complex.


Extract from AUTOSAR SW Component Template: Enumerations

If you come from C, with its lean notation for defining such types, it seems quite obvious that the AUTOSAR model might be a bit scary. So a lot of projects and companies are looking for custom ways to deal with AUTOSAR.

Custom Tooling or COTS

In the definition of a software engineering tool chain, there will always be trade-offs between implementing custom tooling and deploying commercial tools. Especially in innovative projects, the teams will use innovative modeling and design methods and as such will need tooling specially designed for them. The design and implementation of custom tooling is a core competency of tooling / methodology departments.

A more abstract view on AUTOSAR

The general approach will be to provide a view on AUTOSAR (or a custom model) that is suited to the special needs of a given user group and then generate AUTOSAR from that. Some technical ideas would be:

“Customize” the AUTOSAR model

In this approach, the AUTOSAR model’s features are used to annotate / add custom data that is more manageable. This case can be subdivided into the BSW configuration and the other parts of the model.

BSW: Vendor specific parameter definitions

The ECUC configuration supports the definition of custom parameter definitions that can be used to abstract the complexity. Take the following example:

  • The standard DemEventParameter has various attributes (EventAvailable, CounterThreshold, PrestorageSupported).
  • Let’s assume that in our setting only 3 different combinations of these are valid. So it would be nice if we could just say “combination 1” instead of setting them manually.
  • So we could define a VSMD based on the official DEM definition, with a QXQYEventParameter that has only one attribute, “combination”.
  • From that, we could then generate the DemEventParameter values based on the combination attribute.

This is actually done in real life projects. Technologies to use could be:

  • COMASSO: create an Xpand script and use the “update model” functionality
  • Artop: use the new workflow and ECUC accessors mechanism

System / SW templates

The generic structure template of AUTOSAR allows us to add “special data” to model elements. According to the standard, “Special data groups (Sdgs) provide a standardized mechanism to store arbitrary data for which no other element exists of the data model” (from the Generic Structure specification). Similar to the BSW approach, we could add simple annotations to model elements and then derive more complex configuration from them (e.g. port communication attributes).
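In ARXML, such an annotation is a small special-data block; a sketch (GIDs, values and the exact nesting are invented/simplified for illustration) might look like:

```xml
<ADMIN-DATA>
  <SDGS>
    <SDG GID="CommAnnotation">
      <!-- One simple annotation from which richer port communication
           attributes could be derived by a generator -->
      <SD GID="profile">fast-sender</SD>
    </SDG>
  </SDGS>
</ADMIN-DATA>
```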

Adapt standard modeling languages

Another approach is to adapt standard (flexible) modeling languages like UML. Tools like Enterprise Architect provide powerful modeling features, and UML diagrams with their classes, ports and connections have striking similarities to AUTOSAR component models (well, component models all have similarities, obviously). So a frequently seen approach is to customize the models through stereotypes / tagged values and maybe customize the tool.

The benefit of this approach is that it provides more flexibility in designing a custom meta-model by not being restricted by the AUTOSAR extension features. This approach is also useful for projects that already had a component-based modeling methodology prior to AUTOSAR and want to be able to continue using these models.

A transformation to AUTOSAR is easily written with Artop and Xtend.

Defining custom (i.e. domain-specific) models

With modern frameworks like Xtext or Sirius, it has become very easy to implement your own meta-models and tooling around them. Xtext is well suited for all use cases where a textual representation of the model is required. Amongst others, that addresses a technical audience that is used to programming and wants fast and comfortable keyboard-driven editing of models.

German readers will find a good industry success story in the 2014 special edition of “heise Developer Embedded” explaining the use of Xtext at ZF Friedrichshafen.

The future?

AUTOSAR is a huge standard that covers a broad spectrum. Specific user groups will need specific views on the standard to be able to work efficiently. Not all of those use cases will be covered by commercial tools and innovation will drive new approaches.

Custom models with transformations to and from AUTOSAR are one solution and the infrastructure required for that is provided in the community project Artop and the ecosystem around the Eclipse modeling framework EMF.