Meanderings on AUTOSAR model repositories (and other models)

When working with AUTOSAR models, model storage and management is a topic that has to be solved. In this blog post, I coarsely discuss some thoughts on it. It is intended as a collection of topics that projects using AUTOSAR models have to address; it is in no way complete, and each of the issues introduced would fill a lot of technical discussion on its own.

Storage Approaches

Databases

Types of Databases

  1. Relational databases are often the first thing that comes to mind when discussing technologies for model repositories. However, AUTOSAR models (like many other meta-models for engineering data) consist of model elements with a lot of relationships – effectively a complex graph. Relational databases per se do not match this structure well (see the object-relational impedance mismatch, e.g. in Philip Hauer’s post).
  2. NoSQL databases: There are a number of alternatives to relational databases, known under the term “NoSQL”; this includes key-value stores, document stores etc. But since the structure of AUTOSAR models is an element graph, the most natural candidate seems to be a graph database.
  3. Graph databases treat nodes and their relationships as first-class citizens and have a more natural correspondence to the engineering models.

Database-EMF integration

Eclipse provides technologies to store EMF-based models in database repositories of various kinds. With CDO, it is quite easy to store EMF-based models in a relational backend (H2, Oracle), and the combination is very powerful (in the IMES project, we run an embedded H2 relational database in Eclipse with CDO to store AUTOSAR models, as well as a centralized server to show the replication mechanisms of such a setup).
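To give an impression of how little code this takes, here is a minimal sketch of storing a model via CDO over TCP. The repository name, address and model variable are hypothetical examples, and error handling is omitted:

```java
import org.eclipse.emf.cdo.eresource.CDOResource;
import org.eclipse.emf.cdo.net4j.CDONet4jSessionConfiguration;
import org.eclipse.emf.cdo.net4j.CDONet4jUtil;
import org.eclipse.emf.cdo.session.CDOSession;
import org.eclipse.emf.cdo.transaction.CDOTransaction;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.net4j.connector.IConnector;
import org.eclipse.net4j.tcp.TCPUtil;
import org.eclipse.net4j.util.container.IPluginContainer;

public final class CdoStoreExample {

    public static void store(EObject autosarModelRoot) throws Exception {
        // Connect to a CDO server (repository name and address are examples).
        IConnector connector = TCPUtil.getConnector(IPluginContainer.INSTANCE, "localhost:2036");
        CDONet4jSessionConfiguration config = CDONet4jUtil.createNet4jSessionConfiguration();
        config.setConnector(connector);
        config.setRepositoryName("autosar");

        CDOSession session = config.openNet4jSession();
        CDOTransaction transaction = session.openTransaction();

        // A CDOResource behaves like a normal EMF resource, but its contents
        // live in the database backend instead of an XML file.
        CDOResource resource = transaction.getOrCreateResource("/models/ecu1");
        resource.getContents().add(autosarModelRoot);
        transaction.commit();
        session.close();
    }
}
```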

However, there are drawbacks (schema integration) as well as advantages (relational database products are well understood by IT departments). There are also projects that provide other CDO backends or more direct integrations with NoSQL databases (such as Neo4j, OrientDB).

Flat files

AUTOSAR defines an XML-based exchange format for AUTOSAR models. Using that for storage is an obvious idea, and Artop does exactly that by using the AUTOSAR XML format as the storage format for EMF models.

Source code control Systems

Since the models are stored as text files, they can be managed by putting them into source code control systems such as SVN or Git.

Performance

One criterion for the selection of a specific technology is performance.

Memory

While working with AUTOSAR BSW configurations for ECUs in the days of 32-bit machines and JVMs, memory limitations actually posed a problem for the file-based approach. In settings like this, setting up a database server with the full models and clients that access them is one approach. For Robert Bosch GmbH, we did a study on CDO performance that was presented at EclipseCon. In the end, for this particular setting it was considered more economic to provide 64-bit systems to the developers than to set up a more complex infrastructure with a database backend. 64-bit systems can easily hold the data for the BSW configuration of an ECU (and more).

Speed

It is often argued that with a remote database server, availability of the model is “instant”, since no local loading of models is required. However, when looking at BSW configurations with tools like COMASSO BSWDT, it turns out that loading times are still reasonable even with a file-based approach.

Specifics of AUTOSAR models

Dangling References / References by fully qualified name

In AUTOSAR XML, the model need not necessarily be self-contained. A reference to another model element is specified by a fully qualified name (FQN), and that element need not be in the same .arxml file. This is useful for model management, since you can combine partial models into a larger model in a very flexible way. However, it means that these references need to be resolved to find the target element. And there is a technical issue: the referring element specifies the reference as a fully qualified name such as ‘A/B/C’, but to calculate the FQN for a given element, you have to take its entire containment hierarchy into account. That means that if the short name of any element changes, this affects the fully qualified names of all of its descendants.
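As a minimal sketch of what is involved, the FQN of an element can be derived by walking up the EMF containment hierarchy. The Referrable interface below is a simplified stand-in for the corresponding AUTOSAR meta-model type:

```java
import java.util.ArrayDeque;
import java.util.Deque;

import org.eclipse.emf.ecore.EObject;

public final class FqnUtil {

    // Simplified stand-in for the AUTOSAR meta-model type carrying a short name.
    public interface Referrable {
        String getShortName();
    }

    /** Derives the FQN from the containment hierarchy, e.g. "/A/B/C". */
    public static String fullyQualifiedName(EObject element) {
        Deque<String> segments = new ArrayDeque<>();
        for (EObject current = element; current != null; current = current.eContainer()) {
            if (current instanceof Referrable) {
                // Renaming any ancestor changes the FQNs of all of its
                // descendants - and invalidates any cached resolutions.
                segments.addFirst(((Referrable) current).getShortName());
            }
        }
        return "/" + String.join("/", segments);
    }
}
```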

File-based / Sphinx / Artop

Sphinx, the base infrastructure that Artop builds on, has support for this reference resolution. The proxy resolution is based on the fully qualified names and has some optimizations: unresolved references are cached in a blacklist and will not be traversed again until the model changes in a way that makes a retry sensible. New files are automatically detected and added to the model. Newer versions of Sphinx even support improved proxy resolving with the IncQuery project.

Database

While it was mentioned above that it is easy to store AUTOSAR models in a database with CDO, this mainly refers to fully resolved models. Supporting the flexible fully-qualified-name mechanism needs extra design effort by the repository developers.

Splitables

In AUTOSAR, the contents of an element can be split over different physical files. That means that one part of your SoftwareComponent A can be specified in file x.arxml, while another part is specified in y.arxml. This is also very handy for model management. E.g., as a tier-1 supplier, you can keep your own parts of a DEM (Diagnostic Event Manager) configuration in specific files and then drop in the DEM.arxml from the OEM, creating a new combined model. And when the DEM.arxml from the OEM changes, it is convenient to just overwrite the old one.

However, that implies additional mechanisms in all persistence technologies to create the merged view / model.

File-based / Sphinx / Artop / COMASSO

There are various mechanisms used in different tools for this:

  • Artop has an approach that uses Java proxies to create the merged view dynamically (see the sketch after this list)
  • We have an implementation that is more generic and uses AspectJ on EMF models.
  • COMASSO uses a specific meta-model variation that is EMF-like (but slightly different) and allows the merging of splitables based on a dedicated infrastructure.
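To illustrate the dynamic-proxy idea, here is a heavily simplified sketch (not Artop’s actual implementation) that merges two fragments of the same logical element behind a java.lang.reflect.Proxy, concatenating list-valued features and preferring the first fragment for single-valued ones:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public final class SplitableMerger {

    /** Creates a merged view over two fragments; 'type' must be an interface. */
    @SuppressWarnings("unchecked")
    public static <T> T merge(Class<T> type, T fragmentA, T fragmentB) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object resultA = method.invoke(fragmentA, args);
            if (List.class.isAssignableFrom(method.getReturnType())) {
                // List-valued features: concatenate the contributions of both files.
                List<Object> merged = new ArrayList<>((List<Object>) resultA);
                merged.addAll((List<Object>) method.invoke(fragmentB, args));
                return merged;
            }
            // Single-valued features: take the first fragment that provides a value.
            return resultA != null ? resultA : method.invoke(fragmentB, args);
        };
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[] {type}, handler);
    }
}
```

A real implementation additionally has to handle containment, notifications and writing changes back to the correct source file, which is exactly why splitables need dedicated infrastructure.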

Databases

Additional design decisions have to be made for splitables. Possible approaches could be:

  • Store the fragments of the split objects in the database and merge them while querying / updating
  • Store only merged objects in the database and provide some kind of update mechanism (e.g. by remembering the origin of elements)

BSW meta-model

For the basic software (BSW) configuration, the AUTOSAR meta-model provides meta-model elements to define custom parameter containers and types. That means that at this level, we have meta-modeling mechanisms within the AUTOSAR standard itself.

This has some impact on the persistence. The repository could simply store the AUTOSAR meta-model elements (i.e. parameter definitions and parameter values), thus requiring additional logic to find the values for a given parameter definition – a similar problem exists when accessing those parts of AUTOSAR from code, and solutions exist for that.
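A hedged sketch of that extra logic, with simplified stand-ins for the parameter definition / value types mentioned above: since values reference their definition and not the other way around, the lookup is a filter operation.

```java
import java.util.List;
import java.util.stream.Collectors;

public final class BswConfigLookup {

    // Simplified stand-ins for the AUTOSAR parameter definition / value types.
    public interface ParameterDef { String getName(); }
    public interface ParameterValue { ParameterDef getDefinition(); }
    public interface Container { List<ParameterValue> getParameterValues(); }

    /** Finds all values for a given definition - a filter, not a direct lookup. */
    public static List<ParameterValue> valuesFor(Container container, ParameterDef def) {
        return container.getParameterValues().stream()
                .filter(value -> value.getDefinition() == def)
                .collect(Collectors.toList());
    }
}
```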

Representing the parameter definitions directly in the database schema might not be simple – such dynamic schema definitions are difficult to realize in relational databases. Schema-free databases from the NoSQL family seem better suited for this job.

Workflow

Offline Work

The persistence technology also affects the possibility to work with the data without a network connection to the repository. Although world-wide connectivity is continuously improving, you might not want to rely on a stable network connection while doing winter testing in a car on a frozen lake in the northern hemisphere. In such cases, at least read access might be required.

Some repository technologies (e.g. CDO with some DB backends) provide cloning and offline access, as do file-based solutions.

Long running transactions

Work on engineering models is often conceptual work that is not done within (milli-)seconds; work on the model often takes days, weeks or more. Consider, e.g., a developer who wants to introduce some new signals into a car’s network. He will first get a working copy of the current network database. Then he will work on his task until he has a concept that he thinks is feasible. Often, he cannot change or add signals on his own, but has to discuss the change with a company-wide CCB. Only after every affected partner has agreed on the change (and that will involve a lot of amendments) can he finalize his changes and publish them as an official change.

The infrastructure must support this kind of “working branch”. CDO supports this for some database backends, and it works very nicely – but it is also obvious that the workflow above is very similar to the workflow used in software development. Storing the file-based models in “code repositories” such as SVN or Git is also a feasible approach. The merging of conflicting changes can then be comfortably supported with EMF Compare (the same technology as used by infrastructure such as CDO).
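As a hedged sketch of what such a working branch looks like with CDO (the branch name is an example, and the merge step is only hinted at):

```java
import org.eclipse.emf.cdo.common.branch.CDOBranch;
import org.eclipse.emf.cdo.session.CDOSession;
import org.eclipse.emf.cdo.transaction.CDOTransaction;

public final class WorkingBranchExample {

    public static void workOnBranch(CDOSession session) throws Exception {
        // Create a working branch off main for the long-running change.
        CDOBranch main = session.getBranchManager().getMainBranch();
        CDOBranch working = main.createBranch("introduce-new-signals");

        // Commits in this transaction only affect the working branch, so
        // intermediate increments can be saved without disturbing main.
        CDOTransaction tx = session.openTransaction(working);
        // ... edits over days or weeks, committed incrementally ...
        tx.commit();
        tx.close();

        // After the CCB has agreed, the branch can be merged back into main
        // (e.g. via CDOTransaction.merge(...) on a main-branch transaction).
    }
}
```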

Inconsistent state

These offline copies / working branches must support saving an inconsistent state of the model(!). As the development of such a change takes quite some time, it must be possible to save and restore intermediate increments, even when they are not consistent. A user must be able to save his incomplete work when leaving in the evening and resume the next morning. Anything else will immediately kill user acceptance.

Authorization / Authentication

Information protection is a central feature of modeling tools. Even if a malevolent person does not get access to the full engineering data, information can be inferred from small bits of data, such as the names of new signals introduced into the on-board network. A signal name like SIG_NUCLEAR_REACTOR_ENGINE_OFF would be very telling in itself. Let’s have a look at two aspects of information protection:

  • Prevent industrial espionage (control who sees what)
  • Prevent sabotage (invalid modification of data)

Authentication is a prerequisite for both, but it is supported by most technologies anyway.

Control read-access

If models are served to the user only “partially” (i.e. through filtered queries in a 3-tier architecture), it seems easier to grant or deny access to fine-grained elements. But it also involves a lot of additional considerations and tool adaptations. E.g., if a certain user is not allowed to learn about a specific signal, how do we deal with the PDUs that refer to that signal? We should not just filter that specific signal out, since that would actually cause misinformation. We could create an anonymized representation (we have done this with CDO’s access control mechanism, showing “ACCESS-DENIED” everywhere such a signal is used), but that also has a lot of side effects, e.g. for validation.

CDO has fine-grained access control on the elements mapped to the database (and I am proud to say that this was in part sponsored by the company I work for within the IMES project; Eike Stepper has done some very nice work on that).

In addition, if you support offline work, you must make sure that this does not create a problem in itself, since a copy of the data now resides on the user’s disk – off-the-shelf replication copies the data and would leave a copy of the original, unfiltered data there (nowadays you would only store on encrypted disks anyway).

One approach would be to define access control as coarse-grained as possible, e.g. by introducing (sub-)model branches that can be granted access to as a whole. This would also be the approach to take for file-based storage, where access to the model files can be enforced through the authorization features of the infrastructure.

Control data modification

Another risk is that someone actively modifies the data to sabotage the system, e.g. by changing a signal timing. The motivation for such actions can vary from negligence over paid sabotage to a disgruntled employee. Similar considerations apply to this use case as to the read access described above. In the three-tier architecture, access validation can be done by the backend.

But what about the file-based approach? Can the user not manipulate the files before submitting them? Yes, he could, but verification is still possible when the model is submitted back into the repository: a verification hook can check what has been modified. And with the API of EMF Compare, this verification can be made on the actual meta-model level (not on some file change set) to see whether any invalid changes have been made to the models.
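A minimal sketch of such a hook with the EMF Compare API; the isAllowed policy check is a hypothetical placeholder for project-specific rules:

```java
import org.eclipse.emf.compare.Comparison;
import org.eclipse.emf.compare.Diff;
import org.eclipse.emf.compare.EMFCompare;
import org.eclipse.emf.compare.scope.DefaultComparisonScope;
import org.eclipse.emf.compare.scope.IComparisonScope;
import org.eclipse.emf.ecore.resource.Resource;

public final class SubmissionVerifier {

    /** Diffs the submitted model against the repository state on meta-model level. */
    public static boolean verify(Resource submitted, Resource repositoryState) {
        IComparisonScope scope = new DefaultComparisonScope(submitted, repositoryState, null);
        Comparison comparison = EMFCompare.builder().build().compare(scope);
        for (Diff diff : comparison.getDifferences()) {
            if (!isAllowed(diff)) {
                return false; // reject the submission
            }
        }
        return true;
    }

    // Hypothetical project-specific policy, e.g. "no changes to signal
    // timings by users without the corresponding permission".
    private static boolean isAllowed(Diff diff) {
        return true;
    }
}
```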

Conclusion

AUTOSAR models can be stored in a number of ways. Using Artop and Eclipse-based frameworks, a number of options are available. Choosing the right approach depends on the specific use cases and the functional and non-functional requirements.

Functional Architectures (EAST-ADL) and Managing Behavior Models

In the early phases of systems engineering, functional architectures are used to create a functional description of the system without specifying details about the implementation (e.g. the decision whether a function is implemented in HW or SW). In EAST-ADL, the building blocks of functional architectures (Function Types) can be associated with so-called Function Behaviors, which support the addition of formal notations for the specification of the behavior of a logical function.

In the research project IMES, we are using the EAST-ADL functional architecture and combining it with block diagrams. Let’s consider the following simplified functional model of a start-stop system:

In EAST-ADL, you can add any number of behaviors to a function. We already have a behavior activationControlBehavior1 attached to the function type activationControl, and we are adding another behavior called newBehavior.

A behavior itself does not specify anything yet in EAST-ADL. It is more like a container that refers to other artefacts actually containing the behavior specification. You can refer to any kind of specification (like ML/SL or other tools). For our example, we are using the block diagram design and simulation tool DAMOS, which is available as open source at www.eclipse.org.

We can now invoke “Create Block Diagram” on the behavior.

The tool will then create a block diagram with input and output ports that are taken from the function type (you can see in the diagram that we have two input ports and one output port). At this point, the architects and developers can specify the actual logic. Of course, it is also possible to use existing models and derive the function type from them (“reverse from existing”). That would work on a variety of existing specifications, like state machines, ML/SL etc.

A DAMOS model could look like this:

At this point of the engineering process, we would have:

  • A functional architecture
  • One or more function behaviors with behavior models attached to the function types

Obviously, the next thing that comes to mind is to combine all the single behavior models and create a simulation model for a (sub-)system. Since we might have more than one model attached to a function, we have to pick a specific model for each of the function types.

This is done by creating a system configuration. That will create a configuration file that associates a function type (on the left side) with a behavior model (on the right side). Since this is built with Eclipse Xtext, it is not only a text file, but a full model with validation (are the associated models correct?) and fully type-aware content assist. After the configuration has been defined, a simulation model is generated that can then be run to simulate the combined set of function models.


A common component model for communication matrices and architecture metrics for EAST-ADL, AUTOSAR and others

In the engineering process, it is often of interest to see communication matrices or metrics for your architecture. Depending on the meta-model (EAST-ADL, AUTOSAR, SysML) and the tool, that kind of information might or might not be available. In the IMES research project, we are using the Eclipse adapter mechanism to provide a lightweight common component model that adapts to the different models on the fly.

The Eclipse adapter mechanism is used to provide a common model for different meta-models.

The tool implements different views / editors and metrics that work on the common component model, providing the mechanism for EAST-ADL analysis and design functions, error models and AUTOSAR software architectures.
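As a hedged sketch of how the Eclipse adapter mechanism can bridge meta-models, using the generified IAdapterFactory of newer Eclipse platform versions; ICommonComponent, FunctionType and the adapter logic are hypothetical stand-ins, not the actual IMES code:

```java
import org.eclipse.core.runtime.IAdapterFactory;

public class EastAdlAdapterFactory implements IAdapterFactory {

    /** Hypothetical common interface that the views, editors and metrics work on. */
    public interface ICommonComponent {
        String getName();
    }

    /** Hypothetical stand-in for the EAST-ADL meta-model type. */
    public interface FunctionType {
        String getShortName();
    }

    @Override
    public <T> T getAdapter(Object adaptableObject, Class<T> adapterType) {
        if (adapterType == ICommonComponent.class && adaptableObject instanceof FunctionType) {
            FunctionType function = (FunctionType) adaptableObject;
            // Adapt on the fly - no copy of the underlying model is created.
            return adapterType.cast((ICommonComponent) function::getShortName);
        }
        return null;
    }

    @Override
    public Class<?>[] getAdapterList() {
        return new Class<?>[] { ICommonComponent.class };
    }
}
```

Such a factory is typically registered via the org.eclipse.core.runtime.adapters extension point, so that clients can simply ask the platform’s adapter manager for the common model, regardless of the underlying meta-model.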

The following small EAST-ADL analysis function model can be shown in the communication matrix editor, which uses the same code for EAST-ADL and AUTOSAR.


Authorization / Authentication with EMF and Graphiti for Automotive models

The Eclipse ecosystem provides a lot of projects that make the design and implementation of modeling tools very efficient. For many products, the models have to be secured against unauthorized access or modification. This is not often seen in standard EMF-based tooling. In the research project IMES, we are using a number of technologies to achieve that goal:

  • Apache Shiro provides a framework for authentication / authorization (see the sketch after this list). It can be used with a number of authentication mechanisms, from plain text files to LDAP servers.
  • Graphiti is used for the graphical editors.
  • Xtext and model-to-model transformation are used to generate the Graphiti-based editors. The code generation introduces access authorization into the editors, so the user gets feedback that an operation is not allowed before it is actually executed.
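A minimal sketch of the Shiro side, assuming an INI-based realm for demonstration; the user name and permission string are hypothetical examples:

```java
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.config.IniSecurityManagerFactory;
import org.apache.shiro.mgt.SecurityManager;
import org.apache.shiro.subject.Subject;
import org.apache.shiro.util.Factory;

public final class ShiroExample {

    public static void main(String[] args) {
        // Bootstrap Shiro from a text-based configuration (demo only; a
        // production setup would typically use an LDAP-backed realm).
        Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro.ini");
        SecurityUtils.setSecurityManager(factory.getInstance());

        // Authenticate the current user with the supplied credentials.
        Subject user = SecurityUtils.getSubject();
        user.login(new UsernamePasswordToken("oem_user", "secret"));

        // The generated editors ask for permissions like this before executing
        // an operation and give visual feedback if it is denied.
        boolean mayEdit = user.isPermitted("model:edit:Motor.eaxml");
        System.out.println("may edit Motor.eaxml: " + mayEdit);
    }
}
```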

A text-based authorization configuration could look like this (text-based configuration would probably not be used in production, but it is easy to demonstrate):


We specify the roles oem and tier1. oem can edit everything except the model in the file Motor.eaxml. tier1 cannot edit anything, but can view everything except for the model element “activate”.

Now we can supply credentials in a preference page:


Editing

Since oem_user has the role oem, she is allowed to modify everything except the data in one specific model resource (Motor.eaxml). The graphical editor queries the framework for access rights and provides visual feedback.

Viewing

There is additional support for viewing rights. In our configuration, the OEM can see everything, but there might be a specific function that should not be shown to third parties. For the OEM, the diagram shows all the labels. But the tier1 role has been denied read access to the element “activate”, so the port label is rendered differently:


CDO’ification of Sphinx: Database Storage for AUTOSAR, EAST-ADL and ReqIF

The Eclipse Sphinx project is an important but little-known infrastructure project that supports workspace management for tool platforms such as Artop (AUTOSAR), EATOP (EAST-ADL) and – in the future – RMF (ReqIF). Right now it is closely tied to EMF XML-based resources.

3-tier for information security

The models held in these tools will often contain sensitive data. Many companies require a 3-tier architecture with a database backend for these (including authorization and authentication). In the research project IMES we are extending Sphinx so that it supports CDO as a “backend”. CDO in turn supports storage in a number of databases.

Refactoring Sphinx

So far we have refactored Sphinx so that the strong coupling to EMF XMI resources is replaced by an abstraction layer. The connection to CDO is currently in progress.

Branching, Merging, Versioning

In addition, CDO will provide its functionality for branching, merging and versioning.

Assessing architecture quality: Metrics for AUTOSAR software architecture and EAST-ADL functional architectures

Both in AUTOSAR and EAST-ADL we define architectures – be it on the software level or on the functional level. But when is an architecture a good architecture? How do we find flaws? One approach to assess the quality is architecture metrics. In the research project IMES we have implemented a set of metrics for both EAST-ADL and AUTOSAR models. The metrics calculation is based on the Eclipse projects Artop and EATOP.

Among the metrics often mentioned in the literature are the “Provided Service Utilization” (PSU) and its counterpart, the “Required Service Utilization” (RSU). PSU measures the degree of utilization of a “service provider” by dividing the number of actually used services by the total number of offered services:

PSU(s) = PS_used(s) / PS_total(s)

A value of 1.0 indicates that all offered services are being used; a value of 0.0 indicates that none of the services is being used. Values below 1.0 are indicators of potential architectural issues.
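A hedged sketch of the calculation over a simplified component / port abstraction; as described further below, the actual implementation works on an abstract component / service model that maps onto Artop and EATOP:

```java
import java.util.List;

public final class PsuMetric {

    // Simplified stand-ins for the abstract component / service model.
    public interface Port { boolean isConnected(); }
    public interface Component { List<Port> getProvidedPorts(); }

    /** PSU = number of used provided services / total number of provided services. */
    public static double psu(Component provider) {
        List<Port> provided = provider.getProvidedPorts();
        if (provided.isEmpty()) {
            return 1.0; // no offered services - nothing can be unused
        }
        long used = provided.stream().filter(Port::isConnected).count();
        return (double) used / provided.size();
    }
}
```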

To apply the PSU concept to AUTOSAR or EAST-ADL, the elements “service provider” and “service” have to be identified in the meta-models. In AUTOSAR, obvious candidates for providers are the SwComponentTypes and SwComponentPrototypes. Candidates for services are the ports (it is also possible to work with the data elements of a port, resulting in a finer granularity of the metric). Picking SwComponentTypes or SwComponentPrototypes as “provider” results in subtle differences:


In the above model, the metrics for the prototypes are PSU(p1) = 1.0 (port op is connected) and PSU(p2) = 0.0 (not connected). For the component types, PSU(ACI1O1_A) = 1.0, since for a type it is only relevant whether the port is connected at all.

Connected ports

Deciding whether a port is actually connected is more complex than in the simple example above. Delegation connectors just “forward” ports without any functionality – that has to be taken into account: a port being connected through a delegation does not count as connected for the metrics calculation. Starting from a port, all transitive connections have to be followed to see whether the port is finally connected to a software component that actually consumes data (i.e., not a SwCompositionType), or whether it is delegated up to the highest level, or not delegated at all.

Since we also want to calculate metrics for partial architectures, we define a delegation to the root level as “connected”; other approaches define such a delegation as “unconnected”.
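A hedged sketch of this traversal, again over hypothetical simplified interfaces rather than the actual AUTOSAR types:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class ConnectivityCheck {

    public interface Component { boolean isComposition(); }

    public interface Port {
        Component getOwner();
        List<Port> getConnectedPeers(); // peers via assembly and delegation connectors
        boolean isDelegatedToRoot();
    }

    /** Follows all transitive connections to decide whether a port is really connected. */
    public static boolean isEffectivelyConnected(Port start) {
        Deque<Port> work = new ArrayDeque<>();
        Set<Port> visited = new HashSet<>();
        work.push(start);
        while (!work.isEmpty()) {
            Port port = work.pop();
            if (!visited.add(port)) {
                continue; // already seen - avoid cycles
            }
            // Our convention for partial architectures: delegation up to the
            // root level counts as connected.
            if (port.isDelegatedToRoot()) {
                return true;
            }
            for (Port peer : port.getConnectedPeers()) {
                if (!peer.getOwner().isComposition()) {
                    return true; // reached a component that actually consumes data
                }
                // Delegation only forwards the port - keep following.
                work.push(peer);
            }
        }
        return false;
    }
}
```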

Side Effect for Usability

The algorithm for finding connected ports has a nice side effect: it can be offered to the user as extra functionality. In complex models, it is often difficult to track the actual destination of the data elements flowing through a port. The algorithm makes that easy.

PSU and CPSU

Now the PSU of SwComponentTypes and SwComponentPrototypes can easily be calculated.

In the model above, PSU(In0Out3) = 0.666. The same algorithms can be used to calculate the CPSU, which indicates the ratio of connected ports to total ports over all ports of all SwComponentPrototypes of a subsystem. A metric < 1.0 indicates that the system has unconnected ports. In the above model, CPSU = 0.5. It is important to identify prototypes that are composition types, since they are purely structural – including them in the metrics calculation would count ports more than once.

RSU, EAST-ADL and variants

The calculation of RSU is analogous to the PSU calculation. And the metrics calculation is not restricted to software architectures: metrics for EAST-ADL analysis functions and design functions can be calculated in the same way. In fact, our implementation works on an abstract component / service model that maps to the Artop and EATOP implementations, so that we can actually use the same code for both meta-models.

In addition, it can be combined with the variant / feature models, so that metrics for entire product lines can be calculated and “loose ends” can be found.


Software Tool Chain for Automotive – the IMES research project

At itemis, we are currently participating in a publicly funded research project (IMES) that deals with the software development tool chain for automotive software, especially in the early phases. We have two main focus points in the project:

  • An integrated tool chain for logical functions and software architecture, including traceability to requirements and variant management.
  • A prototype for a cross-company model repository that addresses the current gaps in the tool chain.

The project is a good example of how you can leverage existing Eclipse technologies and combine them:

IMES Architecture

  • The development process starts with requirements. We use the Eclipse RMF project to import ReqIF and display/edit requirements. This is also the basis for traceability. Eclipse RMF is an open source project contributed by itemis and the University of Düsseldorf / Formal Mind.
  • The specification of feature models and model variants is done with tooling based on the Eclipse Feature Model project. The basis of the feature model project was contributed by pure-systems, and recently itemis has released the textual editors for the features / variants as open source and will put them under the Eclipse umbrella. The editors are based on Xtext, another open source project.
  • Functional modeling (the logical architecture) is based on the recently published EAST-ADL meta-model and its implementation, published under the name “EATOP”. The meta-model has been contributed by Continental AG as part of a publicly funded research project. The editors are implemented by itemis and rely heavily on the Eclipse Graphiti framework.
  • In early phases, logical architecture is often done in combination with functional prototypes. These are provided in terms of state machines or, more often, data flow / block diagrams. While OEMs will still use commercial modeling tools for this purpose, we use Eclipse DAMOS to show the integration.
  • Logical functions are mapped to the software architecture. The software architecture editor is developed in the scope of IMES and is based on Artop – an AUTOSAR meta-model implementation by the Artop user group.
  • The same applies to the deployment to real ECUs. Both editors again are based on the Graphiti framework.
  • All of the artifacts can be traced on model element level. This is done by using Yakindu CReMa, which has been developed within the VERDE research project.
  • Again on model element level, change management tickets can be assigned at a fine-grained level to individual model elements. Eclipse Mylyn provides all the infrastructure; the connection to the editors is developed on the IMES side.

On the repository side, we are bringing together two interesting Eclipse projects:

  • Sphinx provides a lot of functionality for workspace management and helps several editing tools from different sources to work together on the same models.
  • CDO is a persistence framework with branching / versioning support (think “Git for models”). It will allow for a number of different collaboration scenarios between OEM and tier-1 suppliers, including working together on the same server, working on offline repositories with sync etc.

The Eclipse ecosystem is truly marvelous.

Managing functional networks and behavior models with EAST-ADL

Functional networks are an important artefact in early system design. They specify system behavior without any decision about the realization (HW/SW) of a specific function. Logical functions are often prototyped with behavior models, like data flow diagrams. In the IMES research project we implemented an editor for EAST-ADL function models that, together with Yakindu traceability, can be used to manage logical functions and behavior models.

A Graphiti-based EAST-ADL Editor

Funded by the BMBF

In the publicly funded research project IMES we are building a set of editors for the automotive software development process. Some of the blog posts here describe the AUTOSAR editors. Before the actual software development, the engineering process uses function models to describe the functionality without deciding which parts of the function are implemented in software and which in hardware.

For those function models, we had defined our own simple meta-model. But since we prefer standards, and the EATOP project has released an EAST-ADL .ecore, I migrated the editors to use the EAST-ADL analysis function model.

In IMES, we are using a textual DSL (not Spray!) to describe the Graphiti-based editors. That made the migration very easy. It basically consisted of:

  • Generating the EMF plugins from the EATOP ecore
  • Adjusting the textual DSL describing the editor

Finished. Overall time: two hours.

We will need some additional time to integrate EEF, variant management etc., which we already had for our home-grown meta-model. But getting the editor ready was very quick.