# Octave cheat sheet for Coursera Machine Learning course

Switching to and from Octave while working on the Coursera machine learning course can be a bit of a hassle, since some of the Octave syntax is special. Here is my own cheat sheet of the most important statements required for the exercises.

## Matrices

• `,` column separator
• `;` row separator

### Submatrices

• Slicing works much like in Python: `a(1,2:3)` selects the first row, columns 2 and 3. Note that Octave indices are 1-based and ranges are inclusive.

### Adding a column of ones

`[ones(rows(X),1), X]` adds a column of ones on the left side of X:

• `rows(X)` gets the number of rows of X
• `ones(rows(X),1)` creates a column vector of ones
• `[…]` combines both into one matrix
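For comparison, the same idioms in Python/NumPy (a sketch with a made-up matrix; remember that NumPy is 0-based and range ends are exclusive):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# Octave: a(1,2:3) -> first row, columns 2 and 3 (1-based, inclusive)
# NumPy:  a[0,1:3] -> the same elements (0-based, end-exclusive)
sub = a[0, 1:3]

# Octave: [ones(rows(X),1), X] -> prepend a column of ones
X = np.hstack([np.ones((a.shape[0], 1)), a])
```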

### Dot operator

Prefixing an operator with `.` (e.g. `.*`, `./`, `.^`) makes it operate element-wise instead of matrix-wise.

### Suppressing output

Append `;` to a statement to prevent its result from being printed.

## Neural Networks

### Multiplication of layers

Assume the calculation from a layer with n nodes to a layer with m nodes; then $$\Theta$$ will be a matrix with m rows and n columns, and the multiplication will be $$a^{(j+1)}=\Theta^{(j)}a^{(j)}$$, where j is the layer index. The final value will be $$a^{(j+1)}=\mathrm{sigmoid}(\Theta^{(j)}a^{(j)})$$

### Sigmoid

which means:

$$g(z)=\frac{1}{1+e^{-z}}$$

applied to each of the vector's elements.
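A minimal NumPy sketch of one propagation step (the matrix values and variable names are mine, not from the course materials):

```python
import numpy as np

def sigmoid(z):
    # element-wise 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# layer with n = 3 nodes feeding a layer with m = 2 nodes:
# Theta has m rows and n columns
Theta = np.array([[0.1, 0.2, 0.3],
                  [0.4, 0.5, 0.6]])
a_j = np.array([1.0, 0.0, 1.0])   # activation of the current layer

a_next = sigmoid(Theta @ a_j)     # activation of the next layer, shape (2,)
```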

# Ramping Up for the Udacity Self-Driving Car Nanodegree

I was having great fun with the first exercise / graded project of Udacity’s Self-Driving Car Nanodegree (https://de.udacity.com/drive/). The first exercise is the implementation of a (simple) lane detection based on provided still images and video recordings from a dash cam. Having worked with image processing and Python before, I found it quite straightforward. It seems, however, that some of the participants with less background in these technologies needed more ramp-up (there are several discussions in the Udacity-provided Slack channel).

Anyone accepted into the Nanodegree program, or planning to apply, might consider reading up on the following topics (no need to deep-dive, though):

## Image processing

• Of course, implementing a lane detection requires processing images. The basic knowledge needed is how images are encoded, i.e. the RGB representation as well as the HSV/HSL representations (in case you want to do some extra processing)

## Python

Python is a straightforward scripting language, but it has some peculiarities that you might not have seen before:

• Slices: a dedicated syntax for retrieving parts of lists/arrays/matrices
• Assignments: Python supports slices and other constructs on the left side of an assignment operator
• Lambdas / functions as arguments: your code will be invoked as a callback by passing functions around as parameters
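All three peculiarities in a few lines (a generic sketch, not code from the project):

```python
# Slices: dedicated syntax for retrieving partial lists
xs = [0, 1, 2, 3, 4, 5]
middle = xs[1:4]      # [1, 2, 3]
evens = xs[::2]       # [0, 2, 4] - every second element

# Slice assignment: slices may appear on the left-hand side
xs[0:2] = [9, 9]      # xs is now [9, 9, 2, 3, 4, 5]

# Functions as arguments: your code is invoked as a callback
def apply_to_all(f, values):
    return [f(v) for v in values]

doubled = apply_to_all(lambda v: 2 * v, [1, 2, 3])   # [2, 4, 6]
```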

## Git / Github

• Example projects / configurations are provided by cloning a repo from GitHub
• You can submit your first projects as a zip file or by submitting a link to a GitHub repo

Being prepared for these topics will make the first project submission deadline (1 week) much easier to meet.

# Using AspectJ to analyze exceptions in production systems

For several automotive customers, we are building (Eclipse-based) systems that process engineering data – not only interactively, but in nightly jobs. From time to time, there might be a problem in test or even production systems, throwing an exception.

By default, Java obviously only shows the stack trace. For a quick analysis, you might also like to know which arguments were passed to the method in which the exception was thrown.

We are using AspectJ to “weave” relevant plugins (we use runtime weaving). The aspect looks like this:

This adds an aspect that is executed after an exception has been thrown. It will print a string representation of the method’s arguments. If the exception is propagated, the arguments of all methods up the call stack will be printed in sequence.
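The AspectJ code itself is not reproduced in this excerpt. To illustrate the idea independently of AspectJ, here is a rough Python analogue: a decorator that logs a function's arguments whenever an exception propagates out of it (all names are illustrative, not taken from the original aspect):

```python
import functools

def log_args_on_exception(func):
    """Print a string representation of the arguments if func throws."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            print(f"{func.__name__} raised, args={args!r}, kwargs={kwargs!r}")
            raise  # re-raise, so callers up the stack log their own arguments
    return wrapper

@log_args_on_exception
def parse(entry):          # hypothetical method that fails
    raise ValueError("bad entry")
```

As with the aspect, applying the decorator to every method on the call stack prints the arguments of each frame in sequence as the exception propagates.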

# Chinese IT and tech podcasts

While I am working mainly in the domain of software engineering for automotive software, I try to keep up with general developments in IT to see which technologies can be relevant for toolchains and software engineering in complex technical systems and in general. A good way to do that is to listen to podcasts – and I try to combine that with keeping my Chinese language skills up to date by listening to Chinese tech podcasts. Information about the podcasts is dispersed over a few sites on the internet, so I’ll provide a list of my favourites. I am just listing the websites without much comment; please have a look at the content yourself.

# Tycho 0.24, pom less builds and Eclipse m2e maven integration

A very short technical article which might be useful in the next few days.

Tycho 0.24 has been released, and one of its most interesting features is the support for POM-less builds. The Tycho 0.24 release notes explain the use of the extension. However, there is a problem with the .mvn/extensions.xml not being found when using Eclipse variables in the “Base Directory” field of the run configuration.

I have created an m2e Bugzilla entry (https://bugs.eclipse.org/bugs/show_bug.cgi?id=481173) showing the problem and a workaround.

Update: The m2e team fixed the bug within 48 hours. Very nice. Should be in one of the next builds.

# Automotive MDSD – What pickle?

Over at the Modeling Languages website, Scott Finnie has started a number of posts detailing his view on the status of model-driven approaches. He concludes his first post with the statement “No wonder we’re in a pickle.” Reading that, I realized that I don’t feel in a pickle at all.

Since the comment section of modeling-languages.com has limited formatting for a longish post, I’d like to put forward my 2c here.

Yes, model-driven development is not as visible and hyped as many other current technologies and, compared to the entire software industry, its percentage of use might not be impressive. But looking at my focus industry, automotive, I think we have come a long way in the last 10 years.

Model-driven development is an important and accepted part of the automotive engineer’s toolbox. Most of the industry thinks of ML/SL and Ascet when hearing the MD* terms, but there are actually many other modeling techniques in use.

Architecture models are one of the strongest trends, not least driven by the AUTOSAR standard. But even aside from (or complementary to) AUTOSAR, the industry is investing in architecture models in UML or DSLs, often in connection with feature modeling and functional modeling. Code is generated from those architectural models, with the implementation being generated from ML/SL, Ascet or state machines, or being written manually. The quality of the engineering data is improving noticeably through this approach.

Companies introduce their custom DSLs to model architecture, network communication, test cases and variants, and generate a huge set of artefacts from them. Models and transformations are used to connect the AUTOSAR domain to the infotainment domain by transformations to/from Genivi’s Franca, which is itself used to generate code for the infotainment platform.

Model-driven approaches are popping up everywhere, and a lot has happened in the past few years. One of the key factors in that development, from my point of view, is the availability of low-cost, accessible tooling, and I would consider two tools to be most influential in this regard:

• Enterprise Architect made UML modeling available at a reasonable price, opening it up to a much wider market. That made it possible for more projects and individuals to build their MDSD approach on UML and (custom) generators.
• Xtext made a great framework and tool for domain-specific languages available at no cost to companies, projects and enthusiasts. It is always fun and amazing to come into an existing project and find what teams have already built on Xtext. Xtext itself, of course, is standing on the shoulders of giants: Eclipse and the Eclipse Modeling Framework (EMF).

Maybe Eclipse Sirius will be able to do for graphical DSLs what Xtext has done for textual modeling. 15 years ago I was working for a company that offered UML modeling and one of the first template-based code generation technologies at a high price. The situation is completely different now – and very enjoyable.

# EclipseCon Europe 2015 from an Automotive Perspective

As Eclipse is established as a tooling platform in the automotive industry, the EclipseCon Europe conference in Ludwigsburg is an invaluable source of information. This year’s EclipseCon is full of interesting talks. Here is a selection from my “automotive tooling / methodology” perspective.

• The Car – Just Another Thing in the Internet of Things?

Michael Würtenberger is the head of BMW CarIT. In addition to the interesting talk, BMW CarIT was also one of the strong drivers of the Eclipse/EMF-based AUTOSAR tooling platform “Artop”, and it is very interesting to learn about their topics.

• openETCS – Eclipse in the Rail Domain

• Enhanced Project Management for Embedded C/C++ Programming using Software Components

Every industry also looks at what the “other” industries are doing to learn about strengths and weaknesses.

• Brace Yourself! With Long Term Support: Long-term support is an important issue when choosing a technology as a strategic platform. This talk should provide information to use in discussions when arguing for/against Eclipse and open source.

• The Eclipse Way: Learn and understand how Eclipse manages to develop such a strong ecosystem

• My experience as an Eclipse contributor: As more and more companies from the automotive domain actively participate in open source, you will learn what that means from a contributor’s point of view.

On the technical side, a lot of talks are of interest for automotive tooling, including:

• Customizable Automatic Layout for Complex Diagrams Is Coming to Eclipse

Since EMF is a major factor for Eclipse in automotive, these talks provide interesting information on modeling automotive data.

• Tailor-made model comparison: how to customize EMF Compare for your modeling language

Storing models in files has many advantages over storing them in a database. These technologies help in setting up a model management infrastructure that satisfies a lot of requirements.

• High Productivity Development with Eclipse and Java 8

• Docker Beginners Tutorial

These will introduce the latest information on build management and development tools.

Of course, there are a lot more talks at EclipseCon that are also of interest. Check out the program – it is very interesting this year.

# Sphinx’ ResourceSetListener, Notification Processing and URI change detection

The Sphinx framework adds functionality for model management to the base EMF tooling. Since it was developed within the Artop/AUTOSAR activities, it also includes code to make sure that the name-based references of AUTOSAR models are always correct. This includes some post-processing after model changes by working on the notifications that are generated within a transaction.

# Detecting changes in reference URIs and dirty state of resources

Assume that we have two resources, SR.arxml (source) and TR.arxml (target), in the same ResourceSet (and on disk). TR contains an element TE with the qualified name /AUTOSAR/P/P2/Target, which is referenced from a source element SE contained in SR. That means that the string “/AUTOSAR/P/P2/Target” is to be found somewhere in SR.arxml.

Now what happens if some code changes TE’s name to NewName and saves the resource TR.arxml? If only TR.arxml were saved, we would have an inconsistent model on disk, since SR.arxml would still contain “/AUTOSAR/P/P2/Target” as a reference, which could not be resolved the next time the model is loaded.

We see that there are some model modifications that affect not only the resource of the modified elements, but also referencing resources. Sphinx determines the “affected” other resources and marks them as “dirty”, so that they are serialized the next time the model is written to make sure that name based references are still correct.

But obviously, only changes that affect the URIs of referencing elements should cause other resources to be set to dirty; features that are not involved should not have that effect. Sphinx makes it possible to specify such strategies per meta-model via the interface IURIChangeDetectorDelegate.

The default implementation for XMI based resources is:

This will cause a change notification to be generated for any EObject that is modified – which, of course, we do not want. In contrast, the beginning of the Artop implementation AutosarURIChangeDetectorDelegate looks like this:

A notification on a feature is only processed if the modified object is an Identifiable and the modified feature is the one used for URI calculation (shortName in AUTOSAR). There is additional code in this fragment to detect changes in the containment hierarchy, which is not shown here.
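The gist of that filtering logic can be sketched in plain Python (this is not the Sphinx/Artop API; the class and feature names are illustrative):

```python
# Only a change to the feature that contributes to an element's URI
# (e.g. shortName in AUTOSAR) should mark referencing resources dirty.

URI_FEATURE = "shortName"   # illustrative: the feature used for URI calculation

class Notification:
    """Toy stand-in for an EMF change notification."""
    def __init__(self, notifier_is_identifiable, feature):
        self.notifier_is_identifiable = notifier_is_identifiable
        self.feature = feature

def is_uri_change(notification):
    # process only if the modified object is an Identifiable and the
    # modified feature is the one used for URI calculation
    return (notification.notifier_is_identifiable
            and notification.feature == URI_FEATURE)
```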

So if you use Sphinx for your own metamodel with name based referencing, have a look at AutosarURIChangeDetectorDelegate and create your own custom implementation for efficiency.

# LocalProxyChangeListener

In addition, Sphinx detects objects that have been removed from the containment tree and updates references by turning the removed objects into proxies! That might be unexpected if you work with a removed object later on. The rationale is well explained in the Javadoc of LocalProxyChangeListener:

Detects {@link EObject model object}s that have been removed from their containing object and turns them as well as all their directly and indirectly contained objects into proxies. This offers the following benefits:

• After removal of an {@link EObject model object} from its container, all other {@link EObject model object}s that are still referencing the removed {@link EObject model object} will know that the latter is no longer available but can still figure out its type and {@link URI}.
• If a new {@link EObject model object} with the same type as the removed one is added to the same container again later on, the proxies which the other {@link EObject model object}s are still referencing will be resolved as usual and therefore get automatically replaced by the newly added {@link EObject model object}.
• In big models, this approach can yield significant advantages in terms of performance because it helps avoiding full deletions of {@link EObject model object}s involving expensive searches for their cross-references and those of all their directly and indirectly contained objects. It does all the same not lead to references pointing at “floating” {@link EObject model object}s, i.e., {@link EObject model object}s that are not directly or indirectly contained in a resource.

# AUTOSAR: OCL, Xtend, oAW for validation

In a recent post, I wrote about model-to-model transformation with Xtend. In addition to M2M transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other candidate frameworks, such as OCL, and Xpand (formerly part of oAW) is used in COMASSO. This blog post sketches some questions / issues to consider when choosing a framework for model validation.

# Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing such an attribute with the standard getter method, you cannot distinguish whether the attribute has been explicitly set to the same value as the default or has never been touched.

If this kind of check is relevant, the validation technology should support access to the EMF methods (such as eIsSet()) that provide an explicit predicate for whether a value has been set.
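The distinction can be illustrated outside EMF with a sentinel value (a toy sketch; Timeout and its default are made up, and this is not EMF API):

```python
_UNSET = object()   # sentinel: "never explicitly set"

class Timeout:
    DEFAULT = 100

    def __init__(self):
        self._value = _UNSET

    def set(self, value):
        self._value = value

    def get(self):
        # a plain getter cannot tell "explicitly set to 100" from "unset"
        return self.DEFAULT if self._value is _UNSET else self._value

    def is_set(self):
        # the explicit predicate, analogous to EMF's eIsSet()
        return self._value is not _UNSET

t = Timeout()
value_before = t.get()   # 100, although is_set() is still False
t.set(100)               # same value as the default, but now explicit
```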

# Inverse References

With AUTOSAR, a large number of checks involve some logic to see if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually these references are uni-directional, and traversing the model is required to find referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A naive solution is to traverse the entire model or use the utility functions of EMF. However, if the technology allows access to frameworks like IncQuery, a large number of queries / checks can be sped up significantly.
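The performance difference can be sketched with a reverse index built once over all references (plain Python; `pdus` and `signals` are illustrative stand-ins for model elements):

```python
from collections import defaultdict

# illustrative model: PDUs referencing signals by name (uni-directional)
pdus = {"Pdu1": ["SigA", "SigB"], "Pdu2": ["SigB"]}
signals = ["SigA", "SigB", "SigC"]

# build the inverse-reference index once (a single model traversal) ...
referenced_by = defaultdict(list)
for pdu, refs in pdus.items():
    for sig in refs:
        referenced_by[sig].append(pdu)

# ... then each "is this signal referenced?" check is a dictionary lookup
# instead of a full model traversal per signal
unreferenced = [s for s in signals if not referenced_by[s]]   # ["SigC"]
```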

## Error Reporting

Error reporting is central to the usability of the generated error messages. It involves a few aspects that can be explained with a simple example: consider that we want to check that each PDU has a unique id.

## Context

In Xpand (and similar in OCL), a check could look like:

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

1. Keep the context on the PDU level, but allow some efficient caching so that the code is not executed as often. However, that involves additional effort in making sure that the caches are created / torn down at the right time (e.g. after model modification).
2. Move the context up to package or model level and use a framework that allows generating errors/warnings not only for the elements in the context, but also for any other (sub-)elements. The Sphinx Check framework supports this.
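The difference between the two contexts can be sketched in plain Python (illustrative data, not Xpand/OCL):

```python
from collections import defaultdict

pdus = [("Pdu1", 1), ("Pdu2", 2), ("Pdu3", 1)]   # (name, id)

# Context on the single-PDU level: the full list is traversed for
# every PDU that is checked -> quadratic runtime
def duplicates_quadratic(pdus):
    return [name for name, pid in pdus
            if sum(1 for _, other in pdus if other == pid) > 1]

# Context on the model level: group the ids once, then report errors
# on the individual elements -> linear runtime
def duplicates_grouped(pdus):
    by_id = defaultdict(list)
    for name, pid in pdus:
        by_id[pid].append(name)
    return [name for names in by_id.values() if len(names) > 1
            for name in names]
```

Both report Pdu1 and Pdu3 as having a non-unique id; only the grouped variant avoids re-traversing the list per element.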

## Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider e.g. this example from the OCL documentation:

The fragment

is used both in the error message and in the calculation of the predicate. In this case it is no big deal, but when the calculation gets more complex, this can be annoying. Possible approaches are:

1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, this should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
2. In Xpand, any information can be attached to any model element. So in the check body, the result of the complex calculation can be attached to the model element, and that information is retrieved when calculating the error message.
3. In the Sphinx Check framework, error messages can be calculated from within the check body.
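The third approach – computing the message inside the check body so the expensive result is reused – can be sketched like this (plain Python with made-up data, not the Sphinx API):

```python
def check_unique_id(pdu, all_pdus):
    # the expensive part of the predicate is computed exactly once ...
    clashes = [other for other in all_pdus
               if other is not pdu and other["id"] == pdu["id"]]
    if clashes:
        # ... and its result is reused directly in the error message
        names = ", ".join(o["name"] for o in clashes)
        return f"PDU '{pdu['name']}': id {pdu['id']} also used by {names}"
    return None   # check passed, no message needed

pdus = [{"name": "Pdu1", "id": 7}, {"name": "Pdu2", "id": 7}]
msg = check_unique_id(pdus[0], pdus)
```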

## User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for users to be able to work with and fix check results. For the development process, it is beneficial if that kind of description is stored close to the actual tests. This could be achieved by analysing comments near the validations, with tools like Javadoc etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP, the dynamic help functionality of the Eclipse Help system can be used to generate documentation for all registered checks on the fly.

Here are some additional features of the Xtend language that come in handy when writing validations:

• Comfort: Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax for navigating models, which helps to avoid the loops that would be required in Java:

```
val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
    shortName == "DemMemoryDestinationRef"]
```

• Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.
• Long-term support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
• Test support: Xtend compiles to plain Java, so you can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation, documented in nice reports generated with standard Java tooling.
• Code coverage: Xtend compiles to plain Java, so you can use any code coverage tool (such as JaCoCo).
• Debugging: Debugger integration is fully supported; you can step through your code.
• Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
• Documentation: You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports.
• Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
• Active annotations: Xtend supports customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
• Full EMF support: Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
• IDE integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.

# Meanderings on AUTOSAR model repositories (and other models)

When working with AUTOSAR models, model storage and management is a topic that has to be solved. In this blog post, I coarsely discuss some thoughts. It is intended as a collection of topics that projects using AUTOSAR models have to address; it is in no way complete, and each of the issues introduced would fill a lot of technical discussion on its own.

# Storage Approaches

## Databases

### Types of Databases

1. Relational databases are often the first thing that comes to mind when discussing technologies for model repositories. However, AUTOSAR models (like many other meta-models for engineering data) consist of model elements with a lot of relationships – effectively a complex graph. Relational databases per se do not match this structure well (see the object-relational impedance mismatch, e.g. in Philip Hauer’s post).
2. NoSQL databases: there are a number of alternatives to relational databases, known under the term “NoSQL”; this includes key-value stores, document stores etc. But since the structure of AUTOSAR models is an element graph, the most natural candidate seems to be a graph database.
3. Graph databases treat nodes and their relationships as first-class citizens and have a more natural correspondence to the engineering models.

### Database-EMF integration

Eclipse provides technologies to store EMF-based models in database repositories of various kinds. With CDO, it is quite easy to store EMF based models in a relational backend (H2, Oracle) and the combination is very powerful (in the IMES project, we run an embedded H2 (relational) database in Eclipse with CDO to store AUTOSAR models, as well as a centralized server to show the replication mechanisms of such a setup).

However, there are drawbacks (schema integration) and advantages (relational database products are well understood by IT departments). There are also projects that provide other CDO backends or more direct integrations with NoSQL databases (such as Neo4J, OrientDB).

## Flat files

AUTOSAR defines an exchange format for AUTOSAR models based on XML. Using that for storage is an obvious idea and Artop does exactly that by using the AUTOSAR XML format as storage format for EMF models.

### Source code control Systems

Since the files are stored as text files, models can be managed by putting them into source code control systems such as SVN or Git.

# Performance

One criterion for the selection of a specific technology is performance.

## Memory

While working with AUTOSAR BSW configurations for ECUs in the days of 32-bit machines and JVMs, memory limitations actually posed a problem for the file-based approach. In such settings, setting up a database server with the full models and clients that access them is one approach. For Robert Bosch GmbH, we did a study on CDO performance that was presented at EclipseCon. In the end, for this particular setting, it was considered more economic to provide 64-bit systems to the developers than to set up a more complex infrastructure with a database backend. 64-bit systems can easily hold the data for the BSW configuration of an ECU (and more).

## Speed

It is often argued that with a remote database server, availability of the model is “instant”, since no local loading of models is required. However, when looking at BSW configurations with tools like COMASSO BSWDT, it turns out that loading times are quite reasonable even with a file-based approach.

# Specifics of AUTOSAR models

## Dangling References / References by fully qualified name

In AUTOSAR XML, the model need not necessarily be self-contained. A reference to another model element is specified by a fully qualified name (FQN), and that element need not be in the same .arxml. This is useful for model management, since you can combine partial models into a larger model in a very flexible way. However, it means that these references need to be resolved to find the target element. And there is a technical issue: the referring element specifies the reference as a fully qualified name such as “/A/B/C”, but to calculate the FQN of a given element, you have to take its entire containment hierarchy into account. That means that if the short name of any element changes, all the fully qualified names of its descendants are affected.
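The mechanics can be sketched in a few lines (plain Python; the Element class and package names are made up for illustration):

```python
class Element:
    def __init__(self, short_name, parent=None):
        self.short_name = short_name
        self.parent = parent   # containment hierarchy

    def fqn(self):
        # the FQN is derived from the entire containment hierarchy, so
        # renaming any ancestor changes the FQNs of all its descendants
        if self.parent is None:
            return "/" + self.short_name
        return self.parent.fqn() + "/" + self.short_name

root = Element("AUTOSAR")
p = Element("P", root)
p2 = Element("P2", p)
target = Element("Target", p2)

fqn_before = target.fqn()   # "/AUTOSAR/P/P2/Target"
p2.short_name = "Q2"        # rename a single ancestor ...
fqn_after = target.fqn()    # ... and every reference by the old FQN breaks
```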

### File-based / Sphinx / Artop

Sphinx, the basic infrastructure coming from Artop, has support for this reference resolution. Proxy resolution is based on the fully qualified names and has some optimizations: unresolved references are cached in a blacklist and will not be retried until the model changes in a way that makes a retry sensible. New files are automatically detected and added to the model. Newer versions of Sphinx even support improved proxy resolving with the IncQuery project.

### Database

While it was mentioned above that it is easy to store AUTOSAR models in a database with CDO, this mainly refers to fully resolved models. Supporting the flexible fully-qualified-name mechanism needs extra design by the repository developers.

## Splitables

In AUTOSAR, the contents of an element can be split over different physical files. That means that one part of your SoftwareComponent A can be specified in file x.arxml, while another part is specified in y.arxml. This, too, is very handy for model management. E.g., as a tier 1 supplier, you can keep your parts of a DEM (Diagnostic Event Manager) configuration in specific files and then drop in the DEM.arxml from the OEM, creating a new combined model. And when the DEM.arxml from the OEM changes, it is convenient to just overwrite the old one.

However, this implies an additional mechanism in all persistence technologies to create the merged view / model.
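Conceptually, merging splitables means combining all fragments that share a fully qualified name. A toy sketch (this is not the Artop/COMASSO mechanism; the fragment contents are made up):

```python
# fragments of the same element, e.g. from our x.arxml and the OEM's
# DEM.arxml, identified by the element's fully qualified name
fragment_x = {"fqn": "/Dem/DemGeneral", "events": ["E1"]}
fragment_oem = {"fqn": "/Dem/DemGeneral", "events": ["E2"], "debounce": "counter"}

def merge(fragments):
    merged = {}
    for frag in fragments:
        view = merged.setdefault(frag["fqn"], {"fqn": frag["fqn"]})
        for key, value in frag.items():
            if key == "fqn":
                continue
            # list-valued features are concatenated, scalars taken as-is
            if isinstance(value, list):
                view.setdefault(key, []).extend(value)
            else:
                view[key] = value
    return merged

model = merge([fragment_x, fragment_oem])
# model["/Dem/DemGeneral"] now combines the events from both files
# plus the debounce setting from the OEM fragment
```

Replacing the OEM's DEM.arxml then simply means re-running the merge with the new fragment.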

### File-based / Sphinx / Artop / COMASSO

There are various mechanisms used in different tools for this:

• Artop has an approach that uses Java proxies to create the merged view dynamically.
• We have an implementation that is more generic and uses AspectJ on EMF models.
• COMASSO uses a specific meta-model variation that is EMF-like (but slightly different) and allows merging splitables based on a dedicated infrastructure.

### Databases

Additional design considerations have to be made for splitables. Possible approaches could be:

• Store the fragments of the split objects in the database and merge them while querying / updating
• Store only merged objects in the database and provide some kind of update mechanism (e.g. by remembering the origin of elements)

## BSW meta-model

For the basic software configuration (BSW), the AUTOSAR metamodel provides meta-model elements to define custom parameter containers and types. That means that at this level, we have meta-modeling mechanisms within the AUTOSAR standard.

This has some impact on persistence. The repository could simply store the AUTOSAR meta-model elements (i.e. ParameterDefinitions and ParameterValues), thus requiring additional logic to find the values for a given parameter definition – a similar problem exists when accessing those parts of AUTOSAR from code, and solutions exist for it.

Representing the parameter definitions directly in the database schema might not be simple – such dynamic schema definitions are difficult to realize in relational databases. Schema-free databases from the NoSQL family seem better suited to this job.

# Workflow

## Offline Work

The persistence technology also affects the possibility of working with the data without a network connection to the repository. Although world-wide connectivity is continuously improving, you might not want to rely on a stable network connection while doing winter testing in a car on a frozen lake in the northern hemisphere. In such cases, at least read access might be required.

Some repository technologies (e.g. CDO with some DB backends) provide cloning and offline access, as do file-based solutions.

## Long running transactions

Work on engineering models is often conceptual work that is not done within (milli-)seconds; work on the model often takes days, weeks or more. Consider, e.g., a developer who wants to introduce some new signals into a car’s network. He will first get a working copy of the current network database. Then he will work on his task until he has a concept that he thinks is feasible. Often he cannot change / add new signals on his own, but has to discuss them with a company-wide CCB. Only after every affected partner has agreed to the change (and that will involve a lot of amendments) can he finalize his changes and publish them as an official change.

The infrastructure must support this kind of “working branch”. CDO supports this for some database backends and it works very nicely – but it is also obvious that the workflow above is very similar to the workflow used in software development. Storing the file-based models in “code repositories” such as SVN or Git is also a feasible approach. The merging of conflicting changes can then be comfortably supported with EMF Compare (the same technology as used by infrastructure such as CDO).

## Inconsistent state

These offline copies / working branches must support saving an inconsistent state of the model(!). As the development of such a change takes quite some time, it must be possible to save and restore intermediate increments, even when they are not consistent. A user must be able to save his incomplete work when leaving in the evening and resume the next morning. Anything else will immediately kill user acceptance.

# Authorization / Authentication

Information protection is a central feature of modeling tools. Even if a malevolent person does not get access to the full engineering data, information can be inferred from small bits of it, such as the names of new signals introduced into the boardnet. A signal name like SIG_NUCLEAR_REACTOR_ENGINE_OFF would be very telling in itself. Let’s have a look at two aspects of information protection:

• Prevent industrial espionage (control who sees what)
• Prevent sabotage (invalid modification of data)

Authentication is a prerequisite for both, and it is supported by most technologies anyway.

If models are served to the user only “partially” (i.e. through filtered queries in a 3-tier architecture), it seems easier to grant or deny access to fine-grained elements. But it also involves a lot of additional considerations and tool adaptations. E.g., if a certain user is not allowed to learn about a specific signal, how do we deal with the PDUs that refer to that signal? We should not just filter that specific signal out, since that would actually cause misinformation. We could create an anonymous representation (we have done this with CDO’s access control mechanism, showing “ACCESS-DENIED” everywhere such a signal is used), but that also has a lot of side effects, e.g. for validation.

CDO has fine-grained access control on the elements mapped to the database (and I am proud to say that this was in part sponsored by the company I work for, within the IMES project; Eike Stepper has done some very nice work on it).

In addition, if you support offline work, you must make sure that this does not pose a problem, since a copy of the data is now on disk – off-the-shelf replication copies the data and would produce a copy of the original, unfiltered data (nowadays you would only store it on encrypted disks anyway).

One approach would be to define access control as coarse-grained as possible, e.g. by introducing (sub-)model branches that can be granted access to as a whole. This would also be the approach to take for file-based storage, where access to the model files can be handled through the authorization features of the infrastructure.

## Control data modification

Another risk is that someone actively modifies the data to sabotage the system, e.g. by changing a signal timing. The motivation for such activity can range from negligence through a disgruntled employee to paid sabotage. Similar considerations apply to this use case as to the read access described above. In the three-tier architecture, access validation can be done by the backend.

But how about the file-based approach? Can the user not manipulate the files before submitting them? Yes, he could, but verification is still possible when the model is submitted back into the repository. A verification hook can check what has been modified. And with the API of EMF Compare, these verifications could be made against the actual meta-model API (not against a file change set) to see whether any invalid changes have been made in the models.

# Conclusion

AUTOSAR models can be stored in a number of ways. Using Artop and Eclipse-based frameworks, a number of options are available. Choosing the right approach depends on the specific use cases and the functional and non-functional requirements.