# Using Eclipse CDT for the Udacity Self Driving Cars Nanodegree C++ projects

There are a number of C/C++ IDEs that can be used for working on the Udacity SDCND C++ projects. This blog post contains a few tips for using Eclipse CDT to solve the Extended Kalman Filter project.

1. Get Eclipse CDT, either by downloading and installing it from the Eclipse website or by using the setup from my last post.
2. Check out the project from GitHub.
3. In the build directory, invoke `cmake .. -DCMAKE_BUILD_TYPE=Debug -G "Eclipse CDT4 - Unix Makefiles"` (in contrast to the documentation, I compile with debug information).
4. This creates a .cproject and a .project file in the build directory. Copy those one level higher.
5. Import the project as an existing project in Eclipse.
6. I usually compile / run from the command line; you can open a shell directly from Eclipse.
7. To run or debug your code from within Eclipse, use the context menu.
8. By default, the debugger will stop in `main()`. To disable this, open the settings of your debug configuration, go to the debugger tab and disable the "Stop on startup at: main" option.

# An Integrated Development Environment for the SDCND

The self-driving car nanodegree from Udacity includes a number of different projects in Python and C++ in different environments (local PC, or AWS if you don’t have a GPU). A common discussion topic on Slack is the question of what everyone is using as an editor / IDE.

Since I have recently been working on the topic of development environments for ML in automotive, I took the inspiration and set up an easy-to-install configuration for Eclipse including

• PyDev for Python Development
• CDT for C/C++ development
• EGit for the integrated git access
• AWS Eclipse integration for starting / stopping and accessing the AWS instances
• Linux Tools for Docker for Docker management.

All these can be easily installed by means of the Eclipse installer (also known as “Oomph”). If you’d like to try the configuration, perform the following steps:

1. Start the Eclipse Installer. Choose the “Advanced Mode” in the drop-down menu at the top right.
2. In the first dialog, choose the C/C++ Developer entry as a base, and use “Oxygen” as the product:
3. Click next and use the “+” button at the top to add our setup file:
4. Click next and proceed to the final dialog. The installer will download and install all the Eclipse packages.

You can now start to clone the SDCND repositories and work on your projects.

# Udacity Self Driving Car Nano-Degree – is it worth the money?

The Udacity Self Driving Car Nano-Degree currently has a lot of visibility both within the automotive industry as well as with software developers who are interested in moving into the industry. Udacity has published the curriculum of the three-part nano-degree on medium.com, which is useful for finding out what you will learn. But what kind of training material and support do you get for your fees? Is it just PowerPoint slides or good video lectures? Are the exercises just multiple-choice quizzes or challenging projects? What support do you get? To give potential new students an impression, here is my take (after passing the first part) on what you get for your money:

• The actual courses. They consist of textual documents as well as many video clips with explanations. The videos are professionally produced and generally explain things well. However, I also looked for further material on the web to maybe clarify some things by getting a different explanation or deepen topics of special interest.
• A number of smaller (non-graded) exercises that can be solved directly in the browser. These consist of multiple choice quizzes as well as programming exercises and are useful for checking your understanding during the lectures.
• Support through an (online) mentor. I have not tried the mentor: although I found solving the projects sometimes a bit challenging, I preferred asking on Slack (see below).
• Several graded projects (five for the first part). Udacity provides a lot of the infrastructure for solving some of the projects:
• Easy installation of the required Python environment through provided conda configurations.
• AWS credits in case you need a virtual machine with a GPU to solve the projects that deal with neural networks. This is helpful if you do not have access to a GPU. If you own a PC/notebook with a decent Nvidia graphics card, this is not required – but it is still a good opportunity to learn about AWS.
• A Unity-based car driving simulator, including Python libraries to control the car from your own code. You will use that for one of the machine learning exercises to create a neural network to drive the car along the track automatically.
• Collections of the data sets required to train your models – most of these are collections of publicly available data sets (traffic signs, car images) – but you get it all readily bundled.
• Submitted projects are reviewed by a real human – it is not just a robot that checks that your code runs and produces the correct output. Many of the reviews I have received are really detailed and provide additional information for further research, even when you pass the project the first time. Some of the reviewers did code reviews and provided feedback on some of the APIs that had been used.
In addition, Udacity really has quality requirements for passing; it is not possible to pass with sub-par implementations.
• A discussion forum in the web. I mainly checked the forum for solved installation problems with some tools.
• Several Slack channels, which are helpful both for discussing the projects and for networking. Udacity provides Slack channels for the graded projects so that students can exchange ideas and questions, as well as geographically grouped channels (e.g. by continent and country) so that you can find students nearby for cooperation.

All in all, I find the price justified for the value that you actually get as a student.

# Setting up Tensorflow w/ GPU for the SDCND on Windows with Nvidia GPU

The Udacity Self-Driving-Car Nano Degree’s second project is the implementation of a traffic sign classifier with Tensorflow. Installation can be a bit confusing, because you can use your own PC or an AWS instance, and either Anaconda (as recommended) or Docker. Also, the installation information is spread out over the documentation of Udacity, Tensorflow and Nvidia.

When you start reading documentation, Udacity will point you to Tensorflow, which will point you to Nvidia. So here is a short write-up on how I managed to install on a notebook with a Nvidia GPU.

Make sure to check the version requirements in the Tensorflow documentation before installing, because not all versions of CUDA and cuDNN will work with Tensorflow.

• Install the CUDA® Toolkit 8.0
• Install cuDNN v6 or v6.1 (the version must match the CUDA version; you may have to register for the Nvidia developer program). Modify the PATH variable manually to include the path to the cuDNN DLL.
• Install Anaconda for Windows
• Install and activate the carnd starter kit environment
• At this point you could run the Tensorflow hello world to see if Tensorflow is correctly installed. This is without GPU support.
• Start Python through the Anaconda shell, not other shells (there will be an icon in your Windows Start menu)
• Execute `pip install --ignore-installed --upgrade tensorflow-gpu`

There is a debug script provided by someone on GitHub that should help in troubleshooting your installation. I did not need it; I just ran a few lines in a Python program and in my notebook to see if GPU support is running.
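For reference, the kind of minimal check I mean looks like this (my own sketch in the TF 1.x style of the time, not the GitHub debug script; guarded so it degrades gracefully when tensorflow is missing):

```python
# Minimal GPU smoke test in the TF 1.x style; with tensorflow-gpu installed,
# creating the session logs the visible GPU devices to the console.
try:
    import tensorflow as tf
    with tf.Session() as sess:  # TF 1.x API, as used at the time
        result = sess.run(tf.constant("hello")).decode()
except Exception:
    result = "tensorflow 1.x not available"
print(result)
```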

In addition, Tensorflow will print some additional information on the command line if it is running with GPU support. You can see it directly in the output or in the shell where you started the Jupyter notebook.

2017-10-06 21:22:02.967457: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970M, pci bus id: 0000:01:00.0)

# Octave cheat sheet for Coursera Machine Learning course

Switching from / to Octave when working with the Coursera machine learning course can be a bit of a hassle, since some of the Octave syntax is special. Here is my own cheat sheet to remember the most important statements required for the exercises.

## Matrices

• `,` column separator
• `;` row separator

### Submatrices

• slices just like in Python: `a(1,2:3)` selects the first row, columns 2 and 3

`X = [ones(rows(X), 1), X]` adds a column of ones on the left side:

• `rows(X)` gets the number of rows in X
• `ones(rows(X), 1)` creates a column vector of 1s
• `[…]` combines both

### Dot operator

The dot operator makes an operation element-wise, e.g. `a .* b` (element-wise product) or `a .^ 2` (element-wise square).

### Suppress output of a statement

Append `;` to the statement to not print its result and reduce verbose output.

## Neural Networks

### Multiplication of layers

Assume the calculation from a layer $$l$$ with n nodes to a layer $$l+1$$ with m nodes; then $$\Theta^{(l)}$$ will be a matrix with m rows and n columns, and the multiplication will be $$\Theta^{(l)}a^{(l)}$$. The final value will be $$a^{(l+1)}=sigmoid(\Theta^{(l)}a^{(l)})$$

### Sigmoid

The sigmoid function is

$$sigmoid(z)=\frac{1}{1+e^{-z}}$$

applied to each of the vector elements.
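The same computation sketched in Python/NumPy (not Octave — just to illustrate the vectorized layer update and the element-wise sigmoid; the shapes are made-up examples):

```python
import numpy as np

def sigmoid(z):
    # Element-wise: works on scalars, vectors and matrices alike
    return 1.0 / (1.0 + np.exp(-z))

# Layer l has n = 3 nodes, layer l+1 has m = 2 nodes -> Theta is 2 x 3
theta = np.array([[0.1, 0.2, 0.3],
                  [0.4, 0.5, 0.6]])
a_l = np.array([1.0, 0.0, 1.0])

a_next = sigmoid(theta @ a_l)   # a^(l+1) = sigmoid(Theta * a^(l))
print(a_next.shape)             # (2,)
print(sigmoid(0.0))             # 0.5
```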

# Ramping Up for the Udacity Self-Driving Car Nanodegree

I had great fun with the first exercise / graded project of Udacity’s Self-Driving Car Nanodegree (https://de.udacity.com/drive/). The first exercise is the implementation of a (simple) lane detection based on provided still images and video recordings from a dash cam. Having worked with image processing and Python before, I found it quite straightforward. It seems, however, that some of the participants with less background in these technologies needed more ramp-up (there are several discussions on a Udacity-provided Slack channel).

Anyone who has been accepted for the Nanodegree program or is planning to apply might consider reading up on the following topics (no need to deep-dive, though):

## Image processing

• Of course, implementing a lane detection requires the processing of images. The basic knowledge required is the encoding of images, i.e. the RGB representation as well as the HSV/HSL representation (in case you want to do some extra processing)
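To get a feeling for what the HSV representation gives you, the standard library’s `colorsys` module can convert a single pixel (values in [0, 1]); this is just an illustration — the project itself uses image libraries:

```python
import colorsys

# A pure-yellow RGB pixel; in HSV the color collapses into a single hue value,
# which makes color-based selection (e.g. yellow lane markings) easier.
h, s, v = colorsys.rgb_to_hsv(1.0, 1.0, 0.0)
hue_degrees = round(h * 360)
print(hue_degrees, s, v)   # 60 1.0 1.0
```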

## Python

Python is a straightforward scripting language, but it has some peculiarities that you might not have seen before:

• Slices: a dedicated syntax for retrieving partial lists/arrays/matrices
• Assignments: Python supports slices and other constructs on the left side of an assignment operator
• Lambdas / methods as arguments: your code will be invoked as a callback by passing methods around as parameters
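The three peculiarities above in a few self-contained lines of plain Python:

```python
a = [0, 1, 2, 3, 4]

# Slices: dedicated syntax for partial lists
print(a[1:3])          # [1, 2]

# Slices on the left side of an assignment (this can even change the length)
a[1:3] = [9, 9, 9]
print(a)               # [0, 9, 9, 9, 3, 4]

# Methods / lambdas passed as arguments and invoked as callbacks
def apply_callback(f, x):
    return f(x)

print(apply_callback(len, a))             # 6
print(apply_callback(lambda v: v[0], a))  # 0
```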

## Git / Github

• Examples projects / configuration are provided by cloning a repo from Github
• You can submit your first projects as a zip file or by submitting a link to a GitHub repo

Being prepared for these topics will really make the first deadline (1 week) for project submission much easier.

# Chinese IT and tech podcasts

While I work mainly in the domain of software engineering for automotive software, I try to keep up with general developments in IT to see which technologies can be relevant for toolchains and software engineering in complex technical systems. A good way to do that is to listen to podcasts – and I try to combine that with keeping my Chinese language skills up to date by listening to Chinese tech podcasts. Information about the podcasts is dispersed over a few sites on the internet, so I’ll provide a list of my favourites. I am just listing the web sites without much comment; please have a look at the content yourself.

# Tycho 0.24, POM-less builds and Eclipse m2e Maven integration

A very short technical article which might be useful in the next few days.

Tycho 0.24 has been released, and one of its most interesting features is the support for POM-less builds. The Tycho 0.24 release notes explain the use of the extension. However, there is a problem with the .mvn/extensions.xml not being found when using Eclipse variables in the “Base Directory” field of the run configuration.

I have created an m2e Bugzilla entry https://bugs.eclipse.org/bugs/show_bug.cgi?id=481173 showing the problem and a workaround:

Update: The m2e team fixed the bug within 48 hours. Very nice. Should be in one of the next builds.

# Sphinx’ ResourceSetListener, Notification Processing and URI change detection

The Sphinx framework adds functionality for model management to the base EMF tooling. Since it was developed within the Artop/AUTOSAR activities, it also includes code to make sure that the name-based references of AUTOSAR models are always correct. This includes some post-processing after model changes, working on the notifications that are generated within a transaction.

## Detecting changes in reference URIs and dirty state of resources

Assume that we have two resources, SR.arxml (source) and TR.arxml (target), in the same ResourceSet (and on disk). TR contains an element TE with the qualified name /AUTOSAR/P/P2/Target, which is referenced from a source element SE that is contained in SR. That means that the string “/AUTOSAR/P/P2/Target” is to be found somewhere in SR.arxml.

Now what happens if some code changes TE’s name to NewName and saves the resource TR.arxml? If only TR.arxml were saved, we would have an inconsistent model on disk, since SR.arxml would still contain “/AUTOSAR/P/P2/Target” as a reference, which could not be resolved the next time the model is loaded.

We see that there are some model modifications that affect not only the resource of the modified elements, but also referencing resources. Sphinx determines the “affected” other resources and marks them as “dirty”, so that they are serialized the next time the model is written to make sure that name based references are still correct.
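As a purely illustrative sketch of that bookkeeping (Python with toy strings — Sphinx of course works on EMF notifications, not on file contents):

```python
# Toy model: resources and the reference strings they contain on disk
resources = {
    "SR.arxml": ["/AUTOSAR/P/P2/Target"],   # source, references TE by name
    "TR.arxml": [],                         # target, contains TE itself
}

def affected_by_rename(old_uri, changed_resource):
    # The modified resource itself plus every resource still holding the
    # old URI must be marked dirty and re-serialized on the next save.
    dirty = {changed_resource}
    dirty |= {name for name, refs in resources.items() if old_uri in refs}
    return dirty

dirty = affected_by_rename("/AUTOSAR/P/P2/Target", "TR.arxml")
print(sorted(dirty))   # ['SR.arxml', 'TR.arxml']
```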

But obviously, only changes that affect the URIs of referencing elements should cause other resources to be set to dirty. Features that are not involved should not have that effect. Sphinx offers the possibility to specify specific strategies per meta-model. The interface is IURIChangeDetectorDelegate.

The default implementation for XMI based resources is:

This will cause a change notification to be generated for any EObject that is modified, which of course we do not want. In contrast, the beginning of the Artop implementation of AutosarURIChangeDetectorDelegate looks like this:

A notification on a feature is only processed if the modified object is an Identifiable and the modified feature is the one used for URI calculation (shortName in AUTOSAR). There is additional code in this fragment to detect changes in containment hierarchy, which is not shown here.

So if you use Sphinx for your own metamodel with name based referencing, have a look at AutosarURIChangeDetectorDelegate and create your own custom implementation for efficiency.

## LocalProxyChangeListener

In addition, Sphinx detects objects that have been removed from the containment tree and updates references by turning the removed object into proxies! That might be unexpected if you then work with the removed object later on. The rationale is well-explained in the Javadoc of LocalProxyChangeListener:
Detects {@link EObject model object}s that have been removed from their containing object and turns them as well as all their directly and indirectly contained objects into proxies. This offers the following benefits:

• After removal of an {@link EObject model object} from its container, all other {@link EObject model object}s that
are still referencing the removed {@link EObject model object} will know that the latter is no longer available but
can still figure out its type and {@link URI}.
• If a new {@link EObject model object} with the same type as the removed one is added to the same container again
later on, the proxies which the other {@link EObject model object}s are still referencing will be resolved as usual
and therefore get automatically replaced by the newly added {@link EObject model object}.
• In big models, this approach can yield significant advantages in terms of performance because it helps avoiding
full deletions of {@link EObject model object}s involving expensive searches for their cross-references and those of
all their directly and indirectly contained objects. It does all the same not lead to references pointing at
“floating” {@link EObject model object}s, i.e., {@link EObject model object}s that are not directly or indirectly
contained in a resource.

# AUTOSAR: OCL, Xtend, oAW for validation

In a recent post, I wrote about model-to-model transformation with Xtend. In addition to M2M transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other frameworks, such as OCL, that are also candidates; Xpand (formerly known as oAW) is used in COMASSO. This blog post sketches some questions / issues to consider when choosing a framework for model validation.

## Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing this kind of model element attributes with the standard getter-method, you will not be able to distinguish whether the model element has been explicitly set to the same value as the default value or if it has never been touched.

If this kind of check is relevant, the validation technology should support access to the EMF methods that provide an explicit predicate for whether a value has been set.
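Outside EMF, the distinction can be illustrated with a sentinel value (a Python sketch of the idea; the class and names are made up):

```python
_UNSET = object()   # sentinel: distinguishes "never set" from "set to default"

class Attribute:
    def __init__(self, default):
        self._default = default
        self._value = _UNSET

    def get(self):
        # Plain getter: cannot tell "unset" apart from "set to the default"
        return self._default if self._value is _UNSET else self._value

    def is_set(self):
        # The explicit predicate the text asks for (EMF offers this as eIsSet)
        return self._value is not _UNSET

    def set(self, value):
        self._value = value

attr = Attribute(default=0)
before = (attr.get(), attr.is_set())   # (0, False) -- untouched
attr.set(0)
after = (attr.get(), attr.is_set())    # (0, True)  -- explicitly set to 0
print(before, after)
```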

## Inverse References

With AUTOSAR, a large number of checks will involve some logic to see if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually, these references are unidirectional and a traversal of the model is required to find referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A direct solution would be to traverse the entire model or use the utility functions of EMF. However, if the technology allows access to frameworks like IncQuery, a large number of queries / checks can be significantly sped up.
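A toy sketch of the difference between per-query traversal and a precomputed inverse index (plain Python with made-up PDU/signal names):

```python
# Toy model: PDUs reference signals by name (uni-directional references)
pdus = {"PDU1": ["S1"], "PDU2": ["S1", "S3"]}
signals = ["S1", "S2", "S3"]

# Naive: for each signal, traverse all PDUs (quadratic in model size)
unref_naive = [s for s in signals
               if not any(s in refs for refs in pdus.values())]

# Indexed: build the inverse reference set once, then each lookup is O(1);
# frameworks like IncQuery keep such indexes up to date incrementally.
referenced = {s for refs in pdus.values() for s in refs}
unref_indexed = [s for s in signals if s not in referenced]

print(unref_naive, unref_indexed)   # ['S2'] ['S2']
```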

## Error Reporting

Error reporting is central for the usability of the generated error messages. This involves a few aspects that can be explained with a simple example: consider that we want to check that each PDU has a unique id.

## Context

In Xpand (and similar in OCL), a check could look like:

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

1. Keep the context at the PDU level, but allow some efficient caching so that the code is not executed so often. However, that involves some additional effort to make sure that the caches are created / torn down at the right time (e.g. after model modification).
2. Move the context up to the package or model level and use a framework that allows generating errors/warnings not only for the elements in the context, but for any other (sub-)elements. The Sphinx Check framework supports this.
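Both formulations of the unique-id check can be sketched in a few lines (Python with hypothetical PDU data; the real checks run on the AUTOSAR model):

```python
from collections import Counter

pdus = [{"name": "P1", "id": 1},
        {"name": "P2", "id": 2},
        {"name": "P3", "id": 1}]   # P1 and P3 collide

# Per-PDU context: the full list is scanned for every PDU -> quadratic
dups_naive = [p["name"] for p in pdus
              if sum(1 for q in pdus if q["id"] == p["id"]) > 1]

# Context moved to model level: one pass builds a cache, lookups are O(1)
counts = Counter(p["id"] for p in pdus)
dups_cached = [p["name"] for p in pdus if counts[p["id"]] > 1]

print(dups_naive)   # ['P1', 'P3']
print(dups_cached)  # ['P1', 'P3']
```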

## Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider e.g. this example from the OCL documentation:

The fragment

is used both in the error message and in the calculation of the predicate. In this case this is no big deal, but when the calculation gets more complex, it can be annoying. Approaches are:

1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, this should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
2. In Xpand, any information can be attached to any model element. So in the check body, the result of the complex calculation can be attached to the model element, and that information is retrieved when calculating the error message.
3. In the Sphinx Check framework, error messages can be calculated from within the check body.

## User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for users to be able to work with and fix check results. For the development process, it is beneficial if that kind of description is stored close to the actual tests. This could be achieved by analysing comments near the validations, with tools like Javadoc, etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP, it is possible to use the dynamic help functionality of the Eclipse Help to generate documentation for all registered checks on the fly.

Here are some additional features of the Xtend language that come in handy when writing validations:

| Topic | Explanation |
|---|---|
| Comfort | Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax to navigate over models, which helps to avoid the loops that would be required in Java, e.g.: `val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[shortName == "DemMemoryDestinationRef"]` |
| Performance | Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit, JProfiler) to find bottlenecks in your transformations. |
| Long-term support | Xtend compiles to plain Java. You can just keep the compiled Java code for safety and be totally independent of the Xtend project itself. |
| Test support | Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation that are documented in nice reports generated with standard Java tooling. |
| Code coverage | Xtend compiles to plain Java. You can use any code coverage tools (such as JaCoCo). |
| Debugging | Debugger integration is fully supported to step through your code. |
| Extensibility | Xtend is fully integrated with Java. It does not matter if you write your code in Java or Xtend. |
| Documentation | You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports. |
| Modularity | Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations. |
| Active annotations | Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements. |
| Full EMF support | The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc. |
| IDE integration | The Xtend editors support essential operations such as "Find References", "Go To Declaration" etc. |