Solving the Udacity SDCND exercises with the Windows Subsystem for Linux (WSL) / Windows Ubuntu

Some of the projects in the SDCND nano-degree are C/C++ projects based on CMake. Although there is some support for developing directly on Windows, many find developing on Linux more comfortable, and Linux is actually also relevant as an OS in the automotive domain. There are several scenarios that you could use on a Windows PC:

  • Set up dual boot and boot into the OS that you currently need for development – boring and cumbersome.
  • Set up a virtual machine (e.g. with VirtualBox) that runs Ubuntu Linux. A very nice solution, and actually one that I prefer.
  • Use the Windows Subsystem for Linux (WSL) to run Ubuntu directly on your machine. This is a new feature of Windows 10.

For the last scenario, most of the instructions for developing on Linux apply directly. However, WSL does not (directly) support programs with a GUI. So what can you do about that?

In the Unix world, GUIs are displayed via a system called the X server. So what do we need to run, e.g., Eclipse in Ubuntu directly on Windows?

  1. Install and run an X server. I use MobaXterm. It is a standalone program and can be easily configured.
  2. Install Ubuntu from the Microsoft Store.
  3. Start Ubuntu from the Windows Start menu.
  4. In the Ubuntu shell, install a Java runtime / JDK (the shell commands of these steps are summarized after this list):
    sudo apt-get update
    sudo apt-get install default-jdk   # assuming the default JDK package; any recent JDK will do
  5. Download an Eclipse distribution for 64-bit Linux. Unpack it (hint: gunzip -c <archive> | tar xvf -, or simply tar xvzf <archive>).
  6. In the Ubuntu shell:
    export DISPLAY=localhost:0.0
    This tells GUI programs started from the Ubuntu shell where to find the X server that displays them.
  7. Run the Eclipse binary: voilà, a Unix-style Eclipse on your Windows desktop.
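
Putting the shell steps together, the whole sequence looks roughly like this (the Eclipse archive name and the JDK package are illustrative – adjust them to what you actually downloaded):

  sudo apt-get update
  sudo apt-get install default-jdk                      # any recent JDK works for Eclipse
  tar xvzf eclipse-cpp-oxygen-linux-gtk-x86_64.tar.gz   # unpack the downloaded archive
  export DISPLAY=localhost:0.0                          # tell GUI programs where the X server is
  ./eclipse/eclipse &                                   # start Eclipse in the background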

 

 

Using Eclipse CDT for the Udacity Self Driving Cars Nanodegree C++ projects

There are a number of C/C++ IDEs that can be used for working on the Udacity SDCND C++ projects. This blog post contains a few tips for using Eclipse CDT to solve the Extended Kalman Filter project.

  1. Get Eclipse CDT, either by downloading and installing it from the Eclipse website or by using the setup from my last post.
  2. Check out the project from GitHub.
  3. Create a build directory and invoke cmake .. -G "Eclipse CDT4 - Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug (in contrast to the documentation, I compile with debug; see the command summary after this list).
  4. This creates a .cproject and a .project file in the build directory. Copy both one level higher.
  5. Import the project into Eclipse as an existing project.
  6. I usually compile and run from the command line; you can open a shell directly from Eclipse.
  7. To run or debug your code from within Eclipse, use the context menu.
  8. Debug mode will stop in main() by default. To disable this, open the settings of your debug configuration, go to the Debugger tab, and disable the stop-on-startup option.
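
For reference, the command-line steps might look roughly like this (the repository URL and directory names are placeholders):

  git clone <project-repo-url> project
  cd project
  mkdir build && cd build
  cmake .. -G "Eclipse CDT4 - Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug
  cp .project .cproject ..   # copy the generated Eclipse files one level higher
  make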

An Integrated Development Environment for the SDCND

The self-driving car nanodegree from Udacity includes a number of different projects in Python and C++ in different environments (local PC, or AWS if you don’t have a GPU). A common discussion topic on Slack is the question of what everyone uses as their editor / IDE.

Since I have recently been working on the topic of development environments for ML in automotive, I took the inspiration and set up an easy-to-install configuration for Eclipse, including:

  • PyDev for Python Development
  • CDT for C/C++ development
  • EGit for the integrated git access
  • AWS Eclipse integration for starting / stopping and accessing the AWS instances
  • Remote Systems Explorer etc. for easy access to AWS instances
  • Linux Tools for Docker management.

All these can be easily installed by means of the Eclipse installer (also known as “Oomph”). If you’d like to try the configuration, perform the following steps:

  1. Download the Eclipse Installer from http://www.eclipse.org/downloads/
  2. Download the setup configuration from https://github.com/grafandreas/sdcnd-eclipse-setup/blob/master/SCND-Oomph.setup
  3. Start the Eclipse Installer. Choose the “Advanced Configuration” mode in the drop-down menu at the top right.
  4. In the first dialog, choose the C/C++ Developer entry as a base, and use “Oxygen” as the product.
  5. Click next and use the “+” button at the top to add our setup file.
  6. Click next and proceed to the final dialog. The installer will download and install all the Eclipse packages.

You can now start to clone the SDCND repositories and work on your projects.

 

Udacity Self Driving Car Nano-Degree – is it worth the money?

The Udacity Self Driving Car Nano-Degree currently has a lot of visibility, both within the automotive industry and among software developers who are interested in moving into the industry. Udacity has provided the curriculum of the three-part nano-degree on medium.com, which is useful for finding out what you will learn. But what kind of training material and support do you get for your fees? Is it just PowerPoint slides or good video lectures? Are the exercises just multiple choice, or challenging projects? What support do you get? To give potential new students an impression, here is my take (after passing the first part) on what you get for your money:

  • The actual courses. They consist of textual documents as well as many video clips with explanations. The videos are professionally produced and generally explain things well. However, I also looked for further material on the web, either to clarify some things through a different explanation or to deepen topics of special interest.
  • A number of smaller (non-graded) exercises that can be solved directly in the browser. These consist of multiple choice quizzes as well as programming exercises and are useful for checking your understanding during the lectures.
  • Support through a mentor (online). I have not used the mentor; although I found solving the projects sometimes a bit challenging, I preferred asking on Slack (see below).
  • Several graded projects (five for the first part). Udacity provides a lot of the infrastructure for solving some of the projects:
    • Easy installation of the required Python environment through provided conda configurations.
    • AWS credits in case you need a virtual machine with a GPU to solve the projects that deal with neural networks. This is helpful if you do not have access to a GPU. If you own a PC/notebook with a decent Nvidia graphics card, this is not required – but it is still a good opportunity to learn about AWS.
    • A Unity-based car driving simulator, including Python libraries to control the car from your own code. You will use that for one of the machine learning exercises to create a neural network to drive the car along the track automatically.
    • Collections of the data sets required to train your models – most of these are collections of publicly available data sets (traffic signs, car images) – but you get it all readily bundled.
    • Submitted projects are reviewed by a real human – it is not just a robot that checks that your code runs and produces the correct output. Many of the reviews I have received are really detailed and provide additional information for further research, even when you pass the project the first time. Some of the reviewers did code reviews and provided feedback on some of the APIs that had been used.
      In addition, Udacity really has quality requirements for passing; it is not possible to pass with sub-par implementations.
  • A discussion forum on the web. I mainly checked the forum for solutions to installation problems with some tools.
  • Several Slack channels, which are helpful both for discussing the projects and for networking. Udacity provides Slack channels for the graded projects so that students can exchange ideas and questions, as well as geographically grouped channels (e.g. by continent and country) so that you can find students nearby for cooperation.

All in all, I find the price justified for the value that you actually get as a student.

Setting up TensorFlow with GPU for the SDCND on Windows with an Nvidia GPU

The Udacity Self-Driving Car Nano-Degree’s second project is an implementation of a traffic sign classifier with TensorFlow. Installation can be a bit confusing, because you could use your own PC or an AWS instance, set up with Anaconda (as recommended) or Docker. Also, the installation information is spread out over the documentation of Udacity, TensorFlow and Nvidia.

When you start reading the documentation, Udacity will point you to TensorFlow, which will point you to Nvidia. So here is a short write-up of how I managed to install everything on a notebook with an Nvidia GPU.

Before installing, make sure to check the version requirements in the TensorFlow documentation, because not all versions of CUDA and cuDNN will work with TensorFlow.

  • Install CUDA® Toolkit 8.0
  • Install cuDNN v6 or v6.1 (the version must match your CUDA version; you may have to register for the Nvidia developer program). Manually modify the PATH variable to include the path to the cuDNN DLL.
  • Install Anaconda for windows
  • Install and activate the environment from the CarND starter kit.
  • At this point, you can run the TensorFlow hello world to see whether TensorFlow is correctly installed. This is still without GPU support.
  • Start Python through the Anaconda shell, not other shells (there will be an icon in your Windows Start menu).
  • Execute pip install --ignore-installed --upgrade tensorflow-gpu

There is a debug script provided by someone on GitHub that should help in troubleshooting your installation. I did not need it; I just used a few lines like the following in a Python program and in my notebook to check whether GPU support is running.
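
A minimal check along those lines (using the TensorFlow 1.x API) could look like this:

  # list the devices TensorFlow can see; a GPU should show up as /gpu:0
  from tensorflow.python.client import device_lib
  print(device_lib.list_local_devices())

  # run a trivial op with device placement logging enabled
  import tensorflow as tf
  with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
      print(sess.run(tf.constant('Hello, GPU')))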

In addition, TensorFlow will print some extra information on the command line if it is running with GPU support. You can see it directly in the output, or in the shell where you started the Jupyter notebook:

2017-10-06 21:22:02.967457: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970M, pci bus id: 0000:01:00.0)

 

Octave cheat sheet for Coursera Machine Learning course

Switching from / to Octave when working through the Coursera machine learning course can be a bit of a hassle, since some of the Octave syntax is special. Here is my own cheat sheet of the most important statements required for the exercises.

Matrices

  • , Column separator
  • ; Row separator
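
For example, a 2x3 matrix:

  A = [1, 2, 3; 4, 5, 6]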

Submatrices

  • Indices start at one
  • Slices work just like in Python: a(1,2:3) selects the first row, columns 2 and 3

Adding ones

To add a column of ones on the left side of a matrix X:

  • rows(X) gets the number of rows of X
  • ones(rows(X),1) creates a column vector of ones
  • […] combines both into a new matrix
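
Putting the pieces together:

  X = [ones(rows(X), 1), X];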

Transpose
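
The transpose operator is a single quote:

  B = A';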

Apply an operator on vector elements

Dot operator
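
Prefixing an operator with a dot makes it work element-wise, e.g.:

  C = A .* B   % element-wise multiplication
  D = A .^ 2   % element-wise square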

Do not print result of statement / reduce verbose output

Append “;” to statement
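
For example:

  y = A * x;   % the trailing semicolon suppresses the output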

For loop
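
A basic example:

  for i = 1:10
    disp(i)
  end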

 

Functions
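
A function definition, for reference (the function name is arbitrary):

  function y = double_it(x)
    y = 2 * x;
  end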

Neural Networks

Multiplication of layers

Assume the calculation from a layer l with n nodes to a layer l+1 with m nodes; then \(\Theta\) will be a matrix with m rows and n columns, and the multiplication will be \(a^{(l+1)}=\Theta*a^{(l)}\). The final value will be \(a^{(l+1)}=sigmoid(\Theta*a^{(l)})\).
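
In Octave this is a one-liner (assuming a sigmoid function as defined below):

  a_next = sigmoid(Theta * a);   % a is n x 1, Theta is m x n, a_next is m x 1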

Sigmoid
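
A minimal vectorized implementation:

  function g = sigmoid(z)
    g = 1 ./ (1 + exp(-z));   % element-wise; works for scalars, vectors and matrices
  end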

which means:

\(\frac{1}{1+e^{-z}}\) for each of the vector elements.

 

Ramping Up for the Udacity Self-Driving Car Nanodegree

I was having great fun with the first exercise / graded project of Udacity’s Self-Driving Car Nanodegree (https://de.udacity.com/drive/). The first exercise is the implementation of a (simple) lane detection based on provided still images and video recordings from a dash cam. Having worked with image processing and Python before, I found it quite straightforward. It seems, however, that some of the participants with less background in these technologies needed more ramp-up (there are several discussions on a Udacity-provided Slack channel).

Anyone accepted for the Nanodegree program or planning to apply might consider reading up on the following topics (no need to deep-dive, though):

Image processing

  • Of course, implementing a lane detection requires processing images. The basic knowledge required is how images are encoded, i.e. the RGB representation as well as the HSV/HSL representations (in case you want to do some extra processing).

Python

Python is a straightforward scripting language, but it has some peculiarities that you might not have seen before:

  • Slices: a dedicated syntax for retrieving parts of lists/arrays/matrices
  • Assignments: Python supports slices and other constructs on the left side of an assignment operator
  • Lambdas / methods as arguments: your code will be invoked as a callback by passing methods around as parameters (see the examples below)
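
A few lines illustrating these features (the names are arbitrary):

  a = [0, 1, 2, 3, 4, 5]
  print(a[1:4])            # slice: prints [1, 2, 3]
  a[0:2] = [9, 9]          # a slice on the left side of an assignment
  call_with = lambda f, x: f(x)
  print(call_with(len, a)) # a function passed as a parameter: prints 6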

Git / Github

  • Example projects / configurations are provided by cloning a repo from GitHub
  • You can submit your first projects as a zip file or by submitting a link to a GitHub repo

Being prepared for these topics will make the first project submission deadline (1 week) much easier to meet.

Chinese IT and tech podcasts

While I work mainly in the domain of software engineering for automotive software, I try to keep up with general developments in IT to see which technologies can be relevant for toolchains and software engineering in complex technical systems. A good way to do that is to listen to podcasts – and I try to combine this with keeping my Chinese language skills up to date by listening to Chinese tech podcasts. Information about the podcasts is dispersed over a few sites on the internet, so I’ll provide a list of my favourites. I am just listing the websites without much comment; please have a look at the content yourself.

 

Tycho 0.24, pom less builds and Eclipse m2e maven integration

A very short technical article which might be useful in the next few days.

Tycho 0.24 has been released, and one of its most interesting features is the support for POM-less builds. The Tycho 0.24 release notes explain the use of the extension. However, there is a problem with the .mvn/extensions.xml not being found when using Eclipse variables in the “Base Directory” field of the run configuration.
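
For reference, the extension is registered in .mvn/extensions.xml roughly like this (coordinates as documented for Tycho 0.24):

  <extensions>
    <extension>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>tycho-pomless</artifactId>
      <version>0.24.0</version>
    </extension>
  </extensions>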

I have created an m2e Bugzilla entry (https://bugs.eclipse.org/bugs/show_bug.cgi?id=481173) showing the problem and a workaround.

Update: The m2e team fixed the bug within 48 hours. Very nice. Should be in one of the next builds.

Sphinx’ ResourceSetListener, Notification Processing and URI change detection

The Sphinx framework adds model management functionality to the base EMF tooling. Since it was developed within the Artop/AUTOSAR activities, it also includes code to make sure that the name-based references of AUTOSAR models are always correct. This includes some post-processing after model changes, working on the notifications that are generated within a transaction.

Detecting changes in reference URIs and dirty state of resources

Assume that we have two resources, SR.arxml (source) and TR.arxml (target), in the same ResourceSet (and on disk). TR contains an element TE with the qualified name /AUTOSAR/P/P2/Target, which is referenced from a source element SE contained in SR. That means that the string “/AUTOSAR/P/P2/Target” is to be found somewhere in SR.arxml.

Now what happens if some code changes TE’s name to NewName and saves the resource TR.arxml? If only TR.arxml were saved, we would have an inconsistent model on disk, since SR.arxml would still contain “/AUTOSAR/P/P2/Target” as a reference, which could not be resolved the next time the model is loaded.

We see that there are some model modifications that affect not only the resource of the modified elements, but also referencing resources. Sphinx determines the “affected” other resources and marks them as “dirty”, so that they are serialized the next time the model is written, making sure that name-based references stay correct.

But obviously, only changes that affect the URIs of referencing elements should cause other resources to be set to dirty; features that are not involved should not have that effect. Sphinx offers the possibility to specify dedicated strategies per metamodel. The interface is IURIChangeDetectorDelegate.

The default implementation for XMI-based resources generates a URI change notification for any EObject that is modified – of course we do not want that for name-based referencing. In contrast, the Artop implementation AutosarURIChangeDetectorDelegate filters the notifications much more selectively.

A notification on a feature is only processed if the modified object is an Identifiable and the modified feature is the one used for URI calculation (shortName in AUTOSAR). There is additional code in the implementation to detect changes in the containment hierarchy, which is not discussed here.
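
A minimal sketch of that filtering idea (illustrative only – these are not the actual Sphinx/Artop types; the Identifiable class and the shortName feature are passed in here, whereas the real delegate checks its metamodel directly):

  import org.eclipse.emf.common.notify.Notification;
  import org.eclipse.emf.ecore.EStructuralFeature;

  // Sketch: only notifications that can change name-based reference URIs pass this filter.
  public class NameBasedUriChangeFilter {

      private final Class<?> identifiableType;      // stands in for Artop's Identifiable
      private final EStructuralFeature nameFeature; // stands in for the shortName feature

      public NameBasedUriChangeFilter(Class<?> identifiableType, EStructuralFeature nameFeature) {
          this.identifiableType = identifiableType;
          this.nameFeature = nameFeature;
      }

      // True if the modified object is an Identifiable and the modified feature
      // is the one used for URI calculation.
      public boolean affectsReferenceUris(Notification notification) {
          return identifiableType.isInstance(notification.getNotifier())
                  && notification.getFeature() == nameFeature;
      }
  }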

So if you use Sphinx for your own metamodel with name-based referencing, have a look at AutosarURIChangeDetectorDelegate and create your own custom implementation for efficiency.

LocalProxyChangeListener

In addition, Sphinx detects objects that have been removed from the containment tree and updates references by turning the removed objects into proxies! That might be unexpected if you work with the removed object later on. The rationale is well explained in the Javadoc of LocalProxyChangeListener:
Detects {@link EObject model object}s that have been removed from their
{@link org.eclipse.emf.ecore.EObject#eResource() containing resource} and/or their {@link EObject#eContainer
containing object} and turns them as well as all their directly and indirectly contained objects into proxies. This
offers the following benefits:

  • After removal of an {@link EObject model object} from its container, all other {@link EObject model object}s that
    are still referencing the removed {@link EObject model object} will know that the latter is no longer available but
    can still figure out its type and {@link URI}.
  • If a new {@link EObject model object} with the same type as the removed one is added to the same container again
    later on, the proxies which the other {@link EObject model object}s are still referencing will be resolved as usual
    and therefore get automatically replaced by the newly added {@link EObject model object}.
  • In big models, this approach can yield significant advantages in terms of performance because it helps avoiding
    full deletions of {@link EObject model object}s involving expensive searches for their cross-references and those of
    all their directly and indirectly contained objects. It does all the same not lead to references pointing at
    “floating” {@link EObject model object}s, i.e., {@link EObject model object}s that are not directly or indirectly
    contained in a resource.