A GRU and a fully connected layer with softmax output can be used to recognise words in an audio stream (sequence to sequence).
I recorded a random sequence of 100 utterances of 4 words ("left", "right", "stop", "go") with a sampling rate of 11025 Hz.
Furthermore, 300 seconds of background noise were recorded.
All audio data was grouped into chunks of 512 samples.
The logarithm of the Fourier spectrum of each chunk was used as input for the GRU.
The initial state of the GRU (128 units) was set to zero.
The network was trained in a sequence-to-sequence setting to recognise words given a sequence of audio chunks.
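The preprocessing described above can be sketched in a few lines of Python with NumPy (the chunk size of 512 samples is from the post; the small epsilon guarding the logarithm is my own addition):

```python
import numpy as np

def log_spectrum_features(samples, chunk_size=512, eps=1e-10):
    """Split audio into chunks and return the log magnitude spectrum of each.

    `eps` avoids taking the logarithm of zero; its value is an
    illustrative choice, not taken from the original post."""
    n_chunks = len(samples) // chunk_size
    chunks = samples[:n_chunks * chunk_size].reshape(n_chunks, chunk_size)
    spectrum = np.abs(np.fft.rfft(chunks, axis=1))  # magnitude spectrum per chunk
    return np.log(spectrum + eps)  # shape: (n_chunks, chunk_size // 2 + 1)

# Example: one second of noise at 11025 Hz
audio = np.random.randn(11025).astype(np.float32)
features = log_spectrum_features(audio)
print(features.shape)  # (21, 257)
```

Each row of the result is one input vector for the recurrent network.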

The example was trained using gradient descent with learning rate alpha = 0.05.
The background noise was cycled randomly and words were inserted with random length pause between them.
The network was trained to output the word label 10 times after recognising a word.
The training took about one hour on a CPU.
The resulting example is more minimalistic than the TensorFlow audio recognition example.
The algorithm is very similar to an exercise in Andrew Ng's Deep Learning Specialization.
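For reference, a single GRU state update can be sketched in NumPy as follows. The 128 units match the post; the input dimension of 257 spectrum bins and the random weights are illustrative assumptions, not the trained network:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU update: new hidden state from input features x and state h."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)             # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)             # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1 - z) * h + z * h_cand               # blend old state and candidate

n_in, n_hidden = 257, 128  # assumed spectrum bins, GRU units from the post
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=s) for s in
          [(n_hidden, n_in), (n_hidden, n_hidden), (n_hidden,)] * 3]
h = np.zeros(n_hidden)     # initial state set to zero, as in the post
for _ in range(10):        # run the cell over ten feature chunks
    x = rng.normal(size=n_in)
    h = gru_step(x, h, *params)
print(h.shape)  # (128,)
```

A fully connected layer with softmax over the state at each time step would then produce the per-chunk word probabilities.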

An LSTM can be used to recognise words from audio data (sequence to one).
I recorded a random sequence of 320 utterances of 5 words ("left", "right", "stop", "go", and "?" for background noise) with a sampling rate of 16 kHz.
The audio data was split into chunks of 512 samples.
The word start and end were identified using a rising and a falling threshold for the root-mean-square value of each chunk.
The logarithm of the Fourier spectrum of each chunk was used as input for the LSTM.
The initial state and output of the LSTM (16 units each) were set to zero.
The final output state of the LSTM was used as input for a fully connected layer with 5 output units followed by a softmax classifier.
The network was trained in a sequence-to-one setting to recognise a word given a sequence of audio chunks as shown in the following figure.
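The word segmentation using a rising and a falling RMS threshold can be sketched like this (the threshold values are assumptions; the post does not state them):

```python
import numpy as np

def segment_words(samples, chunk_size=512, rise=0.1, fall=0.05):
    """Detect word start/end via a rising and a falling RMS threshold.

    Returns (start, end) chunk indices for each detected word.
    The threshold values are illustrative assumptions."""
    n = len(samples) // chunk_size
    chunks = samples[:n * chunk_size].reshape(n, chunk_size)
    rms = np.sqrt((chunks ** 2).mean(axis=1))  # loudness of each chunk
    words, start, in_word = [], None, False
    for i, value in enumerate(rms):
        if not in_word and value >= rise:      # rising threshold: word starts
            start, in_word = i, True
        elif in_word and value < fall:         # falling threshold: word ends
            words.append((start, i))
            in_word = False
    return words

# Synthetic example: silence, a loud burst, silence
signal = np.concatenate([np.zeros(2048), 0.5 * np.ones(2048), np.zeros(2048)])
print(segment_words(signal))  # [(4, 8)]
```

Using two thresholds instead of one prevents the detector from toggling rapidly when the loudness hovers around a single threshold.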

The example was trained using gradient descent with learning rate alpha = 0.001.
The training took seven hours on a CPU.
The sequence-to-one setting handles words of different lengths gracefully.
The resulting example is more minimalistic than the TensorFlow audio recognition example.

In the following video the speech recognition is used to control a robot.

The Gilbert-Johnson-Keerthi (GJK) algorithm is a method for collision detection of convex polyhedra.
The method is used in multiple rigid body simulations for computer games.

Given:

a set of points defining the convex polyhedron A

a set of points defining the convex polyhedron B

The algorithm returns:

the distance between the polyhedra

the two closest points

the two simplices (convex subsets of up to 4 points in 3 dimensions) of A and B containing the two closest points

An n-dimensional simplex is the convex hull of n+1 affinely independent points as shown in the figure below.

The algorithm makes use of the fact that the distance between two sets A and B is the distance of the Minkowski difference to the origin:

d(A, B) = d(M(A, B), {0})

where

M(A, B) = {a - b | a ∈ A, b ∈ B}

The GJK algorithm iteratively updates a simplex until the closest point to the origin is found.

The algorithm iterates using support points.
Given a set M and a vector d, the support point is defined as the furthest point of M in direction d:

s(M, d) = argmax_{x ∈ M} x·d
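For a finite point set the support point is a single dot-product maximisation; a sketch in Python. Note that the support of the Minkowski difference can be evaluated from the supports of A and B without constructing the difference set explicitly:

```python
import numpy as np

def support(points, d):
    """Support point: the point of the set furthest in direction d."""
    points = np.asarray(points)
    return points[np.argmax(points @ d)]

def support_minkowski(a_points, b_points, d):
    """Furthest point of {a - b} in direction d, without forming the full set."""
    return support(a_points, d) - support(b_points, -d)

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(support(square, np.array([1, 1])))  # [1 1]
```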

The GJK algorithm detects the two closest points of A and B as follows:

Choose any point m in the Minkowski difference M(A, B) of A and B.

Set the initial simplex w_{0} to w_{0}={m}.

Let d be the closest point of w_{k} to the origin.

If d·s(M, -d) >= d·s(w_{k}, -d), i.e. the support point of M makes no further progress towards the origin, then return d.

Set w'_{k+1}=w_{k}∪{s(M,-d)}

Set w_{k+1} to the smallest simplex in w'_{k+1} still containing s(M,-d).

Continue with step 3.

Note that step 3 requires finding the closest point of the simplex w_{k} to the origin.
Most implementations of the GJK algorithm seem to use the following approach:

Check if the origin is contained in the 3D simplex.

If not, check whether the origin is near one of the 4 faces of the simplex.

If the origin is not in the Voronoi region of any face, check whether the origin is near one of the 6 edges.

If not in the Voronoi region of an edge, find the closest of the 4 vertices.

A much more compact implementation can be obtained using a divide-and-conquer approach:

Let w_{k}={w_{k0},w_{k1},...,w_{kn}}

Solve the least squares equation system H t ≈ -w_{k0}, where H = (w_{k1}-w_{k0}, w_{k2}-w_{k0}, ..., w_{kn}-w_{k0}) and t = (t_{1}, t_{2}, ..., t_{n})^T.

If all t_{i}>=0 and t_{1}+t_{2}+...+t_{n}<=1 (or n=0), then w_{k0}+Ht is the closest point.

Otherwise determine the closest point for each of the (n-1)-dimensional sub-simplices using the approach above (i.e. recursion) and take the closest of those.

Note that this approach will visit each edge up to two times, and each point up to three times.
The performance is not optimal, but it makes for a much more concise implementation.

The least squares solution is:

t = -(H^T H)^{-1} H^T w_{k0}

Here follows an implementation in Scheme (GNU Guile).
By using pairs of points from A and B instead of the Minkowski difference, one can keep track of the information required to determine the pair of closest points of A and B (instead of the closest point of M to the origin).
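For illustration, here is a Python sketch of the divide-and-conquer closest-point computation described above (the post's actual implementation is in Scheme; `numpy.linalg.lstsq` solves the least squares system):

```python
import numpy as np
from itertools import combinations

def closest_point_on_simplex(vertices):
    """Closest point to the origin on a simplex, via least squares
    plus recursion over sub-simplices as described above."""
    v = np.asarray(vertices, dtype=float)
    if len(v) == 1:
        return v[0]
    h = (v[1:] - v[0]).T                      # columns span the simplex
    t, *_ = np.linalg.lstsq(h, -v[0], rcond=None)
    if (t >= 0).all() and t.sum() <= 1:       # solution lies inside: done
        return v[0] + h @ t
    # Otherwise recurse over the (n-1)-dimensional sub-simplices.
    candidates = [closest_point_on_simplex(list(sub))
                  for sub in combinations(v, len(v) - 1)]
    return min(candidates, key=lambda p: p @ p)

# Closest point to the origin on the segment from (1, 1, 0) to (1, -1, 0):
p = closest_point_on_simplex([[1, 1, 0], [1, -1, 0]])
print(np.round(p, 6))  # [1. 0. 0.]
```

As noted above, edges and points get visited more than once, so this sketch trades some performance for conciseness, just like the Scheme version.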

Here is an example of two colliding cuboids simulated using this implementation:

Any feedback, comments, and suggestions are welcome.

OpenGL is a powerful cross-platform standard for 3D visualisation.
OpenGL uses a domain-specific language (the shader language) to describe element-wise operations on vertices (vertex shader) and pixel values (fragment shader).
More recent OpenGL versions also support geometry shaders and tessellation shaders (see the OpenGL article on Wikipedia).

The learning curve for OpenGL is quite steep at the beginning, because a program drawing a single triangle is almost as complex as a program drawing thousands of triangles.
It is also important to add code for retrieving error messages in order to be able to develop and debug effectively.

I haven't found many minimal examples to understand OpenGL, so I am posting one here.
The example draws a coloured triangle on the screen.

The example uses the widely supported OpenGL version 3.1 (its shaders carry the version tag 130).
You can download, compile, and run the example as follows:

Any feedback, comments, and suggestions are welcome.

I am quite interested in how simulators such as the Orbiter space simulator are implemented.
A spacecraft can be seen as a rigid object with a moment of inertia tensor.
Without any external torques acting on the object, its angular momentum does not change.
In general the inertia tensor causes the direction of the angular velocity vector to be different at each point in time even though the angular momentum is constant.
This motion can be simulated numerically using a higher-order integration method such as 4th order Runge-Kutta.
Here is a video showing the resulting simulation of a cuboid tumbling in space:
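The torque-free rotation can be sketched as follows (a minimal Python sketch; the inertia values assume a 1×2×3 cuboid of unit mass, and only Euler's rotation equations in the body frame are integrated):

```python
import numpy as np

# Principal moments of inertia of a 1x2x3 cuboid of unit mass (assumed values)
inertia = np.diag([13.0, 10.0, 5.0]) / 12.0
inv_inertia = np.linalg.inv(inertia)

def omega_dot(omega):
    """Euler's rotation equations for a torque-free rigid body (body frame)."""
    return inv_inertia @ np.cross(inertia @ omega, omega)

def rk4_step(omega, dt):
    """Classical 4th order Runge-Kutta step for the angular velocity."""
    k1 = omega_dot(omega)
    k2 = omega_dot(omega + 0.5 * dt * k1)
    k3 = omega_dot(omega + 0.5 * dt * k2)
    k4 = omega_dot(omega + dt * k3)
    return omega + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

omega = np.array([0.1, 2.0, 0.1])   # spin close to the unstable middle axis
l0 = np.linalg.norm(inertia @ omega)
for _ in range(1000):               # simulate 10 seconds of tumbling
    omega = rk4_step(omega, 0.01)
drift = abs(np.linalg.norm(inertia @ omega) - l0) / l0
print(drift < 1e-3)  # True: angular momentum magnitude is conserved up to integration error
```

Spinning near the middle axis of inertia makes the direction of the angular velocity vector flip back and forth, the tumbling motion shown in the video.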

Brian Vincent Mirtich's thesis demonstrates how to simulate collisions of two convex polyhedra.
Furthermore, micro-collisions are used as a simple but powerful method to simulate resting contacts.
If the micro-collisions are sufficiently small, a resting object can be approximated with good accuracy:

One still needs to implement friction (also shown in Mirtich's thesis), which requires a numerical integral to compute the friction occurring during a micro-collision.
Collisions of polyhedra are demonstrated in Mirtich's thesis as well, however it might be simpler to make use of the GJK algorithm.
Planetary bodies, spacecraft, and other non-convex objects could be handled by dividing them into multiple convex objects.
It would also be interesting to integrate soft body physics as shown in Rigs of Rods.
However the accuracy of Rigs of Rods is not sufficiently high for space simulation.
E.g. an object tumbling in space would not preserve its angular momentum.

Update:

In the following examples, dynamic Coulomb friction with the ground is simulated.