Neanderthal: Tutorials
Basic setup
 Getting Started helps you set up your environment.
 Hello World, a minimal project that helps at the very beginning.
Presentations (Slides & Videos)

ClojuTRE & SmallFP 2018 – Interactive, Functional, GPU Accelerated Programming in Clojure: Take a look at the video. Neanderthal, CUDA, etc.

Bob Konferenz 2017 – Bayadera: Bayes + Clojure + GPU: Take a look at the video and slides. Bayadera is a cool library that uses Neanderthal.

EuroClojure 2016 – Clojure is Not Afraid of the GPU: Take a look at the video and slides. Please note that Neanderthal became much faster in later releases than it was in this presentation.
Performance comparisons with other fast libraries
 Neanderthal vs ND4J – vol 1 – Native Performance, Java and CPU
 Neanderthal vs ND4J – vol 2 – The Same Native MKL Backend, 1000x Speedup
 Neanderthal vs ND4J – vol 3 – Clojure Beyond Fast Native MKL Backend
 Neanderthal vs ND4J – vol 4 – Fast Vector Broadcasting in Java, CPU and CUDA
 Neanderthal vs ND4J – vol 5 – Why are native map and reduce up to 100x faster in Clojure?
General and native engine tutorials
 Fast, Native Speed, Vector Computations in Clojure, and the source code for this tutorial.
 Fast Map and Reduce for Primitive Vectors and Matrices, which also comes with its source code.
 Neanderthal API Reference contains the description of each function, along with mini examples. There is genuinely helpful stuff there. Do not skip it!
Deep Learning From Scratch To GPU
GPU computation tutorials

Matrix Computations on the GPU in Clojure (in TFLOPS!). Proceed to this GPU engine tutorial when you want even more speed (source code).

CUDA and cuBLAS GPU matrices in Clojure. The CUDA engine announcement blog post.
Linear Algebra Tutorials
 Clojure Linear Algebra Refresher (1) – Vector Spaces
 Clojure Linear Algebra Refresher (2) – Eigenvalues and Eigenvectors
 Clojure Linear Algebra Refresher (3) – Matrix Transformations
 Clojure Linear Algebra Refresher (4) – Linear Transformations
 Coding the Matrix in Neanderthal contains examples that follow the Coding the Matrix book.
Clojure Numerics
 Clojure Numerics, Part 1 – Use Matrices Efficiently
 Clojure Numerics, Part 2 – General Linear Systems and LU Factorization
 Clojure Numerics, Part 3 – Special Linear Systems and Cholesky Factorization
 Clojure Numerics, Part 4 – Singular Value Decomposition (SVD)
 Clojure Numerics, Part 5 – Orthogonalization and Least Squares
 Clojure Numerics, Part 6 – More Linear Algebra Fun with Least Squares
Internal details and edge cases
 Neanderthal Tests show many more details, without explanations in prose. This will help when you are struggling with an edge case.
Making sense of legacy BLAS & LAPACK
BLAS (Basic Linear Algebra Subprograms) and LAPACK are mature de facto standards for numerical linear algebra. They might seem arcane at first glance, but with a little exploration, it all makes sense.
Where to find legacy BLAS & LAPACK documentation
When you need more information beyond the Neanderthal API, the official BLAS and LAPACK documentation can help you: it provides a more detailed description of each subroutine.
Naming conventions (briefly)
You see a procedure named DGEMM in the aforementioned docs and you are completely baffled. Here is how to decipher it:
 D is for Double precision, meaning the function works with doubles.
 GE is for GEneral dense matrix, meaning we store all these doubles without any clever tricks for special matrix structures such as symmetric or triangular.
 MM is the operation: Matrix Multiply.
Generally, Neanderthal abstracts away the primitive type (D) and the actual structure (GE), so you can use functions like mm to multiply two matrices.
It is a good idea to familiarize yourself with BLAS Naming Scheme. It will help you understand how to efficiently use Neanderthal functions and where to look for a function that does what you need to do.
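To make the scheme concrete, here is a small, purely illustrative decoder (in Python, since the naming convention is language-agnostic) that splits a BLAS routine name into its parts. The lookup tables are hypothetical samples covering only a few common codes, not the full naming scheme:

```python
# Illustrative decoder for BLAS-style routine names such as DGEMM.
# The tables below are a small sample; the real naming scheme has many more codes.

PRECISION = {
    "S": "single-precision real",
    "D": "double-precision real",
    "C": "single-precision complex",
    "Z": "double-precision complex",
}

MATRIX_TYPE = {
    "GE": "general dense",
    "SY": "symmetric",
    "TR": "triangular",
    "GB": "general banded",
}

OPERATION = {
    "MM": "matrix-matrix multiply",
    "MV": "matrix-vector multiply",
    "SV": "solve a linear system",
}

def decode(name):
    """Split a routine name like 'DGEMM' into (precision, matrix type, operation)."""
    name = name.upper()
    return (PRECISION[name[0]], MATRIX_TYPE[name[1:3]], OPERATION[name[3:]])

print(decode("DGEMM"))
# → ('double-precision real', 'general dense', 'matrix-matrix multiply')
```

So DGEMM reads as "double-precision, general dense, matrix multiply" — exactly the pieces that Neanderthal's polymorphic functions hide from you.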
Tell Us What You Think!
Please take some time to tell us about your experience with the library and this site. Let us know what we should explain better, or what is not clear enough. If you are willing to contribute improvements, even better!