Source on GitHub

Fast native-speed matrix and linear algebra in Clojure

On the GPU: more than 1000x faster than the fastest optimized Java libraries for large matrices!

Works on AMD, Nvidia, and Intel hardware!

On the CPU: 10x - 60x faster than optimized Java libraries.

Slides for my upcoming talk at EuroClojure 2016: Clojure is not afraid of the GPU

Get Started! » Check the benchmarks » Learn to use it » Join the community »

Very Fast

Handcrafted JNI bindings, with almost no overhead, call the machine-optimized native BLAS routines provided by ATLAS.
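Here is roughly what that looks like from the REPL: a minimal sketch (using the core and native namespaces), where the matrix product is handed straight to the native gemm routine:

```clojure
(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge]])

;; Double-precision matrices backed by native, column-major memory
(def a (dge 2 3 [1 2 3 4 5 6]))   ; 2x3
(def b (dge 3 2 [1 2 3 4 5 6]))   ; 3x2

;; mm delegates to the optimized native gemm through JNI
(mm a b)                          ; => a 2x2 dense matrix
```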

Seamless support for GPU computing. Many times faster than any code running on the CPU.
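A rough sketch of the OpenCL route, assuming the clv factory and with-default-engine described in the GPU tutorial (exact namespaces and names can vary between versions):

```clojure
(require '[uncomplicate.commons.core :refer [with-release]]
         '[uncomplicate.clojurecl.core :refer [with-default]]
         '[uncomplicate.neanderthal.core :refer [asum]]
         '[uncomplicate.neanderthal.opencl :refer [with-default-engine clv]])

;; Sum the absolute values of a vector that lives in GPU memory
(with-default
  (with-default-engine
    (with-release [gpu-x (clv 1 -2 5)]  ; vector allocated on the GPU device
      (asum gpu-x))))
;; => 8.0
```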

Considerably faster than jblas, and faster than the pure-Java core.matrix Vectorz backend even for small matrices: see the benchmarks and comparisons.

Optimized for Clojure

Built with Clojure in mind, and implemented in Clojure.

A sane interface, with functions that fit a functional style while still respecting the realities of number crunching.

Comes with fast primitive versions of map and reduce.
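A small sketch of what that looks like, assuming the Fluokitten-backed fmap! and fold that Neanderthal's vectors and matrices support:

```clojure
(require '[uncomplicate.neanderthal.native :refer [dv]]
         '[uncomplicate.fluokitten.core :refer [fmap! fold]])

(def x (dv 1 2 3))

;; Primitive map: square every entry in place, no boxing
(fmap! (fn ^double [^double e] (* e e)) x)   ; x is now [1.0 4.0 9.0]

;; Primitive reduce: sum the entries
(fold x)                                     ; => 14.0
```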

Pluggable engine implementations - even a pure Java engine could be plugged in for cases where speed is not the priority.

Reusable Literature

The code and theory in existing books, articles, and tutorials on numerical linear algebra are centered on BLAS and LAPACK, so they can be used directly with Neanderthal.
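For example, the routines that the literature names map directly onto Neanderthal functions; a hedged sketch of a few common correspondences:

```clojure
(require '[uncomplicate.neanderthal.core :refer [axpy! dot nrm2 mv mm]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; BLAS routine                Neanderthal call
;; daxpy  y <- alpha*x + y     (axpy! alpha x y)
;; ddot   dot product          (dot x y)
;; dnrm2  Euclidean norm       (nrm2 x)
;; dgemv  y <- A*x             (mv a x)
;; dgemm  C <- A*B             (mm a b)

(let [x (dv 1 2 3)
      y (dv 10 20 30)
      a (dge 3 3 (range 1 10))]
  [(axpy! 2.0 x y)   ; daxpy: y becomes [12.0 24.0 36.0]
   (mv a x)])        ; dgemv: the matrix-vector product A*x
```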

Comes with a fully documented API, and a set of tutorials.

BLAS and LAPACK

Does not try to reinvent the wheel badly. Respects decades of BLAS and LAPACK standardization. Does the number crunching with native BLAS and/or the GPU, while a tiny Clojure layer keeps things sane. Win-win :)

Free and Open Source

Licensed under the Eclipse Public License, same as Clojure.