Source on Github

Fast native-speed matrix and linear algebra in Clojure

On the GPU: more than 1500x faster for large matrices than optimized Clojure/Java libraries!

On the CPU: 100x faster than optimized pure Java.

Works on AMD, Nvidia, and Intel hardware!

Slides and video of the presentation at EuroClojure 2016

Get Started! » Check the benchmarks » Learn to use it » Join the community »

Very Fast

Handcrafted JNI calls, with almost no overhead, invoke the machine-optimized native BLAS routines provided by Intel's MKL.

Seamless support for GPU computing. Many times faster than any code running on the CPU.

Faster than jblas, and orders of magnitude faster than pure-Java core.matrix/vectorz, even for small matrices: see the benchmarks and comparisons.
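To give a flavor of the thin Clojure layer that sits on top of those native calls, here is a minimal sketch of a matrix multiplication (it assumes Neanderthal and its native MKL binding are on the classpath):

```clojure
(require '[uncomplicate.neanderthal.core :refer [mm entry]]
         '[uncomplicate.neanderthal.native :refer [dge]])

;; 2x2 double matrices backed by native, column-major memory.
(def a (dge 2 2 [1 2 3 4]))   ;; [[1 3] [2 4]]
(def b (dge 2 2 [5 6 7 8]))   ;; [[5 7] [6 8]]

;; mm dispatches straight to the native BLAS gemm routine.
(def c (mm a b))
(entry c 0 0) ;; => 23.0  (1*5 + 3*6)
```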

Optimized for Clojure

Built with Clojure in mind, and implemented in Clojure.

A sane interface, with functions that fit a functional style while still respecting the realities of number crunching.

Comes with fast primitive versions of map and reduce.
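These are exposed through the Fluokitten protocols that Neanderthal structures implement; a minimal sketch of mapping and reducing over a native vector without boxing (assumes Fluokitten's fmap! and fold, and a primitive-hinted Clojure function):

```clojure
(require '[uncomplicate.fluokitten.core :refer [fmap! fold]]
         '[uncomplicate.neanderthal.native :refer [dv]])

;; fmap! applies a primitive double function to each entry in place;
;; fold then reduces the vector, here summing the squares 1 + 4 + 9.
(def x (dv 1 2 3))
(fold (fmap! (fn ^double [^double e] (* e e)) x))
```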

Pluggable engine implementations: even a pure-Java engine could be plugged in for cases where speed is not the priority.

Reusable Literature

The code and theory in existing books, articles, and tutorials on numerical linear algebra are BLAS- and LAPACK-centered, so they can be applied directly with Neanderthal.

Comes with a fully documented API, and a set of tutorials.

BLAS and LAPACK

Does not reinvent the wheel poorly. Respects decades of BLAS and LAPACK standardization: the number crunching is done by native BLAS and/or the GPU, with a tiny Clojure layer on top for sanity. Win-win :)

Latest News

LAPACK support available: factorizations and solvers!
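As an example of the new solvers, here is a sketch of solving a linear system A * X = B (it assumes the uncomplicate.neanderthal.linalg namespace and the native MKL binding):

```clojure
(require '[uncomplicate.neanderthal.core :refer [entry]]
         '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.linalg :refer [sv!]])

;; sv! calls LAPACK's gesv: a is overwritten with its LU factorization,
;; and b is overwritten with the solution.
(def a (dge 2 2 [3 1 1 2]))   ;; column-major: [[3 1] [1 2]]
(def b (dge 2 1 [9 8]))       ;; right-hand side
(sv! a b)                     ;; b now holds x = 2, y = 3
```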

The GPU engine supports AMD, Nvidia, and Intel hardware, as well as Mac OS X!

I hang around in the Uncomplicate group on the Clojurians Slack.

Read more at dragan.rocks

Follow the news on the Uncomplicate mailing list, or on the @Uncomplicateorg and @draganrocks Twitter accounts.

Free and Open Source

Licensed under the Eclipse Public License, same as Clojure.