Packages

JuliaML is your one-stop-shop for learning models from data. We provide general abstractions and algorithms for modeling and optimization, implementations of common models, tools for working with datasets, and much more.

The design and structure of the organization is geared towards a modular and composable approach to all things data science. Mix and match models, losses, penalties, and algorithms however you see fit, and at whatever granularity is appropriate. Beginner users and those looking for ready-made solutions can use the convenience package Learn. For custom modeling solutions, choose methods and components from any package:

Learn

An all-in-one workbench that simply imports and re-exports the packages below, providing an easy way to get started with the JuliaML ecosystem.


Core

LearnBase

The core abstractions and methods shared by JuliaML packages. Most packages in the ecosystem depend on it.

LossFunctions

Supervised and unsupervised loss functions, including distance-based losses (for regression and probability estimation) and margin-based losses (for classification, e.g. SVMs).
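A minimal sketch of the two loss families, assuming the LossFunctions.jl `value(loss, target, output)` interface and the `L2DistLoss`/`HingeLoss` types; exact names may differ across package versions:

```julia
using LossFunctions

y = 1.0    # target
ŷ = 0.7    # model output

# Distance-based loss: depends only on the difference ŷ - y.
value(L2DistLoss(), y, ŷ)   # (ŷ - y)^2 ≈ 0.09

# Margin-based loss: depends on the agreement y * ŷ.
value(HingeLoss(), y, ŷ)    # max(0, 1 - y * ŷ) ≈ 0.3
```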

ObjectiveFunctions

Generic definitions of objective functions using abstractions from LearnBase.

PenaltyFunctions

Provides generic implementations for a diverse set of penalty functions that are commonly used for regularization purposes.
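A minimal sketch, assuming PenaltyFunctions' `value` function and the penalty types shown here; exact scaling conventions may differ by version:

```julia
using PenaltyFunctions

θ = [0.5, -1.0, 2.0]    # a parameter vector to regularize

value(L1Penalty(), θ)   # lasso-style penalty: sum of |θᵢ| = 3.5
value(L2Penalty(), θ)   # ridge-style squared-norm penalty of θ
```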

MLKernels

A Julia package for Mercer kernel functions (or the covariance functions used in Gaussian processes) that are used in the kernel methods of machine learning.

Transformations (experimental)

Dynamic tensor computation framework. Static transforms, activation functions, neural nets, and more.


Learning Algorithms

LearningStrategies

A generic and modular framework for building custom iterative algorithms in Julia.
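The pattern can be sketched as follows, assuming the `learn!`, `strategy`, `MaxIter`, and `update!` names from the LearningStrategies README; the hook signatures may differ across versions:

```julia
using LearningStrategies
import LearningStrategies: update!

# A toy model that just counts how many iterations were run.
mutable struct CountModel
    n::Int
end

struct CountStrategy <: LearningStrategy end

# `learn!` calls `update!` once per iteration for each strategy.
update!(model::CountModel, s::CountStrategy, i, item) = (model.n += 1)

model = CountModel(0)

# Compose the custom strategy with a stopping criterion and run the loop.
learn!(model, strategy(CountStrategy(), MaxIter(5)))
```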

StochasticOptimization

An extension of LearningStrategies implementing stochastic gradient descent and other online optimization algorithms and components, including parameter update rules (Adagrad, ADAM, etc.) and minibatch gradient averaging.

ContinuousOptimization (WIP, help needed)

Unconstrained continuous full-batch optimization algorithms based on the LearningStrategies framework. This is a prototype package exploring how Optim's algorithms could be moved to a more modular and maintainable framework.


Reinforcement Learning

Reinforce

Abstractions, algorithms, and utilities for reinforcement learning in Julia.

OpenAIGym

OpenAI's Gym, wrapped as a Reinforce.jl environment.

AtariAlgos

The Arcade Learning Environment (ALE), wrapped as a Reinforce.jl environment.


Tools

MLDataUtils

Dataset iteration and splitting (train/test splits, k-fold cross-validation, batching, etc.).
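A minimal sketch, assuming MLDataUtils' `splitobs` and `kfolds` functions (the last array dimension is treated as the observation dimension by default; keyword names may vary by version):

```julia
using MLDataUtils

X = rand(2, 100)    # 100 observations with 2 features each

# Hold out 30% of the observations for testing.
train, test = splitobs(X, at = 0.7)

# 5-fold cross-validation: iterate over (train, validation) pairs.
for (train_fold, val_fold) in kfolds(X, k = 5)
    # fit and evaluate a model on each fold here
end
```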

MLLabelUtils

Utility package for working with classification targets and label encodings.
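A minimal sketch, assuming MLLabelUtils' `labelenc` and `convertlabel` functions as described in its documentation; exact encoding names may differ across versions:

```julia
using MLLabelUtils

y = ["spam", "ham", "ham", "spam"]

# Infer the native label encoding of the targets.
enc = labelenc(y)

# Re-encode the targets as a one-hot (one-of-K) matrix, one column
# per observation.
Y = convertlabel(LabelEnc.OneOfK, y)
```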

MLDatasets

Machine learning datasets for Julia.

MLMetrics

Metrics for scoring machine learning models in Julia. MSE, accuracy, and more.

ValueHistories

Utilities to efficiently track learning curves or other optimization information.

MLPlots

Plotting recipes to be used with Plots. Also check out PlotRecipes.


Other notable packages

MXNet

The MXNet Julia package: flexible and efficient deep learning in Julia.

TensorFlow

A Julia wrapper for TensorFlow.

Knet

Koç University's deep learning framework. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia.

Flux

A high level API for machine learning, implemented in Julia. Flux aims to provide a concise and expressive syntax for architectures that are hard to express within other frameworks. The current focus is on ANNs with TensorFlow or MXNet as a backend.

Mocha

Deep Learning framework for Julia (author recommends MXNet instead)

KSVM

Support vector machines (SVMs) in pure Julia.

Other AI packages