Common abstractions and methods for JuliaML packages; a core dependency of most packages in the ecosystem.
Supervised and unsupervised loss functions for both distance-based (regression and probability estimation) and margin-based (classification, e.g. SVMs) approaches.
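To illustrate the distinction, here is a hand-rolled sketch of one loss of each kind (for illustration only, not the package's actual API):

```julia
# Distance-based losses act on the residual ŷ - y (regression).
l2loss(ŷ, y) = abs2(ŷ - y)

# Margin-based losses act on the agreement y * ŷ, with y ∈ {-1, +1} (classification).
hingeloss(ŷ, y) = max(zero(ŷ), 1 - y * ŷ)

l2loss(1.5, 1.0)     # 0.25
hingeloss(2.0, 1)    # 0.0 — correctly classified with margin ≥ 1
hingeloss(0.3, 1)    # positive: inside the margin, so penalized
```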
Provides generic implementations for a diverse set of penalty functions that are commonly used for regularization purposes.
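Two of the most common penalty terms can be sketched in a few lines (illustrative definitions only, not the package's API):

```julia
# L1 penalty (lasso): encourages sparse coefficient vectors.
l1penalty(λ, w) = λ * sum(abs, w)

# L2 penalty (ridge / weight decay): shrinks coefficients smoothly.
l2penalty(λ, w) = λ * sum(abs2, w) / 2

w = [1.0, -2.0]
l1penalty(0.1, w)   # ≈ 0.3
l2penalty(0.1, w)   # ≈ 0.25
```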
A generic and modular framework for building custom iterative algorithms in Julia.
Extension of LearningStrategies that implements stochastic gradient descent and other online optimization algorithms and components, including parameter-update rules (AdaGrad, ADAM, etc.) and minibatch gradient averaging.
Unconstrained, continuous, full-batch optimization algorithms based on the LearningStrategies framework. This is a prototype package exploring how Optim's algorithms could be moved to a more modular and maintainable framework.
Most active reinforcement learning development is taking place at https://github.com/JuliaReinforcementLearning
OpenAI's Gym wrapped as a Reinforce.jl environment
Arcade Learning Environment (ALE) wrapped as a Reinforce.jl environment
Dataset iteration and splitting (train/test splits, k-fold cross-validation, batching, etc.).
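The kind of functionality described above can be sketched with hand-rolled helpers (the names `splitobs` and `kfolds` here are illustrative reimplementations, not necessarily the package's API):

```julia
using Random

# Shuffle observation indices and split them into train/test portions.
function splitobs(n::Int; at::Float64 = 0.8, rng = Random.default_rng())
    idx = randperm(rng, n)
    cut = floor(Int, at * n)
    return idx[1:cut], idx[cut+1:end]
end

# Produce (train, validation) index pairs for k-fold cross-validation.
function kfolds(n::Int, k::Int)
    folds = [collect(i:k:n) for i in 1:k]
    return [(reduce(vcat, folds[setdiff(1:k, [j])]), folds[j]) for j in 1:k]
end

train, test = splitobs(10; at = 0.8)   # 8 train indices, 2 test indices
cv = kfolds(10, 5)                     # 5 (train, val) pairs covering all indices
```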
Utility package for working with classification targets and label encodings.
Machine Learning Datasets for Julia
Metrics for scoring machine learning models in Julia, including MSE, accuracy, and more.
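Hand-rolled versions of the two metrics named above, to show what such a package computes (illustrative only, not the package's API):

```julia
# Mean squared error between predictions ŷ and targets y.
mse(ŷ, y) = sum(abs2, ŷ .- y) / length(y)

# Fraction of predictions that exactly match the targets.
accuracy(ŷ, y) = count(ŷ .== y) / length(y)

mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])   # one residual of 1, so 1/3
accuracy([1, 0, 1, 1], [1, 1, 1, 0])    # 2 of 4 correct → 0.5
```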
Utilities to efficiently track learning curves or other optimization information
Koç University deep learning framework. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia.
A high-level API for machine learning, implemented in Julia. Flux aims to provide a concise and expressive syntax for architectures that are hard to express in other frameworks. The current focus is on ANNs with TensorFlow or MXNet as a backend.
Support Vector Machines (SVM) in pure Julia