A Tour of Machine Learning Algorithms
http://machinelearningmastery.com/a-tour-of-machine-learning-algorithms/
There are so many algorithms available. The difficulty is that there are classes of methods and there are extensions to methods, and it quickly becomes very difficult to determine what constitutes a canonical algorithm. In this post I want to give you two ways to think about and categorize the algorithms you may come across in the field.
The first is a grouping of algorithms by learning style. The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together). Both approaches are useful.
LEARNING STYLE
Supervised Learning: Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a point in time. A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Example problems are classification and regression. Example algorithms are Logistic Regression and the Back-Propagation Neural Network.
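To make that predict-and-correct loop concrete, here is a minimal sketch of supervised learning as logistic regression fit by gradient descent. The synthetic data, learning rate, and iteration count are all illustrative choices, not from the article:

import numpy as np

# Supervised loop: predict, measure error against known labels, correct.
# Logistic regression on illustrative synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # known labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predictions
    w -= lr * X.T @ (p - y) / len(y)         # correction from errors
    b -= lr * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print(f"training accuracy: {np.mean(pred == y):.2f}")   # high on separable data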
Unsupervised Learning: Input data is not labelled and does not have a known result. A model is prepared by deducing structures present in the input data. Example problems are association rule learning and clustering. Example algorithms are the Apriori algorithm and k-means.
Semi-Supervised Learning: Input data is a mixture of labelled and unlabelled examples. There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression. Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
Reinforcement Learning: Input data is provided as stimulus to a model from an environment to which the model must respond and react. Feedback is provided not from a teaching process as in supervised learning, but as punishments and rewards from the environment. Example problems are systems control and robot control. Example algorithms are Q-learning and Temporal Difference learning.
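As a concrete illustration, here is a minimal sketch of tabular Q-learning on a hypothetical five-state "chain" environment. The environment, reward scheme, and parameters are invented for illustration:

import numpy as np

# Tabular Q-learning: no labelled answers, only a reward at the goal state.
# The toy chain environment (move left/right, goal on the right) is illustrative.
n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:           # rightmost state is terminal
        a = int(rng.integers(n_actions))    # explore randomly (off-policy)
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Update: move Q toward reward + discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.round(2))   # moving right should score higher in every state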
ALGORITHM SIMILARITY
Regression
Regression is concerned with modelling the relationship between variables, iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to both the class of problem and the class of algorithm. Really, regression is a process. Some example algorithms are listed below, with a least-squares sketch after the list:
- Ordinary Least Squares
- Logistic Regression
- Stepwise Regression
- Multivariate Adaptive Regression Splines (MARS)
- Locally Estimated Scatterplot Smoothing (LOESS)
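Here is a minimal sketch of Ordinary Least Squares: fit coefficients by minimising squared prediction error via the normal equations. The synthetic data and true coefficients are illustrative:

import numpy as np

# Ordinary Least Squares on illustrative synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0 + 0.1 * rng.normal(size=50)

Xb = np.column_stack([np.ones(len(X)), X])    # add intercept column
# Normal equations beta = (X^T X)^(-1) X^T y, solved stably with lstsq.
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(beta.round(2))   # approximately [1.0, 3.0, -2.0]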
Instance-based Methods
Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and the similarity measures used between instances. A k-Nearest Neighbour sketch follows the list.
- k-Nearest Neighbour (kNN)
- Learning Vector Quantization (LVQ)
- Self-Organizing Map (SOM)
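A minimal sketch of k-Nearest Neighbours: keep the training data verbatim, compare a new point to it with a distance measure, and let the closest stored instances vote. The data and choice of k are illustrative:

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)   # similarity = Euclidean distance
    nearest = np.argsort(dists)[:k]               # indices of the k best matches
    return np.bincount(y_train[nearest]).argmax() # majority vote

# Illustrative stored instances with two classes.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.8, 0.9])))   # -> 1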
Regularization Methods
An extension made to another method (typically regression methods) that penalizes models based on their complexity, favoring simpler models that are also better at generalizing. I have listed regularization methods here because they are popular, powerful, and generally simple modifications made to other methods. A ridge regression sketch follows the list.
- Ridge Regression
- Least Absolute Shrinkage and Selection Operator (LASSO)
- Elastic Net
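A minimal sketch of Ridge Regression: ordinary least squares plus a penalty lam * ||beta||^2 on coefficient size, which shrinks the model toward simplicity. The penalty strength and data are illustrative:

import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Closed form: beta = (X^T X + lam * I)^(-1) X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([2.0, 0.0, -1.0]) + 0.1 * rng.normal(size=40)
print(ridge_fit(X, y, lam=0.0).round(2))    # no penalty: plain least squares
print(ridge_fit(X, y, lam=10.0).round(2))   # penalised: coefficients shrink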
Decision Tree Learning
Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. The simplest tree, a decision stump, is sketched after the list.
- Classification and Regression Tree (CART)
- Iterative Dichotomiser 3 (ID3)
- C4.5
- Chi-squared Automatic Interaction Detection (CHAID)
- Decision Stump
- Random Forest
- Multivariate Adaptive Regression Splines (MARS)
- Gradient Boosting Machines (GBM)
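A minimal sketch of a Decision Stump, the simplest tree: a single fork on an actual attribute value, found by searching every feature and threshold for the split with the fewest misclassifications. The data is illustrative:

import numpy as np

def fit_stump(X, y):
    best = (np.inf, None, None)               # (errors, feature, threshold)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)  # fork: predict 1 above the threshold
            errors = np.sum(pred != y)
            if errors < best[0]:
                best = (errors, f, t)
    return best

X = np.array([[1.0, 5.0], [2.0, 4.0], [6.0, 1.0], [7.0, 2.0]])
y = np.array([0, 0, 1, 1])
print(fit_stump(X, y))   # splits cleanly on feature 0 at threshold 2.0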
Bayesian
Bayesian methods are those that explicitly apply Bayes' Theorem to problems such as classification and regression; a naive Bayes sketch follows the list.
- Naive Bayes
- Averaged One-Dependence Estimators (AODE)
- Bayesian Belief Network (BBN)
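A minimal sketch of Naive Bayes: apply Bayes' Theorem with the "naive" assumption that features are independent given the class. The Gaussian likelihoods and toy data are illustrative choices:

import numpy as np

def fit(X, y):
    # Per class: prior, feature means, feature variances (epsilon for stability).
    return {c: (np.mean(y == c), X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-9)
            for c in np.unique(y)}

def predict(params, x):
    scores = {}
    for c, (prior, mu, var) in params.items():
        # log P(c) + sum_i log P(x_i | c), by the independence assumption.
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + log_lik
    return max(scores, key=scores.get)

X = np.array([[1.0, 2.0], [1.2, 1.9], [4.0, 5.0], [4.2, 5.1]])
y = np.array([0, 0, 1, 1])
print(predict(fit(X, y), np.array([4.1, 5.0])))   # -> 1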
Kernel Methods
Kernel Methods are best known for the popular method Support Vector Machines, which is really a constellation of methods in and of itself. Kernel Methods are concerned with mapping input data into a higher-dimensional vector space where some classification or regression problems are easier to model. An RBF kernel sketch follows the list.
- Support Vector Machines (SVM)
- Radial Basis Function (RBF)
- Linear Discriminant Analysis (LDA)
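A minimal sketch of the kernel idea: an RBF (Gaussian) kernel scores similarity between points as if they had been mapped into a much higher-dimensional space, without ever computing that mapping explicitly. The gamma value and data are illustrative:

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2) for every pair (x, z).
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(rbf_kernel(X, X).round(3))   # nearby points near 1, distant points near 0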
Clustering Methods
Clustering, like regression, describes both the class of problem and the class of methods. Clustering methods are typically organized by their modelling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality. A k-Means sketch follows the list.
- k-Means
- Expectation Maximisation (EM)
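A minimal sketch of k-Means, a centroid-based method: assign each point to its nearest centroid, move each centroid to the mean of its group, and repeat. The value of k, iteration count, and data are illustrative:

import numpy as np

def kmeans(X, k=2, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)    # assignment step
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels, centroids = kmeans(X)
print(labels, centroids.round(2))   # two tight groups recovered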
Association Rule Learning
Association rule learning methods extract rules that best explain observed relationships between variables in data. These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation. The Apriori idea is sketched after the list.
- Apriori algorithm
- Eclat algorithm
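A minimal sketch of the Apriori idea: an itemset can only be frequent if all of its subsets are frequent, so candidates are grown level by level and pruned by a support threshold. The transactions and threshold are illustrative:

from itertools import combinations

# Illustrative shopping-basket transactions and minimum support count.
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]
min_support = 2

def support(itemset):
    return sum(itemset <= t for t in transactions)

items = sorted({i for t in transactions for i in t})
frequent = [frozenset([i]) for i in items if support({i}) >= min_support]
size = 2
while frequent:
    print([(set(s), support(s)) for s in frequent])
    # Candidates: unions of frequent sets one level down, of the right size.
    candidates = {a | b for a, b in combinations(frequent, 2) if len(a | b) == size}
    frequent = [c for c in candidates if support(c) >= min_support]
    size += 1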
Artificial Neural Networks
Artificial Neural Networks are models inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types. Some of the classically popular methods include (I have separated Deep Learning from this category; a perceptron sketch follows the list):
- Perceptron
- Back-Propagation
- Hopfield Network
- Self-Organizing Map (SOM)
- Learning Vector Quantization (LVQ)
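A minimal sketch of the Perceptron, the classic single-neuron network: when a prediction is wrong, nudge the weights toward the example. The logical-AND data and learning rate are illustrative:

import numpy as np

# Perceptron learning logical AND (linearly separable, so it converges).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])
w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)   # threshold activation
        w += lr * (yi - pred) * xi   # updates only on mistakes
        b += lr * (yi - pred)

print([int(xi @ w + b > 0) for xi in X])   # -> [0, 0, 0, 1]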
Deep Learning
Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation. They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data. An auto-encoder sketch follows the list.
- Restricted Boltzmann Machine (RBM)
- Deep Belief Networks (DBN)
- Convolutional Network
- Stacked Auto-encoders
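A minimal sketch of a single auto-encoder, one building block of the stacked methods above: compress the input to a narrow hidden layer, reconstruct it, and train on reconstruction error alone, with no labels needed. The linear layers, synthetic rank-2 data, and learning rate are illustrative; real deep networks stack many nonlinear layers:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 4))   # 4-D data on a 2-D plane

W_enc = rng.normal(scale=0.1, size=(4, 2))   # 4 inputs -> 2 hidden units
W_dec = rng.normal(scale=0.1, size=(2, 4))   # 2 hidden -> 4 reconstructed
lr = 0.01
for _ in range(500):
    H = X @ W_enc                  # encode
    err = H @ W_dec - X            # reconstruction error
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean(((X @ W_enc) @ W_dec - X) ** 2)
print(f"reconstruction error: {mse:.4f}")   # small: 2 hidden units suffice here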
Dimensionality Reduction
Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe the data using less information. This can be useful for visualizing high-dimensional data or for simplifying data that can then be used in a supervised learning method. A PCA sketch follows the list.
- Principal Component Analysis (PCA)
- Partial Least Squares Regression (PLS)
- Sammon Mapping
- Multidimensional Scaling (MDS)
- Projection Pursuit
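A minimal sketch of Principal Component Analysis: find the orthogonal directions of greatest variance and keep only the top few, summarising the data with less information. The synthetic data and the number of components kept are illustrative:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0, 0.5]])   # ~1-D structure
X += 0.05 * rng.normal(size=X.shape)                          # small noise

Xc = X - X.mean(axis=0)                 # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:1].T               # project onto the top component
print(S.round(2))        # first singular value dominates
print(X_reduced.shape)   # (100, 1): 3 features summarised by 1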
Ensemble Methods
Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular. A bagging sketch follows the list.
- Boosting
- Bootstrapped Aggregation (Bagging)
- AdaBoost
- Stacked Generalization (blending)
- Gradient Boosting Machines (GBM)
- Random Forest
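A minimal sketch of Bootstrapped Aggregation (bagging): train several weak models on random resamples of the data and combine their predictions by majority vote. The weak learner here is a one-feature threshold rule, and the data, resample count, and helper names are all illustrative:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(int)

def best_threshold(X, y):
    # Weak learner: predict 1 above the threshold with fewest errors.
    thresholds = np.unique(X[:, 0])
    errs = [np.mean((X[:, 0] > t) != y) for t in thresholds]
    return thresholds[int(np.argmin(errs))]

stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))    # bootstrap resample, with replacement
    stumps.append(best_threshold(X[idx], y[idx]))

votes = np.mean([(X[:, 0] > t).astype(int) for t in stumps], axis=0)
ensemble_pred = (votes > 0.5).astype(int)    # majority vote across models
print(f"ensemble accuracy: {np.mean(ensemble_pred == y):.2f}")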