| ▼Nboost | Specializations that set the serialization versions of mlpack classes |
| ►Nserialization | |
| Cversion< mlpack::adaboost::AdaBoost< WeakLearnerType, MatType > > | |
| Cversion< mlpack::ann::AddMerge< InputDataType, OutputDataType, CustomLayers... > > | |
| Cversion< mlpack::ann::AtrousConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > > | |
| Cversion< mlpack::ann::BRNN< OutputLayerType, MergeLayerType, MergeOutputType, InitializationRuleType, CustomLayer... > > | |
| Cversion< mlpack::ann::Convolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > > | |
| Cversion< mlpack::ann::FFN< OutputLayerType, InitializationRuleType, CustomLayer... > > | |
| Cversion< mlpack::ann::RNN< OutputLayerType, InitializationRuleType, CustomLayer... > > | |
| Cversion< mlpack::ann::Sequential< InputDataType, OutputDataType, Residual, CustomLayers... > > | |
| Cversion< mlpack::ann::TransposedConvolution< ForwardConvolutionRule, BackwardConvolutionRule, GradientConvolutionRule, InputDataType, OutputDataType > > | |
| Cversion< mlpack::kde::KDE< KernelType, MetricType, MatType, TreeType, DualTreeTraversalType, SingleTreeTraversalType > > | |
| ▼Nmlpack | Linear algebra utility functions, generally performed on matrices or vectors |
| ►Nadaboost | |
| CAdaBoost | The AdaBoost class |
| CAdaBoostModel | The model to save to disk |
| ►Namf | Alternating Matrix Factorization |
| CAMF | This class implements AMF (alternating matrix factorization) on the given matrix V |
| CAverageInitialization | This initialization rule initializes matrices W and H to the square root of the average of V, perturbed with uniform noise |
| CCompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDCompleteIncrementalLearning |
| CGivenInitialization | This initialization rule for AMF simply fills the W and H matrices with the matrices given to the constructor of this object |
| CIncompleteIncrementalTermination | This class acts as a wrapper for basic termination policies to be used by SVDIncompleteIncrementalLearning |
| CMaxIterationTermination | This termination policy only terminates when the maximum number of iterations has been reached |
| CMergeInitialization | This initialization rule for AMF simply takes in two initialization rules, and initializes W with the first rule and H with the second |
| CNMFALSUpdate | This class implements a method titled 'Alternating Least Squares' |
| CNMFMultiplicativeDistanceUpdate | The multiplicative distance update rules for matrices W and H |
| CNMFMultiplicativeDivergenceUpdate | This follows a method described in the paper 'Algorithms for Non-negative Matrix Factorization' |
| CRandomAcolInitialization | This class initializes the W matrix of the AMF algorithm by averaging p randomly chosen columns of V |
| CRandomInitialization | This initialization rule for AMF simply fills the W and H matrices with uniform random noise in [0, 1] |
| CSimpleResidueTermination | This class implements a simple residue-based termination policy |
| CSimpleToleranceTermination | This class implements a residue-tolerance termination policy |
| CSVDBatchLearning | This class implements SVD batch learning with momentum |
| CSVDCompleteIncrementalLearning | This class computes SVD using complete incremental batch learning |
| CSVDCompleteIncrementalLearning< arma::sp_mat > | Template specialization for sparse matrices (TODO: merge with the general implementation using a common row_col_iterator) |
| CSVDIncompleteIncrementalLearning | This class computes SVD using incomplete incremental batch learning |
| CValidationRMSETermination | This class implements a validation termination policy based on the RMSE index |
| ►Nann | Artificial Neural Network |
| ►Naugmented | |
| ►Ntasks | |
| CAddTask | Generator of instances of the binary addition task |
| CCopyTask | Generator of instances of the binary sequence copy task |
| CSortTask | Generator of instances of the sequence sort task |
| CAdaptiveMaxPooling | Implementation of the AdaptiveMaxPooling layer |
| CAdaptiveMeanPooling | Implementation of the AdaptiveMeanPooling layer |
| CAdd | Implementation of the Add module class |
| CAddMerge | Implementation of the AddMerge module class |
| CAddVisitor | AddVisitor exposes the Add() method of the given module |
| CAlphaDropout | The alpha-dropout layer is a regularizer that randomly, with probability 'ratio', sets input values to alphaDash |
| CAtrousConvolution | Implementation of the Atrous Convolution class |
| CBackwardVisitor | BackwardVisitor executes the Backward() function given the input, error and delta parameter |
| CBaseLayer | Implementation of the base layer |
| CBatchNorm | Declaration of the Batch Normalization layer class |
| CBernoulliDistribution | Multiple independent Bernoulli distributions |
| CBiasSetVisitor | BiasSetVisitor updates the module bias parameters given the parameters set |
| CBilinearInterpolation | Definition and Implementation of the Bilinear Interpolation Layer |
| CBinaryRBM | Policy type for an RBM with binary visible and hidden layers; for more information, see the paper cited in the class documentation |
| CBRNN | Implementation of a standard bidirectional recurrent neural network container |
| CCELU | The CELU activation function, defined by |
| CConcat | Implementation of the Concat class |
| CConcatenate | Implementation of the Concatenate module class |
| CConcatPerformance | Implementation of the concat performance class |
| CConstant | Implementation of the constant layer |
| CConstInitialization | This class is used to initialize weight matrix with constant values |
| CConvolution | Implementation of the Convolution class |
| CCopyVisitor | This visitor supports the copy constructor of neural network modules |
| CCosineEmbeddingLoss | Cosine Embedding Loss function is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning |
| CCReLU | A concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together |
| CCrossEntropyError | The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions |
| CDCGAN | Policy for the Deep Convolutional GAN (DCGAN); for more information, see the paper cited in the class documentation |
| CDeleteVisitor | DeleteVisitor executes the destructor of the instantiated object |
| CDeltaVisitor | DeltaVisitor exposes the delta parameter of the given module |
| CDeterministicSetVisitor | DeterministicSetVisitor sets the deterministic parameter given the deterministic value |
| CDiceLoss | The dice loss performance function measures the network's performance according to the dice coefficient between the input and target distributions |
| CDropConnect | The DropConnect layer is a regularizer that randomly sets connection values to zero with probability 'ratio' and scales the remaining elements by a factor of 1 / (1 - ratio) |
| CDropout | The dropout layer is a regularizer that randomly sets input values to zero with probability 'ratio' and scales the remaining elements by a factor of 1 / (1 - ratio) during training rather than at test time, so as to keep the expected sum the same |
| CEarthMoverDistance | The earth mover distance function measures the network's performance according to the Kantorovich-Rubinstein duality approximation |
| CElishFunction | The ELiSH function, defined by |
| CElliotFunction | The Elliot function, defined by |
| CELU | The ELU activation function, defined by |
| CEmptyLoss | The empty loss does nothing, letting the user calculate the loss outside the model |
| CFastLSTM | An implementation of a faster version of the LSTM network layer |
| CFFN | Implementation of a standard feed forward network |
| CFFTConvolution | Computes the two-dimensional convolution through the fast Fourier transform (FFT) |
| CFlexibleReLU | The FlexibleReLU activation function, defined by |
| CForwardVisitor | ForwardVisitor executes the Forward() function given the input and output parameter |
| CFullConvolution | |
| CGAN | The implementation of the standard GAN module |
| CGaussianFunction | The Gaussian function, defined by |
| CGaussianInitialization | This class is used to initialize the weight matrix with a Gaussian distribution |
| CGELUFunction | The GELU function, defined by |
| CGlimpse | The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image |
| CGlorotInitializationType | This class is used to initialize the weight matrix with the Glorot Initialization method |
| CGradientSetVisitor | GradientSetVisitor updates the gradient parameter given the gradient set |
| CGradientUpdateVisitor | GradientUpdateVisitor updates the gradient parameter given the gradient set |
| CGradientVisitor | GradientVisitor executes the Gradient() method of the given module using the input and delta parameters |
| CGradientZeroVisitor | |
| CGRU | An implementation of a GRU network layer |
| CHardShrink | Hard Shrink operator is defined as, |
| CHardSigmoidFunction | The hard sigmoid function, defined by |
| CHardTanH | The Hard Tanh activation function, defined by |
| CHeInitialization | This class is used to initialize the weight matrix with the He initialization rule given by He et al. |
| CHighway | Implementation of the Highway layer |
| CHingeEmbeddingLoss | The Hinge Embedding loss function is often used to compute the loss between y_true and y_pred |
| CHuberLoss | The Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss |
| CIdentityFunction | The identity function, defined by |
| CInitTraits | This is a template class that can provide information about various initialization methods |
| CInitTraits< KathirvalavakumarSubavathiInitialization > | Initialization traits of the Kathirvalavakumar-Subavathi initialization rule |
| CInitTraits< NguyenWidrowInitialization > | Initialization traits of the Nguyen-Widrow initialization rule |
| CInvQuadFunction | The Inverse Quadratic function, defined by |
| CJoin | Implementation of the Join module class |
| CKathirvalavakumarSubavathiInitialization | This class is used to initialize the weight matrix with the method proposed by Kathirvalavakumar and Subavathi |
| CKLDivergence | The Kullback–Leibler divergence is often used for continuous distributions (direct regression) |
| CL1Loss | The L1 loss is a loss function that measures the mean absolute error (MAE) between each element in the input x and target y |
| CLayerNorm | Declaration of the Layer Normalization class |
| CLayerTraits | This is a template class that can provide information about various layers |
| CLeakyReLU | The LeakyReLU activation function, defined by |
| CLecunNormalInitialization | This class is used to initialize weight matrix with the Lecun Normalization initialization rule |
| CLinear | Implementation of the Linear layer class |
| CLinear3D | Implementation of the Linear3D layer class |
| CLinearNoBias | Implementation of the LinearNoBias class |
| CLiSHTFunction | The LiSHT function, defined by |
| CLoadOutputParameterVisitor | LoadOutputParameterVisitor restores the output parameter using the given parameter set |
| CLogCoshLoss | The Log-Hyperbolic-Cosine loss function is often used to improve variational autoencoders |
| CLogisticFunction | The logistic function, defined by |
| CLogSoftMax | Implementation of the log softmax layer |
| CLookup | The Lookup class stores word embeddings and retrieves them using tokens |
| CLossVisitor | LossVisitor exposes the Loss() method of the given module |
| CLRegularizer | The L_p regularizer for arbitrary integer p |
| CLSTM | Implementation of the LSTM module class |
| CMarginRankingLoss | Margin ranking loss measures the loss given inputs and a label vector with values of 1 or -1 |
| CMaxPooling | Implementation of the MaxPooling layer |
| CMaxPoolingRule | |
| CMeanAbsolutePercentageError | The mean absolute percentage error performance function measures the network's performance according to the mean of the absolute difference between input and target divided by target |
| CMeanBiasError | The mean bias error performance function measures the network's performance according to the mean of errors |
| CMeanPooling | Implementation of the MeanPooling layer |
| CMeanPoolingRule | |
| CMeanSquaredError | The mean squared error performance function measures the network's performance according to the mean of squared errors |
| CMeanSquaredLogarithmicError | The mean squared logarithmic error performance function measures the network's performance according to the mean of squared logarithmic errors |
| CMiniBatchDiscrimination | Implementation of the MiniBatchDiscrimination layer |
| CMishFunction | The Mish function, defined by |
| CMultiheadAttention | Multihead Attention allows the model to jointly attend to information from different representation subspaces at different positions |
| CMultiplyConstant | Implementation of the multiply constant layer |
| CMultiplyMerge | Implementation of the MultiplyMerge module class |
| CMultiQuadFunction | The Multi Quadratic function, defined by |
| CNaiveConvolution | Computes the two-dimensional convolution |
| CNegativeLogLikelihood | Implementation of the negative log likelihood layer |
| CNetworkInitialization | This class is used to initialize the network with the given initialization rule |
| CNguyenWidrowInitialization | This class is used to initialize the weight matrix with the Nguyen-Widrow method |
| CNoisyLinear | Implementation of the NoisyLinear layer class |
| CNoRegularizer | Implementation of the NoRegularizer |
| CNormalDistribution | Implementation of the Normal Distribution function |
| COivsInitialization | This class is used to initialize the weight matrix with the OIVS method |
| COrthogonalInitialization | This class is used to initialize the weight matrix with the orthogonal matrix initialization |
| COrthogonalRegularizer | Implementation of the OrthogonalRegularizer |
| COutputHeightVisitor | OutputHeightVisitor exposes the OutputHeight() method of the given module |
| COutputParameterVisitor | OutputParameterVisitor exposes the output parameter of the given module |
| COutputWidthVisitor | OutputWidthVisitor exposes the OutputWidth() method of the given module |
| CPadding | Implementation of the Padding module class |
| CParametersSetVisitor | ParametersSetVisitor updates the parameters set using the given matrix |
| CParametersVisitor | ParametersVisitor exposes the parameters set of the given module and stores the parameters set into the given matrix |
| CPoisson1Function | The Poisson one function, defined by |
| CPoissonNLLLoss | Implementation of the Poisson negative log likelihood loss |
| CPositionalEncoding | Positional Encoding injects some information about the relative or absolute position of the tokens in the sequence |
| CPReLU | The PReLU activation function, defined by (where alpha is trainable) |
| CQuadraticFunction | The Quadratic function, defined by |
| CRandomInitialization | This class is used to randomly initialize the weight matrix |
| CRBF | Implementation of the Radial Basis Function layer |
| CRBM | The implementation of the RBM module |
| CReconstructionLoss | The reconstruction loss performance function measures the network's performance as the negative log probability of the target under the input distribution |
| CRectifierFunction | The rectifier function, defined by |
| CRecurrent | Implementation of the Recurrent layer class |
| CRecurrentAttention | This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations |
| CReinforceNormal | Implementation of the reinforce normal layer |
| CReparametrization | Implementation of the Reparametrization layer class |
| CResetCellVisitor | ResetCellVisitor executes the ResetCell() function |
| CResetVisitor | ResetVisitor executes the Reset() function |
| CRewardSetVisitor | RewardSetVisitor sets the reward parameter given the reward value |
| CRNN | Implementation of a standard recurrent neural network container |
| CRunSetVisitor | RunSetVisitor sets the run parameter given the run value |
| CSaveOutputParameterVisitor | SaveOutputParameterVisitor saves the output parameter into the given parameter set |
| CSelect | The select module selects the specified column from a given input matrix |
| CSequential | Implementation of the Sequential class |
| CSetInputHeightVisitor | SetInputHeightVisitor updates the input height parameter with the given input height |
| CSetInputWidthVisitor | SetInputWidthVisitor updates the input width parameter with the given input width |
| CSigmoidCrossEntropyError | The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions |
| CSoftMarginLoss | |
| CSoftmax | Implementation of the Softmax layer |
| CSoftmin | Implementation of the Softmin layer |
| CSoftplusFunction | The softplus function, defined by |
| CSoftShrink | Soft Shrink operator is defined as, |
| CSoftsignFunction | The softsign function, defined by |
| CSpatialDropout | Implementation of the SpatialDropout layer |
| CSpikeSlabRBM | Policy type for a spike-and-slab RBM; for more information, see the paper cited in the class documentation |
| CSplineFunction | The Spline function, defined by |
| CStandardGAN | Policy for the standard GAN; for more information, see the paper cited in the class documentation |
| CSubview | Implementation of the subview layer |
| CSVDConvolution | Computes the two-dimensional convolution using singular value decomposition |
| CSwishFunction | The swish function, defined by |
| CTanhFunction | The tanh function, defined by |
| CTransposedConvolution | Implementation of the Transposed Convolution class |
| CValidConvolution | |
| CVirtualBatchNorm | Declaration of the VirtualBatchNorm layer class |
| CVRClassReward | Implementation of the variance reduced classification reinforcement layer |
| CWeightNorm | Declaration of the WeightNorm layer class |
| CWeightSetVisitor | WeightSetVisitor updates the module parameters given the parameters set |
| CWeightSizeVisitor | WeightSizeVisitor returns the number of weights of the given module |
| CWGAN | Policy for the Wasserstein GAN (WGAN); for more information, see the paper cited in the class documentation |
| CWGANGP | Policy for the Wasserstein GAN with gradient penalty (WGAN-GP); for more information, see the paper cited in the class documentation |
| ►Nbindings | |
| ►Ncli | |
| CCLIOption | A static object whose constructor registers a parameter with the IO class |
| CParameterType | Utility struct to return the type that CLI11 should accept for a given input type |
| CParameterType< arma::Col< eT > > | For vector types, CLI11 will accept a std::string, not an arma::Col<eT> (since it is not clear how to specify a vector on the command-line) |
| CParameterType< arma::Mat< eT > > | For matrix types, CLI11 will accept a std::string, not an arma::mat (since it is not clear how to specify a matrix on the command-line) |
| CParameterType< arma::Row< eT > > | For row vector types, CLI11 will accept a std::string, not an arma::Row<eT> (since it is not clear how to specify a vector on the command-line) |
| CParameterType< std::tuple< mlpack::data::DatasetMapper< PolicyType, std::string >, arma::Mat< eT > > > | For matrix+dataset info types, we should accept a std::string |
| CParameterTypeDeducer | |
| CParameterTypeDeducer< true, T > | |
| ►Ngo | |
| CGoOption | The Go option class |
| ►Njulia | |
| CJuliaOption | The Julia option class |
| ►Nmarkdown | |
| CBindingInfo | Used by the Markdown documentation generator to store multiple documentation objects, indexed by both the binding name and the language |
| CExampleWrapper | |
| CLongDescriptionWrapper | |
| CMDOption | The Markdown option class |
| CProgramNameWrapper | |
| CSeeAlsoWrapper | |
| CShortDescriptionWrapper | |
| ►Npython | |
| CPyOption | The Python option class |
| ►Nr | |
| CROption | The R option class |
| ►Ntests | |
| CTestOption | A static object whose constructor registers a parameter with the IO class |
| ►Nbound | |
| ►Nmeta | Metaprogramming utilities |
| CIsLMetric | Utility struct where Value is true if and only if the argument is of type LMetric |
| CIsLMetric< metric::LMetric< Power, TakeRoot > > | Specialization for IsLMetric when the argument is of type LMetric |
| CBallBound | Ball bound encloses a set of points at a specific distance (radius) from a specific point (center) |
| CBoundTraits | A class to obtain compile-time traits about BoundType classes |
| CBoundTraits< BallBound< MetricType, VecType > > | A specialization of BoundTraits for this bound type |
| CBoundTraits< CellBound< MetricType, ElemType > > | |
| CBoundTraits< HollowBallBound< MetricType, ElemType > > | A specialization of BoundTraits for this bound type |
| CBoundTraits< HRectBound< MetricType, ElemType > > | |
| CCellBound | The CellBound class describes a bound that consists of a number of hyperrectangles |
| CHollowBallBound | Hollow ball bound encloses a set of points at a specific distance (radius) from a specific point (center) except points at a specific distance from another point (the center of the hole) |
| CHRectBound | Hyper-rectangle bound for an L-metric |
| ►Ncf | Collaborative filtering |
| CAverageInterpolation | This class performs average interpolation to generate interpolation weights for neighborhood-based collaborative filtering |
| CBatchSVDPolicy | Implementation of the Batch SVD policy to act as a wrapper when accessing Batch SVD from within CFType |
| CBiasSVDPolicy | Implementation of the Bias SVD policy to act as a wrapper when accessing Bias SVD from within CFType |
| CCFModel | The model to save to disk |
| CCFType | This class implements Collaborative Filtering (CF) |
| CCombinedNormalization | This normalization class performs a sequence of normalization methods on raw ratings |
| CCosineSearch | Nearest neighbor search with cosine distance |
| CDeleteVisitor | DeleteVisitor deletes the CFType<> object which is pointed to by the variable cf in class CFModel |
| CDummyClass | This class acts as a dummy class for passing as a template parameter |
| CGetValueVisitor | GetValueVisitor returns the pointer which points to the CFType object |
| CItemMeanNormalization | This normalization class performs item mean normalization on raw ratings |
| CLMetricSearch | Nearest neighbor search with L_p distance |
| CNMFPolicy | Implementation of the NMF policy to act as a wrapper when accessing NMF from within CFType |
| CNoNormalization | This normalization class doesn't perform any normalization |
| COverallMeanNormalization | This normalization class performs overall mean normalization on raw ratings |
| CPearsonSearch | Nearest neighbor search with Pearson distance (or furthest neighbor search with Pearson correlation) |
| CPredictVisitor | PredictVisitor uses the CFType object to make predictions on the given combinations of users and items |
| CRandomizedSVDPolicy | Implementation of the Randomized SVD policy to act as a wrapper when accessing Randomized SVD from within CFType |
| CRecommendationVisitor | RecommendationVisitor uses the CFType object to get recommendations for the given users |
| CRegressionInterpolation | Implementation of regression-based interpolation method |
| CRegSVDPolicy | Implementation of the Regularized SVD policy to act as a wrapper when accessing Regularized SVD from within CFType |
| CSimilarityInterpolation | With SimilarityInterpolation, interpolation weights are based on similarities between query user and its neighbors |
| CSVDCompletePolicy | Implementation of the SVD complete incremental policy to act as a wrapper when accessing SVD complete decomposition from within CFType |
| CSVDIncompletePolicy | Implementation of the SVD incomplete incremental policy to act as a wrapper when accessing SVD incomplete incremental from within CFType |
| CSVDPlusPlusPolicy | Implementation of the SVDPlusPlus policy to act as a wrapper when accessing SVDPlusPlus from within CFType |
| CSVDWrapper | This class acts as the wrapper for all SVD factorizers which are incompatible with CF module |
| CUserMeanNormalization | This normalization class performs user mean normalization on raw ratings |
| CZScoreNormalization | This normalization class performs z-score normalization on raw ratings |
| ►Ncv | |
| CAccuracy | The Accuracy is a metric of performance for classification algorithms that is equal to the proportion of correctly labeled test items among all given test items |
| CCVBase | An auxiliary class for cross-validation |
| CF1 | F1 is a metric of performance for classification algorithms that for binary classification is equal to 2 * precision * recall / (precision + recall) |
| CKFoldCV | The class KFoldCV implements k-fold cross-validation for regression and classification algorithms |
| CMetaInfoExtractor | MetaInfoExtractor is a tool for extracting meta information about a given machine learning algorithm |
| CMSE | The MeanSquaredError is a metric of performance for regression algorithms that is equal to the mean squared error between predicted values and ground truth (correct) values for given test items |
| CNotFoundMethodForm | |
| CPrecision | Precision is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fp), where tp and fp are the numbers of true positives and false positives respectively |
| CR2Score | The R2 Score is a metric of performance for regression algorithms that represents the proportion of variance in the dependent variable (here, y) that has been explained by the independent variables in the model |
| CRecall | Recall is a metric of performance for classification algorithms that for binary classification is equal to tp / (tp + fn), where tp and fn are the numbers of true positives and false negatives respectively |
| CSelectMethodForm | A type function that selects the right method form |
| ►CSelectMethodForm< MLAlgorithm > | |
| CFrom | |
| ►CSelectMethodForm< MLAlgorithm, HasMethodForm, HMFs... > | |
| CFrom | |
| CSilhouetteScore | The Silhouette Score is a metric of performance for clustering that represents the quality of the resulting clusters |
| CSimpleCV | SimpleCV splits data into two sets (training and validation), trains on the training set, and evaluates performance on the validation set |
| CTrainForm | A wrapper struct for holding a Train form |
| CTrainForm< MT, PT, void, false, false > | |
| CTrainForm< MT, PT, void, false, true > | |
| CTrainForm< MT, PT, void, true, false > | |
| CTrainForm< MT, PT, void, true, true > | |
| CTrainForm< MT, PT, WT, false, false > | |
| CTrainForm< MT, PT, WT, false, true > | |
| CTrainForm< MT, PT, WT, true, false > | |
| CTrainForm< MT, PT, WT, true, true > | |
| CTrainFormBase4 | |
| CTrainFormBase5 | |
| CTrainFormBase6 | |
| CTrainFormBase7 | |
| ►Ndata | Functions to load and save matrices and models |
| CBagOfWordsEncodingPolicy | Definition of the BagOfWordsEncodingPolicy class |
| CCharExtract | The class is used to split a string into characters |
| CCustomImputation | A simple custom imputation class |
| CDatasetMapper | Auxiliary information for a dataset, including mappings to/from strings (or other types) and the datatype of each dimension |
| CDictionaryEncodingPolicy | DictionaryEncodingPolicy is used as a helper class for StringEncoding |
| ►CHasSerialize | |
| Ccheck | |
| CHasSerializeFunction | |
| CImageInfo | Implements meta-data of images required by data::Load and data::Save for loading and saving images into arma::Mat |
| CImputer | Given a dataset of a particular datatype, replace user-specified missing value with a variable dependent on the StrategyType and MapperType |
| CIncrementPolicy | IncrementPolicy is used as a helper class for DatasetMapper |
| CListwiseDeletion | A complete-case analysis to remove the values containing mappedValue |
| CLoadCSV | Loads a CSV file. This class uses boost::spirit to implement the parser; refer to http://theboostcpplibraries.com/boost.spirit for a quick review |
| CMaxAbsScaler | A simple MaxAbs Scaler class |
| CMeanImputation | A simple mean imputation class |
| CMeanNormalization | A simple Mean Normalization class |
| CMedianImputation | This is a class implementation of simple median imputation |
| CMinMaxScaler | A simple MinMax Scaler class |
| CMissingPolicy | MissingPolicy is used as a helper class for DatasetMapper |
| CPCAWhitening | A simple PCAWhitening class |
| CScalingModel | The model to save to disk |
| CSplitByAnyOf | Tokenizes a string using a set of delimiters |
| CStandardScaler | A simple Standard Scaler class |
| CStringEncoding | The class translates a set of strings into numbers using various encoding algorithms |
| CStringEncodingDictionary | This class provides a dictionary interface for the purpose of string encoding |
| CStringEncodingDictionary< boost::string_view > | |
| CStringEncodingDictionary< int > | |
| CStringEncodingPolicyTraits | This is a template struct that provides some information about various encoding policies |
| CStringEncodingPolicyTraits< DictionaryEncodingPolicy > | The specialization provides some information about the dictionary encoding policy |
| CTfIdfEncodingPolicy | Definition of the TfIdfEncodingPolicy class |
| CZCAWhitening | A simple ZCAWhitening class |
| ►Ndbscan | |
| CDBSCAN | DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering technique |
| COrderedPointSelection | This class can be used to sequentially select the next point to use for DBSCAN |
| CRandomPointSelection | This class can be used to randomly select the next point to use for DBSCAN |
| ►Ndecision_stump | |
| CDecisionStump | This class implements a decision stump |
| ►Ndet | Density Estimation Trees |
| CDTree | A density estimation tree is similar to both a decision tree and a space partitioning tree (like a kd-tree) |
| CPathCacher | This class is responsible for caching the path to each node of the tree |
| ►Ndistribution | Probability distributions |
| CDiagonalGaussianDistribution | A single multivariate Gaussian distribution with diagonal covariance |
| CDiscreteDistribution | A discrete distribution where the only observations are discrete observations |
| CGammaDistribution | This class represents the Gamma distribution |
| CGaussianDistribution | A single multivariate Gaussian distribution |
| CLaplaceDistribution | The multivariate Laplace distribution centered at 0 has pdf |
| CRegressionDistribution | A class that represents a univariate conditionally Gaussian distribution |
| ►Nemst | Euclidean Minimum Spanning Trees |
| CDTBRules | |
| CDTBStat | A statistic for use with mlpack trees, which stores the upper bound on distance to nearest neighbors and the component which this node belongs to |
| CDualTreeBoruvka | Performs the MST calculation using the Dual-Tree Boruvka algorithm, using any type of tree |
| CEdgePair | An edge pair is simply two indices and a distance |
| CUnionFind | A Union-Find data structure |
| ►Nfastmks | Fast max-kernel search |
| CFastMKS | An implementation of fast exact max-kernel search |
| CFastMKSModel | A utility struct to contain all the possible FastMKS models, for use by the mlpack_fastmks program |
| CFastMKSRules | The FastMKSRules class is a template helper class used by FastMKS class when performing exact max-kernel search |
| CFastMKSStat | The statistic used in trees with FastMKS |
| ►Ngmm | Gaussian Mixture Models |
| CDiagonalConstraint | Force a covariance matrix to be diagonal |
| CDiagonalGMM | A Diagonal Gaussian Mixture Model |
| CEigenvalueRatioConstraint | Given a vector of eigenvalue ratios, ensure that the covariance matrix always has those eigenvalue ratios |
| CEMFit | This class contains methods which can fit a GMM to observations using the EM algorithm |
| CGMM | A Gaussian Mixture Model (GMM) |
| CNoConstraint | This class enforces no constraint on the covariance matrix |
| CPositiveDefiniteConstraint | Given a covariance matrix, force the matrix to be positive definite |
| ►Nhmm | Hidden Markov Models |
| CHMM | A class that represents a Hidden Markov Model with an arbitrary type of emission distribution |
| CHMMModel | A serializable HMM model that also stores the type |
| CHMMRegression | A class that represents a Hidden Markov Model Regression (HMMR) |
| ►Nhpt | |
| CCVFunction | This wrapper serves for adapting the interface of the cross-validation classes to the one that can be utilized by the mlpack optimizers |
| ►CDeduceHyperParameterTypes | A type function for deducing types of hyper-parameters from types of arguments in the Optimize method in HyperParameterTuner |
| CResultHolder | |
| ►CDeduceHyperParameterTypes< PreFixedArg< T >, Args... > | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one is the type of an argument that should be fixed |
| CResultHolder | |
| ►CDeduceHyperParameterTypes< T, Args... > | Defining DeduceHyperParameterTypes for the case when not all argument types have been processed, and the next one (T) is a collection type or an arithmetic type |
| CIsCollectionType | A type function to check whether Type is a collection type (for that it should define value_type) |
| CResultHolder | |
| CResultHPType | A type function to deduce the result hyper-parameter type for ArgumentType |
| CResultHPType< ArithmeticType, true > | |
| CResultHPType< CollectionType, false > | |
| CFixedArg | A struct for storing information about a fixed argument |
| CHyperParameterTuner | The class HyperParameterTuner for the given MLAlgorithm utilizes the provided Optimizer to find the values of hyper-parameters that optimize the value of the given Metric |
| CIsPreFixedArg | A type function for checking whether the given type is PreFixedArg |
| CPreFixedArg | A struct for marking arguments as ones that should be fixed (it can be useful for the Optimize method of HyperParameterTuner) |
| CPreFixedArg< T & > | The specialization of the template for references |
| ►Nkde | Kernel Density Estimation |
| CAbsErrorVisitor | AbsErrorVisitor modifies absolute error tolerance for a KDEType |
| CBandwidthVisitor | BandwidthVisitor modifies the bandwidth of a KDEType kernel |
| CDeleteVisitor | |
| CDualBiKDE | DualBiKDE computes a Kernel Density Estimation on the given KDEType |
| CDualMonoKDE | DualMonoKDE computes a Kernel Density Estimation on the given KDEType |
| CKDE | The KDE class is a template class for performing Kernel Density Estimations |
| CKDECleanRules | A dual-tree traversal Rules class for cleaning used trees before performing kernel density estimation |
| CKDEDefaultParams | KDEDefaultParams contains the default input parameter values for KDE |
| CKDEModel | |
| CKDERules | A dual-tree traversal Rules class for kernel density estimation |
| CKDEStat | Extra data for each node in the tree for the task of kernel density estimation |
| CKernelNormalizer | KernelNormalizer holds a set of methods to normalize estimations, applying in each case the appropriate kernel normalizer function |
| CMCBreakCoefVisitor | MCBreakCoefVisitor sets the Monte Carlo break coefficient |
| CMCEntryCoefVisitor | MCEntryCoefVisitor sets the Monte Carlo entry coefficient |
| CMCProbabilityVisitor | MCProbabilityVisitor sets the Monte Carlo probability for a given KDEType |
| CMCSampleSizeVisitor | MCSampleSizeVisitor sets the Monte Carlo initial sample size for a given KDEType |
| CModeVisitor | ModeVisitor exposes the Mode() method of the KDEType |
| CMonteCarloVisitor | MonteCarloVisitor activates or deactivates Monte Carlo for a given KDEType |
| CRelErrorVisitor | RelErrorVisitor modifies relative error tolerance for a KDEType |
| CTrainVisitor | TrainVisitor trains a given KDEType using a reference set |
| ►Nkernel | Kernel functions |
| CCauchyKernel | The Cauchy kernel |
| CCosineDistance | The cosine distance (or cosine similarity) |
| CEpanechnikovKernel | The Epanechnikov kernel |
| CExampleKernel | An example kernel function |
| CGaussianKernel | The standard Gaussian kernel |
| CHyperbolicTangentKernel | Hyperbolic tangent kernel |
| CKernelTraits | This is a template class that can provide information about various kernels |
| CKernelTraits< CauchyKernel > | Kernel traits for the Cauchy kernel |
| CKernelTraits< CosineDistance > | Kernel traits for the cosine distance |
| CKernelTraits< EpanechnikovKernel > | Kernel traits for the Epanechnikov kernel |
| CKernelTraits< GaussianKernel > | Kernel traits for the Gaussian kernel |
| CKernelTraits< LaplacianKernel > | Kernel traits of the Laplacian kernel |
| CKernelTraits< SphericalKernel > | Kernel traits for the spherical kernel |
| CKernelTraits< TriangularKernel > | Kernel traits for the triangular kernel |
| CKMeansSelection | Implementation of the kmeans sampling scheme |
| CLaplacianKernel | The standard Laplacian kernel |
| CLinearKernel | The simple linear kernel (dot product) |
| CNystroemMethod | |
| COrderedSelection | |
| CPolynomialKernel | The simple polynomial kernel |
| CPSpectrumStringKernel | The p-spectrum string kernel |
| CRandomSelection | |
| CSphericalKernel | The spherical kernel, which is 1 when the distance between the two argument points is less than or equal to the bandwidth, or 0 otherwise |
| CTriangularKernel | The trivially simple triangular kernel |
| ►Nkmeans | K-Means clustering |
| CAllowEmptyClusters | Policy which allows K-Means to create empty clusters without any error being reported |
| CDualTreeKMeans | An algorithm for an exact Lloyd iteration which simply uses dual-tree nearest-neighbor search to find the nearest centroid for each point in the dataset |
| CDualTreeKMeansRules | |
| CDualTreeKMeansStatistic | |
| CElkanKMeans | |
| CHamerlyKMeans | |
| CKillEmptyClusters | Policy which allows K-Means to "kill" empty clusters without any error being reported |
| CKMeans | This class implements K-Means clustering, using a variety of possible implementations of Lloyd's algorithm |
| CMaxVarianceNewCluster | When an empty cluster is detected, this class takes the point furthest from the centroid of the cluster with maximum variance as a new cluster |
| CNaiveKMeans | This is an implementation of a single iteration of Lloyd's algorithm for k-means |
| CPellegMooreKMeans | An implementation of Pelleg-Moore's 'blacklist' algorithm for k-means clustering |
| CPellegMooreKMeansRules | The rules class for the single-tree Pelleg-Moore kd-tree traversal for k-means clustering |
| CPellegMooreKMeansStatistic | A statistic for trees which holds the blacklist for Pelleg-Moore k-means clustering (which represents the clusters that cannot possibly own any points in a node) |
| CRandomPartition | A very simple partitioner which partitions the data randomly into the number of desired clusters |
| CRefinedStart | A refined approach for choosing initial points for k-means clustering |
| CSampleInitialization | |
| ►Nkpca | |
| CKernelPCA | This class performs kernel principal components analysis (Kernel PCA), for a given kernel |
| CNaiveKernelRule | |
| CNystroemKernelRule | |
| ►Nlcc | |
| CLocalCoordinateCoding | An implementation of Local Coordinate Coding (LCC) that codes data which approximately lives on a manifold using a variation of l1-norm regularized sparse coding; in LCC, the penalty on the absolute value of each point's coefficient for each atom is weighted by the squared distance of that point to that atom |
| ►Nlmnn | Large Margin Nearest Neighbor |
| CConstraints | Interface for generating distance based constraints on a given dataset, provided corresponding true labels and a quantity parameter (k) are specified |
| CLMNN | An implementation of Large Margin nearest neighbor metric learning technique |
| CLMNNFunction | The Large Margin Nearest Neighbors function |
| ►Nmath | Miscellaneous math routines |
| CColumnsToBlocks | Transform the columns of the given matrix into a block format |
| CRangeType | Simple real-valued range |
| ►Nmatrix_completion | |
| CMatrixCompletion | This class implements the popular nuclear norm minimization heuristic for matrix completion problems |
| ►Nmeanshift | Mean shift clustering |
| CMeanShift | This class implements mean shift clustering |
| ►Nmetric | |
| CBLEU | BLEU, or the Bilingual Evaluation Understudy, is an algorithm for evaluating the quality of text which has been machine translated from one natural language to another |
| CIoU | Definition of Intersection over Union metric |
| CIPMetric | The inner product metric, IPMetric, takes a given Mercer kernel (KernelType), and when Evaluate() is called, returns the distance between the two points in kernel space |
| CLMetric | The L_p metric for arbitrary integer p, with an option to take the root |
| CMahalanobisDistance | The Mahalanobis distance, which is essentially a stretched Euclidean distance |
| CNMS | Definition of Non-Maximum Suppression |
| ►Nmvu | |
| CMVU | Meant to provide a good abstraction for users |
| ►Nnaive_bayes | The Naive Bayes Classifier |
| CNaiveBayesClassifier | The simple Naive Bayes classifier |
| ►Nnca | Neighborhood Components Analysis |
| CNCA | An implementation of Neighborhood Components Analysis, both a linear dimensionality reduction technique and a distance learning technique |
| CSoftmaxErrorFunction | The "softmax" stochastic neighbor assignment probability function |
| ►Nneighbor | |
| CAlphaVisitor | Exposes the Alpha() method of the given RAType |
| CBiSearchVisitor | BiSearchVisitor executes a bichromatic neighbor search on the given NSType |
| CDeleteVisitor | DeleteVisitor deletes the given NSType instance |
| CDrusillaSelect | |
| CEpsilonVisitor | EpsilonVisitor exposes the Epsilon method of the given NSType |
| CFirstLeafExactVisitor | Exposes the FirstLeafExact() method of the given RAType |
| CFurthestNS | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class |
| CLSHSearch | The LSHSearch class; this class builds a hash on the reference set and uses this hash to compute the distance-approximate nearest-neighbors of the given queries |
| CMonoSearchVisitor | MonoSearchVisitor executes a monochromatic neighbor search on the given NSType |
| CNaiveVisitor | NaiveVisitor exposes the Naive() method of the given RAType |
| CNearestNS | This class implements the necessary methods for the SortPolicy template parameter of the NeighborSearch class |
| CNeighborSearch | The NeighborSearch class is a template class for performing distance-based neighbor searches |
| ►CNeighborSearchRules | The NeighborSearchRules class is a template helper class used by NeighborSearch class when performing distance-based neighbor searches |
| CCandidateCmp | Compare two candidates based on the distance |
| CNeighborSearchStat | Extra data for each node in the tree |
| CNSModel | The NSModel class provides an easy way to serialize a model, abstracts away the different types of trees, and also reflects the NeighborSearch API |
| CQDAFN | |
| CRAModel | The RAModel class provides an abstraction for the RASearch class, abstracting away the TreeType parameter and allowing it to be specified at runtime in this class |
| CRAQueryStat | Extra data for each node in the tree |
| CRASearch | The RASearch class provides a generic way to perform rank-approximate search via random sampling |
| CRASearchRules | The RASearchRules class is a template helper class used by the RASearch class when performing rank-approximate search via random sampling |
| CRAUtil | |
| CReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given NSType |
| CSampleAtLeavesVisitor | Exposes the SampleAtLeaves() method of the given RAType |
| CSearchModeVisitor | SearchModeVisitor exposes the SearchMode() method of the given NSType |
| CSingleModeVisitor | Exposes the SingleMode() method of the given RAType |
| CSingleSampleLimitVisitor | Exposes the SingleSampleLimit() method of the given RAType |
| CTauVisitor | Exposes the Tau() method of the given RAType |
| CTrainVisitor | TrainVisitor sets the reference set to a new reference set on the given NSType |
| ►Nnn | |
| CSparseAutoencoder | A sparse autoencoder is a neural network whose aim is to learn compressed representations of the data, typically for dimensionality reduction, with a constraint on the activity of the neurons in the network |
| CSparseAutoencoderFunction | This is a class for the sparse autoencoder objective function |
| ►Npca | |
| CExactSVDPolicy | Implementation of the exact SVD policy |
| CPCA | This class implements principal components analysis (PCA) |
| CQUICSVDPolicy | Implementation of the QUIC-SVD policy |
| CRandomizedBlockKrylovSVDPolicy | Implementation of the randomized block Krylov SVD policy |
| CRandomizedSVDPolicy | Implementation of the randomized SVD policy |
| ►Nperceptron | |
| CPerceptron | This class implements a simple perceptron (i.e., a single layer neural network) |
| CRandomInitialization | This class is used to initialize weights for the weightVectors matrix in a random manner |
| CSimpleWeightUpdate | |
| CZeroInitialization | This class is used to initialize the matrix weightVectors to zero |
| ►Nradical | |
| CRadical | An implementation of RADICAL, an algorithm for independent component analysis (ICA) |
| ►Nrange | Range-search routines |
| CBiSearchVisitor | BiSearchVisitor executes a bichromatic range search on the given RSType |
| CDeleteVisitor | DeleteVisitor deletes the given RSType instance |
| CMonoSearchVisitor | MonoSearchVisitor executes a monochromatic range search on the given RSType |
| CNaiveVisitor | NaiveVisitor exposes the Naive() method of the given RSType |
| CRangeSearch | The RangeSearch class is a template class for performing range searches |
| CRangeSearchRules | The RangeSearchRules class is a template helper class used by RangeSearch class when performing range searches |
| CRangeSearchStat | Statistic class for RangeSearch, to be set to the StatisticType of the tree type that range search is being performed with |
| CReferenceSetVisitor | ReferenceSetVisitor exposes the referenceSet of the given RSType |
| CRSModel | |
| CSingleModeVisitor | SingleModeVisitor exposes the SingleMode() method of the given RSType |
| CTrainVisitor | TrainVisitor sets the reference set to a new reference set on the given RSType |
| ►Nregression | Regression methods |
| CBayesianLinearRegression | A Bayesian approach to the maximum likelihood estimation of the parameters of the linear regression model |
| CLARS | An implementation of LARS, a stage-wise homotopy-based algorithm for l1-regularized linear regression (LASSO) and l1+l2 regularized linear regression (Elastic Net) |
| CLinearRegression | A simple linear regression algorithm using ordinary least squares |
| CLogisticRegression | The LogisticRegression class implements an L2-regularized logistic regression model, and supports training with multiple optimizers and classification |
| CLogisticRegressionFunction | The log-likelihood function for the logistic regression objective function |
| CSoftmaxRegression | Softmax Regression is a classifier which can be used for classification when the data available can take two or more class values |
| CSoftmaxRegressionFunction | |
| ►Nrl | |
| ►CAcrobot | Implementation of Acrobot game |
| CAction | |
| CState | |
| CAggregatedPolicy | |
| CAsyncLearning | Wrapper of various asynchronous learning algorithms |
| ►CCartPole | Implementation of Cart Pole task |
| CAction | Implementation of action of Cart Pole |
| CState | Implementation of the state of Cart Pole |
| CCategoricalDQN | Implementation of the Categorical Deep Q-Learning network |
| ►CContinuousActionEnv | To use the dummy environment, one may start by specifying the state and action dimensions |
| CAction | Implementation of continuous action |
| CState | Implementation of state of the dummy environment |
| ►CContinuousDoublePoleCart | Implementation of Continuous Double Pole Cart Balancing task |
| CAction | Implementation of action of Continuous Double Pole Cart |
| CState | Implementation of the state of Continuous Double Pole Cart |
| ►CContinuousMountainCar | Implementation of Continuous Mountain Car task |
| CAction | Implementation of action of Continuous Mountain Car |
| CState | Implementation of state of Continuous Mountain Car |
| ►CDiscreteActionEnv | To use the dummy environment, one may start by specifying the state and action dimensions |
| CAction | Implementation of discrete action |
| CState | Implementation of state of the dummy environment |
| ►CDoublePoleCart | Implementation of Double Pole Cart Balancing task |
| CAction | Implementation of action of Double Pole Cart |
| CState | Implementation of the state of Double Pole Cart |
| CDuelingDQN | Implementation of the Dueling Deep Q-Learning network |
| CGreedyPolicy | Implementation for epsilon greedy policy |
| ►CMountainCar | Implementation of Mountain Car task |
| CAction | Implementation of action of Mountain Car |
| CState | Implementation of state of Mountain Car |
| CNStepQLearningWorker | Forward declaration of NStepQLearningWorker |
| COneStepQLearningWorker | Forward declaration of OneStepQLearningWorker |
| COneStepSarsaWorker | Forward declaration of OneStepSarsaWorker |
| ►CPendulum | Implementation of Pendulum task |
| CAction | Implementation of action of Pendulum |
| CState | Implementation of state of Pendulum |
| ►CPrioritizedReplay | Implementation of prioritized experience replay |
| CTransition | |
| CQLearning | Implementation of various Q-Learning algorithms, such as DQN and double DQN |
| ►CRandomReplay | Implementation of random experience replay |
| CTransition | |
| CRewardClipping | Interface for clipping the reward to some value between the specified minimum and maximum values |
| CSAC | Implementation of Soft Actor-Critic, a model-free off-policy actor-critic based deep reinforcement learning algorithm |
| CSimpleDQN | |
| CSumTree | Implementation of SumTree |
| CTrainingConfig | |
| ►Nsfinae | |
| CMethodFormDetector | |
| CMethodFormDetector< Class, MethodForm, 0 > | |
| CMethodFormDetector< Class, MethodForm, 1 > | |
| CMethodFormDetector< Class, MethodForm, 2 > | |
| CMethodFormDetector< Class, MethodForm, 3 > | |
| CMethodFormDetector< Class, MethodForm, 4 > | |
| CMethodFormDetector< Class, MethodForm, 5 > | |
| CMethodFormDetector< Class, MethodForm, 6 > | |
| CMethodFormDetector< Class, MethodForm, 7 > | |
| CSigCheck | Utility struct for checking signatures |
| ►Nsparse_coding | |
| CDataDependentRandomInitializer | A data-dependent random dictionary initializer for SparseCoding |
| CNothingInitializer | A DictionaryInitializer for SparseCoding which does not initialize anything; it is useful for when the dictionary is already known and will be set with SparseCoding::Dictionary() |
| CRandomInitializer | A DictionaryInitializer for use with the SparseCoding class |
| CSparseCoding | An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net) |
| ►Nsvd | |
| CBiasSVD | Bias SVD is an improvement on regularized SVD, which is a matrix factorization technique |
| CBiasSVDFunction | This class contains methods which are used to calculate the cost of BiasSVD's objective function, to calculate gradient of parameters with respect to the objective function, etc |
| CQUIC_SVD | QUIC-SVD is a matrix factorization technique, which operates in a subspace such that A's approximation in that subspace has minimum error (A being the data matrix) |
| CRandomizedBlockKrylovSVD | Randomized block Krylov SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition" |
| CRandomizedSVD | Randomized SVD is a matrix factorization that is based on randomized matrix approximation techniques, developed in "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions" |
| CRegularizedSVD | Regularized SVD is a matrix factorization technique that seeks to reduce the error on the training set, that is, on the examples for which the ratings have been provided by the users |
| CRegularizedSVDFunction | The data is stored in a matrix of type MatType, so that this class can be used with both dense and sparse matrix types |
| CSVDPlusPlus | SVD++ is a matrix decomposition technique used in collaborative filtering |
| CSVDPlusPlusFunction | This class contains methods which are used to calculate the cost of SVD++'s objective function, to calculate gradient of parameters with respect to the objective function, etc |
| ►Nsvm | |
| CLinearSVM | The LinearSVM class implements an L2-regularized support vector machine model, and supports training with multiple optimizers and classification |
| CLinearSVMFunction | The hinge loss function for the linear SVM objective function |
| ►Ntree | Trees and tree-building procedures |
| ►CAllCategoricalSplit | The AllCategoricalSplit is a splitting function that will split categorical features into many children: one child for each category |
| CAuxiliarySplitInfo | |
| CAllDimensionSelect | This dimension selection policy allows any dimension to be selected for splitting |
| CAxisParallelProjVector | AxisParallelProjVector defines an axis-parallel projection vector |
| ►CBestBinaryNumericSplit | The BestBinaryNumericSplit is a splitting function for decision trees that will exhaustively search a numeric dimension for the best binary split |
| CAuxiliarySplitInfo | |
| CBinaryNumericSplit | The BinaryNumericSplit class implements the numeric feature splitting strategy devised by Gama, Rocha, and Medas |
| CBinaryNumericSplitInfo | |
| ►CBinarySpaceTree | A binary space partitioning tree, such as a KD-tree or a ball tree |
| CBreadthFirstDualTreeTraverser | |
| CDualTreeTraverser | A dual-tree traverser for binary space trees; see dual_tree_traverser.hpp |
| CSingleTreeTraverser | A single-tree traverser for binary space trees; see single_tree_traverser.hpp for implementation |
| CCategoricalSplitInfo | |
| CCompareCosineNode | |
| CCosineTree | |
| ►CCoverTree | A cover tree is a tree specifically designed to speed up nearest-neighbor computation in high-dimensional spaces |
| CDualTreeTraverser | A dual-tree cover tree traverser; see dual_tree_traverser.hpp |
| CSingleTreeTraverser | A single-tree cover tree traverser; see single_tree_traverser.hpp for implementation |
| CDecisionTree | This class implements a generic decision tree learner |
| CDiscreteHilbertValue | The DiscreteHilbertValue class stores Hilbert values for all of the points in a RectangleTree node, and calculates Hilbert values for new points |
| CEmptyStatistic | Empty statistic if you are not interested in storing statistics in your tree |
| CExampleTree | This is not an actual space tree but instead an example tree that exists to show and document all the functions that mlpack trees must implement |
| CFirstPointIsRoot | This class is meant to be used as a choice for the policy class RootPointPolicy of the CoverTree class |
| CGiniGain | The Gini gain, a measure of set purity usable as a fitness function (FitnessFunction) for decision trees |
| CGiniImpurity | |
| CGreedySingleTreeTraverser | |
| CHilbertRTreeAuxiliaryInformation | |
| CHilbertRTreeDescentHeuristic | This class chooses the best child of a node in a Hilbert R tree when inserting a new point |
| CHilbertRTreeSplit | The splitting procedure for the Hilbert R tree |
| CHoeffdingCategoricalSplit | This is the standard Hoeffding-bound categorical feature splitting strategy |
| CHoeffdingInformationGain | |
| CHoeffdingNumericSplit | The HoeffdingNumericSplit class implements the numeric feature splitting strategy alluded to by Domingos and Hulten |
| CHoeffdingTree | The HoeffdingTree object represents all of the necessary information for a Hoeffding-bound-based decision tree |
| CHoeffdingTreeModel | This class is a serializable Hoeffding tree model that can hold four different types of Hoeffding trees |
| CHyperplaneBase | HyperplaneBase defines a splitting hyperplane based on a projection vector and projection value |
| CInformationGain | The standard information gain criterion, used for calculating gain in decision trees |
| CIsSpillTree | |
| CIsSpillTree< tree::SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | |
| CMeanSpaceSplit | |
| ►CMeanSplit | A binary space partitioning tree node is split into its left and right children |
| CSplitInfo | Information about the partition |
| CMidpointSpaceSplit | |
| ►CMidpointSplit | A binary space partitioning tree node is split into its left and right children |
| CSplitInfo | A struct that contains information about the split |
| ►CMinimalCoverageSweep | The MinimalCoverageSweep class finds a partition along which we can split a node according to the coverage of two resulting nodes |
| CSweepCost | A struct that provides the type of the sweep cost |
| ►CMinimalSplitsNumberSweep | The MinimalSplitsNumberSweep class finds a partition along which we can split a node according to the number of required splits of the node |
| CSweepCost | A struct that provides the type of the sweep cost |
| CMultipleRandomDimensionSelect | This dimension selection policy allows the selection from a few random dimensions |
| CNoAuxiliaryInformation | |
| CNumericSplitInfo | |
| ►COctree | |
| CDualTreeTraverser | A dual-tree traverser; see dual_tree_traverser.hpp |
| CSingleTreeTraverser | A single-tree traverser; see single_tree_traverser.hpp |
| CProjVector | ProjVector defines a general projection vector (not necessarily axis-parallel) |
| CQueueFrame | |
| CRandomDimensionSelect | This dimension selection policy only selects one single random dimension |
| CRandomForest | |
| ►CRectangleTree | A rectangle-type tree, such as an R-tree or X-tree |
| CDualTreeTraverser | A dual tree traverser for rectangle type trees |
| CSingleTreeTraverser | A single traverser for rectangle type trees |
| CRPlusPlusTreeAuxiliaryInformation | |
| CRPlusPlusTreeDescentHeuristic | |
| CRPlusPlusTreeSplitPolicy | The RPlusPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split |
| CRPlusTreeDescentHeuristic | |
| CRPlusTreeSplit | The RPlusTreeSplit class performs the split process of a node on overflow |
| CRPlusTreeSplitPolicy | The RPlusTreeSplitPolicy helps to determine the subtree into which we should insert a child of an intermediate node that is being split |
| ►CRPTreeMaxSplit | This class splits a node by a random hyperplane |
| CSplitInfo | Information about the partition |
| ►CRPTreeMeanSplit | This class splits a binary space tree |
| CSplitInfo | Information about the partition |
| CRStarTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
| CRStarTreeSplit | A Rectangle Tree has new points inserted at the bottom |
| CRTreeDescentHeuristic | When descending a RectangleTree to insert a point, we need to have a way to choose a child node when the point isn't enclosed by any of them |
| CRTreeSplit | A Rectangle Tree has new points inserted at the bottom |
| CSpaceSplit | |
| ►CSpillTree | A hybrid spill tree is a variant of binary space trees in which the children of a node can "spill over" each other, and contain shared datapoints |
| CSpillDualTreeTraverser | A generic dual-tree traverser for hybrid spill trees; see spill_dual_tree_traverser.hpp for implementation |
| CSpillSingleTreeTraverser | A generic single-tree traverser for hybrid spill trees; see spill_single_tree_traverser.hpp for implementation |
| CTraversalInfo | The TraversalInfo class holds traversal information which is used in dual-tree (and single-tree) traversals |
| CTreeTraits | The TreeTraits class provides compile-time information on the characteristics of a given tree type |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::BallBound, SplitType > > | This is a specialization of the TreeTraits class to the BallTree tree type |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::CellBound, SplitType > > | This is a specialization of the TreeTraits class to the UBTree tree type |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, bound::HollowBallBound, SplitType > > | This is a specialization of the TreeTraits class to an arbitrary tree with HollowBallBound (currently only the vantage point tree is supported) |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMaxSplit > > | This is a specialization of the TreeTraits class to the max-split random projection tree |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, RPTreeMeanSplit > > | This is a specialization of the TreeTraits class to the mean-split random projection tree |
| CTreeTraits< BinarySpaceTree< MetricType, StatisticType, MatType, BoundType, SplitType > > | This is a specialization of the TreeTraits class to the BinarySpaceTree tree type |
| CTreeTraits< CoverTree< MetricType, StatisticType, MatType, RootPointPolicy > > | The specialization of the TreeTraits class for the CoverTree tree type |
| CTreeTraits< Octree< MetricType, StatisticType, MatType > > | This is a specialization of the TreeTraits class to the Octree tree type |
| CTreeTraits< RectangleTree< MetricType, StatisticType, MatType, RPlusTreeSplit< SplitPolicyType, SweepType >, DescentType, AuxiliaryInformationType > > | Since the R+/R++ tree cannot have overlapping children, we should define traits for the R+/R++ tree |
| CTreeTraits< RectangleTree< MetricType, StatisticType, MatType, SplitType, DescentType, AuxiliaryInformationType > > | This is a specialization of the TreeTraits class to the RectangleTree tree type |
| CTreeTraits< SpillTree< MetricType, StatisticType, MatType, HyperplaneType, SplitType > > | This is a specialization of the TreeTraits class to the SpillTree tree type |
| CUBTreeSplit | Split a node into two parts according to the median address of points contained in the node |
| ►CVantagePointSplit | The class splits a binary space partitioning tree node according to the median distance to the vantage point |
| CSplitInfo | A struct that contains information about the split |
| ►CXTreeAuxiliaryInformation | The XTreeAuxiliaryInformation class provides information specific to X trees for each node in a RectangleTree |
| CSplitHistoryStruct | The X tree requires that the tree records its "split history" |
| CXTreeSplit | A Rectangle Tree has new points inserted at the bottom |
| ►Nutil | |
| CBindingDetails | This structure holds all of the information about bindings documentation |
| CExample | |
| CIsStdVector | Metaprogramming structure for vector detection |
| CIsStdVector< std::vector< T, A > > | Metaprogramming structure for vector detection |
| CLongDescription | |
| CNullOutStream | Used for Log::Debug when not compiled with debugging symbols |
| CParamData | This structure holds all of the information about a single parameter, including its value (which is set when ParseCommandLine() is called) |
| CPrefixedOutStream | Allows us to output to an ostream with a prefix at the beginning of each line, in the same way we would output to cout or cerr |
| CProgramName | |
| CSeeAlso | |
| CShortDescription | |
| CBacktrace | Provides a backtrace |
| CIO | Parses the command line for parameters and holds user-specified parameters |
| CLog | Provides a convenient way to give formatted output |
| CTimer | The timer class provides a way for mlpack methods to be timed |
| CTimers | |
| CInitHMMModel | |
| CIsVector | If value == true, then VecType is some sort of Armadillo vector or subview |
| CIsVector< arma::Col< eT > > | |
| CIsVector< arma::Row< eT > > | |
| CIsVector< arma::SpCol< eT > > | |
| CIsVector< arma::SpRow< eT > > | |
| CIsVector< arma::SpSubview< eT > > | |
| CIsVector< arma::subview_col< eT > > | |
| CIsVector< arma::subview_row< eT > > | |
| CLayerNameVisitor | Implementation of a class that returns the string representation of the name of the given layer |
| CTrainHMMModel | |