Total records: 19, listed: 11-19
Omnivariate rule induction using a novel pairwise statistical test
(IEEE Computer Soc, 2013-09)
Rule learning algorithms such as RIPPER induce univariate rules, that is, a propositional condition in a rule uses only one feature. In this paper, we propose an omnivariate induction of rules where under each ...
Mapping classifiers and datasets
(Pergamon-Elsevier Science Ltd, 2011-04)
Given the posterior probability estimates of 14 classifiers on 38 datasets, we plot two-dimensional maps of classifiers and datasets using principal component analysis (PCA) and Isomap. The similarity between classifiers ...
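As a hedged sketch of the mapping idea (not the paper's actual pipeline): each classifier can be represented by a vector of its per-dataset scores, and PCA projects those vectors onto a two-dimensional map. The score matrix below is an invented placeholder standing in for the 14-classifier, 38-dataset setting.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented placeholder: rows = 14 classifiers, columns = scores on 38 datasets.
scores = rng.random((14, 38))

# PCA via SVD: center the rows, decompose, keep the first two principal axes.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # 2-D map: one (x, y) point per classifier

print(coords.shape)  # (14, 2)
```

Classifiers that behave similarly across the datasets land near each other on this map; Isomap would replace the linear projection with a geodesic-distance embedding.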
Soft decision trees
(IEEE, 2012)
We discuss a novel decision tree architecture with soft decisions at the internal nodes where we choose both children with probabilities given by a sigmoid gating function. Our algorithm is incremental where new nodes are ...
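The gating mechanism described above can be sketched in a few lines. This is a minimal one-dimensional illustration with hand-picked (not learned) parameters, showing how a sigmoid gate at each internal node weights both children rather than choosing one.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_tree_predict(x, node):
    """Return the soft tree's response for scalar input x.

    Internal nodes are tuples (w, b, left, right): the gate g = sigmoid(w*x + b)
    routes the input to BOTH children, weighted by g and 1 - g.
    Leaves are plain scalar responses.
    """
    if not isinstance(node, tuple):  # leaf: constant response
        return node
    w, b, left, right = node
    g = sigmoid(w * x + b)  # soft gating probability for the left child
    return g * soft_tree_predict(x, left) + (1 - g) * soft_tree_predict(x, right)

# Tiny illustrative tree: root gates between a leaf and another internal node.
tree = (1.0, 0.0, 1.0, (2.0, -1.0, 0.0, -1.0))
print(soft_tree_predict(0.0, tree))
```

Because every path contributes, the response is a smooth function of x, which is what makes gradient-based, incremental training of such trees possible.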
Müşterilerin GSP analizi kullanarak kümelenmesi
(Institute of Electrical and Electronics Engineers Inc., 2018-07-05)
Bu çalışma ile mevcut misafir ve rezervasyon verisi kullanılarak doğal öbeklenmeleri tespit ederek misafir davranışları tespit ettik. Ayrıca verilen hizmetleri ve satış stratejilerini bu davranışlara göre özelleştirdik. ...
Bagging soft decision trees
(Springer Verlag, 2016)
The decision tree is one of the earliest predictive models in machine learning. In the soft decision tree, based on the hierarchical mixture of experts model, internal binary nodes take soft decisions and choose both ...
Feature extraction from discrete attributes
(IEEE, 2010)
In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we extract new features by combining k discrete attributes, where for each ...
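The exact combination scheme is truncated in the abstract above; a generic sketch of one way to combine k discrete attributes is to form all k-tuples of their values as compound features (an assumption for illustration, not necessarily the paper's method).

```python
from itertools import combinations

def combine_discrete(rows, k=2):
    """For each row of discrete attribute values, build compound features
    by pairing the values of every k-subset of attribute positions."""
    n_attrs = len(rows[0])
    out = []
    for row in rows:
        combined = [tuple(row[i] for i in idx)
                    for idx in combinations(range(n_attrs), k)]
        out.append(combined)
    return out

rows = [("red", "small", "round"), ("blue", "large", "square")]
print(combine_discrete(rows)[0])
# [('red', 'small'), ('red', 'round'), ('small', 'round')]
```

Each compound feature takes values in the Cartesian product of its parents' domains, so a subsequent tree can split on joint value patterns a univariate split cannot express.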
Univariate margin tree
(Springer, 2010)
In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called univariate margin ...
On the VC-dimension of univariate decision trees
(2012)
In this paper, we give and prove lower bounds of the VC-dimension of the univariate decision tree hypothesis class. The VC-dimension of the univariate decision tree depends on the VC-dimension values of its subtrees and ...
Statistical tests using hinge/ε-sensitive loss
(Springer-Verlag, 2013)
Statistical tests used in the literature to compare algorithms rely on the misclassification error, which is based on the 0/1 loss, and on the square loss for regression. Kernel-based, support vector machine classifiers (regressors) ...
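For reference, the two losses the abstract contrasts with 0/1 and square loss are shown below in their generic textbook form (these are the standard definitions, not the paper's test statistics).

```python
def hinge_loss(y, f):
    """Hinge loss for classification: y in {-1, +1}, f the raw margin score.
    Zero once the example is on the correct side with margin >= 1."""
    return max(0.0, 1.0 - y * f)

def eps_sensitive_loss(y, f, eps=0.1):
    """Epsilon-sensitive loss for regression: errors within the eps-tube
    around the target cost nothing."""
    return max(0.0, abs(y - f) - eps)

print(hinge_loss(+1, 0.3))            # correct side but margin < 1, so positive loss
print(eps_sensitive_loss(2.0, 2.05))  # prediction inside the tube, so zero loss
```

Tests built on these losses compare the quantities SVM classifiers and regressors actually optimize, rather than the 0/1 or square loss they are usually evaluated on.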