Search Results

Showing 1 - 9 of 9
  • Publication
    Incremental construction of rule ensembles using classifiers produced by different class orderings
    (IEEE, 2016) Yıldız, Olcay Taner; Ulaş, Aydın
    In this paper, we discuss a novel approach to incrementally constructing a rule ensemble. The approach builds an ensemble from a dynamically generated set of rule classifiers, each trained with a different class ordering. We investigate criteria including accuracy, ensemble size, and the role of the starting point in the search. Fusion is done by averaging. On 22 data sets, floating search finds small, accurate ensembles in polynomial time.
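The abstract states only that fusion is done by averaging; a minimal sketch of that step, with hypothetical names and toy probabilities (not the authors' implementation), could look like:

```python
import numpy as np

def average_fusion(probability_outputs):
    """Fuse per-classifier class-probability outputs by simple averaging.

    probability_outputs: list of arrays, one per classifier in the
    ensemble, each of shape (n_samples, n_classes).
    Returns the index of the winning class for each sample.
    """
    avg = np.mean(np.stack(probability_outputs), axis=0)  # (n_samples, n_classes)
    return np.argmax(avg, axis=1)

# Two toy classifiers; averaging resolves their disagreement on sample 2:
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3]])
print(average_fusion([p1, p2]))  # → [0 0]
```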
  • Publication
    Hybrid high dimensional model representation (HHDMR) on the partitioned data
    (Elsevier B.V., 2006-01-01) Tunga, Mehmet Alper; Demiralp, Metin
    A multivariate interpolation problem is generally constructed for the appropriate determination of a multivariate function whose values are given at a finite number of nodes of a multivariate grid. One way to construct the solution of this problem is to partition the given multivariate data into low-variate data. High dimensional model representation (HDMR) and generalized high dimensional model representation (GHDMR) methods are used to make this partitioning: the multivariate data can be partitioned using the components of the HDMR or GHDMR expansions. When a cartesian product set in the space of the independent variables is given, the HDMR expansion is used. On the other hand, if the nodes are the elements of a random discrete data set, the GHDMR expansion is used instead of HDMR. These two expansions work well for multivariate data that have an additive nature. If the data have a multiplicative nature, factorized high dimensional model representation (FHDMR) is used. In most cases, however, the given multivariate data and the sought multivariate function have neither a purely additive nor a purely multiplicative nature; they have a hybrid nature. A new method, called hybrid high dimensional model representation (HHDMR), is therefore developed to obtain better results. This new expansion combines the HDMR (or GHDMR) and FHDMR expansions through a hybridity parameter. In this work, the general structure of this hybrid expansion is given, and the best value for the hybridity parameter is sought. With this value, the analytical structure of the sought multivariate function can be determined via HHDMR.
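The idea of blending an additive and a multiplicative decomposition through a hybridity parameter can be illustrated with a toy sketch. This is not the authors' formulation: it uses only first-order truncations of a standard HDMR-style additive expansion and an FHDMR-style multiplicative one for a bivariate grid, with `gamma` as the assumed hybridity parameter.

```python
import numpy as np

def first_order_approximations(F):
    """F: 2-D array of function values on a cartesian product grid."""
    f0 = F.mean()                        # constant (zeroth-order) term
    f1 = F.mean(axis=1) - f0             # univariate term in the first variable
    f2 = F.mean(axis=0) - f0             # univariate term in the second variable
    additive = f0 + f1[:, None] + f2[None, :]
    # FHDMR-style product form: f0 * (1 + f1/f0) * (1 + f2/f0)
    multiplicative = (f0 + f1[:, None]) * (f0 + f2[None, :]) / f0
    return additive, multiplicative

def hybrid(F, gamma):
    """Blend the two first-order approximations via a hybridity parameter."""
    add, mult = first_order_approximations(F)
    return (1.0 - gamma) * add + gamma * mult

x = np.linspace(1.0, 2.0, 5)
F_add = x[:, None] + x[None, :]      # purely additive data
F_mul = x[:, None] * x[None, :]      # purely multiplicative data
print(np.abs(hybrid(F_add, 0.0) - F_add).max() < 1e-9)  # → True
print(np.abs(hybrid(F_mul, 1.0) - F_mul).max() < 1e-9)  # → True
```

The two checks show the limiting behavior the abstract describes: gamma = 0 recovers purely additive data exactly, gamma = 1 recovers purely multiplicative data exactly, and data of hybrid nature would be served best by an intermediate value.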
  • Publication
    BinBRO: Binary Battle Royale Optimizer algorithm
    (Elsevier Ltd, 2022-02-04) Rahkar Farshi (Akan), Taymaz; Agahian, Saeid; Dehkharghani, Rahim
    Stochastic methods attempt to solve problems that cannot be solved by deterministic methods with reasonable time complexity. Optimization algorithms benefit from stochastic methods; however, they do not guarantee an optimal solution. Many optimization algorithms have been proposed for problems of a continuous nature; nevertheless, they are unable to solve discrete or binary problems. Adapting continuous optimization algorithms to discrete problems has gained growing popularity in recent decades. In this paper, we propose BinBRO, a binary version of the recently proposed Battle Royale Optimization algorithm. The proposed algorithm has been applied to two benchmark problems, the uncapacitated facility location problem (UFLP) and the maximum-cut (Max-Cut) graph problem, and has been compared with six other binary optimization algorithms, namely Particle Swarm Optimization, different versions of the Genetic Algorithm, and different versions of the Artificial Bee Colony algorithm. The BinBRO-based algorithms rank first among these algorithms on all benchmark datasets of both problems.
  • Publication
    Battle Royale Optimizer for solving binary optimization problems
    (Elsevier B.V., 2022-05) Akan, Taymaz; Agahian, Saeid; Dehkharghani, Rahim
    Battle Royale Optimizer (BRO) is a recently proposed metaheuristic optimization algorithm applicable only to continuous problem spaces. BinBRO is a binary version of BRO. The BinBRO algorithm employs a differential expression that uses a dissimilarity measure between binary vectors, instead of the vector subtraction operator used in the original BRO, to find the nearest neighbor. To evaluate BinBRO, we applied it to two popular benchmark problems from OR-Library: the uncapacitated facility location problem (UFLP) and the maximum-cut (Max-Cut) graph problem. An open-source MATLAB implementation of BinBRO is available on CodeOcean and GitHub.
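The abstract says a dissimilarity measure between binary vectors replaces vector subtraction for the nearest-neighbor step, without naming the measure. A minimal sketch, assuming Hamming distance as that measure (an assumption, not stated in the abstract):

```python
import numpy as np

def hamming(a, b):
    """Number of positions where two binary vectors differ."""
    return int(np.sum(a != b))

def nearest_neighbor(soldier, population):
    """Index of the population member with the smallest Hamming
    distance to `soldier` (toy helper, not the paper's code)."""
    dists = [hamming(soldier, p) for p in population]
    return int(np.argmin(dists))

pop = np.array([[0, 1, 1, 0],
                [1, 1, 1, 0],
                [0, 0, 0, 1]])
print(nearest_neighbor(np.array([1, 1, 0, 0]), pop))  # → 1
```

Hamming distance plays the role that the Euclidean magnitude of a vector difference plays in continuous BRO: it orders candidates by how many bit positions would have to flip.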
  • Publication
    Searching for the optimal ordering of classes in rule induction
    (IEEE, 2012-11-15) Ata, Sezin; Yıldız, Olcay Taner
    Rule induction algorithms such as Ripper solve a K > 2 class problem by converting it into a sequence of K - 1 two-class problems. As a usual heuristic, the classes are fed into the algorithm in order of increasing prior probability. In this paper, we propose two algorithms to improve on this heuristic. The first starts with the ordering the heuristic provides and searches for better orderings by swapping consecutive classes. The second transforms the ordering search into an optimization problem and uses the solution of that problem to extract the optimal ordering. We compared our algorithms with the original Ripper on 8 datasets from the UCI repository [2]. Simulation results show that our algorithms produce rulesets that are significantly better than those produced by Ripper proper.
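The baseline heuristic and the consecutive-swap neighborhood of the first algorithm are simple enough to sketch. The helper names below are hypothetical; the actual rule-quality evaluation inside Ripper is omitted.

```python
from collections import Counter

def class_ordering(labels):
    """Baseline heuristic: classes in order of increasing prior
    probability, so the most frequent class becomes the default."""
    counts = Counter(labels)
    return sorted(counts, key=lambda c: counts[c])

def consecutive_swaps(order):
    """Neighbor orderings obtained by swapping consecutive classes,
    as explored by the first proposed algorithm."""
    for i in range(len(order) - 1):
        neighbor = list(order)
        neighbor[i], neighbor[i + 1] = neighbor[i + 1], neighbor[i]
        yield neighbor

labels = ["a"] * 50 + ["b"] * 30 + ["c"] * 20
print(class_ordering(labels))                          # → ['c', 'b', 'a']
print(list(consecutive_swaps(["c", "b", "a"])))        # → [['b', 'c', 'a'], ['c', 'a', 'b']]
```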
  • Publication
    An incremental model selection algorithm based on cross-validation for finding the architecture of a Hidden Markov model on hand gesture data sets
    (IEEE, 2009-12-13) Ulaş, Aydın; Yıldız, Olcay Taner
    In a multi-parameter learning problem, besides choosing the architecture of the learner, there is the problem of finding the optimal parameters for maximum performance. As the number of parameters to be tuned increases, it becomes infeasible to try all parameter sets; hence we need an automatic mechanism that finds the optimum parameter setting using computationally feasible algorithms. In this paper, we define optimizing the architecture of a Hidden Markov Model (HMM) as a state-space search and propose the MSUMO (Model Selection Using Multiple Operators) framework, which incrementally modifies the structure and checks for improvement using cross-validation. Five variants use forward/backward search, single/multiple operators, and depth-first/breadth-first search. On four hand gesture data sets, we compare the performance of MSUMO with the optimal parameter set found by exhaustive search in terms of expected error and computational complexity.
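The general pattern of incrementally modifying a structure and keeping the change only if cross-validated error improves can be sketched generically. This is not MSUMO itself: it shows a single forward-search operator ("add one state") over one hypothetical architecture parameter, with `cv_error` standing in for a cross-validation run.

```python
def forward_search(cv_error, start=1, max_states=20):
    """Greedy forward search over a state count.

    cv_error: function mapping a candidate state count to a
    cross-validated error estimate (assumed, not MSUMO's API).
    """
    best, best_err = start, cv_error(start)
    while best < max_states:
        candidate = best + 1          # apply the 'add one state' operator
        err = cv_error(candidate)
        if err < best_err:            # keep the modification only on improvement
            best, best_err = candidate, err
        else:
            break                     # stop at the first non-improvement
    return best, best_err

# Toy error surface with its minimum at 5 states:
print(forward_search(lambda k: (k - 5) ** 2)[0])  # → 5
```

MSUMO's variants differ in which operators are tried (single vs. multiple), in search direction (forward vs. backward), and in whether candidates are explored depth-first or breadth-first; the acceptance test above is the common core.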
  • Publication
    Quadratic programming for class ordering in rule induction
    (Elsevier Science BV, 2015-03-01) Yıldız, Olcay Taner
    Separate-and-conquer rule induction algorithms such as Ripper solve a K > 2 class problem by converting it into a sequence of K - 1 two-class problems. As a usual heuristic, the classes are fed into the algorithm in order of increasing prior probability. Although the heuristic works well in practice, there is much room for improvement. In this paper, we propose a novel approach to improve this heuristic. The approach transforms the ordering search problem into a quadratic optimization problem and uses the solution of the optimization problem to extract the optimal ordering. We compared the new Ripper (guided by the ordering found with our approach) with the original Ripper (guided by the heuristic ordering) on 27 datasets. Simulation results show that our approach produces rulesets that are significantly better than those produced by the original Ripper.
  • Publication
    Subset selection for tuning of hyper-parameters in artificial neural networks
    (IEEE, 2017) Aki, K.K.Emre; Erkoç, Tuğba; Eskil, Mustafa Taner
    Hyper-parameters of a machine learning architecture define its design. Tuning hyper-parameters is costly, and for large data sets outright impractical, whether performed manually or algorithmically. In this study we propose a Neocognitron-based method for reducing the training set to a fraction of its size while keeping the dynamics and complexity of the domain. Our approach does not require processing the entire training set, making it feasible for larger data sets. In our experiments we successfully reduced the MNIST training set to less than 2.5% (1,489 images) by processing less than 10% of the 60K images. We showed that the reduced data set can be used for tuning the number of hidden neurons in a multi-layer perceptron.
  • Publication
    Crossing minimization in weighted bipartite graphs
    (Springer, 2007) Çakıroğlu, Olca Arda; Erten, Cesim; Karataş, Ömer; Sözdinler, Melih
    Given a bipartite graph G = (L0, L1, E) and a fixed ordering of the nodes in L0, the problem of finding an ordering of the nodes in L1 that minimizes the number of crossings has received much attention in the literature. The problem is NP-complete in general, and several practically efficient heuristics and polynomial-time algorithms with a constant approximation ratio have been suggested. We generalize the problem and consider the version where the edges have nonnegative weights. Although this problem is more general and finds specific applications in automatic graph layout problems similar to those of the unweighted case, it has not received as much attention. We provide a new technique that efficiently approximates a solution to this more general problem within a constant approximation ratio of 3. In addition, we provide appropriate generalizations of some common heuristics usually employed for the unweighted case and compare their performances.
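The objective being minimized can be sketched concretely. In the weighted version, a crossing between two edges is naturally charged the product of their weights; the helper below is a hypothetical illustration of that objective (quadratic in the number of edges, not the paper's algorithm), using the standard inversion test: two edges cross exactly when their endpoint orders disagree on the two layers.

```python
from itertools import combinations

def weighted_crossings(edges, order0, order1):
    """Total weighted crossings of a two-layer drawing.

    edges:  iterable of (u, v, weight) with u in layer L0, v in layer L1.
    order0: left-to-right node order on L0 (fixed).
    order1: candidate node order on L1.
    """
    p0 = {u: i for i, u in enumerate(order0)}
    p1 = {v: i for i, v in enumerate(order1)}
    total = 0
    for (u1, v1, w1), (u2, v2, w2) in combinations(edges, 2):
        # Edges cross iff their endpoints appear in opposite relative order.
        if (p0[u1] - p0[u2]) * (p1[v1] - p1[v2]) < 0:
            total += w1 * w2
    return total

edges = [("a", "x", 2), ("b", "y", 3), ("a", "y", 1)]
print(weighted_crossings(edges, ["a", "b"], ["x", "y"]))  # → 0
print(weighted_crossings(edges, ["a", "b"], ["y", "x"]))  # → 6
```

With unit weights this reduces to the classical one-sided crossing minimization objective, which is the sense in which the weighted problem generalizes the unweighted one.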