Derivative free optimization using a population-based stochastic gradient estimator.
| Field | Value |
|---|---|
| Title | Derivative free optimization using a population-based stochastic gradient estimator |
| Publication Type | Conference Paper |
| Year of Publication | 2014 |
| Authors | Khayrattee A, Anagnostopoulos GC |
| Conference Name | Genetic and Evolutionary Computation Conference (GECCO '14), July 12-16, 2014 |
| Publisher | Association for Computing Machinery (ACM) |
| Conference Location | Vancouver, BC, Canada |
In this paper we introduce a derivative-free optimization method that is derived from a population-based stochastic gradient estimator. We first demonstrate some properties of this estimator and show how it is expected to always yield a descent direction. We show analytically that, for strongly convex functions, the gap between the expected function value and the optimum decreases exponentially, and that the expected distance between the current point and the optimum has an upper bound. We then tune the algorithm's parameters experimentally to obtain the best performance. Finally, we evaluate the algorithm on the Black-Box Optimization Benchmarking (BBOB) test function suite. The experiments indicate that the method offers notable performance advantages, especially when applied to objective functions that are ill-conditioned and potentially multi-modal. This result, coupled with its low computational cost compared to quasi-Newton methods, makes it quite attractive.
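The abstract does not spell out the estimator itself, but the idea of estimating a descent direction from a population of sampled points can be sketched generically. The snippet below is a minimal illustrative stand-in, not the paper's exact estimator: it uses an antithetic random-direction finite-difference average over a population, a common form of population-based stochastic gradient estimation. All names, the smoothing radius `sigma`, and the population size are assumptions for illustration.

```python
import numpy as np

def population_gradient_estimate(f, x, sigma=0.1, pop_size=20, rng=None):
    """Estimate the gradient of f at x from a population of perturbed points.

    Generic antithetic estimator: average of directional finite differences
    along random Gaussian directions. Illustrative only; not the authors'
    published estimator.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(pop_size):
        u = rng.standard_normal(d)
        # Antithetic finite difference approximates the directional
        # derivative of f at x along u; weighting by u recovers an
        # (approximately unbiased) gradient estimate on average.
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return g / pop_size

def minimize(f, x0, step=0.1, iters=200, rng=None, **kw):
    """Plain descent using the population-based gradient estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * population_gradient_estimate(f, x, rng=rng, **kw)
    return x
```

On a strongly convex quadratic such as `f(x) = ||x - 1||^2`, repeated steps with this estimator contract toward the optimum in expectation, which mirrors the exponential-decrease behavior the abstract claims for strongly convex objectives.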
Nominated for Best Paper Award.
Acceptance rate 33% (180/544).