Derivative free optimization using a population-based stochastic gradient estimator.

Title: Derivative free optimization using a population-based stochastic gradient estimator
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Khayrattee A, Anagnostopoulos GC
Editor: Arnold DV
Conference Name: Genetic and Evolutionary Computation Conference (GECCO '14), July 12-16, 2014
Publisher: Association for Computing Machinery (ACM)
Conference Location: Vancouver, BC, Canada

In this paper we introduce a derivative-free optimization method derived from a population-based stochastic gradient estimator. We first demonstrate some properties of this estimator and show why it is expected to always yield a descent direction. We show analytically that, for strongly convex functions, the difference between the expected function value and the optimum decreases exponentially, and that the expected distance between the current point and the optimum is bounded above. We then experimentally tune the parameters of our algorithm for best performance. Finally, we evaluate the algorithm on the Black-Box Optimization Benchmarking (BBOB) test function suite. The experiments indicate that the method offers notable performance advantages, especially when applied to objective functions that are ill-conditioned and potentially multi-modal. This result, coupled with its low computational cost compared to quasi-Newton methods, makes the method quite attractive.
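To illustrate the general idea behind the abstract, the sketch below shows a generic population-based stochastic gradient estimator of the kind used in evolution strategies: sample a population of random perturbations around the current point, form antithetic finite-difference terms, and average them into a descent direction. This is a minimal illustrative sketch, not the paper's exact estimator; all function names, the antithetic-sampling choice, and the parameter values (`pop_size`, `sigma`, `lr`) are assumptions for illustration.

```python
import numpy as np

def population_gradient_estimate(f, x, pop_size=20, sigma=0.1, rng=None):
    """Estimate the gradient of f at x from a population of perturbations.

    Generic antithetic-sampling estimator (illustrative only; not
    necessarily the estimator proposed in the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(pop_size):
        u = rng.standard_normal(x.shape[0])
        # Antithetic pair f(x + su) and f(x - su) cancels even-order
        # terms and reduces the variance of the estimate.
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u
    return grad / pop_size

def minimize(f, x0, lr=0.1, iters=200, rng=None):
    """Plain stochastic-gradient descent driven by the estimator above."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(iters):
        x = x - lr * population_gradient_estimate(f, x, rng=rng)
    return x

# Strongly convex quadratic: iterates approach the optimum at the origin.
x_star = minimize(lambda x: np.sum(x ** 2), np.array([3.0, -2.0]),
                  rng=np.random.default_rng(0))
print(np.linalg.norm(x_star))
```

For a quadratic objective the antithetic difference is exact (the even-order terms cancel), so the only error is the sampling noise from the finite population, which here is multiplicative in the distance to the optimum and thus shrinks along with it.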


Nominated for Best Paper Award.

Acceptance rate 33% (180/544).