Development Guide
Note
This development guide page is still under active update. We hope to make adding new black-box optimizers as simple as possible. Given the relatively long runtime of black-box optimizers on high-dimensional problems, whenever any new black-box optimizer is added, at least two core developers of this library will manually check its source code and run its test code to verify programming correctness.
Before reading this page, you first need to read the User Guide for some basic information about this open-source Python library PyPop7. Note that since this topic is mainly intended for advanced developers, end users can freely skip this page.
Docstring Conventions
For docstring conventions, this library first follows PEP 257. Since the library is built on top of the NumPy ecosystem, we further use the docstring conventions from numpydoc.
In addition, the dedicated infix operator for matrix multiplication introduced by PEP 465 is now used. We are revising all existing Python code to simplify it under PEP 465.
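As a minimal illustration of this convention (not code from the library itself), the `@` operator from PEP 465 replaces nested `numpy.dot` calls:

```python
import numpy as np

rng = np.random.default_rng(2022)
a = rng.standard_normal((3, 3))
b = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

y_old = np.dot(a, np.dot(b, x))  # before PEP 465: nested function calls
y_new = a @ b @ x  # after PEP 465: dedicated infix operator for matrix multiplication
assert np.allclose(y_old, y_new)
```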
Library Dependencies
This open-source Python library depends heavily on three core scientific-computing Python libraries, namely NumPy, SciPy, and Scikit-Learn. More specifically, for all optimizers the numpy.array data structure was chosen as the basic way to store and operate on populations (e.g., sampling, updating, indexing, and sorting), which leads to significant speedups. Sometimes Numba is used, where possible, to further accelerate the wall-clock time of large-scale black-box optimization. An obvious advantage of using NumPy as the core computing engine is that PyPop7 can be seamlessly integrated into the NumPy ecosystem, given that SciPy so far covers only a limited number of population-based BBOs.
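The following toy sketch (illustrative only, not taken from the library) shows the kind of vectorized population operations meant above, using only NumPy:

```python
import numpy as np

rng = np.random.default_rng(2022)
n_individuals, ndim_problem = 20, 1000

# sampling: draw the whole population in one vectorized call
population = rng.uniform(-5.0, 5.0, size=(n_individuals, ndim_problem))
# updating: perturb all individuals at once (a simple Gaussian mutation)
population += 0.1*rng.standard_normal((n_individuals, ndim_problem))
# fitness evaluation on the whole population (sphere function as a toy objective)
fitness = np.sum(np.square(population), axis=1)
# sorting + indexing: select the better half of the population as parents
parents = population[np.argsort(fitness)[:n_individuals//2]]
```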
For the PyPI installation of this Python library, setup.cfg is used, while for its development, requirements.txt is used.
Unified API
For PyPop7, we use the popular object-oriented programming (OOP) paradigm to structure all optimizers, which provides consistency, flexibility, and simplicity. We did not adopt the other popular paradigm, procedure-oriented programming. However, in future versions we may provide such an interface at the end-user level (rather than the developer level).
For all optimizers, the abstract class named Optimizer needs to be inherited in order to provide a unified API.
Initialization of Optimizer Options
To initialize optimizer options, the following __init__ function of Optimizer should be inherited:
```python
def __init__(self, problem, options):  # here all members will be inherited by any subclass of `Optimizer`
```
All exclusive members of each subclass are defined after inheriting the above function of Optimizer.
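A minimal sketch of this pattern is given below; the subclass name `MyOptimizer` and the member `n_individuals` are hypothetical, not part of the library:

```python
from pypop7.optimizers.core.optimizer import Optimizer


class MyOptimizer(Optimizer):  # hypothetical subclass, for illustration only
    def __init__(self, problem, options):
        Optimizer.__init__(self, problem, options)  # all common members are inherited here
        # exclusive members of this subclass are defined *after* the parent call
        self.n_individuals = options.get('n_individuals', 100)  # hypothetical population size
        self._n_generations = 0  # number of generations, as in the `PRS` example below
```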
Population Initialization
We separate the initialization of optimizer options from that of the population (a set of individuals) for better flexibility. To achieve this, the following initialize function should be modified:
```python
def initialize(self):  # for population initialization
    raise NotImplementedError  # need to be implemented in any subclass of `Optimizer`
```
Another goal of this design is to minimize the number of class members, making setup easier for end users, at the cost of slightly more variable bookkeeping for developers.
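Continuing the hypothetical `MyOptimizer` above, a sketch of `initialize` that samples the whole initial population uniformly within the search boundaries (the boundary and RNG members used here are the same ones used by the `PRS` example below):

```python
class MyOptimizer(Optimizer):  # continued from the sketch above
    def initialize(self):
        # sample the whole initial population uniformly within the search boundaries
        return self.rng_initialization.uniform(self.initial_lower_boundary, self.initial_upper_boundary,
                                               size=(self.n_individuals, self.ndim_problem))
```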
Computation in Each Generation
Update each generation (iteration) by modifying the following iterate function:
```python
def iterate(self):  # for one generation (iteration)
    raise NotImplementedError  # need to be implemented in any subclass of `Optimizer`
```
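For the hypothetical `MyOptimizer`, one generation could simply resample the population (PRS-style); real evolutionary optimizers would instead recombine and mutate `x` here:

```python
class MyOptimizer(Optimizer):  # continued from the sketch above
    def iterate(self, x=None):
        # PRS-style resampling of the whole population (illustrative only);
        # evolutionary optimizers would update `x` via recombination/mutation
        return self.rng_optimization.uniform(self.initial_lower_boundary, self.initial_upper_boundary,
                                             size=(self.n_individuals, self.ndim_problem))
```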
Control of the Entire Optimization Process
Control the entire search process by modifying the following optimize function:
```python
def optimize(self, fitness_function=None):  # entire optimization process
    return None  # `None` should be replaced in any subclass of `Optimizer`
```
Typically, common auxiliary tasks (e.g., printing verbose information, restarting) are executed inside this function.
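A hedged sketch of how `optimize` could wire the two stages together for the hypothetical `MyOptimizer`; the base-class helpers (`_evaluate_fitness`, `_check_terminations`, `Optimizer._collect`) are the same ones used by the `PRS` example below, while verbose printing, fitness saving, and restarts are omitted here:

```python
class MyOptimizer(Optimizer):  # continued from the sketch above
    def optimize(self, fitness_function=None, args=None):
        fitness = Optimizer.optimize(self, fitness_function)  # common preparations in the base class
        x = self.initialize()  # population initialization
        y = [self._evaluate_fitness(xx, args) for xx in x]  # fitness of the initial population
        while not self._check_terminations():
            x = self.iterate(x)  # one generation (iteration)
            y = [self._evaluate_fitness(xx, args) for xx in x]  # fitness of each new individual
            self._n_generations += 1
        return Optimizer._collect(self, fitness)  # to collect all necessary output information
```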
Using Pure Random Search as an Example
In the following Python code, we use Pure Random Search (PRS), perhaps the simplest black-box optimizer, as an example.
```python
import numpy as np

from pypop7.optimizers.core.optimizer import Optimizer  # base class of all black-box optimizers


class PRS(Optimizer):
    """Pure Random Search (PRS).

    .. note:: `PRS` is one of the *simplest* and *earliest* black-box optimizers, dating back to at least
       `1950s <https://pubsonline.informs.org/doi/abs/10.1287/opre.6.2.244>`_. Here we include it mainly
       for *benchmarking* purpose. As pointed out in `Probabilistic Machine Learning
       <https://probml.github.io/pml-book/book2.html>`_, *this should always be tried as a baseline*.

    Parameters
    ----------
    problem : dict
              problem arguments with the following common settings (`keys`):
                * 'fitness_function' - objective function to be **minimized** (`func`),
                * 'ndim_problem'     - number of dimensionality (`int`),
                * 'upper_boundary'   - upper boundary of search range (`array_like`),
                * 'lower_boundary'   - lower boundary of search range (`array_like`).
    options : dict
              optimizer options with the following common settings (`keys`):
                * 'max_function_evaluations' - maximum of function evaluations (`int`, default: `np.inf`),
                * 'max_runtime'              - maximal runtime to be allowed (`float`, default: `np.inf`),
                * 'seed_rng'                 - seed for random number generation needed to be *explicitly* set (`int`);
              and with the following particular setting (`key`):
                * 'x' - initial (starting) point (`array_like`).

    Attributes
    ----------
    x : `array_like`
        initial (starting) point.

    Examples
    --------
    Use the `PRS` optimizer to minimize the well-known test function
    `Rosenbrock <http://en.wikipedia.org/wiki/Rosenbrock_function>`_:

    .. code-block:: python
       :linenos:

       >>> import numpy
       >>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
       >>> from pypop7.optimizers.rs.prs import PRS
       >>> problem = {'fitness_function': rosenbrock,  # define problem arguments
       ...            'ndim_problem': 2,
       ...            'lower_boundary': -5.0*numpy.ones((2,)),
       ...            'upper_boundary': 5.0*numpy.ones((2,))}
       >>> options = {'max_function_evaluations': 5000,  # set optimizer options
       ...            'seed_rng': 2022}
       >>> prs = PRS(problem, options)  # initialize the optimizer class
       >>> results = prs.optimize()  # run the optimization process
       >>> print(results)

    For its correctness checking of coding, refer to `this code-based repeatability report
    <https://tinyurl.com/mrx2kffy>`_ for more details.

    References
    ----------
    Bergstra, J. and Bengio, Y., 2012.
    Random search for hyper-parameter optimization.
    Journal of Machine Learning Research, 13(2).
    https://www.jmlr.org/papers/v13/bergstra12a.html

    Schmidhuber, J., Hochreiter, S. and Bengio, Y., 2001.
    Evaluating benchmark problems by random guessing.
    A Field Guide to Dynamical Recurrent Networks, pp.231-235.
    https://ml.jku.at/publications/older/ch9.pdf

    Brooks, S.H., 1958.
    A discussion of random methods for seeking maxima.
    Operations Research, 6(2), pp.244-251.
    https://pubsonline.informs.org/doi/abs/10.1287/opre.6.2.244
    """
    def __init__(self, problem, options):
        """Initialize the class with two inputs (problem arguments and optimizer options)."""
        Optimizer.__init__(self, problem, options)
        self.x = options.get('x')  # initial (starting) point
        self.verbose = options.get('verbose', 1000)
        self._n_generations = 0  # number of generations

    def _sample(self, rng):
        x = rng.uniform(self.initial_lower_boundary, self.initial_upper_boundary)
        return x

    def initialize(self):
        """Only for the initialization stage."""
        if self.x is None:
            x = self._sample(self.rng_initialization)
        else:
            x = np.copy(self.x)
        assert len(x) == self.ndim_problem
        return x

    def iterate(self):
        """Only for the iteration stage."""
        return self._sample(self.rng_optimization)

    def _print_verbose_info(self, fitness, y):
        """Save fitness and control console verbose information."""
        if self.saving_fitness:
            if not np.isscalar(y):
                fitness.extend(y)
            else:
                fitness.append(y)
        if self.verbose and ((not self._n_generations % self.verbose) or (self.termination_signal > 0)):
            info = '  * Generation {:d}: best_so_far_y {:7.5e}, min(y) {:7.5e} & Evaluations {:d}'
            print(info.format(self._n_generations, self.best_so_far_y, np.min(y), self.n_function_evaluations))

    def _collect(self, fitness, y=None):
        """Collect necessary output information."""
        if y is not None:
            self._print_verbose_info(fitness, y)
        results = Optimizer._collect(self, fitness)
        results['_n_generations'] = self._n_generations
        return results

    def optimize(self, fitness_function=None, args=None):  # for all iterations (generations)
        """For the entire optimization/evolution stage: initialization + iteration."""
        fitness = Optimizer.optimize(self, fitness_function)
        x = self.initialize()  # population initialization
        y = self._evaluate_fitness(x, args)  # to evaluate fitness of starting point
        while not self._check_terminations():
            self._print_verbose_info(fitness, y)  # to save fitness and control console verbose information
            x = self.iterate()
            y = self._evaluate_fitness(x, args)  # to evaluate each new point
            self._n_generations += 1
        results = self._collect(fitness, y)  # to collect all necessary output information
        return results
```
We have decided to adopt an active development/maintenance mode: once a new black-box optimizer is added or a severe bug is fixed, a new PyPI version will be released soon afterwards.
Repeatability Code/Reports
| Optimizer | Repeatability Code | Generated Figure(s)/Data |
|---|---|---|
| MMES | | |
| FCMAES | | |
| LMMAES | | |
| LMCMA | | |
| LMCMAES | | |
| RMES | | |
| R1ES | | |
| VKDCMA | | |
| VDCMA | | |
| CCMAES2016 | | |
| OPOA2015 | | |
| OPOA2010 | | |
| CCMAES2009 | | |
| OPOC2009 | | |
| OPOC2006 | | |
| SEPCMAES | | |
| DDCMA | | |
| MAES | | |
| FMAES | | |
| CMAES | | |
| SAMAES | | |
| SAES | | |
| CSAES | | |
| DSAES | | |
| SSAES | | |
| RES | | |
| R1NES | | |
| SNES | | |
| XNES | | |
| ENES | | |
| ONES | | |
| SGES | | |
| RPEDA | | |
| UMDA | | |
| AEMNA | | |
| EMNA | | |
| DCEM | | |
| DSCEM | | |
| MRAS | | |
| SCEM | | |
| SHADE | | |
| JADE | | |
| TDE | | |
| CDE | | |
| CCPSO2 | | |
| IPSO | | |
| CLPSO | | |
| CPSO | | |
| SPSOL | | |
| SPSO | | |
| HCC | N/A | N/A |
| COCMA | N/A | N/A |
| COEA | | |
| COSYNE | | |
| ESA | | |
| CSA | | |
| NSA | N/A | N/A |
| ASGA | | |
| GL25 | | |
| G3PCX | | |
| GENITOR | N/A | N/A |
| LEP | | |
| FEP | | |
| CEP | | |
| POWELL | | |
| GPS | N/A | N/A |
| NM | | |
| HJ | | |
| CS | N/A | N/A |
| BES | | |
| GS | | |
| SRS | N/A | N/A |
| ARHC | | |
| RHC | | |
| PRS | | |
Python IDEs for Development
Although other Python IDEs (e.g., Spyder, Visual Studio) can be used for development, we currently use mainly PyCharm Community Edition together with Anaconda to develop this open-source library. We are very grateful to JetBrains and Anaconda for providing these two free development tools. Note that we do not rule out any other development choices.