Algorithms in PyGMO are objects, constructed and then used to optimize a problem via their evolve method. Users can implement their own algorithm in Python (in which case it needs to derive from PyGMO.algorithm.base); see the Adding a new algorithm tutorial. We also provide a number of algorithms that are considered useful for general purposes. Each algorithm can be associated only with problems of certain types: (Continuous, Integer or Mixed Integer)-(Constrained, Unconstrained)-(Single, Multi-objective).
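The construct-then-evolve pattern can be illustrated with a self-contained sketch. The class below is a hypothetical user algorithm, not part of PyGMO; with PyGMO installed one would derive from PyGMO.algorithm.base and receive a PyGMO population object instead of a plain list.

```python
import random

def objfun(x):
    # stand-in for a problem's objective function (sphere function)
    return sum(xi * xi for xi in x)

class MyRandomSearch:
    """Hypothetical user algorithm following PyGMO's construct-then-evolve pattern."""

    def __init__(self, iters=100):
        self.iters = iters  # algorithm parameters are fixed at construction time

    def evolve(self, pop):
        # pop: a list of decision vectors; returns the evolved population
        best = min(pop, key=objfun)
        for _ in range(self.iters):
            cand = [random.uniform(-5, 5) for _ in best]
            if objfun(cand) < objfun(best):
                best = cand
        return [best] + pop[1:]
```

Usage mirrors PyGMO's: `algo = MyRandomSearch(iters=200)` then `pop = algo.evolve(pop)`.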
Common Name | Name in PyGMO | Type | Comments |
---|---|---|---|
Differential Evolution (DE) | PyGMO.algorithm.de | C-U-S | The original algorithm |
Self-adaptive DE (jDE) | PyGMO.algorithm.jde | C-U-S | Self-adaptive F, CR |
DE with p-best crossover (mde_pbx) | PyGMO.algorithm.mde_pbx | C-U-S | Self-adaptive F, CR |
Differential Evolution (DE) | PyGMO.algorithm.de_1220 | C-U-S | Our own brew: self-adaptive F, CR and variants |
Particle Swarm Optimization (PSO) | PyGMO.algorithm.pso | C-U-S | The PSO algorithm (canonical, with constriction factor, FIPS, etc.) |
Particle Swarm Optimization (PSO) | PyGMO.algorithm.pso_gen | C-U-S | Generational (also problems deriving from base_stochastic) |
Simple Genetic Algorithm GRAY (SGA_GRAY) | PyGMO.algorithm.sga_gray | C-U-S | Simple genetic algorithm with gray binary encoding |
Simple Genetic Algorithm (SGA) | PyGMO.algorithm.sga | MI-U-S | |
Vector Evaluated Genetic Algorithm (VEGA) | PyGMO.algorithm.vega | MI-U-M | VEGA algorithm, multi-objective extension of SGA |
(N+1)-EA Evol. Algorithm (SEA) | PyGMO.algorithm.sea | I-U-M | The multi-objective extension uses the crowding distance operator |
Non-dominated Sorting GA (NSGA2) | PyGMO.algorithm.nsga_II | C-U-M | NSGA-II |
S-Metric Selection EMOA (SMS-EMOA) | PyGMO.algorithm.sms_emoa | C-U-M | Relies on the hypervolume computation. |
Corana’s Simulated Annealing (SA) | PyGMO.algorithm.sa_corana | C-U-S | |
Parallel Decomposition (PADE) | PyGMO.algorithm.pade | C-U-M | Parallel Decomposition (based on the MOEA/D framework) |
Non-dominated Sorting PSO (NSPSO) | PyGMO.algorithm.nspso | C-U-M | Multi-Objective PSO |
Strength Pareto EA 2 (SPEA2) | PyGMO.algorithm.spea2 | C-U-M | Strength Pareto Evolutionary Algorithm 2 |
Artificial Bee Colony (ABC) | PyGMO.algorithm.bee_colony | C-U-S | |
Improved Harmony Search (IHS) | PyGMO.algorithm.ihs | MI-U-M | Integer and Multiobjective not tested yet |
Monte Carlo Search (MC) | PyGMO.algorithm.monte_carlo | MI-C-S | |
Monte Carlo Search (MC) | PyGMO.algorithm.py_example | MI-C-S | Written directly in Python |
Covariance Matrix Adaptation-ES | PyGMO.algorithm.py_cmaes | C-U-S | Written directly in Python |
Covariance Matrix Adaptation-ES | PyGMO.algorithm.cmaes | C-U-S | |
Common Name | Name in PyGMO | Type | Comments |
---|---|---|---|
Monotonic Basin Hopping (MBH) | PyGMO.algorithm.mbh | N/A | |
Multistart (MS) | PyGMO.algorithm.ms | N/A | |
Augmented Lagrangian (AL) | PyGMO.algorithm.nlopt_auglag | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
Augmented Lagrangian (AL) | PyGMO.algorithm.nlopt_auglag_eq | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
Cstrs co-evolution | PyGMO.algorithm.cstrs_co_evolution | C-C-S | Minimization assumed |
Cstrs Self-Adaptive | PyGMO.algorithm.cstrs_self_adaptive | C-C-S | Minimization assumed |
Cstrs Immune System | PyGMO.algorithm.cstrs_immune_system | C-C-S | Immune system constraints handling technique |
Cstrs CORE | PyGMO.algorithm.cstrs_core | C-C-S | CORE constraints handling technique (repairing technique) |
Common Name | Name in PyGMO | Type | Comments |
---|---|---|---|
Compass Search (CS) | PyGMO.algorithm.cs | C-U-S | |
Nelder-Mead simplex | PyGMO.algorithm.scipy_fmin | C-U-S | SciPy required. Minimization assumed |
Nelder-Mead simplex | PyGMO.algorithm.gsl_nm | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
Nelder-Mead simplex variant 2 | PyGMO.algorithm.gsl_nm2 | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
Nelder-Mead simplex variant 2r | PyGMO.algorithm.gsl_nm2rand | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
Subplex (a Nelder-Mead variant) | PyGMO.algorithm.nlopt_sbplx | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
L-BFGS-B | PyGMO.algorithm.scipy_l_bfgs_b | C-U-S | SciPy required. Minimization assumed |
BFGS | PyGMO.algorithm.gsl_bfgs2 | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
BFGS 2 | PyGMO.algorithm.gsl_bfgs | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
Sequential Least SQuares Prog. | PyGMO.algorithm.scipy_slsqp | C-C-S | SciPy required. Minimization assumed |
Sequential Least SQuares Prog. | PyGMO.algorithm.nlopt_slsqp | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
Truncated Newton Method | PyGMO.algorithm.scipy_tnc | C-U-S | SciPy required. Minimization assumed |
Conjugate Gradient (fr) | PyGMO.algorithm.gsl_fr | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
Conjugate Gradient (pr) | PyGMO.algorithm.gsl_pr | C-U-S | Requires PyGMO to be compiled with GSL option. Minimization assumed |
COBYLA | PyGMO.algorithm.scipy_cobyla | C-C-S | SciPy required. Minimization assumed |
COBYLA | PyGMO.algorithm.nlopt_cobyla | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
BOBYQA | PyGMO.algorithm.nlopt_bobyqa | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
Method of Moving Asymptotes | PyGMO.algorithm.nlopt_mma | C-C-S | Requires PyGMO to be compiled with nlopt option. Minimization assumed |
SNOPT | PyGMO.algorithm.snopt | C-C-S | Requires PyGMO to be compiled with snopt option. Minimization assumed |
IPOPT | PyGMO.algorithm.ipopt | C-C-S | Requires PyGMO to be compiled with ipopt option. Minimization assumed |
All PyGMO algorithms derive from this class
Returns the evolved population
Constructs a Differential Evolution algorithm:
USAGE: algorithm.de(gen=1, f=0.5, cr=0.9, variant=2, ftol=1e-6, xtol=1e-6, screen_output = False)
gen: number of generations
f: weighting factor in [0,1] (if -1 self-adaptation is used)
cr: crossover in [0,1] (if -1 self-adaptation is used)
ftol: stop criteria on f
xtol: stop criteria on x
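A minimal pure-Python sketch of one DE step with the classic rand/1/bin strategy, to illustrate the roles of f (differential weight) and cr (crossover rate). Names and structure are illustrative, not PyGMO's internals.

```python
import random

def de_rand_1_bin(pop, i, f=0.5, cr=0.9):
    """Build one trial vector for target i using DE rand/1/bin."""
    dim = len(pop[i])
    # pick three distinct population members, all different from the target i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    # mutation: base vector plus weighted difference of two others
    mutant = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
    # binomial crossover: each component comes from the mutant with probability cr;
    # j_rand guarantees at least one mutant component survives
    j_rand = random.randrange(dim)
    return [mutant[d] if (random.random() < cr or d == j_rand) else pop[i][d]
            for d in range(dim)]
```

In a full DE loop the trial vector would replace pop[i] whenever its objective value is better.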
Constructs a jDE algorithm (self-adaptive DE)
REF: “Self-adaptive differential evolution algorithm in constrained real-parameter optimization” J Brest, V Zumer, MS Maucec - Evolutionary Computation, 2006. http://dsp.szu.edu.cn/DSP2006/research/publication/yan/WebEdit/UploadFile/Self-adaptive%20Differential%20Evolution%20Algorithm%20for%20Constrained%20Real-Parameter%20Optimization.pdf
USAGE: algorithm.jde(gen=100, variant=2, variant_adptv=1, ftol=1e-6, xtol=1e-6, memory = False, screen_output = False)
gen: number of generations
variant: algorithmic variant to use, one of: 1. best/1/exp 2. rand/1/exp 3. rand-to-best/1/exp 4. best/2/exp 5. rand/2/exp 6. best/1/bin 7. rand/1/bin 8. rand-to-best/1/bin 9. best/2/bin 10. rand/2/bin 11. best/3/exp 12. best/3/bin 13. rand/3/exp 14. rand/3/bin 15. rand-to-current/2/exp 16. rand-to-current/2/bin 17. rand-to-best-and-current/2/exp 18. rand-to-best-and-current/2/bin
ftol: stop criteria on f
xtol: stop criteria on x
memory: if True the algorithm internal state is saved and used for the next call
screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)
Constructs a mde_pbx algorithm (self-adaptive DE)
REF: “An Adaptive Differential Evolution Algorithm With Novel Mutation and Crossover Strategies for Global Numerical Optimization” - IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS - PART B: CYBERNETICS, VOL. 42, NO. 2, APRIL 2012
USAGE: algorithm.mde_pbx(gen=100, qperc=0.15, nexp=1.5, ftol=1e-6, xtol=1e-6, screen_output = False)
Constructs a Differential Evolution algorithm (our own brew). Self adaptation on F, CR and mutation variant.:
USAGE: algorithm.de_1220(gen=100, variant_adptv=1, allowed_variants = [i for i in range(1,19)], memory = False, ftol=1e-6, xtol=1e-6, screen_output = False)
gen: number of generations
variant: algorithmic variant to use, one of: 1. best/1/exp 2. rand/1/exp 3. rand-to-best/1/exp 4. best/2/exp 5. rand/2/exp 6. best/1/bin 7. rand/1/bin 8. rand-to-best/1/bin 9. best/2/bin 10. rand/2/bin 11. best/3/exp 12. best/3/bin 13. rand/3/exp 14. rand/3/bin 15. rand-to-current/2/exp 16. rand-to-current/2/bin 17. rand-to-best-and-current/2/exp 18. rand-to-best-and-current/2/bin
ftol: stop criteria on f
xtol: stop criteria on x
memory: if True the algorithm internal state is saved and used for the next call
Constructs a Particle Swarm Optimization algorithm (steady-state). The position update is applied immediately after the velocity update.
REF (for variants 5-6): http://cswww.essex.ac.uk/staff/rpoli/papers/PoliKennedyBlackwellSI2007.pdf
REF (for variants 1-4): Kennedy, J.; Eberhart, R. (1995). “Particle Swarm Optimization”. Proceedings of IEEE International Conference on Neural Networks. IV. pp. 1942-1948.
USAGE: algorithm.pso(gen=1, omega = 0.7298, eta1 = 2.05, eta2 = 2.05, vcoeff = 0.5, variant = 5, neighb_type = 2, neighb_param = 4)
gen: number of generations
omega: constriction factor (or particle inertia weight) in [0,1]
eta1: Cognitive component in [0,4]
eta2: Social component in [0,4]
vcoeff: Maximum velocity coefficient (w.r.t. the box-bounds width) in [0,1]
variant: algorithmic variant to use, one of:
1. PSO canonical (with inertia weight)
2. PSO canonical (with inertia weight and equal random weights of social and cognitive components)
3. PSO variant (with inertia weight, same random number for all components)
4. PSO variant (with inertia weight, same random number for all components and equal weights of social and cognitive components)
5. PSO canonical (with constriction factor)
6. Fully Informed Particle Swarm (FIPS)
neighb_type: swarm topology, one of: 1. gbest, 2. lbest, 3. Von Neumann, 4. Randomly varying
neighb_param: if the lbest topology is selected (neighb_type=2), it represents each particle’s indegree (also outdegree) in the swarm topology. Particles have neighbours up to a radius of k = neighb_param / 2 in the ring. If the Randomly-varying neighbourhood topology is selected (neighb_type=4), neighb_param represents each particle’s maximum outdegree in the swarm topology. The minimum outdegree is 1 (the particle always connects back to itself).
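A sketch of the canonical constriction-factor update (variant 5) for a single particle, in pure Python. Here omega plays the role of the constriction factor; all names are illustrative, not PyGMO's internals.

```python
import random

def pso_step(x, v, pbest, gbest, omega=0.7298, eta1=2.05, eta2=2.05, vmax=1.0):
    """One steady-state PSO update for a single particle."""
    new_v, new_x = [], []
    for d in range(len(x)):
        # velocity: constriction factor applied to inertia + cognitive + social pulls
        vd = omega * (v[d]
                      + eta1 * random.random() * (pbest[d] - x[d])
                      + eta2 * random.random() * (gbest[d] - x[d]))
        vd = max(-vmax, min(vmax, vd))  # clamp to the maximum velocity
        new_v.append(vd)
        # steady-state variant: position updated immediately after the velocity
        new_x.append(x[d] + vd)
    return new_x, new_v
```

In the generational variant (pso_gen) the same update is instead applied to the whole swarm only after every particle's new velocity has been computed.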
Constructs a Particle Swarm Optimization (generational). The position update is applied only at the end of an entire loop over the population (swarm). Use this version for stochastic problems.
USAGE: algorithm.pso_gen(gen=1, omega = 0.7298, eta1 = 2.05, eta2 = 2.05, vcoeff = 0.5, variant = 5, neighb_type = 2, neighb_param = 4)
gen: number of generations
omega: constriction factor (or particle inertia weight) in [0,1]
eta1: Cognitive component in [0,4]
eta2: Social component in [0,4]
vcoeff: Maximum velocity coefficient (w.r.t. the box-bounds width) in [0,1]
variant: algorithmic variant to use, one of:
1. PSO canonical (with inertia weight)
2. PSO canonical (with inertia weight and equal random weights of social and cognitive components)
3. PSO variant (with inertia weight, same random number for all components)
4. PSO variant (with inertia weight, same random number for all components and equal weights of social and cognitive components)
5. PSO canonical (with constriction factor)
6. Fully Informed Particle Swarm (FIPS)
neighb_type: swarm topology, one of: 1. gbest, 2. lbest, 3. Von Neumann, 4. Randomly varying
neighb_param: if the lbest topology is selected (neighb_type=2), it represents each particle’s indegree (also outdegree) in the swarm topology. Particles have neighbours up to a radius of k = neighb_param / 2 in the ring. If the Randomly-varying neighbourhood topology is selected (neighb_type=4), neighb_param represents each particle’s maximum outdegree in the swarm topology. The minimum outdegree is 1 (the particle always connects back to itself).
Constructs a simple (N+1)-EA: A Simple Evolutionary Algorithm
USAGE: algorithm.sea(gen = 1)
SEE: Oliveto, Pietro S., Jun He, and Xin Yao. “Time complexity of evolutionary algorithms for combinatorial optimization: A decade of results.” International Journal of Automation and Computing 4.3 (2007): 281-293.
Constructs a Simple Genetic Algorithm (generational)
USAGE: algorithm.sga(self, gen=1, cr=.95, m=.02, elitism=1, mutation=sga.mutation.GAUSSIAN, width = 0.1, selection=sga.selection.ROULETTE, crossover=sga.crossover.EXPONENTIAL)
gen: number of generations
cr: crossover factor in [0,1]
m: mutation probability (for each component) [0,1]
elitism: number of generations after which the best is reinserted
mutation: mutation type (one of [RANDOM, GAUSSIAN])
width: mutation width (in the case of a GAUSSIAN bell, this is the std normalized with the box-bounds width)
selection: selection strategy (one of [ROULETTE, BEST20])
crossover: crossover strategy (one of [BINOMIAL, EXPONENTIAL])
Random mutation (width is set by the width argument in PyGMO.algorithm.sga)
Gaussian mutation (bell shape standard deviation is set by the width argument in PyGMO.algorithm.sga multiplied by the box-bounds width)
Roulette selection mechanism
The best 20% of individuals are selected over and over again
Binomial crossover
Exponential crossover
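As an illustration of the ROULETTE selection strategy for a minimisation problem, a pure-Python sketch; the fitness inversion used here is an assumption for the example, not necessarily PyGMO's exact scaling.

```python
import random

def roulette_select(fitnesses):
    """Pick one index; lower (better) fitness gets a larger slice of the wheel."""
    worst = max(fitnesses)
    # invert so that minimisation maps to proportional selection;
    # the small constant keeps the worst individual selectable
    weights = [worst - f + 1e-12 for f in fitnesses]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(fitnesses) - 1  # numerical safety net
```

Over many draws, individuals with better fitness are selected proportionally more often.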
Constructs a Vector evaluated genetic algorithm
USAGE: algorithm.vega(self, gen=1, cr=.95, m=.02, elitism=1, mutation=vega.mutation.GAUSSIAN, width = 0.1, crossover=vega.crossover.EXPONENTIAL)
gen: number of generations
cr: crossover factor in [0,1]
m: mutation probability (for each component) [0,1]
elitism: number of generation after which the best is reinserted
mutation: mutation type (one of [RANDOM, GAUSSIAN])
width: mutation width (in the case of a GAUSSIAN bell, this is the std normalized with the box-bounds width)
crossover: crossover strategy (one of [BINOMIAL, EXPONENTIAL])
Random mutation (width is set by the width argument in PyGMO.algorithm.vega)
Gaussian mutation (bell shape standard deviation is set by the width argument in PyGMO.algorithm.vega multiplied by the box-bounds width)
Binomial crossover
Exponential crossover
Constructs a Simple Genetic Algorithm with gray binary encoding (generational)
USAGE: algorithm.sga_gray(self, gen=1, cr=.95, m=.02, elitism=1, mutation=sga.mutation.UNIFORM, selection=sga.selection.ROULETTE, crossover=sga.crossover.SINGLE_POINT)
Uniform mutation
Roulette selection mechanism
The best 20% of individuals are selected over and over again
Single point crossover
Constructs a Non-dominated Sorting Genetic Algorithm (NSGA_II)
USAGE: algorithm.nsga_II(self, gen=100, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 10)
Constructs a S-Metric Selection Evolutionary Multiobjective Optimiser Algorithm (SMS-EMOA)
USAGE: algorithm.sms_emoa(self, gen=100, sel_m = 2, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 10)
Constructs a Parallel Decomposition Algorithm (PaDe).
For each element of the population a different single-objective problem is generated using a decomposition method. These single-objective problems are then solved in an island model. At the end of the evolution the population is formed by the best individual of each single-objective island. This algorithm, original to PaGMO, builds upon the MOEA/D framework.
USAGE: algorithm.pade(self, gen=10, max_parallelism = 1, decomposition = decompose.WEIGHTED, solver = jde(100), T = 8, weights = pade.RANDOM, z = None)
Random generation of the weight vector
Weight vectors are generated to equally divide the search space (requires a particular population size)
Weight vectors are generated using a low discrepancy sequence
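The decomposition step can be sketched as follows: a multi-objective fitness vector f is scalarised with a weight vector w, so each weight choice defines a distinct single-objective problem. The helper name is illustrative and this shows only the plain WEIGHTED method, not PaDe's other decomposition options.

```python
def weighted_decompose(f, w):
    """Scalarise objective vector f with weight vector w (weighted sum)."""
    assert abs(sum(w) - 1.0) < 1e-9, "weights are expected to sum to one"
    return sum(wi * fi for wi, fi in zip(w, f))
```

For example, with weights [0.5, 0.5] the two objectives are averaged; skewed weights steer the corresponding island toward one end of the Pareto front.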
Constructs a Multi Objective PSO
Individuals with better crowding distance are preferred
Individuals with better niche count are preferred
The MaxMin method is used to obtain the non-dominated set and to maintain diversity
Constructs a Strength Pareto Evolutionary Algorithm 2
USAGE: algorithm.spea2(gen=100, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 50, archive_size = -1)
Constructs Corana’s Simulated Annealing
USAGE: algorithm.sa_corana(iter = 10000, Ts = 10, Tf = .1, steps = 1, bin_size = 20, range = 1)
NOTE: as this version of simulated annealing loops through the chromosome, the iter number needs to be selected large enough to allow the temperature schedule to actually make sense. For example, if your problem has D dimensions, then in order to have at least N temperature adjustments (from Ts to Tf) one should select iter = D * N * steps * bin_size.
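A quick worked instance of the sizing rule above; the particular values are illustrative, not defaults.

```python
# Sizing iter for sa_corana so the temperature schedule has room to act.
D = 10         # problem dimension
N = 20         # desired number of temperature adjustments from Ts to Tf
steps = 1      # steps parameter of sa_corana
bin_size = 20  # bin_size parameter of sa_corana

iters = D * N * steps * bin_size  # minimum iter allowing N adjustments
```

Here one would pass iter = 4000 (or more) to algorithm.sa_corana.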
Constructs an Artificial Bee Colony Algorithm
USAGE: algorithm.bee_colony(gen = 100, limit = 20)
gen: number of generations (at each generation 2*NP function evaluations are made, where NP is the population size)
limit: number of tries after which a source of food is dropped if not improved
Constructs a Multistart Algorithm
USAGE: algorithm.ms(algorithm = algorithm.de(), iter = 1)
NOTE: starting from pop1, at each iteration a random pop2 is evolved with the selected algorithm and its final best replaces the worst of pop1
When True, the algorithm produces output on screen
Algorithm to be multistarted
Constructs a Monotonic Basin Hopping Algorithm (generalized to accept any algorithm)
USAGE: algorithm.mbh(algorithm = algorithm.cs(), stop = 5, perturb = 5e-2);
NOTE: Starting from pop, algorithm is applied to the perturbed pop returning pop2. If pop2 is better than pop then pop=pop2 and a counter is reset to zero. If pop2 is not better the counter is incremented. If the counter is larger than stop, optimization is terminated
algorithm: ‘local’ optimiser
stop: number of no improvements before halting the optimization
perturb: perturbation width (if a list is provided, it has to have the same dimension as the problem mbh will be applied to)
screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)
When True, the algorithm produces output on screen
Algorithm to perform mbh ‘local’ search
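The loop described in the NOTE can be sketched in pure Python. The toy "local optimiser" below (small random improving moves) and all names are illustrative stand-ins, not PyGMO internals; in PyGMO the inner algorithm is any algorithm object, e.g. algorithm.cs().

```python
import random

def mbh(objfun, x0, stop=5, perturb=0.05, local_iters=50):
    """Monotonic Basin Hopping around a toy local search."""
    def local_opt(x):
        # stand-in local optimiser: keep random small improving moves
        best = list(x)
        for _ in range(local_iters):
            cand = [c + random.gauss(0, 0.01) for c in best]
            if objfun(cand) < objfun(best):
                best = cand
        return best

    best = local_opt(x0)
    fails = 0
    while fails < stop:
        # perturb the incumbent, then re-run the local optimiser
        pert = [c + random.uniform(-perturb, perturb) for c in best]
        cand = local_opt(pert)
        if objfun(cand) < objfun(best):
            best, fails = cand, 0   # improvement: accept and reset the counter
        else:
            fails += 1              # no improvement: count toward 'stop'
    return best
```

The monotonic character is visible in the acceptance rule: only strictly better basins replace the incumbent.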
Constructs a co-evolution adaptive penalty algorithm for constrained optimization.
USAGE: algorithm.cstrs_co_evolution(original_algo = algorithm.jde(), original_algo_penalties = algorithm.jde(), pop_penalties_size = 30, gen = 20, method = cstrs_co_evolution.method.SIMPLE, pen_lower_bound = 0, pen_upper_bound = 100000, f_tol = 1e-15, x_tol = 1e-15)
original_algo: optimizer to use as ‘original’ optimization method
original_algo_penalties: optimizer to use as ‘original’ optimization method for the population encoding the penalty coefficients
pop_penalties_size: size of the population encoding the penalty parameters.
gen: number of generations.
method: three possibilities are available: SIMPLE, SPLIT_NEQ_EQ and SPLIT_CONSTRAINTS. SIMPLE is the original version of the Coello/He implementation. SPLIT_NEQ_EQ splits the equality and inequality constraints into two different sets of penalty weights, containing respectively the inequality and equality weights. SPLIT_CONSTRAINTS splits the constraints into M sets of weights, where M is the number of constraints.
pen_lower_bound: the lower boundary used for penalty.
pen_upper_bound: the upper boundary used for penalty.
f_tol: 1e-15 by default. The stopping criterion on the f tolerance.
x_tol: 1e-15 by default. The stopping criterion on the x tolerance.
Constructs an immune system algorithm for constrained optimization.
USAGE: algorithm.cstrs_immune_system(algorithm = algorithm.jde(), algorithm_immune = algorithm.jde(), gen = 1, select_method = cstrs_immune_system.select_method.BEST_ANTIBODY, inject_method = cstrs_immune_system.inject_method.CHAMPION, distance_method = cstrs_immune_system.distance_method.EUCLIDEAN, phi = 0.5, gamma = 0.5, sigma = 1./3., ftol = 1e-15, xtol = 1e-15)
When True, the algorithm produces output on screen
Constructs the CORE (Constrained Optimization by Random Evolution) algorithm for constrained optimization (it belongs to the family of repairing techniques).
USAGE: algorithm.cstrs_core(algorithm = algorithm.jde(), repair_algorithm = algorithm.jde(), gen = 1, repair_frequency = 10, repair_ratio = 1., f_tol = 1e-15, x_tol = 1e-15)
When True, the algorithm produces output on screen
Constructs a Compass Search Algorithm
USAGE: algorithm.cs(max_eval = 1, stop_range = 0.01, start_range = 0.1, reduction_coeff = 0.5);
max_eval: maximum number of function evaluations
stop_range: when the range is reduced to a value smaller than stop_range, cs stops
start_range: starting range (non-dimensional w.r.t. ub-lb)
reduction_coeff: the range is multiplied by this coefficient whenever no improvement is found across one chromosome
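A pure-Python sketch of the compass search loop these parameters describe: poll each coordinate direction with the current range, shrink the range by reduction_coeff when no polled point improves, and stop when the range falls below stop_range or the evaluation budget is spent. Illustrative only, not PaGMO's implementation.

```python
def compass_search(objfun, x, start_range=0.1, stop_range=0.01,
                   reduction_coeff=0.5, max_eval=1000):
    """Derivative-free pattern search along the coordinate axes."""
    r, evals = start_range, 0
    fx = objfun(x)
    while r > stop_range and evals < max_eval:
        improved = False
        for d in range(len(x)):
            for step in (r, -r):        # poll +r and -r along coordinate d
                cand = list(x)
                cand[d] += step
                fc = objfun(cand)
                evals += 1
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            r *= reduction_coeff        # no polled point improved: shrink the range
    return x, fx
```

Note the method is deterministic given the starting point, which is one reason it pairs well with mbh as the ‘local’ optimiser.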
Constructs an Improved Harmony Search Algorithm
USAGE: algorithm.ihs(iter = 100, hmcr = 0.85, par_min = 0.35, par_max = 0.99, bw_min = 1E-5, bw_max = 1);
Constructs a Monte Carlo Algorithm
USAGE: algorithm.monte_carlo(iter = 10000)
Constructs a Monte-Carlo (random sampling) algorithm
USAGE: algorithm.py_example(iter = 10)
Constructs a Covariance Matrix Adaptation Evolution Strategy (Python)
USAGE: algorithm.py_cmaes(gen = 500, cc = -1, cs = -1, c1 = -1, cmu = -1, sigma0=0.5, ftol = 1e-6, xtol = 1e-6, memory = False, screen_output = False)
NOTE: In our variant of the algorithm, particle memory is used to extract the elite and reinsertion is made aggressively (getting rid of the worst individual). Also, the bounds of the problem are enforced, so as to allow the PaGMO machinery to work. Fine control on each iteration can be achieved by calling the algo with gen=1 (the algorithm state is stored, and cmaes will continue at the next call without initializing all its state again).
Constructs a Covariance Matrix Adaptation Evolution Strategy (C++)
USAGE: algorithm.cmaes(gen = 500, cc = -1, cs = -1, c1 = -1, cmu = -1, sigma0=0.5, ftol = 1e-6, xtol = 1e-6, memory = False, screen_output = False)
NOTE: In our variant of the algorithm, particle memory is used to extract the elite and reinsertion is made aggressively (getting rid of the worst individual). Also, the bounds of the problem are enforced, so as to allow the PaGMO machinery to work. Fine control on each iteration can be achieved by calling the algo with memory=True and gen=1.
Constructs a Nelder-Mead Simplex algorithm (SciPy)
USAGE: algorithm.scipy_fmin(maxiter=1, xtol=0.0001, ftol=0.0001, maxfun=None, full_output=0, disp=0, retall=0)
Constructs a L-BFGS-B algorithm (SciPy)
NOTE: gradient is numerically approximated
USAGE: algorithm.scipy_l_bfgs_b(maxfun = 15000, m = 10, factr = 10000000.0, pgtol = 1e-05, epsilon = 1e-08, screen_output = False):
maxfun: maximum number of function evaluations
m: maximum number of variable metric corrections used to define the limited memory matrix (the limited-memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it)
factr: the iteration stops when (f{k} - f{k+1}) / max{|f{k}|, |f{k+1}|, 1} <= factr*epsmch, where epsmch is the machine precision, which is automatically generated by the code. Typical values for factr: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy
pgtol: the iteration will stop when max{|proj g{i}|, i = 1, ..., n} <= pgtol, where proj g{i} is the ith component of the projected gradient
epsilon: step size used when numerically calculating the gradient
screen_output: Set to True to print iterations
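Several of the SciPy-wrapped methods above approximate the gradient numerically; a minimal sketch of the forward-difference scheme that the epsilon parameter controls (illustrative, not SciPy's actual implementation):

```python
def approx_gradient(objfun, x, epsilon=1e-6):
    """Forward-difference estimate of the gradient of objfun at x."""
    fx = objfun(x)
    grad = []
    for d in range(len(x)):
        xp = list(x)
        xp[d] += epsilon          # perturb one coordinate by epsilon
        grad.append((objfun(xp) - fx) / epsilon)
    return grad
```

Each gradient component costs one extra objective evaluation, which is why these wrappers count function evaluations rather than iterations in maxfun.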
Constructs a Sequential Least SQuares Programming algorithm (SciPy)
NOTE: gradient is numerically approximated
USAGE: algorithm.scipy_slsqp(max_iter = 100, acc = 1E-6, epsilon = 1.49e-08, screen_output = False)
Constructs a Truncated Newton Method algorithm (SciPy)
NOTE: gradient is numerically approximated
USAGE: algorithm.scipy_tnc(maxfun = 1, xtol = -1, ftol = -1, pgtol = 1e-05, epsilon = 1e-08, screen_output = False)
maxfun: maximum number of function evaluations
xtol: precision goal for the value of x in the stopping criterion (after applying x scaling factors). If xtol < 0.0, xtol is set to sqrt(machine_precision). Defaults to -1
ftol: precision goal for the value of f in the stopping criterion. If ftol < 0.0, ftol is set to 0.0. Defaults to -1
pgtol: precision goal for the value of the projected gradient in the stopping criterion (after applying x scaling factors). If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). Setting it to 0.0 is not recommended. Defaults to -1
epsilon: The stepsize in a finite difference approximation for the objfun
screen_output: Set to True to print iterations
When True, the algorithm produces output on screen
Constructs a Constrained Optimization BY Linear Approximation (COBYLA) algorithm (SciPy)
NOTE: equality constraints are transformed into two inequality constraints automatically
USAGE: algorithm.scipy_cobyla(max_fun = 1, rho_end = 1E-5, screen_output = False)
When True, the algorithm produces output on screen
Constructs a Constrained Optimization BY Linear Approximation (COBYLA) algorithm (NLOPT)
USAGE: algorithm.nlopt_cobyla(max_iter = 100, ftol = 1e-6, xtol = 1e-6)
Constructs a BOBYQA algorithm (Bound Optimization BY Quadratic Approximation) (NLOPT)
USAGE: algorithm.nlopt_bobyqa(max_iter = 100, ftol = 1e-6, xtol = 1e-6)
Constructs a Subplex (a variant of Nelder-Mead that uses Nelder-Mead on a sequence of subspaces) (NLOPT)
USAGE: algorithm.nlopt_sbplx(max_iter = 100, ftol = 1e-6, xtol = 1e-6)
Constructs a Method of Moving Asymptotes (MMA) algorithm (NLOPT)
USAGE: algorithm.nlopt_mma(max_iter = 100, ftol = 1e-6, xtol = 1e-6)
Constructs an Augmented Lagrangian Algorithm (NLOPT)
USAGE: algorithm.nlopt_auglag(aux_algo_id = 1, max_iter = 100, ftol = 1e-6, xtol = 1e-6, aux_max_iter = 100, aux_ftol = 1e-6, aux_xtol = 1e-6)
aux_algo_id: auxiliary (local) optimizer id (1: SBPLX, 2: COBYLA, 3: BOBYQA, 4: Low Storage BFGS)
max_iter: stop-criteria (number of iterations)
ftol: stop-criteria (absolute on the obj-fun)
xtol: stop-criteria (absolute on the chromosome)
aux_max_iter: stop-criteria for the auxiliary optimizer (number of iterations)
aux_ftol: stop-criteria for the auxiliary optimizer (absolute on the obj-fun)
aux_xtol: stop-criteria for the auxiliary optimizer (absolute on the chromosome)
Constructs an Augmented Lagrangian Algorithm (using penalties only for the equalities) (NLOPT)
USAGE: algorithm.nlopt_auglag_eq(aux_algo_id = 1, max_iter = 100, ftol = 1e-6, xtol = 1e-6, aux_max_iter = 100, aux_ftol = 1e-6, aux_xtol = 1e-6)
aux_algo_id: auxiliary (local) optimizer id (1: COBYLA, 2: MMA)
max_iter: stop-criteria (number of iterations)
ftol: stop-criteria (absolute on the obj-fun)
xtol: stop-criteria (absolute on the chromosome)
aux_max_iter: stop-criteria for the auxiliary optimizer (number of iterations)
aux_ftol: stop-criteria for the auxiliary optimizer (absolute on the obj-fun)
aux_xtol: stop-criteria for the auxiliary optimizer (absolute on the chromosome)
Constructs a Sequential Least SQuares Programming algorithm (SLSQP) algorithm (NLOPT)
USAGE: algorithm.nlopt_slsqp(max_iter = 100, ftol = 1e-6, xtol = 1e-6)
Constructs a Nelder-Mead algorithm (Variant2 + randomly oriented initial simplex) (GSL)
USAGE: algorithm.gsl_nm2rand(max_iter = 100, step_size = 1e-8, tol = 1e-8);
Constructs a Nelder-Mead algorithm (Variant2) (GSL)
USAGE: algorithm.gsl_nm2(max_iter = 100, step_size = 1e-8, tol = 1e-8)
Constructs a Nelder-Mead Algorithm (GSL)
USAGE: algorithm.gsl_nm(max_iter = 100, step_size = 1e-8, tol = 1e-8);
Constructs a Polak-Ribiere conjugate gradient (GSL)
USAGE: algorithm.gsl_pr(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001);
Constructs a Fletcher-Reeves conjugate gradient (GSL)
USAGE: algorithm.gsl_fr(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001)
Constructs a BFGS2 Algorithm (GSL)
NOTE: in GSL, BFGS2 is a more efficient version of BFGS
USAGE: algorithm.gsl_bfgs2(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001);
Constructs a BFGS Algorithm (GSL)
USAGE: algorithm.gsl_bfgs(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001)
Constructs a SNOPT Algorithm
USAGE: algorithm.snopt(major_iter = 100, feas_tol = 1e-6, opt_tol = 1e-6, screen_output = False);
When True, the algorithm produces output on screen
Constructs an Interior Point OPTimization Algorithm (IPOPT)
USAGE: algorithm.ipopt(major_iter = 100, constr_viol_tol = 1e-08, dual_inf_tol = 1e-08, compl_inf_tol = 1e-08, screen_output = False);
When True, the algorithm produces output on screen
Constructs a Self-Adaptive Fitness constraints handling Meta Algorithm.
The key idea of this constraint handling technique is to represent the constraint violation by a single infeasibility measure, and to adapt dynamically the penalization of infeasible solutions.
USAGE: algorithm.cstrs_self_adaptive(algorithm = algorithm.jde(), max_iter = 100, f_tol = 1e-15, x_tol = 1e-15);