The (Short) Python Global Optimizer Form Guide

Go to the original, much longer article for working links.
Caveats galore
For readers of my first article on this topic, which tried to get at optimizer performance in a different way, there are some familiar characters here and no huge surprises. I repeat the caveat that some techniques (e.g. surrogate methods) might be favored by the somewhat stylized objective functions I am using here.
The flexibility of some packages (e.g. parallelism, discrete optimization, nested domains, and so on) isn't allowed to shine. My intent isn't to downplay the cleverness of the design features of these libraries. As more real-world objective functions are introduced, it will be interesting to see how the Elo ratings change. I refer you to the current leaderboard, as I won't be attempting to keep the recommendations here up to date manually.
The new Facebook inclusions are doing pretty well — which is a relief given that I kind of trashed their signature time series package (ahem). They organized an open competition last year to improve nevergrad and IOHprofiler (link).
My new go-to is dlib. It uses the LIPO approach developed by Malherbe and Vayatis (paper), which is elegant. There is a nice video on the blog and a very clear explanation. Not only is the performance extremely good, but the run time is likely to be negligible for any real-world, expensive objective function. The library won't let you run problems in more than 34 dimensions, but that seems to be the only limitation.
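For instance, here is a minimal sketch of calling dlib's find_min_global (the toy objective, bounds, and budget of 80 evaluations are my own illustrative choices):

import dlib

def toy_objective(x0, x1):
    # dlib passes each coordinate as a separate scalar argument
    return (x0 - 0.25) ** 2 + (x1 + 0.5) ** 2

# lower bounds, upper bounds, and a budget of 80 objective calls
best_x, best_val = dlib.find_min_global(toy_objective, [-1.0, -1.0], [1.0, 1.0], 80)
print(best_x, best_val)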
Some optimizers, including SHGO, really struggle to limit themselves to a fixed number of evaluations, and that can hurt them in head-to-head matches when playing Black. Whether that penalty is considered fair depends on your needs. Read about the Elo methodology to go deeper into that remark.
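To make that concrete, here is a small sketch (the counting wrapper and toy objective are mine) that compares a requested budget to the realized number of calls for SciPy's SHGO; per SciPy's documentation, the maxfev option is only checked between global iterations, so the realized count can overshoot:

import numpy as np
from scipy.optimize import shgo

evaluation_count = 0

def counted_objective(x):
    # count every call so we can compare against the requested budget
    global evaluation_count
    evaluation_count += 1
    return float(np.sum(x ** 2))

# request a budget of 50 evaluations; SHGO will not necessarily respect it exactly
result = shgo(counted_objective, bounds=[(-1, 1)] * 3, options={'maxfev': 50})
print(evaluation_count, result.fun)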
Recommendation utilities are better than my advice
Let me mention that you can poke around in humpday/comparison for utilities that will generate recommended optimizers. These are only as good as the category Elo ratings behind them, which need more time to equilibrate. But say you want some quick advice on which optimizers perform best on 8-dimensional problems when given 130 function evaluations to play with, and perhaps you want to limit the total computation time of the optimizer (as distinct from the objective function) to 5 minutes:
from humpday.comparison.suggestions import suggest
from pprint import pprint

# 8-dimensional problems, 130 function evaluations, 5 minutes of optimizer time
pprint(suggest(n_dim=8, n_trials=130, n_seconds=300))
Or this:
import math
import time

from humpday import recommend

def my_objective(u):
    time.sleep(0.01)  # simulate a mildly expensive objective
    return u[0] * math.sin(u[1])

recommendations = recommend(my_objective, n_dim=21, n_trials=130)
Hopefully this speeds up your search or encourages you to try some packages you would otherwise overlook.
Oh and there are docs now!