Titles and Abstracts
Second Workshop on Nonlinear Optimization



Applying GPS to Optimize the Design of a Load Bearing Thermal Insulation System

Mark Abramson

Many engineering optimization problems involve a class of discrete variables, known as categorical variables, whose values must always come from a predefined unordered list. While they may take on numerical values, such values have no real significance. Examples of such variables might include type of material, color, or shape. Until recently, the only approaches for handling such variables within the scope of optimization have been exhaustive enumeration of the categorical variables, genetic algorithms, or parametric studies using only a few handpicked sets of values for the categorical variables. Audet and Dennis recently presented a class of generalized pattern search algorithms for solving these mixed variable optimization problems with bound constraints. We describe the algorithm, along with some improvements in the theory, and we present computational results for a problem in the design of load bearing thermal insulation systems.
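
As a rough illustration of the kind of mixed-variable pattern search the abstract describes (not the authors' algorithm or code), the sketch below polls the continuous coordinate directions on a mesh and the discrete neighbours of a single categorical variable; the objective, the neighbour structure, and all names are our own toy choices.

```python
import itertools
import numpy as np

def gps_mixed(f, x0, cat0, cat_neighbors, delta=1.0, delta_min=1e-6, max_iter=500):
    """Minimal GPS-style loop for one continuous block x and one categorical
    variable c: poll the +/- coordinate directions scaled by the mesh size
    delta, then the user-supplied discrete neighbours of c, and refine the
    mesh only after an unsuccessful poll."""
    x, c = np.asarray(x0, dtype=float), cat0
    fx = f(x, c)
    for _ in range(max_iter):
        improved = False
        # continuous poll on the current mesh
        for i, s in itertools.product(range(len(x)), (+1.0, -1.0)):
            y = x.copy()
            y[i] += s * delta
            fy = f(y, c)
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        # discrete poll over the neighbours of the categorical value
        if not improved:
            for cn in cat_neighbors(c):
                fy = f(x, cn)
                if fy < fx:
                    c, fx, improved = cn, fy, True
                    break
        if not improved:
            delta *= 0.5          # unsuccessful iteration: refine the mesh
            if delta < delta_min:
                break
    return x, c, fx

# toy usage: the categorical "material" rescales a quadratic bowl
materials = {"A": 1.0, "B": 0.3, "C": 2.0}
f = lambda x, c: materials[c] * (1.0 + float(np.sum((x - 1.0) ** 2)))
x_best, c_best, f_best = gps_mixed(f, [0.0, 0.0], "A",
                                   lambda c: [m for m in materials if m != c])
print(x_best, c_best, f_best)    # expect x near [1, 1] and material "B"
```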

Joint with: Charles Audet and John E. Dennis


Generalized Pattern Searches with Complete or Incomplete Derivative Information

Charles Audet

A common question asked by users of direct search algorithms is how to use derivative information at iterates where it is available without giving up the enhanced global optimization performance that can be built into these methods. This paper answers that question with respect to Generalized Pattern Search (GPS) methods for linearly constrained optimization. We show that the use of gradients significantly reduces the number of function evaluations needed for unsuccessful poll steps, the most expensive steps, which occur when the current mesh needs to be refined. Furthermore, we show that rather rough approximations to the gradient are sufficient to achieve these savings as well. We prove that using these less expensive poll steps does not weaken the provable convergence properties of the method, and illustrate these results on problems from the CUTE test set.

We observe that our approach, though less costly, can lead to a different solution with a higher objective function value.
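
To make the idea concrete, here is a small sketch, in our own hypothetical notation, of a poll step that uses a possibly rough gradient approximation to skip directions predicted to be uphill; it is meant only to illustrate the kind of saving described above, not the paper's algorithm.

```python
import numpy as np

def pruned_poll(f, x, fx, grad_approx, delta):
    """Poll the 2n coordinate directions at mesh size delta, skipping any
    direction d that a (possibly rough) gradient approximation g flags as
    strictly uphill, i.e. g.d > 0.  For a nonzero g this removes one of
    each +/- pair, roughly halving the cost of an unsuccessful poll."""
    g = grad_approx(x)            # may be an inexpensive approximation
    n = len(x)
    for i in range(n):
        for s in (+1.0, -1.0):
            d = np.zeros(n)
            d[i] = s
            if g is not None and np.dot(g, d) > 0.0:
                continue          # pruned: the gradient predicts ascent
            y = x + delta * d
            fy = f(y)
            if fy < fx:
                return y, fy, True     # successful poll step
    return x, fx, False                # unsuccessful poll: refine the mesh
```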

Joint with: Mark Abramson and John E. Dennis


Space Mapping: A Sensible Concept for Engineering Optimization Exploiting Surrogates

John W. Bandler

Engineers have been using optimization techniques for device, component and system modeling and CAD for decades. One of the frontiers that remain is the successful application of optimization procedures in problems for which direct application of traditional optimization approaches is not practical. The recent exploitation of iteratively refined surrogates of "fine", accurate or high-fidelity models, and the implementation of space mapping and related methodologies are attempts to address this issue.

Space mapping optimization intelligently links companion "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models of different complexities. Examples include full-wave electromagnetic simulations with empirical circuit-theory based simulations, or an engineering device under test coupled with a suitable simulation surrogate. The aim is to accelerate iterative design optimization. It is a simple CAD methodology, which closely follows the traditional experience and intuition of engineers, yet appears to be amenable to rigorous mathematical treatment. The exploitation of properly managed "space mapped" surrogates promises significant efficiency in engineering design.

The original concept of space mapping, first conceived by Bandler in 1993, and the aggressive space mapping approach to engineering design optimization will be discussed. A trust region aggressive space mapping optimization approach will be indicated. Artificial neural network (ANN) approaches vs. generalized space mapping for device modeling from electromagnetic data will be mentioned. We will also discuss new "neuro space mapping" approaches to device modeling and circuit optimization using ANN methodology. The new expanded space mapping design framework exploiting preassigned parameters will be discussed. We briefly review the new Implicit Space Mapping (ISM) concept in which we allow preassigned parameters, not used in optimization, to change in some components of the coarse model.

Reference is made to a report on recent neuroscientific findings that appear to support the notion that the Space Mapping process is a fundamental means used, consciously or subconsciously, by humans in making daily choices and in assessing social rewards. We have devised novel physical examples to illustrate the idea: the Cheese Cutting Problem, the Shoe Selection Problem, the Wedge Cutting Problem, etc.

We present new results applicable to RF, wireless and microwave circuit design, integrating electromagnetic simulations. The approach taken is, however, general and should be of wide interest.


Constrained Optimization Involving Expensive Function Evaluations: A Sequential Approach

Ruud Brekelmans

This paper presents a sequential method for constrained non-linear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods are not practical for such problems, as they obtain the required derivative information by finite differencing.

The algorithm first creates an experimental design, and the underlying functions are evaluated at the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations, combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e., the points are located in such a way that they yield a poor approximation of the actual model, then we evaluate a geometry-improving point instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on obtaining good solutions with few function evaluations, not necessarily on reaching high accuracy.
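
As a small illustration of one ingredient of such an approach, the sketch below fits a local linear model by weighted least squares, down-weighting evaluated points far from the current trust-region centre; the weighting scheme and all names are our own assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_linear_model(X, y, center, radius):
    """Fit y ~ a + b.x by weighted least squares, down-weighting evaluated
    points that lie far from the current trust-region centre."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X - center, axis=1)
    w = np.exp(-(d / radius) ** 2)            # illustrative weight choice
    A = np.hstack([np.ones((len(X), 1)), X])  # columns: intercept, x
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, np.sqrt(w) * y, rcond=None)
    return coef[0], coef[1:]                  # intercept a, gradient b

# usage: fit around a centre and predict within the trust region
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(12, 2))
y = 1.0 + X @ np.array([2.0, -3.0]) + 0.01 * rng.standard_normal(12)
a, b = weighted_linear_model(X, y, center=np.zeros(2), radius=0.5)
print(a, b)   # should be close to 1.0 and [2.0, -3.0]
```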


The Use of Surrogate-Based Optimization in Engine Automatic Calibration on Test Bench

Thierry Dalon

Control-unit calibration for modern internal combustion engines is currently facing a conflict between the additional effort needed to calibrate increasingly complex engine data with a growing number of parameters and extremely ambitious objectives regarding the time and resources available for calibration.

The first step towards mastering these requirements is to develop automatic test procedures on the engine test bench. The second step is to further integrate optimization technology into the automatic test environment in order to save more testing time; this is the so-called "online optimization" on the test bench. The optimization methods used have to cope with strong economic limitations. Robustness to measurement noise or failure, as well as to the test procedure, is also essential. Moreover, the methods have to be flexible in order to integrate experimental knowledge of the problem. The method investigated at Siemens VDO Automotive is based on generalized pattern search (GPS) with the use of surrogates in the search step. We present here the characteristics of the optimization problem within its practical environment and the implementation of GPS to solve it.


Analysis of Generalized Pattern Searches through Clarke's Calculus

John E. Dennis

We present a new convergence analysis for the Lewis and Torczon GPS class of pattern search methods for linearly constrained optimization. The analysis is motivated by the desire to understand the behavior of the algorithm under hypotheses more consistent with the properties satisfied in practice by a class of problems, discussed at various points in the paper, for which these methods are successful. Specifically, even if the objective function is discontinuous or extended valued, the methods find a limit point with some minimizing properties based on Clarke's calculus. Simple examples show that the strength of the optimality conditions at a limit point depends not only on the algorithm and the directions it uses, but also on the smoothness of the objective at the limit point in question. The contribution of this paper is to provide a simple convergence analysis that supplies detail about the relation of the optimality conditions to the smoothness properties of the objective and to the defining directions of the algorithm, and that gives older results as easy corollaries.
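
For reference, the central object in Clarke's calculus is the generalized directional derivative of a locally Lipschitz function, and a typical GPS-type conclusion (our paraphrase, not the paper's precise statement) asserts its nonnegativity at a refining limit point along the directions used infinitely often:

```latex
% Clarke generalized directional derivative of a locally Lipschitz f at x in direction d
f^{\circ}(x; d) \;=\; \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t},
\qquad\text{and, at a limit point } \hat{x}:\quad
f^{\circ}(\hat{x}; d) \;\ge\; 0 \ \text{ for each poll direction } d \text{ used infinitely often.}
```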

Joint with: Mark Abramson and Charles Audet


Constructing Approximations to the Efficient Set of Convex Multiobjective Problems

Joerg Fliege

In multicriteria optimization, several objective functions have to be minimized simultaneously. In stark contrast to standard single-criterion optimization, no single solution can adequately represent the whole set of optimal points. It is therefore of prime importance to gain as much information about the solution set as possible. To this end, we propose a new efficient method for approximating the solution set of a convex multiobjective programming problem. The method is based on a warm-start interior point algorithm for which we derive complexity results. Numerical results on bicriteria problems from power plant optimization and portfolio optimization show that the method is an order of magnitude faster than standard methods applied to the problems considered.
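
For contrast with the standard methods mentioned above, the toy sketch below traces an approximation of the efficient set of a small convex bicriteria problem by sweeping a weighted-sum scalarization and warm-starting each solve at the previous solution; this illustrates only the general warm-start idea, not the authors' interior-point method, and all names and the example problem are ours.

```python
import numpy as np
from scipy.optimize import minimize

# convex bicriteria toy problem: f1, f2 are squared distances to two anchors
a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))

def pareto_front(n_points=11):
    """Approximate the efficient set by sweeping the weight of a weighted-sum
    scalarization; each subproblem is warm-started at the previous solution,
    which is what makes neighbouring subproblems cheap to solve."""
    x = a.copy()                        # start at the minimizer of f1
    front = []
    for w in np.linspace(0.0, 1.0, n_points):
        res = minimize(lambda x: (1 - w) * f1(x) + w * f2(x), x)
        x = res.x                       # warm start for the next weight
        front.append((f1(x), f2(x)))
    return front

for point in pareto_front():
    print(point)
```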


Provably Convergent and Heuristic Algorithms for Surrogate-Based Optimization

Anthony A. Giunta

Practitioners of optimization in industrial settings often face difficulties such as expensive computational simulations and inaccurate or nonexistent gradients. Various approaches, both mathematically rigorous and heuristic, have appeared in the optimization literature over the past 30 years to address these issues. In general, the goal of these approaches is to optimize on some type of computationally inexpensive model or function that captures the general trends in the output data produced from the original expensive simulation. Thus, the computationally inexpensive model serves as a surrogate for the expensive simulation. We have focused on developing and implementing algorithms for two types of surrogate-based optimization (SBO): multifidelity optimization and metamodel optimization.

The multifidelity variant of SBO combines low fidelity models (e.g., simplified physics and/or coarse computational meshes) and high fidelity models (e.g., full physics and/or detailed computational meshes). This approach is popular in engineering design since it is often relatively easy to produce, for example, a finite element model with fewer elements than the original model. The metamodel variant of SBO employs statistical data sampling methods and surface fitting techniques to produce computationally inexpensive functions that capture the general trends of the high fidelity model. This metamodel approach also is popular in engineering design since it alleviates some sources of numerical noise generated in computational simulations that inhibit gradient-based optimization. While these two SBO approaches often are successful in producing improved engineering designs, most industrial and many academic applications of these approaches have not been mathematically rigorous. Hence, it is unknown whether an improved design is truly optimal.

We have developed provably convergent optimization algorithms for both multifidelity and metamodel SBO. In addition, for situations where the criteria for provable convergence cannot be met, these SBO algorithms reduce to heuristic methods that have consistently displayed desirable convergence characteristics. The mathematical underpinnings of these SBO algorithms will be presented, along with a collection of example problems and actual engineering optimization problems to which SBO methods have been applied. Extensions of our results will also be discussed.


Algorithmic Decisions in Implicit Filtering

C. T. Kelley

Implicit filtering is a sampling method that uses difference gradients in a projected BFGS iteration. The size of the difference increment is reduced as the optimization progresses. In this talk we will discuss implementation decisions, similar decisions that must be made in all sampling methods, and the theoretical support for those decisions.
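
A stripped-down version of the central mechanism (difference gradients whose stencil size shrinks as the optimization stalls) might look like the sketch below; note that the step here is a plain projected gradient step rather than the projected BFGS update used in implicit filtering, and all names are ours.

```python
import numpy as np

def central_diff_grad(f, x, h):
    """Central-difference gradient on a stencil of size h."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def implicit_filtering_sketch(f, x0, lower, upper, h=0.5, h_min=1e-4, max_iter=500):
    """Shrink the difference increment h whenever the stencil step fails to
    make progress; project iterates onto the bound constraints."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(max_iter):
        g = central_diff_grad(f, x, h)
        x_new = np.clip(x - h * g, lower, upper)       # projected step
        if f(x_new) < f(x):
            x = x_new
        else:
            h *= 0.5                                   # stencil failure: shrink h
            if h < h_min:
                break
    return x

# noisy quadratic: the shrinking stencil steps over the small oscillations
f = lambda x: float(np.sum((x - 0.3) ** 2) + 0.001 * np.sin(50 * np.sum(x)))
print(implicit_filtering_sketch(f, np.array([2.0, -1.0]), -3.0, 3.0))
```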


Non-Linear Optimization Using the Space Mapping Method

Kaj Madsen

We present the space mapping method, which uses cheaper surrogates in the solution of non-linear optimization problems involving functions based on expensive models. We assume that the user provides a possibly less accurate, but cheaper, model of the same physical system. During the iterations a parameter mapping is established, relating parameters of the expensive and the cheaper model that yield similar responses. A surrogate for the expensive model is built by combining the parameter mapping and the cheaper model. Using this surrogate, most of the computational work in the search for an optimizer of the expensive model is directed to the cheaper model.
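
In the simplest setting described above, the surrogate is the cheap model composed with the current parameter mapping. A toy sketch of that idea, with a shift-only mapping and models of our own invention (not the authors' formulation), could look like this:

```python
import numpy as np
from scipy.optimize import minimize

# "fine" (expensive) and "coarse" (cheap) models of the same system;
# in this toy example the fine model is just a shifted coarse model
fine   = lambda x: float(np.sum((x - 1.2) ** 2))
coarse = lambda z: float(np.sum((z - 1.0) ** 2))

def space_mapping_sketch(x0, n_iter=5):
    """Shift-only parameter mapping p(x) = x + s: at each iteration, choose
    s so that the coarse model at p(x) matches the fine model response at x
    (parameter extraction), then minimize the surrogate coarse(p(x)) to get
    the next iterate, so most work falls on the cheap model."""
    x = np.asarray(x0, dtype=float)
    s = np.zeros_like(x)
    for _ in range(n_iter):
        fx = fine(x)
        # parameter extraction: align coarse(x + s) with fine(x)
        s = minimize(lambda s: (coarse(x + s) - fx) ** 2, s).x
        # optimize the surrogate built from the cheap model and the mapping
        x = minimize(lambda x: coarse(x + s), x).x
    return x

print(space_mapping_sketch([0.0, 0.0]))   # should approach the fine optimum near 1.2
```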

Joint with: Jacob Søndergaard


Grids and Frames in Derivative Free Optimization

C. J. Price

This talk looks at the use of grids and frames in direct search methods for unconstrained and linearly constrained optimization. A frame based template is presented which uses a generalization of the concept of a grid called an admissible set. This generalization allows sets to be used which have, for example, circular symmetries, are partly random, or have no symmetries at all. It is shown that the template includes methods which can accept simple descent, as well as those which require the satisfaction of a sufficient descent condition. The former require extra conditions on the admissible sets, but provide enough flexibility to allow arbitrary feasible points which give simple descent to be accepted at every iteration. These extra conditions permit successive admissible sets to be unrelated in orientation and shape. Together these freedoms yield simple descent methods of enormous flexibility. Methods which require the satisfaction of a sufficient descent condition have a wider choice of admissible sets, including the whole feasible region. Linear constraints are incorporated by using a simple barrier approach: the objective function is declared to be infinite at all infeasible points.

The template is analysed using Clarke's non-smooth calculus. Convergence to one or more stationary points is shown under mild conditions. This also re-establishes existing convergence results for previous grid and frame-based methods.
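
As an illustration only (our own construction, not the authors' template), a frame-based simple-descent step with a randomly rotated minimal positive basis might look as follows:

```python
import numpy as np

def frame_step(f, x, fx, h, rng):
    """Poll a frame of n+1 directions around x: a randomly rotated minimal
    positive basis {e_1, ..., e_n, -(1/sqrt(n)) * sum(e_i)}.  Accept simple
    descent if found; otherwise report a frame failure and shrink h."""
    n = len(x)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random rotation
    base = np.vstack([np.eye(n), -np.ones((1, n)) / np.sqrt(n)])
    for d in base @ Q.T:
        y = x + h * d
        fy = f(y)
        if fy < fx:                    # simple descent is enough
            return y, fy, h
    return x, fx, 0.5 * h              # frame failure: refine

# toy usage on a smooth convex function
rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.5, -2.0])
fx, h = f(x), 1.0
for _ in range(10_000):
    if h <= 1e-6:
        break
    x, fx, h = frame_step(f, x, fx, h, rng)
print(x, fx)
```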

Joint with: I. D. Coope


Practical and Theoretical Aspects of the Space Mapping Optimization Method

Jacob Søndergaard

We discuss conditions for success of the space mapping optimization method. The basis for the technique is a surrogate for the objective function that is updated at each iteration. We expect the surrogate to approximately match the objective function over a domain which is large compared with the corresponding domain for a traditional Taylor-based optimization technique. On the other hand, the space mapping surrogate lacks a crucial property of the traditional techniques, namely a first-order match at a single point, the current iterate. Therefore it is relevant to find conditions for improvement in a single step of the space mapping, whereas traditional asymptotic convergence to a stationary point of the objective function cannot be expected unless modifications take place.

Modifications providing asymptotic convergence are discussed. In this connection we also discuss problems related to establishing the parameter mapping and means of overcoming them.

Joint with: Kaj Madsen