Examples of such pathogens are bacteria and viruses. Any molecule that can be recognized by our immune system is called an antigen. Antigens provoke a specific response from our immune system. Lymphocytes of types B and T are special types of cells that play a major role in our immune system.
Upon detection of an antigen, the B cells that best recognize it are cloned. Some of these cloned cells differentiate into plasma cells, which are the most active antibody secretors, while others act as memory cells.
These cloned cells are subject to a high somatic mutation rate in order to increase their affinity level. The mutation rate experienced by each clone is inversely related to its affinity to the antigen: the highest-affinity clones experience the lowest mutation rates, whereas the lowest-affinity clones experience high mutation rates.
Due to the random nature of this mutation process, some clones could be dangerous to the body and are, therefore, eliminated by the immune system itself. Plasma cells are capable of secreting only one type of antibody, which is relatively specific for the antigen. Antibodies play a key role in the immune response, since they are capable of adhering to the antigens, in order to neutralize and eliminate them.
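The clonal selection and mutation mechanism described above is the basis of algorithms such as CLONALG. A minimal sketch, assuming real-valued antibodies, affinity measured as negative Euclidean distance to the antigen, and illustrative values for all parameters (`n_select`, `clone_factor`, the mutation scale):

```python
import random

def clonal_selection_step(population, antigen, n_select=5, clone_factor=3, rng=None):
    """One iteration of a CLONALG-style clonal selection step.

    Antibodies and the antigen are real-valued vectors; affinity is the
    negative Euclidean distance to the antigen (higher = better match).
    """
    rng = rng or random.Random(0)

    def affinity(ab):
        return -sum((a - g) ** 2 for a, g in zip(ab, antigen)) ** 0.5

    # Select the antibodies that best recognize the antigen.
    selected = sorted(population, key=affinity, reverse=True)[:n_select]

    clones = []
    for rank, ab in enumerate(selected):
        n_clones = clone_factor * (n_select - rank)  # more clones for higher affinity
        mut_scale = 0.1 * (rank + 1)                 # smaller mutations for higher affinity
        for _ in range(n_clones):
            clones.append([x + rng.gauss(0.0, mut_scale) for x in ab])

    # Keep the population size constant: the best of parents and clones survive,
    # which also discards low-affinity (potentially harmful) clones.
    merged = sorted(population + clones, key=affinity, reverse=True)
    return merged[:len(population)]
```

Higher-ranked antibodies receive more clones and smaller Gaussian mutations, mirroring the inverse relation between affinity and mutation rate described above.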
Once the antigens have been eliminated by the antibodies, the immune system must return to its normal condition, eliminating the excess cells. However, some cells remain in our bloodstream acting as memory cells, so that our immune system can 'remember' the antigens that have previously attacked it. When the immune system is exposed again to the same type of antigen, or a similar one, these memory cells are activated, producing a faster and often improved response, which is called the secondary response.
Based on the previous explanation of the way the human immune system works, it can be said that, from a computer science perspective, the immune system is a parallel and distributed adaptive system. Clearly, the immune system is able to learn, it has memory, and it is capable of tasks such as associative retrieval of information. These features make immune systems very robust, fault-tolerant, dynamic and adaptive. All of these properties can be emulated in a computer. Artificial immune systems (AIS) are composed of the following basic elements: (a) a representation for the components of the system, e.g.
In their work, they use a standard genetic algorithm in which the immune principle of antibody-antigen affinity is employed to modify the fitness value. First, the population is evaluated against the problem objectives and different scalar values are obtained with reference to different weighting combinations. The best individual with respect to each combination is identified as an antigen. The rest of the population forms the pool of antibodies.
Antibodies are then matched against antigens through the definition of a matching score, and this score is added to the fitness of the best-matching antibody, evolving the population of antibodies to cover the antigens. Although it is very interesting to know these non-dominated solutions, additional criteria are necessary to select the single solution that will be deployed.
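The antigen/antibody fitness adjustment described above can be sketched as follows. The matching score here is a simple similarity in decision space, and the function names and the `1/(1 + distance)` score are illustrative assumptions, not the original authors' exact definitions:

```python
def immune_fitness_adjustment(population, objectives, weight_sets, match):
    """One pass of the antigen/antibody fitness modification.

    population  : list of candidate solutions
    objectives  : functions mapping a solution to a cost (to be minimized)
    weight_sets : weight vectors used to scalarize the objectives
    match       : match(antibody, antigen) -> matching score (higher = closer)

    Returns a dict mapping solution index -> fitness bonus earned.
    """
    bonus = {i: 0.0 for i in range(len(population))}
    for w in weight_sets:
        # Scalarize the objectives with this weighting combination.
        scalar = [sum(wi * f(x) for wi, f in zip(w, objectives)) for x in population]
        antigen_idx = min(range(len(population)), key=scalar.__getitem__)
        antigen = population[antigen_idx]
        # The rest of the population is the pool of antibodies; the best
        # matcher has its matching score added to its fitness.
        antibodies = [i for i in range(len(population)) if i != antigen_idx]
        best = max(antibodies, key=lambda i: match(population[i], antigen))
        bonus[best] += match(population[best], antigen)
    return bonus
```

Antibodies that resemble the per-weighting best individuals accumulate fitness bonuses, pulling the population toward covering all the antigens.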
Even though useful to better understand the inter-relationships between the objectives, the Pareto front may become an obstacle, since just one solution must be implemented. To make a selection, the DM will have to use an additional criterion, be it subjective or not. Then, why not formalize this additional criterion, include it in the problem, and make it a control function used to find the single, final solution to be deployed among those that belong to the Pareto front? Based on this idea, we propose a new methodology to formulate MOOPs involving non-linear differentiable functions.
The method transforms the MOOP from a multi-objective into a single-objective problem that can be solved with the aid of any traditional single-objective optimization engine suitable for the problem in focus. In the proposed method, the objective functions are divided into two groups: the performance functions and the control function. Which function will be part of each group is the designer's choice and depends on his or her experience and knowledge of the design problem.
The control function can be an additional objective function or it can be elected from among the problem objective functions. The performance functions play an important role in the process, as they provide the Pareto set as a constraint over which the control function searches for the solution that optimizes it. Thus, the multi-objective optimization problem can be written as:
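A plausible reconstruction of this formulation, writing $f_c$ for the control function and $f_1, \dots, f_m$ for the performance functions, is:

```latex
\begin{aligned}
\min_{X} \quad & f_c(X) \\
\text{s.t.} \quad & X \in \mathcal{P},\\
& \mathcal{P} = \bigl\{\, X^* \;\big|\; X^* \text{ is Pareto-optimal for } \min_X \,\bigl(f_1(X), \dots, f_m(X)\bigr) \bigr\},\\
& g_j(X) \le 0, \qquad j = 1, \dots, p .
\end{aligned}
```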
To apply the proposed methodology, the performance functions are substituted by the KKT necessary conditions. It should be noted that the weighting factors of these conditions are unknowns in the problem; they are incorporated into the vector of design variables, defining the extended vector of unknowns:
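The KKT stationarity condition substituted for the performance functions, and the resulting extended vector of unknowns, can be reconstructed as follows (a plausible reading, assuming the unconstrained form, with $m$ performance functions and $n$ original design variables):

```latex
\sum_{i=1}^{m} w_i \,\nabla f_i(X) = 0, \qquad
\sum_{i=1}^{m} w_i = 1, \qquad w_i \ge 0,
\qquad
\bar{X} = \{\, x_1, \dots, x_n,\; w_1, \dots, w_m \,\}.
```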
Finally, the problem is formulated as a single-objective optimization problem, with the control function, f_c(X), to be minimized, constrained by the conditions for obtaining the Pareto-optimal solutions considering only the performance functions, f_1(X), f_2(X), ... The proposed methodology differs from the methods presented in Section 3. Those strategies require that a complete decision-making structure of the problem be decided a priori.
Although the proposed method is also based on a priori decisions, only one function has to be added or isolated as the control function. The remaining functions are incorporated into the formulation to ensure that the final solution is within the Pareto set obtained from the minimization problem of these functions, even though the Pareto set and the Pareto front are not explicit outcomes of the process.
However, it must be considered that, with the use of Eq. Furthermore, the solution obtained is a local optimum and it can be a global optimum for convex Pareto fronts, since Eq. To validate the proposed multi-objective optimization method, it will be used to solve three examples with increasing levels of complexity, namely the optimization problem of three quadratic functions, the design of a cantilever beam and the conceptual design of a bulk carrier. To solve the single-objective optimization problem originated by the proposed methodology, any algorithm that works with optimization problems involving nonlinear functions and constraints can be used.
Due to its accessibility, the solver fmincon was used. It is a component of the Optimization Toolbox available in MATLAB (MathWorks), version 7. A minor adaptation was made in the NSGA II in order to interrupt the evolution if there is no significant difference between consecutive generations.
Calling f_i^j the value of the i-th objective function for the j-th chromosome at the n-th generation of a population with pop chromosomes, the root mean square of the objective functions for the population can be defined as:
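This RMS measure, together with the alternative stop test described next, can be sketched as follows. The exact formula is a plausible reconstruction (the RMS of each objective over the population); `tol_f` and the three-generation patience follow the text:

```python
import math

def rms_per_objective(obj_values):
    """Root mean square of each objective over the population.

    obj_values[j][i] = value of objective i for chromosome j.
    """
    pop = len(obj_values)
    n_obj = len(obj_values[0])
    return [math.sqrt(sum(row[i] ** 2 for row in obj_values) / pop)
            for i in range(n_obj)]

class RmsStopCriterion:
    """Stop when the relative RMS change stays below tol_f for
    `patience` consecutive generations (three by default, as in the text)."""

    def __init__(self, tol_f=1e-4, patience=3):
        self.tol_f = tol_f
        self.patience = patience
        self.prev = None
        self.hits = 0

    def update(self, obj_values):
        """Feed one generation's objective values; returns True to stop."""
        rms = rms_per_objective(obj_values)
        stop = False
        if self.prev is not None:
            change = max(abs(a - b) / (abs(a) + 1e-12)
                         for a, b in zip(rms, self.prev))
            self.hits = self.hits + 1 if change < self.tol_f else 0
            stop = self.hits >= self.patience
        self.prev = rms
        return stop
```

Requiring the criterion to hold over several consecutive generations avoids the premature interruption mentioned in the text.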
The alternative stop criterion was defined as the relative change of this RMS value between consecutive generations remaining below a tolerance tol_f. Sometimes, for a given tol_f value, this criterion may cause a premature interruption of the evolutionary process. To overcome this situation, the algorithm checks whether it holds over subsequent generations before discontinuing the evolution. As a default, three subsequent generations satisfying the criterion were adopted. Assume that the functions f_1(x_1, x_2) and f_2(x_1, x_2) compose the performance functions group and the function f_3(x_1, x_2) is the control function.
Accordingly, the problem can be formulated as: The point that minimizes the control function falls on the Pareto set resulting from the MOOP formed by the performance functions. The Pareto sets shown in Figures 1a-1e are not obtained by the methodology; they were computed, for comparison only, by a suitable algorithm.
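As a concrete illustration of the whole method, the sketch below solves a problem of this shape with SciPy's SLSQP standing in for fmincon. The three quadratics are illustrative stand-ins (not the paper's coefficients): f1 and f2 are the performance functions, whose Pareto set is the segment x1 = x2 in [-1, 1], fc is the control function, and the unknowns are extended with the KKT weights (w1, w2):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative quadratics (stand-ins; not the paper's coefficients).
f1 = lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2   # performance function 1
f2 = lambda x: (x[0] + 1) ** 2 + (x[1] + 1) ** 2   # performance function 2
fc = lambda x: (x[0] - 1) ** 2 + (x[1] + 1) ** 2   # control function

grad_f1 = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 1)])
grad_f2 = lambda x: np.array([2 * (x[0] + 1), 2 * (x[1] + 1)])

def kkt_stationarity(z):
    """w1*grad(f1) + w2*grad(f2) = 0 restricts (x1, x2) to the Pareto set."""
    x, w = z[:2], z[2:]
    return w[0] * grad_f1(x) + w[1] * grad_f2(x)

constraints = [
    {"type": "eq", "fun": kkt_stationarity},             # KKT stationarity
    {"type": "eq", "fun": lambda z: z[2] + z[3] - 1.0},  # weights sum to one
]
bounds = [(-5, 5), (-5, 5), (0, 1), (0, 1)]              # w_i >= 0 via bounds

# Minimize the control function over the extended unknowns (x1, x2, w1, w2).
res = minimize(lambda z: fc(z[:2]), x0=[0.5, -0.5, 0.5, 0.5],
               method="SLSQP", bounds=bounds, constraints=constraints)
```

For these quadratics the optimizer returns x close to (0, 0), the point of the performance Pareto set that minimizes the control function, with w1 and w2 close to 0.5.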
Although the points on the Pareto front seem evenly distributed, the corresponding points in the Pareto set configure a rough approximation of the solution. To get this set, a population of 50 chromosomes was used with the alternative stop-of-evolution criterion, Eq. With these parameters the Pareto front stabilizes in generations. Figure 1d shows the Pareto set for the same problem obtained by using the weighted sum approach, with 50 weight vectors chosen at random. This approach returns a well-matched Pareto set. Although the NSGA II is a powerful algorithm for solving MOOPs, it focuses on the criterion space and consequently can generate rough solutions in the design space, as shown by the results in Figures 1b and 1d.
Figure 1e shows the results when another function is chosen to play the role of the control function. The resultant points are distinct, but they lie on the Pareto set resulting from the MOOP of the corresponding performance functions group. Moreover, they belong to the Pareto set that results from the MOOP composed of the three quadratic functions, shown in Figure 1f.
This Pareto set was obtained by the weighted sum approach with 1, sets of weights chosen at random. Observing Figure 1f, where the Pareto set is a region with infinitely many non-dominated points, the question that normally arises in the designer's mind is how to choose only one as the solution. In general, the more functions are aggregated into a MOOP, the more extended the Pareto set becomes in the design space. In the example, with two functions, the Pareto set is a line.
With three functions, the Pareto set is a plane surface. In addition, comparisons of the computational performance of the algorithms are shown in Table 1. As there are no other methodologies similar to the one proposed in this paper, the results were compared to two other algorithms: the weighted sum approach and the NSGA II.
For both, the process involved two phases: (a) the search for the non-dominated designs of the MOOP involving the performance functions group, and (b) the manual search, in the resultant set, for the alternative that minimizes the control function. The number of function calls results from the first phase of the process. The computational performance measure adopted for comparing all the algorithms is the number of objective function calls.
Although this measure is not perfect, since both algorithms - the genetic algorithm and the weighted sum approach - depend on the number of points to be used in the MOOP solver, it has at least a qualitative value. For the sake of comparison, the computational times spent to achieve the final solution on a 2 GHz dual-processor computer with 3 GB of RAM are also shown. Two other methods that return a single solution are shown in Table 1 and Figure 1f: the compromise and the min-max solutions.
They are very fast, but they may not reflect the best compromise preferred by the DM. Although the example is very simple, it shows how the proposed methodology can help decision-making processes. The problem solution falls within the performance functions' non-dominated solutions and is the one that optimizes the control function. In the quadratic functions example, the control function was chosen from among the problem objective functions. In the next application, the MOOP is defined as minimizing the mass and tip deflection of a cantilever beam, two antagonistic functions.
With the proposed methodology, these functions will compose the performance group. Afterward, a third function, the manufacturing cost, will be included to play the role of the control function. As a second illustrative example, consider the design problem of a cantilever beam, adapted from Deb. The beam, prismatic and with a circular section, must support the weight P at its free end.
The design variables are d and L, the cross-section diameter and beam length, respectively. Also consider two attributes to be minimized: the beam mass and the beam tip deflection when subjected to the weight P. A schematic representation of the problem is shown in Figure 2. It can be demonstrated that the two selected objectives conflict, since the minimization of the mass will lead to lower values of the pair (d, L), while minimization of the tip deflection will result in an increase in the cross-section diameter with a reduction in the beam length.
The multi-objective optimization problem can be formulated as: A clearer view of the antagonism of the objective functions can be obtained using a method to find the Pareto front of multi-objective optimization problems. Approximations of the Pareto set and the Pareto front are shown in Figures 4a and 4b, respectively. These results were found by generating 10, feasible designs at random and then selecting the non-dominated ones. Figure 4a shows that all alternatives for the beam dimensions have the length at its lower bound, with the cross-section diameter ranging from 19 mm to 50 mm, approximately.
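The random-sampling procedure just described can be sketched as follows. The beam relations are the standard ones for a tip-loaded circular cantilever (mass ρπd²L/4, tip deflection 64PL³/(3Eπd⁴)); the numerical values for the load, material, allowables and variable bounds are assumptions after Deb's formulation, and the sample size is scaled down for illustration:

```python
import math
import random

# Assumed parameters, after Deb's cantilever formulation (illustrative values):
P     = 1_000.0   # end load [N]
E     = 207e9     # Young's modulus [Pa]
RHO   = 7_800.0   # material density [kg/m^3]
S_Y   = 300e6     # allowable bending stress [Pa]
D_TIP = 0.005     # allowable tip deflection [m]

def attributes(d, L):
    """Mass [kg] and tip deflection [m] of a circular cantilever (d, L in m)."""
    mass = RHO * math.pi * d ** 2 / 4.0 * L
    inertia = math.pi * d ** 4 / 64.0
    deflection = P * L ** 3 / (3.0 * E * inertia)
    return mass, deflection

def feasible(d, L):
    stress = 32.0 * P * L / (math.pi * d ** 3)   # max bending stress at the root
    _, deflection = attributes(d, L)
    return stress <= S_Y and deflection <= D_TIP

def nondominated(points):
    """Keep the (mass, deflection, design) tuples not dominated by any other."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2]
                       for q in points)]

rng = random.Random(42)
designs = []
while len(designs) < 500:                 # scaled-down sample for illustration
    d = rng.uniform(0.01, 0.05)           # diameter 10..50 mm
    L = rng.uniform(0.2, 1.0)             # length 200..1000 mm
    if feasible(d, L):
        m, delta = attributes(d, L)
        designs.append((m, delta, (d, L)))
front = nondominated(designs)
```

The non-dominated filter keeps the designs for which no other sampled design is at least as good in both mass and deflection.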
These results are expected because, for a fixed diameter, reducing the length reduces both the mass and the tip deflection, and the lower bound for this design variable is mm, as shown in Figure 3a; an inverse relationship, however, is observed for the cross-section diameter: fixing the beam length, the lower the diameter, the higher the tip deflection and the lower the beam mass.
Similar results were obtained by the NSGA II algorithm, using a population of 50 chromosomes which evolved for 58 generations, conditioned by the stop criterion, Eq. For comparison, Figures 4e and 4f show the results found by the weighted sum approach. Fifty random sets of weights were used to reduce the normalized mass and deflection functions to a single scale. Now suppose that the problem must include the beam manufacturing cost and that it should be minimized. Additionally, suppose the cost function is defined as: The contours of the cost function are shown in Figure 5a.
If the cost function is included as a third function in the MOOP, there is a growth and dispersion of the Pareto-optimal solutions in the design space, as shown in Figure 5b. This spread of the non-dominated alternatives in the design space makes the decision-making process even more difficult. The proposed methodology can help the DM elect one alternative from among all the non-dominated ones, while preserving the concept of multi-objective optimization.
The MOOP of the design of the cantilever beam can be re-formulated as: Another interpretation of the problem formulation would be: among the solutions that satisfy a technical or engineering criterion, choose the one that has the lowest manufacturing cost. Applying the proposed methodology, the MOOP can be rewritten as: The fmincon solver was used to solve this single-objective optimization problem, which returns the following values.
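Written out under the proposed methodology, with f_1 the beam mass, f_2 the tip deflection and the manufacturing cost as the control function, the rewritten problem takes the following shape (a plausible reconstruction; the source does not reproduce the formulas):

```latex
\begin{aligned}
\min_{d,\,L,\,w_1,\,w_2} \quad & f_c(d, L) && \text{(manufacturing cost)}\\
\text{s.t.} \quad
& w_1 \nabla f_1(d, L) + w_2 \nabla f_2(d, L) = 0 && \text{(KKT stationarity for mass and deflection)}\\
& w_1 + w_2 = 1, \qquad w_1,\, w_2 \ge 0,\\
& g_j(d, L) \le 0, \qquad j = 1, \dots, p && \text{(stress, deflection and side constraints).}
\end{aligned}
```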
For the sake of illustration, the values of the manufacturing cost calculated over the Pareto set obtained for the bi-objective optimization problem are shown in Figure 5d. The minimum cost is located along the diameter axis, in the neighborhood of 35 mm. Table 2 shows a comparison of the methodologies used to find the single solution for the beam design. Although the proposed methodology does not return the Pareto front, it is very efficient in finding the final solution, with a number of function calls significantly smaller than that of the other two methods used for the same purpose.
To conclude, if the cost function is treated globally as a third function in the MOOP, the set of Pareto-optimal solutions will no longer be a line, as shown in Figure 4a; instead, it spreads over a surface, as shown in Figure 5b. In fact, the greater the number of conflicting functions in a multi-objective optimization problem, the greater the region in the space of the design variables that defines the Pareto-optimal solutions.
Consequently, the decision-making process to elect a single solution is more difficult. How do we choose a non-dominated alternative among those present in Figure 5b?
The choice is impossible unless a fourth criterion is used to drive it. In the present proposal, this additional criterion is instead added to the MOOP as a control function that guides the search, in a very efficient way, to a convenient solution: one that satisfies all the constraints, is non-dominated within the group of the performance functions, and is optimal with respect to the control function. The third application of the developed methodology is the conceptual design of a bulk carrier.
The conceptual design of a cargo vessel is not a trivial task. For decades, this problem has been handled in two ways: either by adapting a known design to the new requirements, or with the aid of simplified mathematical models driven by an optimization algorithm that obtains the optimal solution based on previously established technical or economic criteria. The model is made up of functions that define the vessel attributes, from which are drawn those that constrain the design space, those to be optimized, and those that characterize the technical and economic performance of the vessel and allow the evaluation of each design alternative.
Among them are the annual transportation cost, the annual transported cargo, the ship lightweight, the ship initial cost and other functions of the vessel design variables: length, width, depth, draft, block coefficient and speed (L, B, D, h, C_B, V_k, respectively). They limited the ship length. They also evaluated the problem by constraining the decision space in order to design ships suitable to cross the Panama Canal, with limited beam and draft. For practical reasons, the limits they did not mention were adopted wide enough not to influence the optimization results.
Accordingly, the ranges for the design variables are described in Table A5 of Appendix A.
In this work, we replaced the vessel lightweight by the ship initial cost. Although the vessel lightweight and ship initial cost functions are related, the latter has a financial appeal like the other two objective functions: the annual transported cargo is associated with the annual income, the annual transportation cost with the annual expenses, and the ship cost with the capital required for the ship purchase.
The design alternative that optimizes each single-objective problem can be easily obtained by taking each objective function separately. Any algorithm handling non-linear optimization problems with constraints can be used to solve these single-objective optimization problems. The fmincon solver was used to get the results shown in Table 3. The results indicate that the objective functions are conflicting: to maximize the annual cargo A_C, the optimum design will be a large ship with the highest speed of 18 knots.
The search reaches the upper bound for the total deadweight. The opposite occurs when the ship cost C_S is minimized: all vessel dimensions decrease along with the ship speed, which reaches the minimum of 14 knots, with a slender vessel with a block coefficient of 0. The search for even smaller vessels is constrained by the lower bound for the total deadweight of 3, tons. To minimize the annual transportation cost C_T, the solution lies between the previous two. A clearer view of the antagonism of the objective functions in this problem can be obtained using an algorithm to find the Pareto front of multi-objective optimization problems.
Figure 6 shows 10, feasible ship designs and highlights the non-dominated ones. To get these points, more than 12 million random design alternatives were generated and analyzed, taking seconds of computational time. Table 4 shows the design variables statistics for the feasible points. Although the problem shows a narrow surface in the criterion space, the design variables are well distributed over the design space. With the aid of Figure 6, it is possible to presume how the Pareto front of this problem should look; on the other hand, the random walk is not adequate to drive the search for the Pareto front, since the number of function calls becomes prohibitive.
In order to compare the results of the proposed methodology with other methods, the weighted sum approach and the genetic algorithm were used for the bulk carrier design optimization task. Figure 7 and Table 5 show the results of the weighted sum approach with sets of random weights. As fmincon fails to find the solution for several weight sets, the process was restarted from 5 different starting points for each set, the best result being chosen as the resultant point. Even so, some points were discarded because they proved to be dominated solutions. These results were achieved for a population of chromosomes, which evolved for generations.
Table 6 shows the statistics for the non-dominated designs. The statistics for the design variables indicate a reasonable dispersion of the values. With the set of non-dominated solutions shown in Figure 8, the question that often arises for the DM is: which design alternative should be implemented? Applying the proposed methodology, one attribute function can be chosen as the control function and optimized over the Pareto front generated by the optimization of the performance functions group.
Consider the previous functions - the annual transportation cost, the annual transported cargo and the ship cost - composing the performance group, and the voyage cost as the control function. Accordingly, the multi-objective optimization problem can be defined as: The mathematical formulation of this optimization problem is shown in detail in Appendix B. Note that the problem is still multi-objective, with four objective functions and no value function involved.
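A plausible reconstruction of this formulation, writing C_T for the annual transportation cost, A_C for the annual cargo (maximized, hence the minus sign in the stationarity condition) and C_S for the ship cost, with C_V denoting the voyage cost (the symbol is an assumption), is:

```latex
\begin{aligned}
\min_{X,\,w} \quad & C_V(X) && \text{(voyage cost, control function)}\\
\text{s.t.} \quad
& w_1 \nabla C_T(X) - w_2 \nabla A_C(X) + w_3 \nabla C_S(X) = 0,\\
& w_1 + w_2 + w_3 = 1, \qquad w_i \ge 0,\\
& g_j(X) \le 0, \qquad X = (L,\, B,\, D,\, h,\, C_B,\, V_k).
\end{aligned}
```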
However, with the use of the proposed approach, it can be formulated and solved in a single pass of any algorithm dedicated to single-objective optimization that handles inequality and equality constraints and non-linear functions. Table 7 shows the design alternative, with some ship attributes, obtained by using the fmincon algorithm. Table 8 compares the results obtained by the proposed methodology with those obtained by the weighted sum approach and the NSGA II.
The proposed methodology is very efficient in finding the optimized design, as the number of function calls is significantly smaller. It can be observed that the use of the proposed methodology leads to a solution that satisfies all the constraints, provides a good technical performance, and is optimal according to the control function.
Most engineering design problems are multi-objective, and the cases where the objective functions do not conflict are rare. To solve these kinds of problems, many researchers have developed methods to search for the solution of multi-objective optimization problems without simplifying the problem to a single objective and having to decide a priori how to group the objective functions into a single scale. Evolutionary methods are frequently used to locate the set of solutions of multi-objective problems. These algorithms provide a discrete picture of the Pareto front in the criterion space. It was observed that the greater the number of objective functions, the more scattered the set of non-dominated solutions is in the decision space, and the harder it is for the DM to choose an alternative to be deployed.
This paper proposes a new methodology to solve multi-objective optimization problems in which one objective function is added or isolated and treated as a control function that drives the decision-making process. The other objective functions form the performance functions group. Through this strategy, a single-objective optimization problem is formulated, in which the control function is optimized over the Pareto set that would result from the optimization problem established by the performance functions, had this problem been solved beforehand.
The resultant single-objective optimization problem can then be solved by any classical optimization engine; the limitations will be those of the method used, such as the nature of the functions - linear, nonlinear or continuous - and the presence or absence of inequality and equality constraints. With the proposed method, minimal a priori knowledge is needed.
There is no need to know a value function relating the objectives before starting to solve the problem, as in other methods with a priori articulation of preferences. As drawbacks, the method does not provide a set of non-dominated solutions and, as in many other single-objective optimization algorithms, the solution converges to a local optimum, since the Karush-Kuhn-Tucker conditions, the chief pillar of the proposed methodology, are necessary but not sufficient to ensure that the solution is on the "global" Pareto front.
If the problem is convex and the functions involved are regular, the local solution will also be a global solution, ensuring a unique result for the problem. Another disadvantage is that the functions involved in the problem must be continuously differentiable.
Although these conditions are limiting, the proposed methodology is very efficient in solving engineering design problems, as demonstrated by the examples solved with its use.

References

Goal programming and multiple objective optimisation - Part 1. European Journal of Operational Research, 1.
A survey of recent developments in multiobjective optimization.
Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences. P. Kegan, London, England.
On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Transactions on Systems, Man, and Cybernetics, SMC-1.
Adaptation in Natural and Artificial Systems. University of Michigan Press.