How To Create An Efficient Frontier In Excel
Efficient Frontier
Nonlinear optimization applied to the portfolio theory
Giovanni Romeo, in Elements of Numerical Mathematical Economics with Excel, 2020
Example 1 (two assets efficient frontier)
Three years of data observations for two standard indices are provided in Table 8.1-1.
Table 8.1-1. Three-year historical data for the JP Morgan Euro bond index and the MSCI Euro equity index.
Now, supposing we have access to some tracking investment for each of the two indices, the problem is to find the efficient frontier from the two indices, i.e., the best combinations of investment weights leading to the minimum variance frontier (MVF).
The inputs of the problem are the average returns (using the arithmetic average), the standard deviations, and the correlation between the two assets, as in Table 8.1-2. In Fig. 8.1-1, we have the computations needed to graph the efficient frontier. We build an Excel Data Table over the mean and standard deviation of the portfolio, letting the weight in Cell J9 move from 0% to 100%. The MVF and the global minimum variance (GMV) portfolio are enumerated in Table 8.1-3, while the minimum variance frontier curve, together with its two component assets, is shown in Fig. 8.1-2.
Table 8.1-2. Average return and standard deviation.
Table 8.1-3. Minimum variance frontier enumeration and the global minimum variance portfolio (highlighted).
Efficient frontier: The portion of the minimum variance frontier beginning with the GMV portfolio (see Fig. 8.1-2) and continuing above it is called the efficient frontier. Portfolios lying on the efficient frontier offer the maximum expected return for their level of standard deviation of returns.
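The same enumeration can be reproduced outside Excel. Below is a minimal R sketch with placeholder inputs (the actual figures come from Table 8.1-2, which is not reproduced here):

```r
## Two-asset minimum variance frontier: enumerate the portfolio mean and
## standard deviation as the weight in asset 1 moves from 0% to 100%.
mu    <- c(0.04, 0.08)   # average returns: bond and equity index (assumed)
sigma <- c(0.05, 0.20)   # standard deviations (assumed)
rho   <- 0.2             # correlation between the two indices (assumed)

w <- seq(0, 1, by = 0.05)                 # weight in the bond index
port_mean <- w * mu[1] + (1 - w) * mu[2]
port_sd   <- sqrt(w^2 * sigma[1]^2 + (1 - w)^2 * sigma[2]^2 +
                  2 * w * (1 - w) * rho * sigma[1] * sigma[2])

## The GMV weight has a closed form in the two-asset case.
cov12 <- rho * sigma[1] * sigma[2]
w_gmv <- (sigma[2]^2 - cov12) / (sigma[1]^2 + sigma[2]^2 - 2 * cov12)

plot(port_sd, port_mean, type = "b",
     xlab = "Standard deviation", ylab = "Expected return")
```

The points with mean above that of the GMV portfolio form the efficient frontier.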
Would the curvature of the efficient frontier change if we had a different correlation coefficient? Yes, of course. The following cases can arise:
i. Positive correlation, up to perfect positive correlation, i.e., ρ = +1.
ii. No correlation, i.e., ρ = 0.
iii. Negative correlation, up to perfect negative correlation, i.e., ρ = −1. Here we have the maximum effect of diversification.
iv. Imperfect positive/negative correlation.
The cases are summarized and graphed in Fig. 8.1-3.
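The effect of ρ on the curvature is easy to see numerically; the following R sketch (same placeholder inputs as above) overlays the frontier for several correlation values:

```r
## Frontier curvature for different correlation coefficients.
mu    <- c(0.04, 0.08)   # assumed inputs, as before
sigma <- c(0.05, 0.20)
w     <- seq(0, 1, by = 0.01)

frontier <- function(rho) {
  s <- sqrt(w^2 * sigma[1]^2 + (1 - w)^2 * sigma[2]^2 +
            2 * w * (1 - w) * rho * sigma[1] * sigma[2])
  list(sd = s, mean = w * mu[1] + (1 - w) * mu[2])
}

plot(NULL, xlim = c(0, max(sigma)), ylim = range(mu),
     xlab = "Standard deviation", ylab = "Expected return")
rhos <- c(1, 0.5, 0, -0.5, -1)
for (k in seq_along(rhos)) {
  f <- frontier(rhos[k])
  lines(f$sd, f$mean, lty = k)
}
legend("bottomright", legend = paste("rho =", rhos), lty = seq_along(rhos))
```

With ρ = +1 the frontier collapses to a straight segment between the two assets; as ρ falls toward −1 the curve bulges further to the left, and at ρ = −1 a suitable weight mix removes risk entirely.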
URL:
https://www.sciencedirect.com/science/article/pii/B9780128176481000086
Portfolio optimization
Manfred Gilli, ... Enrico Schumann, in Numerical Methods and Optimization in Finance (Second Edition), 2019
Computing the whole frontier
As a final example, we will trace out the complete mean–variance efficient frontier. There are three ways to do so.
1. Minimize variance while varying the desired return;
2. maximize return while varying the maximally allowed variance (note that this problem cannot be solved with QP because we have a linear objective function but a quadratic constraint); or
3. directly work with the objective function (14.2). This third representation has two advantages. First, we do not need to fix the desired return beforehand—if we had to, we would need to compute the return of the minimum-variance portfolio first, and set the desired return greater than this. Second, we need not care what the highest possible return is—with constraints, the maximum achievable return may not be immediately obvious.
We recall our mean–variance problem as

min over w of −λ w′μ + (1 − λ) w′Σw, subject to w′ι = 1 and w ≥ 0.

We now vary λ between 0 and 1: λ close to 0 emphasizes variance and yields the minimum-variance portfolio, while λ close to 1 emphasizes return. Note that we use the expressions for the portfolio return and variance that rely explicitly on the portfolio weights. We set λ to a grid of values spanning [0, 1] and solve one quadratic program per value.
The following R and MATLAB programs illustrate such a computation. For R, there is a function mvFrontier in the NMOF package.
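The book's programs are not reproduced in this excerpt; as a minimal sketch of the λ-sweep, the following R code uses the quadprog package on simulated data (the NMOF function mvFrontier implements the same idea with more options):

```r
## Trace the mean-variance frontier by sweeping lambda in
##   min_w  -lambda * w'mu + (1 - lambda) * w' Sigma w
## under full investment (sum w = 1) and no short sales (w >= 0).
library(quadprog)

set.seed(42)                           # simulated data for illustration
na <- 10
R  <- matrix(rnorm(200 * na, 0.005, 0.04), ncol = na)
mu <- colMeans(R); Sigma <- cov(R)

lambdas <- seq(0.001, 0.999, length.out = 50)  # avoid a singular Dmat at 1
front <- t(sapply(lambdas, function(lambda) {
  ## solve.QP minimizes 1/2 w'Dw - d'w subject to t(Amat) %*% w >= bvec,
  ## with the first meq constraints holding as equalities.
  sol <- solve.QP(Dmat = 2 * (1 - lambda) * Sigma,
                  dvec = lambda * mu,
                  Amat = cbind(1, diag(na)),   # sum(w) = 1, then w >= 0
                  bvec = c(1, rep(0, na)),
                  meq  = 1)
  w <- sol$solution
  c(sd = sqrt(drop(w %*% Sigma %*% w)), mean = sum(w * mu))
}))

plot(front[, "sd"], front[, "mean"], type = "l",
     xlab = "Standard deviation", ylab = "Expected return")
```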
URL:
https://www.sciencedirect.com/science/article/pii/B9780128150658000261
Sustainability of Products, Processes and Supply Chains
Bruna Mota, ... Ana P. Barbosa-Povoa, in Computer Aided Chemical Engineering, 2015
13.4.2 Multiobjective Approach
A multiobjective approach (the augmented ε-constraint method) was applied to establish the efficient frontier between the economic, environmental (measured through ReCiPe), and social objectives. The results obtained are shown in Figure 13.5.
Solutions A and B represent the solutions with the lowest environmental impact (scenario S2) and the lowest cost (scenario S1), respectively. Solution Q represents the one with the highest social benefit (the lowest value obtained in the minimization of the social assessment objective function, equivalent to scenario S3). The ideal solution, which would combine the lowest cost (from solution B) with the lowest environmental impact (from solution A) and the highest social benefit (from solution Q), is also represented for comparison (see Figure 13.5, on the left).
One of the first conclusions that can be drawn from the two-dimensional representations is that the economic and environmental objectives vary linearly, with the exception of solution A, where a lower environmental impact is obtained at a higher cost than in solution B. This linearity results from the fact that both costs and environmental impact vary linearly with the area of the installed entities and with the distance traveled. However, as mentioned before, the environmental impact methodology used does not account for the difference between building several smaller warehouses and building a single large one. Hence, these results might in reality be different and less linear than presented. Regarding the social performance, an improvement is only attained with a deterioration of the economic and environmental performances.
Analyzing the three-dimensional representation, it is possible to identify three distinctive groups of solutions. The first group comprises solutions A through E, which tend toward the ideal solution. In fact, a significant increase in the social benefit (a decrease in the social assessment objective function) can be obtained with a relatively small compromise of the economic and environmental performances. This is achieved with different combinations of the four distribution centers. By simply replacing one distribution center located in a densely populated region with one located in a less populated region, the social benefit increases significantly. This is possible without a meaningful increase in transportation activities, which are the highest contributor to the increase in environmental impact and cost, as shown in Section 13.4.1. In the second group (solutions F, G, and H), the model locates two to three of the four distribution centers in the less populated regions to achieve a higher social benefit. The increase in transportation is balanced against the increase in the number of warehouses, to achieve the lowest possible cost and environmental impact. In the third group (solutions I through Q), a significant deterioration of the economic and environmental performances is necessary for small improvements in the social performance of the supply chain. This is due to the reduction in the number of distribution centers that occurs from solution I (with three distribution centers) through Q (with only one distribution center), so as to locate the maximum number of workers in the less populated regions. Even though the fixed costs of warehouses and entities decrease, the transportation costs increase significantly.
Solutions F, G, and H are the closest to a so-called compromise solution. Solution F is obtained with a deterioration of the economic and environmental performances of 20% and 25%, respectively. Solution H already requires a decline of 62% and 74%, respectively, for the same objectives, which is not implementable at the company level. However, none of these solutions dominates the others: with conflicting objectives, selecting one of them will always mean attributing a higher weight to one objective over the others.
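The chapter applies the method to a full supply chain model; for intuition, here is a toy R sketch of the basic ε-constraint idea on an invented bi-objective LP (the augmented variant additionally rewards slack in the constrained objectives so that only strictly efficient points are returned):

```r
## Plain epsilon-constraint on a toy bi-objective LP:
## maximize f1 = 3x + 2y while bounding f2 = x + 4y <= eps,
## subject to x + y <= 10 and x, y >= 0 (all coefficients invented).
library(lpSolve)

eps_grid <- seq(4, 40, by = 4)
pareto <- t(sapply(eps_grid, function(eps) {
  sol <- lp(direction    = "max",
            objective.in = c(3, 2),            # f1 coefficients
            const.mat    = rbind(c(1, 4),      # f2 <= eps
                                 c(1, 1)),     # shared resource limit
            const.dir    = c("<=", "<="),
            const.rhs    = c(eps, 10))
  c(f1 = sol$objval, f2 = sum(c(1, 4) * sol$solution))
}))

plot(pareto[, "f2"], pareto[, "f1"], type = "b",
     xlab = "f2 (constrained objective)", ylab = "f1 (maximized)")
```

Sweeping ε over a grid traces the efficient frontier between the two objectives; with three objectives, as in the chapter, two ε bounds are swept jointly.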
URL:
https://www.sciencedirect.com/science/article/pii/B9780444634726000136
Optimization and portfolio selection
Hal Forsey, Frank Sortino, in Optimizing Optimization, 2010
Publisher Summary
This chapter presents a new model, the Forsey–Sortino Optimizer, that generates a mean–downside risk efficient frontier. It develops a secondary optimizer that finds the best combination of active managers, to add value, and passive indexes, to lower costs. It is intended as a starting point from which other researchers around the world can make improvements and in that way contribute to the state of the art. The underlying assumption is that the user wants to maximize the geometric average rate of return in a multiperiod framework. Therefore, the three-parameter lognormal distribution suggested by Aitchison and Brown should provide a better estimate of the shape of the joint distribution than assuming a bell shape (the normal distribution). It is recognized that this shape should change with market conditions: it should be more positively skewed than normal when the market is undervalued, and more negatively skewed when the market is overvalued. The user is then allowed to identify which part of the world he or she is operating from. That determines which currency the indexes are denominated in and which indexes to use. Next, the user is allowed to select combinations of scenarios from the three buckets of returns. If the user does not wish to make such a decision, the choice should be "unknown," in which case the returns from all three buckets are used.
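For intuition, the three-parameter lognormal is a lognormal shifted by a location parameter τ, giving returns a hard lower bound and a skewed shape. A small R sketch with invented parameters:

```r
## Three-parameter (shifted) lognormal: r = tau + exp(Z), Z ~ N(m, s^2).
## tau is the worst possible return; skewness comes from the exponential.
## All parameter values below are purely illustrative.
tau <- -0.4                          # location: hard lower bound on return
m   <- log(0.5); s <- 0.25           # meanlog and sdlog of the body
r   <- tau + rlnorm(1e5, meanlog = m, sdlog = s)

hist(r, breaks = 100, freq = FALSE, main = "",
     xlab = "Return (shifted lognormal)")
abline(v = tau, lty = 2)             # nothing falls below tau

## Downside risk vs a minimal acceptable return (MAR), the risk measure
## on one axis of a mean-downside risk frontier (MAR assumed 0 here).
MAR <- 0
downside_dev <- sqrt(mean(pmin(r - MAR, 0)^2))
```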
URL:
https://www.sciencedirect.com/science/article/pii/B9780123749529000075
29th European Symposium on Computer Aided Process Engineering
Daniel F. Rodríguez-Vallejo, ... Benoît Chachuat, in Computer Aided Chemical Engineering, 2019
4 Results and discussion
The results in Figure 1 show that four fuels are efficient (θℓ* = 1), namely gasoline, EtOHsc-85, Bio-gasoline, and 85-MeOHre; they therefore constitute the efficient frontier, which in turn may be used as a benchmark to define the improvement targets for our fuel of interest, 85-MeOHf. Gasoline shows the lowest price but the second highest GWP, overcome only slightly by FT-naphtha. Ethanol from sugarcane has a cost very similar to gasoline but a lower GWP. Bio-gasoline and methanol based on H2 from renewable electrolysis are the cleanest fuels, showing the lowest GWP. On the other hand, ethanol from corn or switchgrass, hydrogen-based fuels, and our fuel of interest (85-MeOHf) are all deemed inefficient. The comparison confirms that liquid H2 from coal, natural gas, and biomass performs significantly worse than the other fuels, both in terms of DC and GWP. This is principally due to the high cost and energy consumption associated with the transport and storage stages (IEA, 2013). Observe also that our fuel 85-MeOHf shows the second highest DC, just behind methanol employing renewable electrolysis as the source of H2 (85-MeOHre).

For the definition of targets, we consider three cases: (1) the H2 flow defined as a variable; (2) the H2 cost as a variable (flows defined as constant); and (3) H2 produced from two different sources, namely fossil fuels and wind-powered electrolysis. Case 1 proves to be infeasible: our fuel 85-MeOHf cannot be projected onto the efficient frontier by modifying the flow of H2 alone, subject to the stoichiometric limit defined by the reaction that governs the process (Gonzalez-Garay and Guillen-Gosalbez, 2018). In Case 2 (see Figure 1), it is found that 85-MeOHf could reach the target defined by its peers, 85-EtOHsc and Bio-gasoline, on the efficient frontier, but this would call for a 60% reduction in the H2 price, down to 1.17 US$/kg. In practice, however, such a large drop is unlikely given the current state of the art in H2 production technology (Gonzalez-Garay and Guillen-Gosalbez, 2018).

In Case 3, 85-MeOHf could also reach the target defined by its peers, 85-MeOHre and Bio-gasoline, on the efficient frontier, by allowing for a mix of H2 from fossil fuels and wind-powered electrolysis. This projection increases DC by 7.9% and reduces GWP by 9.4% with respect to their initial values, which is enabled by: (i) the reduction of the total H2 flow to the stoichiometric lower bound, a decrease of 14.5% compared with the initial H2 flow; and (ii) the use of a 26–74% mix of H2 from fossil fuels and wind-powered electrolysis, respectively. This suggests that our fuel 85-MeOHf may only become competitive upon substituting a majority of the H2 feedstock from fossil fuels with a more sustainable production route, despite the resulting increase in fuel price.
URL:
https://www.sciencedirect.com/science/article/pii/B9780128186343500564
Optimal solutions for optimization in practice
Daryl Roxburgh, ... Tim Matthews, in Optimizing Optimization, 2010
3.8.2 Request: how to optimize in the absence of forecast returns
A client required a long–short mix of stocks selected on the basis of absolute betas rather than forecast returns, which were not available. In the absence of forecast returns, it is obviously not possible to form a traditional efficient frontier. However, the client could provide measures of beta, measured against the market, for each stock. These absolute betas were used as a constraint in conjunction with more common sector constraints within the optimization. Further, a nonlinear transaction cost function with terms dependent upon both an absolute size factor and the gross value of the portfolio was included. The result is that absolute beta is used to target the desired level of risk.
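This is not the authors' production formulation, but a minimal R sketch of the core idea on simulated data: with no return forecasts, risk is minimized subject to dollar neutrality and a targeted portfolio beta (the sector constraints and the nonlinear transaction-cost term are omitted):

```r
## Long-short portfolio with no forecast returns: minimize w'Sigma w
## subject to dollar neutrality (sum w = 0) and a target beta. The beta
## constraint, rather than expected return, sets the level of risk taken.
library(quadprog)

set.seed(1)
na    <- 20
Sigma <- crossprod(matrix(rnorm(60 * na, 0, 0.03), ncol = na)) / 60
beta  <- runif(na, 0.5, 1.5)       # simulated stock betas vs the market
b_tgt <- 0.3                       # targeted net portfolio beta (assumed)

sol <- solve.QP(Dmat = 2 * Sigma,
                dvec = rep(0, na),
                Amat = cbind(rep(1, na), beta),  # sum w = 0, beta'w = b_tgt
                bvec = c(0, b_tgt),
                meq  = 2)                        # both are equalities
w <- sol$solution
c(net_beta = sum(beta * w), gross = sum(abs(w)))
```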
URL:
https://www.sciencedirect.com/science/article/pii/B9780123749529000038
Data Envelopment Analysis (DEA)
Kingsley E. Haynes, Mustafa Dinc, in Encyclopedia of Social Measurement, 2005
What Is Data Envelopment Analysis?
Data envelopment analysis (DEA) is concerned with the evaluation of the performance of organizations in converting inputs to outputs. In DEA, such organizations are usually called decision-making units (DMUs). DEA is a powerful methodology that measures the efficiency of a DMU relative to other similar DMUs by identifying a "best practice" frontier, with the simple restriction that all DMUs lie on or below the efficiency frontier. DEA is a mathematical programming model that uses a set of nonparametric, linear programming techniques to estimate relative efficiency. The underlying assumption behind DEA is that if the most efficient DMU can produce Y amount of output by using X amount of input, then other DMUs should also be able to produce the same, if they are efficient. DEA combines all efficient DMUs and forms a virtual DMU with virtual inputs and outputs. If this virtual DMU is better than DMU k, either by making more output with the same input or by making the same output with less input, then DMU k is considered inefficient.
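As an illustration, a minimal R sketch of the input-oriented CCR envelopment model, which solves one small LP per evaluated DMU (the data are invented):

```r
## Input-oriented CCR DEA: for each evaluated DMU k solve
##   min theta  s.t.  X lambda <= theta * x_k,  Y lambda >= y_k,  lambda >= 0,
## over the variables (theta, lambda_1, ..., lambda_n).
library(lpSolve)

x <- matrix(c(4, 2, 6, 3, 5,        # input 1 for five DMUs (invented)
              3, 5, 2, 4, 6),       # input 2
            nrow = 2, byrow = TRUE)
y <- matrix(c(2, 3, 1, 3, 4), nrow = 1)   # a single output

n <- ncol(x)
theta <- sapply(seq_len(n), function(k) {
  ## Rows: inputs (theta * x_k - X lambda >= 0), then outputs (Y lambda >= y_k)
  const.mat <- rbind(cbind(x[, k], -x),
                     cbind(rep(0, nrow(y)), y))
  lp(direction    = "min",
     objective.in = c(1, rep(0, n)),        # minimize theta
     const.mat    = const.mat,
     const.dir    = rep(">=", nrow(const.mat)),
     const.rhs    = c(rep(0, nrow(x)), y[, k]))$objval
})
round(theta, 3)    # theta = 1 marks DMUs on the best-practice frontier
```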
A DEA study can have the following objectives:
• Identify the efficiency frontier and efficient units, and rank other units by their relative efficiency scores.
• Identify an efficiency measure that reflects the distance from each inefficient unit to the efficient frontier.
• Project inefficient units to the efficient frontier (efficient targets for the inefficient units).
• Identify efficient input–output relations.
• Evaluate the management of compared units for potential benchmarking.
• Evaluate the effectiveness of comparable programs or policies.
• Create a quantitative basis for reallocation of resources among units under evaluation.
• Identify sources and amounts of relative inefficiency in each of the compared units.
• Identify technical and allocative inefficiencies.
• Identify scale problems of units and determine the most productive scale size.
• Identify achievable targets for units under evaluation.
• Identify an individual unit's progress over time.
Differences between DEA and Other Efficiency Measurement Models
There are two basic approaches to quantifying the productive efficiency of a unit or entity: parametric (or econometric) and nonparametric (mathematical programming). These two approaches use different techniques to envelop a data set, with different assumptions for random noise and for the structure of the production technology. These assumptions, in fact, generate the strengths and weaknesses of both approaches. The essential differences can be grouped under two characteristics: (a) the econometric approach is stochastic, attempts to distinguish the effects of noise from the effects of inefficiency, and relies on sampling theory for the interpretation of its essentially statistical results; (b) the programming approach is nonstochastic, lumps noise and inefficiency together (calling this combination inefficiency), is built on the findings and observations of a population, and only projects efficiency relative to other observed units. The econometric approach is parametric and confounds the effects of misspecification of the functional form of production with inefficiency. The programming model is nonparametric and population based and hence less prone to this type of specification error.
Weaknesses and Strengths of DEA
It has been argued that the traditional parametric methods fail to measure productive efficiency satisfactorily for the following reasons: (a) most of the traditional approaches are based on process measures with little or no attention to important outcome measures; (b) such outcome measures, as well as some input factors, are qualitative, so quantifying them is difficult and assigning them proper relative weights is usually problematic; (c) it is very difficult to formulate an explicit functional relationship between inputs and outputs with fixed weights on the various factors; and (d) averaging performance across many DMUs, as in regression analysis, fails to explain the behavior of individual DMUs, particularly leaders and laggards. DEA has several characteristics that make it a powerful tool: (a) DEA can model multiple-input, multiple-output situations and does not require an assumption of a functional production form relating inputs to outputs; (b) DMUs are directly compared against a peer or a combination of peer units; and (c) DEA can have inputs and outputs with very different measurement units (for example, X1 could be in units of lives saved and X2 in units of dollars, without requiring a priori monetization or a prespecified trade-off between the two). There are other key aspects of DEA:
• The ability of DEA to incorporate environmental factors into the model, as uncontrollable inputs or outputs or by assessment of after-the-fact results.
• DEA is a nonparametric method, not requiring the user to hypothesize a mathematical form for the production function.
• DEA measures performance against efficient performance rather than average performance.
• DEA can identify the nature of returns to scale at each part of the efficient boundary area (facet).
• DEA can identify the sources of inefficiency in terms of excessive use of particular input resources or low levels of certain output generation.
• DEA offers accurate estimates of relative efficiency because it is a boundary method.
• DEA offers more accurate estimates of the marginal values of inputs or outputs, provided it offers no negligible marginal value for any variable.
• DEA allows for variable marginal values for different input–output mixes.
On the other hand, econometric approaches have some advantages over DEA:
• Econometric approaches offer a better predictor of future performance at the collective unit level if the assumed inefficiencies cannot be eliminated.
• Econometric approaches offer the ability to estimate confidence intervals for unit-related point estimates.
• Econometric approaches offer the ability to test assumptions about the mathematical relationships assumed between input and output variables.
• Econometric approaches may offer more stable estimates of efficiency and target input–output levels, because the estimates are not dependent on only a small subset of directly observed input–output levels.
• Econometric approach estimates of marginal input–output values and of efficiency are more transparent and can be more readily communicated to the layperson.
• Because DEA is an extremal value method, noise (even symmetrical noise with zero mean), such as measurement error, can cause significant problems.
• DEA is good at estimating the "relative" efficiency of a DMU, but it converges very slowly to "absolute" efficiency. In other words, it can tell you how well you are doing compared to your peers, but not compared to a "theoretical maximum." The latter is a strength of the econometric approach.
URL:
https://www.sciencedirect.com/science/article/pii/B0123693985003480
27th European Symposium on Computer Aided Process Engineering
Samira Mokhtar, ... Adrian James, in Computer Aided Chemical Engineering, 2017
Result of optimization
Every possible supplier allocation can be plotted in profit–risk space, where each point represents one supplier allocation set, and all such possible allocations define a region in this space. The line along the upper edge of the region is called the efficient frontier: on it, every allocation set has the maximum profit for a specific level of risk. In Figure 3, the efficient frontier is shown by the solid line, and 1000 random supplier order allocations, generated for different supplier weights, are presented as dots. For instance, a risk-taking decision maker may choose supplier allocation set A, in which the manufacturer's profit is maximal and the profit risk is 10%; the supplier weights in this allocation set are S1: 57%, S2: 16%, S3: 27%. A risk-averse decision maker may choose supplier allocation B, in which the profit risk is 1% and the supplier weights are S1: 90%, S2: 3%, S3: 7%, although the profit in allocation B is lower than in allocation A. As such, this methodology enables decision makers to share the risk of supply disruptions by choosing an optimal supplier order allocation. By monitoring the risk indicators of suppliers and predicting their effects on unit sales and product demand, decision makers can adjust the supplier portfolio in a timely manner, increasing profit and decreasing risk.
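An R sketch of the dot cloud and its upper edge, with invented supplier economics (the paper derives profit and risk from disruption scenarios; here they are reduced to an assumed margin and volatility per supplier):

```r
## Random supplier order allocations scored by expected profit and risk.
set.seed(7)
n_alloc <- 1000
w <- matrix(rexp(n_alloc * 3), ncol = 3)
w <- w / rowSums(w)                  # random weights for S1, S2, S3

margin <- c(10, 14, 12)              # per-supplier profit margins (assumed)
vol    <- c(1, 4, 2.5)               # disruption volatilities (assumed)

profit <- drop(w %*% margin)
risk   <- drop(sqrt((w^2) %*% (vol^2)))  # assumes independent disruptions

plot(risk, profit, pch = ".", xlab = "Profit risk", ylab = "Expected profit")
ord <- order(risk)                     # a running maximum approximates the
lines(risk[ord], cummax(profit[ord]))  # upper edge, i.e., the frontier
```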
URL:
https://www.sciencedirect.com/science/article/pii/B9780444639653502130
Some properties of averaging simulated optimization methods
John Knight, Stephen E. Satchell, in Optimizing Optimization, 2010
10.2 Section 2
In this section, we discuss the role of portfolio simulation and some of the criticisms of portfolio optimization. Portfolio optimization has been criticized for being excessively sensitive to errors in the forecasts of expected returns. This leads the optimizer to choose implausible portfolios and is a consequence of the difficulties in forecasting expected returns. Furthermore, these MV optimal portfolios lack the diversification deemed desirable by institutional investors; see Green and Hollifield (1992). A number of solutions to this problem have emerged. In some contexts, Bayesian priors on the expected returns are used to control the sampling variability of the means; see, for example, Satchell and Scowcroft (2003). Practitioners often employ large numbers of constraints on the portfolio weights to control the optimizer, and we shall refer to this as the practitioner's solution. This solution has been given some support in the context of MV optimization by Jagannathan and Ma (2002, 2003).
Michaud (1998) has advocated simulating the optimization. The advantage of this is that we get some sense of the variability of the solution; however, we need to understand what the averaging in the simulation will lead to.
To motivate our analysis, we consider how Michaud (1998) carries out his resampling methodology. Quoting from Michaud (op cit, pages 17, 19, and 37):
1. "Monte Carlo simulate 18 years of monthly returns based on data in Tables 2.3 and 2.4 …
2. Compute optimized input parameters from the simulated return data.
3. Compute efficient frontier portfolios…
4. Repeat steps 1–3 500 times…
5. …Observe the variability in the efficient frontier estimation."
The assumption behind the Monte Carlo simulation of returns can vary. It can be based on historical returns and involve resampling, or it may involve using means, variances, and covariances and simulating via multivariate normality, as Michaud details above; his Tables 2.3 and 2.4 contain first and second sample moments. The strength of the method comes from the law of large numbers: if we take enough replications, our sample statistics will converge to their expected values, where expectation is based on the assumed population distribution. If the statistic happens to be biased, then it will converge to its expectation, which will equal the "true" value plus the bias. As we will show, the simulated average frontier is biased. This implies that the mean simulated efficient frontier will differ from the "population" efficient frontier based on the information in Step 1 by the degree of finite-sample bias. Whilst this should be small for T = 216 monthly observations, many portfolio calculations will be based on much shorter time periods for the usual reasons: regime shifts, institutional change, and time-varying parameters. Furthermore, we conjecture, and subsequently show, that it is not T alone that determines the bias but T and N (the number of stocks) jointly. If N is large, then even for large T the biases can be very large indeed.
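A stripped-down R sketch of the resampling loop under multivariate normality (all moments simulated; Michaud's procedure additionally averages the resampled portfolio weights):

```r
## Michaud-style frontier resampling: draw T months of returns from
## assumed population moments, re-estimate the moments, recompute the
## minimum variance frontier, repeat, and average across replications.
library(MASS)                         # for mvrnorm

set.seed(3)
N <- 5; T <- 216
mu_pop    <- runif(N, 0.003, 0.010)
Sigma_pop <- crossprod(matrix(rnorm(N * N, 0, 0.03), N)) + diag(N) * 1e-4

## Minimum standard deviation for target return r, from the efficient-set
## constants a = mu' S^-1 mu, beta = i' S^-1 mu, gamma = i' S^-1 i.
frontier_sd <- function(mu, Sigma, r_grid) {
  S1 <- solve(Sigma); i <- rep(1, length(mu))
  a <- drop(mu %*% S1 %*% mu); b <- drop(i %*% S1 %*% mu)
  g <- drop(i %*% S1 %*% i)
  sapply(r_grid, function(r) sqrt((g * r^2 - 2 * b * r + a) / (a * g - b^2)))
}

r_grid <- seq(0.004, 0.009, length.out = 20)
sims <- replicate(500, {
  R <- mvrnorm(T, mu_pop, Sigma_pop)          # one simulated history
  frontier_sd(colMeans(R), cov(R), r_grid)    # frontier from estimates
})

plot(frontier_sd(mu_pop, Sigma_pop, r_grid), r_grid, type = "l",
     xlab = "Standard deviation", ylab = "Expected return")
lines(rowMeans(sims), r_grid, lty = 2)   # averaged frontier: note the bias
```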
It is worth noting that the emphasis of the above approach is in terms of the MV efficient frontier analysis rather than expected utility. But as we shall show next, maximizing quadratic utility gives you a solution that is expressed solely in terms of efficient-set mathematics; the only additional information is the risk aversion coefficient (λ); as we change λ, we move along the MV frontier in any case.
Jobson (1991) derives a number of key results in this area for the conventional minimum variance frontier, and we shall refer to these results when appropriate. Stein (2002) has also derived some of our results. In a recent related paper, Okhrin and Schmid (2006) consider the standard quadratic utility maximization subject to adding up constraints. Their focus, unlike ours, is the calculation of the distributional properties of the optimal portfolio weights. Our concern, on the other hand, is the distributional properties of portfolio summary measures such as the portfolio mean return (α), the tracking error (TE), and the information or Sharpe ratio (IR). Some of our results could be deduced from those of Okhrin and Schmid (2006); in what follows, we will indicate where this occurs.
Consider the active weights ω and the known benchmark weights b, both (N×1) vectors that sum to 1, i.e., ω′i = b′i = 1. Let μ be the (N×1) mean vector and Ω the (N×N) covariance matrix of the N asset returns, where the letter i denotes an (N×1) vector of ones.
Our investor chooses to maximize U, where U = μ′(ω−b)−λ/2(ω−b)′Ω(ω−b); note that there is also a constraint (ω−b)′i = 0. This is a classical MV problem equivalent, as we demonstrate, to computing the optimal frontier. It is straightforward to see that as λ ranges from 0 to ∞, we move down the frontier from the maximum expected return portfolio to the global minimum variance portfolio. This framework is widely used in finance, see Sharpe (1981), Grinold and Kahn (1999), and Scherer (2002).
Our first-order condition is:

μ − λΩ(ω − b) − θi = 0,

where θ is the multiplier on the constraint. Using i′(ω − b) = 0, we see that θ = β/γ, where β = i′Ω⁻¹μ, γ = i′Ω⁻¹i, and we set a = μ′Ω⁻¹μ. Thus ω − b = (1/λ)Ω⁻¹(μ − (β/γ)i), and hence active returns α can be computed as:

(10.1) α = μ′(ω − b) = (1/λ)(a − β²/γ).

Other terms of interest can be calculated. For example, we have:

(10.2) TE² = (ω − b)′Ω(ω − b) = (1/λ²)(a − β²/γ),

and we will focus on the tracking error, or standard deviation of relative returns. Finally,

(10.3) TE = (1/λ)√(a − β²/γ).

It is straightforward to compute the information ratio, defined as IR = α/TE = √(a − β²/γ). Notice that in this problem all terms depend essentially on a single term, (aγ − β²)/γ, or functions of it.
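A short R sketch that evaluates these closed-form quantities on simulated inputs and confirms the single-term structure:

```r
## Evaluate alpha, TE, and IR from the efficient-set constants a, beta,
## gamma for a given risk aversion lambda. All inputs are simulated.
set.seed(11)
N  <- 8
mu <- runif(N, 0.02, 0.10)
Omega <- crossprod(matrix(rnorm(N * N, 0, 0.05), N)) + diag(N) * 1e-3
lambda <- 3
i <- rep(1, N)

Oi <- solve(Omega)
a     <- drop(mu %*% Oi %*% mu)
beta  <- drop(i  %*% Oi %*% mu)
gamma <- drop(i  %*% Oi %*% i)

wb    <- (1 / lambda) * Oi %*% (mu - (beta / gamma) * i)   # omega - b
alpha <- drop(mu %*% wb)                # equals (1/lambda)(a - beta^2/gamma)
TE    <- sqrt(drop(t(wb) %*% Omega %*% wb))
IR    <- alpha / TE                     # equals sqrt(a - beta^2/gamma)

c(IR = IR, closed_form = sqrt(a - beta^2 / gamma))   # the two agree
```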
URL:
https://www.sciencedirect.com/science/article/pii/B9780123749529000105
23rd European Symposium on Computer Aided Process Engineering
Bruna Mota, ... Ana Paula Barbosa-Póvoa, in Computer Aided Chemical Engineering, 2013
4 Final Remarks
This work proposes an optimization model for the design and planning of a closed-loop supply chain under the three pillars of sustainability: economic, environmental, and social. The environmental impact is assessed through ReCiPe 2008. A social benefit indicator was developed that favors job creation in less developed regions. The ε-constraint method is used to obtain the efficient frontier between the economic and social performances, and the results show that significant improvements in the overall performance of the supply chain can be achieved.
As future work, it would be important to complement this study of the social benefit indicator with a quantitative evaluation of the improvement or decline in quality of life that comes from job creation in a given region. Moreover, this social benefit indicator could be used in designing government incentives for companies to locate their facilities in these preferred regions. Also intended is the incorporation of uncertainty in both internal factors (e.g., environmental and social performance) and external factors (e.g., demand).
URL:
https://www.sciencedirect.com/science/article/pii/B9780444632340501500
Source: https://www.sciencedirect.com/topics/computer-science/efficient-frontier