Understanding the Resampled Efficient Frontier in Financial Analysis

The resampled efficient frontier is a powerful tool in financial analysis that helps investors make informed decisions. It's a way to evaluate the risk and return of different investment portfolios by resampling the historical data.

The resampled efficient frontier is based on the idea that past performance is not always a reliable indicator of future results. By resampling the data, we can get a more accurate picture of the potential risks and returns of different investments.

Resampling involves splitting the historical data into different subsets and recalculating the portfolio's performance for each subset. This process helps to identify the most efficient portfolios that can withstand different market conditions.
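
To make the idea concrete, here is a minimal, illustrative numpy sketch (not from the original article) that bootstraps the return history and recomputes a fixed portfolio's risk/return profile on each resample:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy inputs: 1,000 days of daily returns for 4 assets and a candidate portfolio.
returns = rng.normal(0.0005, 0.01, size=(1000, 4))
weights = np.array([0.4, 0.3, 0.2, 0.1])

def resampled_performance(returns, weights, n_samples=500):
    """Recompute annualized return/volatility on bootstrap subsets of the history."""
    out = []
    for _ in range(n_samples):
        idx = rng.integers(0, len(returns), size=len(returns))  # rows, with replacement
        port = returns[idx] @ weights
        out.append((port.mean() * 252, port.std() * np.sqrt(252)))
    return np.array(out)

perf = resampled_performance(returns, weights)
print("annualized return spread:", perf[:, 0].min(), "to", perf[:, 0].max())
print("annualized vol spread:   ", perf[:, 1].min(), "to", perf[:, 1].max())
```

The spread of these resampled profiles gives a sense of how much the point estimates can be trusted.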

By using the resampled efficient frontier, investors can create a more diversified portfolio that takes into account the potential risks and returns of different investments.

The Efficient Frontier

The efficient frontier is the set of portfolios that offer the highest expected return for each level of risk. The resampled variant, used in investment portfolio construction, averages a set of such portfolios to create an effective portfolio: not necessarily the optimal one, but one that is better balanced between risk and rate of return.

It's used when an investor or analyst is faced with determining which asset classes to invest in and what proportion of the total portfolio should be of each asset class.

The efficient frontier is highly sensitive to its inputs and is usually composed of portfolios that are not well diversified. This is a well-known characteristic of Markowitz-style portfolio optimization.

Practitioners usually add constraints on the range of weights to avoid this. We will now see a different approach using resampling.
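
As a sketch of what those weight constraints look like in practice, here is a small mean-variance optimizer with an upper bound on each weight, using scipy; the function name, parameters, and inputs are illustrative assumptions, not from the article:

```python
import numpy as np
from scipy.optimize import minimize

def mv_weights(mu, sigma, risk_aversion=4.0, w_max=0.5):
    """Maximize mu'w - (risk_aversion/2) w'Sigma w  s.t.  sum(w)=1, 0 <= w <= w_max."""
    n = len(mu)
    objective = lambda w: -(mu @ w - 0.5 * risk_aversion * w @ sigma @ w)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, w_max)] * n  # the weight caps that curb extreme concentration
    res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=constraints)
    return res.x

mu = np.array([0.06, 0.05, 0.07, 0.04])    # illustrative expected returns
sigma = np.diag([0.04, 0.02, 0.09, 0.01])  # illustrative covariance matrix
print(mv_weights(mu, sigma).round(3))      # no single asset exceeds 50%
```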

The efficient frontier can be estimated using different samples of returns. For example, in one study, the efficient frontier was estimated using three different samples: the full sample, the pre-Covid sample, and the post-Covid sample.

The pre-Covid efficient frontier is much "shorter", reflecting a much narrower range for the risk estimates, and is located above the others in its range of volatilities.

The post-Covid efficient frontier extends much further to the right, due to the much higher volatilities, as well as the higher expected returns of equities.

The composition of the efficient portfolios changes quite significantly depending on the sample period. It's also evident that the efficient portfolios are not well diversified and are dominated by 2-3 assets.

Resampling Methods

Resampling is a technique used to improve the stability and robustness of mean-variance portfolio optimization. It involves obtaining many samples from the distribution of returns, estimating the efficient frontier in each sample, and averaging the resulting portfolio weights to create a resampled efficient frontier.

This method can help reduce extreme concentrations in the optimal portfolios and produce a more robust efficient frontier. By averaging the portfolio weights from multiple samples, resampling can smooth out the effects of noise in the estimates of the expected returns.

There are two main approaches to resampling: Michaud Resampling and Subset Resampling. Michaud Resampling involves sampling a mean vector and covariance matrix of returns from a distribution centered at the original point estimates, calculating a mean-variance (MV) efficient frontier for each sample, and averaging the portfolio weights to form the resampled-efficient (RE) optimal portfolio.
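
A minimal sketch of Michaud-style resampling for a single point on the frontier might look like the following; the `optimizer` callable stands in for any MV portfolio estimator (for example, the constrained one sketched earlier), and all names are illustrative:

```python
import numpy as np

def michaud_weights(mu_hat, sigma_hat, n_obs, optimizer, n_draws=500, seed=0):
    """Average MV-optimal weights over parametrically resampled input estimates."""
    rng = np.random.default_rng(seed)
    w_sum = np.zeros(len(mu_hat))
    for _ in range(n_draws):
        # Simulate a synthetic history from the original point estimates...
        sample = rng.multivariate_normal(mu_hat, sigma_hat, size=n_obs)
        # ...re-estimate the inputs, then optimize on the noisy estimates.
        mu_i = sample.mean(axis=0)
        sigma_i = np.cov(sample, rowvar=False)
        w_sum += optimizer(mu_i, sigma_i)
    return w_sum / n_draws  # the RE-optimal portfolio at this point
```

Repeating this across a grid of target risks or risk-aversion levels traces out the full resampled efficient frontier.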

Subset Resampling, on the other hand, works by averaging allocations estimated with a noise-sensitive portfolio estimator over many masked subsets of assets. This method is similar to ensemble methods in machine learning, where many weak learners are merged to produce a robust estimation.

Here are some key differences between Michaud Resampling and Subset Resampling:

  • Michaud Resampling uses a normal distribution centered at the original point estimates, while Subset Resampling uses masked subsets of assets.
  • Michaud Resampling calculates an MV efficient frontier, while Subset Resampling uses a noise-sensitive portfolio estimator.
  • Michaud Resampling averages the portfolio weights to form the RE optimal portfolio, while Subset Resampling averages the allocation estimated with the noise-sensitive portfolio estimator.

Subset Resampling

Subset Resampling is a technique that reduces parameter estimation risk and smooths away outlier effects in portfolio optimization.

Subset Resampling works by averaging allocation estimates over many masked subsets of assets. The mask-optimization process is repeated many times, considering only a certain fraction of the entire asset universe each time.

The fraction of assets to be sampled at each iteration controls a smooth blending between the equally weighted portfolio and the single portfolio estimator. With a fraction of 1/N, each subset contains a single asset, so averaging over many iterations asymptotically reproduces the EquallyWeighted portfolio.

The SubsetResampling meta-estimator can be controlled by a single parameter, selecting the fraction of assets to be sampled at each iteration.

Here's a breakdown of the SubsetResampling process, with a minimal sketch after the list:

  • Randomly draw a subset containing a fixed fraction of the asset universe.
  • Run the noise-sensitive portfolio estimator on that subset alone.
  • Map the subset weights back to the full universe, with masked-out assets receiving zero weight.
  • Repeat many times and average the resulting allocations.
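
Here is a sketch of that loop, with an illustrative `estimator` callable standing in for the noise-sensitive portfolio estimator:

```python
import numpy as np

def subset_resampled_weights(returns, estimator, fraction=0.5, n_iter=500, seed=0):
    """Average an estimator's allocations over many random asset subsets."""
    rng = np.random.default_rng(seed)
    n_assets = returns.shape[1]
    k = max(1, int(round(fraction * n_assets)))  # assets drawn per iteration
    w_sum = np.zeros(n_assets)
    for _ in range(n_iter):
        subset = rng.choice(n_assets, size=k, replace=False)
        w_sum[subset] += estimator(returns[:, subset])
        # masked-out assets implicitly receive zero weight this round
    w = w_sum / n_iter
    return w / w.sum()  # renormalize to a fully invested portfolio
```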

Subset Resampling is an effective way to obtain more robust estimates of efficient portfolios, as it reduces parameter estimation risk and smooths away outlier effects.

By averaging allocation estimates over many masked subsets of assets, Subset Resampling helps to produce a more diversified and robust efficient frontier. This is particularly useful when dealing with volatile periods, such as the Covid-19 crisis.

Stability Measures

Stability measures are crucial in evaluating the robustness of resampling methods. We'll examine two key indicators: Estimation Error and Sensitivity to Extreme Risk.

The Estimation Error indicator measures the aggregate sensitivity of the forecast to a large number of probable scenarios. It's computed by averaging, over 500 different synthetic scenarios, the mean deviation between the portfolios' risk/return profiles under each scenario and the expected ones.

The metric is formally defined as

$$ST = \frac{1}{S}\sum_{i=1}^{S}\frac{1}{N}\sum_{p=1}^{N} d_M\!\left(\vec{x}_p, \vec{x}_p^{\,i}\right)^2,$$

where $d_M(\vec{x}_p, \vec{x}_p^{\,i})$ is the Mahalanobis distance between the expected return and risk of portfolio $p$ and its counterpart under synthetic scenario $i$.

Higher values of the Estimation Error metric represent lower robustness, indicating higher average dispersion and lower reliability of the estimates. In this study, we'll use S=500 different scenarios generated using the sampling process described in Section 2.
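
A sketch of how this metric could be computed, assuming the expected and scenario risk/return profiles are already collected into arrays; the shapes and the 2x2 covariance used for the Mahalanobis distance are assumptions for illustration:

```python
import numpy as np

def estimation_error(expected, scenarios, cov):
    """ST: mean squared Mahalanobis deviation of scenario profiles from expected ones.

    expected:  (N, 2) expected (return, risk) per frontier portfolio
    scenarios: (S, N, 2) the same profiles under S synthetic scenarios
    cov:       (2, 2) covariance of the return/risk dimensions
    """
    inv = np.linalg.inv(cov)
    diff = scenarios - expected                        # broadcast over scenarios
    d2 = np.einsum("snk,kl,snl->sn", diff, inv, diff)  # squared Mahalanobis
    return d2.mean()                                   # (1/S)(1/N) double sum
```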

Sensitivity to Extreme Risk evaluates the impact of worst-case scenarios on the reliability of the forecasted efficient frontier. This metric is a constrained version of the Estimation Error, focusing on the w worst scenarios.

The computation of the mean average Mahalanobis distance is limited to the scenarios for which this deviation is the largest. This serves as a tail risk indicator that measures the expected average negative outcome of the realization of the worst scenarios with probability w/S.

We'll consider two thresholds: the 5% worst scenarios and the 1% ones.
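
Under the same assumptions as the previous sketch, restricting the average to the worst scenarios is a small change:

```python
import numpy as np

def extreme_risk_sensitivity(expected, scenarios, cov, w_frac=0.05):
    """Average Mahalanobis deviation over only the w worst scenarios."""
    inv = np.linalg.inv(cov)
    diff = scenarios - expected
    per_scenario = np.einsum("snk,kl,snl->sn", diff, inv, diff).mean(axis=1)
    w = max(1, int(round(w_frac * len(per_scenario))))  # 5% or 1% thresholds
    return np.sort(per_scenario)[-w:].mean()            # largest deviations only
```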

Approaches

One approach to creating a resampled efficient frontier is to approximate the mean efficient frontier with a continuous function by fitting a polynomial function to the data. A good starting point for the polynomial order is 2, but it could be higher depending on the structure of the data.

The polynomial function is then used to obtain the returns for specific risks, dividing the range of returns and risk into user-defined linearly spaced values. This process results in a string of boxes where the upper-right corner of one box is connected to the bottom-left corner of the next.
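
In numpy terms, the fit and the linearly spaced sampling might look like this; the input arrays are illustrative placeholders:

```python
import numpy as np

# Risk/return pairs pooled from the resampled efficient frontiers (placeholders).
risks = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.22])
rets = np.array([0.040, 0.050, 0.060, 0.070, 0.075, 0.080])

coeffs = np.polyfit(risks, rets, deg=2)           # order 2 as a starting point
frontier = np.poly1d(coeffs)                      # continuous approximation

grid = np.linspace(risks.min(), risks.max(), 10)  # user-defined spaced risks
print(np.column_stack([grid, frontier(grid)]))    # returns for specific risks
```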

The resulting resampled frontier is used in investment portfolio construction under modern portfolio theory to create a more balanced portfolio that is not necessarily the optimal one.

Approach 3

Approach 3 is similar to the previous one, but uses a different portfolio selection strategy. It involves fitting a polynomial reference efficient frontier and then splitting the portfolio set into two sections: those above the reference line and those below.

The final estimate for the efficient frontier in this approach consists of the non-dominated portfolios among the second set, denoted by +, and shown in red in Figure 5. This approach allows for a more nuanced understanding of the efficient frontier.
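
Here is a sketch of the non-dominated filter, under the usual convention that one portfolio dominates another if it has no higher risk and no lower return, with at least one strict inequality; the array names are illustrative:

```python
import numpy as np

def non_dominated(risk, ret):
    """Indices of portfolios not dominated by any other in (risk, return)."""
    keep = []
    for i in range(len(risk)):
        dominated = np.any(
            (risk <= risk[i]) & (ret >= ret[i])
            & ((risk < risk[i]) | (ret > ret[i]))  # strictly better somewhere
        )
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Approach 3, sketched: filter only the portfolios below the reference line.
# below = ret < frontier(risk)
# final = non_dominated(risk[below], ret[below])
```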

We can use this approach to create a more robust and diversified efficient frontier by selecting the non-dominated portfolios from the second set. This can help to reduce the sensitivity of the efficient frontier to the inputs.

By using this approach, we can create a more stable and diversified efficient frontier that's less prone to extreme weights on individual assets. This can be especially useful when working with historical data or uncertain inputs.

Suggested Approaches

One approach to tackle the limitation of averaging feasible solutions is to select promising candidates among a large set of feasible ones.

This strategy involves resampling the dataset to generate synthetic scenarios; each scenario is then optimized, yielding as many efficient frontiers as scenarios.

A polynomial function is then fitted to the combined set of solutions, allowing for the selection of linearly spaced values along the risk axis.

For each of these datapoints, the portfolio with the most similar risk/reward profile among the complete set is found.

The combined resampled efficient frontier is more likely to be robust, consisting of portfolios optimized for different scenarios that might react differently to deviations between the expected parameters and the real ones.

To overcome the problem of the reliability of the expected risk/return profile of the portfolios, they are exposed to several synthetic scenarios and the resulting risks and returns are averaged.

This process results in more reliable forecasts for the expected risk and return for the portfolio, which are then used to fit the polynomial function and identify the final set.

The Mahalanobis distance is suggested as the similarity measure due to its ability to handle differences in scale and proportionality between the two dimensions.
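
A sketch of that matching step, assuming the target points come from the fitted polynomial and the profiles are the scenario-averaged (risk, return) pairs; shapes and names are illustrative:

```python
import numpy as np

def match_portfolios(targets, profiles, cov):
    """For each target (risk, return) point, pick the nearest portfolio profile."""
    inv = np.linalg.inv(cov)
    picks = []
    for t in targets:
        diff = profiles - t                             # (N, 2) deviations
        d2 = np.einsum("nk,kl,nl->n", diff, inv, diff)  # squared Mahalanobis
        picks.append(int(np.argmin(d2)))                # most similar profile
    return picks
```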

Approach 1

Approach 1 approximates the mean efficient frontier with a continuous function by fitting a polynomial to the complete set of resampled efficient frontiers. A polynomial order of 2 is likely a good starting point, but it's not a hard and fast rule: the order could be higher depending on the structure of the data.

The polynomial function is used to obtain the returns for a range of risks, which are divided into a user-defined number of linearly spaced values. This process results in a string of boxes where the upper-right corner of the one on the left is connected to the bottom-left corner of the one on the right.

The boxes are defined by reference points along the polynomial trendline, which serve as corners. Portfolios in the final solution are obtained by averaging the composition of portfolios in the same box.
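
Here is a sketch of that box-averaging step, assuming the fitted `frontier` polynomial is increasing over the sampled risk range; all names are illustrative:

```python
import numpy as np

def box_averaged_portfolios(weights, risk, ret, frontier, n_boxes=10):
    """Average the compositions of portfolios that fall in the same box.

    weights: (P, n_assets) compositions; risk, ret: (P,) profiles;
    frontier: fitted polynomial giving the reference return at each risk.
    """
    edges = np.linspace(risk.min(), risk.max(), n_boxes + 1)
    averaged = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Box corners sit on the trendline: (lo, frontier(lo)) to (hi, frontier(hi)).
        in_box = (risk >= lo) & (risk < hi) & (ret >= frontier(lo)) & (ret < frontier(hi))
        if in_box.any():  # some boxes may stay empty
            averaged.append(weights[in_box].mean(axis=0))
    return np.array(averaged)
```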

The number of boxes defines an upper bound, but it doesn't guarantee that there will be portfolios in every box. A second limitation of this approach is that averaging portfolios might be problematic when dealing with investment limits or cardinality constraints.

Experimental Analysis

In this section, we describe the experimental analysis used to compare the performance of the three alternatives.

The sample used for this analysis has a direct bearing on the accuracy of the results, so we introduce it first to provide a clear understanding of the data used.

The process of estimating the efficient frontiers for the synthetic scenario involves several steps, and we describe these in detail. This includes the methods used to calculate the frontiers and the assumptions made.

The performance indicators used to evaluate the robustness of the solutions are key to understanding the reliability of the results. We report on the specific indicators used, namely the stability and extreme-risk sensitivity measures introduced earlier.

Experimental Design and Optimization

Experimental design is crucial for collecting reliable data. A well-designed experiment should have a clear objective and a controlled environment to minimize external variables.

The type of experimental design used depends on the research question and the variables involved. For example, in the case of a randomized controlled trial, participants are randomly assigned to either an experimental group or a control group to compare the effects of a treatment.

A key aspect of experimental design is ensuring that the experiment is replicable. This means that the experiment should be able to be repeated with the same results. The use of standard operating procedures and clear documentation can help ensure that the experiment is replicable.

Replication is essential for building confidence in the results of an experiment. By repeating the experiment multiple times, researchers can increase the reliability of the results: for example, researchers testing a new medication would replicate their findings several times before trusting them.

Optimization of the experimental design is also important. This can involve adjusting the sample size, the number of participants, or the duration of the experiment to improve the accuracy of the results.

Experimental Results

The performance indicators were computed for 120 consecutive months, from January 2011 to December 2020.

We used three performance indicators: stability and sensitivity to extreme risks at 5% and 1%. These indicators are crucial in evaluating the robustness of the solutions.

The main experimental results are reported in a table, which provides the main descriptive statistics for the robustness indicators by approach.

For each approach, we reported the mean, median, standard deviation, minimum and maximum value for the three performance indicators.
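
Computing those descriptive statistics is straightforward; here is a minimal sketch over one indicator's 120 monthly values, with an illustrative variable name:

```python
import numpy as np

def describe(values):
    """Mean, median, standard deviation, min, and max of one indicator."""
    v = np.asarray(values)
    return {"mean": v.mean(), "median": np.median(v),
            "std": v.std(ddof=1), "min": v.min(), "max": v.max()}

# e.g. stability_by_month holds 120 monthly stability values for one approach:
# print(describe(stability_by_month))
```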

Table 6

Let's take a closer look at Table 6, which provides some valuable insights into the performance of the three strategies introduced in this work.

The mean stability of the standard approach is 0.7546, indicating that it's relatively stable overall. However, the median stability is lower at 0.7475, suggesting that a few high-value outliers are pulling the average up.

The first approach has a mean stability of 0.7670, which is slightly higher than the standard approach. The median stability is also higher at 0.7713, indicating that this approach is more consistent than the standard one.

Interestingly, the reconstruction operator turned out to be very destructive, and using it with these inputs degrades the performance of the first approach very significantly in terms of both mean and median.

Comparing the mean stability values across the three approaches, the second and third approaches come out higher than the first, indicating that they are more stable overall.

Frequently Asked Questions

What is resampled mean variance optimization?

Resampled mean variance optimization is a portfolio optimization method that combines traditional mean-variance optimization with Monte Carlo simulations and resampling techniques to improve risk management. This approach acknowledges the uncertainty of capital market assumptions and provides a more robust framework for investment decision-making.
