Monte Carlo simulation is a computational algorithm that relies on repeated random sampling to obtain numerical results. It is used to model the probability of different outcomes in processes that cannot easily be predicted because of the intervention of random variables: the method estimates the behavior of a system by repeatedly sampling random inputs and observing the corresponding outputs.
Theoretical Foundations
At the core of Monte Carlo simulations is the concept of random sampling from a probability distribution, which allows uncertain events or variables within a system to be modeled. The simulation involves running many iterations, or scenarios, each time using different randomly generated values. These iterations produce a range of outcomes that can then be used to forecast future events or understand potential risks. Monte Carlo simulations often employ various probability distributions (such as the normal, log-normal, or uniform) to model different types of uncertainty or risk; the choice of distribution depends on the nature of the variable being modeled.

The method is distinct from bootstrapping, which can be parametric, semiparametric, or non-parametric. In the parametric approach, a model (for example, a linear regression) is fitted to the sample and new data are simulated from the estimated parameters and residuals. The semiparametric method estimates the coefficients and resamples only the residuals, while the non-parametric approach resamples the observed data directly. In a Monte Carlo simulation, by contrast, the parameters are chosen deliberately, and the analysis explores the variability of outcomes induced by the distributions assumed to represent those parameters.
After numerous iterations, the outcomes are combined into statistical measures such as the mean, median, variance, and percentiles. This aggregation is invaluable for understanding the probability and consequences of the various outcomes. Modern computing power has significantly enhanced the accessibility and efficiency of Monte Carlo simulations, enabling thousands or even millions of scenarios to be processed quickly and allowing a comprehensive evaluation of risk and uncertainty.
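As a minimal illustration of the sampling and aggregation steps described above, the sketch below simulates a toy profit model; the model, distributions, and parameter values are hypothetical choices made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_iterations = 100_000

# Each iteration draws one value per uncertain input from its assumed distribution.
demand = rng.normal(loc=1000, scale=100, size=n_iterations)        # normally distributed demand
unit_cost = rng.lognormal(mean=1.0, sigma=0.2, size=n_iterations)  # log-normal unit cost
price = rng.uniform(low=4.0, high=6.0, size=n_iterations)          # uniform selling price

# Toy model evaluated once per scenario.
profit = demand * (price - unit_cost)

# Aggregate the simulated outcomes into summary statistics.
print(f"mean: {profit.mean():.2f}  median: {np.median(profit):.2f}  variance: {profit.var():.2f}")
p5, p95 = np.percentile(profit, [5, 95])
print(f"5th/95th percentiles: {p5:.2f} / {p95:.2f}")
```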
Origins
The term "Monte Carlo simulation" originates from the renowned Monte Carlo Casino in Monaco. The technique was developed in the 1940s by scientists including Stanislaw Ulam and John von Neumann to address mathematical problems in probability and statistics, and it takes its name from the casino because the repeated random sampling at its heart resembles games of chance.
The Python script described here simulates roulette games, capturing the dynamics of both fair and biased scenarios. It defines three roulette classes: FairRoulette, EuRoulette, and AmRoulette. The FairRoulette class represents a standard, unbiased game with pockets numbered 1 to 36 and a fair spin mechanism: the spin method randomly selects a pocket, and the betPocket method computes the payout for the chosen pocket. The EuRoulette class inherits from FairRoulette but adds a single zero pocket ('0') to emulate the European variant. Building on the European variant, the AmRoulette class adds a double-zero pocket ('00') to model the American style. The script runs simulations for each type of roulette with varying numbers of spins, using the playRoulette function to play the game and record the average return per spin. The results are then plotted with Matplotlib to visualize how the average returns evolve as the number of spins grows for fair, European, and American roulette.
Code
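The original listing is not reproduced here; the following is a minimal sketch consistent with the description above. The class and function names come from the text, while the 35:1 single-number payout (which makes the 36-pocket game fair) is an assumption.

```python
import random

class FairRoulette:
    """Unbiased roulette with pockets numbered 1-36."""
    def __init__(self):
        self.pockets = [str(i) for i in range(1, 37)]
        self.ball = None
        # Assumed payout: a winning single-number bet pays 35:1, which
        # makes the 36-pocket game fair (expected value of zero).
        self.pocket_odds = 35

    def spin(self):
        self.ball = random.choice(self.pockets)

    def betPocket(self, pocket, amt):
        # Net result of the bet: +amt*odds on a win, -amt otherwise.
        return amt * self.pocket_odds if str(pocket) == self.ball else -amt

class EuRoulette(FairRoulette):
    """European variant: one extra single-zero pocket."""
    def __init__(self):
        super().__init__()
        self.pockets.append('0')

class AmRoulette(EuRoulette):
    """American variant: adds a double-zero pocket on top of the single zero."""
    def __init__(self):
        super().__init__()
        self.pockets.append('00')

def playRoulette(game, num_spins, pocket='2', bet=1):
    """Bets on one pocket for num_spins spins; returns the average return per spin."""
    total = 0
    for _ in range(num_spins):
        game.spin()
        total += game.betPocket(pocket, bet)
    return total / num_spins
```

Calling, for instance, playRoulette(AmRoulette(), 100_000) for increasing spin counts and plotting the averages with Matplotlib reproduces the convergence behavior discussed next.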
The three graphs depict the results of Monte Carlo simulations for the three roulette variants: fair, European, and American. The simulations are run with an increasing number of spins, illustrating how the average outcomes converge towards the expected value in the long run, in line with the law of large numbers. In the fair roulette graph, it is evident that as the number of spins increases, the average outcome converges towards zero, matching the expectation for a fair game whose expected value is zero. The European roulette graph shows a negative trend as the number of spins grows: the presence of a single zero produces a negative expected value. As with fair roulette, the simulation demonstrates that with a large number of iterations the average outcome approaches the expected value, reflecting the average loss per spin for players. The American roulette graph shows an even more pronounced negative trend than the European one, since the presence of both the single and the double zero yields a more negative expected value. In general, Monte Carlo simulation provides a clear view of the long-term behavior of the different roulette variants: even though individual results vary, the law of large numbers ensures that the average outcome approaches the expected value, and the variance of that average shrinks towards zero as the number of spins grows.
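To make these expected values concrete, assume a one-unit bet on a single number that pays 35:1 (the payout that makes the 36-pocket game fair):

E_fair = (1/36)(35) - (35/36)(1) = 0
E_european = (1/37)(35) - (36/37)(1) = -1/37 ≈ -2.7% per unit staked
E_american = (1/38)(35) - (37/38)(1) = -2/38 ≈ -5.3% per unit staked

This is why the American curve settles at the steepest average loss per spin.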
Real-world applications
The inherent strength of this methodology lies in its ability to systematically navigate the intricate web of potential outcomes, offering a comprehensive understanding of the myriad scenarios that may shape the trajectory of complex systems. By harnessing the power of random sampling from probability distributions, Monte Carlo simulations not only enable a nuanced exploration of uncertainties but also facilitate the formulation of informed strategies and risk management decisions. In finance, for instance, these simulations prove invaluable in scenario analysis, portfolio optimization, and derivative pricing, allowing stakeholders to make more robust and data-driven choices in the face of intricate and unpredictable market dynamics. In econometrics, Monte Carlo simulations play a pivotal role in validating statistical models, assessing the robustness of estimators, and quantifying the impact of uncertainties on economic predictions.
Stocks portfolio
Here we simulated an equity portfolio with random weights consisting of Google, Nvidia, Amazon, Meta, and Apple stocks. We used the 800 trading days preceding 09/01/24 as the historical window and ran 1,000 simulations: for each simulated day we took the mean of the returns over those 800 days and added the product of the Cholesky decomposition of the correlation structure of the stocks and a matrix of random values drawn from a normal distribution, which gives the parameters their variability while preserving the correlations. We then computed the weighted average of these simulated returns to obtain the portfolio return for each day of the 100-day horizon chosen for the Monte Carlo method. Finally, we plotted all the paths the portfolio value took in the simulations over the 100 days, together with a histogram of the final values and the fitted normal distribution.
Graphics
Code
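The original notebook is not reproduced; the sketch below follows the procedure just described, with placeholder return data standing in for the downloaded price history. The initial portfolio value, and the use of the covariance matrix (the correlation matrix scaled by the volatilities), are assumptions of the sketch.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed input: an (800, 5) array of daily returns for GOOGL, NVDA, AMZN,
# META, AAPL over the 800 days preceding 09/01/24 (placeholder data here).
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(800, 5))

n_sims, horizon, n_assets = 1000, 100, returns.shape[1]
mean_returns = returns.mean(axis=0)
L = np.linalg.cholesky(np.cov(returns, rowvar=False))  # Cholesky factor of the covariance matrix

weights = rng.random(n_assets)
weights /= weights.sum()          # random portfolio weights summing to one

initial_value = 10_000            # assumed starting portfolio value
paths = np.empty((n_sims, horizon))
for s in range(n_sims):
    z = rng.standard_normal((horizon, n_assets))
    # Correlated daily returns: mean + L @ z reproduces the sample covariance.
    sim_returns = mean_returns + z @ L.T
    port_returns = sim_returns @ weights
    paths[s] = initial_value * np.cumprod(1 + port_returns)

# All simulated paths over the 100-day horizon.
plt.plot(paths.T, linewidth=0.5, alpha=0.3)
plt.xlabel('day'); plt.ylabel('portfolio value'); plt.title('Simulated portfolio paths')
plt.show()

# Distribution of final portfolio values.
plt.hist(paths[:, -1], bins=40)
plt.xlabel('final portfolio value'); plt.title('Distribution of final values')
plt.show()
```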
CPPI example
The CPPI (Constant Proportion Portfolio Insurance) strategy involves building a portfolio designed to limit downside risk while retaining upside potential through dynamic scaling of exposure. The safeguard is achieved by establishing a floor value below which the portfolio's value should not fall. Typically, the portfolio blends risky assets (such as a product tracking an index, like SPY) with risk-free assets (often zero-coupon bonds). The strategy's payoff structure resembles that of a call option and shares its convex nature: returns increase at an accelerating rate in favorable market conditions and decrease at a decelerating rate in unfavorable ones. The allocation to the risky asset equals a multiplier times the cushion, the difference between the portfolio value and the floor; the multiplier is chosen according to the investor's risk appetite, with a more aggressive approach entailing a higher multiplier.
Monte Carlo simulations play a crucial role in understanding the behavior and performance of CPPI strategies across market conditions. By running many simulations, analysts can assess the strategy's robustness and potential outcomes under different scenarios, providing valuable insights for decision-making and risk management. The strategy's success rate increases with more frequent portfolio rebalancing. As the portfolio value approaches the floor, the allocation to risk-free assets increases, and vice versa. The strategy has several drawbacks, however. Investors must factor in transaction costs, especially with frequent rebalancing, and gap risk can undermine its effectiveness: CPPI is often employed to secure a specific future capital amount, such as college tuition, yet a sudden decline in the underlying asset's value can leave the portfolio below the required amount at maturity. Moreover, some argue that path-dependent strategies often erode value rather than create it (Cox and Leland, 1982).
The graphs below illustrate the impact of setting a high multiplier during high-volatility conditions; violations of the floor can significantly affect strategy performance. Conversely, scenarios with lower volatility typically do not experience floor violations.
Code
The code computes the CPPI allocations and updates the account value over time. It also records several intermediate series, including the account history, cushion history, and risky-allocation history. Upon completion, the function returns a dictionary containing the wealth history, risky-wealth history, risk budget, risky allocation, and other relevant parameters.
The gbm function generates paths of Geometric Brownian Motion (GBM), a standard model for simulating the stochastic evolution of stock prices.
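The listings themselves do not appear above; the following is a hedged reconstruction of the two routines as described. The names gbm and cppi and the returned dictionary keys follow the text, while parameter names, default values, and the monthly time step are choices made for the sketch.

```python
import numpy as np
import pandas as pd

def gbm(n_years=10, n_scenarios=1000, mu=0.07, sigma=0.15, steps_per_year=12, s_0=100.0):
    """Generates Geometric Brownian Motion price paths (assumed parameterization)."""
    dt = 1 / steps_per_year
    n_steps = int(n_years * steps_per_year) + 1
    # Gross return per step: 1 + mu*dt plus Gaussian noise scaled by sigma*sqrt(dt).
    rets_plus_1 = np.random.normal(loc=1 + mu * dt, scale=sigma * np.sqrt(dt),
                                   size=(n_steps, n_scenarios))
    rets_plus_1[0] = 1
    return s_0 * pd.DataFrame(rets_plus_1).cumprod()

def cppi(risky_r, safe_r=0.03, m=3, start=1000, floor=0.8):
    """Runs a CPPI backtest on a DataFrame of risky-asset returns (one column per scenario)."""
    n_steps = risky_r.shape[0]
    account_value = start
    floor_value = start * floor
    account_history = pd.DataFrame().reindex_like(risky_r)
    cushion_history = pd.DataFrame().reindex_like(risky_r)
    risky_w_history = pd.DataFrame().reindex_like(risky_r)
    for step in range(n_steps):
        cushion = (account_value - floor_value) / account_value
        # Risky allocation = multiplier x cushion, capped between 0 and 100%.
        risky_w = np.minimum(np.maximum(m * cushion, 0), 1)
        safe_w = 1 - risky_w
        account_value = (account_value * risky_w * (1 + risky_r.iloc[step])
                         + account_value * safe_w * (1 + safe_r / 12))  # monthly safe rate
        account_history.iloc[step] = account_value
        cushion_history.iloc[step] = cushion
        risky_w_history.iloc[step] = risky_w
    return {"Wealth": account_history,
            "Risky Wealth": start * (1 + risky_r).cumprod(),  # buy-and-hold benchmark
            "Risk Budget": cushion_history,
            "Risky Allocation": risky_w_history,
            "multiplier": m, "start": start, "floor": floor}
```

A typical usage would be risky_r = gbm(n_scenarios=50).pct_change().dropna() followed by results = cppi(risky_r, m=5), after which results["Wealth"] can be plotted against the floor to inspect violations.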
Monte Carlo in discretionary trading
In the world of trading, "fail to prepare, prepare to fail" applies to testing strategies too. Monte Carlo backtesting is a powerful tool for checking that a strategy can withstand market volatility: by simulating many market scenarios, traders can evaluate their approach and make data-driven decisions. In discretionary trading especially, proper backtesting tools are fundamental to understanding a strategy's profitability. For instance, consider a trading strategy with a 30% win rate and a risk-reward ratio of 2. The strategy is backtested over 1,000 trades, with a stake of $100 per trade and an initial portfolio value of $10,000. To gain deeper insight into its performance, 30 simulations are run and plotted, showing their average as well as the top and bottom performers.
Another important piece of information for the trader is the maximum drawdown encountered in those simulations; the next graph answers this question.
Code
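The original code is not shown; below is a minimal sketch of the experiment using the parameters stated above (win rate, risk-reward ratio, stake, trade count, initial equity, and number of simulations). The plotting details and the random seed are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
win_rate, rr = 0.30, 2            # 30% win rate, risk-reward ratio of 2
stake, n_trades, n_sims = 100, 1000, 30
initial_equity = 10_000

# Each trade wins stake*rr with probability win_rate, otherwise loses the stake.
wins = rng.random((n_sims, n_trades)) < win_rate
pnl = np.where(wins, stake * rr, -stake)
equity = initial_equity + np.cumsum(pnl, axis=1)

# Plot all equity curves plus the average, best, and worst performers.
final = equity[:, -1]
best, worst = final.argmax(), final.argmin()
plt.plot(equity.T, color='grey', alpha=0.3, linewidth=0.5)
plt.plot(equity.mean(axis=0), label='average')
plt.plot(equity[best], label='best')
plt.plot(equity[worst], label='worst')
plt.xlabel('trade'); plt.ylabel('equity'); plt.legend(); plt.show()

# Maximum drawdown per simulation: largest peak-to-trough decline in equity.
running_peak = np.maximum.accumulate(equity, axis=1)
drawdown = (equity - running_peak) / running_peak
max_dd = drawdown.min(axis=1)
plt.hist(max_dd * 100, bins=15)
plt.xlabel('maximum drawdown (%)'); plt.show()
```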
Main drawbacks of Monte Carlo simulation
These simulations can be computationally demanding, especially for complex systems requiring numerous iterations. Furthermore, the accuracy of Monte Carlo simulations relies heavily on the quality of the underlying assumptions and input data: inaccurate assumptions or flawed parameters can produce misleading or erroneous outcomes. Additionally, while Monte Carlo simulations offer valuable estimates and probabilities, their precision is inherently constrained by statistical variability; accuracy typically improves with a larger number of simulations, but a degree of uncertainty always persists.
The technique's outcomes ultimately hinge on the model's assumptions; if those assumptions are wrong, so too will be the generated values, and large datasets are needed to obtain reliable estimates. Moreover, Monte Carlo simulations are more commonly applied to options than to stocks and bonds, as the latter generally carry less uncertainty than call and put options.
In summary, while Monte Carlo simulations provide valuable insights into probabilistic phenomena and decision-making processes, they require careful management of computational resources, assumptions, model complexity, and data quality to deliver meaningful and dependable results.