This study constructs a basic reinforcement learning based AI-macroeconomic simulator. We apply a deep RL (DRL) algorithm, Deep Deterministic Policy Gradient (DDPG), within a real business cycle (RBC) macroeconomic model. We set up two learning scenarios: a deterministic one without the technology shock and a stochastic one with it. The purpose of the deterministic environment is to compare the learning agent's behavior against the model's deterministic steady state. We demonstrate that in both the deterministic and stochastic scenarios, the agent's choices are close to their optimal values. We also present cases of unstable learning behavior. Future research could enhance this AI-macro model by adding further variables or sectors or by incorporating different DRL algorithms.
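To make the setup concrete, the environment the agent faces can be sketched as a standard one-sector growth model. The code below is a minimal illustration only, not the study's implementation: it assumes log utility, Cobb-Douglas production, full depreciation, and an AR(1) log technology shock (setting its standard deviation to zero gives the deterministic scenario); all parameter values are textbook defaults.

```python
import numpy as np

class RBCEnv:
    """Minimal RBC consumption-saving environment (illustrative sketch).

    State: (capital k, log technology z). Action: saving rate s in (0, 1).
    Reward: log consumption. Full depreciation is assumed for simplicity.
    """

    def __init__(self, alpha=0.36, rho=0.9, sigma=0.0, k0=1.0, seed=0):
        self.alpha = alpha      # capital share in Cobb-Douglas production
        self.rho = rho          # persistence of the log technology shock
        self.sigma = sigma      # shock std; 0.0 gives the deterministic case
        self.k0 = k0
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.k, self.z = self.k0, 0.0
        return np.array([self.k, self.z])

    def step(self, s):
        y = np.exp(self.z) * self.k ** self.alpha    # output
        c = max((1.0 - s) * y, 1e-8)                 # consumption
        reward = np.log(c)                           # period utility
        self.k = s * y                               # next-period capital
        self.z = self.rho * self.z + self.sigma * self.rng.normal()
        return np.array([self.k, self.z]), reward
```

Under these assumptions the optimal policy is a constant saving rate s = αβ (with discount factor β), so capital converges to k* = (αβ)^(1/(1−α)) in the deterministic case, which is exactly the kind of benchmark the agent's learned behavior can be compared against.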
Jin-Kyu Jung, Manasa Patnam, and Anna Ter-Martirosyan
Forecasting macroeconomic variables is key to developing a view on a country's economic outlook.
Most traditional forecasting models rely on fitting data to a pre-specified relationship between input
and output variables, thereby assuming a specific functional form and stochastic process for the
underlying data-generating process. We pursue a new approach to forecasting by employing a number
of machine learning algorithms, methods that are data driven and impose limited restrictions on the
nature of the true relationship between input and output variables. We apply the Elastic Net,
SuperLearner, and Recurrent Neural Network algorithms to macro data for seven broadly representative
advanced and emerging economies and find that these algorithms can outperform traditional statistical models,
thereby offering a relevant addition to the field of economic forecasting.
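As an illustration of the first of these methods, the sketch below fits an Elastic Net, which combines L1 (sparsity-inducing) and L2 (shrinkage) penalties, to a forecasting-style problem. The data are synthetic stand-ins for the macro series used in the study, and the hyperparameters are arbitrary rather than tuned as in the paper.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic example: 120 "quarters" of 20 candidate predictors, where only
# a few predictors truly matter (a sparse relationship Elastic Net can find).
rng = np.random.default_rng(0)
n, p = 120, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -2.0, 0.8]                 # only three relevant predictors
y = X @ beta + 0.1 * rng.normal(size=n)

# alpha scales the overall penalty; l1_ratio mixes L1 vs. L2 regularization.
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X[:100], y[:100])
pred = model.predict(X[100:])               # pseudo out-of-sample forecast
```

Holding out the last 20 observations mimics the out-of-sample evaluation that a forecasting comparison against traditional statistical models would require.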