Chung-Ang University researchers develop an algorithm for optimal decision-making with heavy-tailed noisy rewards - insideBIGDATA

Researchers propose methods that theoretically guarantee minimal loss for worst-case scenarios with minimal prior information for heavy-tailed reward distributions

Exploration algorithms for stochastic multi-armed bandits (MABs) – sequential decision-making problems in uncertain environments – generally assume light-tailed distributions for reward noise. However, real-world datasets often exhibit heavy-tailed noise. In light of this, Korean researchers propose an algorithm that achieves minimax optimality (minimum loss in the worst-case scenario) with minimal prior information. Outperforming existing algorithms, the new method has potential applications in automated trading and personalized recommendation systems.

In data science, researchers usually deal with data containing noisy observations. An important problem explored in this context is sequential decision making, commonly formalized as the “stochastic multi-armed bandit” (stochastic MAB). Here, an intelligent agent sequentially explores and selects actions based on noisy rewards in an uncertain environment. Its objective is to minimize cumulative regret – the difference between the maximum achievable reward and the expected reward of the selected actions. Lower regret implies more efficient decision-making.
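The setup above can be sketched in a few lines. This is a toy simulation, not the paper's algorithm: the arm means, Gaussian (light-tailed) noise, and the simple greedy-by-empirical-mean rule are all illustrative assumptions chosen to show how cumulative regret is accounted for.

```python
import random

def run_bandit(means, horizon, seed=0):
    """Toy stochastic bandit: a greedy-by-empirical-mean agent plays
    `horizon` rounds and we track cumulative (pseudo-)regret.
    Arm means and the greedy rule are illustrative, not MR-UCB."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms        # pulls per arm
    totals = [0.0] * n_arms      # summed observed rewards per arm
    best_mean = max(means)
    regret = 0.0
    for t in range(horizon):
        if t < n_arms:           # pull each arm once to initialize
            arm = t
        else:                    # then exploit the best empirical mean
            arm = max(range(n_arms), key=lambda a: totals[a] / counts[a])
        reward = means[arm] + rng.gauss(0.0, 1.0)  # light-tailed noise
        counts[arm] += 1
        totals[arm] += reward
        regret += best_mean - means[arm]  # expected per-round shortfall
    return regret

print(run_bandit([0.2, 0.5, 0.8], horizon=1000))
```

A purely greedy agent like this can lock onto a suboptimal arm, which is exactly why exploration bonuses such as UCB exist.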

Most existing studies of stochastic MABs have performed regret analysis assuming that reward noise follows a light-tailed distribution. However, many real-world datasets actually show heavy-tailed noise. These include data on user behavior patterns used to develop personalized recommendation systems, stock price data used to build automated trading systems, and sensor data for autonomous driving.

In a recent study, Assistant Professor Kyungjae Lee of Chung-Ang University and Assistant Professor Sungbin Lim of Ulsan National Institute of Science and Technology, both in Korea, addressed this issue. In their theoretical analysis, they proved that existing algorithms for stochastic MABs are suboptimal for heavy-tailed rewards. Specifically, the methods employed in these algorithms – robust upper confidence bound (UCB) and adaptively perturbed exploration (APE) with unbounded perturbation – do not guarantee minimax optimality (minimization of the maximum possible loss).

“Based on this analysis, we proposed minimax optimal robust (MR) UCB and APE methods. MR-UCB uses a tighter confidence bound of robust mean estimators, and MR-APE is its randomized version. It uses a bounded perturbation whose scale follows the modified confidence bound of MR-UCB,” explains Dr. Lee, speaking of their work, which was published in IEEE Transactions on Neural Networks and Learning Systems on September 14, 2022.
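To make the “confidence bound of robust mean estimators” idea concrete, here is a hedged sketch in the spirit of the classical truncated-mean robust UCB that the paper builds on – not the authors' exact MR-UCB formula. The truncation threshold and the bonus exponent `(p-1)/p`, where `p` is the largest order with a bounded raw moment, follow the form commonly used in heavy-tailed bandit analyses; constants are illustrative.

```python
import math

def truncated_mean(rewards, p, t):
    """Robust mean estimate: discard samples whose magnitude exceeds a
    threshold that grows with the sample count, damping heavy-tailed
    outliers. `p` (1 < p <= 2) is the assumed bounded moment order.
    Sketch of the standard truncated-mean estimator, not MR-UCB itself."""
    n = len(rewards)
    thresh = (n / math.log(t + 1)) ** (1.0 / p)
    return sum(r for r in rewards if abs(r) <= thresh) / n

def robust_ucb_index(rewards, p, t):
    """Robust empirical mean plus a confidence bonus shrinking at the
    rate (log t / n)^((p-1)/p) typical of heavy-tailed regret bounds."""
    n = len(rewards)
    bonus = (math.log(t + 1) / n) ** ((p - 1) / p)
    return truncated_mean(rewards, p, t) + bonus
```

The point of the truncation is that a single extreme outlier (common under heavy tails) no longer drags the empirical mean – and hence the arm's index – arbitrarily far from the truth.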

The researchers then derived gap-dependent and gap-independent upper bounds on the cumulative regret. For both proposed methods, these bounds match the lower bound under the heavy-tailed noise assumption, thereby achieving minimax optimality. Moreover, the new methods require minimal prior information, depending only on the maximum order of the bounded moment of the rewards. In contrast, existing algorithms require the upper bound of this moment as prior knowledge – information that may not be accessible in many real-world problems.

After establishing their theoretical framework, the researchers tested their methods in simulations under Pareto and Fréchet noise. They found that MR-UCB consistently outperformed other exploration methods and was more robust as the number of actions increased under heavy-tailed noise.
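A quick way to see why Pareto noise stresses a bandit algorithm is to compare its extremes with Gaussian noise. The tail index below is an illustrative assumption (any `alpha <= 2` gives infinite variance), not a parameter from the paper's experiments.

```python
import random

# Heavy-tailed vs light-tailed noise: with tail index alpha <= 2 a
# Pareto distribution has infinite variance, so a sample routinely
# contains outliers far beyond anything Gaussian noise produces.
rng = random.Random(42)
alpha = 1.5  # illustrative tail index; variance is infinite here
pareto = [rng.paretovariate(alpha) for _ in range(10_000)]
gauss = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

print("largest Pareto sample:  ", max(pareto))
print("largest |Gaussian| sample:", max(abs(g) for g in gauss))
```

Such outliers are exactly what distort a naive empirical mean, motivating the robust estimators and bounded perturbations in MR-UCB and MR-APE.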

Further, the duo verified their approach on real-world data using a cryptocurrency dataset, showing that MR-UCB and MR-APE – with their minimax optimal regret bounds and minimal prior-knowledge requirements – are effective for tackling both synthetic and real-world heavy-tailed stochastic MAB problems.

“Being vulnerable to heavy-tailed noise, existing MAB algorithms show poor performance in modeling stock data. They fail to predict large rises or sudden falls in stock prices, causing huge losses. In contrast, MR-APE can be used in automated trading systems with stable expected returns through equity investment,” comments Dr. Lee, discussing potential applications of the present work. “Furthermore, it can be applied to personalized recommender systems, since behavioral data shows heavy-tailed noise. With better predictions of individual behavior, it is possible to provide better recommendations than conventional methods, which can maximize ad revenue,” he concludes.
