Click on a title to download our whitepapers
Managing Novel Risks
ABSTRACT
Decision analysis and risk analysis have grown up around a set of organizing questions: what might go wrong, how bad might it be, what should be done to maximize expected utility and minimize expected loss or regret, and how large are the remaining risks? In probabilistic causal models capable of representing unpredictable and novel events, probabilities for what will happen, and even what is possible, cannot necessarily be determined in advance. Standard decision and risk analysis questions become inherently unanswerable (“undecidable”) for realistically complex causal systems with “open-world” uncertainties about what exists, what can happen, what other agents know, and how they will act. Recent artificial intelligence (AI) techniques enable agents (e.g., robots, drone swarms, and automatic controllers) to learn, plan, and act effectively despite open-world uncertainties in a host of practical applications, from robotics and autonomous vehicles to industrial engineering, transportation and logistics automation, and industrial process control. This paper offers an AI/machine learning perspective on recent ideas for making decision and risk analysis (even) more useful. It reviews undecidability results and recent principles and methods for enabling intelligent agents to learn what works and how to complete useful tasks, adjust plans as needed, and achieve multiple goals safely and reasonably efficiently when possible, despite open-world uncertainties and unpredictable events. In the near future, these principles could contribute to the formulation and implementation of more effective plans and policies in business, regulation, and public policy, as well as in engineering, disaster management, and military and civil defense operations. They can extend traditional decision and risk analysis to deal more successfully with open-world novelty and unpredictable events in large-scale real-world planning, policy-making, and risk management.
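As a concrete point of reference for the classical “maximize expected utility” question that this abstract contrasts with open-world settings, the following minimal Python sketch shows closed-world decision analysis with a known state space; the acts, states, probabilities, and payoffs are hypothetical illustrations, not drawn from the paper.

# Minimal sketch of classical closed-world expected-utility maximization.
# All acts, states, probabilities, and payoffs below are hypothetical.

def expected_utility(act, states, prob, utility):
    """Expected utility of an act, given state probabilities and a payoff table."""
    return sum(prob[s] * utility[(act, s)] for s in states)

states = ["no_failure", "failure"]
prob = {"no_failure": 0.95, "failure": 0.05}   # assumed known in advance
acts = ["do_nothing", "add_safeguard"]
utility = {                                     # hypothetical payoffs
    ("do_nothing", "no_failure"): 100,
    ("do_nothing", "failure"): -900,
    ("add_safeguard", "no_failure"): 80,
    ("add_safeguard", "failure"): 20,
}

best = max(acts, key=lambda a: expected_utility(a, states, prob, utility))
print(best, {a: expected_utility(a, states, prob, utility) for a in acts})

The open-world difficulty the paper addresses is precisely that the state list, probability assignments, and payoff table assumed known here may not be determinable in advance.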
Muddling-Through and Deep Learning for Managing Large-Scale Uncertain Risks
ABSTRACT
Managing large-scale, geographically distributed, and long-term risks arising from diverse underlying causes – ranging from poverty to underinvestment in protecting against natural hazards or failures of sociotechnical, economic, and financial systems – poses formidable challenges for any theory of effective social decision-making. Participants may have different and rapidly evolving local information and goals, perceive different opportunities and urgencies for actions, and be differently aware of how their actions affect each other through side effects and externalities. Six decades ago, political economist Charles Lindblom viewed “rational-comprehensive decision-making” as utterly impracticable for such realistically complex situations. Instead, he advocated incremental learning and improvement, or “muddling through,” as both a positive and a normative theory of bureaucratic decision-making when costs and benefits are highly uncertain. But sparse, delayed, uncertain, and incomplete feedback undermines the effectiveness of collective learning while muddling through, even when all participants' incentives are aligned; it is no panacea. We consider how recent insights from machine learning – especially deep multiagent reinforcement learning – formalize aspects of muddling through and suggest principles for improving human organizational decision-making. Deep learning principles adapted for human use can not only help participants at different levels of government or control hierarchies manage some large-scale distributed risks, but also show how rational-comprehensive decision analysis and incremental learning and improvement can be reconciled and synthesized.
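To make the reinforcement-learning analogy concrete, here is a minimal sketch (not the paper's model; all policy options, payoffs, and parameters are hypothetical) of “muddling through” as incremental, feedback-driven learning: an agent nudges its estimate of each option's value toward noisy observed feedback rather than solving a comprehensive optimization up front.

# Minimal sketch of incremental learning from noisy feedback.
# All options, payoffs, and parameters below are hypothetical.
import random

options = ["policy_A", "policy_B", "policy_C"]
value = {o: 0.0 for o in options}   # current value estimates
step = 0.1                          # size of each incremental adjustment
epsilon = 0.2                       # occasional exploration of alternatives

def noisy_feedback(option):
    # Hypothetical noisy feedback from the environment.
    base = {"policy_A": 0.3, "policy_B": 0.7, "policy_C": 0.5}[option]
    return base + random.gauss(0, 0.2)

random.seed(0)
for _ in range(500):
    # Mostly continue with the currently best-regarded option,
    # occasionally try another (epsilon-greedy exploration).
    choice = (random.choice(options) if random.random() < epsilon
              else max(options, key=value.get))
    feedback = noisy_feedback(choice)
    # Incremental update: move the estimate a small step toward the feedback.
    value[choice] += step * (feedback - value[choice])

print(max(options, key=value.get), value)

Sparser, noisier, or more delayed feedback (fewer trials, larger noise) slows convergence in such schemes, illustrating the abstract's caveat that muddling through is no panacea.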
BOOK REVIEW
Thinking Better: Six Recent Books on Natural, Artificial, and Social Intelligence
ABSTRACT
Risk analysis is largely about how to think more clearly and usefully about what actions to take when perceptions and understanding of the current situation are incomplete and consequences of different choices are uncertain. Suggestions for improving decision-making under uncertainty come from many sources, including the mathematical prescriptions of expected utility theory and decision analysis; business and analytics books emphasizing collecting and using data to inform decisions; decision and risk psychologists cautioning about cognitive heuristics and biases that distort risk perceptions and undermine the logical coherence of preferences and plans; and legal scholars discussing frameworks for making defensible decisions in the face of existing laws and regulations.
Several recent books have offered new insights and summarized old ones about how to use data and experience to think more usefully about choices under uncertainty, so as to reduce predictable regrets and increase the probability of achieving goals and preferred outcomes. This review examines the complementary perspectives offered by six recent books in the overlapping fields of cognitive neuroscience, the psychology of thinking and reasoning, artificial intelligence and deep learning, social science, and social statistics and data analysis.