Abstract
When designing engineered systems, the potential for unintended consequences of design policies exists despite best intentions. The effects of risk factors for unintended consequences are often known only in hindsight. However, because historical knowledge is generally associated with a single event, it is difficult to uncover general trends in the formation and types of unintended consequences. In this research, archetypes of unintended consequences are learned from historical data. The work contributes to the understanding of such archetypes by applying machine learning to a large data set of lessons learned from adverse events at NASA. Sixty-six archetypes are identified based on shared sets of risk factors such as complexity and human-machine interaction. To validate the learned archetypes, system dynamics representations of the archetypes are compared to known high-level archetypes of unintended consequences. The main contributions of the paper are a set of archetypes that apply to many engineered systems and a pattern of leading indicators that opens a new path to managing unintended consequences and mitigating the magnitude of potentially adverse outcomes.