Incorporating user trust into the development of intelligent systems is one of the emerging challenges in the field of engineering design. Trust in human-intelligent system interaction determines how much a user relies on the system and directly influences the benefits that an intelligent system provides to human decision-making. This paper reviews the existing literature on trust in human-AI interaction to highlight key areas for engineering system design research that address the overarching issue of user trust in system development. We present how trust influences the use of an intelligent system, describe multiple contexts in which users interact with intelligent systems, and categorize the ways in which trust is formed based on the literature. We classify the key factors that are critical to the formation of user trust into three categories: human user attributes, the design of the intelligent system, and task characteristics. We also present analytical models from the literature that are used to evaluate and predict trust. This paper is not intended to be an exhaustive literature review but rather a position paper that provides a structure for the existing literature as a reference for engineering design research. We propose future directions for the engineering system design community based on the gaps and open questions identified in the literature.