Partnership between humans and computers has significant potential to extend human ability to address complex design problems. This paper presents a decision-making process that enables computers to collaborate effectively with humans on complex problems under dynamic competition. In the proposed process, the computer learns strategies and objectives from prior experimental data and provides strategy suggestions to its human collaborators. The study integrates clustering and sequential learning methods from machine learning with a differential game formulation based on model predictive control to find dynamic Nash equilibrium solutions to zero-sum games. The application of the proposed approach is demonstrated on the real-time strategy game StarCraft II, which offers a dynamic competitive problem comparable in complexity to real-world applications. The results show that the proposed approach successfully identifies a variety of opening strategies in the experimental data for the initial phase of the process. The game-theoretic strategies in the later phases provide useful suggestions for low-performing players but are unnecessarily conservative for high-performing players, for whom there is little opportunity for improvement. These results suggest the need for an assessment of opponent expertise and for human intuition to judge the appropriateness of the game-theoretic suggestions for further improvement.
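To make the zero-sum Nash equilibrium concept concrete, the following is a minimal sketch of a mixed-strategy equilibrium for a static 2x2 zero-sum game, solved in closed form. The payoff values are hypothetical illustrations, not data from the paper, and this static example does not reproduce the paper's dynamic, model-predictive-control-based formulation.

```python
# Minimal sketch: mixed-strategy Nash equilibrium of a 2x2 zero-sum game.
# Assumes the game has no pure-strategy saddle point, so the equilibrium
# is interior and given by the standard closed-form expressions.

def solve_2x2_zero_sum(a11, a12, a21, a22):
    """Return (row player's mixed strategy, game value) for a 2x2
    zero-sum game without a pure-strategy saddle point."""
    denom = a11 - a12 - a21 + a22
    p = (a22 - a21) / denom            # probability of the row player's first action
    value = (a11 * a22 - a12 * a21) / denom
    return (p, 1.0 - p), value

# Hypothetical payoffs to the row player.
strategy, value = solve_2x2_zero_sum(3.0, -1.0, -2.0, 4.0)
print(strategy, value)   # mixing 60/40 guarantees an expected payoff of 1.0
```

In the dynamic setting studied in the paper, an equilibrium of this kind would be recomputed at each decision stage over a receding horizon rather than solved once.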
