## Abstract

Managing the design process of teams has been shown to considerably improve problem-solving behaviors and resulting final outcomes. Automating this activity presents significant opportunities in delivering interventions that dynamically adapt to the state of a team in order to reap the most impact. In this work, an artificial intelligence (AI) agent is created to manage the design process of engineering teams in real time, tracking features of teams' actions and communications during a complex design and path-planning task in multidisciplinary teams. Teams are also placed under the guidance of human process managers for comparison. Regarding outcomes, teams perform equally well under both types of management, with trends toward even superior performance from the AI-managed teams. The managers' intervention strategies and team perceptions of those strategies are also explored, illuminating some intriguing similarities. Both the AI and human process managers focus largely on communication-based interventions, though differences start to emerge in the distribution of interventions across team roles. Furthermore, team members perceive the interventions from both the AI and human manager as equally relevant and helpful, and believe the AI agent to be just as sensitive to the needs of the team. Thus, the overall results show that the AI manager agent introduced in this work is able to match the capabilities of humans, showing potential in automating the management of a complex design process.

## 1 Introduction

Fundamental to nearly all facets of engineering practice, engineers work in teams [1,2]. Teams benefit from amalgamating diverse sets of technical skills, experiences, personalities, and perspectives for problem solving [3,4]. Teams, along with their corresponding attributes and characteristics, have been well studied in the context of engineering tasks, which often require members within a team to work efficiently with each other in order to succeed [5–7]. Engineering problems can be challenging and complex, requiring multiple disciplines to work together, exchanging information regarding constraints and goals and converging their expertise. Even process-oriented features, such as mere effort, have large impacts on team performance. With a theoretical basis in social psychology, social loafing, or when one team member contributes less due to being masked by the rest of the team [8,9], can significantly hurt collective team performance. Thus, key factors such as communication, processes for engagement and information flow, and proper management can ultimately define the success or failure of a team [10,11].

Effective communication and coordination between interrelated roles are essential for solving collaborative engineering problems [12–14]. Although team members usually work on specific design tasks individually, team communication facilitates and stimulates design processes and exchanges information across disciplines. Thus, from the design team's perspective, specialist design knowledge is usually embedded throughout the team and needs to be communicated to become valuable information for the design artifact to be produced [13]. Ideally, engineering teams will continue to communicate effectively and efficiently despite the problem's complexity. However, even with the increased availability of information technology, engineering teams still struggle to communicate [13–16], leading them astray and thereby restricting their ability to collectively manage complexity in order to achieve a design solution.

Teams and communication are critical in designing complicated engineered systems, which often require managing coupled design parameters and multiple differing but interrelated factors, making the design process complex [17,18]. Artificial intelligence (AI) assistance methods have proven to be efficient in this area, supporting engineering teams in completing such challenging tasks rapidly and effectively. Engineers have used AI-assistance tools to design products and explore the solution space more rapidly [19] and at different stages of the design process, including concept generation [20], concept evaluation [21], prototyping [22], manufacturing [23], and concurrent-engineering design [24]. However, human-AI collaboration can also restrict team performance. Zhang et al. [25] reported that AI assistance hindered high-performing teams' success. Different authors have studied the impacts of AI assistance in other aspects of engineering design, including decision-making, optimization, and computational tasks [26,27], and its effects on mental workload, frustration, and effort [28,29]. While previous works cover the use of AI as assistive tools, there exists a lack of focus in the literature on the use of AI in a managerial role for the direction and guidance of team process.

Previous research highlights the power of process management on design teams and a framework for understanding the role of real-time interventions. Gyory et al. study the impacts of human process management on engineering design teams [30]. During a conceptual engineering design task, human managers intervene with a prescribed set of potential stimuli to affect their teams’ process. This study shows that teams under this process management significantly outperformed unmanaged teams in the quality of their final design outcomes. The impacts extend to behavioral and process aspects as well, where the managed teams exhibit more engagement (contribution from all team members) and greater cohesion within their collective discourse. The current work seeks to explore whether such process management can be automated through an AI agent that could intervene in a similar way in real time.

To begin to enable the automation of interventions, work by Goucher-Lambert et al. identifies such a need, computationally adapting stimuli to provide aid in real time during problem solving [31]. Midway through a conceptual engineering design task, participants transcribe their best solution at that point in time, and are then provided a tailored stimulus through automatic semantic comparisons via latent semantic analysis (LSA) [32]. Modulating the semantic distance of the stimuli to their current design solution produces varying impacts on designers' ideation outcomes (e.g., design novelty, feasibility, usefulness, and overall innovative potential). More recent work by Gyory et al. leverages the transcript data from the previously mentioned process-managed teams and applies topic modeling techniques (including latent semantic analysis and latent Dirichlet allocation) to computationally study design cognition and the impact of the interventions on communication [33]. The results show that, by analyzing design discourse, the impact of the manager interventions can be detected, showing promise for real-time implementation and detection via discourse information. While these previous works take steps toward real-time design process guidance, an automated system that identifies when interventions should take place (i.e., triggers) and what an effective intervention at that point should be has not been automated or even considered. Advancing the power of AI opens the potential for dynamically tracking several team process measures and integrating them to determine applicable interventions to stimulate design teams' performance.

As such, this work introduces an AI process manager to effectively guide the design process of engineering teams in real time, building on the performance and behavioral outcomes from the human process managers demonstrated by Gyory et al. [30]. In other words, this work demonstrates an effective AI agent that works in synergy with humans through interventions, resulting in a true AI-human hybrid team. The AI agent takes a data-driven approach to management, using real-time inputs to detect deficiencies in the team process and intervene at prescribed intervals. Trained on prior problem-solving team data from a similar drone design problem (which used the same experimental platform as this work), the AI seeks to induce more ideal team process conditions over the course of the design problem. The inputs and measures tracked and integrated by the agent include team action and team communication data. To compare directly against human strategies, another experimental condition places teams under the guidance of a human process manager. Accordingly, while the AI agent takes a data-driven approach to management, this is compared with the more observation-based approach of the human process managers, though both have access to the same types of team inputs. Comparisons between the two types of management include the impact on team performance, intervention strategies, and team perceptions of effectiveness. Further insights can also be gained from the human process managers to identify the motivations (i.e., triggers) for intervening. Such insights can yield additional development opportunities for the AI process manager in real-time management.

## 2 Methodology

To study a data-driven approach to process management, this research develops an AI agent to manage teams during a complex engineering design task. In a large-scale human study, two experimental conditions place teams under the guidance of either an AI or a human process manager. Results are analyzed via effects on overall performance and analysis of team process and manager intervention strategies. Moreover, post-study questionnaires collected from all individual team members and the human process managers assess the behavioral and perceived impacts on intervention effectiveness. Some of the more critical outcomes from the surveys relate to the relevance and helpfulness of the interventions, perceived effects on team performance, and an understanding of the rationales the human process managers identified to trigger an intervention with their teams. Prior to discussing the central artifact of this work, the AI process manager, Secs. 2.1 and 2.2 first explain the experimental framework to provide better context for the process managers and interventions.

### 2.1 Participants and Experimental Platform.

Approved by the Institutional Review Board at Carnegie Mellon University, the experiment is completed fully online; participants can interact with each other and the experimenter only via the experimental platform. In total, 199 sophomore-to-senior engineering students at Pennsylvania State University in the United States participate in the study, recruited from two different mechanical and industrial engineering courses. Participants receive $20 in compensation for their time and effort. All participants read, agree to, and sign a consent form prior to engaging in any aspect of the experiment. Participants are randomly distributed among and across two team conditions—AI or human process management. Teams consist of five participants, with one additional participant in the human-managed team condition serving as the human manager. Data from six teams are removed due to technical issues with the platform and participants arriving late or leaving early without finishing the experiment. Altogether, data for 31 teams are obtained successfully during the experiment: 16 teams in the human process manager condition and 15 teams in the AI process manager condition. The experimental research platform for the study, HyForm, simulates a drone delivery fleet design and path-planning problem [34]. Using this online collaborative design environment, the platform partners AI design agents with humans. The platform contains an embedded chat interface, allowing participants to communicate information and share their problem-solving outcomes through specific channels during the study. While this paper describes only a high-level overview of HyForm, more in-depth details about the collaborative research platform and integrated design agents can be found in related work [18,35]. Note that the design agents in those references support the development of drones and path plans and are different from the process manager agent developed for this work.
HyForm records all the communication, design actions, drone configurations, delivery routes, and performance metrics of each role within the teams, enabling complete reconstruction of a team's problem-solving process.

### 2.2 Experiment Overview—Experimental Timeline and HyForm Roles.

Participants complete the 65-min experiment outlined in Fig. 1. First, they read and sign the consent form, and are then provided 12 min to complete the pre-study questionnaire, tutorials, and problem brief. The pre-study questionnaire consists of questions related to their drone design, operations, business planning, and computational design experiences, in order to control for and confirm similar levels of expertise in these domains. Participants read through two tutorials, one related to their specific role and respective HyForm interface (each role uses a different interface), and the other related to the communication tool and team structure. The problem brief lays out the mission and goals of the company, describes the team structure and roles, and provides a more in-depth explanation of their specific role's objectives. After completion of the pre-session materials, the first 20-min problem-solving session begins. Throughout these sessions, an external (to the team) process manager can intervene to affect the problem-solving behaviors and processes of the team. More details related to these interventions and the process managers are discussed in Secs. 2.3 and 2.4. After the first session, a short, 3-min break presents participants with an opportunity to review the experimental materials (tutorials and problem briefs). Then, a second 20-min problem-solving session commences. While similar to the first session in terms of overall objectives, the second session involves a "shock" to the customer market. After this second session, participants fill out a post-study questionnaire.
For the members on the team (drone designers, operations specialists, and problem manager), this includes questions related to the perceived relevance and helpfulness of the interventions and their assessment of team performance. The human process managers fill out a different post-study questionnaire, which queries them on their strategy for intervening, the effectiveness of their interventions, and what additional types of interventions they would have liked to use.

Fig. 1

Using the collaborative research platform, HyForm, teams design drone fleets and create delivery path plans to reach as many customers in the market as possible. The problem is highly interconnected: each discipline (operations, design, and business) works together to achieve the objectives and optimize the overall profit for their team, given an initial budget of $15,000. The aforementioned problem shock refers to a change in the original market conditions to a COVID-19 scenario in which more customers with low-weight medical deliveries are added to the market, along with a 30% reduction in drone costs. Each team consists of five members, and Fig. 2 depicts the team structure. Each team contains two drone designers (design discipline), two operations specialists (operations discipline), and one problem manager (business discipline). The process manager is external to the team and is either a human or the AI agent. The team structure dictates the communication structure, shown by the arrows and dashed lines in the figure. The four distinct communication channels include the design channel, the operations channel, the designer management channel, and the operations management channel. In the design channel, the two design specialists communicate with each other. Similarly, the operations channel permits the two operations specialists to interact with each other.
The designer management channel allows the problem manager to communicate with both design specialists simultaneously, and the operations management channel permits the problem manager to communicate with the two operations specialists simultaneously.

Fig. 2

On the team, the drone designers carry out the design of the drones with different requirements for payload capacity, range, and cost. Provided with a base drone design to start, the design specialists build and modify drones by adding or removing different components (batteries, airfoils, nodes, rods, and propellers) and varying their sizes and locations. Once created, the drone designs are sent to the operations specialists, who create delivery paths to reach customers in the market using the available drones' capabilities and operation costs. The problem manager is responsible for handling the company budget, choosing the customers in the market, and serving as the communication-bridging node between the design specialists and operations specialists (as depicted in Fig. 2). Ultimately, the problem manager decides whether to approve or reject the final team plans for submission.

### 2.3 Process Manager and Interventions.

Throughout the two problem-solving sessions, an external process manager intervenes to affect the problem-solving behaviors of the team. The process manager observes features of the teams' process in real time and provides suggestions at specific points during the experiment. Being external to the team, the process manager cannot communicate directly with specific team members or help in directly solving the problem. Instead, the process manager guides with a set of prescribed, process-related interventions from a predefined list. Using a predefined list makes it possible to control the types of interventions for consistency across managers, as well as to reduce the additional variability induced by allowing either a larger set or impromptu interventions. The process managers can intervene up to 12 distinct times across both 20-min problem-solving sessions. Figure 3 presents an overview of the specific timing for the interventions. Actions and communications are tracked and collected in 5-min intervals (integrated by the AI and shown to the human managers), and this information is considered by both manager types to determine applicable interventions. The intervention opportunities occur at 2.5-min intervals, with the first available intervention starting 5 min into each session (i.e., 5 min, 7.5 min, 10 min, 12.5 min, 15 min, and 17.5 min). This 2.5-min period balances a tradeoff between collecting enough real-time information for the process managers and ensuring enough opportunities to intervene within each problem-solving session, in this case six times per session. For this research, the two experimental conditions differ on whether the process manager is a human or the AI agent. Both human and AI agent process managers choose from among the same set of prescribed, process-related interventions.
Therefore, for the entirety of the experiment, the teams never have any indication which type of process manager guides them.

Fig. 3

The human process managers have access to team information via a new mediation interface in HyForm (shown in Fig. 4). Through the interface, the human managers observe in real time the team discourse occurring through all four communication channels, as well as the types of actions being performed by the drone designers and operations specialists. Pilot studies run prior to the actual data collection helped shape the design and improve the user experience of this interface. The right-hand side of the mediation interface lays out the set of prescribed interventions. A timer counts down until the next intervention (each 2.5-min interval) and provides a buffer of 15 s to allow the human process manager to select a given intervention. The process managers do not need to intervene at every interval. The intervention set includes a "No Intervention" option that does not send any message to the team. The human process managers decide whether to intervene based on their own assessment of the status of the team's problem solving, using the available real-time data, the same data that the AI manager has access to.

Fig. 4

Tables 1 and 2 show the interventions defined for this study, with action-based interventions in Table 1 and communication-based interventions in Table 2. Once the process managers choose an intervention, the intervention is delivered to the teams through a specific communication channel, shown in the right-hand column of the tables. "Design" indicates that the intervention goes to both design specialists, "Operations" indicates that the intervention goes to both operations specialists, and "Problem Manager" indicates that the intervention is only received by the problem manager. Three of the communication interventions go through all communication channels and are thus received by the entire team.

Table 1

Action-based interventions, along with the channel with which they are injected to the teams during problem solving

| Design action interventions | Communication channels |
| --- | --- |
| Ops planners, it would be good to continue working on and refining your plans a bit more. | Operations |
| Hey operations team, I suggest that you try evaluating and submitting your plan and starting fresh. | Operations |
| Hey operations team, try running the path-planning agent to help. | Operations |
| Drone designers, it would be helpful if you can continue working on and refining your drone designs a bit more. | Design |
| Hey drone design team, I would recommend evaluating and submitting your current design and starting fresh. | Design |
| Hey drone design team, check out the suggestions from the drone design agent. | Design |
Table 2

Communication-based interventions, along with the channels through which they are injected to the teams during problem solving

| Communication interventions | Communication channels |
| --- | --- |
| Team, I think you should try focusing more on adjusting the design parameters to meet the goals of the problem, and share this with each other (cost, capacity, speed, budget, weight, etc.). | Design, Operations, and Problem Manager |
| Team, try focusing more on your strategy. Try optimizing and increasing/decreasing size of components and share this with each other. | Design, Operations, and Problem Manager |
| Hi team, try sharing your goals with each other a bit more and make sure they are aligned. | Design, Operations, and Problem Manager |
| Ops team, please try to communicate with each other more. | Operations |
| Drone designers, please try to communicate with each other more. | Design |
| Hi problem manager, please try to communicate with your team more. | Problem Manager |

### 2.4 Artificial Intelligence Process Manager Computational Framework.

During the problem-solving sessions, the AI process manager dynamically tracks several measures to determine the state of the team at a given point in time. These measures include communication frequency, communication semantics (comprising similarity and content), and action frequency and diversity. Trained on prior team problem-solving data, the AI process manager induces the temporal patterns of these measures, over the course of the problem-solving sessions, from better-performing teams [35]; the goal is to steer teams toward the behavioral dynamics of those teams by intervening when one of these measures significantly veers off course. This section and Fig. 5 present a more detailed description of the decision logic and conceptual framework for the underlying computation of the AI process manager. As shown in the left-hand column of Fig. 5, communication and action data represent the two main data input streams to the framework.

Fig. 5

The AI agent utilizes the preceding five minutes of team data to determine appropriate interventions. Since the interventions occur every 2.5 min, the input represents a sliding window with 2.5 min of overlap of prior team data (as shown in Fig. 3). As mentioned previously, these specific timings balance the tradeoff of maintaining an adequate amount of information for the AI to utilize as well as ensuring enough interventions throughout the experiment.
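As a concrete illustration, the sliding window described above can be sketched in a few lines of Python (a minimal sketch; the function and variable names are illustrative, not taken from the study's implementation):

```python
# Sliding 5-min window advancing in 2.5-min steps, one window per
# intervention decision (illustrative sketch, not the HyForm code).

def intervention_windows(session_minutes=20.0, first=5.0, step=2.5, width=5.0):
    """Yield (start, end) minutes of team data used at each decision point."""
    t = first
    while t <= session_minutes - step:  # decisions at 5, 7.5, ..., 17.5 min
        yield (t - width, t)
        t += step

windows = list(intervention_windows())
print(windows[0])   # (0.0, 5.0)
print(windows[1])   # (2.5, 7.5) -> overlaps the previous window by 2.5 min
print(len(windows)) # 6 decisions per session
```

Each consecutive pair of windows shares 2.5 min of data, matching the overlap shown in Fig. 3.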

The first decision point in the computational framework (Fig. 5) compares the overall team action frequency with the overall team communication frequency. Tradeoffs between effort spent on action versus effort spent on communication yielded one of the more significant findings distinguishing the high- and low-performing teams in the previous HyForm experiment [35]. Thus, this decision point leverages that finding, comparing the real-time, cumulative team communication and cumulative team action with those of the high-performing teams. If the team's action frequency measure is too low, then the AI agent enters the action branch (top branch in Fig. 5), and if the team communication frequency measure is too low, then the AI agent enters the communication branch (bottom branch in Fig. 5). If both action and communication frequencies are within bounds (±1 standard error), then the AI chooses not to intervene.

To mathematically compute which of the branches in Fig. 5 to enter, the AI process manager computes a weighted z-score for each communication and action (Eqs. (1) and (2), respectively):
$com_i = |zscore_{c_i} \times d_{c_i}|$
(1)
$action_i = |zscore_{a_i} \times d_{a_i}|$
(2)
In descriptive statistics, the z-score provides a quantitative and normalized approach to determine how many standard deviations a raw score lies from the population mean. In Eqs. (1) and (2), the subscripts c and a represent communication and action, respectively, the subscript i denotes the intervention number (i ∈ [1, 12] for the twelve interventions across both problem-solving sessions), and d represents an effect size. The z-score is calculated as shown in Eq. (3)
$zscore = \dfrac{x - \mu}{\sigma / \sqrt{n}}$
(3)
where x is the observed value taken from the real-time experiment, μ is the population mean taken from the high-performing population data, σ is the standard deviation taken from the high-performing population data, and n is the total number of teams from the high-performing prior data (in this case, n = 11). Once the z-score quantifies how far the sampled team communication/action data lie from the high-performing team data, the z-scores are further weighted by an effect size ($d_{c_i}$ and $d_{a_i}$ in Eqs. (1) and (2)). From the prior HyForm team data, the differences in action and communication frequency between the high- and low-performing teams considerably fluctuated over time (as shown in Fig. 6). Figure 6 presents a moving average of the communication to action frequency ratio throughout that previous experiment.
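Equations (1)–(3) can be sketched directly in Python. The numeric inputs below are toy values chosen for illustration only; they are not data from the study:

```python
import math

def z_score(x, mu, sigma, n):
    """Eq. (3): distance of the observed value x from the high-performing
    teams' mean mu, in units of standard error (sigma / sqrt(n))."""
    return (x - mu) / (sigma / math.sqrt(n))

def weighted_score(x, mu, sigma, n, d):
    """Eqs. (1)-(2): |z-score x effect size| for communication or action."""
    return abs(z_score(x, mu, sigma, n) * d)

# Toy numbers (illustrative only): 14 observed communication turns versus a
# high-performing mean of 20, and 55 observed actions versus a mean of 50.
com_i = weighted_score(x=14, mu=20, sigma=6, n=11, d=0.8)
act_i = weighted_score(x=55, mu=50, sigma=10, n=11, d=0.3)
```

Here the effect size d down- or up-weights each z-score according to how strongly that measure separated high- from low-performing teams at the given point in the session.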
Fig. 6

As shown, the differences between the high- and low-performing teams change over time. For example, in the latter half of the experiment, the high-performing teams spend much more effort on communication. To take this additional dimension into account, the effect size (d) determines the extent of this discrepancy between the team data over the 12 different, 5-min intervention intervals.

An additional benefit of utilizing z-scores is that they help determine whether the values are within ±1 standard error of the high-performing data. This range serves as the threshold for whether a sampled measure is within bounds: if both the action and communication frequencies from the sampled data are within ±1 standard error of the high-performing teams' data, then the AI chooses not to intervene. Otherwise, the AI enters the branch with the highest weighted z-score. The precise decision logic in Fig. 7 determines which intervention branch to enter. Recall that the AI uses the preceding 5 min of data for computation (Fig. 3). For example, when i = 2, or at 7.5 min into the experiment (the first intervention occurs at 5 min), the AI uses communication and action data from 2.5 min up to 7.5 min. This 5-min window holds for every intervention decision.
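This first decision point reduces to a small piece of logic, sketched below under the assumptions stated in the text (z-scores already in standard-error units, weighted scores from Eqs. (1) and (2); the function name is illustrative):

```python
def choose_branch(z_com, z_act, com_i, act_i):
    """First decision point (sketch of the Fig. 5/7 logic): no intervention
    when both frequencies lie within +/-1 standard error of the
    high-performing data; otherwise enter the branch whose weighted
    z-score (com_i or act_i, from Eqs. (1)-(2)) is larger."""
    if abs(z_com) <= 1.0 and abs(z_act) <= 1.0:
        return "no_intervention"
    return "communication" if com_i >= act_i else "action"
```

For example, `choose_branch(0.5, -0.8, 0.2, 0.1)` returns `"no_intervention"`, while a team far off on communication (`z_com = 2.0`) with the larger weighted score enters the communication branch.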

Fig. 7

Once the AI process manager chooses a particular branch, similar, though unweighted, z-score calculations are computed on more specific team measures at the next decision point. These specific measures dictate the chosen intervention. For the action branch (top branch of Fig. 5), z-scores are computed for each of the six action categories for the drone designers and the operations specialists. The sub-branches in the figure represent these six action categories. These categories include evaluating/submitting drone designs, iterating on drone designs, and running the drone design assistive agent for the drone designers; and submitting a path plan, iterating on a path plan, and running the assistive path planner agent for the operations specialists. Whichever of these action categories deviates most from the high-performing team data determines the specific intervention to inject into the team. These are all discipline-level interventions, so either both drone designers or both operations specialists receive them.
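The action-branch selection amounts to an argmax over unweighted z-scores. The sketch below illustrates this; the category names and all numbers are invented for illustration and are not from the study:

```python
import math

def pick_action_category(observed, high_mu, high_sigma, n=11):
    """Action branch (sketch): compute an unweighted z-score per action
    category and return the category deviating most from the
    high-performing team data."""
    z = {cat: abs((observed[cat] - high_mu[cat]) / (high_sigma[cat] / math.sqrt(n)))
         for cat in observed}
    return max(z, key=z.get)

# Toy counts over one 5-min window (illustrative only):
observed   = {"evaluate_design": 0, "iterate_design": 8, "run_design_agent": 1,
              "submit_plan": 3, "iterate_plan": 6, "run_planner_agent": 1}
high_mu    = {"evaluate_design": 2, "iterate_design": 8, "run_design_agent": 1,
              "submit_plan": 3, "iterate_plan": 7, "run_planner_agent": 1}
high_sigma = {cat: 2.0 for cat in observed}

worst = pick_action_category(observed, high_mu, high_sigma)
```

With these toy numbers, the drone designers have evaluated/submitted far fewer designs than the high-performing baseline, so the corresponding discipline-level intervention would be chosen.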

If the AI chooses the communication branch (bottom branch of Fig. 5), two types of communication measures are calculated: communication semantics and communication frequency. Communication semantics includes team semantic similarity and design parameter and design strategy content. LSA computes the discourse similarity among the design team (treating each role as a distinct document in the model), using a singular value decomposition approach to reduce dimensionality within the discourse. Then, the average pairwise cosine similarity between role documents determines overall team semantic similarity. For the communication content, the AI counts sets of keywords for the problem-solving strategy and design parameter content. The keywords for design parameters include those related to the problem constraints and goals, including but not limited to velocity, payload, miles, houses, and profit, while the strategy keywords relate to how teams solve or adjust these design parameters and goals, such as increase, decrease, minimize, optimize, and balance. The communication semantic interventions are team-level and are thus received by the entire team. Communication frequency counts the total number of turns for each discipline (the three sub-branches off of frequency in Fig. 5 refer to the three disciplines of design, operations, and business). These three interventions are at the discipline level and are received by those respective roles. Once again, unweighted z-scores determine which metric is most off course: a z-score is calculated for all six dimensions, and the largest z-score determines which measure induces the specific intervention.
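The LSA-based team similarity measure can be sketched with plain numpy: build a term-document matrix with one document per role, truncate the SVD, and average the pairwise cosine similarities. The toy discourse, the dimensionality k = 2, and the raw count weighting are all assumptions for illustration, not the study's exact configuration:

```python
import numpy as np
from itertools import combinations

role_docs = {  # toy discourse, one "document" per team role (illustrative)
    "designer_1": "increase battery size to extend range and payload",
    "designer_2": "reduce cost by removing rods and shrinking propellers",
    "ops_1": "plan routes to reach more houses within drone range",
    "ops_2": "optimize delivery paths to balance profit and capacity",
    "problem_manager": "balance budget across design and operations goals",
}

# Term-document count matrix (rows = terms, columns = roles)
vocab = sorted({w for doc in role_docs.values() for w in doc.split()})
A = np.array([[doc.split().count(t) for doc in role_docs.values()]
              for t in vocab], dtype=float)

# SVD step of LSA: keep the top-k singular directions to reduce dimensionality
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
role_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per role

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Average pairwise cosine similarity across the five roles
pairs = list(combinations(range(role_vecs.shape[0]), 2))
team_similarity = sum(cos(role_vecs[i], role_vecs[j]) for i, j in pairs) / len(pairs)
```

In the framework, `team_similarity` (and the analogous keyword counts) would then feed the same z-score comparison against the high-performing team data as the other measures.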

## 3 Results

Using the experimental methodology outlined in Sec. 2, this section analyzes the resulting data and compares the constructed AI agent with the human process managers. These comparisons span several dimensions, including performance, intervention strategy, and perceived effectiveness. The maximum profit teams achieve across both sessions provides a measure of overall performance. The intervention strategies of the AI and human process managers are examined by identifying the types and distributions of interventions used. Data from the post-study questionnaires with the human process managers and team members provide further insight into manager strategies and ascertain how team members perceive their interventions, such as their relevancy and helpfulness.

### 3.1 Team Performance.

In this work, team profit serves as the overall measure to identify how teams perform under the guidance of either the AI agent or a human process manager. Team profit combines the achievements of both the drone design discipline and the operations discipline by totaling the weight of packages and food delivered within the customer market. Accordingly, due to the highly coupled and interdisciplinary nature of the problem, profit relies on the success of both disciplines: the types of drones designed and the path plans created. The problem manager can submit multiple plans throughout the experiment, though the best plan (i.e., the plan with the highest profit) serves as the primary performance measure for the team.

Figure 8 shows the average maximum profit the team conditions achieve in each of the experiment sessions. Recall that the second session presents a problem shock to the team; the customer market shifts to one for COVID-19, in which more customers with low-weight medical deliveries are added to the market, along with a 30% reduction in drone costs. Across both sessions, the two experimental conditions perform similarly (p = 0.6, d = 0.14), though a larger difference occurs in the second problem-solving session (p = 0.1, d = 0.59). While neither session presents a difference at the 5% significance level, the trend between sessions presents an interesting finding. The profit of teams guided by the AI agent marginally improves after the problem shock, while the profit of teams under human process management decreases. This trend suggests that the data-driven AI manager, which produces performance similar to human management in the first session, adapts its process strategies better to the problem shock.
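The effect sizes reported throughout this section (e.g., d = 0.14, d = 0.59) are Cohen's d values. A minimal sketch of the pooled-standard-deviation form is below; the two sample lists are invented for illustration and are not the study's data.

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical per-team maximum profits (arbitrary units), one list per condition.
ai_profits = [12.0, 15.0, 14.0, 18.0]
human_profits = [11.0, 13.0, 12.0, 14.0]
effect_size = cohens_d(ai_profits, human_profits)  # positive favors AI teams
```

By the usual rules of thumb, the d = 0.59 difference in the second session would be considered a medium effect, consistent with the trend the text describes despite the non-significant p-value.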

Fig. 8

Additionally, the post-study questionnaire queries participants on their perception of their teams’ performance. On an interval scale from 0 to 100, with 100 being perfect, team members rate both the perceived quality of their team's performance as well as their team's cohesion (cohesion describes “how your team worked together,” as defined to participants during the survey). Team member averages include the two drone designers, the two operations specialists, and the problem manager. Figure 9 shows that teams in the AI manager condition perceive the quality of their teams’ performance as significantly higher (p = 0.016, d = 0.41) and their teams’ cohesion as higher in a marginally significant way (p = 0.053, d = 0.32). Furthermore, the human process managers perceive these quite differently than their respective team members. Figure 10 shows that in terms of both the quality of performance (p = 0.025, d = 0.63) and cohesion (p = 0.004, d = 0.81), the human process managers rate these significantly higher than the team itself. Thus, the perception of comparable performance levels holds from the viewpoint of the teams themselves; the human process managers, however, tend to inflate these measures.

Fig. 9
Fig. 10

### 3.2 Process Manager Strategies and Insights.

Having demonstrated equivalent levels of performance between the human- and AI-managed teams, the intervention strategies are next compared. This analysis identifies the types of injected interventions across sessions and disciplines, and draws on the post-study questionnaire for a deeper understanding of why the human managers intervened. In the questionnaire, the human process managers answer questions related to the effectiveness of their own interventions and the dynamics within the teams they oversaw that triggered them to offer guidance when they did.

In total, the human process managers intervene 127 times using all the interventions from the prescribed set, while the AI agent intervenes 167 times but uses only eight of the prescribed interventions. While the AI process manager chooses to intervene more frequently, the human process managers tend to use more team-based interventions (37% of the humans’ interventions compared with 29% of the AI's interventions). Figure 11 presents the distribution of interventions across team sessions and disciplines and the proportion of intervention categories. The numbers next to each of the bars in the figure represent the raw counts. These are categorized by the two main intervention types: action-based interventions and communication-based interventions. Tables 1 and 2 from Sec. 2.3 present which interventions fall into the respective categories.

Fig. 11

Figure 11(a) shows the distribution of interventions to each team discipline (design specialists, operations specialists, and the problem managers). This includes team-level interventions; since each team-level intervention goes to all three disciplines, they are counted thrice in the figure. As shown, the AI process manager agent focuses more on the operations specialists than the human process managers do, with the design specialists and problem managers seeing more equal levels of interventions. Figures 11(b) and 11(c) also show the proportions of interventions by category for the two sessions. Both experimental sessions show similar levels of action and communication interventions for the two conditions, though the second session shows larger behavioral differences. A distinct commonality across these results is the high degree of similarity between the two process manager types regarding the emphasis on communication-based interventions. Across both problem-solving sessions, the proportion of communication interventions is much greater than that of action interventions, and when compared overall (combining results across both problem-solving blocks), the proportions are nearly identical, with 70% of the interventions being communication-based and 30% being action-based.

The post-study questionnaire with the human process managers corroborates this emphasis on communication. A short answer prompt asks the process managers, “What were some of the reasons you did/did not intervene with your team?—try to be as specific as possible.” Nearly all the process managers incorporate communication in their responses, covering reasons both for and against intervening. Several process managers note that they intervene when there is a lack of communication either across the entire team or within disciplines (i.e., if the drone designers were not communicating with each other). When describing instances when they do not intervene, some of the process managers note that they are reluctant to intervene during critical communication (i.e., sharing of information such as goals), as they do not want to interrupt the flow of information, or when the problem manager is perceived as effective in their role. Recall that in the particular team structure (Fig. 2), the problem manager is responsible for bridging the communication between disciplines. Thus, the holistic intervention strategies of both the AI agent and the humans highlight the need for effective communication during problem solving and its criticality as a measure for a process manager to track.

### 3.3 Intervention Effectiveness.

Having demonstrated the similarities between the AI agent's and human process managers’ intervention strategies, the next questions examine the effectiveness of the interventions. However, one first needs to confirm whether team members follow the interventions provided by the process managers. Figure 12 shows the percentage of team members rating their degree of compliance with the interventions received during the experiment. Asked during the post-study questionnaire, choices range on a categorical scale from “Always” to “Never.” As Fig. 12 shows, members in both conditions report similar levels of compliance with the interventions. This similarity holds across the entire range of options. Overall, team members are more likely than not to respond to the interventions, as approximately 65%–70% said they “always” or “most of the time” followed the provided interventions.

Fig. 12

Beyond their comparable, and generally high, willingness to follow the interventions, team members next rate characteristics of the interventions they received. Again, these ratings range on an interval scale from 0 to 100, with 100 being perfect. Figure 13 tells a similar story in terms of the AI agent matching human performance. Regardless of whether the interventions come from a human or the AI process manager, teams rate both the relevance (p = 0.35, d = 0.17) and helpfulness (p = 0.11, d = 0.29) of the interventions similarly (a higher score indicates higher helpfulness and relevancy). Furthermore, teams rate the AI agent as just as sensitive (p = 0.71, d = 0.05) to the needs of the team as the human process managers (a higher score indicates higher sensitivity). Taken together, the results show that the AI process manager agent is able to match the human process managers in providing interventions that are equally effective, relevant to the problem-solving process, and satisfying to the needs of the team.

Fig. 13

## 4 Discussion

This research introduces an AI agent for real-time process management, comparing its behaviors and impacts to those of a human in the same role. Trained on prior team problem-solving data, the AI process manager takes a data-driven approach. The training data involve the problem-solving characteristics of high-performing teams solving a similar design task, also using the collaborative research platform, HyForm. Even though trained to reproduce prior team problem-solving behaviors, this work shows that the AI process manager matches the behaviors and capabilities of human management during new design sessions. The results throughout this work highlight this common theme between the two types of process managers, including team performance, intervention strategy, and the teams’ perception of the effectiveness and helpfulness of the interventions. Interestingly, teams perceive the interventions provided by the AI process manager as just as relevant and sensitive to the needs of the team as those provided by humans observing the process.

In terms of team output, teams perform similarly under the two types of process management. In the context of the problem used, the highest plan profit a team submits measures the performance of the team. While the teams perform similarly in terms of this measure, the trends are somewhat different. Recall that between problem-solving sessions, a problem shock forces teams to adjust their strategy (the problem shock involves a market switch in customers and changes to the cost of drones). Following the problem shock, the teams guided by the AI agent increase their performance while the human-managed teams do not. While not reaching significance at 5% with the current population size, this trend indicates possible higher robustness in the process of the teams guided by the AI. Future work can increase the population size of the study to see if these trends persist and reach significance. Nonetheless, the AI performs at least as effectively as the human process managers in guiding team problem solving in real time. Members on the AI-managed teams also perceive the quality of their teams’ performance as significantly higher than those on the human-managed teams. Furthermore, the human process managers perceive both the quality and cohesion of their teams as significantly better than the team members they managed do. This could be due to the human process managers not being able to dynamically evaluate how the team is doing in terms of performance. While the process managers have access to the process and interactions of the team, they do not receive feedback on performance progress. An intriguing direction for future work is to implement this feedback on progress within the AI framework and examine how it influences the managerial strategy and team performance.

While the AI is trained on prior team problem-solving behaviors, the holistic intervention strategy between the AI and human process managers turns out to be remarkably analogous. Both rely heavily on communication-based interventions rather than action-based interventions, even though the prescribed intervention set includes an equal number of each. Nearly 70% of the interventions by both process manager types are communication based. This trend also holds across problem-solving sessions after the problem shock. This highlights the criticality of communication within teams as an effective measure for process managers to track during problem solving. Differences between the two process managers start to show in the number of total interventions and in the number of team-based interventions. Overall, the human process managers are more balanced in their intervention strategy: they intervene fewer times, choosing more times not to intervene, choosing more team-level interventions, and equally distributing their interventions across disciplines. The AI process manager focuses a bit more, providing more interventions to the operations specialists. Even with these differences, the data-driven process management is broadly similar to that of the human strategy.

In refining the AI agent's intervention framework for future research, further insights are gained from the post-study questionnaires with the human process managers. Asked whether they felt constrained by the prescribed intervention set, nearly all (except one) did, though the set was intentionally constrained to reduce additional variability in the experimental design. Through a short answer question, they identified additional ways they would have liked to intervene with their teams. The desire for individualized interventions emerges as one of the main themes. The current interventions go either to the entire team or to entire disciplines (i.e., both drone designers). Instead, future iterations of the AI process manager could identify deficiencies at the role level and intervene with individual team members. Additionally, several process managers note that they would have liked custom interventions, coming up with specific interventions in real time, and directly interacting or chatting with the team. While the authors did consider this, it would create many additional layers of variability within the experimental design and would make it difficult to compare approaches. The process managers also comment on more interventions specific to the problem manager (in the current design, the only problem-manager-specific intervention is to increase their communication frequency), goal-specific interventions, and positive reinforcement. Regarding the latter, several participants indicate that instead of the “No intervention” option, it would be better to increase team morale with an intervention that provides positive reinforcement, such as, “Keep up the good work.” These aspects can be implemented in future iterations of the intervention framework.

While still only in its first version, the AI process manager presents boundless opportunities as an experimental testbed for future research. In its current form, the AI tracks design actions and certain aspects of communication at the team and discipline levels. As noted by a few of the human process managers, further iterations can track these measures at the individual level within roles and inject the interventions to specific team members. Studied across different contexts, the timing of interventions, or interruptions generally, could lead to different impacts, either impeding or helping problem solving [36–39]. Since timing is not a direct goal here, the experimental design followed a uniform timing approach to control for this, though different timing schemes, such as anachronistic scheduling, can be tested [40]. Additionally, in the current work, the manager tracks and facilitates the overall problem-solving process of the team. The manager might instead serve as a problem manager and mediate with interventions more relevant to the design task, goals, and constraints of the problem. In fact, the manager could be equipped with features of both and mediate with varying levels of problem- and process-related interventions. Since this research focuses on process, these specificities on task, goals, and constraints are not directly applicable to the process manager in this work. As well, the overarching hope is that the method for process management is general, domain independent, and applies across problems.

A large body of literature in human-computer interaction and artificial intelligence studies human trust in AI. As AI becomes more capable, intelligent, and integrated, humans may become more skeptical and less willing to listen to or utilize its power. These questions are also important for the application of AI within this research. As a process manager, or in any type of managerial role for that matter, those within the team must have trust in order to actually respond to the provided interventions. Fortunately, the results of this work show that the team members did comply. The last question on the post-study questionnaire even asks participants whether they thought the process manager was an AI agent or a human. Regardless of condition, participants split identically, 75%/25%, between those who believed the process manager was an AI agent and those who believed it was a human. So even though most participants believe the process manager to be AI, they still listen. Future work can examine this finding further, in addition to this question of trust, to see how these perceptions may affect participants' willingness to obey the process manager and whether these perceptions change how they feel about the interventions they receive.

## 5 Conclusion

Process management brings profound benefits to engineering teams and the engineering design process. In this vein, this work creates an AI agent that manages the design process of teams in real time. This AI process manager dynamically tracks several action- and communication-based features of team process, integrates them, and chooses an appropriate intervention. The problem context for this work focuses on a highly interconnected drone design and path-planning task, one that requires effective interdisciplinary collaboration for success. While the AI process manager takes a data-driven approach to intervening (trained on previous team problem-solving data), it is compared with the impact and strategies of human process managers in the same role. The results of this work show that the developed AI process manager matches the capabilities of the human process managers. These similarities hold across several dimensions, including overall team performance, intervention strategy, as well as the perceived impact on team performance, process, and intervention efficacy. Overall, communication deficiencies and inefficiencies stood out as guiding measures to elicit interventions by both the human and AI process managers. This highlights the criticality of effective communication management, particularly during a highly interconnected and interdisciplinary design problem such as the one presented in this work. Moreover, the underlying computational framework for the AI shows promise as an experimental testbed for future research in real-time management. Additional measures, interventions, and decision-making strategies can be implemented and tested to better understand and enhance the impact of real-time process management during the design of complex engineering systems.

## Acknowledgment

The authors would like to thank Gary Stump for his discussion on this project. This work was supported by the Defense Advanced Research Projects Agency through cooperative agreement N66001-17-1-4064 and the Air Force Office of Scientific Research under Grant No. FA9550-18-1-0088. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

## References

1. Paulus, P., Dzindolet, B., and Kohn, M. N. W., 2012, “Collaborative Creativity – Group Creativity and Team Innovation,” Handbook of Organizational Creativity, pp. 327–357.
2. Seat, E., and Lord, S. M., 1998, “Enabling Effective Engineering Teams: A Program for Teaching Interaction Skills,” FIE ‘98. 28th Annual Frontiers in Education Conference. Moving From “Teacher-Centered” to “Learner-Centered” Education. Conference Proceedings (Cat. No. 98CH36214), Tempe, AZ, Nov. 4–7, IEEE, pp. 246–251.
3. Horwitz, S. K., and Horwitz, I. B., 2007, “The Effects of Team Diversity on Team Outcomes: A Meta-Analytic Review of Team Demography,” J. Manage., 33(6), pp. 987–1015.
4. Dahlin, K. B., Weingart, L. R., and Hinds, P. J., 2005, “Team Diversity and Information Use,” 48(6), pp. 1107–1123.
5. Dong, A., Hill, A. W., and Agogino, A. M., 2004, “A Document Analysis Method for Characterizing Design Team Performance,” ASME J. Mech. Des., 126(3), pp. 378–385.
6. McComb, C., Cagan, J., and Kotovsky, K., 2017, “Optimizing Design Teams Based on Problem Properties: Computational Team Simulations and an Applied Empirical Test,” ASME J. Mech. Des., 139(4), p. 041101.
7. Tribelsky, E., and Sacks, R., 2011, “An Empirical Study of Information Flows in Multidisciplinary Civil Engineering Design Teams Using Lean Measures,” Archit. Eng. Des. Manag., 7(2), pp. 85–101.
8. Karau, S. J., and Wilhau, A. J., 2019, Individual Motivation Within Groups: Social Loafing and Motivation Gains in Work, Academic, and Sports Teams, Cambridge, MA, pp. 3–51.
9. Gardner, H. K., 2012, “Performance Pressure as a Double-Edged Sword,” 57(1), pp. 1–46.
10. Cohen, G. P., 1993, “The Virtual Design Team: An Information-Processing Model of Design Team Management,” Doctoral dissertation, Stanford University.
11. Kunz, J. C., Cohen, G. P., and Levitt, R. E., 1993, “Modeling Effects of Organizational Structure and Communication Tools on Design Team Productivity,” AI and Theories of Groups and Organizations: Conceptual and Empirical Research. Papers from the 1993 AAAI Workshop Technical Report WS-93-03, Menlo Park, CA.
12. De Montjoye, Y. A., Stopczynski, A., Shmueli, E., Pentland, A., and Lehmann, S., 2014, “The Strength of the Strongest Ties in Collaborative Problem Solving,” Sci. Rep., 4(1), pp. 1–6.
13. den Otter, A., and Emmitt, S., 2008, “Design Team Communication and Design Task Complexity: The Preference for Dialogues,” Archit. Eng. Des. Manag., 4(2), pp. 121–129.
14. Senescu, R. R., Aranda-Mena, G., and Haymaker, J. R., 2013, “Relationships Between Project Complexity and Communication,” J. Manag. Eng., 29(2), pp. 183–197.
15. Heisig, P., Clarkson, P. J., and Vajna, S., 2010, Modelling and Management of Engineering Processes, Springer, New York.
16. Senescu, R. R., and Haymaker, J. R., 2009, “Specifications for a Social and Technical Environment for Improving Design Process Communication,” Proceedings of the 26th International Conference on IT in Construction, Istanbul, Turkey, Oct. 1–3, pp. 227–237.
17. Zurita, N. F. S., and Tumer, I. Y., 2017, “A Survey: Towards Understanding Emergent Behavior in Complex Engineered Systems,” Proceedings of the ASME Design Engineering Technical Conference, Cleveland, OH, Aug. 6–9.
18. Song, B., Soria Zurita, N. F., Zhang, G., Stump, G., Balon, C., Miller, S. W., Yukish, M., Cagan, J., and McComb, C., 2020, “Toward Hybrid Teams: A Platform to Understand Human-Computer Collaboration During the Design of Complex Engineered Systems,” Proceedings of the Design Society: DESIGN Conference, Cavtat, Croatia, Oct. 26–29.
19. Koch, J., and Paris-Saclay, I., 2017, “Design Implications for Designing With a Collaborative AI,” AAAI Spring Symposium Series, Palo Alto, CA, Mar. 27–29.
20. Camburn, B., Arlitt, R., Anderson, D., Sanaei, R., Raviselam, S., Jensen, D., and Wood, K. L., 2020, “Computer-Aided Mind Map Generation via Crowdsourcing and Machine Learning,” Res. Eng. Des., 31(4), pp. 383–409.
21. Camburn, B., He, Y., Raviselvam, S., Luo, J., and Wood, K., 2020, “Machine Learning-Based Design Concept Evaluation,” ASME J. Mech. Des., 142(3), p. 031113.
22. Dering, M. L., Tucker, C. S., and Kumara, S., 2018, “An Unsupervised Machine Learning Approach to Assessing Designer Performance During Physical Prototyping,” ASME J. Comput. Inf. Sci. Eng., 18(1), p. 011002.
23. Williams, G., Meisel, N. A., Simpson, T. W., and McComb, C., 2019, “Design Repository Effectiveness for 3D Convolutional Neural Networks: Application to Additive Manufacturing,” ASME J. Mech. Des., 141(11), p. 111701.
24. Jin, Y., Levitt, R. E., Christiansen, T. R., and Kunz, J. C., 1995, “The Virtual Design Team: Modeling Organizational Behavior of Concurrent Design Teams,” AI EDAM, 9(2), pp. 145–158.
25. Zhang, G., Raina, A., Cagan, J., and McComb, C., 2021, “A Cautionary Tale About the Impact of AI on Human Design Teams,” Des. Stud., 72, p. 100990.
26. Rao, S. S., Nahm, A., Shi, Z., Deng, X., and Syamil, A., 1999, “Artificial Intelligence and Expert Systems Applications in New Product Development—A Survey,” J. Intell. Manuf., 10(3), pp. 231–244.
27. Raina, A., Cagan, J., and McComb, C., 2019, “Transferring Design Strategies From Human to Computer and Across Design Problems,” ASME J. Mech. Des., 141(11), p. 114501.
28. Maier, T., Zurita, N. F. S., Starkey, E., Spillane, D., Menold, J., and McComb, C., 2020, “Analyzing the Characteristics of Cognitive-Assistant-Facilitated Ideation Groups,” Proceedings of the ASME Design Engineering Technical Conference, Virtual, Online, Aug. 17–19.
29. Maier, T., Abdullah, S., McComb, C., and Menold, J., 2021, “A Query Conundrum: The Mental Challenges of Using a Cognitive Assistant,” SN Comput. Sci., 2(3), p. 194.
30. Gyory, J. T., Cagan, J., and Kotovsky, K., 2019, “Are You Better off Alone? Mitigating the Underperformance of Engineering Teams During Conceptual Design Through Adaptive Process Management,” Res. Eng. Des., 30(1), pp. 85–102.
31. Goucher-Lambert, K., Gyory, J. T., Kotovsky, K., and Cagan, J., 2020, “Adaptive Inspirational Design Stimuli: Using Design Output to Computationally Search for Stimuli That Impact Concept Generation,” ASME J. Mech. Des., 142(9), p. 091401.
32. Landauer, T. K., Foltz, P. W., and Laham, D., 1998, “An Introduction to Latent Semantic Analysis,” Discourse Process., 25(2–3), pp. 259–284.
33. Gyory, J., Kotovsky, K., and Cagan, J., 2021, “The Influence of Process Management: Uncovering the Impact of Real-Time Managerial Interventions via a Topic Modeling Approach,” ASME J. Mech. Des., 143(11), p. 111401.
34. “HyFormTM GitHub,” https://github.com/hyform/drone-testbed-server/releases/tag/2021-March-v2, Accessed April 23, 2021.
35. Zhang, G., Soria Zurita, N. F., Stump, G., Song, B., Cagan, J., and McComb, C., 2021, “Data on the Design and Operation of Drones by Both Individuals and Teams,” Data Br., 36, pp. 1–11.
36. Gero, J. S., Jiang, H., Dobolyi, K., Bellows, B., and Smythwood, M., 2015, “How Do Interruptions During Designing Affect Design Cognition,” Design Computing and Cognition'14, London, UK, June 23–25.
37. D. M., Boehm-Davis, D. A., Trafton, J. G., and Monk, C. A., 2011, “Mitigating Disruptive Effects of Interruptions Through Training: What Needs to be Practiced?,” J. Exp. Psychol. Appl., 17(2), pp. 97–109.
38. Hess, S. M., and Detweiler, M. C., 1994, “Training to Reduce the Disruptive Effects of Interruptions,” Proc. Human Factors Ergonom. Soc. Annual Meet., 38(18), pp. 1173–1177.
39. Sio, U. N., Kotovsky, K., and Cagan, J., 2017, “Interrupted: The Roles of Distributed Effort and Incubation in Preventing Fixation and Generating Problem Solutions,” Mem. Cogn., 45(4), pp. 553–565.
40. Lock, S., 2013, “