Abstract

Teams are common throughout engineering practice and industry when solving complex, interdisciplinary problems. Previous works in engineering problem solving have studied the effectiveness of teams and individuals, showing that in some circumstances, individuals can outperform collaborative teams working on the same task. The current work extends these insights to novel team configurations in virtual, interdisciplinary teams. In these team configurations, the whole meta-team can interact, but the sub-teams within them may or may not. Here, team performance and process are studied within the context of a complex drone design and path-planning problem. Via a collaborative research platform called HyForm, communication and behavioral patterns can be tracked and analyzed throughout problem solving. This work shows that nominally inspired sub-structured teams, where members work independently, outperform interacting sub-structured teams. While problem-solving actions remain consistent, communication patterns significantly differ, with nominally inspired sub-structured teams communicating significantly less. Questionnaires reveal that the manager roles in the nominally inspired sub-structured teams, which are more central in communication and information flow, experience a greater cognitive and workload burden than their counterparts in the interacting sub-structured teams. Moreover, members in the nominally inspired sub-structured teams experience their teams as inferior on various dimensions, including communication and feedback effectiveness, yet their performance is superior. Overall, this work adds to the literature on nominal versus interacting problem-solving teams, extending the findings to larger, interdisciplinary teams.

1 Introduction

Ongoing research efforts in the engineering design community focus on teamwork and collaboration [1–3]. Provocative results are emerging surrounding the notion of groups of individuals (i.e., nominal teams) outperforming collective team problem solving. This raises the critical question of whether and when teams are truly optimal, or at least better performing than individuals [4,5]. The engineering design community has not been the first to illuminate such findings. Studies in the social psychology literature indicate that nominal teams can outperform idea-generating groups during brainstorming activities [6–8]. These findings contrast sharply with the plethora of benefits that teams offer to the problem-solving process, which stem from the diversity of perspectives and expertise that teams provide [9–11]. The current work is driven by the tension between these findings: How can we harmonize the efficiencies of individual problem solving with the benefits that teaming provides?

To date, the study of teaming has focused on completely interacting, or unconstrained, teams versus teams of individuals [12–14]. However, the current work takes a novel approach to team configurations, focusing on different sub-team structures within an interacting meta-team. In other words, while the whole team interacts, the sub-team disciplines within the broader team may not. Thus, the contribution here is moving toward an interdisciplinary team whose sub-teams may or may not work on their parts of the task together. Our specific team architecture is studied within a previously used experimental platform called HyForm, which joins different disciplines during a complex design task [15]. Because the meta-team is not disciplinarily homogeneous, the sub-teams need to constantly exchange evolving information with the other disciplines within the team. Here, homogeneous and interdisciplinary refer to the defined roles in this experiment rather than the background of the team members themselves. The exchange of information across disciplines is done through a central problem manager, who becomes the mediator between disciplines. The design context stages a complex interdisciplinary problem, conducted virtually via a collaborative research platform. Within this research platform, reconfigurable communication channels enable restriction of team interactions, resulting in our unique nominally inspired, sub-structured teams.

Accordingly, the following research questions (RQs) underline this work:

  • RQ1: Are nominally inspired or interacting sub-structured teams more effective during a complex, interdisciplinary design task?

  • RQ2: How do nominally inspired versus interacting sub-structured teams impact the problem-solving behaviors of the overall meta-team?

Teams are ubiquitous in problem-solving scenarios and engineering practice, from logistics and operations planning to product design, software, and aerospace; the applications are endless. The hope is that this research will critically challenge the assumption of team superiority and drive more innovative and strategic decision-making principles for team construction.

2 Background

2.1 The Nominal Team—Comparing Teams and Individuals in an Experimental Context.

The study of teams within the engineering design community spans human subject studies and computational simulations. Nominal teams are often used throughout the literature to compare individuals and teams within experimental contexts [5,16–18]. In general, a nominal team refers to a group of participants who work individually, without communication or collaboration, wherein the best individual solution is selected as the team solution. That is, nominal teams are experimentally created artifacts, generated by randomly pairing or grouping individuals who worked on the task alone. Nominally inspired sub-teams are a more realistic analog of this experimental artifact, and an important aspect of the current work as the means of comparing individual and interacting problem solving.

Work in engineering and related areas has supported the notion that nominal teams can outperform collaborative teams. For example, on a conceptual engineering design task, Gyory et al. showed that groups of individual problem solvers can produce better solutions than interacting teams, even when those teams are guided by a process manager [5]. Similarly, in data science competitions on Kaggle, simulated groups of individuals also outperformed interacting teams [19]. However, this tendency does not necessarily hold across all contexts. For example, in collaborative computer-aided design, Phadnis et al. showed that interacting pairs design higher-quality models [20]. Yet, due to coordination inefficiencies within pairs, individuals were much quicker on a per-person basis. This result provides evidence that the efficiency of a team's interactions and collaboration processes can dictate its performance level.

Communication and other interactions among members are a defining feature of team versus individual problem solving [21–23]. Effective and cohesive communication can lead to common and shared mental models and better overall performance among teams [24,25]. Because communication is such a critical mode of team interaction, its presence or absence is leveraged to differentiate the sub-team structures in this work. The experimental platform for this study, discussed in Sec. 2.2, allows for reconfigurable communication channels among team members.

2.2 A Collaborative Research Platform—HyForm.

HyForm is a collaborative research platform created through a partnership among researchers at Carnegie Mellon University (CMU), the Pennsylvania State University (PSU), and the PSU Applied Research Laboratory (ARL) [15]. The platform simulates a complex, interdisciplinary design problem, partnering drone designers, path planners, and business planners. A valuable tool for studying problem-solving behaviors, HyForm can track all actions and communication among team members throughout a study session. The current work uses HyForm's capabilities to restrict and study team members’ interactions while designing a complex engineered system.

The HyForm platform contains three distinct modules: drone design, operations, and business planning. All team members are assigned a discipline (i.e., sub-team), and each discipline works in its respective module in HyForm. Design specialists use the drone design module to build and evaluate drones, arranging components such as rods, batteries, and airfoils to create different drone configurations. Each drone is then evaluated to determine its cost, range, velocity, and payload. Operations specialists work in the operations module to create and assess delivery routes for a specific target market using the drones provided by the design specialists. The problem manager uses the business plan module to select customers, define the operations specialists’ market, and choose the most profitable plan from their team.

Additionally, the problem manager monitors the overall team performance and facilitates communication between the design and operations specialists. HyForm contains a text chat interface that allows communication between team members. As mentioned earlier, the chat tool is reconfigurable such that experimenters can restrict or expand communication between team members, enabling different team structures. This facilitates the study of both interacting sub-structured teams and nominally inspired sub-structured teams, which is the primary focus of this research. See Ref. [26] for figures of the modules within HyForm.

2.3 Cognitive Workload and Stress.

Solving complex design problems requires and creates high levels of mental and cognitive workload, defined as the difference between the cognitive demands of the task and the attentional resource capabilities afforded by an individual [27–29]. This increase in cognitive workload can also exacerbate stress [30–32]. Particularly with tasks that are dynamically changing, uncertain, and contain large solution spaces, like the task emulated within the HyForm platform, designers must be able to remain agile, reacting to both internal and external pressures during the problem-solving process [33,34]. Not only do these high levels of cognitive workload and stress adversely affect health, but they can also lead to inferior performance and efficiency [35]. We are particularly interested in studying cognitive workload and the differences experienced across the two team structures presented here. For example, are cognitive workload and stress heightened when members working on the same part of the problem cannot directly collaborate with each other? Due to the inherent connection between these measures and team performance, they are critical to analyze when considering such different team structures.

The NASA-TLX, or NASA Task Load Index, is a common assessment tool for measuring cognitive workload along a multidimensional scale [36]. Many subdimensions are ascertained from the tool, including mental demand, temporal demand, performance, effort, stress, discouragement, insecurity, and frustration. Previous studies have validated the assessment tool as a measure of both cognitive experience and stress. The NASA-TLX has exhibited robust sensitivity across a wide variety of human factors studies, in both simulated and live tasks, testing for team effectiveness and performance [31,32,37–39]. These qualities make it an ideal instrument for measuring the impacts of different team structures on how members work together and perceive their experiences on the team.

The NASA-TLX measures several subdimensions across the assessment, each of which is defined as follows [40]. Mental demand captures how much cognitive and perceptual activity is required for a task, describing the task as easy or demanding, simple or complex, and the amount of thinking and deciding involved. Physical demand relates to the amount of physical activity required, characterizing the task as easy or demanding, slack or strenuous. Temporal demand, describing the task as slow, rapid, or frantic, tests the amount of time pressure felt during the task due to the pace at which the tasks or task elements occurred. Performance queries how successful individuals believe they were in achieving the goals of the task, essentially measuring their satisfaction with themselves. Effort tests both the mental and physical work required to accomplish the achieved level of performance on the task. Finally, frustration level queries the amount of irritation, stress, and annoyance experienced while completing the task, as opposed to feeling content, relaxed, and complacent. Taken together, these dimensions measure the mental workload of individuals and provide insight into the complexity of the task, performance, and efficiency.

2.4 Virtual/Distributed Teams and Collaboration.

Another major facet of this work is the distributed nature and virtuality of the design teams. Unlike co-located teams, distributed teams depend on communication as the main mode of interacting and collaborating with one another [41]. Several characteristics are associated with the virtuality of teams: geographic dispersion, electronic dependence, structural dynamism, and national diversity [42]. Of these factors, the two most relevant here are electronic dependence and structural dynamism. In this work, team members can collaborate in one of two electronic ways: directly chatting through communication channels within the experimental platform and/or sharing their design progress with another team member. The virtual nature of collaboration here enables us to manipulate structural dynamism as well, by directly controlling the team structures and whether members can chat/share designs with other team members.

Conflicting evidence exists regarding the benefits and shortcomings of distributed teaming. Several studies identify that the direct interactions with coworkers or team members afforded by colocation improve creativity and innovation via the sharing of tacit knowledge [43]. Even ad hoc encounters with team members increase the sharing of ideas [44]. However, others posit that virtual collaboration, while much more complex, can explicitly create the space and time for individual brainstorming and thinking and can increase creativity [45]. Regardless, distance is negatively correlated with communication frequency, and it is well known that communication is correlated with team performance [46,47].

Communication and trust are critical and interrelated factors for distributed teaming, and both may be fragile or temporal over time [48]. The virtuality of teams necessitates different modes of communication, such as email, instant messaging, and other computer-mediated methods [49]. As with the findings on creativity and innovation, there are inconsistent findings related to communication between virtual and face-to-face interactions [50–52]. These inconsistencies generally arise from the complexities of virtual collaboration and the continuum along which it exists [53]. There are many ways to study communication, including frequency, content, quality, timeliness, and closed-loop communication [54–58]. Communication frequency is most impactful during the early stages of team formation and norming, as the potential for building trust and a common understanding of the problem increases with more frequent interactions [59]. As a result, communication frequency is how we study communication here.

3 Methodology

This section presents details regarding the methodology of the experimental design. First, Sec. 3.1 discusses the participants and the two team conditions used: the interacting sub-structured and the nominally inspired sub-structured team configurations. Then, Sec. 3.2 provides details regarding the design task, including the problem shock introduced midway through problem solving. Finally, Sec. 3.3 progresses through the timing of the 65-min experiment.

3.1 Participants and Experimental Conditions.

In total, 105 individuals participated in the study. These participants were mechanical engineering students recruited from similar mechanical engineering design classes at CMU and PSU in the United States to control for similar levels of engineering education at the university level. The study was approved by the Institutional Review Board at CMU, and all participants read and signed a consent form before partaking in the study. Following full completion of the experiment, they were compensated via an Amazon gift card at a rate of $10 per hour of time spent. The study was conducted entirely online; participants interacted with the experimenter and all aspects of the study virtually through the HyForm platform.

All individuals were randomly assigned to one of two experimental conditions, an interacting sub-structured team or a nominally inspired sub-structured team. Each team consisted of five members. The main difference between the two team conditions (as shown in Fig. 1) relates to how team members interacted. The direct lines of communication are indicated by the solid arrows in the figure. Because the meta-team is not homogeneous with respect to the defined roles, the sub-teams need to constantly exchange evolving information with the other disciplines within the team. This exchange occurs through the central problem manager, who becomes the mediator between the disciplines formed by each sub-team. The experiment was conducted entirely virtually via the HyForm platform. Accordingly, text-based communication channels enabled team members to communicate with one another, and they did not know each other's identities.

Fig. 1
Team structures dictated based on types of team communication: (a) interacting substructure and (b) nominally inspired substructure

In the interacting sub-structured condition, the two design specialists could chat with each other, the two operations specialists could chat with each other, and each discipline could directly chat with the problem manager. When the design specialists wanted to relay information to the operations specialists, they needed to go through the problem manager. The design specialists could also see each other's submitted drone designs, and the operations specialists could see each other's submitted delivery plans, thereby sharing design progress in the sub-teams.

However, in the nominally inspired sub-structured condition, while the communication lines existed between each member and the problem manager, members within the same discipline could not directly communicate. The line of communication between the design specialists was severed, as was the line between the operations specialists. Moreover, the design specialists could not see each other's submitted drone designs, and the operations specialists could not see each other's submitted delivery plans. In this team structure, team members worked on their own designs without communication and collaboration with their fellow members, submitting their work directly to the problem manager. In this manner, the team structure mimics a nominal team, where team members work alone. Even so, each individual in one discipline can still obtain information from the other discipline via the problem manager. As with the team condition, participants were randomly assigned to one of these roles on the team.

Altogether, the final data collection consisted of 10 interacting sub-structured and 11 nominally inspired sub-structured teams.

3.2 Design Task.

Provided with an initial budget to build and operate a drone fleet, teams attempted to maximize their profit. They chose customers to deliver to from a customer map and received profit based on the distribution of different packages delivered. On a team, the drone designers built and modified drones through the drone design module, while the operations specialists selected from these drones to create drone fleets and designed path plans among customers on the customer map. Sample actions for the drone designers within HyForm include adding drone components (motors, batteries, airfoils, etc.), increasing/decreasing the size of components, or moving components. Sample actions for the operations specialists include adding/removing delivery paths from point A to point B and selecting completed drone designs. Consequently, the local objectives for the drone designers are the range, payload, cost, and velocity of the drones, while the local objectives for the operations specialists are the cost and amount of food/packages delivered to customers.

When drones and delivery routes were created, these plans were sent to the problem manager, whose objective was to select the final plans for submission. A team's performance was measured by the submission with the highest achieved profit. Plan profits weigh the overall costs from the disciplines against the revenue from food and package deliveries to customers. The problem task is complex and requires both parallelization of subtasks and communication within a team. Each of the three disciplines has its own specialized knowledge of design variables and constraints, and thus these must be effectively communicated and worked on simultaneously within the allotted time to perform well.
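The profit objective described above can be sketched as revenue minus costs. The snippet below is an illustrative approximation only: the actual HyForm cost model is internal to the platform, and the prices, field names, and cost terms here are assumptions for illustration.

```python
# Hypothetical sketch of the plan-profit objective; all constants and field
# names are assumptions, not the actual HyForm model.
FOOD_PRICE = 10.0     # assumed revenue per unit of food delivered
PARCEL_PRICE = 15.0   # assumed revenue per parcel delivered

def plan_profit(deliveries, fleet):
    """Profit = delivery revenue minus drone fleet and route operating costs."""
    revenue = sum(d["food"] * FOOD_PRICE + d["parcels"] * PARCEL_PRICE
                  for d in deliveries)
    drone_cost = sum(drone["cost"] for drone in fleet)
    route_cost = sum(d["route_cost"] for d in deliveries)
    return revenue - drone_cost - route_cost
```

The problem manager's role then reduces to picking the submitted plan with the largest such value, which is exactly the selection criterion used for team performance.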

While many aspects of the experimental architecture were similar to previous studies conducted in HyForm by the authors [60,61], a new problem shock was introduced for this work. During the second half of problem solving, team members were notified of specific restrictions and constraints. These constraints, along with the drone design and operations specialists’ modules, are depicted in Fig. 2. In terms of the constraints, the design specialists experienced a physical wall representing the hangar space, which created a geometrical constraint for drone designs (Fig. 2(a)). Second, operations specialists had a no-flight area on the customer map, which created an obstacle to work around in the delivery routes (Fig. 2(b)).

Fig. 2
Problem shocks introduced for the second problem-solving session: (a) the hangars (white walls) limited drones to a maximum size and (b) no-flight areas (cylindrical object) created obstructions in path plans

3.3 Design Study Timeline.

The experiment began with 15 min of prestudy materials. This included reading and signing the consent form, reading the problem brief, and filling out a prestudy questionnaire. The problem brief provided details related to the design task and the assigned roles. Each discipline on a team (drone specialist, operations specialist, and problem manager) had a distinct problem statement for their role. The prestudy questionnaire queried individuals on their experience with certain aspects of the experiment, such as building drones, business/operations planning, and computational expertise (this questionnaire was intended to be used to form teams, but the results showed that individuals had very limited experience/exposure in these areas). Following this, participants went through a 10-min guided tutorial. The guided tutorial accustomed team members to the HyForm platform, their respective discipline's HyForm module, and the communication channels. While not explicitly checking or testing their working knowledge afterward, the tutorial guided them through a set of tasks pertaining to their role that they would experience during the actual task.

After completing all pre-session materials, the first problem-solving session commenced. Teams were given 20 min to work through the initial problem statement (maximize team profit with a certain budget to build drones and routes). The experimenter reminded the problem manager to submit their team's best plan by the end of the session. All plans sent to the problem managers contained corresponding profit values, so the problem manager's selection of the best plan was not significantly subjective. Afterward, team members completed a short mid-study questionnaire and were provided with a 3-min break to either rest from their computer or review tutorial materials. After the break, the second problem-solving session commenced. This session was the same as the first except for the additional constraints (Fig. 2) on the problem: restricted sizes of drones and obstacles in the customer markets. Again, the problem manager submitted the best plan of their team by the end. After this second, 20-min session, participants filled out a poststudy questionnaire.

Both the mid-study and poststudy questionnaires included questions from the NASA Task Load Index (NASA-TLX) survey, which, as discussed in Sec. 2.3, evaluates participants’ experiences of mental/temporal demand, performance, effort, stress, frustration, and other attitudes while working through the problem-solving sessions [36]. In addition to these subdimensions, three others were integrated into the assessment: stress, insecurity, and discouragement. The integration of these measures followed a methodology similar to Nolte and McComb [38]. These subdimensions are combined into an overall mental workload measure and a cognitive experience measure. The cognitive experience measure is an equally weighted average of the subdimensions of mental demand, temporal demand, performance, effort, and stress. The mental workload measure is an equally weighted average of mental demand, temporal demand, stress, discouragement, frustration, and insecurity. These measures, rather than their corresponding subdimensions, are compared between team structure conditions. In addition to the NASA-TLX questions, which assess at the individual level, supplementary questions queried participants about more global team characteristics, such as their overall team's effort, goals, quality of work, collaboration, and communication [62–64]. The aforementioned assessments and questions utilized different rating scales and scale types. For ease, these will be discussed alongside their corresponding results in the succeeding sections.
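The two composite measures are simple equally weighted averages of their subdimension ratings. A minimal sketch, assuming ratings are stored as a dictionary keyed by subdimension on the 0–100 scale (the key names are illustrative):

```python
# Composite NASA-TLX measures as equally weighted averages of subdimension
# ratings (0-100 scale). Dictionary keys are assumed names for illustration.
def cognitive_experience(ratings):
    """Average of mental demand, temporal demand, performance, effort, stress."""
    dims = ["mental", "temporal", "performance", "effort", "stress"]
    return sum(ratings[d] for d in dims) / len(dims)

def mental_workload(ratings):
    """Average of mental demand, temporal demand, stress, discouragement,
    frustration, and insecurity."""
    dims = ["mental", "temporal", "stress",
            "discouragement", "frustration", "insecurity"]
    return sum(ratings[d] for d in dims) / len(dims)
```

Note that the two measures share three subdimensions (mental demand, temporal demand, and stress), so they are correlated by construction, which is one reason they are compared between conditions rather than interpreted in isolation.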

4 Results

The results are broken into three main sections. First, Sec. 4.1 compares the performance, or profit achieved, between the interacting sub-structured and nominally inspired sub-structured team configurations. Second, in Sec. 4.2, problem-solving behaviors are compared. Behaviors include both communication and action counts at the team and discipline levels. Finally, Sec. 4.3 presents findings from the questionnaires related to perceptions of workload and the cognitive experience of team members. The statistical tests presented in Secs. 4.1 and 4.2 were run via Mann–Whitney U-tests owing to the smaller sample sizes, while the statistics in Sec. 4.3 were run via standard t-tests.
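For readers unfamiliar with the nonparametric test used here, the sketch below shows the mechanics of the Mann–Whitney U statistic and its normal-approximation z score. In practice a library routine (e.g., scipy.stats.mannwhitneyu) would be used; this hand-rolled version omits the tie correction in the variance for brevity.

```python
# Mann-Whitney U-test sketch: U counts pairs (x_i, y_j) with x_i > y_j
# (ties count 0.5), then z uses the normal approximation.
import math

def mann_whitney_u(x, y):
    """U statistic for sample x relative to sample y."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

def mann_whitney_z(x, y):
    """Normal-approximation z score for U (no tie correction)."""
    n1, n2 = len(x), len(y)
    u = mann_whitney_u(x, y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sigma
```

Because the test relies only on rank order, it is robust to the skewed profit distributions and small per-condition sample sizes (10 and 11 teams) in this study.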

4.1 Team Performance.

The first comparison between the two team configurations examines overall team performance. Due to the high degree of dependency between team member roles and achieving profitability, profit serves as the primary measure of team performance. Recall that during each problem-solving session, the problem manager continually submitted plans on behalf of their team. The plan with the maximum profit that a team achieved during each session is tracked and averaged across teams.

Figure 3 shows the average maximum profit across the two team conditions. Using a Mann–Whitney U-test, overall, across both sessions, the nominally inspired condition achieves a significantly higher profit than the interacting condition (p = 0.038, z = 2.08). When comparing each problem-solving session individually, the largest difference occurs in the first session. These results support the underlying basis of superior outcomes from nominal teams. Here, when team members working on similar aspects of the problem are not allowed to directly communicate (chat) or collaborate (share designs), they perform better. Whether this can be attributed to more controlled communication or another aspect of process loss in teams is explored next by examining the problem-solving behaviors across teams.

Fig. 3
Average team profit. Error bars show ±1 standard error.

4.2 Problem-Solving Behaviors.

In terms of problem-solving behaviors, two global metrics are analyzed: communication and action. The motivations underlying these two metrics are twofold. First, HyForm tracks both types of behaviors over time, and because the teams are distributed/virtual, a team member can only be either acting or communicating at any given time. Thus, tracking these two measures allows complete reconstruction of the entire problem-solving process of a team. Second, a previous study by the authors revealed tradeoffs between time allocated toward communicating and time allocated toward acting, particularly following a problem shock like the one presented in this work. Thus, these tradeoffs can provide insights into the cognitive allocation strategies of the different types of teams [61]. Communication count (or communication frequency) represents the cumulative number of messages from one team member to another, irrespective of the content within the message. Similarly, the action count is defined as any distinct action taken by a member within their respective module in HyForm. The counts for communication depend only on the originator of the message rather than the receiver.
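The two metrics above can be tallied from a timestamped event log. The sketch below is a hypothetical reconstruction: the actual HyForm log schema is not described here, so the event types and field names are assumptions. Chat events are credited to the sender only, matching the originator-based counting described above.

```python
# Hypothetical tally of communication and action counts from an event log.
# Event types and field names ("chat", "sender", "member") are assumed.
from collections import Counter

def count_behaviors(events):
    """Return (messages per sender, actions per member) from a session log."""
    comm, action = Counter(), Counter()
    for e in events:
        if e["type"] == "chat":
            comm[e["sender"]] += 1      # credited to the originator only
        else:
            action[e["member"]] += 1    # any distinct module action
    return comm, action
```

Aggregating these per-member counters by discipline (design, operations, manager) yields the role-level comparisons reported in Figs. 4 and 5.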

Via Mann–Whitney U-tests, at the team level, the interacting sub-structured condition communicates significantly more than the nominally inspired substructure (p = 0.023, z = −2.27) when combining both problem-solving sessions. Looking at each session separately, the higher communication within the interacting sub-structured teams primarily holds for the first problem-solving session (p = 0.041, z = −2.04) rather than the second (p = 0.25, z = −1.16). However, a similar trend exists in both conditions: teams tend to communicate more following the problem shock. While perhaps not directly surprising, these results support the notion that the mediated communication channels of the nominally inspired sub-structured teams allow them to allocate fewer cognitive resources to communicating.

Figure 4 examines communication at the discipline level, showing that the difference at the team level is driven primarily by the drone designers. In fact, the only significant difference between the two team conditions is within the drone designers (p = 0.003, z = −2.98), rather than within the operations specialists (p = 0.54, z = −0.62) or the problem manager (p = 0.14, z = −1.48). Furthermore, comparing the trend across problem-solving sessions among the disciplines, the problem managers exhibit the steepest increase, or impact, from the problem shock between sessions. This indicates that while teams relied more on the problem manager after the problem shock, this reliance did not materialize differently between the two team structures.

Fig. 4
Average communication count by team role. Error bars show ±1 standard error.

Next, the action count is compared between the two team conditions. Here, the action count is the average number of design changes, regardless of the specific action taken. Figure 5 shows the average action count per team condition for each discipline. Results show no significant difference between the team structures for any discipline (p = 0.25, z = 1.16; p = 0.72, z = −0.37; p = 0.76, z = 0.30, respectively). Thus, while the nominally inspired sub-structured teams dedicated less cognitive effort to communication, this did not directly translate into more design action effort. It may be that the effort saved on communication allowed members in that team structure to slow down and think more, without being distracted by additional messages.

Fig. 5
Average action count by team role. Error bars show ±1 standard error.

4.3 Team Members’ Experience

4.3.1 Cognitive and Workload Experience.

The last analyses explore the mid-study and poststudy questionnaires completed by team members. The questions are based on the NASA-TLX survey and provide insights into team members’ perceptions of cognitive demand and workload while working on the task. Participants rated specific measures on a sliding numerical scale from 0 to 100, anchored by visual labels from “very low” to “very high.” A higher rating on the performance scale indicates that a member thought they performed better, and a higher rating on the stress scale indicates that a member experienced more stress; the other dimensions follow the same logic. Additional questions asked participants about features of the team as a whole, including the team’s productivity, effort, and whether the team came to a consensus. Participants answered these on a seven-option ordinal scale bounded by “Very inaccurate” and “Very accurate.”
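As a concrete illustration of how 0–100 subscale ratings can be rolled into a single workload score, the sketch below uses the unweighted “raw TLX” convention. The subscale names follow the standard NASA-TLX; the inversion of the performance scale and the specific ratings are assumptions for illustration, not the study’s reported procedure.

```python
def raw_tlx(ratings):
    """Unweighted raw-TLX composite from 0-100 subscale ratings.

    The performance scale is inverted so that higher composite values
    always indicate a heavier workload (a common raw-TLX convention).
    """
    adjusted = {k: (100 - v if k == "performance" else v) for k, v in ratings.items()}
    return sum(adjusted.values()) / len(adjusted)

# Hypothetical ratings for one participant (illustrative only)
ratings = {
    "mental_demand": 70,
    "physical_demand": 30,
    "temporal_demand": 60,
    "performance": 80,   # higher = participant felt they performed better
    "effort": 65,
    "frustration": 75,
}
print(raw_tlx(ratings))
```

Averaging after inverting performance keeps all six subscales pointing in the same direction, so a single number can be compared across roles and conditions.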

Figure 6 shows the overall cognitive experience by team member role, and Fig. 7 shows the overall mental workload by team role. An intriguing result emerges when comparing the problem managers to the other members of the team across both measures. The problem managers report significantly greater cognitive demand (p < 0.001) and mental workload (p = 0.002) in the nominally inspired sub-structured condition during the first problem-solving session. Diving into the underlying subdimensions of these measures, the problem managers experience greater stress (p < 0.001) and frustration (p < 0.001) in the nominally inspired sub-structured condition. The operations specialists report some of the lowest levels in the nominally inspired sub-structured teams. Generally, the interacting sub-structured condition does not show as much variability in these measures across team roles as the nominally inspired sub-structured condition does. These results are interesting because the structure inspired through nominality, while lowering the burden of extra communication, made team members, particularly the problem managers, much more sensitive to stress. However, the imposed stress did not reach a level that hindered performance.

Fig. 6
Overall cognitive experience by team role. Boxes represent the interquartile range.
Fig. 7
Overall mental workload by team role. Boxes represent the interquartile range.

4.3.2 Team Dynamics Experience.

In addition to the workload and cognitive experiences, questions also queried team members on the dynamics of the entire team. Table 1 shows a subset of these questions, chosen from the broader set for their relevance to understanding perceptions of team behaviors and interactions, which are expected to be impacted by the differences in team structure. Questions were answered on ordinal scales with seven discrete options, bounded between “strongly disagree/inaccurate” and “strongly agree/accurate.” The exact rating categories are shown in the subsequent figures (Figs. 8 and 9). To quantitatively analyze the differences, the categories are converted to a numerical scale from 1 (low) to 7 (high), averaged, and tested via a paired, two-tailed t-test.
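The conversion-and-test procedure described above can be sketched in Python as follows. The seven category labels and the responses are placeholders (the text reports only the scale endpoints), and scipy’s paired t-test stands in for the analysis.

```python
from scipy.stats import ttest_rel

# Assumed seven-point label set; only the endpoints are given in the text.
SCALE = {
    "Very inaccurate": 1, "Moderately inaccurate": 2, "Slightly inaccurate": 3,
    "Neither": 4,
    "Slightly accurate": 5, "Moderately accurate": 6, "Very accurate": 7,
}

def to_numeric(responses):
    """Map ordinal category labels onto the 1 (low) to 7 (high) scale."""
    return [SCALE[r] for r in responses]

# Hypothetical matched responses to "Team communicates effectively"
nominal = to_numeric(["Slightly inaccurate", "Neither", "Moderately inaccurate",
                      "Slightly accurate", "Slightly inaccurate", "Neither"])
interacting = to_numeric(["Slightly accurate", "Moderately accurate", "Neither",
                          "Moderately accurate", "Slightly accurate", "Moderately accurate"])

t_stat, p_value = ttest_rel(nominal, interacting)  # paired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A negative t-statistic here indicates that the first (nominal) group rated the item lower, mirroring the direction of the differences reported in Table 2.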

Fig. 8
Participants’ perceptions on two team-level dimensions showing significant differences between team structures: effective communication and effective feedback within the teams
Fig. 9
Participants’ perceptions on two team-level dimensions showing significant differences between team structures: equal participation and cooperation within the teams
Table 1

A subset of the team-level questions from the poststudy questionnaire conducted with team members

Question | Response type
“Team fulfills its mission” | Accurate/Inaccurate
“Team accomplishes its objectives” | Accurate/Inaccurate
“Team meets the requirements” | Accurate/Inaccurate
“Team achieves its goals” | Accurate/Inaccurate
“Team is productive” | Accurate/Inaccurate
“Team is efficient” | Accurate/Inaccurate
“Team communicates effectively” | Accurate/Inaccurate
“Team has a clear group structure” | Accurate/Inaccurate
“Team easily comes to a consensus” | Accurate/Inaccurate
“Team gives effective feedback” | Accurate/Inaccurate
“Team makes decisions easily” | Accurate/Inaccurate
“Team participates equally” | Agree/Disagree
“Members on this team are clear about their roles” | Agree/Disagree
“Team members are cooperative” | Agree/Disagree
“Subgroups are necessary” | Agree/Disagree

Table 2 presents the questions with significant differences between the nominally inspired (μN) and interacting (μI) sub-structured teams. The table presents the means, standard deviations, and p-values for session 1 (s1) and session 2 (s2). Results show that during the second problem-solving session, members of the nominally inspired teams perceive their teams as significantly less efficient (p = 0.048) and as having a less clear group structure (p = 0.057). Across both problem-solving sessions, the nominally inspired sub-structured teams deem their teams as having less effective feedback (p = 0.027 and p = 0.049), less effective communication (p = 0.022 and p = 0.057), and less equal (p = 0.007 and p = 0.010) and cooperative (p = 0.040 and p = 0.019) participation. Figures 8 and 9 dive deeper into these latter four dimensions. Overall, while the nominally inspired substructure outperforms the interacting substructure in terms of performance, there are additional perceived downstream effects related not to how the team performs but to team operations (i.e., team process). These results highlight that the nominally inspired substructure can have negative, consequential impacts on how members feel supported in the team, whether through stress or through perceptions of how their team works together. While these negative perceptions exist, they did not have a detrimental impact on performance.

Table 2

Team-level questions showing significant differences from the poststudy questionnaire conducted with team members

Question | Response type | Mean ± STD (s1) | Mean ± STD (s2) | p (s1) | p (s2)
“Team is efficient” | Accurate/Inaccurate | μN = 3.80 ± 1.65; μI = 3.94 ± 1.73 | μN = 4.15 ± 1.74; μI = 4.77 ± 2.11 | 0.749 | 0.048
“Team communicates effectively” | Accurate/Inaccurate | μN = 3.04 ± 1.86; μI = 3.92 ± 1.99 | μN = 3.92 ± 1.79; μI = 4.58 ± 1.91 | 0.022 | 0.057
“Team has a clear group structure” | Accurate/Inaccurate | μN = 3.64 ± 1.73; μI = 3.96 ± 1.76 | μN = 4.23 ± 1.64; μI = 4.83 ± 1.63 | 0.336 | 0.057
“Team gives effective feedback” | Accurate/Inaccurate | μN = 2.96 ± 1.67; μI = 3.81 ± 1.95 | μN = 3.91 ± 1.79; μI = 4.58 ± 1.81 | 0.027 | 0.049
“Team participates equally” | Agree/Disagree | μN = 3.61 ± 1.69; μI = 4.50 ± 1.70 | μN = 4.36 ± 1.76; μI = 5.19 ± 1.81 | 0.007 | 0.010
“Team members are cooperative” | Agree/Disagree | μN = 3.89 ± 1.87; μI = 4.62 ± 1.81 | μN = 4.62 ± 1.62; μI = 5.25 ± 1.79 | 0.040 | 0.019

5 Discussion

This work studies two different substructures of teams. While the meta-teams are interacting, their substructures are not necessarily so. On the one hand, the interacting sub-structured configuration consists of members within the same discipline who can communicate and access each other's completed designs. On the other hand, the nominally inspired sub-structured configuration consists of members within the same discipline who can neither directly communicate with one another nor see each other's completed designs. In essence, the disciplines in this latter structure form a nominal team, that is, a team of individuals working alone. They submit their parts of the task to the problem manager without direct feedback from their counterparts, and the problem manager then chooses the final designs.

Comparing team performance, the nominally inspired sub-structured teams perform significantly better than the interacting sub-structured teams in terms of the profits of their final plans. This trend is seen across both problem-solving sessions, though it is most prominent in the first. Recall that a problem shock is introduced between the problem-solving sessions, adding constraints on the design spaces. The trends show that the nominally inspired sub-structured teams reach higher performance levels in the first session and maintain those levels in the second, after the shock. On the other hand, the interacting sub-structured teams improve their performance between sessions, though still without reaching the levels achieved by the nominally inspired sub-teams.

The results from the team behaviors show that while teams act equally, the interacting sub-structured groups communicate significantly more than the nominally inspired groups. While this result might at first seem trivial, pairing it with performance suggests that more frequent discourse may add a burden onto teams, shifting their cognitive efforts from designing to talking and leading to inferior performance. Results from previous studies in the psychology literature support such a claim. Verbalization of one's own ideas is shown to have a significant detrimental impact on outcomes, and communication follows a curvilinear relationship, where too little or too much can be detrimental [65,66]. Of course, not all communication can be unfavorable. The nominally inspired sub-structured teams’ communication is more directed: all team members direct their communication to the problem manager, who manages not only the teams’ final plans but also the information flow between and among disciplines. This more targeted effort of communication and information flow may be more efficient and effective for team performance.

However, this more targeted communication certainly does not come burden free. Even though the problem managers play an information-bridging role between disciplines in both team conditions, the questionnaire data reveal that the problem managers in the nominally inspired sub-structured teams perceive much greater cognitive and workload burdens. These burdens are heightened not only relative to the other members of their teams but also relative to the problem managers in the interacting sub-structured condition. This can be a consequence of the greater centrality of the problem managers in these teams. In the nominally inspired condition, the problem managers not only synthesize final designs but also need to communicate across and within disciplines (via more channels), adding cognitive demand for these managers.

Moreover, members within the nominally inspired sub-structured teams perceive their teams as significantly inferior across various dimensions, most prominently communication effectiveness, feedback effectiveness, and equal member participation. These dimensions are noteworthy because, of the set of questions, they are the most relevant to the teams’ process rather than to team outcomes. For example, team communication could have been perceived as inferior because all members cannot directly interact, which goes hand in hand with effective feedback. While the problem manager in the nominally inspired substructure should have been providing feedback to the rest of the team, perhaps they cannot provide the specific feedback on the drone designs or operation plans that members in those disciplines hope for. Yet, these nominally inspired sub-structured teams perform better than their interacting sub-structured counterparts, even though they did not feel as supported.

To bring these implications into a more practical light, imagine a scenario where an engineering design team is working on bringing a new product to market. The team is composed of engineers, who design the product, and a business unit, which identifies market entry strategies. In practice, this team can take several different forms, such as being divided by function or being cross-functional. The former is the structure studied in this work. For this type of structure to be optimal, instead of the engineers working directly with each other and the marketing team working directly with each other, members would be better off working individually. Then, a central, cross-functional team manager mediates the collection of ideas and coalesces the final product strategy. Our results indicate that this team structure may be the most effective in producing the best product launch.

The implications of this research extend, perhaps even more promisingly, to distributed teaming. With advances in the digital age and technology, and as a likely lasting by-product of the COVID-19 pandemic, teamwork, communication, and collaboration are all taking reimagined forms in the workforce, often through computer- or technology-mediated channels [67,68]. For example, distributed product development teams are becoming more prevalent. The results here indicate that less frequent, more targeted communication is better. Given distributed teams’ reliance on technology for interaction, these technologies can begin to direct and restrict members’ communication frequencies and patterns to improve team effectiveness.

The important takeaway from this work is that, even in interdisciplinary teams, structuring the homogeneous substructures in a nominally inspired manner, with individuals solving their tasks alone, delivers results that are superior to having interacting teams within the disciplines. This work notably extends the emerging set of findings that nominal design teams appear to be superior to interacting design teams. This work has implications for how industry could alternatively structure their teams in practice and how engineering instructors could structure teams within educational projects.

6 Limitations and Future Work

There are several limitations of this research that may affect the generalizability of the findings but, at the same time, also open promising directions for future work. First, the study is run entirely with mechanical engineering students, so the results do not span varying levels of professional experience. Even so, we do not expect the findings to be significantly constrained to student groups or to mechanical engineers. The design task is not wholly related to mechanical engineering, and the prestudy questionnaire queried participants about significant prior exposure to drone design or operations, finding none. So, the results are not directly linked to the demographics in this case. Consequently, there is an expectation that these results will generalize across individuals of other backgrounds and experiences. Within human subject studies like this, backgrounds and experiences need to be controlled for, which was done here. It would be interesting to now extend this work to professional mechanical engineers to identify how these findings generalize across disciplines as well as expertise levels.

Second, several results rely on self-assessments as the method of data collection, namely the assessments of cognitive experience, workload, and general experiences of working in the team. It should be acknowledged that self-assessments like these are subjective and inherently come with limitations. For example, individuals may not always be the best judges of their own abilities or self-reflections and may be biased, such as by overevaluating or underevaluating their own performance level [69,70]. However, the results here do not rely solely on self-reflections of performance, but also on behavioral experiences, such as stress. Regardless, it is important to note these inherent limitations of self-assessments. Furthermore, it should also be noted that many other dimensions of team constructs have been shown to impact performance but were not studied here, including trust, psychological safety, information sharing, and even gender diversity [71–74]. The former two of these constructs lend themselves nicely to direct research extensions. For example, it would be interesting to study how the constriction of communication within the nominally inspired sub-teams impacts trust across the different disciplines, especially if members are not able to directly interact or build norms at the beginning of team formation. This restriction of communication may also negatively impact psychological safety, which has been shown to be one of the strongest predictors of team outcomes.

The results presented in this article open several immediate opportunities for future research. First, since the main difference between team structures lies in the communication channels, the content of chat within a team may also be critical to effective performance. Currently, only the count of communications is considered. Techniques in natural language processing can examine the cohesion and content of the discourse and what type of information is being transferred between members. This can begin to determine the quality and content of the communication and whether it correlates with performance, which, as discussed previously, can also mediate performance. Moreover, the action analysis also considers only action count. The diversity and types of actions being performed can be coded as an additional depth of behavioral data. Another facet of future work can look at more precise correlations between the questionnaire data and overall team performance. Currently, the results identify insights between the nominally inspired and interacting sub-structured teams, but comparing higher- and lower-performing teams could reveal further insights on teaming. This also extends to examining the differences in team behaviors, communications, and actions between higher- and lower-performing teams.

7 Conclusion

This work studies the effects of different substructures on the behaviors, cognition, and performance of interdisciplinary teams during a complex engineering task. The substructures embody collaborative, interacting teams and nominally inspired teams of individuals working on their tasks solo. Results show that teams with nominally inspired substructures outperform teams with interacting substructures. In addition, the communication of teams with nominally inspired substructures is more targeted and efficient, occurring significantly less frequently. However, this mode of interaction can place an extra burden on managers or mediators who are more central within this type of communication network. Furthermore, team members perceive the nominally inspired substructure as inferior across several dimensions, including the effectiveness of team communication and feedback and the equality of contribution among members. Overall, the results provide insights into the interaction patterns of interdisciplinary teams and the advantage resulting from synergizing the benefits of individual problem solving with interacting teams.

Footnote

2

Source code for HyForm is available at https://github.com/hyform

Acknowledgment

The authors would like to thank Gary Stump for his discussion on this project. This work was supported by the Air Force Office of Scientific Research (Grant No. FA9550-18-1-0088) and the Defense Advanced Research Projects Agency (Cooperative Agreement N66001-17-1-4064). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the sponsors. A version of this article has been accepted at the International Design and Engineering Technical Conferences [75].

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1.
Kim
,
K.
, and
Lee
,
K.-P.
,
2016
, “
Collaborative Product Design Processes of Industrial Design and Engineering Design in Consumer Product Companies
,”
Des. Stud.
,
46
(
9
), pp.
226
260
.
2.
Coburn
,
J. Q.
,
Salmon
,
J. L.
, and
Freeman
,
I.
,
2018
, “
Effectiveness of an Immersive Virtual Environment for Collaboration With Gesture Support Using Low-Cost Hardware
,”
ASME J. Mech. Des.
,
140
(
4
), p.
042001
.
3.
Cheng
,
K.
, and
Olechowski
,
A.
,
2021
, “
Some (Team) Assembly Required: An Analysis of Collaborative Computer-Aided Design Assembly
,”
ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference
,
Virtual
,
Aug. 17–19
, Vol. 85420. American Society of Mechanical Engineers, p. V006T06A026.
4.
McComb
,
C.
,
Cagan
,
J.
, and
Kotovsky
,
K.
,
2017
, “
Optimizing Design Teams Based on Problem Properties: Computational Team Simulations and an Applied Empirical Test
,”
ASME J. Mech. Des.
,
139
(
4
), p.
041101
.
5.
Gyory
,
J. T.
,
Cagan
,
J.
, and
Kotovsky
,
K.
,
2019
, “
Are You Better off Alone? Mitigating the Underperformance of Engineering Teams During Conceptual Design Through Adaptive Process Management
,”
Res. Eng. Des.
,
30
(
1
), pp.
85
102
.
6.
Diehl
,
M.
, and
Stroebe
,
W.
,
1987
, “
Productivity Loss in Brainstorming Groups: Toward the Solution of a Riddle
,”
J. Pers. Soc. Psychol.
,
53
(
3
), pp.
497
509
.
7.
Diehl
,
M.
, and
Stroebe
,
W.
,
1991
, “
Productivity Loss in Idea-Generating Groups: Tracking Down the Blocking Effect
,”
J. Pers. Soc. Psychol.
,
61
(
3
), pp.
392
403
.
8.
Taylor
,
D. W.
,
Berry
,
P. C.
, and
Block
,
C. H.
,
1958
, “
Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking?
,”
Adm. Sci. Quart.
,
3
(
1
), pp.
23
47
.
9.
Tadmor
,
C. T.
,
Satterstrom
,
P.
,
Jang
,
S.
, and
Polzer
,
J. T.
,
2012
, “
Beyond Individual Creativity: The Superadditive Benefits of Multicultural Experience for Collective Creativity in Culturally Diverse Teams
,”
J. Cross Cult. Psychol.
,
43
(
3
), pp.
384
392
.
10.
van Knippenberg
,
D.
,
Nishii
,
L. H.
, and
Dwertmann
,
D. J.
,
2020
, “
Synergy From Diversity: Managing Team Diversity to Enhance Performance
,”
Behav. Sci. Policy
,
6
(
1
), pp.
75
92
.
11.
Hoever
,
I. J.
,
van Knippenberg
,
D.
,
van Ginkel
,
W. P.
, and
Barkema
,
H. G.
,
2010
, “
Fostering Team Creativity: Perspective Taking as Key to Unlocking Diversity's Potential
,”
Academy of Management Proceedings. No. 1. Briarcliff Manor, NY 10510: Academy of Management
,
Montreal, Canada
,
Aug. 8–10
, pp.
1
6
.
12.
Prat
,
A.
,
2002
, “
Should a Team Be Homogeneous?
,”
Eur. Econ. Rev.
,
46
(
7
), pp.
1187
1207
.
13.
Hoever
,
I. J.
,
Zhou
,
J.
, and
van Knippenberg
,
D.
,
2018
, “
Different Strokes for Different Teams: The Contingent Effects of Positive and Negative Feedback on the Creativity of Informationally Homogeneous and Diverse Teams
,”
Acad. Manag. J.
,
61
(
6
), pp.
2159
2181
.
14.
Diehl
,
M.
,
1992
, “
Production Losses in Brainstorming Groups: The Effects of Group Composition on Fluency and Flexibility of Ideas
,”
Joint Meeting of the European Association of Experimental Social Psychology and the Society for Experimental Social Psychology
,
Leuven/Louvain-la-Neuve, Belgium
.
15.
GitHub Inc.
, “
HyFormTM GitHub
,” https://github.com/hyform/drone-testbed-server/releases/tag/2021-March-v2, Accessed April 23, 2021.
16.
Wright
,
D. B.
,
2007
, “
Calculating Nominal Group Statistics in Collaboration Studies
,”
Behav. Res. Methods
,
39
(
3
), pp.
460
470
.
17.
Linsey
,
J. S.
, and
Becker
,
B.
,
2011
, “Effectiveness of Brainwriting Techniques: Comparing Nominal Groups to Real Teams,”
Design Creativity 2010
,
Springer
,
London
, pp.
165
171
.
18.
Chen
,
C. X.
,
Trotman
,
K. T.
, and
Zhou
,
F.
,
2015
, “
Nominal Versus Interacting Sub-Team Electronic Fraud Brainstorming in Hierarchical Audit Teams
,”
Account. Rev.
,
90
(
1
), pp.
175
198
.
19.
Maier
,
T.
,
DeFranco
,
J.
, and
McComb
,
C.
,
2019
, “
An Analysis of Design Process and Performance in Distributed Data Science Teams
,”
Team Perform. Manag.
,
25
(
7/8
), pp.
419
439
.
20.
Phadnis
,
V.
,
Arshad
,
H.
,
Wallace
,
D.
, and
Olechowski
,
A.
,
2021
, “
Are Two Heads Better Than One for Computer-Aided Design?
,”
ASME J. Mech. Des
,
143
(
7
), p.
071401
.
21.
Mello
,
A. S.
, and
Ruckes
,
M. E.
,
2006
, “
Team Composition
,”
J. Bus.
,
79
(
3
), pp.
1019
1039
.
22.
Cannon-Bowers
,
J. A.
,
Salas
,
E.
, and
Converse
,
S.
,
1993
, “Shared Mental Models in Expert Team Decision Making,”
Individual and Group Decision Making: Current Issues
,
N. J.
Castellan
, ed.,
Lawrence Erlbaum Associates, Inc.
,
Hillsdale, NJ
, pp.
221
246
.
23.
Hill
,
A. W.
,
Dong
,
A.
, and
Agogino
,
A. M.
,
2002
, “Towards Computational Tools for Supporting the Reflective Team,”
Artificial Intelligence in Design’02
,
Springer
,
Dordrecht
, pp.
305
325
.
24.
Weimann
,
P.
,
Hinz
,
C.
,
Scott
,
E.
, and
Pollock
,
M.
,
2010
, “
Changing the Communication Culture of Distributed Teams in a World Where Communication Is Neither Perfect nor Complete
,”
Electron. J. Inf. Syst. Eval.
,
13
(
2
), pp.
187
196
.
25.
Kim
,
M. S.
,
2007
, “
Analysis of Team Interaction and Team Creativity of Student Design Teams Based on Personal Creativity Modes
,”
Proceedings of ASME 2007 International Design Engineering Technical Conferences and Computers
,
Las Vegas, NV
,
Sept. 4–7
, pp.
1
13
.
26.
Song
,
B.
,
Soria Zurita
,
N. F.
,
Zhang
,
G.
,
Stump
,
G.
,
Balon
,
C.
,
Miller
,
S. W.
,
Yukish
,
M.
,
Cagan
,
J.
, and
McComb
,
C.
,
2020
, “
Toward Hybrid Teams: A Platform to Understand Human-Computer Collaboration During the Design of Complex Engineered Systems
,”
International Design Conference – Design 2020
,
Virtual
,
Oct. 26–29
, pp.
1551
1560
.
27.
Nguyen
,
T. A.
, and
Zeng
,
Y.
,
2017
, “
Effects of Stress and Effort on Self-Rated Reports in Experimental Study of Design Activities
,”
J. Intell. Manuf.
,
28
(
7
), pp.
1609
1622
.
28.
Dinar
,
M.
,
Shah
,
J. J.
,
Cagan
,
J.
,
Leifer
,
L.
,
Linsey
,
J.
,
Smith
,
S. M.
, and
Hernandez
,
N. V.
,
2015
, “
Empirical Studies of Designer Thinking: Past, Present, and Future
,”
ASME J. Mech. Des.
,
137
(
2
), p.
021101
.
29.
Wickens
,
C. D.
,
1992
,
Engineering Psychology and Human Performance
,
HarperCollins
,
New York
.
30.
Brown
,
I. D.
,
1994
, “
Driver Fatigue
,”
Hum. Factors
,
36
(
2
), pp.
298
314
.
31.
Fallahi
,
M.
,
Motamedzade
,
M.
,
Heidarimoghadam
,
R.
,
Soltanian
,
A. R.
, and
Miyake
,
S.
,
2016
, “
Effects of Mental Workload on Physiological and Subjective Responses During Traffic Density Monitoring: A Field Study
,”
Appl. Ergon.
,
52
(
1
), pp.
95
103
.
32.
Heikoop
,
D. D.
,
de Winter
,
J. C.
,
van Arem
,
B.
, and
Stanton
,
N. A.
,
2017
, “
Effects of Platooning on Signal-Detection Performance, Workload, and Stress: A Driving Simulator Study
,”
Appl. Ergon.
,
60
(
4
), pp.
116
127
.
33.
Dym
,
C. L.
,
Agogino
,
A. M.
,
Eris
,
O.
,
Frey
,
D. D.
, and
Leifer
,
L. J.
,
2005
, “
Engineering Design Thinking, Teaching, and Learning
,”
J. Eng. Edu.
,
94
(
1
), pp.
103
120
.
34.
Kana
,
A. A.
,
Shields
,
C. P. F.
, and
Singer
,
D. J.
,
2016
, “
Why Is Naval Design Decision-Making So Difficult?
RINA, Royal Institution of Naval Architects—International Conference on Warship 2016: Advanced Technologies in Naval Design, Construction, and Operation, Royal Institution of Naval Architects
,
London, UK
,
Oct. 26–27
.
35.
Sandi
,
C.
,
2013
, “
Stress and Cognition
,”
Wiley Interdiscip. Rev. Cogn. Sci.
,
4
(
3
), pp.
245
261
.
36.
Hart
,
S. G.
, and
Staveland
,
L. E.
,
1988
, “
Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research.
Adv. Psych.
,
52
(
4
), pp.
139
183
.
37.
Dykstra
,
J.
, and
Paul
,
C. L.
,
2018
, “
Cyber Operations Stress Survey (COSS): Studying Fatigue, Frustration, and Cognitive Workload in Cybersecurity Operations
,”
11th USENIX Workshop on Cyber Security Experimentation and Test (CSET 18)
,
Baltimore, MD
,
Aug. 13
.
38.
Nolte
,
H.
, and
McComb
,
C.
,
2021
, “
The Cognitive Experience of Engineering Design: An Examination of First-Year Student Stress Across Principal Activities of the Engineering Design Process
,”
Des. Sci.
,
7
(
3
), pp.
1
31
.
39. Battiste, V., and Bortolussi, M., 1988, “Transport Pilot Workload: A Comparison of Two Subjective Techniques,” Proceedings of the Human Factors Society Thirty-Second Annual Meeting, Human Factors Society, Santa Monica, CA, pp. 150–154.
40. Rubio, S., Díaz, E., Martín, J., and Puente, J. M., 2004, “Evaluation of Subjective Mental Workload: A Comparison of SWAT, NASA-TLX, and Workload Profile Methods,” Appl. Psychol., 53(1), pp. 61–86.
41. Hertel, G., Geister, S., and Konradt, U., 2005, “Managing Virtual Teams: A Review of Current Empirical Research,” Hum. Resour. Manag. Rev., 15(1), pp. 69–95.
42. Gibson, C. B., and Gibbs, J. L., 2006, “Unpacking the Concept of Virtuality: The Effects of Geographic Dispersion, Electronic Dependence, Dynamic Structure, and National Diversity on Team Innovation,” Adm. Sci. Q., 51(3), pp. 451–495.
43. Mascitelli, R., 2000, “From Experience: Harnessing Tacit Knowledge to Achieve Breakthrough Innovation,” J. Prod. Innov. Manag., 17(3), pp. 179–193.
44. Sailer, K., 2011, “Creativity as Social and Spatial Process,” Facilities, 29(1/2), pp. 6–18.
45. Thompson, L., 2021, “Virtual Collaboration Won’t Be the Death of Creativity,” MIT Sloan Manag. Rev., https://sloanreview.mit.edu/article/virtual-collaboration-wont-be-the-death-of-creativity/
46. Leenders, R. T. A. J., Van Engelen, J. M. L., and Kratzer, J., 2003, “Virtuality, Communication, and New Product Team Creativity: A Social Network Perspective,” J. Eng. Technol. Manag., 20(1–2), pp. 69–92.
47. Marlow, S. L., Lacerenza, C. N., Paoletti, J., Burke, C. S., and Salas, E., 2018, “Does Team Communication Represent a One-Size-Fits-All Approach?: A Meta-Analysis of Team Communication and Performance,” Organ. Behav. Hum. Decis. Process., 144(1), pp. 145–170.
48. Jarvenpaa, S. L., and Leidner, D. E., 1999, “Communication and Trust in Global Virtual Teams,” Org. Sci., 10(6), pp. 791–815.
49. Marlow, S. L., Lacerenza, C. N., and Salas, E., 2017, “Communication in Virtual Teams: A Conceptual Framework and Research Agenda,” Hum. Resour. Manag. Rev., 27(4), pp. 575–589.
50. Martins, L. L., Gilson, L. L., and Maynard, M. T., 2004, “Virtual Teams: What Do We Know and Where Do We Go From Here?,” J. Manag., 30(6), pp. 805–835.
51. Hiltz, S. R., Johnson, K., and Turoff, M., 1986, “Experiments in Group Decision Making: Communication Process and Outcome in Face-to-Face Versus Computerized Conferences,” Hum. Commun. Res., 13(2), pp. 225–252.
52. Singh, H., Cascini, G., and McComb, C., 2022, “Virtual and Face-to-Face Team Collaboration Comparison Through an Agent-Based Simulation,” ASME J. Mech. Des., 144(7), p. 071706.
53. Kirkman, B. L., and Mathieu, J. E., 2005, “The Dimensions and Antecedents of Team Virtuality,” J. Manag., 31(5), pp. 700–718.
54. Marks, M. A., Zaccaro, S. J., and Mathieu, J. E., 2000, “Performance Implications of Leader Briefings and Team-Interaction Training for Team Adaptation to Novel Environments,” J. Appl. Psychol., 85(6), pp. 971–986.
55. González-Romá, V., and Hernández, A., 2014, “Climate Uniformity: Its Influence on Team Communication Quality, Task Conflict, and Team Performance,” J. Appl. Psychol., 99(6), pp. 1042–1058.
56. Warkentin, M. E., Sayeed, L., and Hightower, R., 1997, “Virtual Teams Versus Face-to-Face Teams: An Exploratory Study of a Web-Based Conference System,” Dec. Sci., 28(4), pp. 975–996.
57. McIntyre, R. M., and Salas, E., 1995, “Measuring and Managing for Team Performance: Emerging Principles From Complex Environments,” Team Effectiveness and Decision Making in Organizations, 16, pp. 9–45.
58. Keyton, J., 1997, “Coding Communication in Decision-Making Groups,” Managing Group Life: Communicating in Decision-Making Groups, pp. 236–269.
59. Monge, P. R., and Contractor, N. S., 2003, Theories of Communication Networks, Oxford University Press, Oxford, UK.
60. Gyory, J. T., Soria Zurita, N. F., Martin, J., Balon, C., McComb, C., Kotovsky, K., and Cagan, J., 2022, “Human Versus Artificial Intelligence: A Data-Driven Approach to Real-Time Process Management During Complex Engineering Design,” ASME J. Mech. Des., 144(2), p. 021405.
61. Song, B., Gyory, J. T., Zhang, G., Soria Zurita, N. F., Stump, G., Martin, J., Miller, S., Balon, C., Yukish, M., McComb, C., and Cagan, J., 2022, “Decoding the Agility of Human-Artificial Intelligence Hybrid Teams in Complex Problem Solving,” Des. Stud., p. 101094.
62. Gibson, C. B., Zellmer-Bruhn, M. E., and Schwab, D. P., 2003, “Team Effectiveness in Multinational Organizations,” Group Organ. Manag., 28(4), pp. 444–474.
63. Schaefer, K. E., 2016, “Measuring Trust in Human Robot Interactions: Development of the ‘Trust Perception Scale-HRI’,” Robust Intelligence and Trust in Autonomous Systems, R. Mittu, D. Sofge, A. Wagner, and W. Lawless, eds., Springer, New York, pp. 191–218.
64. Wheelan, S. A., and Hochberger, J. M., 1996, “Validation Studies of the Group Development Questionnaire,” Small Group Res., 27(1), pp. 143–170.
65. Sio, U. N., Kotovsky, K., and Cagan, J., 2018, “Silence Is Golden: The Effect of Verbalization on Group Performance,” J. Exp. Psychol., 147(6), pp. 939–944.
66. Patrashkova-Volzdoska, R. R., McComb, S. A., Green, S. G., and Compton, W. D., 2003, “Examining a Curvilinear Relationship Between Communication Frequency and Team Performance in Cross-Functional Project Teams,” IEEE Trans. Eng. Manag., 50(3), pp. 262–269.
67. Mortensen, M., and Hinds, P. J., 2001, “Conflict and Shared Identity in Geographically Distributed Teams,” Int. J. Confl. Manag., 12(3), pp. 212–238.
68. Katsma, C., Amrit, C., van Hillegersberg, J., and Sikkel, K., 2013, “Can Agile Software Tools Bring the Benefits of a Task Board to Globally Distributed Teams?,” International Workshop on Global Sourcing of Information Technology and Business Processes, Val d'Isère, France, Mar. 11–14, Springer, Berlin/Heidelberg, pp. 163–179.
69. Kruger, J., and Dunning, D., 1999, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” J. Person. Soc. Psychol., 77(6), p. 1121.
70. Karpen, S. C., 2018, “The Social Psychology of Biased Self-Assessment,” Am. J. Pharm. Edu., 82(5), p. 6299.
71. De Jong, B. A., Dirks, K. T., and Gillespie, N., 2016, “Trust and Team Performance: A Meta-Analysis of Main Effects, Moderators, and Covariates,” J. Appl. Psychol., 101(8), p. 1134.
72. Edmondson, A., 1999, “Psychological Safety and Learning Behavior in Work Teams,” Admin. Sci. Quarter., 44(2), pp. 350–383.
73. Mesmer-Magnus, J. R., and DeChurch, L. A., 2009, “Information Sharing and Team Performance: A Meta-Analysis,” J. Appl. Psychol., 94(2), pp. 535–546.
74. Bear, J. B., and Woolley, A. W., 2011, “The Role of Gender in Team Collaboration and Performance,” Interdiscip. Sci. Rev., 36(2), pp. 146–153.
75. Gyory, J. T., Soria Zurita, N. F., Cagan, J., and McComb, C., 2022, “Comparing Nominal and Interacting Sub-Structured Teams in an Interdisciplinary Engineering Design Task,” International Design Engineering Technical Conferences, St. Louis, MO, Aug. 14–17, pp. 1–9.