Like the last entry, this entry is also derived from a paper I wrote for my master's work in the Technology, Innovation, and Education program at Harvard's Graduate School of Education. My advisor on this project was noted knowledge networks researcher Barry Fishman, visiting Harvard from the University of Michigan. Harvard Business School Publishing is about to launch the first two products in our new series of online business simulations and I wanted to investigate how we can ensure that this product line produces effective learning for our customers. This paper explores a framework for assessing effectiveness in online business simulations.
Problem Statement and Paper Structure
Harvard Business School Publishing (HBSP) has created a new product line of online business simulations. We know from faculty customer surveys that faculty are highly concerned with how easy the simulations are to use, administer, and debrief, and that they are equally concerned that their demanding, and increasingly technically literate, students rate the learning experience as engaging and dynamic. Hence these eLearning products are designed to be engaging, interactive, usable, and teachable.
We also know from faculty surveys that these types of learning tools are considered to be very effective. But what does the literature say about these kinds of claims? How can HBS Publishing know whether users of these simulations are truly learning? What counts as successful outcomes for the user? What is the relationship between instructional design and learning efficacy in these types of products? How can HBS Publishing evaluate these tools to gain confidence that they are designed properly to promote effective learning?
There is a long-standing and growing body of literature that reviews the merits of experiential learning in higher education settings. And while experiential learning and simulations are gaining market share in educational settings, claims of their effectiveness have been challenged; rigorous research that aims to show a correlation with learning outcomes has been lacking and is only just starting to emerge. (Gosen and Washbush, 2004)
This paper will:
- Provide some brief background information on experiential learning to set context;
- Outline the primary goals associated with the HBSP simulation initiative in an effort to set measurable benchmarks for assessment;
- Explore these goals in light of concepts of assessment derived from current literature and evaluate them in terms of the HBSP simulations;
- Provide an assessment framework that can be used to review online business simulations generally and might enhance the HBSP product line specifically.
Experiential Learning and Simulations
Experiential learning focuses on the students’ application of knowledge as an integral part of their ability both to internalize concepts and to apply that knowledge in real-world settings. It is best understood in direct contrast to what it is not: didactic or lecture-based education, which is seen as fundamentally passive, one-way, teacher-to-student instruction. Fundamentally, experiential education embodies a constructivist approach to learning in that it acknowledges that we constitute our own experiences and are active participants in our meaning-making. (Kegan, 1982)
The idea that experience is integral to the educational process is just one aspect of linking theory to practice. A related and critical aspect is the idea that experiences in the classroom can and should replicate experiences relevant to the world outside it. Situated cognition refers to the idea that knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used. To attempt to address knowledge outside of such context and culture is ineffective and inefficient:
Just as carpenters and cabinet makers use chisels differently, so physicists and engineers use mathematical formulae differently. Activity, concept, and culture are interdependent. No one can be totally understood without the other two. Learning must involve all three. Teaching methods often try to impart abstracted concepts as fixed, well-defined, independent entities that can be explored in prototypical examples and textbook exercises. But such exemplification cannot provide the important insights into either the culture or the authentic activities of members of that culture that learners need. (Brown, Collins, and Duguid, 1989)
Simulations are one of the most powerful tools for linking practice to the classroom, affording the ability to explore a situated context in a risk-free environment. Simulations involve building a dynamic model of a process or system and then performing what-if analysis to see how changes would affect the actual process. By mimicking the operation of a real system, you can understand that system better, explore alternative strategies, optimize performance, and train personnel – all at a fraction of the cost and time it would take to experiment with the real system. (Aldrich, 2005)
Simulations are most useful for learning about complex situations (incomplete, unreliable, or unavailable data), situations where the problems are unfamiliar, and situations where the cost of decision errors is likely to be high. They offer benefits in that they accelerate and compress time, providing foresight into a hazy future. (Dumblekar, 2004) Ultimately, game-type simulations are effective learning environments not because they are ‘fun’, but because they are immersive, require the player to make frequent, important decisions, have clear goals, and adapt to each player individually. (Van Eck, 2006)
When reviewing how simulations best provide this type of unique learning affordance, one is quickly confronted with the challenge of defining success. Complex simulations might have dozens of stakeholders, each holding unique metrics of success. Categorizing the goals associated with the simulation program helps to define objectives and set measures by which effectiveness can be gauged.
Clark Aldrich defines two aspects of a formal learning program: meeting certain program goals (such as low cost, ease of delivery, etc.), and increasing the capacity of students through certain learning goals (application of new content, mastery level achievement, etc). Together these deliver desired results for the sponsor -- measurable compliance or improvements measured against an organizational balanced scorecard, etc. (Aldrich, November 2007)
The original program goals of this product line were crafted by the HBSP Higher Education new product development team to meet specific needs vetted by higher education business educators (customers) via research, surveys and interviews:
Rigorous & Relevant
Each simulation topic was chosen based on a market need clearly identified by customer demand within that topic’s larger content discipline. Once relevance was thus established, subject matter experts were carefully selected in order to provide ‘brand-worthy’ delivery of content within the topic. This content had to conform to academically-rigorous standards of construction and presentation, including accompaniment by a robust Teaching Note that explained specific learning objectives and provided pedagogical tips for both classroom use and debrief. Learning objectives for the simulations reviewed in this paper are listed in the section corresponding to each simulation. It will be largely assumed for the purposes of this paper that the simulation topics are relevant and that the content presentation is rigorous.
Teachable & Usable
Usability is a key factor for effective eLearning products. And both the user and administrative interfaces need to be very intuitive and usable. A lack of usability on the user side can adversely affect engagement. And a difficult administrative interface hinders adoption by faculty. Administrators also have other requirements that ensure the simulation product is teachable, including a strong Teaching Note and the availability of a variety of administrative options that allow tailoring the experience to the classroom context. The idea of designing for optimal usability and teachability is not the goal of this exploration, but the extent to which usability hinders or enhances feedback and other components related to effectiveness and learning goals will be discussed.
Professional & Deliverable
The delivery structure for purchasing, configuring, administering, supporting and debriefing the simulation needs to be presented in an organized fashion. And ultimately the program delivery and integration within the classroom or homework context needs to be as seamless as possible. The professional and deliverable nature of the simulations will be assumed for the purposes here.
And an obvious program goal not stated formally in the HBSP product literature is a desire for wide adoption as evidenced by sales; the success of the simulation product line as a whole as well as for individual simulations specifically will be assessed in large measure by the adoption and re-adoption rates of HBSP customers over time. It is assumed, based on customer research, that if the stated program goals are met then adoption will follow.
As important as program goals are, they are at best secondary indicators of learning outcomes and may of course be completely unrelated to them. In our case we hope that these program goals do in fact represent a product that delivers specific learning objectives and hence realizes certain larger outcomes. This paper will not focus on how to assess the realization of the program goals specifically, although these goals are important to keep in mind: they provide the perspective through which the product line was designed, and they frame an environment of usability through which learning might best occur. Rather, it is the learning goals and resulting outcomes that should be the specific metric for assessing the efficacy of educational simulations.
There are two aspects to the learning goals of a simulation: the delivery of certain knowledge to the user, and their ability to extrapolate that knowledge for application outside the classroom.
Understanding of Course Content & Concepts
The specific learning objectives for three of the HBS Publishing simulations can be found in the Appendices section. These learning objectives represent certain tangible expected outcomes from the simulation experience. They are usually core content and concepts associated with the learning module or syllabus section within which the simulation is presented. They are indicators of the knowledge delivery to the student. Internalization of this knowledge would indicate success within the simulation environment.
These learning objectives themselves are similar to any academic learning objectives expected from other educational content formats. In other words, these same lists of learning objectives might be present in the teaching notes associated with business case studies, textbooks, etc., that covered material similar to the simulation.
Application of Concepts Outside the Classroom
Understanding course content and concepts is a reasonable expectation for course-based eLearning products. But there are larger and more lasting learning goals that the faculty simulation authors envision for, and expect from, these educational simulations. HBS Publishing’s faculty customers consider the learning from these types of tools to be transferable outside the classroom in a more powerful way than lectures and text-based learning. (Harvard Business School Publishing, 2006) How students engage with simulations, how they leverage the experience to master course content, and their ability to transfer larger concepts outside the classroom are primary simulation product line goals. What is needed is a framework through which to examine these aspects of expected learning outcomes.
Ultimately the primary desired result is the internalization by the user of the primary learning objectives (success within the simulation) in a manner that prepares that user, as efficiently and effectively as possible, to better manage related problems outside the classroom (success beyond the simulation). The program goals thus, we hope, shape an experience through which positive learning outcomes are met at both the content and application levels.
Effectiveness can thus be assessed by focusing on these same two primary areas: the simulation content and learning objectives themselves, and the application of that knowledge, informed by simulation learning, beyond the simulation environment. Assessment can then be targeted along these two main axes – success within vs. beyond the simulation – as derived from the two tracks of learning goals depicted in the diagram below.
Success Within the Simulation Experience
Success within the simulation requires that a number of factors be met:
- Students understand and can use the software (Usability);
- Student actions map to the activities designed to foster learning (learning ‘yield’). This variable is affected by both their willingness to initiate and sustain exploration within the simulation (Motivation and Engagement support) as well as the nature of the support and feedback system that scaffolds their environment (Exploration vs. Guidance);
- Students leave the simulation experience having internalized learning objectives in the most efficient manner possible (Overhead to Outcomes Ratio).
As stated earlier, usability in and of itself is not the focus of this inquiry (although it is clearly a stated program goal and hence major design ambition). However, it is one of the most critical elements of engagement. (Graetz, 2006) When efficient usability is achieved, the simulation succeeds or fails on the merit of its design and content. When usability fails, however, it alone can detract from student engagement, understanding, and ultimately the ability to learn.
Usability is thus a variable that must be controlled before other program and learning variables can be isolated and assessed. It should be the goal of formative assessment, implemented during the design and prototyping process, to vet the usability of the design. Harvard Business School Publishing’s post-test surveys during beta and field testing inquire specifically about a product’s usability, and specific usability tests are also employed. (Harvard Business School Publishing, 2007) Basic usability is then assumed for the purposes here, although any design revisions resulting from assessment necessitate a re-validation of the product’s usability.
Motivation and Engagement
One of the strongest-held beliefs among HBS Publishing customers regarding simulations is that they are “engaging” forms of learning for their students. (Harvard Business School Publishing, 2006) Motivation is critical to the success of any learning environment. Motivation sets the stage for cognitive engagement – it “leads to achievement by increasing the quality of cognitive engagement. That is, content understanding and skill capabilities are enhanced when students are committed to building knowledge and employing deeper learning strategies.” The literature findings in this section are based specifically on the insights of Blumenfeld et al. (2006), although in some cases I have reviewed and referenced their original sources.
Simulations and other experiential learning environments may require students to be more motivated than would traditional environments. (Blumenfeld, Soloway, Marx, and Krajcik, 1991) Blumenfeld et al. (2006) outlined four determinants of motivation: value, competence, relatedness, and autonomy.
Intrinsic value is influenced by interest in the topic as well as the enjoyment experienced when performing the task. But attempts to enhance interest can backfire and decrease learning – Brophy (1999) warns against “bells and whistles” and other seductive details that are highly interesting for students but may draw attention toward less relevant issues, potentially deflecting it away from key ideas. Ultimately, value refers to students’ perceptions of how tasks are related to their future goals and everyday life. (Blumenfeld, Kempler, and Krajcik, 2006)
Intuitively, most MBA students should readily perceive how a simulation is preparing them for their future, so value should start high. The emphasis on value is echoed in a 2001 National Research Council report entitled “Knowing What Students Know: The Science and Design of Educational Assessment,” which stated that “assessments need to examine how well students engage in communicative practices appropriate to a domain of knowledge and skill, what they understand about those practices, and how well they use the tools appropriate to that domain.”
In the HBSP simulations, a somewhat motivated audience can be assumed from the start. These students are studying business topics in order to enhance their management skills, and the value proposition of learning skills such as strategy (Pricing simulation), leadership (Everest simulation), and operational efficiency (Benihana simulation) is commonly understood. However, care is still taken to enhance and define the value for the user. This is primarily accomplished through the ‘cover story’ – the environment and scenario within which the simulation is situated. For the Universal Rental Car Pricing simulation, the cover story involves the user taking on the role of a District Manager in charge of rental car operations across three cities. For the Everest Leadership Team Simulation, the story involves working with a team of fellow mountaineers attempting to summit Mount Everest. And for the Benihana Service Management Operations simulation, the user is charged with experimenting with strategies aimed at increasing the efficiency and profits of a popular Japanese steakhouse. The “cover story” is thus a key element in providing a sense of value for the user.
Students' feeling of efficacy regarding their ability to succeed in a particular class or task has a positive influence on their effort, persistence, use of higher-level learning strategies, and choice of challenging activities. (Schunk and Pajares, 2002) Students with lower self-efficacy may choose easier tasks, or tasks they feel more confident about, to ensure success.
Students’ sense of competence is enhanced when teachers provide support through instruction in strategies, skills, and concept development. Scaffolding – or the tiering of feedback and instructions to guide a user incrementally through stages of learning – encourages self-efficacy through the process of modeling the thinking/learning process and breaking down tasks to best prepare students for activities.
For the HBSP simulations we wanted to provide a consistent framework for the product line containing certain key elements designed to enhance usability and user competence. Each simulation includes a ‘Prepare’ section containing a summary of the simulation, an outline of how to play it, and an overview video that walks the student through the interface. Additionally, a run archive is provided when possible so that students may return to previous simulation attempts to continue playing or review work. The design framework of the simulations, along with the components of that framework, thus helps to foster a sense of competence in users.
The Benihana simulation in particular offers an interesting design that can foster a sense of competency. The simulation is organized into a series of Challenges, each of which corresponds to a learning objective. The Challenges grow progressively more difficult, culminating in a ‘Design the Best Strategy’ option where the user synthesizes the learning and applies it as an ultimate strategy. A sketch of this gating structure follows.
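To make the scaffolding concrete, here is a minimal Python sketch of how such sequential challenge gating might be modeled. The challenge names and objectives are placeholders, not the actual Benihana content; this illustrates the pattern, not HBSP's implementation.

```python
from dataclasses import dataclass

@dataclass
class Challenge:
    title: str
    learning_objective: str
    completed: bool = False

# Placeholder challenges ordered by difficulty, ending in a capstone.
challenges = [
    Challenge("Challenge 1", "Apply the first core concept"),
    Challenge("Challenge 2", "Combine two operational levers"),
    Challenge("Design the Best Strategy", "Synthesize all prior learning"),
]

def next_unlocked(challenges):
    """Return the first incomplete challenge; later ones remain locked."""
    for challenge in challenges:
        if not challenge.completed:
            return challenge
    return None  # all challenges, including the capstone, are done
```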
Students’ need for relatedness (or belonging) is met when they have positive interactions with their peers and teachers. Feelings of belongingness are satisfied by teachers and peers through expressions of respect, caring, and interest in a student's well-being. (Davis, 2003) There is a social and professional aspect to this variable that is beyond the scope of product design and effectiveness. However, Blumenfeld et al. note that opportunities for collaboration with peers encourage feelings of relatedness. Collaboration, which will be explored later, is closely connected to this concept as well.
We know that even a very ‘human’ element such as relatedness can be replicated in online environments. Williams et al. (2006) reviewed how social guilds in massively multiplayer online role-playing games foster a sense of real connectedness among users. The challenge with the HBSP simulations is that the experience is designed to be a short one (45 – 90 minutes), so there is little time to foster real connections between users. However, components are introduced that can increase relatedness. The Pricing simulation includes a “High Score” list that allows users to enter a note on their strategy, which is then shown along with their score (cumulative profit) to all of their classmates. This feature was designed to foster a sense of competition, but the comments posted reveal that users also utilize it for informal communication and support.
Autonomy refers to the perception of a sense of agency, which occurs when students have the opportunity to make choices and to play a significant role in directing their own activity. (Blumenfeld, Kempler, and Krajcik, 2006) Teachers can support this by allowing students to make decisions about topics, the selection and planning of activities, and artifact development. In a sense, simulations inherently favor this quality by offering an exploratory learning environment that is always to some extent user-controlled. This idea also directly relates to the “Exploration vs. Guidance” trade-off that will be explored in more detail below.
One of the primary ways the HBSP simulations foster a sense of autonomy is through the ‘Dashboard’ interface element. As described earlier, the simulation framework includes a ‘Prepare’ section. But it also includes an ‘Analyze’ section, and most of the simulations include a Dashboard as a type of ‘home page’ for the Analyze section. While the overall superstructure of these simulations includes the need to enter decisions and advance the simulation through a series of rounds or turns, the individual analysis and exploration within any given round or turn is determined by the user. Where the user digs for more information, how s/he exports data for further analysis, and which tools are used to explore further are individual nonlinear decisions.
Additionally, even when collaboration is introduced (as described below in the section on Collaboration that details the Everest simulation), every student makes decisions. That is, every role on the team still has unique decisions to make, fostering a sense of autonomy and agency.
Blumenfeld et al next describe how certain persistent features of learning-sciences based environments also influence motivation and cognitive engagement. These features each pose unique challenges or affordances to the connectedness between student and environment represented by motivation and engagement. The features are: authenticity, inquiry, collaboration, and technology.
Authenticity is achieved by drawing connections to the real world, to students' everyday lives, and to practice in the discipline. (Newmann, Marks, and Gamoran, 1996) Later we’ll explore how this relates directly to the ability to transform and transfer learning. It is particularly challenging in today’s educational environment to make a single piece of content resonate with even a single class, given the growing diversity of the student body. (Oblinger, 2003) But authentic activity is the cornerstone of situated learning and is “the only way [students] gain access to the standpoint that enables practitioners to act meaningfully and purposefully.” (Brown, Collins, and Duguid, 1989) In part, the nature of the “cover story” can bring a link of authenticity that resonates with both the student and practitioner aspects of the user base.
Specifically, the data behind the cover story needs to resonate with users as realistic. Care is taken to find the right balance between providing realism in the data and simplifying it to the degree necessary to achieve the learning objectives without overwhelming the user with complexity. This balance is constantly revisited during development and testing, and users provide feedback when it is not optimal. Here’s an example from a High Scores list of the Pricing simulation during testing:
Students' sense of autonomy is enhanced when they have opportunities to decide how to collect, analyze, and interpret information. (Blumenfeld, Kempler, and Krajcik, 2006) Again, the exploratory nature of these business simulations works well for this affordance. The challenge is that students can be interested in the surface features of an investigation without necessarily being interested in the underlying content; that interest does not necessarily translate into cognitive engagement with the content. This trade-off is related to the autonomy variable reviewed earlier and will be explored in more detail in the “Exploration vs. Guidance” section.
As described above, the Dashboard and the nonlinear exploration aspect of the simulations in general are the primary enablers of inquiry in the HBSP simulations. The simulation authors have even at times wanted to assess paths of inquiry as indicators of student strategy and motives. Pricing simulation author John Hogan described the type of information that might be gleaned from examining paths of inquiry: “For example, a student that has the highest percentage of his/her clicks on market share data (relative to breakeven, profits, competitor pricing, etc.) could be inferred to have a strong competition orientation. Similarly, a person that looks primarily at the capacity utilization information might be more of a cost+ pricer.”
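The kind of inference Hogan describes could be prototyped very simply: tally where a student's clicks go and label the dominant orientation. This is a hedged sketch only; the category names and labels are assumptions drawn from his quote, not instrumented product logic.

```python
from collections import Counter

def infer_orientation(click_log):
    """click_log: list of data categories the student viewed, in order."""
    if not click_log:
        return "no data"
    top_category, _ = Counter(click_log).most_common(1)[0]
    labels = {
        "market_share": "competition orientation",
        "capacity_utilization": "cost-plus pricer",
    }
    return labels.get(top_category, "mixed/undetermined")

# A student who mostly checks market share reads as competition-oriented:
print(infer_orientation(["market_share", "profits", "market_share", "breakeven"]))
```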
Collaboration with peers encourages motivation and cognitive engagement. (Cohen, 1994) When students work together to obtain information, share and discuss ideas, exchange data and interpretations, and receive feedback on work, they feel jointly responsible for success and share in goal development and achievement. And because some members of a group may be more proficient in certain skills, or have more prior knowledge or different talents, the shared effort can diminish feelings of inadequacy. Collaboration can also benefit cognitive engagement as students are encouraged to explain, clarify, debate, and critique their ideas.
Studies of the social context of learning show that in a responsive social setting, learners can adopt the criteria for competence they see in others and then use this information to judge and perfect the adequacy of their own performance. Shared performance promotes a sense of goal orientation as learning becomes attuned to the constraints and resources of the environment. (Pellegrino et al., 2001)
Social settings are also paramount for situated learning, where salient aspects include collective problem-solving, the ability to learn collaborative skills, and the internalization of viewpoints of multiple roles: “Getting one person to be able to play all the roles entailed by authentic activity and to reflect productively upon his or her performance is one of the monumental tasks of education.” (Brown, Collins, and Duguid, 1989)
Challenges of leveraging this feature include the fact that characteristics of group composition – such as ability level, gender, cultural background, and language proficiency – can affect group productivity. Group members can tend toward ‘social loafing’, a diminishment of individual thoughtfulness encouraged by reliance on others. Competition, a popular tactic for promoting effort and participation, can lead to a focus on ‘winning’ rather than on the inherent value of learning and developing understanding. As described in the section on Autonomy, the Everest simulation mitigates the risk of social loafing by giving every role its own decisions to make.
The HBSP simulations foster collaboration in a number of ways. Teaching notes for even the single-player simulations suggest that a viable instructional model is to have students form teams to play the simulation (that is, have multiple players collaborate to inform the decisions of a single simulation ‘player’).
But when appropriate, the entire simulation is designed to be team-based, with each team playing the simulation synchronously. That is the case with the Everest simulation. Since the focus is team dynamics and team leadership, it was critical to involve actual teamwork as an element of the game. There are both pedagogical and interface features of the simulation designed to foster collaboration.
From a pedagogical perspective this particular simulation is designed specifically to challenge team members to collaborate and share information. This is an excerpt from the faculty Teaching Note for the simulation:
The teaching points for the exercise focus on how teams make complex decisions when critical information is distributed unevenly among members and when members have partially conflicting goals. We refer to these conditions as asymmetrical information and asymmetrical interests, respectively. The former condition is simulated in the exercise by providing critical data to individuals, who must share it if the team is to do well. Some information is shared by all; other information is presented only to individual team members. The asymmetrical distribution of information creates, in effect, an information sharing problem that resembles the challenge faced by subjects in the famous "hidden profile" experiments designed and conducted by Gary Stasser and colleagues. The simulation also contains a chat feature that allows team members to share information selectively. The experience thus allows team members to discover the well-documented finding that information privately held by individual team members, rather than shared by all, is often ignored or downplayed in team decisions, to the detriment of team performance. The simulation creates this condition and allows team members to figure out whether and when to share what they know. (Roberto and Edmondson, 2007)
So in this particular case collaboration is an overt learning objective within the simulation. But this only serves to highlight the means by which collaboration is facilitated within the simulation design. From an interface perspective there are a number of features related to collaboration. As mentioned in the teaching note excerpt, the most visible is the addition of Chat functionality: team members can communicate selectively with individual team members or with their entire team, as shown below.
There are additional instances of collaboration in the simulation that are enabled by a combination of pedagogical and design goals. The decisions required each round necessitate collaborative decision-making because each team member holds only a portion of the information required to complete the task successfully. While this tactic is specifically employed here to test students on information sharing, it could be employed in other simulations as a way to foster collaboration during team decision-making. A toy illustration of this information structure follows.
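The "hidden profile" structure the teaching note describes can be illustrated with a toy data layout: some facts are visible to everyone, others only to one role, and the full picture exists only if members share. The role names and facts here are invented for illustration.

```python
shared_info = {"forecast": "storm possible later in the climb"}

private_info = {
    "role_a": {"supplies": "one member's reserves are low"},
    "role_b": {"pace": "the team can move faster than planned"},
}

def briefing(role):
    """Each member sees the shared facts plus only their own private facts."""
    return {**shared_info, **private_info.get(role, {})}

# Only by communicating (e.g., via the chat feature) can the team
# reconstruct the complete picture:
full_picture = dict(shared_info)
for facts in private_info.values():
    full_picture.update(facts)
```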
Blumenfeld et al. see technology as having the motivational benefit of a “hook” that gets students to participate, thus sustaining interest and promoting cognitive engagement. (Blumenfeld, Kempler, and Krajcik, 2006) Benefits include the ability to build and represent knowledge in multiple ways as well as the enhancement of feelings of autonomy via greater choice in what and how to explore.
These authors also specifically bring up the fact that technology (such as “intelligent tutors”) can diagnose student difficulties and provide immediate feedback about their progress, thereby promoting perceptions of efficacy. This is a critical component of assessing student behavior and performance. The National Research Council explores this topic at length:
Individuals acquire a skill much more rapidly if they receive feedback about the correctness of what they have done. If incorrect, they need to know the nature of their mistake. It was demonstrated long ago that practice without feedback produces little learning (Thorndike, 1931). One of the persistent dilemmas in education is that students often spend time practicing incorrect skills with little or no feedback. Furthermore, the feedback they ultimately receive is often neither timely nor informative. For the less capable student, unguided practice…can be practice in doing tasks incorrectly….One of the most important roles for assessment is the provision of timely and informative feedback to students during instruction and learning so that their practice of a skill and its subsequent acquisition will be effective and efficient. (Pellegrino et al., 2001)
Intelligent tutors are then described as “systems that assess components of students’ knowledge while they are working on problems on line. When a student makes a mistake, the system provides advice and remediation to correct the error. Studies suggest that when individuals work with these tutors, there is a relatively direct relationship between the assessment of student learning and the research-based model of student thinking. On average, students learn more with the system than with traditional instruction (Koedinger and Anderson, 1999).” This concept will also be explored in more detail in the Exploration vs. Guidance section.
Feedback is a critical element of the HBSP simulations and can take a number of forms. The Dashboards themselves provide a form of feedback as do the drilldown charts, graphs, and tabular data within the Analyze sections. Each time the simulation advances these elements are updated based on the most recent decision input. In this way they themselves form the basis of feedback data for the user. Here are some sample screens from the Pricing simulation showing cumulative data over the course of several turns.
The same is true for the Benihana simulation which populates areas of the Dashboard interface based on user inputs:
The Benihana simulation also offers a unique instance of scaffolding for HBSP simulations. The idea of escalating Challenges associated with learning objectives and culminating in a capstone challenge is a very direct way of leveraging the technology to drive toward learning outcomes.
With team-based simulations, feedback becomes both more critical and more challenging. The Everest simulation utilizes a number of feedback elements. As with the other simulations, it contains a dashboard that gives some indicators.
However, information is also provided to users each round which includes feedback on recent activity:
Perhaps the single greatest feedback mechanism for faculty is the set of surveys built into the Everest simulation. At two points during the simulation the students are asked to complete surveys assessing their team dynamics and their leader’s style and effectiveness. This provides a rich pool of data for faculty to assess.
Feedback doesn’t necessarily have to relate to learning outcomes – it can also serve usability. Here a piece of feedback is provided to the Leader of the Everest expedition if they try to advance the simulation before their team has submitted all decisions:
By ensuring that motivational aspects are met, we can transform situational interest and participation into cognitive engagement. Some key takeaways to consider when assessing motivation and engagement in simulations include the following:
• Be clear about how the simulation is going to relate to students’ future goals and everyday lives (value);
• Provide scaffolding for the learner through the experience and leverage the software to provide guided assistance and feedback (competence and technology);
• Enable the student’s sense of agency by empowering them to drive the exploration process (autonomy and inquiry);
• Relate the experience to real-world concepts (authenticity);
• Create opportunities for student teamwork and peer learning (relatedness and collaboration).
Exploration vs. Guidance
We know from the previous section on motivation that the elements of autonomy and inquiry are powerful motivational forces for cognitive engagement. Part of the learning proposition of exploratory or immersive learning environments is the self-directed aspect of the experience. Simulations are complex in comparison to simpler, static learning environments precisely because the user is afforded flexibility and control over certain aspects of the system. But free-form exploration without any guidance isn’t an educational simulation – it’s merely a virtual environment. The virtual world of Second Life is a prime example: without any inherent goals or ‘mission’, this virtual world can host academic activities but is not inherently educational in the sense of providing discrete learning objectives.
So some structure is needed in order to sustain direct academic value. However, if the environment is too prescriptive and restrictive, the simulation loses its exploratory nature. Finding the right balance between an ‘exploratory’ and a ‘guided’ experience is difficult for any one student; in reality, the right balance is likely unique to every student in any classroom. To find learning value in an environment that is to any substantial degree “exploratory” – meaning open-ended and user-driven to some extent – requires a self-directed learner. (Grow, 1991) Whereas dependent learners require an authority or coaching model of instruction, increasingly interested and involved students require motivators, guides, and facilitators, culminating in true self-directed learners who require only delegators. Self-direction can be learned – in fact, scaffolding learners toward increasing self-direction is in many ways the aim of education itself.
As mentioned in the motivation section, technology such as intelligent tutors can be one such scaffolding mechanism for increasing self-direction; guidance systems raise the level of confidence with which you can expose a user to exploratory environments. As such, they raise the bar on both the guidance and exploration aspects of immersive learning environments and simulations. James Ong writes that in complex simulations with interconnected cause-and-effect relationships, it is often hard for students to figure out how their actions led to various outcomes. Even if the desired goals were achieved, it is possible that the student’s actions were not directly correlated with the outcome, or that positive outcomes were produced by factors other than direct user action. Thus, feedback based solely on outcomes can be misleading and hinder training.
Ong goes on to define Intelligent Tutoring Systems (ITS) as:
…software programs that encode and apply the subject matter and teaching expertise of instructors to provide the benefits of one-on-one tutoring in an automated way. During each scenario, the ITS evaluates the student's actions to assess knowledge and skills. The ITS can provide hints during the exercise, either proactively or on demand. After each scenario, the ITS can present detailed feedback that identifies the student's strengths and weaknesses and select appropriate next exercises that address the student's specific learning needs. (Ong, 2007)
Research shows that, as with human tutoring, ITS significantly and positively affect learning outcomes for students. By assessing student actions and interpreting cues from those actions (“observable events and states”), the software can then try to map the knowledge and skills the student used to assess the situation and make decisions (“student mental actions and states”). As we’ll see in the next section on success “beyond” the simulation, this theory dovetails with other conceptual frameworks that help us understand how simulations transfer learning outside the classroom. But for our specific purpose here, an ITS can be used to provide feedback on user actions within the simulation – the actions that help determine whether the simulation is even being used correctly and hence is providing the cause-effect dynamic necessary to impart the learning objectives. Such feedback provides the capability for monitoring, and people who actively monitor their current understanding are more likely to take active steps to improve their learning. (Bransford and Schwartz, 2001)
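As a hedged illustration of the ITS pattern Ong describes – observe actions, infer a gap, offer a hint – consider this rule-based sketch. The event names and hint text are invented; a real ITS encodes a research-based model of student thinking rather than a couple of hand-written rules.

```python
def its_hint(observed_events):
    """observed_events: list of event names logged during a simulation run."""
    raised_price = "raise_price" in observed_events
    checked_rivals = "view_competitor_prices" in observed_events
    # Rule: a price change without checking competitors suggests the student
    # is ignoring the competitive dynamic the scenario is meant to teach.
    if raised_price and not checked_rivals:
        return "Hint: before changing price, review what competitors charge."
    return None  # no intervention needed

print(its_hint(["view_demand", "raise_price"]))
```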
With the HBSP simulations this issue is addressed by the overall type of simulation we are building. Our simulations are turn-based, meaning that users are presented with a cycle within the environment: they assess an environment of data, input decisions based on that data, the simulation ‘advances’ a turn, and they then re-assess the environment based on changes effected in part by their decisions. This overall process provides a de facto structure that balances exploration and guidance. By requiring decisions and turns, the simulation retains enough structure to provide valuable learning toward specific learning goals. But by largely letting students explore at their own pace during each ‘pre-decision’ phase of analysis, some inquiry remains autonomous and self-guided. The cycle is sketched below.
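The turn-based cycle just described can be expressed as a simple loop. This is a minimal sketch under invented names (SimState, render_dashboard, advance), not the HBSP engine: assess the data, input decisions, advance a turn, re-assess.

```python
from dataclasses import dataclass

@dataclass
class SimState:
    profit: float = 0.0

    def render_dashboard(self):
        return {"cumulative_profit": self.profit}

    def advance(self, decisions):
        # Toy model: profit responds to the price decision. A real engine
        # would compute demand, market share, utilization, and so on.
        return SimState(self.profit + decisions.get("price", 0) * 10)

def run_simulation(state, num_turns, get_decisions):
    history = []
    for turn in range(num_turns):
        view = state.render_dashboard()   # student assesses the environment
        decisions = get_decisions(view)   # self-paced analysis, then inputs
        state = state.advance(decisions)  # the simulation 'advances' a turn
        history.append((turn, state.profit))
    return history  # feeds the run archive and the faculty debrief

history = run_simulation(SimState(), num_turns=3, get_decisions=lambda v: {"price": 42})
```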
Clearly some scaffolded feedback system is necessary to achieve the right balance in the simulated environment. Some key takeaways to consider when assessing the balance of exploration and guidance in simulations include the following:
• Acknowledging that self-direction is a necessary component for success in exploratory simulation environments;
• Building in scaffolded feedback mechanisms (such as ITS) to guide users and provide a contextual understanding of their actions within the system, which thereby increases their ability to direct their own inquiry with confidence.
Overhead to Outcomes Ratio
Ultimately, success with a simulated environment needs to be assessed on the basis of whether it provided a positive “Learning ROI,” or return on investment. Simulations inherently require more overhead than other, more traditional pedagogical methods, and this administrative overhead is a primary reason faculty abandon simulations as well as a main barrier to adoption. (Faria and Wellington, 2004) The administrative overhead is worth it if the experience provides learning that was otherwise unattainable, shortens the teaching time for the content, and so on.
One critical item a faculty member needs in order to determine simulation effectiveness is an intuitive presentation of student data from the simulation once it is complete. In the HBSP simulations the goal is to create administrative interfaces that both provide faculty with ‘debrief-ready’ data and convey a sense of how the students did in achieving the learning objectives.
The Pricing simulation provides student performance data at an aggregate level; it also provides the ability for the faculty member to drill down into an individual run for any given student. Student data is presented as a histogram as well, so the faculty member can assess the class as a whole. A sketch of this instructor view follows.
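Here is a text-mode sketch of that instructor view: a class-level aggregate plus a histogram of cumulative profit. The scores and bin edges are fabricated sample data, not product output.

```python
import statistics

scores = {"student_1": 310_000, "student_2": 275_000,
          "student_3": 455_000, "student_4": 180_000}

print("class mean profit:", statistics.mean(scores.values()))

# One row per profit bucket; each '#' is one student.
bins = [0, 200_000, 300_000, 400_000, 500_000]
for lo, hi in zip(bins, bins[1:]):
    n = sum(1 for s in scores.values() if lo <= s < hi)
    print(f"{lo:>9,} - {hi:<9,} {'#' * n}")
```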
The Everest simulation surveys are mapped to psychological constructs of team dynamics. Those constructs are then plotted against team performance data to provide performance matrices that facilitate debrief and give insight into student performance. A sketch of this mapping follows.
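And a sketch of the Everest debrief mapping: average each team's survey items into a construct score, then pair it with team performance so each team becomes one point on the debrief plot. The item values, the 1-7 scale, and the performance measure are all assumptions for illustration.

```python
survey_items = {  # team -> responses to one construct's items (1-7 scale)
    "team_a": [6, 5, 7, 6],
    "team_b": [3, 4, 2, 3],
}
performance = {"team_a": 0.86, "team_b": 0.54}  # e.g., share of goals achieved

debrief_matrix = [
    (team, sum(items) / len(items), performance[team])
    for team, items in survey_items.items()
]
# Each tuple (team, construct score, performance) is one plotted point.
print(debrief_matrix)
```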
This is a rather straightforward variable to assess. The key takeaway here would be reviewing the overall “was it worth it?” factor that faculty consider after reviewing and piloting a new pedagogical technique.
Success Beyond the Simulation Experience
Success beyond the simulation requires that:
- Students adapt the learning to new situations (Transferability)
- Students’ post-simulation behavior exhibits evidence of the advancement of knowledge constructs and skills (Knowledge Constructs of Simulations; Actions, Middle Skills, and Big Skills)
Transferability – the ability to extrapolate high-level concepts from the classroom and apply them in a related form outside the classroom – is in many ways the ‘holy grail’ of higher education simulations. The goal is not just “replicative” knowledge (mere recall of what was learned in the classroom) but “applicative” knowledge – applying the knowledge in order to solve new problems. For years the claim has been made, but many have challenged the validity of the research used to substantiate it. (Gosen and Washbush, 2004) The nature of how transfer is defined, however, is now being re-considered.
Bransford and Schwartz describe two views of transfer. (Bransford and Schwartz, 2001) These views will be explored along with supporting sources originally cited in the same Bransford piece.
Sequestered Problem Solving
Sequestered Problem Solving (SPS) has been the dominant methodology – it asks whether people can apply something they have learned to a new problem or situation. It is termed SPS because the subjects in transfer experiments are sequestered during the actual tests of transfer. The assumed measure of success is the ability to directly apply one's previous learning to a new setting or problem (the “Direct Application” or DA theory of transfer).
Preparation for Future Learning
The alternative is a view that acknowledges the validity of these perspectives but broadens the conception of transfer by including an emphasis on people's "preparation for future learning" (PFL). Here the focus shifts to assessments of people's abilities to learn in knowledge-rich environments. The authors point out that when organizations hire new employees they don't expect them to have learned everything they need for successful adaptation. Rather, they want people who can learn, and they expect them to make use of available resources to facilitate their own learning.
This idea is borne out by business leaders themselves. Harvard Business School Professor David Garvin recounted this anecdote:
Several years ago [a prominent CEO] came to Harvard Business School. He had held the post for a year then and was in HBS for [an executive education] program. He was asked why he got the job. Was it because of the experience or because of what he knew? He said that you don't get this kind of job for what you know. You get this job for how fast you can learn. That justifies the process of job rotation. That is the measure of success; that is the litmus test on how fast the organization is learning and how one creates a learning organization. (Naithani and Mahanta, 2007)
The authors believe that the better prepared one is for future learning, the greater the transfer (in terms of speed and/or quality of new learning.) They cite Broudy’s arguments about different types of knowing, stating that we must go beyond “knowing that” (replicative knowledge) and “knowing how” (applicative knowledge) that jointly constitute SPS theories, and also consider “knowing with” – the idea that people “know with” their previously acquired concepts and experiences. The educated person uses their school knowledge to think, perceive, and judge – even if they cannot recall that school knowledge explicitly. (Broudy, 1977)
This is related to Aldrich’s “Learning to Know” concept cited earlier. And the idea has ample support in the literature. Whereas traditional behaviorist approaches focused on how much knowledge someone has, more contemporary cognitive theory also emphasizes what type of knowledge someone has. (Pellegrino et al., 2001) This perspective suggests that students might be best off generating their own ideas about phenomena and then contrasting their own thinking with that of others, including experts in the area.
Also, adapting to new situations (transfer) often involves 'letting go' of previously-held ideas and behaviors. Bransford et al. note that this is very different from the Direct Application concept, where transfer is achieved if a behavior is merely repeated in a new situation. “Educational environments designed from a PFL perspective emphasize the importance of encouraging attitudes and habits of mind that prepare people to resist making old responses by simply assimilating new info to their existing concepts or schemas. Instead, effective learners learn to look critically at their current knowledge and beliefs.” (Bransford and Schwartz, 2001)
The very act of allowing multiple scenarios within simulations should afford an environment where previously-held schemas can be tested and either validated or challenged, and even discarded in the face of new information and understanding. Game play in general, from which these educational simulations derive some measure of their form, rests on a foundation of trial-and-error. Scot Osterweil, Education Arcade Creative Director at the MIT Comparative Media Studies Program, cites the “Four freedoms of game play” as: freedom to fail, freedom to experience, freedom of effort, and freedom to try on new identities. (Osterweil, 2007) Clark Aldrich also talks about skill development requiring participants to experience cycles of frustration and resolution. (Aldrich, November 2007)
Finally, there is one additional element that affects a propensity for future learning. People's perspectives on the givens of a situation depend on what they have at their disposal to know with. Thus, the individual's knowledge activity constitutes the situation. (Broudy, 1977) Learners actively construct their understanding by trying to connect new information with their prior knowledge. (Pellegrino et al., 2001)
Expectations for learning beyond the simulation are not currently codified in the HBSP simulations. This is an area for future exploration and definition by the simulation authoring and development team.
The ability to transfer knowledge is a critical goal of educational simulations. Some key takeaways to consider when assessing transfer would then include the following:
• Providing a forum by which to assess a user’s capacity to “know with” their previously acquired concepts and experiences, recognizing that transfer may take the form of preparation for future learning;
• Allowing users to generate their own ideas and then providing contrast to those ideas via multiple scenario runs, feedback from peers or authorities, etc.;
• Encouraging users to state and reflect on their previously-held ideas so that they might revise, and let go of, those ideas when appropriate.
Knowledge Constructs of Simulations
Clark Aldrich describes four “sweet spots” of simulations – major knowledge constructs that are as natural to simulations as internal monologues and timelines are to books. These constructs are situational awareness, understanding of actions, awareness of patterns over time, and conceptual dead reckoning. (Aldrich, November 2007)
Aldrich defines situational awareness as what experts see when they come to a scene that others don’t see. It is the ability to filter out certain details and highlight and extrapolate others. Inherently this idea acknowledges that different people with different domain expertise bring different situational awareness to the same situation.
The National Research Council continues the thought:
If knowledge is to be transferred successfully, practice and feedback need to take a certain form. Learners must develop an understanding of when (under what conditions) it is appropriate to apply what they have learned. Recognition plays an important role here. Indeed, one of the major differences between novices and experts is that experts can recognize novel situations as minor variants of situations for which they already know how to apply strong methods.
The report continues:
What distinguishes expert from novice performers is not simply general mental abilities, such as memory or fluid intelligence, or general problem-solving strategies. Experts have acquired extensive stores of knowledge and skill in a particular domain. But perhaps most significant, their minds have organized this knowledge in ways that make it more retrievable and useful. Most important, they have efficiently coded and organized this information into well-connected schemas. These methods of encoding and organizing help experts interpret new information and notice features and meaningful patterns of information that might be overlooked by less competent learners. These schemas also enable experts, when confronted with a problem, to retrieve the relevant aspects of their knowledge. Teachers should thus place more emphasis on the conditions for applying the facts or procedures being taught, and assessment should address whether students know when, where, and how to use their knowledge.
Clearly the links to transfer and prior experience, and to situated learning, are nicely exemplified when looking at the ‘expert’ as a benchmark for the application of knowledge. Aldrich continues with some other constructs below – each echoed in the report above – that allow the expert vantage point to be leveraged for understanding the power of simulation learning.
Understanding of Actions
What do experts see as viable options, and trade-offs of each? How and when should one calibrate responses? Understanding when to scramble (in the positive sense) or triage (in the negative sense) is part of learning. This means using a mixture of reflex and practiced tactics in an attempt to get to a better, more strategic, situation.
Awareness of Patterns Over Time
How and why do things play out? What are the small steps now that can have a big impact later? Part of wisdom includes awareness of patterns and of seeing, in Aldrich's words, “where the puck will be, not where it is now.” The ability to see patterns is built in part on experience, or prior knowledge. People generally strive to interpret situations so that they can apply ‘schemas’ – previously learned and somewhat specialized techniques for organizing knowledge in memory in ways that are useful for solving problems. (Pellegrino et al., 2001) Schemas help people interpret complex data by weaving them into sensible patterns. Experts may have more schemas at their disposal as well as an ability to understand how and when to best apply them.
Conceptual ‘Dead Reckoning’
Conceptual ‘dead reckoning’ is understanding the opportunities, committing to a vision, and then navigating towards it. That navigation is in fact the key to this construct (the term having been borrowed from aviation and orienteering). It is achieved through a series of steps that themselves form a framework or schema for the performer. The first step is imagining the destination (in learning, a goal) and then creating a type of mental vector between the current and destination locations. The user can then make a series of short-term decisions as they reconcile their location against the vector. Aldrich states that “to understand the behavior of people in challenging situations, it is critical to understand what is the conceptual map that the leader sees, and then what is their strategy for identifying a destination.”
These larger constructs resonate with the HBSP simulation faculty authors. The key consideration when assessing knowledge constructs is whether the simulation contributed to a larger frame of perspective for the learner:
• Did it contribute to their situational awareness?
• Did it contribute to their understanding of tradeoffs and actions?
• Did it contribute to their awareness of patterns over time?
• Did it contribute to their ability to use conceptual ‘dead reckoning’ to navigate toward a vision?
It is obviously difficult to assess these larger conceptual constructs. Fortunately there are skillsets that can be considered as bridges between tactical actions and these higher-level strategic concepts.
Actions, Middle Skills, and Big Skills
Aldrich defines these skills as Big Skills, Middle Skills, and Actions. (Aldrich, November 2007)
Big Skills are the most valued non-technical skills a person can have, including leadership, stewardship, communication, and relationship management. These are what would commonly be referred to as organizational or soft skills, and they involve improvisation and a knowledge of systems, not just processes. He describes their qualities as being simultaneously relevant across individuals/teams, work/family, etc. But most critically, he states that an intellectual awareness of these skills is insufficient – they must be practiced. Practicing them involves the application of Middle Skills (see below), usually with the assistance of a coach, and it also requires performers to experience cycles of frustration and resolution. Big Skills are most often assessed via 360-degree feedback instruments.
Middle Skills are skills such as gathering evidence, budgeting, and directing people. They are the layer between Actions and Big Skills. They are, in fact, the building blocks of Big Skills. They require finesse and calibration and are challenging to apply appropriately.
Actions are what a person in an experience actually does at the most tactical level. In simulations they are accomplished by basic inputs and are almost always available only in context. Actions can be done in the context of activities (the action is the what, the activity is the why). Actions in combination enable Middle Skills, which in turn enable Big Skills.
The combination and logical progression of these skills is what can be used to achieve the higher-level knowledge constructs described earlier. In some cases there is a direct mapping: Actions are what a performer uses to navigate during conceptual ‘dead reckoning’, for instance. In other cases the cause-and-effect may be more gradual, with Actions in combination creating Middle Skills, which over time enable Big Skills, and so on.
While these skills are likewise not identified overtly in the HBSP simulations, they offer a promising route for making links to larger constructs. Actions and Middle Skills are identifiable in the simulations, and hence mapping those specifically to larger constructs should be an achievable exercise.
This mapping of tactical simulation inputs to skills, and then to larger skills, should be possible for simulation designers (a toy sketch of such a mapping follows this list). Some key takeaways to consider when assessing skillsets might then include:
• Visualizing and documenting the higher level Big Skills and knowledge constructs that are desired as key takeaways;
• Attempting to map the trajectory between simulation-level Actions and the Middle and Big Skills derived out of those actions.
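As referenced above, here is a toy sketch of documenting such a trajectory as simple data. The action and skill names below are hypothetical examples in the spirit of Aldrich's categories, not mappings taken from an actual HBSP design.

```python
# A purely illustrative way to document an Actions -> Middle Skills ->
# Big Skills trajectory as data. All names are hypothetical examples.

ACTION_TO_MIDDLE = {
    "set_weekend_price": "budgeting",
    "review_demand_report": "gathering evidence",
    "assign_team_task": "directing people",
}

MIDDLE_TO_BIG = {
    "budgeting": "stewardship",
    "gathering evidence": "communication",
    "directing people": "leadership",
}

def trace_skill_path(action):
    """Return the Action -> Middle Skill -> Big Skill chain for one input."""
    middle = ACTION_TO_MIDDLE[action]
    return action, middle, MIDDLE_TO_BIG[middle]

# Which larger constructs does a single tactical input feed?
print(trace_skill_path("set_weekend_price"))
# -> ('set_weekend_price', 'budgeting', 'stewardship')
```

Even a simple table like this forces designers to state which Big Skills a given simulation input is supposed to serve, which is precisely the mapping the takeaways above call for.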
A Framework for Assessing Effectiveness
An Assessment Checklist
The combination of the key takeaways from each of the assessment areas outlined above can help frame a checklist to use when examining the assessment possibilities for individual products or activities. Here are the specific takeaways recapped in order:
Success Within the Simulation
• Usability
o Test to ensure a usable product design
• Motivation and Engagement
o Be clear about how the simulation is going to relate to students’ future goals and everyday lives (value)
o Provide scaffolding for the learner through the experience and leverage the software to provide guided assistance and feedback (competence and technology)
o Enable the student’s sense of agency by empowering them to drive the exploration process (autonomy and inquiry)
o Relate the experience to real-world concepts (authenticity)
o Create opportunities for student teamwork and peer learning (relatedness and collaboration)
• Exploration vs. Guidance
o Acknowledge that self-direction is a necessary component for success in exploratory simulation environments
o Build in scaffolded feedback mechanisms (such as ITS) to guide users and provide a contextual understanding of their actions within the system, thereby increasing their ability to direct their own inquiry with confidence
• Overhead to Outcomes
o Review the overall “was it worth it?” factor regarding use of the simulation format
Success Beyond the Simulation
• Transfer
o Provide a forum by which to assess a user’s capacity to “know with” their previously acquired concepts and experiences, in an effort to accept that the transfer may have provided a preparation for future learning
o Allow users to generate their own ideas and then provide contrast to those ideas via multiple scenario runs, feedback from peers or authorities, etc.
o Encourage users to state and reflect on their previously-held ideas so that they might revise them, or let them go, when appropriate
• Knowledge Constructs --> Big Skills --> Middle Skills --> Actions
o Visualize and document the links between actions, middle skills, and big skills that connect specific simulation activities to the larger knowledge constructs that are desired
This checklist can help form the basis for framing recommendations and for understanding when best to evaluate and introduce assessment items. It provides a useful collection of data points for assessing the effectiveness of educational simulations, organized intuitively by theoretical topic. But a necessary next step is arranging these data points within the practical context of the simulation design and development process. Understanding when to apply these principles for maximum advantage is critical to realizing the full value of the checklist items.
Ultimately the goal is to ensure that there is a link between success within the simulation and success beyond the simulation. Understanding that relationship and creating explicit expectations around how core concepts relate to goals of applied knowledge is a key step. Simulation designers and faculty authors should place more emphasis on the conditions for applying the facts or procedures being taught, and assessment should address whether students know when, where, and how to use their knowledge. (Pellegrino et al., 2001)
Checklist Items by Development Phase
Define program goals. These inform every aspect of the product development and project process and may introduce tradeoffs outside of the academic elements of the simulation. Be clear about where those tradeoffs exist and what the priorities are in case of conflicts.
Define learning goals. These include not just the learning objectives related to the core content and concepts, but also the larger learning goals for the application of knowledge outside the simulation. Draw specific maps between the actions within the simulation and the larger constructs expected to be enhanced beyond the simulation.
Define the audience. Be clear about the value proposition: how the simulation is going to relate to students’ future goals and everyday lives. Select a topical ‘cover story’ with real-world concepts that resonates with the users to create authenticity. Identify previously-held knowledge that you anticipate might impact the user’s experience.
Envision a design based on user autonomy and inquiry. Enable the student’s sense of agency by empowering them to drive the exploration process. Create opportunities for student teamwork and peer learning in order to foster relatedness and collaboration.
Utilize the technology to maximum benefit for the user. Develop scaffolding for the learner through the experience and leverage the software to enhance user competency via guided assistance and feedback. Create opportunities for users to identify and capture previously acquired knowledge and experiences for consideration and reflection later on in the simulation environment.
Constantly revisit the conceptualization criteria to ensure that they have not been diluted during implementation. These criteria are the blueprints by which the development should be guided. Are the learning and program goals intact? Does the experience resonate with the user audience as originally intended? Are there affordances for collaboration and peer learning?
It is important to note that testing need not, and should not, wait until the simulation is complete. Formative evaluations during an iterative development process will produce the greatest results, with design and development continually informed by testing and refined through multiple prototypes and iterations.
Test for core content and concepts. To some degree, simple knowledge acquisition can be assessed easily, either within the simulation environment (if so, make sure there are specific designs around how to capture and present this information) or via a post-play survey.
Test for usability of design. Ensure that interface challenges do not create unexpected consequences or diversions from the learning goals. There are standard usability test guidelines for reviewing how users interact with software and extracting key issues from their experiences. These usability tests can also help reveal where the experience falls on the exploration vs. guidance spectrum. Usability “click paths” can be monitored and reviewed to determine how users move through the system, what help and navigation features they avail themselves of, and so on.
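As one hedged illustration of the click-path idea, the sketch below assumes the simulation platform can export a simple per-user event log; the log format and screen names are hypothetical, not an actual HBSP export format.

```python
# A minimal sketch of click-path review over a hypothetical event log.
from collections import Counter

# Hypothetical exported events: (user_id, screen_or_feature), in order.
events = [
    ("u1", "dashboard"), ("u1", "help"), ("u1", "price_screen"),
    ("u2", "dashboard"), ("u2", "price_screen"), ("u2", "price_screen"),
]

# Reconstruct each user's path through the system.
paths = {}
for user, screen in events:
    paths.setdefault(user, []).append(screen)

# How often do users avail themselves of particular features (e.g., help)?
feature_counts = Counter(screen for _, screen in events)
print(paths)                   # per-user click paths
print(feature_counts["help"])  # -> 1
```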
Validate the motivational aspects of the simulation. Did users feel a connection to the subject matter? Did they understand why it was relevant for them? Did they feel a sense of agency with regard to navigating and creating their own unique experience? Did they have opportunities for collaboration and/or peer learning? Post-play surveys can be used to gather this information from users.
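As a minimal sketch of how such post-play responses might be tallied against the motivational categories in the checklist above, the survey items and the 1-5 rating scale below are invented for illustration.

```python
# Score hypothetical post-play survey responses by motivational subscale.
# Items and the 1-5 Likert scale are invented; map real items as needed.

SUBSCALES = {
    "value":       ["I saw how this relates to my goals"],
    "autonomy":    ["I felt I was driving the exploration"],
    "relatedness": ["I had opportunities to learn with peers"],
}

def subscale_means(responses):
    """Average the 1-5 ratings for each subscale across all respondents."""
    means = {}
    for subscale, items in SUBSCALES.items():
        ratings = [r[item] for r in responses for item in items]
        means[subscale] = sum(ratings) / len(ratings)
    return means

responses = [
    {"I saw how this relates to my goals": 4,
     "I felt I was driving the exploration": 5,
     "I had opportunities to learn with peers": 3},
]
print(subscale_means(responses))
# -> {'value': 4.0, 'autonomy': 5.0, 'relatedness': 3.0}
```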
Validate the overhead-to-outcomes ranking. Did faculty feel that the experience was ‘worth it’? Did they feel the simulation was the best vehicle for delivering the experience and the learning? Post-administration surveys for faculty can be used to gather this information. The post-play user surveys will also form part of the data required for faculty administrators to consider this.
If possible, assess the impact of the simulation on larger knowledge constructs beyond the realm of the simulation and classroom. This is not always feasible, but follow-up surveys can be used and can be revealing. When it is not feasible, it is especially important to try to assess the actions, middle skills, and big skills gained that might best serve as a proxy for the larger constructs. That is why making the links between constructs and actions is so critical.
A final test of the entire process should be included as well – an after-action review or post-project assessment. Incorporating lessons learned into future projects, even regarding non-academic project challenges, will increase the likelihood that future projects spend more time focused on achieving learning goals.
Simulations are designed to convey complex knowledge and, hopefully, to allow that knowledge to inform meaningful activity outside of the classroom. But such delivery and transference of knowledge has many components, and simulations themselves are complex educational tools with many facets and features available for use. Understanding how to map simulation features to learning – in effect, how to design effective simulations – is in and of itself a challenge. But understanding how to assess that effectiveness may be an even larger challenge.
Clearly the three HBSP simulations referenced here each have strengths corresponding to some of the attributes associated with effective learning assessment. The challenge is finding ways to leverage these strengths across the widest possible span of simulations, and more importantly to determine why certain simulations have certain strengths so that features and components may be applied intelligently based on an accurate assessment of how and where they work best.
This paper began with a problem statement that outlined a challenge many educational simulation developers face: how can learning outcomes be validated, and does the literature back up general faculty impressions of simulation effectiveness? The literature shows that there are indeed tangible concepts regarding motivation, cognitive engagement, transference, and larger knowledge constructs that can be related directly to simulation design, use, and outcomes. Institutionalizing simulation development to account for these concepts is a large but necessary challenge to ensure that the learning outcomes of these powerful tools live up to the expectation that already exists for their use in business education.
Please note that at the time of this writing, none of these simulations have officially been launched for sale to the public. Therefore all descriptions and screenshots should be considered works-in-progress.
Appendix A: Universal Rental Car Pricing Simulation
Authors: John Gourville (Harvard Business School), John Hogan (Monitor Group’s Strategic Pricing Group), Tom Nagle (Monitor Group’s Strategic Pricing Group)
This web-based simulation presents an engaging context in which students develop their knowledge of pricing by managing a rental car operation (Universal) in Florida and improve regional performance by developing a pricing strategy. The simulation includes three regions -- Orlando, Tampa, and Miami -- which vary in size, market dynamics, and customer mix. It involves competition between two car rental companies, with players inputting decisions for Universal. The simulation lasts for up to twelve simulated months. Whether assigned as individuals or teams, players must set weekday and weekend prices for each region for each period (month). In addition, they are asked to make fleet capacity decisions at several points throughout the simulation.
The simulation is asynchronous and can be assigned for homework. A robust Facilitator’s Guide is included, providing an overview of the simulation screens as well as a comprehensive Teaching Note with detailed commentary on debriefing the simulation.
The simulation can be assigned and used in different ways to meet the needs of the instructor. For example, it can be assigned as a pre-class exercise with subsequent in-class debrief. Alternatively, given the range of variables at the professor’s disposal, the professor can craft weekly assignments throughout the course which highlight specific learning objectives. Finally, the simulation can be run multiple times, with increasing complexity.
There are many principles of pricing that can be explored in the simulation (a toy illustration of the elasticity and margin arithmetic follows this list):
- Nature and dynamics of consumer response to price (i.e., price elasticities)
- Importance of understanding and accounting for differences across customer segments
- Importance of understanding and accounting for differences across geographic markets (heterogeneity of demand)
- Importance of accounting for competitive response
- Impact of price on overall marketplace demand, and impact of general economic conditions on the demand function
- Economics of pricing decisions and associated marginal math
- Role of pricing in managing product inventory (e.g., managing excess demand and stock-outs)
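As flagged above, here is a toy illustration of the elasticity and marginal math at work. The constant-elasticity demand form and every number below are assumptions for exposition, not the Universal simulation's actual model.

```python
# Toy elasticity and margin arithmetic; all numbers are illustrative only.

def demand(price, base_price=50.0, base_qty=1000.0, elasticity=-2.0):
    """Constant-elasticity demand: quantity scales with (P/P0)**elasticity."""
    return base_qty * (price / base_price) ** elasticity

def contribution(price, unit_cost=20.0):
    """Marginal math: (price - unit cost) * quantity demanded."""
    return (price - unit_cost) * demand(price)

# Compare two candidate prices for one region.
for p in (45.0, 55.0):
    print(p, round(demand(p)), round(contribution(p)))
# 45.0 1235 30864  <- lower price wins volume *and* contribution here
# 55.0  826 28926
```

With an own-price elasticity of -2 and a $20 unit cost, the lower candidate price earns more contribution; the same arithmetic underlies the weekday/weekend and regional pricing decisions players face.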
Appendix B: Everest Team Leadership Simulation
Authors: Michael Roberto (Bryant University), Amy Edmondson (Harvard Business School)
This web-based simulation presents an engaging setting in which students explore aspects of leadership and team dynamics using a team ascent of Mount Everest as the backdrop. Players log in and are assigned one of five roles on a team attempting to summit the mountain. The simulation lasts eight rounds -- about 1.5 hours of seat time. Each round, team members analyze pieces of both symmetric and asymmetric information concerning the status of weather, health, supplies, etc. They then collectively discuss whether or not it is prudent to attempt to reach the next camp en route to the summit. Decisions must be made concerning the most effective distribution of supplies and oxygen bottles needed for the ascent – decisions which affect hiking speed, health, and ultimately the team’s success in summiting the mountain. Failure to accurately communicate and analyze information as a team has negative consequences for team performance.
The simulation is synchronous and designed to be used with teams of students. A robust Facilitator’s Guide is included that contains an overview of simulation screens/elements as well as a comprehensive Teaching Note. The simulation allows instructors to configure the extent to which information and interests among team members are asymmetric, adding to the challenges the teams face when making decisions on the mountain.
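For readers who think in pseudocode, the loop below sketches the round structure described above. The eight-round count and the advance/allocate decisions come from the description; every numeric effect is an invented assumption, not the simulation's actual model.

```python
# Illustrative round loop for a team ascent; all effects are invented.
import random

def run_round(team_health, oxygen_allocated, advance):
    """One round: the team decides to advance (or hold) and splits supplies."""
    if advance:
        # Climbing costs health; allocated oxygen offsets part of the cost.
        team_health -= 10 - min(oxygen_allocated, 8)
        team_health -= random.randint(0, 3)  # weather and health surprises
    return team_health

health = 100
for round_no in range(8):  # eight rounds, per the description above
    health = run_round(health, oxygen_allocated=6, advance=True)
print("final team health:", health)
```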