I’ve been experimenting with using ChatGPT to draft a branching scenario. Now, I have a complete prototype scenario, “The New Hire with Attitude,” written by ChatGPT. To create the scenario, I generally prompted for one decision point or passage at a time. I did some light editing of the text while creating this prototype, but I mostly left the text as is. (If this were an actual project for a client, it would definitely need more editing.) While I do see some possibilities for using ChatGPT for writing scenarios, this experience has reinforced how important it is to have an actual human reviewing the content. Using AI to draft a scenario might speed up the process, but it definitely requires checking for errors, inconsistencies, redundancies, and other issues.
Try the prototype
I built the prototype in Twine as I generated the text in ChatGPT. Without Twine, it would have been nearly impossible to keep track of the structure. If the scenario isn’t embedded below or is too narrow, try playing the scenario in a new tab.
Note that this is a prototype. It should work and be complete with functional links, but the text is not polished. You may find some continuity errors, clunky wording, etc.
The background image for the scenario was created in Playground.
This scenario has a moderately complex branching structure. It has 46 passages and about 7100 words in total. I often reused choices to manage the complexity of the branching structure, but this still has 15 different endings. (I think with better editing I could make that more efficient and collapse some of those endings together, but this is just a prototype.)
In Twine, I color-coded my tags.
- Green: Good choices
- Yellow: OK choices
- Red: Bad choices
- Purple: Endings (each offers a choice of returning to the previous decision point or restarting from the beginning)
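For readers unfamiliar with Twine's text notation, a tagged decision point and ending might look something like the sketch below in Twee format. The passage names, dialogue, and tags here are illustrative, not copied from the actual prototype; tag colors are assigned in the Twine editor, while the Twee source just records the tag names in brackets after each passage title.

```twee
:: Decision Point 2 [mediation]
Rita crosses her arms. "I've already explained why that timeline won't work."

[[Guide Rita and Oliver in refining a compromise solution.->Good Path 2]]
[[Suggest taking a short break to let emotions cool.->OK Path 2]]
[[Demand they find a solution immediately.->Bad Path 2]]

:: Bad Path 2 [red]
The conversation escalates, and the meeting ends with Rita and Oliver
more entrenched than before.

[[Go back to the previous decision point.->Decision Point 2]]
[[Restart from the beginning.->Start]]
```

Reusing a passage like `Decision Point 2` as a link target from multiple paths is what keeps the structure from branching exponentially.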
Repetitive choices and text
As I used the same basic prompt structure to continue building more choices and alternate paths, I noticed how repetitive ChatGPT’s responses were. Of course, that makes sense to some degree: using the same prompts should get similar results. Still, I saw less variation in ChatGPT’s responses than I expected. The advantage of that repetitiveness was that it made it easy for me to reuse the choices and keep the structure from expanding exponentially.
I found that I needed to vary the prompts or nudge ChatGPT more to produce more variety. Sometimes I just asked for more choices. (“Give me three more alternative choices.”) That worked reasonably well because I could pick the best options.
Sometimes, instead of simply asking for three choices, I prompted it with more details on what I wanted.
(Prompt I used) Continue the conversation on an alternate path, showing what would happen if the user chose [[Express your frustration with their behavior and demand they find a solution immediately.]] Write 3-4 lines of dialogue between Rita and Oliver struggling to collaborate on a solution. Then, write a multiple choice question with 3 options for what the user could choose to do at that point. One option is the best option, acknowledging the intensity of emotions. One option is an OK option, suggesting a break. The third option is a bad response that will escalate the conflict instead of deescalating. Mark which option is the Best, OK, and Bad.
Providing more specific details about the choice worked well; ChatGPT generated the more polished wording. I used that kind of specific direction to reuse choices or direct the learners back to a specific path. You could also use a prompt like that to insert details from your needs analysis or SME interviews about common mistakes or the best answers.
Overall, the scenario feels really generic. That’s partly because I deliberately chose a topic where I didn’t have a lot of specific details so I could let ChatGPT be free to make up details. This feels like a scenario written for such a broad audience that it avoids adding any specifics about the context. It’s never clear what kind of project Rita and Oliver are working on or what kind of business they’re in. Without some of that specific context, the story feels a little flat.
Choices gave too many hints
One of the main changes I made in editing was removing the hints that ChatGPT provided in the choices. So many of the AI-generated choices gave too much of the rationale in the choice itself, making it too obvious.
As the mediator, what should you do next to facilitate a more constructive resolution?
Option 1: Intervene and suggest implementing a compromise solution now that they are actively discussing their ideas.
Option 2: Allow Rita and Oliver to continue the discussion without any further guidance or structure, as they seem to be making progress.
Option 3 (Best Option): Continue to facilitate the discussion, actively guiding Rita and Oliver in collaboratively refining the compromise solution and ensuring that both their concerns and ideas are considered and respected.
In the above passage, the best option is the longest, which already makes it the most likely right answer. It’s too long, so it needs to be shortened to something like [[Guide Rita and Oliver in collaboratively refining the compromise solution.]] Option 2 includes “without any further guidance or structure,” which is an obvious hint that it’s not the best choice.
Would I write a scenario with ChatGPT again?
Would I use ChatGPT to write an entire branching scenario again? Probably not. This was an excellent experiment to practice prompting and to explore what’s possible. It also helped me identify the weaknesses in this approach. It can’t be used on its own without editing and verification.
Instead of using ChatGPT to write an entire branching scenario, I will likely use it in the future to help with writing scenarios. Using generative AI tools for brainstorming story ideas is genuinely helpful. I can also see using it to come up with alternative choices when I’m stuck at a particular place in the story. While I will likely continue to write most scenarios myself, I can see the value in using it to augment my writing. I suspect that people who are less experienced with branching scenarios may find even more value in using ChatGPT to create alternative choices.
Read more about my process
To learn more about my process and the prompts I used, check out the previous two posts in the series.
Your experiences writing scenarios with AI tools?
If you have tried using ChatGPT or other AI tools to write scenarios, I’d love to hear about your experiences. Leave a comment below or reply to this email.
Generating Plausible Choices and Consequences for Scenarios Using AI Tools. Thursday, April 25, 10:00 EDT. Learn to use AI tools to generate draft scenario questions, choices, and consequences. Understand how to refine prompts, recognize the limitations of AI tools, and know when to rely on AI versus manual content creation. Part of the Learning & HR Tech Conference, April 23-25 in Orlando.