Learning (and teaching) by interacting
Nobody explained to us how to interact with ChatGPT or Gemini, right? Yet we still learned, over time, how to write better prompts to get better answers.
Modern users are generally proficient at navigating new interfaces, so lengthy tutorials increasingly feel unnecessary. Instead, the design should prioritise intuitive guidance that is seamlessly integrated into the interface, enriched with subtle interactive elements that keep the experience engaging.
Learning by interacting is about allowing users to learn how to use your chatbot through interaction rather than instruction. This method respects the user’s innate ability to explore and understand new tools organically. This can be achieved through carefully crafted microinteractions and subtle visual cues that guide the user’s behaviour. For instance, using a simple greeting like "Hi there! What can I help you with today?" immediately sets the conversational tone and establishes the framework for interaction. This approach subtly instructs the user on how to begin their engagement with the chatbot without explicit directions.
When users provide their first prompt, we can still encourage them to enrich their next one with contextual suggestions. For example, if a user inputs, "cancel subscription," the chatbot could respond with, "I can help with that! For a smoother process, you can tell me something like, 'Cancel my Spotify subscription starting next month.' Would you like to proceed with canceling your subscription now?"
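The suggestion pattern above can be sketched in a few lines. This is a minimal illustration, not a real implementation: intent detection is faked with a keyword lookup (`SUGGESTIONS` is a name I made up), whereas a production chatbot would rely on an NLU model.

```python
# Hypothetical lookup table: terse intents mapped to replies that model a
# richer follow-up prompt. In practice this would come from an NLU layer.
SUGGESTIONS = {
    "cancel subscription": (
        "I can help with that! For a smoother process, you can tell me "
        "something like, 'Cancel my Spotify subscription starting next "
        "month.' Would you like to proceed with canceling your "
        "subscription now?"
    ),
}


def respond(prompt: str) -> str:
    """Reply to a terse prompt with an example of a richer one."""
    key = prompt.lower().strip()
    if key in SUGGESTIONS:
        return SUGGESTIONS[key]
    # Default greeting sets the conversational tone for first contact.
    return "Hi there! What can I help you with today?"
```

The point is not the lookup itself but the shape of the reply: it accepts the request, models a better-formed prompt, and still offers to proceed immediately.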
The power of the perfect, first prompt
The quality of user input directly influences the effectiveness and relevance of the chatbot's responses. We don't want generic prompts, but rather input with specific, detailed requests.
But designers are actually the first ones to provide the initial prompt (mentioned in the previous paragraph: "Hi there! What can I help you with today?") that encourages users to provide detailed and contextually rich inputs. Ideally, the first prompt should combine personalisation (addressing the user by name or referencing past interactions), contextual relevance (to the user's current needs or past interactions) and curated choices (helping users quickly identify and articulate their needs).
Quick example to illustrate this: instead of using a generic "How can I help you?" prompt, we can use: "Hi [User Name], welcome back! Looking to manage your account, explore new products, or maybe something else entirely?" This could be followed by pre-defined options like "Manage Account," "Browse Products," or "Let me explore." After the user selects an option, we can continue refining their request: "Great choice! You selected 'Manage Account.' What would you like to do today?" These curated choices help narrow down the user's request, allowing the chatbot to provide precise and helpful responses.
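As a sketch, the opening message and the refinement step might look like the following. All function and field names here are my own illustration, and the quick-reply options are passed in rather than hard-coded:

```python
def first_prompt(user_name: str, options: list) -> dict:
    """Build a personalised opening message with curated choices."""
    return {
        "text": (
            f"Hi {user_name}, welcome back! Looking to manage your "
            "account, explore new products, or maybe something else "
            "entirely?"
        ),
        # Curated choices rendered as tappable quick replies.
        "quick_replies": list(options),
    }


def refine(choice: str) -> str:
    """Acknowledge the selection and narrow the request further."""
    return (
        f"Great choice! You selected '{choice}.' "
        "What would you like to do today?"
    )
```

A typical flow would call `first_prompt("Ana", ["Manage Account", "Browse Products", "Let me explore"])`, wait for the tap, then send `refine("Manage Account")`.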
Implementing these strategies ensures the user can steer the conversation in a productive direction.
One response at a time
Delivering one response at a time encourages users to think more carefully about their queries, something I have been able to test and observe first-hand in usability sessions. This approach contrasts with the multi-bubble style of chatting, which often leads to fragmented and less thoughtful inputs.
This is why I believe we should move away from the multi-bubble way of chatting (in AI chatbots, not when talking to a friend, of course):
A simple example:
- User: "What are some good restaurants nearby?"
- Chatbot: "Here are some popular categories: Italian cuisine, sushi bars, and vegan restaurants. Which one interests you the most?"
This single response provides clear options and encourages the user to specify their preference, leading to a more detailed and useful input.
A multi-bubble version of it would be something like:
- User: "What are some good restaurants nearby?"
- Chatbot: "We have several categories of restaurants."
- Chatbot: "Italian cuisine is very popular."
- Chatbot: "There are also some great sushi bars."
- Chatbot: "And a few vegan restaurants."
- Chatbot: "Which one interests you the most?"
This multi-bubble approach breaks the response into multiple smaller messages. While this might seem conversational, it can lead to information overload and make it harder for the user to process all options at once.
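One way to enforce the single-response pattern is to merge draft fragments before sending anything, so the user always receives one message. A minimal sketch, assuming each fragment is a complete sentence (the function name is mine):

```python
def consolidate(bubbles: list) -> str:
    """Merge draft message fragments into a single, scannable response.

    Instead of streaming each fragment as its own bubble, join them
    with spaces so the user gets one coherent message to react to.
    """
    return " ".join(fragment.strip() for fragment in bubbles)
```

Applied to the example above, `consolidate` would turn the stream of bubbles back into the single "Here are some popular categories: ..." response.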
To summarise, I have often found that the strategy of providing one response at a time, coupled with open-ended prompts, significantly enhances the quality of user input.
Feedback that feels natural
I have always felt that traditional feedback methods like thumbs up/down buttons, while straightforward, disrupt the conversational flow and feel unnatural. That's why I took every chance to explore how to integrate feedback seamlessly into the dialogue, creating a more user-friendly experience.
To make feedback feel like a natural part of the conversation, it should be woven into the interaction contextually. This approach respects the user’s journey and minimises disruption. Here’s what I implemented in past projects to gather feedback from the users:
- Contextual prompts: Instead of asking for feedback out of the blue, tie it to the user’s last interaction. This makes the feedback request relevant and less intrusive.
Example: After helping a user book a flight, the chatbot could ask, "Was this booking process smooth for you?" This question feels natural as it directly relates to the recent task.
- Pre-defined responses: Use pre-defined responses that align with the conversation’s flow to gather feedback. This method is less abrupt and fits organically into the dialogue.
Example: If a user indicates a response wasn't helpful, say by tapping a pre-defined reply such as "That's not quite what I meant," the chatbot can follow up with, "Let me clarify further." This keeps the conversation flowing and gathers useful feedback without breaking the interaction.
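The contextual-prompt idea boils down to tying the feedback question to the task that just finished. A sketch, where the task keys and questions are purely illustrative (only "book_flight" comes from the example above; the subscription entry is my own addition):

```python
from typing import Optional

# Hypothetical mapping from a completed task to a feedback question
# that fits naturally into the conversation.
FEEDBACK_PROMPTS = {
    "book_flight": "Was this booking process smooth for you?",
    "cancel_subscription": "Did the cancellation go the way you expected?",
}


def contextual_feedback(last_task: str) -> Optional[str]:
    """Return a task-specific feedback question, or None to stay silent.

    Returning None matters: if we have no relevant question, asking
    nothing is less intrusive than asking out of the blue.
    """
    return FEEDBACK_PROMPTS.get(last_task)
```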
When it comes to the HOW, here are my favourite elements, the ones that have worked for me to gather that feedback effectively and train the model:
- Polls and multiple-choice questions: These can be integrated at logical points in the conversation to gather specific feedback. Polls are effective for quick preferences and can be visually engaging.
Example: After providing hotel recommendations, the chatbot could ask, "Would you prefer a hotel with a pool or one close to the beach?" This helps refine the suggestions and feels like a natural continuation of the service.
- Open-ended questions: Inviting users to share their thoughts or experiences in their own words can provide richer, more detailed feedback.
Example: After completing a service or interaction, the chatbot might ask, "After your trip, tell me what your favorite part was! I'm always learning and love to hear feedback. ⭐️" This open-ended question encourages users to share more nuanced insights and helps the chatbot learn and improve.
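Both poll answers and open-ended answers are only useful if they are captured in a structured way for later analysis or model training. A minimal sketch of such a log, assuming a flat record format (the class and field names are my own, not from any framework):

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackLog:
    """Collects poll and open-ended feedback as uniform records."""

    records: list = field(default_factory=list)

    def add_poll(self, question: str, choice: str) -> None:
        # Quick-preference answer from a tappable poll.
        self.records.append(
            {"type": "poll", "question": question, "answer": choice}
        )

    def add_open_ended(self, question: str, text: str) -> None:
        # Free-text answer with richer, more nuanced signal.
        self.records.append(
            {"type": "open", "question": question, "answer": text}
        )
```

Keeping both kinds of feedback in one uniform shape makes it straightforward to review them together or feed them into an evaluation pipeline later.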
Disclaimer: This article was not written by AI, or by any chatbot out there.