The New Conversational UX Stack

Conversational UX Stack

Welcome to the Conversational Design Hub. This page links to all of our most important Conversational UX content.

Impact of LLMs

LLMs are redefining how we build conversational agents. Today, LLMs can do most of the heavy lifting in minutes. The days of building hundreds of flows for every useful intent or FAQ are gone. Instead, an LLM can be trained on a knowledge base and give great answers.

 

The role of a Conversation Designer has changed. Designers now have to think differently. Instead of thinking linearly and taking a flow-based approach to design, we need to think more about knowledge, information, data and how all of this leads to meaning making.

 

Through stories, knowledge bases and prompt engineering, designers will soon be creating experiences we once only dreamed of. The job has changed, and it is now more creative than ever.

New Design Stack

A Conversational AI project will consist of the following design stack:

1. Flows, Intents & Entities: These will still be useful in use cases where the business needs very specific answers. Additionally, they can be used as follow-up actions to interest a user has shown in a particular product or service.

2. Large Language Models (LLMs): Integrate Large Language Models into your project at multiple stages. LLMs can be used to interpret user intent, entities and sentiment, and they can be used to answer questions. Multiple LLMs can be used in a single project, where each LLM has a specific function (see the sketch after this list).

3. Prompt Engineering: Now the fun begins. Prompt engineering includes all of the instructions given to an LLM in order to get back high-quality, accurate responses. This includes giving the LLM a role, for example "you are a superstar customer service agent", and prompt chaining.
4. Knowledge Bases: This entails training the LLM on your own data. FAQs and website information are great examples. The main challenge here will be designing knowledge bases in such a way that the LLM can understand the information quickly and accurately. Knowledge base design is key!

5. Storytelling, Tone & Personality: At all times, we all have a story going on in our minds. The story is about what we want, what we believe will happen when we get it and whether we are on the right track. Even as you are reading this sentence, you are comparing this information to that story and evaluating its fit. Designers will need to consider the customer's journey and the story customers have in their minds, and create a design, narrative and tone that fit the expectations and goals of the user.
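To make the stack concrete, here is a minimal, hypothetical sketch in Python. The company name, prompt wording and the `call_llm` placeholder are assumptions for illustration, not any specific vendor's API; the point is how a role prompt (3), knowledge-base grounding (4) and LLM interpretation of intent and sentiment (2) can combine in a single request.

```python
# Hypothetical sketch: one LLM call that combines a role prompt,
# knowledge-base grounding, and intent/sentiment interpretation.
# `call_llm` stands in for whichever LLM provider or SDK you actually use.
from typing import Callable

ROLE = "You are a superstar customer service agent for Acme Telecom."  # prompt engineering: role

KNOWLEDGE = """\
Q: How do I reset my router?
A: Hold the reset button for 10 seconds, then wait 2 minutes.
"""  # knowledge base: snippets retrieved for this question

def answer(user_message: str, call_llm: Callable[[str], str]) -> str:
    """Build one grounded prompt: role + retrieved knowledge + the user's message."""
    prompt = (
        f"{ROLE}\n\n"
        "Use ONLY the knowledge below to answer. If it is not covered, say so.\n"
        f"---\n{KNOWLEDGE}---\n\n"
        f"Customer: {user_message}\n"
        "First output the customer's intent and sentiment as one JSON line, then your reply."
    )
    return call_llm(prompt)  # the LLM interprets intent/sentiment and answers

# Example usage with a stubbed model:
if __name__ == "__main__":
    fake_llm = lambda p: '{"intent": "router_reset", "sentiment": "frustrated"}\nHold the reset button for 10 seconds...'
    print(answer("My internet is down again, how do I reset this thing?", fake_llm))
```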

This is a high-level overview of how the role has shifted. Designing bots is now more about organizing information, understanding ontologies, taxonomies, semantics and meaning making through stories. A good example of where we are going is the designers in Westworld.

Future of CUX & the Web

As Conversational AI agents evolve and become more and more commonplace in our everyday lives, we need to imagine and consider the implications for our businesses.

Right now, Search Generative Experience (SGE), Google's AI-powered search experience, is decreasing website traffic by 18% to 68% according to the latest data from Search Engine Land. SGE is only in its infancy, and at some point it will be able to answer most questions better and more directly than the average person could after reading the corresponding information. In other words, the web as we know it is changing.

Websites now have to change. To date, websites have been like digital magazines. They are mostly made up of static content.

 

In the future, websites will have to become destinations: places where people go to meet and have dynamic, personalized experiences that SGE cannot give them.

Main Challenges

What is an LLM?

When a new technology really wows us and gets us excited, it becomes a part of us. We make it ours and we anthropomorphize it. We project human-like qualities onto it, and this can hold us back from really understanding what we are actually dealing with.

So let's consider a few questions. Mainly, what is an LLM and what are its limitations?

 

Perhaps these questions and ideas will illuminate our understanding:

  1. Is an LLM a program?

  2. Is an LLM a knowledge base?

  3. Does an LLM know anything? 

  4. If an LLM is a program, how does it compute over its 70-100 billion parameters in only a few seconds?

  5. If an LLM is a knowledge base, why does it need to predict? 

  6. How can an LLM with billions of parameters, trained on pretty much the entire internet, fit on a 100GB drive? (A rough calculation follows this list.)

  7. What are some simple tasks that LLMs can't do?
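As a rough answer to question 6: a model's size on disk is roughly its parameter count times the bytes used per parameter, so quantized weights can fit in surprisingly little space. The figures below are illustrative assumptions, not any specific model's specs.

```python
# Back-of-the-envelope: why a model with ~70 billion parameters can fit on a ~100GB drive.
params = 70e9        # 70 billion parameters (illustrative)
bytes_fp16 = 2       # 16-bit floating-point weights
bytes_int8 = 1       # 8-bit quantized weights

print(f"fp16: {params * bytes_fp16 / 1e9:.0f} GB")  # ~140 GB
print(f"int8: {params * bytes_int8 / 1e9:.0f} GB")  # ~70 GB, fits on a 100GB drive
```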

Hopefully these questions dispel some of the mystique around LLMs. There are a number of things that most people believe about them that are contradictory and wrong.

First, LLMs are not knowledge bases, and they are not really programs either. What they are is a statistical representation of a knowledge base.

 

In other words, an LLM like GPT-4 has been trained on vast amounts of text that it has condensed into statistical patterns stored across hundreds of billions of parameters. It doesn't have any knowledge, but it has the patterns of knowledge. When you ask it a question, it predicts the answer based on its statistical model.

 

A good way to think about this is that it is basically like a filter. Your question becomes the filter and configures the shape of the model so the right content or words drip out.
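As a toy illustration of "patterns of knowledge" rather than knowledge itself, here is a deliberately crude bigram model in Python. It is not how a real LLM works internally, but it shows the same idea in miniature: nothing is looked up, the next word is simply predicted from stored statistics.

```python
# Toy illustration: a bigram "language model" stores statistics, not facts.
from collections import Counter, defaultdict

corpus = "the router reset button is on the back the router reset takes two minutes".split()

# "Training": count which word tends to follow which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word; there is no lookup of facts."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("router"))  # -> "reset": a pattern, not a stored answer
```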

If you want to learn more about LLMs and how they work, check out our LLM section.

Bias

What is bias? Is it ever a good thing? It's very important that we are on the same page about what it means so we can understand it better.

Here is Webster's definition of bias:

  1. an inclination of temperament or outlook especially : a personal and sometimes unreasoned judgment : PREJUDICE

  2. an instance of such prejudice

  3. BENT, TENDENCY

  4. deviation of the expected value of a statistical estimate from the quantity it estimates, and/or systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others

Transitive verb

  1. to give a settled and often prejudiced outlook to (e.g., "his background biases him against foreigners")

  2. to apply a slight negative or positive voltage to (something, such as a transistor)

The most important insight here is that bias has its roots in our preferences. For example, if you prefer coffee over tea, you are more likely to show bias towards coffee. You might believe that more people drink coffee, that coffee improves your mental focus, even that it is healthier. At the bare minimum, you will have more information about coffee, which will skew how you view both coffee and tea.

How does Bias Form?

Every interaction has three components. For example, as you're reading this sentence, there are the words on the screen, you the reader, and the meaning you are gathering from this information.

 

The first stage is attention: you are paying attention to this instead of something else. The second aspect is your perspective. You are seeing these ideas from a point of view that is limited in time and space. Consider how different this perspective would be if you had read this five years ago. Lastly, there is meaning making. All of these words will mean something to you. Depending on your background and education, consider how differently an engineer, a linguist and a conversational designer would interpret this paragraph.

 

Perceptions & Bias:

1. Attention: The world has too much information. Based on what we value, we decide where to look and what facts to pay attention to a priori.

BIAS: By doing so, we are implying that some information is more important than other information. We are showing a preference. 

2. Perspective: We only see objects from a point of view. Our perspective skews how we see the things we are paying attention to.

 

BIAS: Seeing a limited perspective or only one side of an object, event, person, topic, etc. leaves us open to confirmation bias, selection bias, sampling bias, reporting bias, volunteer bias, publication bias....


3. Meaning Making: We turn limited data on limited things into meaning. Meaning making is a process that involves our identities, beliefs, culture, personalities, etc. For example, consider how an adult and a child would interpret the same event.

BIAS: The entire knowledge base is a construct, something we fabricate. It is a useful invention that doesn't exist outside of us.

To properly address bias, we need to be aware of it at every stage of the process.

Hallucinations & Poor Accuracy

Why do good models do bad things?

The answer lies in the models and how they are built.

 

LLMs are statistical representations of knowledge bases. They have taken the world's information and knowledge and boiled it down to statistical principles.

 

These principles are like icons. Icons represent something much more than what they are. They are low-resolution images that represent a much bigger chunk of information. They give you a lot more information than meets the eye.

Additionally, LLMs were trained on biased data. We know this because the internet is full of biased data. For example, most of the internet is in English and represents Western values, yet most of the global population doesn't speak English or hold Western values.

When we combine low-resolution models with bias, we get hallucinations and poor accuracy.

How does this work and when does it happen?

It happens when you ask a detailed question about something specific. In our example, if you ask detailed questions about the icon, the model might make up those details in a way that conforms to its biases.

Cognitive Bias Codex


Solutions

Awareness

Awareness is the first step to addressing both issues: bias and accuracy. By understanding what LLMs actually are, how they work, and what their limitations are, we can build much better products.

Knowledge Bases

Next, we want to train the models on our own data. By doing so, we can limit bias and give the model the right information so it can answer questions about our business.

 

You can also use your proprietary data to build unique and personalized experiences that delight customers. 

Here are the variables to consider:

  1. Hierarchical Structure: This relates to how information is organized in your company and the overall hierarchy of data.

  2. Document Context & Semantics: This relates to how information is organized within a document, PDF, or webpage and how it relates to the context and overall meaning (see the sketch after this list).
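Here is a minimal sketch of both variables in Python. The field names and the paragraph-level chunking rule are illustrative assumptions, not a prescribed schema: each knowledge-base chunk keeps its place in the company hierarchy plus enough surrounding context to be understood on its own.

```python
# Minimal sketch: knowledge-base chunks that preserve hierarchy and document context.
from dataclasses import dataclass

@dataclass
class KBChunk:
    text: str          # the paragraph itself
    heading_path: str  # hierarchical structure, e.g. "Support > Billing > Refunds"
    source: str        # the originating page or PDF
    summary: str       # one-line context so the chunk makes sense on its own

def chunk_page(page_text: str, heading_path: str, source: str) -> list:
    """Split a page into paragraph-sized chunks, each tagged with its context."""
    paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    return [
        KBChunk(text=p, heading_path=heading_path, source=source,
                summary=f"{heading_path}: {p[:60]}")
        for p in paragraphs
    ]

chunks = chunk_page("Refunds take 5-7 days.\n\nContact billing for disputes.",
                    "Support > Billing > Refunds", "faq.html")
print(chunks[0].heading_path, "->", chunks[0].text)
```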

Want to learn more about how to do this? We created an entire section for knowledge bases, which includes experiments!

Prompt Engineering

This is the other great way to work around the limitations of LLMs.

Prompt Engineering is all about triggering the model to find the right 'icon' and then to follow up and go deeper into that 'icon' by asking the right contextual questions.

 

Here are a few variables to consider:

  1. Consider your overall goal

  2. How many steps are in your goal?

  3. What knowledge is required to accomplish the goal?

  4. Who is the right role or person for the job? 

  5. What is the context of your goal?

  6. What are the instructions for each step?

Using these parameters, you can break the task down into multiple prompts and give very clear, focused instructions in each one. A sketch of this follows below.
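Here is a minimal prompt-chaining sketch in Python. The step prompts, the role and the `call_llm` placeholder are illustrative assumptions: each prompt in the chain gets the role, its own context and a single focused instruction, and the output of one step becomes the context of the next.

```python
# Minimal prompt-chaining sketch: one focused instruction per step,
# with each step's output feeding the next step's context.
def call_llm(prompt: str) -> str:
    """Stub standing in for whichever model or SDK you use."""
    return f"[model output for: {prompt[:40]}...]"

ROLE = "You are a superstar customer service agent."

steps = [
    "Summarize the customer's goal in one sentence:\n{context}",
    "List the knowledge needed to accomplish that goal:\n{context}",
    "Write the final reply using only that knowledge, in a warm tone:\n{context}",
]

context = "Customer: I was double-charged last month and want a refund."
for step in steps:
    prompt = f"{ROLE}\n\n{step.format(context=context)}"
    context = call_llm(prompt)  # chain: this output becomes the next step's context

print(context)
```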

Future of the Internet

Knowledge is more than just information. It can open a person's eyes to seeing the world in a whole new way. It has the power to reveal. A good example of this is Amazon. Using your smartphone, you can see what a couch would look like in your living room. They have taken the information about the couch and turned it into an experience. This is where the internet is going.
