A brand is an idea that shapes the taste. Imagine a person who is not a wine connoisseur. They taste an average wine, and someone tells them it is the most expensive bottle on the market. Would that change the taste? Objectively, no. But how many things are really objective, especially when it comes to UX (user experience)?
It's important to understand that people perceive your product in their own way, regardless of what you had in mind when you made it. After all, no one ever tried to create a bad product, it just usually didn't click with the target audience.
That's why it's important to use the retrospective method to identify exactly why something worked or didn't work. This way you can repeat future successes and avoid making the same (or similar) mistakes. With that in mind, here's how you can use retrospective methods to simplify your hypothesis testing in the future.
How retrospectives facilitate the analysis of hypotheses
Why is UX design important?
When you're designing a product, you need to think about CLV (customer lifetime value). Unless you're in an industry like checkerboard manufacturing, you're not dealing with a single purchase per customer. According to some surveys, as few as 20% of your regulars can account for up to 80% of your profits.
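As a rough illustration, a commonly used simplified CLV formula multiplies average purchase value, purchase frequency, and retention time. The function and figures below are invented purely for illustration:

```python
def customer_lifetime_value(avg_purchase: float,
                            purchases_per_year: float,
                            years_retained: float) -> float:
    """Simplified CLV: total revenue expected from one customer."""
    return avg_purchase * purchases_per_year * years_retained

# A loyal "regular" vs. a one-off buyer (made-up numbers):
regular = customer_lifetime_value(40.0, 6, 5)  # buys often, stays 5 years
one_off = customer_lifetime_value(40.0, 1, 1)  # buys once
print(regular, one_off)
```

The gap between the two results is why a small group of regulars can dominate profits.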
We can talk about retargeting and customer retention here, but the truth is that none of those customers will stay if they haven't had a positive experience with your brand in the past.
On the other hand, users who have a positive experience tend to come back and make larger purchases with each subsequent interaction. Even if you're in the SaaS space, your customers might feel more comfortable switching to a more expensive plan or opting for a longer term package. It's also important to note that with SaaS, better UX design means lower churn rates.
One of the main reasons people share content on social media is to provide others with valuable and entertaining content, and to gain social recognition in this way. They are more likely to do this if they have had a positive experience, because in this case they can stand behind their words.
They could do even more than share: they could contribute through UGC (user-generated content).
You shouldn't take UX for granted. To check whether your work is actually working, you need to run a retrospective. To do this, you can ask:
- What went wrong (so we can fix it)?
- What went well (so we can repeat it next time)?
Retrospective methods and techniques exist so that you always have the right questions at hand.
The 4L Retrospective:
Most people have difficulty describing their experiences accurately. They understand what they have experienced and may even grasp the implications of what happened, but they have a hard time expressing themselves. That's why it's important to give them a framework, and you can do that with the 4L Retrospective, for example:
Open Feedback Questions
Like: What did you like?
Learned: What did you learn?
Lacked: What did you lack?
Longed for: What did you long for?
- Like: First, you need to ask participants (either survey takers or members of your agile team) what they liked about the product, process, or situation. These positive aspects can eventually become your strongest selling points, possibly even USPs.
- Lacked: Then ask users what they didn't like. This may be a little more difficult, especially if respondents don't want to seem confrontational. One way to encourage honesty is to ask it as a direct, neutral question. While people may not be willing to talk about what they "hate," they will often share their thoughts freely when simply asked what was missing.
- Longed for: Here, ask participants what single feature they would add if they could magically have just one wish. This is a great way to gain invaluable insight, because their creativity is not limited by their understanding of the development process.
- Learned: Finally, you can ask them what they learned from using the product. This is especially useful if they have used similar products before. How long did it take them to get used to the new features? Did they find the user interface difficult to understand, etc.?
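If you collect 4L answers as free text, one lightweight way to organize them is to tag each answer with its category and group the results. The responses below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical survey responses tagged with their 4L category.
responses = [
    ("Like", "The onboarding flow was quick"),
    ("Lacked", "No dark mode"),
    ("Longed for", "Offline support"),
    ("Learned", "Keyboard shortcuts exist"),
    ("Lacked", "No export to CSV"),
]

def group_by_l(items):
    """Group free-text answers under their 4L heading."""
    buckets = defaultdict(list)
    for category, answer in items:
        buckets[category].append(answer)
    return dict(buckets)

grouped = group_by_l(responses)
print(grouped["Lacked"])  # everything users felt was missing
```

Grouping like this makes it easy to see which of the four L's is accumulating the most answers.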
Appealing to the public is the very purpose of a product, and this is one way to examine whether it does.
Make a UX hypothesis:
When you conduct a survey, research, test, or experiment, you usually have an idea of an outcome in mind. This expected result is called a hypothesis. A hypothesis can be right or wrong, but you can't find out until you test it.
Formulating a hypothesis is so important because it can help you determine whether or not the experiment was successful. In practice, it would look like this: Our sales would increase by X if we:
- Change the price
- Change the UX
- Add value/function to the product
However, when it comes to a UX hypothesis, you may need to follow a different structure. A UX hypothesis typically includes several components:
- Statement
- Assumption
- Variables
- Expected result
- Methodology
- Success criteria
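For illustration, these components could be recorded in a simple structure so that every hypothesis is written down the same way. The field names and example values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class UXHypothesis:
    """One possible way to record the components listed above."""
    statement: str
    assumption: str
    variables: list
    expected_result: str
    methodology: str
    success_criteria: str

h = UXHypothesis(
    statement="A larger font lowers churn",
    assumption="Users abandon because text is hard to read",
    variables=["font size"],
    expected_result="Monthly churn drops by at least 0.5 points",
    methodology="A/B test over one billing cycle",
    success_criteria="Statistically significant churn reduction",
)
print(h.statement)
```

Writing the hypothesis down in full, success criteria included, forces you to settle every part of it before the test begins.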
Another important point is that you must have success criteria that are usually quantifiable.
For example, let's imagine that changing the font on your platform would lower the average churn rate. Then we make the change and the churn rate drops, but only by two subscribers out of several thousand. Sure, the hypothesis technically held, but is the difference relevant enough to count?
The most important thing is that you do not ask this question after the test. It must be settled in advance. You still have the right to treat the retention of even a single subscriber as a big difference, but that threshold must be set before the test.
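One standard way to settle this question in advance is a two-proportion z-test on the before-and-after churn rates. The sketch below is hand-rolled and uses made-up subscriber counts purely for illustration:

```python
import math

def two_proportion_z(churned_a, n_a, churned_b, n_b):
    """Two-proportion z-statistic for comparing two churn rates."""
    p_a, p_b = churned_a / n_a, churned_b / n_b
    p_pool = (churned_a + churned_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up example: 100 of 5000 churned before the change, 98 of 5000 after.
z = two_proportion_z(100, 5000, 98, 5000)
print(abs(z) > 1.96)  # False: two fewer churns is not significant at the 5% level
```

Deciding on the threshold (here, |z| > 1.96) before running the test is exactly the kind of pre-commitment the paragraph above calls for.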
You also need to establish a clear and fair methodology. You'd be surprised how easy it is to make changes to support your theory.
Refine your insights:
Most importantly, realizing what is wrong is not valuable in a vacuum. Recognizing that you have a problem only matters if you then work on it.
The first step is to get your retrospective methods right. You need to ask the right questions. A question like "Did you like our product?" is fine as an opener, but it has almost no practical value. If users didn't like the product, the drop in sales and the complaints to customer service and technical support will express that more clearly than any survey. If they didn't like it, you'll know.
The same applies to the opposite.
You want to know what they didn't like and why they didn't like it. Those are the specifics you have to get to the bottom of. You have to ask questions from which you can derive actionable information. It's as simple as that.
It's also important to have historical data that matters, even long after the problem has been fixed. You want to see if some problems keep recurring, at least in different forms and formats. If that's the case, you either have a poor development approach or your methodology is flawed.
One of the most important things is that you maintain open communication. You ask the customer a question (via a retrospective method), get the answer, and act on it. Then you fix the problem and announce it, whether in a big announcement or in the patch notes. Then you ask for new feedback. This process continues indefinitely, at least for as long as you support the product.
Best practices for gathering user-centered insights:
Finally, we'd like to share some tips for gaining user-centric insights that you might find useful.
First and foremost, you need to understand your target audience. The biggest problem that many managers, business owners, or even developers have is assuming that they are their target audience. However, this is not necessarily the case. Once you accept this, you've taken the first step to understanding who you're dealing with.
Second, you have to find the right target group. If your users belong to different demographic groups and the number of survey participants is limited, the sample must be representative. A bad sample is one of the worst mistakes you can make in statistical research.
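One common way to keep a small sample representative is proportional stratified sampling: draw from each demographic group in proportion to its share of the population. A minimal sketch with hypothetical age strata:

```python
import random

def stratified_sample(population_by_group, sample_size, seed=0):
    """Draw a sample whose group proportions mirror the population."""
    rng = random.Random(seed)
    total = sum(len(members) for members in population_by_group.values())
    sample = []
    for group, members in population_by_group.items():
        # Each stratum contributes in proportion to its size.
        k = round(sample_size * len(members) / total)
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical user IDs split by age bracket (60% / 30% / 10%).
users = {
    "18-24": list(range(0, 600)),
    "25-44": list(range(600, 900)),
    "45+":   list(range(900, 1000)),
}
picked = stratified_sample(users, 50)
print(len(picked))
```

With a 60/30/10 population split, a 50-person sample comes out as 30, 15, and 5 respondents per bracket, preserving the proportions a simple random draw might miss.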
Most importantly, all of your research needs to be contextual. You want to see how it works in a real environment, not in a vacuum or on paper.
Looking back on your own work is a grounding experience and an opportunity to learn.
Once your product is on the market, it takes on a life of its own, and you can do little about it until the next improvement phase. That's why you need to gather all the information you can and proactively ask your target audience what they think of it. Forming a hypothesis, understanding retrospective techniques, and refining insights are some ways to approach this.
Srdjan Gombar (Author bio):
Experienced content writer, published author and amateur boxer. Srdjan has a Bachelor of Arts in English Language and Literature and is passionate about technology, pop culture and self-improvement. In his free time, he reads, watches movies, and plays Super Mario Bros. with his son.