At Boba – How do we know we are doing it right?

One of our team's highest priorities as a startup is to strike the right balance between speed and quality. Our product team therefore uses a range of UX research methods, from very basic or even "quick and dirty" ones to more extensive studies, depending on the situation and the ambiguity of the problem.

Design meets the basic standards

Visual principles, UX laws, the 10 usability heuristics: these are widely trusted standards, and we make sure our designs meet at least these basic requirements before delivering anything.

We usually use the 10 heuristics to evaluate our designs before handing them off to the tech team.

The QC engineers also assist me in evaluating app accessibility and inclusion by testing the design on multiple devices with different display capabilities. Tested elements include basic accessibility standards such as color and contrast, readability, touch targets, and gestures.

Exploratory research

I joined Boba in its early stages, when the idea for the product was not yet fully formed. As we sought to find product-market fit, we made a concerted effort to explore as much as possible, as quickly as possible. Therefore, we did a lot of research, meeting as many target users as we could in order to understand them.

For example, we kicked off the whole Boba project with a focus group discussion aimed at discovering the target users' needs around social media.

Causal/Experimental research

Besides qualitative research, before launching anything important, and especially when we are torn between design options, we always try to run user tests, sometimes along with individual interviews, to find the answer.

For example, we once tested two design options for a home screen with a totally new information architecture, so that we could choose the one in which users recognized and understood the information we provide better.

Quantitative data from the app statistics

The performance of the overall product or a feature (stickiness rate, number of new users, ...) can reflect the effectiveness of the design, so we track those numbers very closely.
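As a concrete illustration: stickiness is commonly computed as the ratio of average daily active users to monthly active users (DAU/MAU). A minimal sketch, assuming per-day sets of active user IDs from an analytics export (the `daily_active` structure and the numbers are hypothetical, not from our real data):

```python
from datetime import date

def stickiness(daily_active: dict[date, set[str]]) -> float:
    """Stickiness = avg DAU / MAU: average daily actives divided by
    the count of unique users active across the whole period."""
    if not daily_active:
        return 0.0
    avg_dau = sum(len(users) for users in daily_active.values()) / len(daily_active)
    mau = len(set().union(*daily_active.values()))
    return avg_dau / mau if mau else 0.0

# Hypothetical activity log: three days, users identified by id
log = {
    date(2023, 1, 1): {"a", "b"},
    date(2023, 1, 2): {"a"},
    date(2023, 1, 3): {"a", "c"},
}
print(round(stickiness(log), 2))  # avg DAU 5/3, MAU 3 -> 0.56
```

A rising stickiness ratio suggests users find recurring value in the product; a drop after a redesign is a signal worth investigating.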

Here are some examples of design areas that can be reviewed in relation to feature metrics:

Design effectiveness related to the number of new users:


  • Product's look and feel.

  • Success rate of the Onboarding flow.

  • Effectiveness of the 'External story sharing' flow.

  • Effectiveness of the App referral feature / gamification.

Design effectiveness related to the number of 'Post story' actions:


  • Success rate of 'Post story' flows.

  • Effectiveness of the 'Post story using app suggestion' flows.

  • Notifications and information display (when users have new comments/reactions/...)
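
The success rate of a flow like 'Post story' can be estimated from analytics events as the share of users who finished the flow among those who started it. A rough sketch, assuming hypothetical event names such as `post_story_start` and `post_story_done` (our real event taxonomy may differ):

```python
def flow_success_rate(events: list[tuple[str, str]], start: str, finish: str) -> float:
    """Share of users who reached `finish` among users who triggered `start`.
    `events` is a list of (user_id, event_name) pairs from app analytics."""
    started = {user for user, event in events if event == start}
    finished = {user for user, event in events if event == finish}
    return len(finished & started) / len(started) if started else 0.0

# Hypothetical 'Post story' funnel: 3 users start, 2 finish
events = [
    ("u1", "post_story_start"), ("u1", "post_story_done"),
    ("u2", "post_story_start"),
    ("u3", "post_story_start"), ("u3", "post_story_done"),
]
print(round(flow_success_rate(events, "post_story_start", "post_story_done"), 2))  # 0.67
```

Comparing this rate before and after a design change is a cheap first check on whether the redesign helped or hurt.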

While it's true that figures from app stats may reflect more than just design, I can still use these metrics to hypothesize about, or diagnose, the potential design problems users may be experiencing. I then validate these hypotheses, and find the "why" behind what's happening, through further causal research.

Wrapping up

During the "searching for market fit" phase of a startup, user-research-doing-right is crucial for building a successful product. Being flexible and choosing the right methods, along with the appropriate amount of effort spent in different situations, is the key to success.