A variety of opinions, multiple data points and feedback contribute to functional, effective product design. Here’s how to do it.
Product design is not just about creating something beautiful – it’s about building something that works. A stunningly beautiful chair that’s so uncomfortable you can’t use it for more than a minute is far from effective design, however aesthetically pleasing or artistically innovative it is.
Product designers tackle all kinds of issues that require creative solutions. Our challenge is that the products we create are often targeted at people who are not us – users with specific needs or other attributes that we digital creators don’t share.
So how do you know whether something works or not? How can you tell that a digital product you designed is functional?
I’ve found that tuning in to peer feedback, testing and reading the data are essential to a successful design.
Not so long ago, designers would keep their creations concealed until the moment of the “big reveal.” The iterative back-and-forth that fuels every creative process was usually done in private, shared with the core design team at best.
Not anymore. I’ve found that opening up to feedback, sharing ideas and discussing suggestions – instead of seeking perfection while avoiding critique – has made me a better designer who makes better products.
Here are 3 things to take into account in the process:
1. An effective stakeholders’ review
The right prototype: as close to the final product as possible
A stakeholders’ review is a significant part of the creation process: we need other expert designers as well as the non-designers on the team to share their thoughts.
An effective review relies on an effective prototype: a highly detailed mockup that precisely relays the functionality of the product – that is, as close to the final design as possible, complete with animations when applicable and using real content.
This way, all stakeholders can see what works and what doesn’t sans guesswork.
Similar functionality, 3 alternatives
We usually create 3 different design versions to check how far we want to go with our experiment.
These 3 versions offer alternative treatments of a similar functionality, and all of them are rooted in the product brief – which we resolve in conversation with the product managers before we set to work, factoring in operational considerations, development capacity and strategic direction.
In general, the 3 versions would be as follows:
- Version 1 – “in-the-box”
A version that doesn’t stray too far from the common or existing framework.
- Version 2 – “out-of-the-box”
A less conventional, more innovative direction.
- Version 3 – “in-between”
Less conventional than version 1 (“in-the-box”), but more traditional than version 2 (“out-of-the-box”).
Collecting feedback and insights
Every stakeholder has an opinion, and these varying approaches are necessary for a fruitful solution – something of crowd wisdom, if you will.
2. Look into the data and collect insights
Product design is tied to business performance – so we have to learn from the numbers. Our main purpose is to create a product that looks good while also contributing to our business performance: be it more sales, more volume or a specific target audience that we want to attract with a tailored product fit for their needs.
The data sheds light on the user who will actually use this product. Here are some examples of questions whose answers provide data that informs the design process:
- What is the users’ age?
- What is their screen size?
- Are they using iOS or Android?
After we go live, we test, test and test some more.
The metrics tell us a lot about our users’ behavior. We look at click-through rate, track earnings per visit, and monitor the bounce rate. This data holds a treasure trove of insights into both the ‘what’ and the ‘why’: which version performs better, and why one is preferable to the other.
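To make those three metrics concrete, here is a minimal sketch of how they are derived from raw counts. The function name, field names and numbers are illustrative, not from our product:

```python
def summarize(visits, clicks, revenue, bounces):
    """Derive the three metrics mentioned above from raw session counts.

    visits  - total sessions in the period
    clicks  - clicks on the element being measured
    revenue - total earnings attributed to those sessions
    bounces - single-page sessions (user left without interacting)
    """
    return {
        "click_through_rate": clicks / visits,   # clicks per visit
        "earnings_per_visit": revenue / visits,  # revenue per visit
        "bounce_rate": bounces / visits,         # share of sessions that bounced
    }

# Illustrative numbers:
metrics = summarize(visits=10_000, clicks=520, revenue=3_400.0, bounces=4_100)
print(metrics)
```

In practice these counts come from an analytics pipeline rather than hand-entered totals, but the ratios themselves are this simple.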
Example: call-to-action buttons, tested
Here’s an example: we wanted to improve the design of our call-to-action buttons. We replaced the right-angled green button with a rounded blue one. We also changed the font and added an arrow to encourage users to click.
The fact that variation B looks more welcoming to some of us is not enough. Because call-to-action buttons are so critical to the performance of the page, we had to test this – serving variation A to 50% of the traffic, and variation B to the other 50%.
The test ran long enough to capture sufficient volume for the results to be statistically significant.
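One common way to check that significance is a two-proportion z-test on the click-through rates of the two variations. This is a sketch of that standard test, not the specific tooling we used, and the traffic numbers below are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, visits_a, clicks_b, visits_b):
    """Is variation B's click-through rate significantly different
    from variation A's? Returns the z statistic and two-sided p-value."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative traffic split – 10,000 visits per variation:
z, p = two_proportion_z_test(clicks_a=480, visits_a=10_000,
                             clicks_b=560, visits_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value lands below 0.05, meaning a difference that large is unlikely to be chance; with thinner traffic the same lift might not reach significance, which is why the test has to run long enough to capture volume.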
Indeed, we found that variation B outperformed variation A – a higher click-through rate, a lower bounce rate and better conversion rates. This was our winner.
Experimentation can be either uplifting or disappointing, but it’s always educational: yes, it’s amazing to see something you believed in prove helpful to users and boost product value. But failed experiments teach us no less – and sometimes more – than successful tests, and their insights benefit our future creations.
3. Apply feedback with care
Today, I’m way more open to peer review, seeking to understand the point of view of the person offering the feedback. When someone poses a question, it makes me think of solutions that hadn’t occurred to me before.
I take notes, research the point raised, and come up with solutions: in other words, listening closely makes for more complete products.
Do keep in mind that sometimes an overload of different opinions can be confusing and achieve the opposite result. So yes, be open to feedback, but don’t accept every point of criticism as the final truth. Feedback needs to be reviewed and taken into account, but that doesn’t necessarily mean it needs to be applied.
Don’t let it skew your entire process. Do allow it to fuel a good solution.
Tal Peled is a Product Design Team Leader, Fintech Unit, Natural Intelligence