Fresh out of college, I joined the creative intern summer program at one of the biggest ad agencies in the world. My novice self found everything impressive, from the war rooms littered with visual concepts to the cereal bar on the 7th floor. Every Friday, the designers on my team would print out a week’s worth of UI mocks, stick them on whiteboards, debate visual styles, and cross out any mocks deemed subpar. By the end, each board was left with only a few “best options”.
You’re probably thinking, “Sounds like a standard critique. What’s wrong with that?” Nothing, except the method itself.
Contrary to popular belief, design is more science than art. While art is meant to be interpreted, design is meant to be understood. This fundamental difference means that while art can be critiqued, design should be researched, tested, and iterated on.
Now, don’t get me wrong – it is absolutely important for a design to go through design reviews. At Kickstarter, we walk a feature through three rounds of design reviews:
- The Studio: exploration and discovery
- The Critique: narrowing down to one scenario
- The QA: pre-implementation sanity check
Notice how the critique is only one part of the process? Even then, we spend more time dissecting test results than picking apart mocks.
You are not your user
It is so easy to judge a design. The deliverables are visual, and people like to comment on how things look. It is also just as easy to assume how things should behave: “The average user will notice this first” or “As a user, I’d rather X than Y.” Assumptions should not be taken for granted. You have to validate them since they are just assumptions after all – duh, right? Yet, as much as the UX world likes talking about validating assumptions, we don’t do it enough. And when we do, the method is once again skewed towards qualitative approaches like the usual user interview or cognitive walkthrough. Arguably, it is a lot easier to start talking to people than, well, to query some data.
Quantitative data tells you the what; qualitative data tells you the why – that’s user research 101. Both are equally important in verifying hypotheses, yet I’d argue that one should happen before the other, for the sake of process. When you start with quantitative research and do some preliminary slicing and dicing of your users first, you can identify usage differences across user segments, analyse their characteristics, and evaluate a product’s impact on each – objectively and with scalability in mind. You can then leverage these high-level findings for qualitative research; for example, by making sure that the interview pool is representative of a wide range of users, not just the ones who scream the loudest. For that reason, I find design reviews a lot more useful when people propose what data to collect to validate my decisions, or how to devise a test, rather than offering subjective criticism.
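As a minimal sketch of that preliminary slicing and dicing, here is what segmenting users by usage might look like before recruiting an interview pool. All names, thresholds, and numbers are invented for illustration:

```python
from collections import Counter

# Hypothetical usage records: one dict per user (all values invented).
users = [
    {"id": 1, "sessions_per_week": 0},
    {"id": 2, "sessions_per_week": 1},
    {"id": 3, "sessions_per_week": 2},
    {"id": 4, "sessions_per_week": 9},
    {"id": 5, "sessions_per_week": 14},
    {"id": 6, "sessions_per_week": 1},
]

def segment(user):
    # Arbitrary example thresholds for illustration only.
    if user["sessions_per_week"] == 0:
        return "dormant"
    if user["sessions_per_week"] < 5:
        return "casual"
    return "power"

counts = Counter(segment(u) for u in users)
total = len(users)
shares = {seg: round(n / total, 2) for seg, n in counts.items()}
# `shares` shows how big each segment actually is, so an interview pool
# can mirror the real population instead of over-sampling the loudest users.
print(shares)
```

With the segment shares in hand, you can recruit interviewees in roughly those proportions rather than by whoever volunteers first.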
Stop designing for Dribbble
Quantitative data can even inform the most skin-deep UI decisions. As product designers, how many times have you finished a gorgeous design in Figma only to realise during implementation that it failed to handle a large volume of user input? A quick scroll down Dribbble shows myriad eye-catching dashboard graphics. Yes, I call them graphics, not mockups, because they are not representative of real products.
To design real applications is to know their constraints and edge cases (problems which occur at the extreme ends of operating parameters). How do we make 80% of use cases work well and the other 20% not work too badly?
The answer is to consider all possible UI scenarios by querying the data of the product at hand, before even architecting the interface. I recently helped Kickstarter ship Add-ons, a feature that allows creators to offer optional “add-on” rewards to backers. Some of the very first data questions I asked were: What is the maximum/average number of rewards a creator has, sliced by category and funding tier? How many items does a reward have on average/at most? What percentage of projects itemise their rewards? Then, by doing some data modelling, I got a pretty good picture of the UI’s capacity and what edge cases it should cover.
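The kind of “maximum/average, sliced by category” questions above can be answered with a few lines once the data is in hand. A sketch with an invented sample of projects (the categories and counts are made up, not real Kickstarter figures):

```python
from statistics import mean

# Invented sample: each project's category and how many rewards it offers.
projects = [
    {"category": "games", "rewards": 12},
    {"category": "games", "rewards": 30},
    {"category": "design", "rewards": 6},
    {"category": "design", "rewards": 10},
    {"category": "music", "rewards": 4},
]

# Group reward counts by category.
by_category = {}
for p in projects:
    by_category.setdefault(p["category"], []).append(p["rewards"])

# The max/average questions from the text, sliced by category.
stats = {
    cat: {"max": max(counts), "avg": mean(counts)}
    for cat, counts in by_category.items()
}
print(stats)
```

Numbers like `stats["games"]["max"]` tell you directly whether a reward list needs pagination, truncation, or search before a single pixel is pushed.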
When an engineer is working with you, there is nothing more annoying than constantly having to ask for new mocks because your fresh design could not handle the 20% case. The only way to get better at this is to stress-test the UI early on via quantitative methods to make sure it can handle everything.
Observe, collect, draw!
I once attended a talk by famed Italian information designer Giorgia Lupi. To my surprise, she always starts visualising data by hand regardless of the volume, be it 50 or 50,000 data points. Lupi co-authored the workbook Observe, Collect, Draw!, in which she designed various exercises to observe, collect, and categorise raw data. If you are a designer looking to get into the world of data, this is how you should start: by asking as many “how many,” “how much,” and “what’s the percentage of” questions as possible. When you’re presented with a data set, scroll through and eyeball the data. Get a feel for the raw data; your brain can derive a lot from it.
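Those “how many” and “what’s the percentage of” questions are deliberately simple. A toy sketch, using an invented list of reward-tier choices, shows how little code they take:

```python
# Invented raw records: which reward tier each backer picked.
pledges = ["basic", "basic", "deluxe", "basic", "collector", "deluxe"]

how_many = len(pledges)                             # "how many?"
deluxe_share = pledges.count("deluxe") / how_many   # "what's the percentage of?"
print(how_many, f"{deluxe_share:.0%}")
```

The point is not the code but the habit: every eyeballing session should end with a handful of concrete counts and percentages you can bring to a review.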
Start using a data discovery tool like Looker or Metabase, if your organisation has not already. Study your application’s data architecture and practice running queries using purely the graphical user interface (GUI). Tools like Looker greatly lower the barriers to entry for data querying and analysis with their GUIs, which means the harder task is navigating the unique data structure of your application. I suggest reading through the MDN Web Docs to get a basic grasp of arrays, objects, and data types. These are the building blocks of an application – understand them and you will unlock a whole new level of product design.
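To make those building blocks concrete, here is a hypothetical project record (every field name and value is invented) showing how objects, arrays, and primitive data types nest together, and how knowing the shapes lets you ask UI-relevant questions directly:

```python
# A hypothetical project record, illustrating the building blocks:
# objects (dicts), arrays (lists), and primitive data types.
project = {                      # object
    "title": "Example Project",  # string
    "funded": True,              # boolean
    "goal": 50_000,              # number
    "rewards": [                 # array of objects
        {"name": "Sticker pack", "price": 5},
        {"name": "Art book", "price": 40},
    ],
}

# Once you understand the shape, UI questions become one-liners:
reward_count = len(project["rewards"])
max_price = max(r["price"] for r in project["rewards"])
print(reward_count, max_price)
```

A designer who can read this structure can answer “how long can the rewards list get?” or “what is the widest price label?” without waiting for an engineer.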