Divergent Minds, Convergent Design: A Professional Examination of User Interface Critique

Abstract

This study explores how people from diverse backgrounds evaluate and think about digital experiences. Participants were divided into groups based on whether they had an artistic or non-artistic background and on how they affect the creation of interfaces, and each group was interviewed for their thoughts on a new interface. Their feedback was noted and sorted into categories to understand which areas of an interface each group targeted. These findings were analyzed to compare how the areas participants targeted related to their backgrounds, and to examine how a person's targets change after exposure to the design industry.

Among the major learnings from these tests: people with an artistic background find problems and solutions faster, and in larger total numbers, than people without one. Another major learning was that people without artistic backgrounds are unfazed by major design elements in interfaces, while people with artistic backgrounds hold strong opinions on elements that the general audience overlooks.

  1. Introduction

Many User Experience practitioners quote Henry Ford: “If I had asked people what they wanted, they would have said faster horses.” We are going to dissect this claim and see how much value it holds when applied to building experiences and solving problems in interfaces. Do users think differently from what designers imagine, and do designers solve the things that distinct types of users identify as problems on the interfaces they navigate? Do designers work on issues that need solving, or do they fixate on points induced by a designer's mindset of continual improvement?

Answering these questions is important for shaping how we design our interfaces; understanding how designers and users relate is key to shaping our design processes, given that most of our interactions now happen in digital spaces and this is increasingly the norm. To improve these processes, we must also understand how people with design (or, in our case, artistic) backgrounds think, and ask how the industry impacts people in these product-creation roles.

1.1 Problem Solving and Backgrounds

Let’s start with how the idea behind this study came up. While many studies discuss how backgrounds impact decision-making and problem-solving, most of them center on either mathematical problems or problems in physical space. “When computer engineers design tiny gadgets that can process a great deal of information very quickly, they don't think about the characteristics or needs of the people who will be using these gadgets.” [Vicente 2004] was a major inspiration point, prompting me to take the problems Vicente talks about and align them with the modern world of digital experiences.

1.2 Designing, Thinking and Shaping the Study Process and Metrics

To create a process for testing people, it was important to learn about the processes already in place for creating these designs, and how they are set up so that the people using them (or, in some cases, the people deemed fit to leverage them) get the best out of them to create optimal experiences for others.

This started the search for the processes behind design and design thinking, bringing me to Design Thinking: Understanding How Designers Think and Work by Nigel Cross. It describes a sewing machine and its design process, mentioning how the people responsible for making it took certain design decisions, but the users used the product in a unique way that made them rethink their design process [Cross 2023].

This defined a key point for the study and its process: a comparison of how designers look at products and their flaws against how users look at the same things, but in the digital space. The book discussed many major factors that affected users and their experiences, noting how users focused on a specific section of the product that the people creating it had not even considered. This was the main influence on designing the metrics that would be measured and compared in the study.

1.3 Current Studies Around the Digital Space

Before moving into the creation of the exact study, importance was given to finding similar studies and their learnings, so that takeaways from those studies could shape my study to supply new feedback rather than confirming something that was already investigated.

Most of the papers talked about bringing users into research and how it can change a design, which happens before the area of the design process I am planning to target. However, one specific publication, Designers and Users: Two Perspectives on Emotion and Design by Donald A. Norman and Andrew Ortony, was a key influence on how the interviews for the study were conducted.

It talks about how things differ from a designer’s perspective: due to the nature of their work, decisions come into play without the designers consciously thinking about them; these were called “Emotions by Accident” [Norman 2003]. While the factors mentioned in the paper concern physical products, patterns around these factors were used to tune the metrics evaluated in the study; translating these physical factors, and how they impact these “Emotions”, into digital factors was a challenge.

  2. Methodology

A process was created based on the learnings from the literature referred to; it was kept simple and short to avoid lapses of concentration during the process. While many micro-decisions taken during the process were informed by past studies and the inspirational literature (detailed in each section below), one major decision was taken before the study was created: “What UI screen should be used for testing? Should it be an already existing UI or something completely new?”

The decision to use a simple user interface that was new to the user, rather than something they would have used before, was made because of Emotions by Accident and the biases that users and designers might hold toward interfaces they have been using for a long time. That bias is a metric that could add more detail to the study, and it is something I have kept in mind for the future scope.

2.1 Metrics of Testing

As discussed above, the metrics were decided based on learnings from similar studies done in the physical space, particularly targeting the essence of designers having a unique way of looking at things. Translating these physical factors into the digital space led us to the following metrics:

1. Color

2. Font

3. Content and Information Architecture

4. Icons and Graphics

5. Features

6. Miscellaneous

 

The factors were chosen keeping in mind how design practices work and how we as designers focus on certain sections to improve our designs. Colors, fonts, icons, and graphics were chosen because of how current design systems are made: when they are updated, a major focus is put on changing these to improve user experience. Are they actually important? That is something we aim to learn. These four metrics are design-heavy and help us compare people who are new to designing with people who have been in the industry for a while, and how their thinking processes change with time, teaching us more about the industry, its practices, and its impact on the people involved in creation.

Other factors, like features, content, and information architecture, were chosen as metrics to set context for the product and how it impacts the way users would use it; much of the reviewed literature talks about how users perceive products differently, which influences their usage, hence the importance of these metrics. Miscellaneous was added later, during the interviews, on realizing that each participant might raise something that does not fit any other metric.

2.2 Creation of Screens

A couple of simple screens were created keeping a few things in mind:

  1. The users should need as little context as possible on the screens, to avoid focusing on what the product is rather than on the design.

  2. There should be multiple errors (well, everything could be an error from someone’s perspective; that is the aim of the study), ranging from simple-to-find to hard-to-scan errors.

  3. Possible errors should also cover all the metrics defined.


The application created was an information portal for adolescents, who at their age experience changes in many things, from themselves to the world around them. The application would serve access to information about such topics: one article about each topic every day, with the choice to move across dates to see the article for a specific day. This context was given to the interviewees before they moved into their interview. The two screens created were:

  1. The first screen was category selection, a simple list of categories from which the users would choose one to be informed about on the day and then move on to the second screen. [Figure 1]

  2. In comparison to the first screen, the second screen was a little more cognitively heavy; this was done on purpose, easing the participant into the process with the simpler screen first. This screen shows an article for the day with its source, a calendar at the bottom to navigate between days, and a heading showing the article’s date. [Figure 2]

2.3 Selecting Who to Interview

The interviews were the most important part of the study, so selecting a group to interview was an important task. In total, 8 people were interviewed, with a base set to cover all our user types based on the learnings about how architects and engineers solve problems differently due to their backgrounds [Khanmohammadi 2019]. Adapting this to our context of creating a product, the user base was divided as follows:

  1. Designers (2)
    People who have an artistic background and are in the industry as designers.

  2. Developers and Product Managers (2)
    People who are not particularly designers and don’t have an artistic background; but they have experience in the industry collaborating with designers on a regular basis.

  3. General Artistic (2)
    These are people who haven’t worked as designers but either have artistic backgrounds or are planning to pivot into designing in the future.

  4. General Non-Artistic (2)
    These are people who haven’t worked as designers and don’t have artistic backgrounds with no inclination towards pivoting to design or experience in collaborating with designers regularly.

The above covers background and industry experience, both decided through interview questions. Whether a background counted as artistic was determined by learning about each participant's history, such as schooling, post-school studies, and activities, as well as asking whether they consider themselves artistically inclined.

The study demographic covered 4 males and 4 females, since it was learned that males and females perceive products differently. The general category was also recruited keeping in mind how age and cognition might affect users and their decisions about using a product, recruiting 2 people on the older side of age (40+) and 2 people considered young (18-25) [Kirk 2012].

All the interviewees were recruited via Twitter: a post described the planned study comparing how designers and users look at interfaces to solve problems, and asked people for their background in the above-mentioned categories. Quite a lot of people reacted with interest; 8 of about 30 interested people were selected, and the remainder would be a great pool to tap into if the study is expanded.

2.4 The Interviews

Interviews were the main aspect of the study; all the learning and research culminated in a set of questions to be asked and a general scope of target points to be noted during the interviews. Before the final interviews, a couple of test interviews were conducted to get an idea of the note-taking and what the approximate data might look like. This also helped me make some final tweaks to how I took notes during the study, focusing more on the metrics than on the exact answers interviewees were giving.

These insights helped shape a set of questions that followed a set pattern covering all parts of the study. The interview was divided into 5 major parts, in the following order:

  1. Introduction: This section focused on giving my introduction and telling the interviewee what the research is about; this was done without giving too much detail, as that might affect their answers. They were also asked whether they were fine with the interview being recorded.

  2. Background Learning: After a small ice-breaking session to make people feel comfortable, interviewees were asked about their schooling and post-schooling background, age, experience level, and how they would categorize themselves when it comes to artistic instinct.

  3. Screen One Testing: Context was given for the first screen [Figure 1], which was shared with the interviewee; they were suggested to open the image on a phone to emulate the experience of using a phone application. A timer was then started, and they were asked to give their thoughts on the interface they were seeing. They were not steered toward any specific thing and were put under no restrictions of time or section; they were told to tell me how they felt, either thinking out loud or after taking their time to gather their thoughts.

    After noting down their thoughts, they were finally asked what they would like changed on this page. Their answers from this section, as well as from the above, were categorized into the metrics and further divided into general remarks, issues, and things they would change.

  4. Screen Two Testing
    The same activity as screen one was repeated for the second screen [Figure 2].

  5. General Questions and Thanks
    This section was created specifically to ask about the study and whether they had any questions or suggestions in general, followed by thanking them again for their time and checking whether they would like to be contacted again in case something else comes up as the study moves forward.

2.5 Data Collection from Interviews

The interviews were fruitful, but their main purpose was to collect valuable data; the factors measured were the following:

  1. Total Time
    Time was recorded for the evaluation of both screens, and the average across both was taken.

  2. Total Number of Issues Found
    The number of issues was noted for both screens and the average was taken.

  3. Average Time for Finding One Issue (Total Time/Total Issues)
    This factor was added for a better comparison across user types. Times and issue counts on their own add little value, but their ratio shows the general time a given user type takes to find one issue.

  4. Like vs Dislike
    This factor was different from the others. It was calculated by noting each mention of a metric (color, font, etc., as listed in Metrics of Testing) along with its context: a positive mention scored 1, a negative mention scored -1, and an unmentioned metric scored 0. Total likes vs dislikes were compared overall, and also per metric, which helped zoom in on which metrics each user type focused on. This factor was especially important for drawing comparisons and understanding each user group and its patterns.
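As a rough illustration, the two derived factors above can be sketched in a few lines of Python. This is a hypothetical sketch, not the study's actual tooling: the names (`METRICS`, `score_remarks`, `avg_time_per_issue`) and the `(metric, sentiment)` remark format are my own assumptions.

```python
# Hypothetical sketch of the Like vs Dislike scoring and the
# time-per-issue factor described above. All names and the remark
# format are illustrative assumptions, not artifacts of the study.

METRICS = ["color", "font", "content", "icons", "features", "misc"]

def score_remarks(remarks):
    """Score one participant: +1 for a positive mention of a metric,
    -1 for a negative mention, 0 if the metric was never mentioned."""
    scores = {m: 0 for m in METRICS}
    for metric, sentiment in remarks:
        scores[metric] = 1 if sentiment == "positive" else -1
    return scores

def avg_time_per_issue(total_time_s, total_issues):
    """Average Time for Finding One Issue = Total Time / Total Issues."""
    return total_time_s / total_issues if total_issues else float("nan")
```

For example, a participant who praised the colors but criticized the icons would score 1 for color, -1 for icons, and 0 for every metric they never mentioned.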

2.6 Analysis

Data sets were created from the interviews, and it was time to get some analysis out of them; key comparisons were set up. Let’s talk about which comparisons were done and why:

  1. Average Time Taken to Find One Issue
    This analysis was added to compare how people look at screens and find issues: how much time they take, and whether an artistic background impacts their average time to find a single issue.

  2. Number of Issues Found
    While the average time for each issue is important to understand the way users think, this analysis was important to understand if people of certain backgrounds are more inclined to find issues on interfaces in comparison to others.

  3. General Like vs Dislike Scale
    This analysis was a general comparison made after adding all the likes and dislikes from people of each background; it was added to see whether people from a certain background are more inclined toward liking or disliking the overall interface, wanting to keep changing it, or being content with what is presented to them.

  4. Category-Based Like vs Dislike Scale
    While the general analysis helped in understanding each user type's satisfaction with the interface, the category-based analysis was added to understand what each user type targets on the interface. The categories compared for like and dislike were Color, Icons and Graphics, Content, Font, and Features.
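If each participant's results are stored as simple records, the group-level comparisons above all reduce to grouping by user type and averaging a field. The record fields and group names below (`"group"`, `"time_per_issue"`, `"net_score"`) are assumed for illustration, not taken from the study's actual data sheets.

```python
# Illustrative group-level aggregation for the four comparisons above.
# Field and group names are assumptions, not the study's real data.

from collections import defaultdict
from statistics import mean

participants = [
    {"group": "designers", "time_per_issue": 12.0, "net_score": -3},
    {"group": "designers", "time_per_issue": 16.0, "net_score": -2},
    {"group": "general_non_artistic", "time_per_issue": 40.0, "net_score": 1},
]

def compare_by_group(records, field):
    """Average a numeric field per user-type group."""
    by_group = defaultdict(list)
    for rec in records:
        by_group[rec["group"]].append(rec[field])
    return {group: mean(values) for group, values in by_group.items()}
```

Running `compare_by_group(participants, "time_per_issue")` yields one average per group, which is the shape of data each of the four comparisons works from.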

  3. Results and Discussion

The analysis produced some key learnings from which patterns can be developed around the different user types. Let’s investigate each analysis described in the methodology.

3.1 Average Time to One Issue Found

Let’s start by looking at the average time taken for different user types to find an issue.

Some inferences derived from this analysis were that people with an artistic background, or an inclination toward artistic things, are faster at finding issues on digital interfaces, with the fastest also working as designers, which makes sense since these people work on similar things daily.

One interesting thing to learn from this was that people with artistic backgrounds are faster at issue-finding than people who work in the industry but lack artistic backgrounds. This suggests that efficient, critical issue-finding comes from your background, but it can also be developed by working in the industry with designers around you.

3.2 Number of Issues Found

The number of issues found on the interface is the next thing we get more insights into.

Key takeaways from this analysis showed that people with design backgrounds are inclined to find the most issues in an interface, but people who work in the industry with designers also begin looking at digital interfaces with an eye for issues. As in the previous analysis, artistic instinct has more impact than collaborating with designers. Could it be that, with time in the industry, you get faster at picking up inferences on digital interfaces?

Moving away from artistic people and the industry: the general audience, which makes up the larger user base of most applications, does not look at applications to critique them, and even when asked to find issues they are not keen on it.

3.3 Comparing Likes and Dislikes

Mentioning a metric such as colors, fonts, icons, content, or features positively skews the scale toward like, while a negative mention pushes it toward dislike; this aids in understanding the perspective people from diverse backgrounds bring when they look at an interface, and whether these insights translate into any patterns.

When it comes to the general feeling each user type has when looking at digital interfaces, the learning was that people with design backgrounds, as well as people who have worked in the industry either as designers or collaborating with designers, generally look at things through critical glasses.

Much like the number of issues and the efficiency of finding them, a person's critical nature is more impacted by their background or nature being artistic than by being in the industry collaborating with designers. The general audience, which lacks an artistic background, is mostly unfazed by the digital experience.

3.4 Comparing Likes and Dislikes for Specific Metrics

While the general like vs dislike scale gave us many good insights into how specific user bases perceive digital interfaces at a high level, a better understanding of what exactly they focus on, and how they view those specific sections, adds more value.

People who are designers by trade and background look at most things in digital spaces with critical views, and extremely critically at icons, graphics, colors, and content. However, they did not investigate the features of the application; could this mean that designers in the industry are more focused on making the application a better experience than on providing features that add value to the users?

An interesting insight was that people who have design backgrounds but haven’t worked in the industry aren’t as critical; while they still look at icons, colors, and graphics, their responses were mostly about liking them rather than disliking and wanting to improve them. Does working in the industry tune us designers into thinking a certain way about designs, liking a certain set approved by other designers? Do they become less open to other options and more intent on changing things into their own specific set of ways? This could be true, as the developers in this study also look at content, icons, and graphics critically; these are people who work on such products and with designers regularly.

While designers, people with design backgrounds, and people who collaborate with designers focus on color, icons, graphics, and content, general users, who make up most of an application's user base, are unfazed by most of the above. General users look more at what value the application provides than at how it provides it; they showed a pattern of being critical about content and asking questions about features, with suggestions for both. This shows a disparity between how designers look at interfaces to solve problems and how the general user base of applications does.

Funnily enough, none of the user groups commented on fonts. An interesting expansion would be to compare several types of fonts, rather than a generic OS font, to learn how much importance fonts carry in how users perceive user interfaces as products.

  4. Conclusion

This study examined how people from diverse backgrounds and experiences evaluate user interfaces. Interviews were conducted with people in four distinct groups: designers, developers & product managers, general artistic, and general non-artistic. The analysis of the interviews about a hypothetical application pointed out five major things:

  1. Artistic instincts have more impact on how you view interfaces than industry exposure does.

  2. But the industry also has a significant impact on people looking at interfaces with a critical eye.

  3. Even though people who work in the industry, or who have artistic instincts, are critical of digital interfaces, this does not correlate well with how the general audience looks at and feels about interfaces; most of the time, they are content with how things look and feel.

  4. Designers focus more on look and feel, while the general audience focuses more on how the product provides value and what value it provides.

  5. The industry makes people with designer instincts more critical and, in a sense, more robotic about how design works, expecting a specific norm, while people with designer instincts who haven’t worked in the industry are more open to unique design styles.

Designers should be more aware of users' needs and expectations when creating digital products. And not just designers: the industry should also reshape its practices so that it does not convert designers into people who look only for changes in the visual sense, rather than for ways to add value to the digital experience for the general audience. Design practices should be shaped to be more open to adding features and working on what content is presented, rather than only how it is presented.

There is future scope to expand this study to a larger audience base, learning which demographics impact how users investigate interfaces, while also learning more about the industry and why the people in it come to look at things in a specific way that differs from the users they design interfaces for. What practices could be changed or adapted to align better with user needs?

  5. References

  • Vicente, K. J. (2004). Why is technology so out of control? Walking around half-blind. In The human factor: Revolutionizing the way we live with technology (pp. 31–33). Vintage Canada.

  • Cross, N. (2023). Design thinking: Understanding how designers think and work. Bloomsbury Publishing.

  • Norman, D. A., & Ortony, A. (2003, November). Designers and users: Two perspectives on emotion and design. In Symposium on foundations of interaction design (pp. 1-13). Interaction Design Institute.

  • Najafi, E., Khanmohammadi, M. A., & Smith, K. W. (2019). Architects and Engineers Differences: A comparison between problem solving performances of architects and engineers in the ideation phase of an analogy-based design. Int. J. Architect. Eng. Urban Plan, 29(1), 15-25.

  • Kirk, C. P., Chiagouris, L., & Gopalakrishna, P. (2012). Some people just want to read: The roles of age, interactivity, and perceived usefulness of print in the consumption of digital information products. Journal of retailing and consumer services, 19(1), 168-178.

Links

  1. Original Research Planning [HERE]

  2. Tweet to find people for the research [HERE]

  3. Interview Script [HERE]

  4. Interview Videos [HERE]

  5. Interview learnings [HERE]