
Tools I Used

Qualtrics

Miro

Duration

August - December 2020

What I did

Research Planning and Participant Recruitment

User Interviews

Personas

Empathy Maps

Storyboards

Think Alouds

Heuristic Evaluations

Team Members

Kaely Hall

Kyle Kohlheyer

Jiaxi Yang


Overview

Here, I’ll introduce the scope of this project. Keep scrolling to read a Deep Dive into the details!

Problem Space

As part of one of my core classes — Psychological Research Methods for HCI — in the Georgia Tech MS HCI program, my team was tasked with a semester-long user-centered design project. I was excited that my team chose to “help people create more butterfly-, hummingbird-, and bee-friendly gardens,” because I love the great outdoors, but gardening in particular was a novel problem space for me.

Our literature review revealed that pollinator decline is a serious problem in the US, and that increasing pollinator numbers and diversity in cities presents an opportunity to benefit the ecosystem as a whole via a spillover effect. Therefore, we decided to focus on helping urban gardeners make their gardens more pollinator-friendly.

Solution

Our final high-fidelity prototype included several features, some of which are shown below. Keep scrolling to take a look at our overall process!

Custom Plant Search

  • Search for plants that meet your maintenance, water, and sunlight needs.

  • Encourages users to choose plants that are pollinator-friendly.

Social Gardening

  • Share gardening pictures and progress updates with people in your area.

  • Get inspiration and information about your local environment.

Local Pollinator Heat Map

  • See how your local environment is benefitting from community gardens.

  • Track the growth of pollinators in your area.

Personal Pollinator Tracking

  • Track pollinators in your garden.

  • Get analytics about how pollinator levels change season by season.


Deep Dive

Research

Understanding the problem

First, my team undertook a literature review and exploratory interviews. I interviewed 3 urban gardeners to understand their gardening goals, setting, and context. In particular, I aimed to uncover their relationship to pollinators.

Who are our users?

Our literature review pointed us toward urban gardeners: increasing pollinator numbers and diversity in cities can benefit the ecosystem as a whole via a spillover effect, so we focused on helping urban gardeners make their gardens more pollinator-friendly.

What is their problem?

Pollinator decline threatens our ecosystem as a whole, so our users’ problem is partially a global problem. But, in addition, our exploratory interviews found that pollinators also offer very local benefits. Many gardeners are invested in their local environment (e.g. by planting native plants), and pollinators are beneficial to plant growth and prosperity. Furthermore, many gardeners simply enjoy seeing pollinators out and about!

Our goal was to encourage gardeners to make their gardens more pollinator-friendly, and our handful of exploratory interviews hinted that a myriad of factors affected gardeners’ investment in pollinators. So, now that we’d gotten a crash course in the problem space, we geared up to dig into the problem in a more rigorous fashion.


Discovering User Needs

Surveys

Next, my team decided to use a survey because 1) it is cheap in terms of time and 2) with a relatively large sample size, its results generalize, which would supplement our planned second round of semi-structured interviews. By combining the two methods, we could make inferences about the population of urban gardeners more reliably than with interviews alone.

The survey addressed the following topics, which we identified as a result of our pilot interviews:

  • Gardening information-seeking behaviors

  • Time commitment to gardening

  • Types of plants planted and priorities when choosing new plants

  • Feelings towards pollinators

  • Willingness to spend resources on curating a pollinator-friendly garden

We mainly gathered survey responses by posting on online gardening communities, so we elicited responses from all over the world. In all, we had 250 respondents to our survey, one of the highest marks ever for a survey in our Master’s program. We analyzed our data quantitatively — a few of the most important graphs can be seen below.

Not shown are the results for gardeners’ feelings towards pollinators because nearly all of our respondents were extremely enthusiastic about birds, bees, and butterflies.

[Graphs: key survey results]

Interviews

I led the recruitment of participants for our semi-structured interviews, which drew from our pool of survey respondents (so none of our group members had personal relationships with the interviewees). I reached out to and interviewed gardeners from across the globe in an effort to get a big-picture view of our user base. It was extremely rewarding to talk to such a diverse group of users, especially since they contributed to our project out of their own generosity.

Similar to the survey, we picked several topics to dig into:

  1. Interest and thoughts on gardening in general

  2. Process and motivations for gardening

  3. General pollinator knowledge

  4. Interest in engaging with pollinators

  5. Blockers to starting a pollinator garden or making an existing garden more pollinator-friendly

My team used an affinity map to analyze and synthesize our data, which provided several benefits:

  1. The entire team was exposed to the interview data, not just the interviewer.

  2. It allowed bottom-up synthesis, since we wanted to remove our own biases from the process as much as possible.

As seen below (click image to enlarge), we came up with many groupings of data, a plethora of design ideas, and a small group of key takeaways (listed below).

Research Takeaways

[Affinity map: groupings, design ideas, and key takeaways]

Research Artifacts — Personas and Empathy Maps

Based on our research takeaways, my teammate Jiaxi and I created personas that described the essential attributes of our users. Jiaxi completed the personas, and I created empathy maps to pair with each persona. We would use these to inform the creation of our prototypes and to keep us grounded in our users’ perspectives throughout the rest of our project.


Design and Evaluation

Sketches and Wireframes

After concluding our formative research, my teammates Kyle and Jiaxi set out to design our first sketches and wireframes. Seen below, their first attempts were relatively high fidelity and set us up for successful summative research.

Pollinator Heat Map and Local Gardens

Plant Search

AR Garden Planning

 
 

Evaluation, Stage 1

Next, I helped design and lead our user feedback sessions. We chose task-based think alouds as our method of choice, as this was our first time putting our prototypes in front of users. Our goal was to prioritize direct user feedback, especially at moments of uncertainty or confusion. Again, I recruited our participants, and we gave them 3 tasks to complete, providing guidance as appropriate.

Across multiple user feedback sessions, I acted as both our facilitator — leading the user through the prototype and answering questions — and the notetaker.

We also performed an intra-team accessibility review of our prototypes. Our team dedicated a few hours to painstakingly uncovering potential accessibility concerns. Due to time constraints, this was all we could dedicate to the process, but we made the most of the time we had.

Overall, these two methods — think alouds and our own accessibility review — rapidly identified much of the low-hanging fruit and proved to be highly efficient ways of improving our prototype.

Post-think-aloud questions

High-Fidelity Prototype

Next, Kyle and Jiaxi updated our prototype based on our user feedback. They incorporated suggested changes, improved usability and accessibility, and upgraded the overall look and feel.

Interact with the prototype here, or scroll through the images below (click to enlarge).

Evaluation, Stage 2

In our last stage of evaluations, we used both expert and user-based testing to conduct a more rigorous review of our final prototype.

Expert Heuristic Evaluations

Our primary goal for our expert evaluations was to get outside perspectives on our prototype, which provided several benefits. First, experts with fresh eyes identified obvious errors that we and our users had missed in previous feedback sessions. Second, we could guide the experts to specific areas of interest — the prototype’s structure (e.g. interaction design and information architecture) and skeleton (e.g. design of information, interface, and navigation).

Heuristic evaluations allow the expert to explore the interface with the heuristics to guide their judgement. Exploration was important to us, as was the prototype’s holistic interaction and navigation — heuristics serve these needs better than a cognitive walkthrough or other methods.

We used Nielsen’s 10 Usability Heuristics, as they were the set of heuristics most familiar to our experts. Sample data is shown below.

User-based Testing

The user-based testing also helped inform and validate scope, and it gave us better insight into the final surface-level design.

We chose a moderated, task-based evaluation to allow comparisons of multiple participants’ experiences. Users spoke their thoughts and ideas aloud, which yielded rich qualitative data, but, most importantly, we recorded the completion time for each task.

We used this data, along with post-task System Usability Scale (SUS) surveys, to quantitatively compare our system to industry usability benchmarks. Sample data is shown below.
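As an aside, here is a minimal sketch of how the standard SUS calculation turns each participant’s ten 1–5 item responses into a single 0–100 score that can be compared against the commonly cited industry average of roughly 68. The responses below are hypothetical examples, not our study data.

```python
# Minimal sketch of standard SUS scoring (hypothetical responses, not our study data).
def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 SUS item responses into a single 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, answer in enumerate(responses, start=1):
        if i % 2 == 1:
            total += answer - 1   # odd-numbered items are positively worded
        else:
            total += 5 - answer   # even-numbered items are negatively worded
    return total * 2.5

# Example: average a few hypothetical participants and compare to the ~68 benchmark.
participants = [
    [4, 2, 5, 1, 4, 2, 4, 2, 5, 1],
    [3, 3, 4, 2, 4, 3, 3, 2, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Mean SUS: {sum(scores) / len(scores):.1f} (industry average is roughly 68)")
```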

Key Takeaways

Our research gathered a lot of helpful feedback from both experts and users. Our most important takeaways include the following:

  • User metadata is not communicated clearly, including confusing graph axes. Investment in improving data visualization is necessary.

  • Pollinator Score — a crucial feature, as it condenses information into one metric — is not communicated consistently. “Is this score this garden’s overall score? Or is it relative to mine?”

  • Iconography — including the map’s menu icon — does not afford its purpose well

  • The search parameters bar — which enables custom searches — is not well afforded, i.e. it’s not clear that it swipes from side to side and is clickable.

  • The tutorial moves too quickly, and users have no control over its progression.

  • Text is too small in some places (can be hard to test this when developing a mobile app on a desktop).

  • Navigation is confusing — “I want a menu where I can get anywhere in 2 clicks”


Reflection

Overall, it is clear that our project still requires a lot of work before it would be ready for real development. To this end, my #1 takeaway from this project was the absolute importance of tying research goals and questions to the concrete steps you will take to improve your project. In other words, since research needs to inform design, research must be planned to ensure it produces actionable information for designers.

My team went through two high-fidelity iterations of this project, but it is still a long way from satisfactorily meeting our user requirements. This indicates to me that our research was not providing the right answers (and possibly not asking the right questions).

Our methods themselves — surveys, interviews, heuristic evaluations, usability testing — were sound, and performing them was fantastic experience; the gap was in aiming them at actionable design decisions. So, in the future, I will always keep an eye on the finish line.

But, this is the wonderful thing about school! I got the chance to develop research plans, choose methods, and design questions. The fact that it didn’t go perfectly is the point! I learned so much about how to measure success as a UX Researcher, and I can’t wait to apply these lessons to my next project.