Responsible AI Survey
Methods
Screener design
Recruitment
Survey design
Statistical analysis
Qualitative data coding & analysis
Existing research audit
Presentation of findings
Stakeholders
Office of Ethical and Humane Use
Product management
Research operations
Tools
SurveyMonkey (survey design and statistical analysis)
Miro (qualitative data analysis)
Time Frame
7 weeks (in parallel with other projects)
Product Overview
Salesforce offers “responsible AI” features within its AI products to help users prevent bias. For example, Proxy Variable Detection “indicates that one or more variables are highly correlated to a sensitive variable”: training a model on customers’ addresses could lead to racially biased outcomes, because address can act as a proxy for race.
These features serve a worthy cause, but because they lack usage instrumentation, it was unclear how often they were used or how much value they provided. The product org requested user research to guide the next steps in this product space and in how it is messaged and advertised.
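To make the concept concrete, here is a minimal sketch of a proxy check (not Salesforce’s actual implementation): it flags any model input whose correlation with a sensitive attribute exceeds a threshold. The column names and data are hypothetical.

```python
# Illustrative proxy-variable check (NOT Salesforce's implementation).
# Flags features that are highly correlated with a sensitive attribute.
import pandas as pd

def find_proxy_variables(df: pd.DataFrame, sensitive_col: str, threshold: float = 0.7) -> list[str]:
    """Return features whose absolute Pearson correlation with the sensitive column exceeds the threshold."""
    proxies = []
    for col in df.columns:
        if col == sensitive_col:
            continue
        corr = df[col].corr(df[sensitive_col])
        if abs(corr) >= threshold:
            proxies.append(col)
    return proxies

# Toy data: the address-derived feature tracks the protected group closely
# (correlation of roughly 0.77 here), so it is flagged; tenure is not.
data = pd.DataFrame({
    "protected_group":  [0, 0, 0, 1, 1, 1, 0, 1],
    "zip_code_segment": [0, 0, 1, 1, 1, 1, 0, 1],
    "tenure_months":    [12, 30, 7, 22, 15, 40, 9, 18],
})
print(find_proxy_variables(data, "protected_group"))  # ['zip_code_segment']
```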
As a result of my study…
New sales approach: My main recommendations “drove planning of new [AI] enablement materials for the Sales org.” Data on customer usage and perceived value kicked off more sales investment.
Spurred further research: The study illuminated many gaps in our knowledge of users and led to the planning of further research initiatives.
Selected Challenges & Adaptations
Challenge 1: Recruitment
A survey relies on a large sample, and initially I struggled to recruit more than 100 respondents. Few Salesforce admins have experience updating and managing AI models, so screening respondents was crucial.
Adaptation: Working with my research operations partner, I expanded beyond our registered database of research participants and posted information on Salesforce message boards. We also collaborated with Product Managers to post in Trailhead communities that they moderated (Trailhead is Salesforce’s free online learning platform). In addition, I improved the response rate by cutting 25% of the survey’s questions.
With these approaches, we achieved n = 257, a great result for an exploratory survey.
Challenge 2: Rapid analysis turnaround
The study’s report was needed a week after our survey closed, so I had to expedite my analysis process. (I was working on other studies in parallel.)
Adaptation: First, I began analyzing results as they came in, to get an early sense of how the presentation would be framed. Second, I presented only high-level findings and relegated the more detailed aspects to the appendix. A mentor advised me that a researcher is only as good as their opinion, so I prioritized high-level recommendations that could inform my stakeholders’ in-flight decisions.
Deep Dive
Context & Methodology
My stakeholders and I were unsure which specific cohorts were using these features, so I ran a survey to collect broad, cross-cutting, and primarily quantitative data.
Recruitment was a challenge, so I leveraged Salesforce user communities and added a screener to select for recruits with Salesforce AI experience.
Findings & Recommendations
I had only a week to analyze the data, so I focused on high-level findings:
There was a gap between respondents’ perceived importance of responsible AI principles (e.g., fairness and accuracy) and their knowledge of how to achieve them (illustrated in the analysis sketch after this list).
Recommendation: Wider messaging will attract more users, and product value will keep them around.
Users of the tools regarded them as very valuable.
Recommendation: There is room to implement responsible AI across more Salesforce tools.
Very few respondents used our responsible AI tools, and most were not familiar with responsible AI concepts and ideas.
Recommendation: Further research is needed to identify how users find out about these products.
Respondents located in the EU were more aware of responsible AI.
Recommendation: Develop EU-specific enablement material to capitalize on this growing user base.
Recommendation: Research potential EU-specific tools.
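For readers curious how the importance-versus-knowledge gap could be tested outside SurveyMonkey, here is a hedged sketch with hypothetical Likert-scale columns and a paired non-parametric test; it is illustrative only, not the analysis I ran.

```python
# Illustrative check of the importance-vs-knowledge gap on 1-5 Likert ratings.
# Column names and data are hypothetical; the real analysis used SurveyMonkey.
import pandas as pd
from scipy.stats import wilcoxon

responses = pd.DataFrame({
    "importance_fairness": [5, 4, 5, 5, 4, 5, 3, 5],  # perceived importance
    "knowledge_fairness":  [2, 3, 2, 4, 1, 3, 2, 2],  # self-rated knowledge
})

# Paired, non-parametric comparison of each respondent's two ratings.
stat, p_value = wilcoxon(responses["importance_fairness"],
                         responses["knowledge_fairness"])
gap = (responses["importance_fairness"] - responses["knowledge_fairness"]).mean()
print(f"mean importance-knowledge gap = {gap:.2f}, p = {p_value:.3f}")
```

A similar approach (a chi-square test of independence) would apply to comparing awareness across EU and non-EU respondents.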
Impact
My main recommendations “drove planning of new [AI] enablement materials for the Sales org,” per the Principal Architect of the Office of Ethical and Humane Use. Uncovering the importance-knowledge gap, along with the fact that existing users find the tools valuable, was enough to kick off more sales investment.
This study also “identified new areas that needed to be researched” by illuminating many gaps in our knowledge of users, and it led to the planning of further research initiatives.