User Satisfaction Plus (USAT+) Survey

The User Satisfaction Plus (USAT+) survey is a unified set of metrics the Product and Marketing division uses twice per fiscal year to track product health across key UX dimensions.

USAT+ is GitLab’s research method for periodically measuring overall product perception and user experience health across key UX dimensions. Grounded in GitLab’s UX Quality Metrics Framework (internal only), USAT+ delivers an overarching view of user experience health across GitLab through a unified survey designed to run twice a year.

Before USAT+, we ran separate periodic surveys to measure overall product satisfaction, usability, and navigation experience. USAT+ consolidates these into a single survey, reducing survey fatigue and operational overhead while expanding coverage to include additional UX indicators.

To effectively leverage USAT+, it’s important to understand both its strengths and limitations.

Satisfaction surveys across GitLab

Two teams across GitLab run separate but related satisfaction surveys. UX Research conducts the User Satisfaction Plus (USAT+) survey and Customer Success conducts the All-Customer Satisfaction survey. This handbook section has more details on how these two satisfaction metrics are different.

What USAT+ is for

Capture the Pulse of User Experience

  • Provide quantitative metrics for overall UX health
  • Support benchmarking against historical data and competitors

Decode the User Experience Across Key Dimensions

  • Break down overall scores into granular UX dimensions to understand the UX landscape
  • Support quantitative analysis of how different UX dimensions correlate and relate to user behavior and business outcomes

Highlight Areas of Attention

  • Identify key UX aspects needing attention, providing context for focused investigation
  • Guide prioritization of research initiatives based on dimensional patterns

What USAT+ is NOT for

Not a Solution Provider

  • USAT+ serves as a directional indicator rather than prescribing solutions
  • USAT+ findings require additional focused investigation to determine appropriate solutions and provide specific design or product recommendations

Not a Standalone Decision Tool

  • USAT+ should not be used as the only input for decisions
  • USAT+ findings should be considered alongside and cannot substitute other data sources such as product analytics, support tickets, direct user feedback, usability testing, and in-depth user research

How the USAT+ survey is run

UX Research determines our USAT+ score for paid users of GitLab.com and self-managed GitLab twice per fiscal year through a survey administered in Qualtrics. The UX Research DRI creates a USAT+ template issue one month before data collection starts. The issue template contains background information, research goals, and instructions for conducting the survey from start to finish. All documents created are stored in an internal-only Google Drive within UX Research.

Questions included in USAT+

USAT+ is composed of standardized rating scales measuring the user experience, with open-text follow-up questions that provide context into the numbers and capture additional feedback. The survey is structured in two main sections.

Section 1: Overall User Satisfaction (USAT)

This section leads with a rating of overall satisfaction with the product (USAT), supplemented by two open-text follow-up questions designed to surface the reasoning behind satisfaction levels and identify improvement areas.

  • “How satisfied are you with GitLab (the product)?” (1 = Very dissatisfied; 2 = Dissatisfied; 3 = Neutral; 4 = Satisfied; 5 = Very satisfied)
  • “Why are you satisfied or dissatisfied with GitLab (the product)?” (free-text field)
  • “How could your satisfaction be improved with GitLab (the product)?” (free-text field)

Note: There are two satisfaction survey efforts across GitLab. UX Research collects the USAT score as part of the USAT+ survey, while Customer Success conducts the All-Customer Satisfaction survey. See this handbook section for details on how these surveys differ.

Section 2: Comprehensive User Experience Assessment

This section contains 20 Likert-type questions measuring specific UX aspects. Questions are presented in randomized order to minimize response bias. Each question uses an agreement response scale (1 = Strongly Disagree; 2 = Disagree; 3 = Neither agree nor disagree; 4 = Agree; 5 = Strongly Agree) and is immediately followed by an optional free-text field allowing respondents to explain their rating: “Please tell us more about why you feel this way.”

UMUX-Lite (SUS-equivalent score)

  • “GitLab’s capabilities meet my requirements.”
  • “GitLab is easy to use.”
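
The SUS-equivalent score is derived from these two items. As a hedged sketch, one published mapping (Lewis, Utesch & Maher, 2013) regresses UMUX-Lite onto SUS; the rescaling below assumes the survey's 5-point agreement scale and is illustrative, not GitLab's documented analysis:

```python
def umux_lite_sus(requirements: int, ease_of_use: int) -> float:
    """Estimate a SUS-equivalent score from the two UMUX-Lite items.

    Each item is rated on the survey's 5-point agreement scale (1-5).
    The rescaling and regression coefficients are assumptions:
      - items are rescaled to a 0-100 UMUX-Lite score
      - the SUS-equivalent uses the Lewis, Utesch & Maher (2013)
        regression: SUS ~= 0.65 * UMUX-Lite + 22.9
    """
    for item in (requirements, ease_of_use):
        if not 1 <= item <= 5:
            raise ValueError("items must be on a 1-5 scale")
    umux_lite = (requirements - 1 + ease_of_use - 1) / 8 * 100
    return 0.65 * umux_lite + 22.9
```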

Additional Questions

  • “It is easy to notice new features in GitLab.”
  • “I can easily locate what I’m looking for when navigating GitLab.”
  • “GitLab is easy to learn.”
  • “I was able to get going with GitLab without someone’s assistance.”
  • “The GitLab interface is visually appealing.”
  • “I do not encounter accessibility issues in GitLab (related to vision, hearing, physical, speech, or cognitive needs).”
  • “Different parts of GitLab work together smoothly.”
  • “GitLab works seamlessly with other tools.”
  • “GitLab enables me to work efficiently.”
  • “GitLab is available when I need to use it.”
  • “GitLab runs without significant wait times.”
  • “GitLab works without errors.”
  • “GitLab’s interface elements respond as intended when I click on them.”
  • “I feel confident using GitLab.”
  • “The GitLab interface is visually overwhelming.”*
  • “There is too much inconsistency in GitLab.”*
  • “GitLab is unnecessarily complex.”*
  • “It takes too many steps to get to where I want in GitLab.”*

Negative-worded question scores are flipped during analysis (so a “1” becomes a “5,” a “2” becomes a “4,” etc.) to ensure all scores align in the same direction, where higher scores indicate better experience.
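
The flipping step can be sketched as follows; the item keys are hypothetical placeholders for illustration, not actual Qualtrics export column names:

```python
# Hypothetical keys for the four negatively worded items.
NEGATIVE_ITEMS = {"visually_overwhelming", "inconsistency",
                  "unnecessarily_complex", "too_many_steps"}

def align_scores(responses: dict[str, int]) -> dict[str, int]:
    """Flip negatively worded items on a 1-5 scale so that higher
    always means a better experience (1 <-> 5, 2 <-> 4, 3 stays 3)."""
    return {item: (6 - score if item in NEGATIVE_ITEMS else score)
            for item, score in responses.items()}
```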

For detailed definitions and interpretation of the metrics measured in USAT+, refer to the UX Quality Metrics Framework.

How USAT+ works

Collection schedule

USAT+ is designed to be administered bi-yearly (twice per fiscal year).

The first collection launched in FY2026 Q1, building on the operational approach from the previous USAT survey while introducing changes including prize draw incentives, an expanded question set, and enhanced email templates. The originally planned FY2026 Q3 collection was skipped due to a need to prioritize other surveys.

The next collection is scheduled for FY2027 Q1, maintaining consistent methodology to establish a reliable baseline before future iterations.

Current workflow

Target participants and recruitment approach

USAT+ targets paid SaaS users (GitLab.com Premium and Ultimate tiers). Sampling is designed to represent our active user base across plan types and user tenures while maintaining sufficient sample sizes for reliable segment analysis. To ensure representative data while managing survey fatigue, USAT+ uses a targeted sampling approach: we aim for 384 total responses, with plan-type proportions matching our user population (±3%).

Survey invitations are sent to paid GitLab.com users across Premium and Ultimate plan types who haven’t been contacted in the previous 12 months. Users are selected using Snowflake queries to ensure we’re targeting active users with recent product experience.

To prevent survey fatigue, the UX Research team coordinates USAT+ timing with Research Operations, Customer Success, and other teams across the organization that conduct large-scale surveys, communicating the data collection schedule at least one month in advance.

Eligible users are invited through email waves of approximately 5,000 users each, scheduled Monday-Friday at 8-9am Eastern Time to maximize visibility. Responses are collected over a four to six week window and incentivized through a prize draw.

Data collection process

Survey invitations are distributed via email through Rally, while the survey itself is hosted on Qualtrics. Detailed operational workflows, recruiting processes, and deployment templates are documented in the USAT+ issue template, maintained by the UX Research team and available internally at GitLab UX Research Project on GitLab.

Generating a list of eligible users

At the beginning of each quarter, the UX Research DRI generates a list of eligible users for distributing the USAT+ survey through the following steps:

  1. Generate a new list of paid GitLab.com users to contact by using the following query in Snowflake. Query for approximately 40,000 user IDs and email addresses. Note: You will need access to Snowflake and SAFE data to query the data tables in this step. If you do not have access to one or both, please create an Access Request issue.
  2. Create a list of users you’ve compiled in step 1 within the USAT+ user Google Sheet template to track contacts for the quarter.
  3. From the list of previously contacted users, remove anyone who was contacted more than 12 months ago; these users are eligible to be contacted again.
  4. Create a copy of the sheet containing the users you will contact this quarter and insert it in the CSM / UXR Survey Participants Google Drive Folder. Contact the Customer Success team so they can avoid contacting the same users in their Customer Satisfaction (CSAT) survey.
  5. Calculate the proportions by plan type for the current quarter. Add these percentages to the user list to help calculate how many users for each plan type you need to invite. The end goal is to achieve a sample breakdown that roughly matches the population’s breakdown (+/- 3%). Note: You will need access to Tableau and SAFE data to view the link in this step. If you do not have access to one or both, please create an Access Request issue.
  6. Create a new project in Rally UXR and upload the list of user contacts to the project.
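
The eligibility filtering in the steps above amounts to excluding anyone contacted within the last 12 months. A minimal sketch, assuming user IDs and last-contact dates have already been pulled from the Snowflake query and the previously contacted sheet:

```python
from datetime import date, timedelta

def filter_eligible(candidates, previously_contacted, today=None):
    """Drop candidates contacted within the last 12 months.

    candidates: iterable of user IDs from the Snowflake query
    previously_contacted: dict mapping user ID -> last contact date
    Uses a 365-day approximation of "12 months" (an assumption).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=365)
    return [uid for uid in candidates
            if previously_contacted.get(uid) is None
            or previously_contacted[uid] < cutoff]
```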

Sending an email wave

  1. Using the percentages you calculated in the percentages tab of the USAT+ user list Google Sheet template, determine how many users for each plan type you need to contact for the wave. If it’s the first wave, use the population proportions. If it’s a subsequent wave, use the proportions you calculate based on the responses so far (see next point).
  2. To calculate your current sample plan proportions, download your survey results from Qualtrics and/or Rally. Calculate the percentage breakdown of plans so far. Then subtract that number from the population percentage and add the result to the population percentage. An example with fake numbers:
  • Population percentage for Ultimate = 73%
  • Percentage of Ultimate plan types after sending wave 1 = 65%
  • Wave 2 percentage for Ultimate: (73% - 65%) + 73% = 81%

In this example, the sample under-represents Ultimate relative to the population, so the next wave over-samples Ultimate to bring the sample within 3% of the population percentage.
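
The wave-correction arithmetic above can be sketched as a small helper; the clamping and renormalization are assumptions for handling edge cases, not part of the documented process:

```python
def next_wave_share(population_pct: float, sample_pct: float) -> float:
    """Over- or under-sample a plan type in the next wave to pull the
    running sample back toward the population proportion:
    next = (population - sample_so_far) + population."""
    return (population_pct - sample_pct) + population_pct

def wave_counts(wave_size, population, sample_so_far):
    """Turn corrected shares into per-plan invite counts for a wave.

    Shares are clamped to [0, 100] and renormalized so counts sum to
    roughly the wave size (assumed handling, for illustration).
    """
    shares = {plan: max(0.0, min(100.0,
                                 next_wave_share(population[plan],
                                                 sample_so_far[plan])))
              for plan in population}
    total = sum(shares.values())
    return {plan: round(wave_size * share / total)
            for plan, share in shares.items()}
```

Using the example figures from the text, `next_wave_share(73, 65)` gives 81, and a 5,000-user wave split between Ultimate and Premium allocates invitations accordingly.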

  3. Waves should be ~5,000 users each. Out of those ~5,000 users, mark the number that fits your percentages for each plan type with the name of the wave you are sending.
  4. In your Rally UXR project, filter out @gitlab.com email addresses, users who are on cooldowns or have opted out of emails, and users contacted for previous USAT+ surveys within the past 12 months.
  5. After filtering out those individuals, select the number of emails for your most recent wave.
  6. Create a new email distribution using the USAT+ survey template email in Rally.
  7. Send the email distribution. Emails are typically scheduled to go out Monday through Friday early in the morning US time (8-9am Eastern Time) to maximize visibility and responses.

Once all email waves have been sent and you’ve hit the sample size goal for the quarter, add the user IDs and email addresses that were used this quarter to the previously contacted sheet, noting the quarter they were contacted. This allows us to avoid contacting the same users too frequently.

Data analysis

Data analysis consists of two components: quantitative analysis and open response coding.

Quantitative Analysis

USAT+ delivers both:

  • Benchmarking scores: We present USAT and SUS scores, comparing current results to historical trends and industry benchmarks. Both are integrated into the USAT+ question set, with USAT measuring overall satisfaction and SUS (calculated from the UMUX-Lite questions) assessing overall usability.
  • Detailed breakdowns: We analyze ratings of each UX aspect covered in the survey, providing overall numbers to track health over time and breakdowns to identify specific areas of strength and concern.

Qualitative Analysis

We analyze open-ended responses at two levels:

  • Overall satisfaction feedback: We code responses to the two main open-ended questions that follow the overall satisfaction rating in the first part of the survey (“Why are you satisfied or dissatisfied with GitLab?” and “How could your satisfaction be improved?”). These responses are coded into high-level themes that highlight recurring user pain points and feedback patterns, connect to specific stages and teams for actionable context, and are shared with product managers and designers for investigation and issue creation. Themes are standardized across all USAT+ collections to enable trend analysis over time.
  • Metric-specific insights: Each of the 20 additional Likert-type questions includes an optional free-text field where respondents can explain their rating. We use these responses to provide additional context for specific metrics, highlighting key strengths and pain points that explain the “why” behind the scores. Note: These free-text fields are optional, and only a limited percentage of users provide detailed justifications. As such, these examples may not represent the full range of user perspectives. For in-depth insights into specific metrics, we recommend targeted outreach to participants who opted in to follow-up research.

Data Quality and Cleaning

To ensure data integrity, we implement quality checks including:

  • Detection of potential bot or fraudulent responses
  • Identification of inconsistencies between satisfaction ratings and open-ended feedback
  • Review of response patterns (for example, straight-lining, completion time)
  • Validation of data processing and calculations through peer review

Any flagged responses are reviewed and removed from analysis as needed to maintain data quality.
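
A minimal sketch of per-respondent checks along these lines, with illustrative thresholds rather than GitLab's actual cutoffs:

```python
def flag_response(ratings: list[int], completion_seconds: float,
                  min_seconds: float = 60.0) -> list[str]:
    """Return quality flags for one respondent's 20 Likert ratings.

    Thresholds are illustrative assumptions. Straight-lining on raw
    scores is suspicious here because four items are negatively
    worded, so a consistent respondent should not give identical raw
    answers throughout.
    """
    flags = []
    if len(set(ratings)) == 1:
        flags.append("straight-lining")
    if completion_seconds < min_seconds:
        flags.append("too-fast")
    return flags
```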

Reporting and insights

Following each data collection, USAT+ results are shared with stakeholders. Reports include:

  • Quantitative metrics and scores for overall and segmented user populations
  • Trend analysis comparing to previous collections
  • Themed verbatim feedback organized by stage and product area
  • Directional recommendations for research focus areas

All reports and analysis documents are stored in the internal UX Research Google Drive (internal).

USAT+ dashboards and maintenance

There are two internal dashboards in Tableau meant for presenting USAT results:

  • USAT Scores: This dashboard shows the number and percentage of responses between those who are satisfied with GitLab the product (ratings of satisfied and very satisfied) and those who are not (ratings of neutral, dissatisfied, and very dissatisfied).
  • USAT Line Chart: This dashboard shows the USAT score on a quarterly basis in order to track our score over time.

These dashboards can also be found in the UX Department Performance Indicators page.

The survey data in the analysis template is connected to Tableau to show data from the current and previous quarters. UX Research is responsible for working with Product Data Insights to update these Tableau dashboards each quarter.

Outreach to USAT+ responders

When filling in the survey, USAT+ respondents can indicate whether they would be open to a follow-up interview. As part of the analysis, the UX Research DRI compiles a list of users who agreed to a follow-up interview (see the Google Sheet template for USAT+ follow-up users).

After the research report is shared out, the UX Research DRI will notify Product Managers, Product Designers, and Customer Success Managers about USAT responders who opted into contact via this USAT+ Responder Outreach issue template. The USAT+ responder outreach workflow is described in more detail here.

Continuous improvement

USAT+ is a new survey that was piloted in FY2026 Q1. As with any new research method, we’re still refining both the survey itself and the operational processes through systematic learning and validation.

Operational refinement

After the first data collection, we observed an unexpected 16-percentage-point increase in satisfaction scores compared to historical USAT data. An investigation revealed that methodology changes (including the introduction of sweepstakes incentives, expanded question set, enhanced email templates, and survey structure modifications) likely contributed to this increase alongside potential product improvements.

Given this finding, we’ve decided to maintain consistent methodology across the next collection cycle (FY2027 Q1) to establish a reliable baseline. This allows us to distinguish genuine shifts in user satisfaction from methodological variations before making further refinements. This includes keeping survey structure and page layout, question wording and order, incentive strategy, email templates and timing, and response collection windows consistent.

In parallel, the UX Research team is actively standardizing and improving USAT+ operations through the USAT+ Process Refinement initiative. This work focuses on standardizing templates for survey administration, analysis, and reporting; enhancing data quality checks and AI/bot detection; optimizing sampling strategies and recruitment workflows; improving stakeholder coordination and insights delivery; and conducting accessibility audits and refining analysis scripts.

For detailed findings and plan, see the FY2026 Q1 score investigation issue (internal only) and USAT+ Process Refinement and Standardization issue (internal only).

Psychometric validation

To explore the psychometric quality of USAT+ and validate whether the survey reflects our intended UX dimensions, we conducted an exploratory factor analysis on the FY2026 Q1 dataset. This analysis suggested the data could be explained by three main dimensions of user experience. While this could potentially support a more streamlined version of USAT+ in the future, we are keeping the full question set for at least one more round of data collection in order to maintain method stability. Future optimization efforts will build on this additional data.

For the full psychometric analysis and detailed question mappings, see the USAT+ Factor Analysis Report (internal).

Follow up questions from stakeholders

For past reports, UX Research has received requests to answer follow-up questions about the USAT+ survey data (for example: What is the breakdown of company size across the survey responses?). When these requests arise, we partner with Product Data Insights for support connecting USAT+ respondents to internal data sources (for example, Snowflake). We create an ad hoc request issue in the Product Data Insights project on GitLab and submit the request to a member of their team.

Questions?

For questions about USAT+ methodology, administration, or results, reach out to the UX Research team in #ux-research.

USAT+ reports

Historical survey reports

USAT

SUS

Navigation

Last modified December 23, 2025: Add USAT+ handbook page (0b6cd9a5)