Methods for analyzing feedback: Canada.ca design
On this page
- Who should analyze feedback
- How much feedback is needed
- Research questions shape analysis
- Manual analysis methods
- Tools for analysis
- Machine learning pilot
- Include other supporting data sources
Who should analyze feedback
Feedback should be analyzed by people who know the subject well and who are adept at spotting patterns and themes in data.
It's best to have at least one analyst who is bilingual, so comments in both official languages can be reviewed.
If more than one person is sharing the task of reading feedback, having a shared understanding of the issues (and how you will group feedback) is very important.
It’s good to get into the habit of looking at user feedback regularly to identify any emerging or persistent issues affecting task success.
How much feedback is needed
There is no magic number for how many comments you need.
With feedback, you are looking for enough comments to sufficiently describe an issue or answer a research question. There is a point of diminishing returns when collecting more feedback does not lead to additional insights. This is called “saturation”.
If feedback shows that something is broken, you don’t need hundreds of comments to determine if you should take action.
When identifying issues, don’t rely on volume of feedback alone to prioritize improvements. Feedback submitted may be from people who face cultural, linguistic, geographical, disability, technological, socioeconomic, or other barriers to uptake.
You can seek confirmation of feedback insights using other data sources such as: web analytics, call volumes, social media trends, and GC Task Success Survey results.
While it is best to read the full dataset of feedback, sampling a smaller set of comments can still reveal trends when you receive more feedback than you can read. If the volume remains more than you can manage, it is best to remove the feedback tool until you have completed your updates.
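As a sketch of how sampling might work in practice, assuming comments have already been exported to a list of strings (the function name, sample size, and seed are illustrative, not part of any official tooling):

```python
import random

def sample_feedback(comments, sample_size=200, seed=1):
    """Draw a reproducible random sample of comments to review.

    If the dataset is smaller than the requested sample size,
    return all comments instead of sampling.
    """
    if len(comments) <= sample_size:
        return list(comments)
    # A fixed seed makes the sample reproducible across reviewers
    return random.Random(seed).sample(comments, sample_size)
```

A fixed seed keeps the sample stable, so two analysts reviewing "the sample" are looking at the same comments.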
Research questions shape analysis
Start by thinking in terms of research questions and who you will be sharing your findings with. Doing this in advance can help clarify how to group feedback when doing manual analysis.
Common research questions:
- What are the most common issues being reported?
- Are there specific reasons for failure or specific suggestions to improve the experience?
- What pages are receiving the most feedback?
- Has feedback increased or decreased after a page update?
- What types of issues were the most common (findability, comprehension, technical)?
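Several of these questions reduce to simple counts over the exported feedback. A minimal sketch, assuming each exported comment is a record with hypothetical `page` and `date` fields (field names will depend on your export format):

```python
from collections import Counter
from datetime import date

def feedback_per_page(records):
    """Count comments per page to find which pages receive the most feedback."""
    return Counter(r["page"] for r in records)

def before_after(records, update_date):
    """Compare comment volume before and after a page update date."""
    before = sum(1 for r in records if r["date"] < update_date)
    after = len(records) - before
    return before, after
```

Counts like these answer the "most feedback" and "increased or decreased" questions; the "what types of issues" question requires tagging, described below.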
Manual analysis methods
Grouping feedback with similar issues together using tags is useful for both small and large datasets. It makes analysis more efficient by breaking the data into smaller sets to analyze.
A small dataset may only need a few tags to make sense of the feedback. A large dataset may require two levels of tags to understand specific content issues.
Best practices for choosing how to group and tag feedback
Familiarize yourself with your data
Read through a sample of feedback data and try to spot recurring patterns. Jot them down to get a rough overview of WHAT tasks, topics, or issues people are talking about.
Not every comment will be useful: some will be too unclear to interpret, or entirely about another topic.
Consider tags based on a task or issue
Task-based tags are recommended when analyzing feedback for a group of pages where there are multiple user tasks.
To identify tasks, ask yourself why the user came to the site. What were they trying to do, or what question were they trying to answer?
Issue-based tags may be a better strategy when gathering feedback on a single page, single topic, or where a single task dominates your feedback.
For large datasets you may find a second level of tags is needed to add precision. This can be done at the same time you tag the feedback OR when you are ready to analyze a smaller set of feedback.
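One way to represent two levels of tags is a tag column and a sub-tag column per comment; grouping on the pair then yields the smaller sets to analyze. A sketch with hypothetical field names (your spreadsheet columns may differ):

```python
from collections import defaultdict

def group_by_tags(records):
    """Group tagged comments by (tag, sub-tag) pair.

    Comments without a second-level tag fall under the key (tag, None),
    so first-level-only tagging still works.
    """
    groups = defaultdict(list)
    for r in records:
        groups[(r["tag"], r.get("subtag"))].append(r["comment"])
    return dict(groups)
```

Because the second-level tag is optional, you can add it later, when you are ready to analyze a smaller set in depth.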
Example of some tags used for feedback on vaccine pages
| Tag | User task or issue | Topics included |
| --- | --- | --- |
| Vaccine safety | Is the vaccine safe for me? | Pre-existing conditions, ingredients/allergies, side effects |
| Getting vaccinated | How do I get vaccinated? | Eligibility, when, where, how to register |
| Proof of vaccination | How do I get a copy of my vaccine record? | Vaccine records, provincial apps, federal vaccine proof |
Limit the number of tags being used
Start with broad tags and only include those for which you have multiple examples. Your goal with this first review is to succinctly group recurring topics/issues.
Aim to keep your set of tags under 15 for the task. Limiting the number of tags will help surface the issues that need the most attention.
“Other” is a tag too! Tag one-offs or low-frequency comments as “Other” until there are enough for them to graduate into having a tag of their own.
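The "graduation" rule above can be expressed as a simple frequency threshold: topics with too few examples stay under "Other". A sketch, where the threshold of 3 is an illustrative choice, not a recommendation:

```python
from collections import Counter

def assign_tags(topics, min_count=3):
    """Map each proposed topic to a final tag.

    Topics seen at least min_count times keep their own tag;
    lower-frequency topics are grouped under "Other".
    """
    counts = Counter(topics)
    return [t if counts[t] >= min_count else "Other" for t in topics]
```

Re-running the same rule as more feedback arrives lets low-frequency topics graduate into tags of their own automatically.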
Avoid using overlapping or ambiguous tags
Make sure each tag is clearly differentiated from the others. Your aim is to reduce doubt about which tag a comment should get.
Be prepared to tweak your choice of tags
As you read more of your dataset, review your initial tag choices. Are they clear and unambiguous? Does one tag alone cover the majority of feedback? Do you need to divide it into separate tags?
There’s no one-size-fits-all strategy. As you collect more feedback, you may find you need to adjust your choice of tags.
Document and test your tagging strategy
Document your choice of tags with examples. This is especially useful if more than one person will share the responsibility for reviewing feedback.
Ask others to review your tag choices to make sure that the tags are clear to other people. This is especially critical if more than one person will be helping to analyze feedback. Agreeing on a common set of tags at the beginning (and when adjusting tags) avoids feedback being tagged inconsistently between people.
What to avoid when tagging feedback
Mixing types of tags
If you want additional ways to analyze your dataset, it's best to create new columns in your spreadsheet to capture different facets, for example a status or a particular sub-issue.
Trying to be overly precise
The purpose of tagging is to help you identify user priorities and group feedback into smaller datasets to analyze. A “good enough” approach to defining and assigning tags will do.
If you have more feedback than you can manage to review, classify and analyze, adjust your strategy: choose a specific task or time frame to focus on.
Tools for analysis
For small datasets, any spreadsheet software should be adequate to group and sort feedback (Excel, Google Sheets, etc.).
For larger datasets, it’s helpful to use a tool that has more advanced functionality to sort, filter, and tag. If you have a data science specialist, they may prefer or have access to more specialized tools.
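For datasets too large to handle comfortably in a spreadsheet, the same grouping can be done in a few lines of a scripting language. A sketch that reproduces a simple pivot (count of comments per tag) from a CSV export; the column name is an assumption about your export format:

```python
import csv
from collections import Counter

def tag_counts(csv_path, tag_column="tag"):
    """Count comments per tag from a CSV export of tagged feedback."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return Counter(row[tag_column] for row in csv.DictReader(f))
```

The result is equivalent to a spreadsheet pivot table counting rows per tag, but it scales to files a spreadsheet would struggle to open.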
- Download a tagging strategy template (Excel, 61KB)
- Download a template to analyze page feedback and/or GC TSS feedback (Google Sheets)
Machine learning pilot
For institutions that receive high volumes of feedback, we are piloting alternative methods to access and analyze feedback using data science tools.
Contact the Digital Transformation Office if you are receiving more feedback than you can manage through manual analysis.
Email: cds.dto-btn.snc@servicecanada.gc.ca
Include other supporting data sources
Include other data sources in your reporting to build a more complete picture, confirm your insights, or add urgency from sources such as:
- GC Task Success Survey results and feedback
- analytics
- call centre volumes
- search trends
- usability study results
- questions received through social media