Content guidance for AI help applications: Canada.ca design

Follow this content guidance to create a consistent experience for Canada.ca visitors.

Topic-specific AI applications

While you can limit a chat application to a certain topic, keep the Canada.ca vision in mind as you experiment and design. The Canada.ca vision is one where users don’t need to know which department handles a specific task. Instead, they should be able to find the information they need seamlessly, regardless of departmental boundaries.

People expect Canada.ca to function as a unified site. If an AI application is limited to a specific topic, its invitation button or link should clearly indicate that topic. This ensures users understand they will only find information related to that specific topic.

Notices, transparency and accountability

AI help applications must be clearly labelled as AI.

Include a notice that addresses privacy, potential mistakes, how it should be used, limitations of the application, and similar issues without blocking access to the chat service.

To ensure transparency and accountability, provide users with clear information about how their data will be used. For example, if the application will use the data they input and the responses generated for training purposes, make that clear to users.

Add a link or a details-summary component directly to the chat solution, similar to the evidence-based approach for privacy statements on Canada.ca. This makes the information available to users without requiring them to read it all before accessing the application.
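As a rough sketch of this pattern, the snippet below renders a collapsible notice that can sit above the chat input. The wording and the render_notice helper are illustrative assumptions, not an official Canada.ca component.

```python
# A minimal sketch (not an official Canada.ca component) of rendering a
# collapsible details-summary notice above the chat input.

def render_notice(summary: str, body_paragraphs: list[str]) -> str:
    """Build a <details>/<summary> block so the notice is available
    without forcing users to read it before starting a chat."""
    body = "\n".join(f"  <p>{p}</p>" for p in body_paragraphs)
    return f"<details>\n  <summary>{summary}</summary>\n{body}\n</details>"

notice_html = render_notice(
    "About this AI service",
    [
        "This service uses AI and can make mistakes. Always check your answer.",
        "Do not enter personal information.",
    ],
)
```

Because the notice is collapsed by default, it stays out of the way while remaining one interaction away from the input field.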

Privacy notices must be provided at the time of personal information collection. When personal information is intended to be (or likely to be) collected, the notice should be placed so that it is clearly associated with the area where a user would input information into the AI chat function.

If you create, collect, use, disclose, retain or delete personal information, consult your institutional privacy/ATIP officials when drafting the privacy notice. The notice must conform with section 4.2.20 (1-6) of the TBS Directive on Privacy Practices.

Warning users via a notice that AI makes mistakes is not a replacement for measuring and improving accuracy (see Accuracy). The application must produce accurate responses and always include a citation link.

In consultation with your department’s Legal Services, consider including a legal disclaimer or a detailed liability and indemnification statement.

Learn more about the roles, responsibilities and best practices for protecting users’ privacy and personal information.

Accessibility

You must follow the requirements in the Standard on Web Accessibility.

Beyond that standard, additional considerations can help ensure that your AI help application is accessible.

Accuracy

Ensure you measure the accuracy of your AI application. You should share the results of any accuracy measurement activities with your department’s communication team. Heads of communications are accountable for the accuracy of all communications products and activities. The Directive on the Management of Communications and the Policy on Communications and Federal Identity both outline the requirements to ensure all information provided by a department is accurate. This requirement extends to AI applications.
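One lightweight way to measure accuracy is to run the application against a hand-built test set and score the responses. The sketch below is an illustrative assumption, not a prescribed methodology: the questions, the keyword-based scoring and the canned_answer stand-in are all placeholders for a real evaluation.

```python
# Illustrative accuracy measurement over a hand-built test set; the test
# questions, keyword-based scoring and canned_answer are assumptions.

def measure_accuracy(test_set, answer_fn) -> float:
    """Return the fraction of questions whose answer contains every
    required keyword (a crude stand-in for human review)."""
    correct = 0
    for question, required_keywords in test_set:
        answer = answer_fn(question).lower()
        if all(keyword in answer for keyword in required_keywords):
            correct += 1
    return correct / len(test_set)

# A stand-in answer function; a real run would call the AI application.
def canned_answer(question: str) -> str:
    return "You may be able to renew your passport online."

test_set = [
    ("Can I renew my passport online?", ["renew", "passport"]),
    ("How do I renew my passport?", ["citation"]),  # deliberately fails
]
accuracy = measure_accuracy(test_set, canned_answer)
```

Results from runs like this are what you would share with your department’s communications team.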

In addition to regularly testing your application for accuracy, include a note reminding users that AI can make mistakes and that they should verify the information provided. For example: “AI can make mistakes, always check your answer.” The heading above your citation link(s) could likewise say “Check your answer.”

Languages

Ensure you follow the requirements in the Directive on Official Languages for Communications and Services.

Generative AI models can have differing performance in English and French, and some models are better than others. Departments should undertake testing to ensure that the quality of the tools and outputs meets official language requirements.

To facilitate effective communication in multiple languages, follow this guidance:

Language-specific versions

Language of citations

Terminology and style

In your system prompt, instruct the AI to use official Canadian French terminology and adhere to the style found on Canada.ca for French responses.
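The exact prompt wording is up to each team; the fragment below is only an illustrative assumption of how such an instruction could be appended for French responses (the courriel/email example reflects official Canadian French usage).

```python
# Illustrative system-prompt fragment; the wording is an assumption,
# not prescribed Canada.ca guidance.

FRENCH_STYLE_RULES = (
    "Réponds en français canadien officiel. Utilise la terminologie et le "
    "style employés sur Canada.ca (par exemple « courriel », et non « email »)."
)

def build_system_prompt(base_instructions: str, lang: str) -> str:
    """Append language-specific style rules for French responses."""
    if lang == "fr":
        return f"{base_instructions}\n\n{FRENCH_STYLE_RULES}"
    return base_instructions

fr_prompt = build_system_prompt("You are a Canada.ca help assistant.", "fr")
```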

Translation and support

Visitors to Canada.ca often use their browser to translate the page into their language. Large Language Models (LLMs) are designed to answer in the language of the question. Your team should determine whether to accommodate questions and answers in languages beyond English and French, along with necessary control measures, such as logging translations of the questions and answers into official languages for evaluation and monitoring purposes.

Citations

Citations help users verify the answer and provide a link for the next step. All in-scope answers must include at least one authoritative citation link to the source material. Wherever possible, citations must point to a Government of Canada web page so that people can review the information source for themselves.
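The "at least one authoritative link" rule can also be enforced as a post-processing check before an answer is shown. The domain rule below is an assumption for this sketch; a real implementation would match against a curated allow-list.

```python
# Illustrative post-processing check that an in-scope answer includes at
# least one authoritative citation link; the domain rule is an assumption.
import re

def extract_links(answer: str) -> list[str]:
    return re.findall(r"https?://[^\s)\"'>]+", answer)

def is_authoritative(url: str) -> bool:
    # Prefer Government of Canada pages so users can verify the source.
    return "canada.ca" in url or ".gc.ca" in url

def has_valid_citation(answer: str) -> bool:
    return any(is_authoritative(url) for url in extract_links(answer))
```

An answer that fails this check could be blocked or regenerated rather than shown without a source.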

Ensure the AI is citing the correct page

To illustrate, consider a user whose situation determines which passport renewal form they need. Rather than guessing at a link, the AI should first clarify the situation. Once the situation is clarified, the AI can provide the appropriate link. For instance, it could direct the user to answer the questions on the “Who can renew a passport” page to be led to the correct form for their situation.

Make citations highly visible

Design the chat interface so that citations stand out visually and are easy for users to act on.

Gender-Based Analysis Plus (GBA Plus)

To ensure that your AI help application does not create unintended consequences or negative outcomes for certain community groups, conduct a Gender-Based Analysis Plus. Contact your department or agency’s GBA Plus Centre of Expertise, and refer to the GBA Plus website from Women and Gender Equality Canada for more information on how to conduct a GBA Plus.

Safeguards against harmful or biased outputs

Ensure you test the application for unintended biases and other harmful outputs.

Handling online wizards

An online wizard is a step-by-step guide that helps users complete a task by breaking it into smaller, manageable steps. There are many heavily used Canada.ca wizards like “Find out if you need a visa.” These wizards can be many layers deep with extensive logic and are kept up to date.

Your system prompt should direct the AI service to send users to existing wizard pages rather than trying to ask all the relevant questions itself. Since current AI models are trained to answer questions rather than to ask them, gathering those details should be left to the wizard.
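One way to express this is a routing rule appended to the system prompt. The rule wording below is an assumption for this sketch; only the “Find out if you need a visa” wizard title comes from the guidance above.

```python
# Illustrative system-prompt rule for routing users to existing wizards;
# the rule wording is an assumption for this sketch.

WIZARD_PAGES = [
    "Find out if you need a visa",  # an existing Canada.ca wizard
]

WIZARD_RULE = (
    "When the correct answer depends on the user's specific situation and "
    "an existing Canada.ca wizard covers it, do not ask the clarifying "
    "questions yourself; link the user to the wizard page instead. "
    "Known wizard pages: " + "; ".join(WIZARD_PAGES) + "."
)

def with_wizard_rule(base_prompt: str) -> str:
    """Append the wizard-routing rule to the base system prompt."""
    return f"{base_prompt}\n\n{WIZARD_RULE}"

routed_prompt = with_wizard_rule("You are a Canada.ca help assistant.")
```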

Eventually it may be possible to feed the wizard logic to the AI service so that it can handle the questions and answers.

Answer length

Answers should be concise, simple and clear. This makes it easier for users to understand the answer and reinforces the need to use the citation link to take the next steps.

In the system prompt, encourage the AI not to include more information than is needed. Shorter answers also reduce the risk of hallucination.

Some individuals may need more detailed explanations to fully grasp a topic. If a user requests more information, longer answers can be provided to ensure they receive the context they require. Even when providing more detail, ensure a citation link is provided for additional information and so the user can verify the information given.

Chat IDs for reference

All conversations should have a visible identifier that’s documented in the system. This allows for easy reference if necessary.
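A minimal sketch of this idea, assuming a short uppercase ID format and an in-memory log (both assumptions; a real system would persist turns to durable storage):

```python
# Illustrative sketch of generating a visible conversation identifier and
# recording turns against it; the ID format and log structure are assumptions.
import uuid

def new_chat_id() -> str:
    """Short uppercase ID that users can easily quote when seeking help."""
    return uuid.uuid4().hex[:8].upper()

conversation_log: dict[str, list[tuple[str, str]]] = {}

def log_turn(chat_id: str, role: str, message: str) -> None:
    conversation_log.setdefault(chat_id, []).append((role, message))

chat_id = new_chat_id()
log_turn(chat_id, "user", "How do I renew my passport?")
log_turn(chat_id, "assistant", "You may be able to renew online.")
```

Displaying the ID in the chat window lets a user quote it later when reporting a problem or asking a follow-up question.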

Things to avoid

When designing your AI help application, avoid language associated with live chat, which could lead users to expect a response from a human agent.
