Sago AI Policy

1. Purpose and Scope

The purpose of this AI policy is to outline Sago’s principles, values, and guidelines for developing and integrating AI systems into our digital qualitative and quantitative platforms. We are optimistic about the opportunities AI presents for the market research industry and are moving quickly to adopt AI as a foundational technology for Sago’s research solutions and services. We believe it will enhance and transform our current processes in ways we are only beginning to imagine. Because Sago is embracing this emerging technology as it has embraced others before it, we believe it is essential to have an AI policy that ensures the responsible, ethical, and secure development and use of AI technologies.

2. Guiding Principles for AI Development

Sago is committed to the following guiding principles for AI development:

2.1 Ethical Considerations

Sago will develop and use AI systems that prioritize fairness and inclusivity. We will strive to minimize biases and ensure that our AI solutions do not discriminate against any individual or group.

2.2 Privacy and Security

Sago will prioritize the privacy and security of user data collected and used by our AI systems. We will adhere to relevant data protection laws and regulations and implement comprehensive security measures to protect user data.

2.3 Transparency and Explainability

Sago is committed to being transparent about the development and use of our AI systems. We will provide clear and accessible information about how our AI technologies work and ensure that their decision-making processes can be understood and explained to users and stakeholders.

2.4 Accountability

Sago will take responsibility for the AI systems we develop and use, and will ensure that appropriate human oversight is in place to monitor their performance, address any issues, and make the adjustments needed to improve their effectiveness and alignment with our guiding principles. We will continue to build on our existing infrastructure, focusing on targeted areas of our business where AI can simplify and improve our output. We will find ways to test, learn, and embed enhanced capability through a ‘fail fast, fail forward’ approach.

3. Operationalizing the Principles

3.1 Data Collection and Use

Sago will collect and use data responsibly, in compliance with applicable laws and regulations, and with respect for user privacy. We will ensure that data used to train and test our AI systems is representative of the target population, and we will actively work to minimize biases in data collection and processing.

3.2 Algorithmic Decision-Making

Sago will design and implement AI algorithms that prioritize fairness, transparency, and explainability. We will regularly assess the performance of our algorithms to identify and address any potential biases or unintended consequences, and we will provide users with clear information about the factors that influence AI-driven decisions.

3.3 Human Oversight

Sago will maintain appropriate oversight of our AI systems to ensure that they align with our guiding principles and values. We will involve human experts in the development, deployment, and monitoring of AI systems, and provide them with the necessary tools and training to effectively oversee AI-driven processes and decisions.

3.4 Partnerships

Sago regularly partners with other organizations to develop new and innovative AI-enabled solutions and expand our capabilities. Partners undergo a rigorous evaluation process, including in-depth data security reviews, to ensure they uphold the same principles we stand behind.

3.5 AI Fraud Detection

We stand by our recruiting. As a condition of participation, our respondents agree to answer in their own words. If they do not, they will not receive their incentive and will be excluded from participating in any future research.

4. Ongoing Evaluation and Communication

Sago is committed to regularly evaluating and refining this AI policy to ensure that it remains current and effective in guiding our AI practices. We will maintain ongoing communication about our AI practices with internal and external stakeholders.

5. Questions on Compliance?

Contact us: [email protected]

Sago AI Policy on Managing AI Fraud

In qualitative research, respondent fraud has always been a concern, and the industry has worked hard to prevent it. Every researcher’s nightmare is a participant who falsely claims to have used a product, or who claims to be someone they are not, just to receive an incentive.

Clients count on Sago to provide genuine, credible respondents because they have important decisions to make based on their input. The challenge of AI-assisted responses is no different. All indications so far are that this affects an extremely small percentage of cases: we have identified only a handful among the tens of thousands of respondents we recruit on a regular basis.