AI Risk & Governance Policy

Last Updated: February 12, 2026


This AI Risk & Governance Policy describes how Hyzstack Labs Inc. ("Company", "we", "us") manages risks associated with artificial intelligence systems used within the Vector SaaS Platform ("Platform").


1. Purpose

The purpose of this policy is to ensure responsible deployment, monitoring, and governance of AI systems used in the Platform, while maintaining data protection, fairness, transparency, and reliability.


2. AI System Overview

The Platform uses a Retrieval-Augmented Generation (RAG) architecture that combines:

  • Client-provided documents stored in isolated vector databases
  • Intent classification models
  • Large language models for response generation
  • Rate limiting and abuse prevention systems

The Platform does not train foundation models using Client Data.
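
The components listed above can be pictured as a single request path. The following is a minimal, hypothetical sketch; the function and parameter names (handle_request, rate_limiter, classifier, retriever, llm) are illustrative, not the Platform's actual API.

```python
def handle_request(query, client_id, rate_limiter, classifier, retriever, llm):
    """Illustrative request flow: rate limit -> classify -> retrieve -> generate."""
    if not rate_limiter(client_id):          # rate limiting / abuse prevention
        return "Rate limit exceeded."
    intent = classifier(query)               # intent classification model
    docs = retriever(client_id, query)       # per-client vector database lookup
    if not docs:                             # no grounding material found
        return "Insufficient information in your documents."
    return llm(query, intent, docs)          # grounded response generation

# Stub components, so the flow can be exercised end to end:
allow_all = lambda client_id: True
classify = lambda q: "faq"
retrieve = lambda cid, q: ["Refunds take 14 days."] if "refund" in q else []
generate = lambda q, intent, docs: f"[{intent}] Based on your documents: {docs[0]}"
```

Because retrieval is keyed by client_id, documents from one client never feed another client's responses, mirroring the isolation described above.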


3. AI Risk Categories

3.1 Hallucination Risk

  • Responses are grounded in retrieved client documents.
  • If relevant documents are not found, the system is designed to state that it lacks sufficient information.
  • Low-confidence retrieval paths trigger clarification prompts.
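
The behaviors above amount to routing on retrieval confidence. A simplified sketch, assuming a single similarity score and two hypothetical thresholds (the real system's scoring and cutoffs may differ):

```python
ANSWER_THRESHOLD = 0.6   # hypothetical: strong match, answer from documents
CLARIFY_THRESHOLD = 0.3  # hypothetical: weak match, ask a clarifying question

def route_by_confidence(score: float) -> str:
    """Map a retrieval similarity score to one of three response behaviors."""
    if score >= ANSWER_THRESHOLD:
        return "answer"                    # grounded response
    if score >= CLARIFY_THRESHOLD:
        return "clarify"                   # clarification prompt to the user
    return "insufficient_information"      # admit the gap rather than guess
```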

3.2 Bias Risk

  • Outputs reflect the content of uploaded documents.
  • The Company does not intentionally introduce bias into outputs.
  • Clients are responsible for reviewing document quality and neutrality.

3.3 Misuse Risk

  • Rate limits prevent automated abuse.
  • Malicious or disallowed usage is prohibited under the Terms of Service.
  • Monitoring systems detect anomalous usage patterns.

3.4 Privacy Risk

  • Data is logically isolated per client.
  • Primary hosting infrastructure is located in Canada.
  • AI providers are contractually restricted from training on submitted data.

3.5 Security Risk

  • JWT-based authentication between widget and API.
  • Redis-backed quota enforcement.
  • Separation between control plane and workflow engine.
  • Encrypted data transmission (TLS).

4. Human Oversight

The Platform is designed as a decision-support system. It does not replace professional judgment or legal, medical, or financial advice.

  • Clients remain responsible for reviewing outputs.
  • End users should not rely solely on AI-generated responses for critical decisions.

5. Model Governance

  • Model selection is controlled centrally by the Company.
  • Compliance-sensitive domains may use higher-accuracy models.
  • Temperature settings are adjusted to reduce variability in regulated domains.
  • Model changes are evaluated for performance and safety impact.
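
Centralized model selection with per-domain settings can be expressed as a simple policy table. A hypothetical illustration; the domain names, model identifiers, and temperature values are invented for the example:

```python
# Hypothetical per-domain model policy, controlled centrally.
MODEL_POLICY = {
    "default":    {"model": "standard-llm",      "temperature": 0.7},
    "healthcare": {"model": "high-accuracy-llm", "temperature": 0.1},
    "finance":    {"model": "high-accuracy-llm", "temperature": 0.1},
}

def settings_for(domain: str) -> dict:
    """Return model settings for a domain; unknown domains use the default."""
    return MODEL_POLICY.get(domain, MODEL_POLICY["default"])
```

Lower temperatures in regulated domains trade creativity for consistency, matching the variability-reduction goal stated above.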

6. Data Handling & Retention

  • Long-term memory stores only explicitly extracted structured facts.
  • Short-term memory is session-bound and limited in scope.
  • Client data may expire based on configurable retention policies.

Data lifecycle practices align with the principles of Canada's Personal Information Protection and Electronic Documents Act (PIPEDA).


7. Transparency

The Platform:

  • Does not claim outputs are always accurate.
  • Does not represent AI responses as human-authored.
  • Encourages validation of critical information.

8. Monitoring & Continuous Improvement

  • Usage analytics are monitored for anomalies.
  • Retrieval performance is evaluated periodically.
  • Prompt engineering improvements are deployed to reduce hallucinations.
  • Security controls are reviewed regularly.

9. Limitations of AI Systems

AI systems may:

  • Generate inaccurate or incomplete responses
  • Misinterpret ambiguous queries
  • Fail when relevant context is missing

The Platform is designed to minimize these risks through retrieval grounding and controlled prompts.


10. Future Governance Roadmap

  • Formal AI impact assessments
  • Expanded compliance documentation
  • Enhanced explainability controls
  • Model evaluation benchmarking

11. Contact

For questions regarding AI governance practices:
Email: admin@hyzstack.com
Hyzstack Labs Inc.