Our AI Usage Policy

Last updated: 25 April 2026

We use AI as an internal operational tool, not as an unsupervised decision-maker. Our aim is to speed up drafting, structuring, extraction, and internal review while keeping human oversight, feature controls, and POPIA-aligned data handling in place.

1. Where we currently use AI

AI-assisted features may be used inside our internal systems for selected workflows such as:

  • drafting blog and social media content;
  • internal job report drafting and business reporting assistance;
  • selected structured extraction or refinement tasks from workshop-related text;
  • VitalSystems internal summaries and helper outputs;
  • compliance-monitor impact assessments and internal review support;
  • AI monitoring and quality-improvement workflows based on human corrections.

Licence-disk barcode extraction is handled locally and is not sent to external AI services.

2. What we do not use AI for

  • We do not treat AI outputs as final legal advice.
  • We do not let the compliance monitor automatically change app code, customer records, or legal documents.
  • We do not rely on AI alone for high-impact operational or compliance decisions.
  • We do not intentionally use customer information to train third-party public AI models.

3. Human review and controls

  • AI features are controlled through feature flags and can be disabled quickly if needed.
  • We maintain AI monitoring, reviewer roles, audit schedules, and incident tracking for AI-related risks.
  • Human review remains required where accuracy, customer impact, legal risk, or operational risk is higher.
  • Our compliance tooling is framed as an early-warning and review system, not a final legal decision-maker.

4. Data handling and minimisation

  • We aim to minimise the amount of personal information sent to external AI services.
  • Some workflows use selected text or structured context rather than full source files where practical.
  • Temporary processing files are removed when they are no longer needed for the workflow.
  • Raw AI responses and sensitive stored payloads are protected with internal monitoring and encryption controls.

5. AI providers and model use

Where external AI services are used, we use them as operators or service providers supporting our internal workflows. We keep a vendor and use-case register, review those uses through our compliance controls, and monitor changes that may affect lawful processing or cross-border handling.

6. Monitoring, auditing, and incidents

We log AI usage, track corrections, maintain a compliance register, run audit reminders, and review incidents or legal changes that may affect our use of AI. These controls are designed to improve transparency, catch issues earlier, and support manual review by our team.

7. Legal context in South Africa

As at 25 April 2026, POPIA remains binding law in South Africa and is the main legal framework governing our handling of personal information. South Africa's AI-specific framework appears to remain at draft policy stage, including the Draft South African National Artificial Intelligence Policy published on 10 April 2026, rather than a final standalone AI Act.

8. Questions

If you have questions about how we use AI in relation to your information, contact info@sumtra.app.
