Pinpoint AI: Our Approach to Responsible AI

This document outlines how Pinpoint builds, deploys, and governs AI across our platform. It's designed to give clients confidence that we take AI seriously, develop it responsibly, and are committed to getting AI in recruitment right.

Empowering the human connection through an ecosystem of intelligent efficiency.

AI should augment and assist recruiters, not replace them. By automating administrative tasks and surfacing actionable insights, we empower recruiters to focus on what they do best, building relationships and hiring smart, while retaining absolute control.

Crucially, AI must never be the final decision-maker. Hiring decisions remain with the recruitment team. All AI-driven insights are transparent, auditable, and fully explainable.

Our 3 core principles

⚡️ Power

Power is about efficiency. We deploy AI to eliminate administrative friction, surface top candidates, deliver better candidate engagement at scale, and manage workflows, freeing recruiters to focus on high-value human interactions.

📖 Transparency

Transparency is the cornerstone of trust. AI should never be a black box. We provide clear, explainable insights into every AI-derived suggestion. For our engineers, we enforce rigorous observability and testing standards. If we can’t explain it, we won’t build it.

⚙️ Control

Control places the recruiter firmly at the centre of decision-making. We never force AI adoption. You decide exactly when and how AI augments your workflow. Client data is your data: it is never used for training models without express permission.

How we build AI: Technical rigour

We don’t just make promises; we’ve built the infrastructure to deliver on them. Here’s how we ensure responsible AI development.

Dedicated AI infrastructure

All AI workloads run through Arti, our dedicated AI microservice. This separation from our main application provides:

  • Specialised tooling – Purpose-built for AI observability and orchestration
  • Isolated scaling – AI workloads scale independently without affecting the core platform
  • Dedicated monitoring – AI-specific tracing, logging, and alerting
  • Clear cost attribution – Full visibility into AI spend per feature
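
To make the separation concrete, here is a minimal sketch of what delegating an AI workload from the core platform to a dedicated service can look like. The base URL, endpoint, and payload names are purely illustrative assumptions, not Pinpoint's actual internal API.

```python
# Illustrative only: the core application delegates AI work to a separate
# service instead of calling LLMs itself. All names here are hypothetical.
import httpx

ARTI_BASE_URL = "http://arti.internal"  # hypothetical internal address

def request_ai_summary(tenant_token: str, candidate_id: str) -> dict:
    """Ask the AI microservice (not the main app) to run the LLM workload."""
    response = httpx.post(
        f"{ARTI_BASE_URL}/v1/candidate-summaries",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {tenant_token}"},
        json={"candidate_id": candidate_id},
        timeout=30.0,
    )
    response.raise_for_status()
    return response.json()
```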

Full observability with LangSmith

Every AI interaction is fully traceable. We use LangSmith to capture:

  • Complete prompt/response pairs for every LLM call
  • Token usage breakdown
  • Latency measurements at each workflow step
  • Error details and conversation threading
  • Full audit trail for compliance
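
For illustration, a minimal sketch of how LangSmith tracing can wrap an LLM call so that the prompt, response, token usage, latency, and any errors are captured as a trace. The feature name, model, and prompt below are hypothetical examples, not Pinpoint's production code.

```python
from langsmith import traceable
from langsmith.wrappers import wrap_openai
from openai import OpenAI

# Wrapping the client means token usage and model details are recorded on each trace.
client = wrap_openai(OpenAI())

@traceable(name="ai_custom_field_evaluation")  # hypothetical feature name
def evaluate_criterion(cv_text: str, criterion: str) -> str:
    # Inputs, outputs, latency, and errors from this call are captured as one trace run.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer 'yes' or 'no': does this CV meet the criterion?"},
            {"role": "user", "content": f"Criterion: {criterion}\n\nCV:\n{cv_text}"},
        ],
    )
    return response.choices[0].message.content
```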

Comprehensive error tracking

All AI operations are logged to Datadog with structured error tracking:

  • Every exception captured with full stack traces
  • Request context preserved for debugging
  • Automated alerting for infrastructure issues
  • Searchable logs across all environments
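
As a minimal sketch (not our implementation), the pattern below shows structured error logging around an AI operation. Field names such as feature, tenant_id, and request_id are illustrative; in practice a JSON formatter and the Datadog agent would ship these records with their stack traces.

```python
import logging
from typing import Callable

logger = logging.getLogger("arti.ai_operations")  # hypothetical logger name

def run_ai_operation(
    feature: str,
    tenant_id: str,
    request_id: str,
    call_llm: Callable[[], dict],
) -> dict:
    """Run an LLM call and log any failure with structured request context."""
    try:
        return call_llm()
    except Exception:
        # logger.exception records the full stack trace; the extra fields keep
        # the request context searchable alongside it.
        logger.exception(
            "AI operation failed",
            extra={"feature": feature, "tenant_id": tenant_id, "request_id": request_id},
        )
        raise
```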

Cost & usage monitoring

Every LLM request is logged to our database with:

  • Which feature made the call
  • Which model was used
  • Token counts (input, cached, output)
  • Calculated cost in real currency
  • Full prompt and response text
  • Timestamps for trend analysis

This enables us to track cost per feature, monitor efficiency over time, and identify optimisation opportunities.
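
A minimal sketch of the kind of record this implies, assuming hypothetical field names and placeholder per-token prices (not real OpenAI pricing):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LlmRequestLog:
    feature: str          # which feature made the call
    model: str            # which model was used
    input_tokens: int
    cached_tokens: int
    output_tokens: int
    prompt: str
    response: str
    created_at: datetime
    cost_usd: float

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Convert token counts into a currency figure using per-1k-token prices."""
    return (input_tokens / 1000) * input_price_per_1k + (output_tokens / 1000) * output_price_per_1k

log = LlmRequestLog(
    feature="ai_custom_field",          # hypothetical feature name
    model="gpt-4o-mini",                # illustrative model
    input_tokens=1200, cached_tokens=0, output_tokens=150,
    prompt="…", response="…",
    created_at=datetime.now(timezone.utc),
    cost_usd=estimate_cost(1200, 150, input_price_per_1k=0.0005, output_price_per_1k=0.0015),
)
```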

Responsible development practices

Human-centric design

For every AI feature, we ask two questions:

  1. Does it reduce operational friction?
  2. Does it strengthen recruiter-candidate interactions?

If the answer to either is “no”, the feature is incomplete, needs guardrails, or shouldn’t be prioritised.

Example: Intelligent routing

When users ask our AI chatbot “What can you do?”, we don’t send that to an LLM for an unpredictable response. Instead:

  1. The message is classified by our router
  2. Recognised as a “chatbot capability” question
  3. Routed to a dedicated handler with a consistent, helpful response

Why this matters:

  • No unnecessary LLM calls – Reduces cost and latency
  • Consistent experience – Users always get the same helpful answer
  • Predictable behaviour – No risk of hallucination or inappropriate responses

This classify, route, and handle pattern runs throughout our AI architecture; the sketch below illustrates its shape.
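
A minimal sketch, assuming hypothetical intent names and a deliberately simplified classifier: capability questions receive a canned answer, and only open-ended messages would reach the LLM.

```python
from typing import Callable

CAPABILITY_ANSWER = (
    "I can help with candidate summaries, scheduling, and answering questions about your roles."
)

def classify(message: str) -> str:
    """Deliberately simplified; production routing can use rules or a lightweight model."""
    if "what can you do" in message.lower():
        return "chatbot_capability"
    return "general"

def handle_capability(_: str) -> str:
    # Consistent, predictable answer with no LLM call at all.
    return CAPABILITY_ANSWER

def handle_general(message: str) -> str:
    # Placeholder for the real LLM call in this sketch.
    return f"[would call the LLM with: {message!r}]"

HANDLERS: dict[str, Callable[[str], str]] = {
    "chatbot_capability": handle_capability,
    "general": handle_general,
}

def route(message: str) -> str:
    return HANDLERS[classify(message)](message)

print(route("What can you do?"))   # canned answer, no cost, no hallucination risk
print(route("Summarise this CV"))  # routed to the LLM handler
```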

Testing & evaluation

Before any AI feature ships:

  • Hundreds of test cases run through the system
  • Responses evaluated against expected outcomes
  • Edge cases and failure modes are explicitly tested
  • Prompt versions are compared systematically
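
As a simple illustration of the expected-outcome style of evaluation (a sketch only, not our test suite, with a toy classifier standing in for the real router):

```python
import pytest

def classify(message: str) -> str:
    """Toy stand-in for the routing classifier sketched earlier."""
    return "chatbot_capability" if "what can you do" in message.lower() else "general"

# Each case pairs an input with the outcome we expect the system to produce.
CASES = [
    ("What can you do?", "chatbot_capability"),
    ("what CAN you do??", "chatbot_capability"),
    ("Does this CV mention Python?", "general"),
]

@pytest.mark.parametrize("message, expected_intent", CASES)
def test_router_matches_expected_outcome(message, expected_intent):
    assert classify(message) == expected_intent
```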

Our commitment

We believe efficiency and empathy are not opposites; they are partners. We use AI to handle the heavy lifting so your team can focus on what software can never replace: the human connection.

We don’t build technology to create barriers between you and your candidates. We build it to break them down.

Because you aren’t just processing data. You are hiring people.

💡 Discover our AI roadmap for a list of live and upcoming features

Frequently asked questions on Pinpoint AI

Data usage, privacy & model training

Are AI features opt-in or enabled by default?

All AI features are explicitly opt-in and never enabled by default. Each Pinpoint tenant has granular settings to enable or disable each AI feature independently.

Is there an opt-out mechanism to prevent data from being used for AI training?

Yes. We do not allow any third parties to use client data for training. This is configured within our organisation’s vendor accounts (specifically OpenAI). Your data is never used to train, fine-tune, or improve any global, shared, or vendor-owned models.

Do you offer “Customer Trained Models”?

No. We have no customer-trained model functionality. All AI features use commercially available models via API.

What categories of data are accessed by AI features?

We provide data to the LLM at inference time through the prompt, sharing only what the model needs to complete its task. For example, our AI custom field feature (which evaluates whether a CV meets specific criteria) requires access to the candidate’s CV, so it is included in the prompt. We do not routinely include logs or metadata.
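
To illustrate the principle (with hypothetical function and field names, not our production prompt), a prompt for this kind of feature would contain only the criterion and the CV text:

```python
def build_custom_field_prompt(cv_text: str, criterion: str) -> list[dict]:
    """Only the data the task needs goes into the prompt: no logs, metadata,
    or unrelated candidate fields."""
    return [
        {
            "role": "system",
            "content": "You evaluate whether a CV meets a single, stated criterion. "
                       "Answer 'yes' or 'no' with a one-sentence justification.",
        },
        {"role": "user", "content": f"Criterion: {criterion}\n\nCV:\n{cv_text}"},
    ]
```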

What is the data retention period for AI logs and outputs?

  • LangSmith observability logs: 14 days retention
  • Internal LLM request database: Adheres to the company’s data retention policy

Is customer data shared with third-party AI providers?

Yes, customer data is used as input context to AI models at inference time to achieve the feature’s purpose (e.g., analysing a CV for job criteria scoring). However, this data is not stored by the provider or used for training.

Model architecture & third parties

Which AI providers power Pinpoint’s features?

We work exclusively with OpenAI for Large Language Models (LLMs).

For resume parsing and redaction, we use Affinda.

Is the model privately hosted or a shared instance?

We have our own OpenAI Organisation account and use their publicly available commercial models via API. We do not use a private/dedicated instance.

What is the data flow between Pinpoint and AI providers?

We use OpenAI’s & Affinda’s API directly. No intermediary providers (such as OpenRouter). Requests go from our dedicated AI microservice (Arti) directly to OpenAI’s API endpoints.

We call Affinda’s API directly from our main application layer using TLS 1.2

Have third-party AI providers been vetted or certified?

Security controls & technical safeguards

Does AI follow existing role-based access control?

AI features operate within the existing permission model. For example, chatbot requests authenticate using JWT tokens with company-specific claims, ensuring users can access only data for their tenant.
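
A minimal sketch of tenant-scoped authorisation along these lines, using PyJWT; the signing key and algorithm are illustrative assumptions, while the company_id claim matches the example above.

```python
import jwt  # PyJWT

SECRET = "replace-with-your-signing-key"  # illustrative only

def authorise_ai_request(token: str, requested_company_id: str) -> dict:
    """Verify the JWT and ensure its company_id claim matches the tenant being accessed."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims.get("company_id") != requested_company_id:
        raise PermissionError("Token is not valid for this tenant")
    return claims
```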

What protections exist against prompt injection, leakage, and cross-tenant exposure?

  • Manual red-teaming: We actively test for malicious and unintended misuse scenarios
  • Input/output guardrails: OpenAI’s Moderation API screens for harmful content
  • Tenant isolation: All requests are authenticated within a specific tenant (e.g., JWT tokens with company_id claims prevent cross-tenant access)
  • Minimal prompt data: We ensure prompts include only the data required for LLMs to perform the task at hand. This means prompts are dynamically generated when they are sent to LLMs.
  • Planned improvements: Automated evaluations on a percentage of live traffic for ongoing defence

Are prompts and outputs encrypted in transit and at rest?

All communication with AI vendors uses REST APIs over TLS 1.2+, encrypting data in transit. Data at rest in our internal database follows our standard encryption practices.

What audit logs are available for internal AI usage?

  • LLM request database: Full records of all requests, including input, output, and model configuration, filtered by client ID
  • LangSmith: Traces of all LLM calls and agent decisions
  • PostHog: Customer usage analytics

Can administrators restrict access to AI features?

Yes. Each Pinpoint company has individual enable/disable settings for each AI feature.

Fairness, bias & responsible AI

What measures do you take to ensure fairness and prevent bias?

  • We use OpenAI models, which are developed by world-leading AI researchers and subject to extensive bias mitigation
  • We are actively reviewing third-party providers for independent bias auditing
  • We run evaluations through our AI systems to monitor responses and results before any release

What guardrails exist to prevent inappropriate or unintended AI behaviour?

Hard stops:

  • OpenAI’s Moderation API screens both inputs and outputs for violence, sexual content, or harmful material

Behavioural controls:

  • Carefully crafted system prompts define expected behaviour
  • Comprehensive prompt evaluation ensures AI acts as intended
  • Routing logic directs queries to appropriate handlers (including graceful handling of inappropriate requests)
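
As a minimal sketch of the input/output screening described above (the refusal messages, wrapper function, and model choices are illustrative assumptions, not Pinpoint's implementation):

```python
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Screen a piece of text with OpenAI's Moderation API."""
    result = client.moderations.create(model="omni-moderation-latest", input=text)
    return result.results[0].flagged

def screened_completion(user_text: str) -> str:
    """Check the input before the LLM call and the output after it."""
    if is_flagged(user_text):
        return "Sorry, I can't help with that request."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": user_text}],
    )
    answer = response.choices[0].message.content
    return "Sorry, I can't share that response." if is_flagged(answer) else answer
```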

Compliance & regulation

How do you comply with automated decision-making regulations?

We will never allow AI to act autonomously. All final decisions are made by recruiters and hiring managers—humans remain in the loop.

This human-in-the-loop architecture is foundational to our compliance approach across jurisdictions:

  • EU: Aligns with GDPR Article 22 and EU AI Act requirements for human oversight of high-risk AI systems
  • US Federal: Meets EEOC guidance that employers must maintain accountability for AI-assisted hiring decisions
  • US State Laws: Satisfies human review requirements under NYC Local Law 144, Illinois HB 3773, California FEHA, and Colorado S.B. 24-205

How do you protect against deepfakes or fabricated candidates?

This is an evolving challenge that technology has scaled but not created. We are continuously evaluating solutions and working toward partnerships with specialist providers. Current vendor solutions produce unacceptable false positive rates, so we’re taking a measured approach.

Licensing

How is AI functionality licensed?

We do not have a single AI module; AI is broken down into features across the platform. The majority of AI functionality is available to all clients; however, there is an overall credit limit per client to ensure fair usage. The credit limit is a safeguard that we do not expect clients to hit, and it can be discussed if it ever becomes an issue.

Questions?

If you have questions about our AI practices, data handling, or compliance posture, please reach out to your Customer Success Manager or contact us directly.