Developer API

The CVault API lets you parse resumes, extract skills, score candidates, and receive real-time webhook events — all over HTTPS. Every endpoint is authenticated, rate-limited, and returns JSON. It supports 17 languages and any resume layout.

Base URL      https://cvault.tech/api/portal
Format        JSON (application/json) — multipart/form-data for file uploads
TLS           Required on all requests
Compliance    Data Processing Agreement (DPA) available for reviewed customer deployments

Versioning

The CVault API does not embed a version in the URL. The current stable version is v1, and all endpoints live under /api/portal/.

Stability guarantee: We will never remove a field from a response object or change the type of an existing field in a breaking way without a minimum 90-day deprecation notice. New fields may be added to response objects at any time — your integration should ignore unknown fields.

Breaking changes (if ever required) will be introduced under a new path prefix (e.g. /api/v2/portal/) with the old version remaining active for a minimum of 6 months. We will email all API key holders before any such migration.

Current    v1 — stable, no breaking changes planned

Authentication

Every request must include an API key in the x-api-key header. Generate keys from Settings → API Keys. Keys are prefixed cv_live_ and scoped to your account.

# All requests require this header
x-api-key: cv_live_your_key_here

Alternatively pass the key as a Bearer token:

Authorization: Bearer cv_live_your_key_here

Keys are hashed and never stored in plaintext. If a key is compromised, rotate it from Settings immediately — the old key is invalidated instantly.
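Either header form can be built once and reused across requests. A minimal stdlib sketch (the helper name is illustrative, not part of the API):

```python
def auth_headers(api_key: str, use_bearer: bool = False) -> dict:
    """Return the CVault auth header in either accepted form."""
    if use_bearer:
        return {"Authorization": f"Bearer {api_key}"}
    return {"x-api-key": api_key}
```

Merge the returned dict into the headers of every request you send.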

Rate limits & quotas

Monthly parse quotas reset on your billing cycle date. Each file in a batch counts as one parse. Duplicate files (same SHA-256) returned from cache do not consume quota.

Plan     Monthly parses    Max batch size    Max file size
Basic    5                 1 file            10 MB
Pro      100               1 file            10 MB
Ultra    1,000             25 files          10 MB
Mega     5,000             50 files          10 MB

When quota is exhausted the API returns 402 Payment Required. Upgrade at cvault.tech/pricing.
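Because duplicate detection keys on the file's SHA-256, you can cheaply pre-filter a batch client-side before uploading. A stdlib sketch (helper names are illustrative, and it assumes the server hashes the raw file bytes):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 of a file's raw bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def dedupe(paths: list) -> list:
    """Drop files whose bytes duplicate an earlier file in the batch."""
    seen = set()
    unique = []
    for p in paths:
        digest = sha256_of(p)
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique
```

Duplicates the server returns from cache do not consume quota, so this step is an optimization for upload bandwidth rather than billing.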

Quickstart

Parse your first resume in under 2 minutes. Get an API key from Settings → API Keys, then run one of the examples below.

import requests

API_KEY = "cv_live_your_key_here"
BASE_URL = "https://cvault.tech/api/portal"

with open("resume.pdf", "rb") as f:
    response = requests.post(
        f"{BASE_URL}/parse_cv_batch_parallel",
        headers={"x-api-key": API_KEY},
        files={"files[]": ("resume.pdf", f, "application/pdf")},
        data={"job_description": "Senior Python engineer with Django experience"},
    )

result = response.json()
candidate = result["results"][0]["candidate"]
print(f"Name: {candidate['name']}")
print(f"Fit score: {candidate['fit_score']}/100")
print(f"Skills: {', '.join(candidate['skills'][:5])}")

For async batches (large files or more than ~5 files), the API returns a job_id. Poll /job_status/{job_id} until status is completed, then fetch from /job_results/{job_id}.
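The two response shapes can be told apart by the async flag in the response body. A small sketch of the branch (the function name is illustrative):

```python
def handle_parse_response(payload: dict):
    """Classify a parse response as sync results or an async job handle."""
    if payload.get("async"):
        # 202 path: poll the job before fetching results
        return "async", {"job_id": payload["job_id"], "poll_url": payload["poll_url"]}
    # 200 path: results are already in the body
    return "sync", {"results": payload["results"]}
```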

Parse resumes

POST /api/portal/parse_cv_batch_parallel

Upload 1–50 resume files and receive structured candidate data. Files under ~3 pages typically complete synchronously (200). Larger batches or files are processed asynchronously — you receive a job ID (202) and poll for results.

Request

Send as multipart/form-data.

Field              Type           Required    Description
files[]            File[]         Yes         PDF, DOCX, DOC, or TXT. Max 10 MB each. Batch limit depends on plan (1–50 files).
job_description    string         No          Job description text (≤50,000 chars). Enables skill matching and fit scoring.
parse_options      JSON string    No          See parse options table below.

Parse options

Pass as a JSON-stringified object in the parse_options field. If omitted, the API returns the fast ats_core profile by default. It is still ATS-ready: summary/profile text, exact raw skills, and normalized date/location helpers are included without the heavier analysis sections.

Option            Default     Description
output_profile    ats_core    ats_core for fast ATS insertion with summary plus normalized date/location helpers, ats_plus for richer recruiting context, or full_profile for the broadest standard response.
verdict           true*       AI-generated candidate summary paragraph. Forced off in ats_core.
seniority         true        Seniority level classification (junior / mid / senior / lead / executive).
score             true        0–100 fit score with reasoning. Requires job_description.
red_flags         true*       Detected employment gaps, frequent job changes, or role mismatches. Forced off in ats_core.
strengths         true*       Top candidate strengths relative to the role. Forced off in ats_core.
self_check        true*       Confidence audit on extracted fields. Available in richer profiles only.
response_mode     ats         Backward-compatible alias. ats maps to ats_core; full maps to full_profile.

Recommended profile usage: use the default for ATS and Zapier ingestion, set output_profile to ats_plus when you want richer recruiting context, and use full_profile when you want the broadest standard CVault response.

Example request

import requests

response = requests.post(
    "https://cvault.tech/api/portal/parse_cv_batch_parallel",
    headers={"x-api-key": "cv_live_your_key_here"},
    files=[
        ("files[]", ("john_doe.pdf", open("john_doe.pdf", "rb"), "application/pdf")),
        ("files[]", ("jane_smith.docx", open("jane_smith.docx", "rb"),
                     "application/vnd.openxmlformats-officedocument.wordprocessingml.document")),
    ],
    data={"job_description": "Senior backend engineer with 5+ years Python..."},
)
print(response.json())

Advanced profile examples

parse_options = {"output_profile": "ats_plus"}
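Since parse_options travels as a JSON-stringified field inside the multipart form, the form data can be assembled like this (the helper name is illustrative, not part of the API):

```python
import json

def build_form_data(job_description=None, parse_options=None) -> dict:
    """Assemble the non-file form fields; parse_options must be JSON-stringified."""
    data = {}
    if job_description:
        data["job_description"] = job_description
    if parse_options:
        data["parse_options"] = json.dumps(parse_options)
    return data
```

For example, build_form_data(parse_options={"output_profile": "ats_plus"}) yields a dict suitable for the data= argument of the requests examples above.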

Synchronous response (200)

Returned when parsing completes within the request window.

{
  "success": true,
  "count": 2,
  "parses_used": 12,
  "parses_limit": 100,
  "duplicates": [],
  "results": [
    {
      "file_index": 0,
      "filename": "john_doe.pdf",
      "success": true,
      "candidate": { /* full candidate object — see Response Schema */ },
      "parse_tier": "tier1",
      "cached": false
    }
  ]
}

Asynchronous response (202)

Returned for large batches. Poll /job_status/<job_id> until status is completed.

{
  "success": true,
  "async": true,
  "job_id": "job_abc123",
  "status": "queued",
  "total": 25,
  "poll_url": "/api/portal/job_status/job_abc123",
  "results_url": "/api/portal/job_results/job_abc123",
  "parses_used": 12,
  "parses_limit": 100,
  "quota_status": "ok",
  "duplicates": []
}

Job status

GET /api/portal/job_status/{job_id}

Poll this endpoint after receiving a 202 from the parse endpoint. Recommended polling interval: 1–2 seconds.

curl https://cvault.tech/api/portal/job_status/job_abc123 \
  -H "x-api-key: cv_live_your_key_here"
{
  "success": true,
  "job_id": "job_abc123",
  "status": "processing",   // queued | processing | completed | failed
  "total": 25,
  "completed": 18,
  "failed": 0,
  "created_at": "2025-03-14T10:23:00Z",
  "updated_at": "2025-03-14T10:23:14Z"
}
Status        Meaning
queued        Job is waiting for a worker.
processing    Files are being parsed. Check completed for progress.
completed     All files done. Fetch results from /job_results/<job_id>.
failed        Job-level failure (quota, storage). Individual file failures appear in results.
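The recommended 1–2 second polling interval can be wrapped in a small loop. A sketch that takes any status-fetching callable, so the HTTP client is your choice (function and parameter names are illustrative):

```python
import time

def wait_for_job(fetch_status, job_id: str, interval: float = 1.5, timeout: float = 300.0) -> dict:
    """Poll fetch_status(job_id) until the job reaches a terminal status.

    fetch_status is any callable returning the job-status JSON, e.g. a
    requests.get wrapper around /api/portal/job_status/{job_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status["status"] == "completed":
            return status
        if status["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id} did not complete within {timeout}s")
```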

Job results

GET /api/portal/job_results/{job_id}

Fetch parsed results once the job status is completed. Results are paginated — use offset and limit for large batches.

Query param    Default    Description
offset         0          Number of results to skip.
limit          100        Max results to return (max 100).
curl "https://cvault.tech/api/portal/job_results/job_abc123?offset=0&limit=100" \
  -H "x-api-key: cv_live_your_key_here"
{
  "success": true,
  "job_id": "job_abc123",
  "total": 25,
  "offset": 0,
  "limit": 100,
  "results": [
    {
      "file_index": 0,
      "filename": "john_doe.pdf",
      "success": true,
      "candidate": { /* full candidate object */ },
      "parse_tier": "tier2",
      "cached": false
    },
    {
      "file_index": 1,
      "filename": "corrupt.pdf",
      "success": false,
      "error": "Could not extract text from file"
    }
  ]
}
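For batches larger than one page, walk the offset until all total results are collected. A sketch that takes any page-fetching callable (names are illustrative):

```python
def fetch_all_results(fetch_page, job_id: str, page_size: int = 100) -> list:
    """Collect every result across /job_results pages.

    fetch_page(job_id, offset, limit) returns the results JSON for one page.
    """
    results = []
    offset = 0
    while True:
        page = fetch_page(job_id, offset, page_size)
        results.extend(page["results"])
        if len(results) >= page["total"] or not page["results"]:
            break
        offset += page_size
    return results
```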

Extract skills from job description

POST /api/portal/extract_skills

Send a job description and get back structured required and nice-to-have skill lists. Use these as inputs to parse_cv_batch_parallel or rescore_candidates for accurate fit scoring.

curl -X POST https://cvault.tech/api/portal/extract_skills \
  -H "x-api-key: cv_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "description": "We are hiring a Senior Backend Engineer with Python, Django, PostgreSQL...",
    "title": "Senior Backend Engineer",
    "department": "Engineering"
  }'
{
  "success": true,
  "required": ["Python", "Django", "PostgreSQL", "REST APIs", "Docker"],
  "nice_to_have": ["Kubernetes", "Redis", "GraphQL", "AWS"]
}
Field          Type      Required    Description
description    string    Yes         Job description text. Max 50,000 characters.
title          string    No          Job title for context.
department     string    No          Department for context.
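The extract_skills output maps directly onto the rescore_candidates request body. A small sketch of that mapping (the function name is illustrative):

```python
def build_rescore_payload(job_id: str, skills_response: dict) -> dict:
    """Map an extract_skills response onto the rescore_candidates body."""
    return {
        "job_id": job_id,
        "required_skills": skills_response["required"],
        "nice_to_have_skills": skills_response["nice_to_have"],
    }
```

POST the returned dict as JSON to /api/portal/rescore_candidates to re-score an existing job against the freshly extracted skills.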

Rescore candidates

POST /api/portal/rescore_candidates

Re-score previously parsed candidates against a new or updated skill set without re-uploading files. Add ?async=true for large batches.

curl -X POST https://cvault.tech/api/portal/rescore_candidates \
  -H "x-api-key: cv_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "job_abc123",
    "required_skills": ["Python", "Django", "PostgreSQL"],
    "nice_to_have_skills": ["Redis", "Kubernetes"]
  }'
{
  "success": true,
  "updated": 25,
  "scores": [
    {
      "candidate_id": "cand_xyz",
      "fit_score": 87,
      "hiring_recommendation": "strong_yes",
      "matched_skills": ["Python", "Django"],
      "missing_skills": ["PostgreSQL"]
    }
  ]
}

Generate interview questions

POST /api/portal/generate_interview_questions (20 / day)

Generate tailored interview questions for a specific candidate based on their parsed profile. Rate limited to 20 requests per day per account.

curl -X POST https://cvault.tech/api/portal/generate_interview_questions \
  -H "x-api-key: cv_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "candidate_id": "cand_xyz",
    "job_context": "Senior backend role at a fintech startup, Python-heavy stack",
    "motivation_letter": "I am passionate about distributed systems..."
  }'
{
  "success": true,
  "generated_at": "2025-03-14T10:30:00Z",
  "questions": [
    {
      "category": "Technical",
      "question": "Walk me through how you would design a rate-limited API in Django.",
      "rationale": "Candidate claims Django expertise; role requires high-throughput API design."
    },
    {
      "category": "Behavioural",
      "question": "Tell me about a time you had to debug a production outage under pressure.",
      "rationale": "2-month gap in 2023 — probing for circumstances."
    }
  ]
}

Webhooks

CVault fires signed HTTP POST events to your endpoint when parse jobs complete. Each delivery includes a CVault-Signature header you should verify before processing.

Manage subscriptions

GET /api/portal/webhooks
POST /api/portal/webhooks
DELETE /api/portal/webhooks/{webhook_id}
# Create a webhook
curl -X POST https://cvault.tech/api/portal/webhooks \
  -H "x-api-key: cv_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "target_url": "https://your-app.com/webhooks/cvault",
    "event_types": ["parse.completed", "parse.failed"]
  }'
{
  "success": true,
  "webhook": {
    "id": "wh_123",
    "target_url": "https://your-app.com/webhooks/cvault",
    "event_types": ["parse.completed", "parse.failed"],
    "secret": "whsec_abc123..."
  }
}

Supported event types

Event              Fires when
parse.completed    All files in a parse job have been processed successfully.
parse.failed       A parse job encountered a terminal error.
score.completed    A rescore job has finished.

Signature verification

Every delivery includes a CVault-Signature header. Verify it using your webhook secret to prevent spoofed events.

import hmac, hashlib

def verify_webhook(payload_bytes: bytes, signature_header: str, secret: str) -> bool:
    expected = hmac.new(
        secret.encode(),
        payload_bytes,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature_header)
// Node.js
const crypto = require('crypto');

function verifyWebhook(payloadBuffer, signatureHeader, secret) {
  const expected = 'sha256=' + crypto
    .createHmac('sha256', secret)
    .update(payloadBuffer)
    .digest('hex');
  const expectedBuf = Buffer.from(expected);
  const providedBuf = Buffer.from(signatureHeader);
  // timingSafeEqual throws if the buffers differ in length, so check first
  if (expectedBuf.length !== providedBuf.length) return false;
  return crypto.timingSafeEqual(expectedBuf, providedBuf);
}

Delivery retries

Failed deliveries are retried 3 times with exponential backoff: 1s → 5s → 30s. Your endpoint should return 2xx within 10 seconds. View delivery history at /api/portal/webhooks/{id}/deliveries.
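Putting verification and the fast-2xx requirement together, a framework-agnostic receiver can be sketched as follows (the function name and status-code return convention are illustrative):

```python
import hashlib
import hmac

def handle_delivery(payload: bytes, signature_header: str, secret: str) -> int:
    """Verify the CVault-Signature header and return the HTTP status to send.

    Returns 2xx quickly and leaves heavy processing to a background queue,
    so deliveries are acknowledged within the 10-second window.
    """
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        return 401  # spoofed or mis-signed delivery: do not process
    # enqueue payload for background processing here
    return 200
```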

Error codes

All errors return JSON with an error string and optional details field.

HTTP status    Meaning
400            Bad request — missing required field, invalid file type, or input too large.
401            Missing or invalid API key.
402            Monthly parse quota exhausted. Upgrade your plan.
403            Action not permitted for your account or plan.
404            Job ID or resource not found.
413            File exceeds 10 MB limit.
429            Rate limit exceeded. Back off and retry.
500            Internal server error. Retry with exponential backoff.
{
  "error": "Monthly parse quota exhausted",
  "quota_used": 100,
  "quota_limit": 100,
  "resets_at": "2025-04-01T00:00:00Z"
}
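Only 429 and 500 warrant retries; the other statuses indicate a request or account problem that retrying will not fix. A generic backoff wrapper sketch (names are illustrative, and it assumes the callable returns a status code and parsed body):

```python
import time

RETRYABLE = {429, 500}

def with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a callable returning (status_code, body) on 429/500.

    Delays grow exponentially: base_delay, 2x, 4x, ...
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```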

Candidate response schema

The candidate object returned in parse results. All fields are nullable unless marked required.

Field                     Type        Description
name                      string      Full name.
email                     string      Primary email address.
phone                     string      Primary phone number.
location                  string      City, country, or full address.
linkedin                  string      LinkedIn profile URL.
profile                   string      Professional summary paragraph.
seniority                 string      junior | mid | senior | lead | executive
skills                    string[]    All extracted technical and soft skills.
languages                 object[]    { language, proficiency } — up to 17 languages supported.
experience                object[]    Work history entries. See sub-schema below.
education                 object[]    Education entries. See sub-schema below.
certifications            string[]    Certification names.
fit_score                 number      0–100 fit score. Present when score: true and job description provided.
hiring_recommendation     string      strong_yes | yes | maybe | no
verdict                   string      AI summary paragraph. Present when verdict: true.
red_flags                 string[]    Detected issues. Present when red_flags: true.
strengths                 string[]    Top strengths. Present when strengths: true.
total_experience_years    number      Calculated total years of work experience.
nationality               string      Nationality if stated on resume.
driving_license           string      License category if stated.
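Because the stability guarantee allows new fields to appear at any time and most fields are nullable, read only the fields you use and tolerate everything else. A minimal sketch (the field list and helper name are illustrative):

```python
KNOWN_FIELDS = ("name", "email", "seniority", "skills", "fit_score")

def pick_candidate(raw: dict) -> dict:
    """Copy only known fields, ignoring additions and tolerating nulls."""
    return {field: raw.get(field) for field in KNOWN_FIELDS}
```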

Experience entry

{
  "title": "Senior Software Engineer",
  "company": "Acme Corp",
  "location": "London, UK",
  "start_date": "2021-03",
  "end_date": null,          // null = current position
  "is_current": true,
  "description": "Led backend team of 5, built real-time data pipeline...",
  "skills_used": ["Python", "Kafka", "PostgreSQL"]
}
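Entry dates use the YYYY-MM format and a null end_date marks a current position, so an entry's tenure can be derived like this (the function is an illustrative client-side helper, not how the API itself computes total_experience_years):

```python
from datetime import date

def months_between(start: str, end, today=None) -> int:
    """Months covered by an experience entry; a null end_date means current."""
    today = today or date.today()
    start_year, start_month = (int(part) for part in start.split("-"))
    if end is None:
        end_year, end_month = today.year, today.month
    else:
        end_year, end_month = (int(part) for part in end.split("-"))
    return (end_year - start_year) * 12 + (end_month - start_month)
```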

Education entry

{
  "degree": "BSc Computer Science",
  "institution": "University of Manchester",
  "location": "Manchester, UK",
  "start_date": "2014-09",
  "end_date": "2017-06",
  "grade": "First Class Honours"
}

Full example

{
  "name": "Jane Smith",
  "email": "[email protected]",
  "phone": "+44 7700 900123",
  "location": "London, UK",
  "linkedin": "https://linkedin.com/in/janesmith",
  "seniority": "senior",
  "total_experience_years": 8,
  "fit_score": 91,
  "hiring_recommendation": "strong_yes",
  "verdict": "Jane is an exceptionally strong candidate for this role...",
  "skills": ["Python", "Django", "PostgreSQL", "Docker", "Kubernetes"],
  "languages": [
    { "language": "English", "proficiency": "native" },
    { "language": "French", "proficiency": "professional" }
  ],
  "red_flags": [],
  "strengths": [
    "Deep Python expertise with 8 years production experience",
    "Direct experience in high-throughput fintech APIs"
  ],
  "experience": [
    {
      "title": "Senior Software Engineer",
      "company": "Acme Fintech",
      "start_date": "2021-03",
      "end_date": null,
      "is_current": true,
      "skills_used": ["Python", "Django", "PostgreSQL"]
    }
  ],
  "education": [
    {
      "degree": "BSc Computer Science",
      "institution": "University of Manchester",
      "end_date": "2017-06"
    }
  ]
}