Developer API
The CVault API lets you parse resumes, extract skills, score candidates, and receive real-time webhook events — all over HTTPS. Every endpoint is authenticated, rate-limited, and returns JSON. Supports 17 languages and any resume layout.
Versioning
The current stable version of the CVault API is v1. All endpoints live under /api/portal/; the path does not currently carry an explicit version segment.
Stability guarantee: We will never remove a field from a response object or change the type of an existing field in a breaking way without a minimum 90-day deprecation notice. New fields may be added to response objects at any time — your integration should ignore unknown fields.
Breaking changes (if ever required) will be introduced under a new path prefix (e.g. /api/v2/portal/) with the old version remaining active for a minimum of 6 months. We will email all API key holders before any such migration.
Authentication
Every request must include an API key in the x-api-key header. Generate keys from Settings → API Keys. Keys are prefixed cv_live_ and scoped to your account.
# All requests require this header
x-api-key: cv_live_your_key_here
Alternatively pass the key as a Bearer token:
Authorization: Bearer cv_live_your_key_here
Keys are hashed and never stored in plaintext. If a key is compromised, rotate it from Settings immediately — the old key is invalidated instantly.
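As a minimal sketch, either header style can be attached with any standard HTTP client; here with Python's built-in urllib (the key value is a placeholder, not a real key):

```python
import urllib.request

API_KEY = "cv_live_your_key_here"  # placeholder -- substitute your real key

# Primary style: the x-api-key header.
req = urllib.request.Request(
    "https://cvault.tech/api/portal/extract_skills",
    headers={"x-api-key": API_KEY},
    method="POST",
)

# Equivalent Bearer-token style.
bearer_req = urllib.request.Request(
    "https://cvault.tech/api/portal/extract_skills",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
```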
Rate limits & quotas
Monthly parse quotas reset on your billing cycle date. Each file in a batch counts as one parse. Duplicate files (same SHA-256) returned from cache do not consume quota.
| Plan | Monthly parses | Max batch size | Max file size |
|---|---|---|---|
| Basic | 5 | 1 file | 10 MB |
| Pro | 100 | 1 file | 10 MB |
| Ultra | 1,000 | 25 files | 10 MB |
| Mega | 5,000 | 50 files | 10 MB |
When quota is exhausted the API returns 402 Payment Required. Upgrade at cvault.tech/pricing.
Quickstart
Parse your first resume in under 2 minutes. Get an API key from Settings → API Keys, then run one of the examples below.
For async batches (large files or more than ~5 files), the API returns a job_id. Poll /job_status/{job_id} until status is completed, then fetch from /job_results/{job_id}.
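The async flow above can be sketched as a simple poll loop. `fetch_status` is a hypothetical stand-in for an HTTP GET to the poll URL with your client of choice:

```python
import time

def wait_for_job(job_id, fetch_status, interval=1.5, timeout=120):
    """Poll /job_status/{job_id} until the job finishes.

    fetch_status(job_id) should GET the poll URL and return the
    decoded JSON dict; it stands in for your HTTP client.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status["status"] == "completed":
            return status
        if status["status"] == "failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(interval)  # recommended 1-2 s between polls
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```

Once the loop returns, fetch the parsed candidates from /job_results/{job_id}.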
Parse resumes
/api/portal/parse_cv_batch_parallel
Upload 1–50 resume files and receive structured candidate data. Files under ~3 pages typically complete synchronously (200). Larger batches or files are processed asynchronously — you receive a job ID (202) and poll for results.
Request
Send as multipart/form-data.
| Field | Type | Required | Description |
|---|---|---|---|
| files[] | File[] | Yes | PDF, DOCX, DOC, or TXT. Max 10 MB each. Batch limit depends on plan (1–50 files). |
| job_description | string | No | Job description text (≤50,000 chars). Enables skill matching and fit scoring. |
| parse_options | JSON string | No | See parse options table below. |
Parse options
Pass as a JSON-stringified object in the parse_options field. If omitted, the API returns the fast ats_core profile by default. It is still ATS-ready: summary/profile text, exact raw skills, and normalized date/location helpers are included without the heavier analysis sections.
| Option | Default | Description |
|---|---|---|
| output_profile | ats_core | ats_core for fast ATS insertion with summary plus normalized date/location helpers, ats_plus for richer recruiting context, or full_profile for the broadest standard response. |
| verdict | true* | AI-generated candidate summary paragraph. Forced off in ats_core. |
| seniority | true | Seniority level classification (junior / mid / senior / lead / executive). |
| score | true | 0–100 fit score with reasoning. Requires job_description. |
| red_flags | true* | Detected employment gaps, frequent job changes, or role mismatches. Forced off in ats_core. |
| strengths | true* | Top candidate strengths relative to the role. Forced off in ats_core. |
| self_check | true* | Confidence audit on extracted fields. Available in richer profiles only. |
| response_mode | ats | Backward-compatible alias. ats maps to ats_core; full maps to full_profile. |
Recommended profile usage: use the default for ATS and Zapier ingestion, set output_profile to ats_plus when you want richer recruiting context, and use full_profile when you want the broadest standard CVault response.
Example request
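A sketch of assembling the request fields in Python. Note that parse_options must be sent as a JSON-encoded string, not a nested object; the files[] parts are attached alongside these fields by any multipart-capable HTTP client (the job description text here is a placeholder):

```python
import json

# parse_options is a JSON-encoded string, not a nested object.
parse_options = json.dumps({
    "output_profile": "ats_plus",
    "score": True,
    "red_flags": True,
})

form_fields = {
    "job_description": "Senior Backend Engineer. Python, Django, PostgreSQL...",
    "parse_options": parse_options,
}
# files[] would be added as multipart file parts next to form_fields.
```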
Synchronous response (200)
Returned when parsing completes within the request window.
{
"success": true,
"count": 2,
"parses_used": 12,
"parses_limit": 100,
"duplicates": [],
"results": [
{
"file_index": 0,
"filename": "john_doe.pdf",
"success": true,
"candidate": { /* full candidate object — see Response Schema */ },
"parse_tier": "tier1",
"cached": false
}
]
}
Asynchronous response (202)
Returned for large batches. Poll /job_status/<job_id> until status is completed.
{
"success": true,
"async": true,
"job_id": "job_abc123",
"status": "queued",
"total": 25,
"poll_url": "/api/portal/job_status/job_abc123",
"results_url": "/api/portal/job_results/job_abc123",
"parses_used": 12,
"parses_limit": 100,
"quota_status": "ok",
"duplicates": []
}
Job status
/api/portal/job_status/{job_id}
Poll this endpoint after receiving a 202 from the parse endpoint. Recommended polling interval: 1–2 seconds.
curl https://cvault.tech/api/portal/job_status/job_abc123 \
  -H "x-api-key: cv_live_your_key_here"
{
"success": true,
"job_id": "job_abc123",
"status": "processing", // queued | processing | completed | failed
"total": 25,
"completed": 18,
"failed": 0,
"created_at": "2025-03-14T10:23:00Z",
"updated_at": "2025-03-14T10:23:14Z"
}
| Status | Meaning |
|---|---|
| queued | Job is waiting for a worker. |
| processing | Files are being parsed. Check completed for progress. |
| completed | All files done. Fetch results from /job_results/<job_id>. |
| failed | Job-level failure (quota, storage). Individual file failures appear in results. |
Job results
/api/portal/job_results/{job_id}
Fetch parsed results once the job status is completed. Results are paginated — use offset and limit for large batches.
| Query param | Default | Description |
|---|---|---|
| offset | 0 | Number of results to skip. |
| limit | 100 | Max results to return (max 100). |
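The pagination can be sketched as an offset loop. `fetch_page` is a hypothetical stand-in for a GET to /job_results/{job_id} with the given query params:

```python
def iter_job_results(job_id, fetch_page, page_size=100):
    """Yield every result for a completed job, page by page.

    fetch_page(job_id, offset, limit) should return the decoded JSON
    response from /job_results/{job_id}?offset=...&limit=...
    """
    offset = 0
    while True:
        page = fetch_page(job_id, offset, page_size)
        for result in page["results"]:
            yield result
        offset += len(page["results"])
        if offset >= page["total"] or not page["results"]:
            break
```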
curl "https://cvault.tech/api/portal/job_results/job_abc123?offset=0&limit=100" \
  -H "x-api-key: cv_live_your_key_here"
{
"success": true,
"job_id": "job_abc123",
"total": 25,
"offset": 0,
"limit": 100,
"results": [
{
"file_index": 0,
"filename": "john_doe.pdf",
"success": true,
"candidate": { /* full candidate object */ },
"parse_tier": "tier2",
"cached": false
},
{
"file_index": 1,
"filename": "corrupt.pdf",
"success": false,
"error": "Could not extract text from file"
}
]
}
Extract skills from job description
/api/portal/extract_skills
Send a job description and get back structured required and nice-to-have skill lists. Use these as inputs to parse_cv_batch_parallel or rescore_candidates for accurate fit scoring.
curl -X POST https://cvault.tech/api/portal/extract_skills \
-H "x-api-key: cv_live_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"description": "We are hiring a Senior Backend Engineer with Python, Django, PostgreSQL...",
"title": "Senior Backend Engineer",
"department": "Engineering"
}'
{
"success": true,
"required": ["Python", "Django", "PostgreSQL", "REST APIs", "Docker"],
"nice_to_have": ["Kubernetes", "Redis", "GraphQL", "AWS"]
}
| Field | Type | Required | Description |
|---|---|---|---|
| description | string | Yes | Job description text. Max 50,000 characters. |
| title | string | No | Job title for context. |
| department | string | No | Department for context. |
Rescore candidates
/api/portal/rescore_candidates
Re-score previously parsed candidates against a new or updated skill set without re-uploading files. Add ?async=true for large batches.
curl -X POST https://cvault.tech/api/portal/rescore_candidates \
-H "x-api-key: cv_live_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"job_id": "job_abc123",
"required_skills": ["Python", "Django", "PostgreSQL"],
"nice_to_have_skills": ["Redis", "Kubernetes"]
}'
{
"success": true,
"updated": 25,
"scores": [
{
"candidate_id": "cand_xyz",
"fit_score": 87,
"hiring_recommendation": "strong_yes",
"matched_skills": ["Python", "Django"],
"missing_skills": ["PostgreSQL"]
}
]
}
Generate interview questions
/api/portal/generate_interview_questions
Generate tailored interview questions for a specific candidate based on their parsed profile. Rate limited to 20 requests per day per account.
curl -X POST https://cvault.tech/api/portal/generate_interview_questions \
-H "x-api-key: cv_live_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"candidate_id": "cand_xyz",
"job_context": "Senior backend role at a fintech startup, Python-heavy stack",
"motivation_letter": "I am passionate about distributed systems..."
}'
{
"success": true,
"generated_at": "2025-03-14T10:30:00Z",
"questions": [
{
"category": "Technical",
"question": "Walk me through how you would design a rate-limited API in Django.",
"rationale": "Candidate claims Django expertise; role requires high-throughput API design."
},
{
"category": "Behavioural",
"question": "Tell me about a time you had to debug a production outage under pressure.",
"rationale": "2-month gap in 2023 — probing for circumstances."
}
]
}
Webhooks
CVault fires signed HTTP POST events to your endpoint when parse jobs complete. Each delivery includes a CVault-Signature header you should verify before processing.
Manage subscriptions
/api/portal/webhooks
/api/portal/webhooks/{webhook_id}
# Create a webhook
curl -X POST https://cvault.tech/api/portal/webhooks \
-H "x-api-key: cv_live_your_key_here" \
-H "Content-Type: application/json" \
-d '{
"target_url": "https://your-app.com/webhooks/cvault",
"event_types": ["parse.completed", "parse.failed"]
}'
{
"success": true,
"webhook": {
"id": "wh_123",
"target_url": "https://your-app.com/webhooks/cvault",
"event_types": ["parse.completed", "parse.failed"],
"secret": "whsec_abc123..."
}
}
Supported event types
| Event | Fires when |
|---|---|
| parse.completed | All files in a parse job have been processed successfully. |
| parse.failed | A parse job encountered a terminal error. |
| score.completed | A rescore job has finished. |
Signature verification
Every delivery includes a CVault-Signature header. Verify it using your webhook secret to prevent spoofed events.
import hmac, hashlib
def verify_webhook(payload_bytes: bytes, signature_header: str, secret: str) -> bool:
    expected = hmac.new(
        secret.encode(),
        payload_bytes,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature_header)

// Node.js
const crypto = require('crypto');
function verifyWebhook(payloadBuffer, signatureHeader, secret) {
const expected = 'sha256=' + crypto
.createHmac('sha256', secret)
.update(payloadBuffer)
.digest('hex');
return crypto.timingSafeEqual(
Buffer.from(expected),
Buffer.from(signatureHeader)
);
}
Delivery retries
Failed deliveries are retried 3 times with exponential backoff: 1s → 5s → 30s. Your endpoint should return 2xx within 10 seconds. View delivery history at /api/portal/webhooks/{id}/deliveries.
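A minimal receiver sketch using Python's standard library, assuming the CVault-Signature header carries `sha256=<hex digest>` of the raw body as described above. Heavy work should be deferred (queued) so the 2xx response lands within the 10-second window:

```python
import hmac, hashlib, json

def handle_delivery(raw_body: bytes, signature_header: str, secret: str):
    """Return (http_status, event) for one webhook delivery.

    Verifies the HMAC before trusting the payload. In a real app you
    would enqueue the event and return 200 immediately so the response
    arrives within CVault's 10-second window.
    """
    expected = "sha256=" + hmac.new(
        secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        return 401, None  # spoofed or corrupted delivery
    event = json.loads(raw_body)
    # enqueue_for_processing(event)  # hypothetical: do real work async
    return 200, event
```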
Error codes
All errors return JSON with an error string and optional details field.
| HTTP status | Meaning |
|---|---|
400 | Bad request — missing required field, invalid file type, or input too large. |
401 | Missing or invalid API key. |
402 | Monthly parse quota exhausted. Upgrade your plan. |
403 | Action not permitted for your account or plan. |
404 | Job ID or resource not found. |
413 | File exceeds 10 MB limit. |
429 | Rate limit exceeded. Back off and retry. |
500 | Internal server error. Retry with exponential backoff. |
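The retry advice for 429 and 500 can be sketched as a backoff wrapper. `do_request` is a hypothetical stand-in for any function that performs the HTTP call and returns (status_code, body):

```python
import time

RETRYABLE = {429, 500}

def request_with_backoff(do_request, max_attempts=4, base_delay=1.0):
    """Retry transient failures with exponential backoff (1s, 2s, 4s...)."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body
```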
{
"error": "Monthly parse quota exhausted",
"quota_used": 100,
"quota_limit": 100,
"resets_at": "2025-04-01T00:00:00Z"
}
Candidate response schema
The candidate object returned in parse results. All fields are nullable unless marked required.
| Field | Type | Description |
|---|---|---|
| name | string | Full name. |
| email | string | Primary email address. |
| phone | string | Primary phone number. |
| location | string | City, country, or full address. |
| linkedin | string | LinkedIn profile URL. |
| profile | string | Professional summary paragraph. |
| seniority | string | junior / mid / senior / lead / executive |
| skills | string[] | All extracted technical and soft skills. |
| languages | object[] | { language, proficiency } — up to 17 languages supported. |
| experience | object[] | Work history entries. See sub-schema below. |
| education | object[] | Education entries. See sub-schema below. |
| certifications | string[] | Certification names. |
| fit_score | number | 0–100 fit score. Present when score: true and a job description is provided. |
| hiring_recommendation | string | strong_yes / yes / maybe / no |
| verdict | string | AI summary paragraph. Present when verdict: true. |
| red_flags | string[] | Detected issues. Present when red_flags: true. |
| strengths | string[] | Top strengths. Present when strengths: true. |
| total_experience_years | number | Calculated total years of work experience. |
| nationality | string | Nationality if stated on resume. |
| driving_license | string | License category if stated. |
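Because every field may be null, downstream code should read the candidate object defensively. A small sketch of normalizing a candidate dict, using field names from the table above:

```python
def summarize_candidate(candidate: dict) -> str:
    """Build a one-line summary, tolerating missing or null fields."""
    name = candidate.get("name") or "Unknown candidate"
    seniority = candidate.get("seniority") or "unspecified seniority"
    skills = candidate.get("skills") or []
    score = candidate.get("fit_score")
    score_part = f", fit {score}/100" if score is not None else ""
    return f"{name} ({seniority}{score_part}): {', '.join(skills[:3])}"
```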
Experience entry
{
"title": "Senior Software Engineer",
"company": "Acme Corp",
"location": "London, UK",
"start_date": "2021-03",
"end_date": null, // null = current position
"is_current": true,
"description": "Led backend team of 5, built real-time data pipeline...",
"skills_used": ["Python", "Kafka", "PostgreSQL"]
}
Education entry
{
"degree": "BSc Computer Science",
"institution": "University of Manchester",
"location": "Manchester, UK",
"start_date": "2014-09",
"end_date": "2017-06",
"grade": "First Class Honours"
}
Full example
{
"name": "Jane Smith",
"email": "[email protected]",
"phone": "+44 7700 900123",
"location": "London, UK",
"linkedin": "https://linkedin.com/in/janesmith",
"seniority": "senior",
"total_experience_years": 8,
"fit_score": 91,
"hiring_recommendation": "strong_yes",
"verdict": "Jane is an exceptionally strong candidate for this role...",
"skills": ["Python", "Django", "PostgreSQL", "Docker", "Kubernetes"],
"languages": [
{ "language": "English", "proficiency": "native" },
{ "language": "French", "proficiency": "professional" }
],
"red_flags": [],
"strengths": [
"Deep Python expertise with 8 years production experience",
"Direct experience in high-throughput fintech APIs"
],
"experience": [
{
"title": "Senior Software Engineer",
"company": "Acme Fintech",
"start_date": "2021-03",
"end_date": null,
"is_current": true,
"skills_used": ["Python", "Django", "PostgreSQL"]
}
],
"education": [
{
"degree": "BSc Computer Science",
"institution": "University of Manchester",
"end_date": "2017-06"
}
]
}