For e-commerce and logistics platforms handling high volumes, manual address checking is not scalable. The most effective way to achieve **RTO reduction** is by integrating AI-powered verification directly into your fulfillment pipeline. This guide walks your engineering team through the steps to integrate the **Smart Locator Bulk Address Verification API**, showcasing the enterprise features that guarantee reliability.
## 1. The Bulk Verification Integration Flow & Setup
Our bulk job processor is designed to be **asynchronous** and **resumable**, following the **Job Submission > Status Polling > Download** model. The first step is securing your credentials:
### Prerequisite: JWT API Key Setup

All API calls require a valid JSON Web Token (JWT) key passed in the `Authorization: Bearer [token]` header.
**Action:** Generate your unique 30-day API Key directly from your client dashboard under the **API Key Access** section.
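With the key in hand, every request carries it in the header described above. A minimal sketch in JavaScript (the helper name and environment variable are illustrative, not part of an official SDK):

```javascript
// Build the headers required by every Smart Locator API call.
// The key is passed in here; reading it from an environment variable
// or a secret store both work.
function buildAuthHeaders(apiKey) {
  return {
    "Authorization": `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// Example usage with fetch (Node 18+ or the browser):
// fetch("/api/bulk-jobs", { headers: buildAuthHeaders(process.env.SMART_LOCATOR_API_KEY) });
```

Because keys expire after 30 days, wire this helper to your secret store rather than hard-coding the token.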
## 2. Submitting a Bulk Verification Job (POST)
To begin, send a POST request to the bulk jobs endpoint with your order data. The CSV data string must contain exactly the following headers:

- `ORDER ID`
- `CUSTOMER NAME`
- `CUSTOMER RAW ADDRESS`
### Request Payload Structure
**Endpoint:** `POST /api/bulk-jobs`

Payload example (JSON):

```json
{
  "filename": "orders_dec_9.csv",
  "totalRows": 15000,
  "csvData": "ORDER ID,CUSTOMER NAME,CUSTOMER RAW ADDRESS\n1001,Amit S,Hno 1-20, Nager, Hyd..."
}
```
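The submission step can be sketched as follows. The endpoint and field names come from the payload above; the helper names and the exact shape of the response are assumptions for illustration:

```javascript
// Assemble the CSV string and payload from raw order rows.
// Rows with embedded commas should already be double-quoted per CSV rules.
function buildBulkJobPayload(filename, rows) {
  const header = "ORDER ID,CUSTOMER NAME,CUSTOMER RAW ADDRESS";
  return {
    filename,
    totalRows: rows.length,
    csvData: [header, ...rows].join("\n"),
  };
}

// Submit the job and return the parsed response, which carries the jobId
// used for status polling.
async function submitBulkJob(apiKey, payload) {
  const res = await fetch("/api/bulk-jobs", {
    method: "POST",
    headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Bulk job submission failed: ${res.status}`);
  return res.json();
}
```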
## 3. Enterprise Stability: Throttling and Resume Logic
This is where Smart Locator delivers enterprise reliability: features you won't find in basic PIN code APIs. Our system handles the heavy concurrency required to communicate with external services (such as Gemini AI and India Post):
### Throttle Control (`MAX_CONCURRENT_CALLS`)
Our job processor automatically limits verification requests to a fixed concurrency level (e.g., **10 concurrent calls**). This throttling prevents your verification job from overloading external APIs, which would otherwise lead to rate-limiting errors and job failures.
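The batching behaviour described above can be approximated as follows. This is a simplified sketch, not the processor's actual implementation; only the `MAX_CONCURRENT_CALLS` name comes from the documentation:

```javascript
const MAX_CONCURRENT_CALLS = 10;

// Process items in batches of at most `limit`, awaiting each batch
// before starting the next, so no more than `limit` calls are in flight.
async function processWithThrottle(items, worker, limit = MAX_CONCURRENT_CALLS) {
  const results = [];
  for (let i = 0; i < items.length; i += limit) {
    const batch = items.slice(i, i + limit);
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}
```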
### Resumable Job Processing
If your job is interrupted (e.g., server restart, cancellation, or external API outage), the system uses **Resume Logic** to check the database for previously verified addresses (`processedOrderIds`). It **resumes verification** from the last successfully completed record, ensuring you never waste time or credits processing the same rows twice.
```javascript
// Logic snippet from runJobProcessor: skip rows that are already verified.
const addressesToProcess = addresses.filter(addr => !processedOrderIds.has(addr['ORDER ID']));
```

**Client benefit:** guaranteed job completion with no data loss.
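Rebuilding the `processedOrderIds` set on restart can be sketched like this. The database query that supplies `verifiedRows` is a stand-in; only `processedOrderIds` and the `ORDER ID` column come from the snippet above:

```javascript
// Reconstruct the set of already-verified order IDs from persisted results.
// Each verified row is assumed to carry the original ORDER ID column.
function buildProcessedSet(verifiedRows) {
  return new Set(verifiedRows.map(row => row["ORDER ID"]));
}

// Same filter as the runJobProcessor snippet: keep only unprocessed rows.
function remainingAddresses(addresses, processedOrderIds) {
  return addresses.filter(addr => !processedOrderIds.has(addr["ORDER ID"]));
}
```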
## 4. Status Polling and Receiving the Dual Output
Once the job is submitted, capture the returned `jobId` and poll the job status (via `GET /api/bulk-jobs`) until it reads **'Completed'**. You can then download the results using the following specialized parameters:
### Dual Output Download Endpoints
```
// Download the Ready-for-Ship CSV:
GET /api/bulk-jobs?action=download&jobId=<YOUR_JOB_ID>&type=ready

// Download the Manual Check CSV:
GET /api/bulk-jobs?action=download&jobId=<YOUR_JOB_ID>&type=manual
```
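The full poll-then-download loop can be sketched as below. The download URLs match the examples above; the query-parameter form of the status check, the `status` field name, and the polling interval are assumptions:

```javascript
// Poll until the job reports 'Completed', then hand back the job record.
async function waitForCompletion(apiKey, jobId, pollMs = 5000) {
  for (;;) {
    const res = await fetch(`/api/bulk-jobs?jobId=${jobId}`, {
      headers: { "Authorization": `Bearer ${apiKey}` },
    });
    const job = await res.json();
    if (job.status === "Completed") return job;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}

// Build the download URL for either output file: "ready" or "manual".
function downloadUrl(jobId, type) {
  return `/api/bulk-jobs?action=download&jobId=${jobId}&type=${type}`;
}
```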
### Output 1: Ready-for-Ship CSV (Guaranteed Deliverable)
This file contains addresses that successfully passed **all** AI correction and verification checks. Your system can **automatically push this file for dispatch** with confidence, accelerating fulfillment and reducing manual handling.
### Output 2: Manual Check CSV (RTO Prevention File)
This file contains addresses classified for manual review because they failed initial checks, required a **PIN correction**, or received a low **AddressQuality** score. Triage this file to prevent RTO at the source.
By leveraging the dual-output classification, your integration instantly converts address errors from a fulfillment cost into a simple, actionable task for your team.