1) Get an API key
Create an account and generate an API key in your dashboard.
Add it to an environment variable:
export AURIGIN_API_KEY="aurigin_test_1234567890abcdef"
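If the variable is missing, requests will fail with a 403, so it can help to fail fast at startup instead. A minimal sketch (the `load_api_key` helper is illustrative, not part of the SDK):

```python
import os

def load_api_key(env_var: str = "AURIGIN_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running.")
    return key
```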
2) Upload a file and get a prediction
Send your audio file directly to POST /predict. The API processes your audio in 5-second segments and returns both a global verdict and per-segment results.
import os
import requests

API_KEY = os.environ.get("AURIGIN_API_KEY", "aurigin_test_1234567890abcdef")
BASE_URL = "https://api.aurigin.ai/v1"
FILE_PATH = "path/to/your/recording.wav"  # Replace with the path to your local audio file

with open(FILE_PATH, "rb") as f:
    files = {
        "file": (os.path.basename(FILE_PATH), f, "audio/wav")
    }
    response = requests.post(f"{BASE_URL}/predict", headers={"x-api-key": API_KEY}, files=files)

print("Multipart response:", response.status_code, response.json())

# The response contains predictions for each 5-second chunk of your audio.
# For a 15-second file, you'll get 3 predictions: ["fake", "real", "fake"]
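If you need a single pass/fail decision from the per-segment labels, one option is a majority vote that treats ties as "fake". A minimal sketch (the majority rule and the flat list of labels are assumptions; check the response schema for the exact field names):

```python
from collections import Counter

def global_verdict(segment_labels: list[str]) -> str:
    """Majority vote over per-segment labels; ties count as 'fake' (conservative)."""
    counts = Counter(segment_labels)
    if counts["fake"] >= counts["real"]:
        return "fake"
    return "real"
```

For example, `global_verdict(["fake", "real", "fake"])` returns `"fake"`.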
3) Voice ID: Enroll and Verify
Voice ID adds identity verification on top of deepfake detection. First, enroll a user’s voice to create a voiceprint, then verify if future voice samples match.
Enroll a Voice
Create a voiceprint from a clean voice sample (>= 10 seconds):
import os
import json
import requests

API_KEY = os.environ.get("AURIGIN_API_KEY", "aurigin_test_1234567890abcdef")
BASE_URL = "https://api.aurigin.ai/v1"
ENROLLMENT_FILE = "path/to/user_voice_enrollment.wav"  # >= 10 seconds

with open(ENROLLMENT_FILE, "rb") as f:
    files = {"audio_file": (os.path.basename(ENROLLMENT_FILE), f, "audio/wav")}
    data = {"user_id": "john_doe"}
    response = requests.post(
        f"{BASE_URL}/voiceid/enroll",
        headers={"x-api-key": API_KEY},
        files=files,
        data=data,
    )

result = response.json()
embedding_vector = result["embedding"]

# Store the embedding vector securely for future verification
print(f"Enrollment successful! Embedding dimension: {len(embedding_vector)}")
print(f"Model: {result['model_version']}")
print(f"Processing time: {result['processing_time']:.2f}s")

# Save embedding for later use
with open("user_voiceprint.json", "w") as out:
    json.dump(embedding_vector, out)
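Since enrollment requires at least 10 seconds of audio, you can check a WAV file's duration locally with the standard library before uploading. A small sketch (the helper names are illustrative; the 10-second minimum comes from the requirement above):

```python
import wave

def wav_duration_seconds(path: str) -> float:
    """Return the duration of a WAV file in seconds (frames / sample rate)."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

def is_long_enough(path: str, minimum: float = 10.0) -> bool:
    """True if the file meets the minimum enrollment duration."""
    return wav_duration_seconds(path) >= minimum
```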
Verify a Voice
Compare a new voice sample against the enrolled voiceprint:
import os
import json
import requests

API_KEY = os.environ.get("AURIGIN_API_KEY", "aurigin_test_1234567890abcdef")
BASE_URL = "https://api.aurigin.ai/v1"
VERIFICATION_FILE = "path/to/verification_sample.wav"

# Load the stored embedding vector
with open("user_voiceprint.json", "r") as f:
    embedding_vector = json.load(f)

with open(VERIFICATION_FILE, "rb") as f:
    files = {"audio_file": (os.path.basename(VERIFICATION_FILE), f, "audio/wav")}
    data = {
        "voice_vector": json.dumps(embedding_vector),
        "user_id": "john_doe",
    }
    response = requests.post(
        f"{BASE_URL}/voiceid/verify",
        headers={"x-api-key": API_KEY},
        files=files,
        data=data,
    )

result = response.json()
print(f"Is Match: {result['is_match']}")
print(f"Similarity Score: {result['similarity_score']:.2%}")
print(f"Confidence Score: {result['confidence_score']:.2%}")
print(f"Model Version: {result['model_version']}")
print(f"Processing Time: {result['processing_time']:.2f}s")
if result["deepfake_score"] is not None:
    print(f"Deepfake Score: {result['deepfake_score']:.2%}")

# Make authentication decision
if result["is_match"] and result["similarity_score"] >= 0.85:
    print("✅ Authentication successful!")
else:
    print("❌ Authentication failed - require additional verification")
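The decision logic above can be factored into a helper so the thresholds live in one place. A sketch under stated assumptions: the 0.85 similarity threshold comes from the example above, while the 0.5 deepfake cutoff is an assumption you should tune for your own risk tolerance:

```python
def authenticate(result: dict,
                 min_similarity: float = 0.85,
                 max_deepfake: float = 0.5) -> bool:
    """Accept only if the API reports a match, similarity clears the bar,
    and (when present) the deepfake score stays below the cutoff."""
    if not result.get("is_match"):
        return False
    if result.get("similarity_score", 0.0) < min_similarity:
        return False
    deepfake = result.get("deepfake_score")
    if deepfake is not None and deepfake >= max_deepfake:
        return False
    return True
```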
Errors
Check your API key and file size before making requests.
Code  Description
400   Invalid input or file too large
403   Authentication failed (check x-api-key)
500   Internal error or upstream unavailability
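In client code, you can map the table above to typed exceptions instead of inspecting raw status codes everywhere. A minimal sketch (the exception names are illustrative, not part of the API):

```python
class AuriginError(Exception):
    """Base class for Aurigin API errors."""

class InvalidInputError(AuriginError):
    """400: invalid input or file too large."""

class AuthError(AuriginError):
    """403: authentication failed (check x-api-key)."""

class ServerError(AuriginError):
    """500: internal error or upstream unavailability."""

_ERRORS = {400: InvalidInputError, 403: AuthError, 500: ServerError}

def raise_for_api_status(status_code: int) -> None:
    """Raise the matching exception for a known error code; otherwise return."""
    exc = _ERRORS.get(status_code)
    if exc is not None:
        raise exc(f"HTTP {status_code}")
```

You would call `raise_for_api_status(response.status_code)` after each request and catch `AuthError` separately from transient `ServerError` cases.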
Next Steps
Deepfake Detection: learn more about detecting AI-generated audio.
Voice ID: explore voice enrollment and verification.