Questions About Deepfake Detection
Everything fraud teams, identity vendors, and security leaders need to know about rPPG-based liveness verification
Frequently Asked Questions
How does rPPG-based liveness detection work?
Every heartbeat pushes blood through the capillaries beneath facial skin, causing micro-color changes that are invisible to the naked eye but detectable by cameras. Remote photoplethysmography (rPPG) captures these hemodynamic signals. Deepfakes, no matter how visually convincing, are generated pixel by pixel — they contain no real circulatory system and produce no blood volume pulse. Circadify checks for this pulse. If it is absent or inconsistent, the face is not real.
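The principle above can be sketched in a few lines. This is a minimal illustration, not Circadify's actual algorithm: it assumes you have already extracted the per-frame mean green-channel value of a face region at a known frame rate, and it checks whether that signal concentrates its spectral power at a dominant frequency inside the human heart-rate band (~42–240 bpm). The function name, band limits, and power-ratio threshold are illustrative choices.

```python
import numpy as np

def detect_pulse(green_means, fps, band=(0.7, 4.0), min_ratio=0.5):
    """Hypothetical rPPG check: does the per-frame mean green value of a
    face region show a dominant frequency in the heart-rate band?
    Returns (has_pulse, dominant_hz)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()  # remove DC component (baseline skin color)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total_power = spectrum.sum()
    if not in_band.any() or total_power == 0:
        return False, 0.0
    band_power = spectrum[in_band].sum()
    dominant_hz = float(freqs[in_band][np.argmax(spectrum[in_band])])
    # Living skin concentrates spectral power in the heart-rate band;
    # a static photo or pixel-generated face does not.
    has_pulse = band_power / total_power >= min_ratio
    return has_pulse, dominant_hz
```

Fed a ten-second clip of a live face, the dominant frequency corresponds to the subject's pulse (e.g. 1.2 Hz ≈ 72 bpm); a flat or noise-only signal fails the band-power test.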
What kinds of attacks does Circadify detect?
Circadify detects printed photo attacks, screen replay attacks, 3D mask presentations, AI-generated deepfake videos, face-swap injections, and virtual camera stream injections. All of these share the same fundamental weakness: none of them produce the genuine hemodynamic signals that living human skin generates. The detection approach is attack-agnostic — it verifies biology, not visual patterns.
Does the user have to perform any actions?
No. Circadify performs passive liveness detection. There are no blinking prompts, head-turning challenges, or other active instructions. The user simply faces the camera as they would during any normal selfie capture or video call. Blood flow analysis runs silently in the background, preserving a frictionless experience.
Does it work across all skin tones and lighting conditions?
Yes. The rPPG algorithms detect relative color changes over time, not absolute skin color. This makes the analysis inherently inclusive across the full spectrum of human skin tones. The system is also designed to perform reliably in varied lighting conditions — indoor office light, natural daylight, and the mixed lighting environments typical of real-world verification scenarios.
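Why relative change makes the analysis skin-tone independent can be shown with a short sketch. This is an illustrative normalization (an assumed helper, not Circadify's implementation): dividing each frame's color mean by its temporal average yields the same pulse waveform regardless of the absolute baseline brightness.

```python
import numpy as np

def normalized_signal(means):
    """Normalize a per-frame color mean to its temporal average, so the
    recovered waveform reflects *relative* change over time rather than
    absolute skin color or brightness."""
    x = np.asarray(means, dtype=float)
    return x / x.mean() - 1.0
```

Two faces whose baseline pixel values differ by a factor of three, but whose blood volume pulse modulates them by the same fraction, produce essentially identical normalized signals — which is what the detector actually analyzes.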
Does Circadify require specialized hardware?
No. Circadify works with the standard RGB cameras already present in smartphones, laptops, tablets, and kiosk systems. There is no need for infrared sensors, depth cameras, or structured light projectors. If your users can take a selfie or join a video call with their current device, Circadify can run on it.
Where is the facial video processed and stored?
All video analysis runs locally on the capture device. The facial video used for liveness detection is never transmitted to external servers, stored in cloud databases, or shared with third parties. This architecture eliminates the risk of biometric data breaches from server-side storage and supports compliance with data minimization principles.
Have a Technical Integration Question?
Our security engineering team is available to discuss deployment architecture, detection methodology, and integration with your verification stack.
