Circadify
Fraud Prevention · 8 min read

How rPPG Liveness Detection Works on Video Calls

How rPPG liveness detection works on video calls, including blood-flow analysis, presentation attack defense, and the research shaping remote identity checks.

tryfacescan.com Research Team

Video-based onboarding used to be a decent proxy for in-person identity checks. That assumption is getting shaky fast. Fraud teams now have to deal with replay attacks, injected virtual-camera feeds, face swaps, and synthetic identities that can look convincing for long enough to slip through a remote review. That is why rPPG liveness detection on video calls has moved from a niche biometric topic into a practical control for banks, KYC providers, and enterprise identity platforms.

“Deepfakes enable impersonation, manipulation of biometric checks, and fabrication of documentary evidence.” — FATF, Horizon Scan: AI and Deepfakes (December 2025)

rPPG Liveness Detection on Video Calls: The Core Mechanism

Remote photoplethysmography, or rPPG, measures tiny color changes in facial skin caused by blood volume changes with each heartbeat. Those changes are invisible to the human eye, but a camera can capture them across a short video sequence. On a live video call, an rPPG system typically watches the forehead, cheeks, and other skin-rich regions, then analyzes whether those pixel changes behave like a real cardiovascular signal.
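The core idea can be sketched in a few lines: the mean color of a tracked skin region, sampled over a few seconds, carries a weak periodic component in the cardiac frequency band. The following minimal Python example (numpy only; a synthetic green-channel trace stands in for real video, and the 0.7–4.0 Hz band is a common but illustrative choice) estimates the dominant in-band frequency:

```python
import numpy as np

def estimate_pulse_hz(trace, fps):
    """Estimate the dominant cardiac-band frequency in a mean color
    trace from a skin region, plus its share of in-band power."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()                          # drop the DC skin-tone level
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # ~42-240 bpm cardiac band
    peak = int(np.argmax(np.where(band, spectrum, 0.0)))
    return freqs[peak], spectrum[peak] / spectrum[band].sum()

# Synthetic 5 s green-channel trace at 30 fps: a 1.2 Hz (72 bpm)
# pulse component buried in sensor noise.
fps = 30
t = np.arange(fps * 5) / fps
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)

freq, share = estimate_pulse_hz(trace, fps)
print(round(freq, 2))  # -> 1.2
```

Real systems add face tracking, motion compensation, and more robust color projections on top of this, but the frequency-domain check is the heart of the method.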

That matters because a convincing face is not the same thing as a living person. A deepfake can imitate blinking. A replay can imitate head motion. A fraudster can even inject synthetic video directly into a verification session. What is harder to fake is a coherent pulse waveform that stays spatially consistent across facial regions and temporally stable over several seconds.

The foundational work goes back to Wim Verkruysse, Lars Svaasand, and John Nelson at the Beckman Laser Institute (University of California, Irvine), who showed in Optics Express in 2008 that ambient-light video could recover plethysmographic signals from the face. Later algorithmic work from Gerard de Haan and Vincent Jeanne (2013) and Wenjin Wang with colleagues at Eindhoven University of Technology (2017) made rPPG extraction more robust under motion and lighting changes.

What Happens During a Video-Call Liveness Check

In a production video call, the liveness layer usually runs in parallel with face matching and document verification. The user is not asked to perform an elaborate challenge unless the session risk score rises.

A typical workflow looks like this:

  • The platform captures a short face video, often 3 to 5 seconds
  • Skin regions are tracked frame by frame despite head movement
  • RGB traces are extracted from those regions
  • Filters isolate frequencies associated with cardiac pulse
  • The system checks for signal quality, periodicity, and cross-region consistency
  • A liveness score is passed to the broader fraud or KYC decision engine
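The last two steps of that workflow can be sketched as follows. This toy example (numpy; the `min_share` and `max_freq_gap` thresholds and the binary score are illustrative assumptions, not production values) tests two region-of-interest traces for in-band periodicity and cross-region agreement:

```python
import numpy as np

CARDIAC_BAND = (0.7, 4.0)  # Hz, roughly 42-240 bpm

def dominant_freq(trace, fps):
    """Return (dominant in-band frequency, its share of in-band power)."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()                          # remove the DC skin-tone level
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= CARDIAC_BAND[0]) & (freqs <= CARDIAC_BAND[1])
    idx = int(np.argmax(np.where(band, spec, 0.0)))
    return freqs[idx], spec[idx] / spec[band].sum()

def liveness_score(region_traces, fps, min_share=0.3, max_freq_gap=0.3):
    """Toy pass/fail score: every region must show a periodic in-band
    signal, and all regions must agree on the pulse rate."""
    results = [dominant_freq(tr, fps) for tr in region_traces]
    freqs = [f for f, _ in results]
    periodic = all(share >= min_share for _, share in results)
    consistent = (max(freqs) - min(freqs)) <= max_freq_gap
    return 1.0 if (periodic and consistent) else 0.0

# Synthetic 4 s traces at 30 fps: forehead and cheek share a 60 bpm
# pulse; a third trace pulses at a different rate, as a spoof might.
fps = 30
t = np.arange(fps * 4) / fps
rng = np.random.default_rng(1)
pulse = np.sin(2 * np.pi * 1.0 * t)                      # 1.0 Hz = 60 bpm
forehead = pulse + 0.1 * rng.standard_normal(t.size)
cheek = 0.8 * pulse + 0.1 * rng.standard_normal(t.size)
mismatched = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

print(liveness_score([forehead, cheek], fps))       # consistent pulse -> 1.0
print(liveness_score([forehead, mismatched], fps))  # regions disagree -> 0.0
```

Production systems replace the hard thresholds with calibrated scores that feed the downstream decision engine, but the underlying questions are the same: is there a pulse-like signal, and is it consistent across the face?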

That passive design matters. Signicat’s 2024 Battle to Onboard research found that 68% of consumers have abandoned a digital onboarding application at least once. In remote identity flows, every extra prompt costs conversion. Passive liveness gives risk teams another detection layer without turning the call into a script-reading exercise.

Comparison Table: How Video-Call Liveness Methods Differ

| Method | What it checks | Strengths | Weak spots on video calls |
| --- | --- | --- | --- |
| Static selfie review | Visual plausibility | Fast, cheap | Fails against replays, deepfakes, and synthetic identities |
| Active challenge-response | Blink, smile, head turn | Catches simple photo attacks | Adds friction; advanced deepfakes can mimic prompts |
| Texture or artifact analysis | Pixel artifacts, moiré, rendering clues | Useful against some replay or GAN outputs | Performance can decay as generation models improve |
| Device / camera integrity checks | Source authenticity of the video stream | Helpful against virtual-camera injection | Does not prove the face is biologically live |
| rPPG blood-flow analysis | Physiological pulse signal in facial skin | Passive, hard to spoof, well-suited to video | Sensitive to lighting, motion, and poor capture quality |

The real point is not that rPPG replaces everything else. It does something different. It checks for biological presence while the rest of the stack checks document authenticity, facial similarity, and device integrity.

Industry Applications for Video-Call rPPG

Video KYC and remote account opening

Banks and fintechs are increasingly using live video sessions for remote onboarding. In those sessions, rPPG adds a passive “is this a live human?” layer without requiring an agent to rely only on visual judgment. That is especially relevant as injection attacks move from lab demos into production fraud campaigns.

Call-center identity verification

For higher-risk account changes, some institutions escalate customers into video verification. rPPG helps separate a real caller from a replayed or manipulated feed, which is useful when social-engineering attacks are paired with stolen personal data.

Government and regulated identity proofing

The European Banking Authority’s remote onboarding guidance, applicable from October 2023, pushed institutions toward stronger liveness controls for unattended onboarding. The direction of travel is pretty clear: remote identity proofing is expected to include stronger defenses against presentation attacks, not weaker ones.

Enterprise access and step-up authentication

Video-call liveness is also showing up in contractor verification, executive approval workflows, and privileged-access scenarios. In those cases, passive liveness can sit behind a video meeting or identity check without forcing the user through a clumsy challenge flow.

Current Research and Evidence

Several papers and policy documents explain why rPPG remains attractive for video-call security teams.

  • Verkruysse, Svaasand, and Nelson (University of California, Irvine, 2008) showed that ambient-light facial video can recover plethysmographic information, establishing the technical basis for camera-based pulse analysis.
  • de Haan and Jeanne (Philips Research, 2013) introduced the CHROM method in IEEE Transactions on Biomedical Engineering, improving pulse extraction under motion and illumination changes.
  • Wang, den Brinker, Stuijk, and de Haan (Eindhoven University of Technology, 2017) proposed the POS algorithm, another major step in robust rPPG extraction from ordinary video.
  • Si-Qi Liu, Pong C. Yuen, Shengping Zhang, and Guoying Zhao (ECCV 2016) used rPPG for 3D mask face anti-spoofing, showing that non-living presentation materials fail to reproduce the expected blood-flow signal.
  • Umur Aybars Ciftci, Ilke Demir, and Lijun Yin (Binghamton University, 2020) published FakeCatcher in IEEE TPAMI, demonstrating that biological signal inconsistencies can expose synthetic face videos.
  • NIST’s SP 800-63-4 draft guidance (2024) and related commentary raised expectations for presentation-attack detection and media analysis in remote identity proofing.
  • FATF’s 2025 horizon scan made the regulatory concern explicit by tying deepfakes to identity fraud, fabricated evidence, and AML/CFT risk.
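To make the CHROM reference above concrete, here is a simplified sketch of its chromaticity projection, under assumed synthetic data. This omits the bandpass filtering and windowed overlap-add of the full de Haan and Jeanne method, and the RGB trace is fabricated for illustration:

```python
import numpy as np

def chrom_pulse(rgb, eps=1e-9):
    """Simplified CHROM chromaticity projection.
    rgb: (N, 3) array of mean R, G, B values per frame.
    Returns a zero-mean 1-D pulse signal."""
    rgb = np.asarray(rgb, dtype=float)
    norm = rgb / (rgb.mean(axis=0) + eps)        # temporal normalization
    x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]                # X = 3R - 2G
    y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]   # Y = 1.5R + G - 1.5B
    alpha = x.std() / (y.std() + eps)            # tuning factor alpha
    s = x - alpha * y                            # pulse = X - alpha * Y
    return s - s.mean()

# Synthetic skin trace: a stable baseline color plus a small pulsatile
# component, strongest in green, as in real rPPG measurements.
fps = 30
t = np.arange(fps * 5) / fps
pulse = np.sin(2 * np.pi * 1.2 * t)              # 72 bpm
rgb = np.stack([
    150 + 0.3 * pulse,   # R
    100 + 0.5 * pulse,   # G carries the largest pulsatile amplitude
    80 + 0.2 * pulse,    # B
], axis=1)

s = chrom_pulse(rgb)     # oscillates at the injected 1.2 Hz rate
```

The projection is designed so that specular and motion-induced intensity changes, which affect all channels similarly, largely cancel, while the blood-volume signal survives.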

There is one important wrinkle. Research highlighted in 2024 and 2025 suggests higher-quality deepfakes are beginning to imitate heartbeat-like patterns. That does not make rPPG obsolete. It means defenders should treat it as one strong layer in a broader stack, not as a magic trick that solves fraud on its own.

Why Video Calls Are a Better Fit for rPPG Than Single Images

rPPG needs time-series data. A still photo cannot provide it. A video call can.

That gives video-call verification three practical advantages:

  • Enough frames to observe multiple cardiac cycles
  • More chances to compare signals across different facial regions
  • Better resilience against simple replay or static-presentation attacks
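The first point is simple arithmetic. A minimal sketch, assuming a 30 fps webcam and a 4-second window within the typical 3-to-5-second capture range (both values illustrative):

```python
# Frame budget for a passive rPPG check, under assumed capture settings.
fps = 30                 # typical webcam frame rate (assumption)
window_s = 4             # capture window within the usual 3-5 s range

frames = fps * window_s                               # time-series samples
cycles = [window_s * bpm / 60 for bpm in (48, 180)]   # cardiac cycles seen

print(frames, cycles)  # -> 120 [3.2, 12.0]
```

Even at a slow 48 bpm resting pulse, a 4-second window contains more than three full cardiac cycles, which is what makes periodicity testable at all; a single frame contains zero.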

A live call also gives systems more context. They can combine pulse analysis with motion consistency, lighting behavior, device telemetry, and session risk signals. In other words, the video stream becomes more than a face image. It becomes a behavioral and physiological sensor.

The Future of rPPG Liveness Detection on Video Calls

The next phase will probably look less like a standalone biometric check and more like fused decisioning.

Three shifts stand out:

  • More multimodal scoring. rPPG will increasingly be combined with device attestation, face matching, and injection-attack checks in one risk model.
  • More scrutiny of synthetic physiology. As generative models try to imitate blood-flow rhythms, teams will look harder at signal coherence, region-by-region consistency, and camera-sensor artifacts.
  • More regulatory pressure. Remote onboarding rules are moving toward evidence that institutions can resist AI-enabled impersonation, not merely document it after the fact.

That makes rPPG especially useful on video calls. A call already gives platforms the time window needed for physiological analysis. The opportunity is sitting there in the capture itself.

Frequently Asked Questions

How long does rPPG liveness detection need on a video call?

Most systems look for roughly 3 to 5 seconds of usable facial video. That usually provides enough frames to estimate whether a pulse-like signal is present and consistent.

Can rPPG work if the person moves during the call?

Yes, within limits. Modern methods such as CHROM and POS were designed to be more tolerant of ordinary head movement, but heavy motion, poor lighting, or occlusion can still reduce signal quality.

Does rPPG replace other liveness checks?

No. It works best as part of a layered stack. rPPG is strong at testing biological presence, while other tools handle document checks, device integrity, and fraud scoring.

Can deepfakes defeat rPPG?

Some newer research suggests synthetic videos may imitate heartbeat-like patterns more convincingly than older models did. That is exactly why teams should pair rPPG with injection detection, source-integrity checks, and broader presentation-attack defenses.

Video-call identity checks are no longer judged only on whether a face looks real. They are judged on whether the session can prove a person is really there. That is the space where rPPG fits best: not as hype, but as a practical way to test for biological presence in an environment where visual realism has become cheap. For teams building remote verification flows, solutions like Circadify are part of a broader move toward passive, physiology-aware fraud defense. For more on this topic, see How KYC Providers Add rPPG Liveness to Identity Verification and Why Traditional Deepfake Detection Fails (And What Works).

rPPG · Liveness Detection · Video KYC · Deepfake Detection