Worldcoin

Achieving Proof of Human

How to build Proof of Human in a way that is aligned with individual empowerment.

Introduction

In the previous whitepaper, we introduced the idea of an anonymous Proof of Human (PoH) and its high-level building blocks. We detailed how the accelerating capabilities of AI agents create an existential challenge for the integrity of digital interaction, and why a globally inclusive, high-integrity, and privacy-preserving PoH mechanism is critical infrastructure for humanity.

This whitepaper, the second in the World whitepaper series, focuses on how to build PoH. It outlines the key design requirements needed to implement a PoH that is optimized for individual empowerment at scale. We then evaluate how to implement those requirements in practice, including candidate approaches to establishing a root of trust. Finally, we detail how World implements privacy-preserving PoH via World ID, which comprises the Orb, Anonymized Multi-Party Computation (AMPC), and zero-knowledge proof-based authentication.

Building Proof of Human

The implementation of PoH can take many forms, leading to vastly different societal and individual consequences. These potential outcomes span a spectrum, from intrusive, Orwellian surveillance to systems that safeguard privacy and actively enable free expression. Consequently, the core system architecture, along with the resulting capabilities and incentives for all participants, must be designed with extreme care. This section outlines the key properties that we believe are essential for creating the best possible future for humanity, separate from the World project. World ID is our effort to translate these properties into a working system.

Derivative Design Requirements

To establish design requirements, we first need to define what we value most. We place the highest importance on individual empowerment because we think this leads to the most beneficial implementation for (human) society. This means the design requirements should be defined such that they prevent surveillance, maximize privacy, ensure broad participation and accessibility, and uphold freedom of expression and individual agency.

Importance of Uniqueness

Counterintuitively, PoH alone is insufficient because, without uniqueness, it is vulnerable to relay attacks. In this scenario, a small group of humans could authenticate repeatedly to serve millions of automated agents—picture a "human call-center" dedicated solely to passing PoH challenges for bots.

Strict uniqueness—exactly one credential per person—is essential to PoH integrity. Even allowing individuals to hold a small number of credentials creates a fatal vulnerability. If PoH becomes critical societal infrastructure, the incentive to bypass it will be enormous. Those who can acquire multiple credentials will likely find it lucrative to sell all but one. If just 1% of the US population sold nine out of ten credentials, a single malicious actor could masquerade as 31 million Americans. Since often only a fraction of people participate in any given online debate, this is usually more than enough to dominate public discourse.
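The 31 million figure follows from straightforward arithmetic, which can be checked directly (the population figure is an approximation):

```python
# Illustrative arithmetic for the credential-resale scenario described above.
# Assumed figures: US population ~340M; 1% of people each obtain ten
# credentials and sell nine, keeping one for themselves.
us_population = 340_000_000
participating = int(us_population * 0.01)   # 3.4M people selling credentials
credentials_sold_each = 9
fake_voices = participating * credentials_sold_each
print(f"{fake_voices:,} transferable credentials")  # 30,600,000 (~31 million)
```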

The stakes extend well beyond social media. Elected governments wield immense power, making them high-value targets for manipulation. During elections in particular, bot networks armed with illegitimately transferred PoH credentials could pose as passionate citizens, drown out real voices, manufacture false consensus, and ultimately shift voter opinion enough to alter outcomes.

To maximize individual empowerment and protect the integrity of democratic society, PoH uniqueness must therefore be exact.

Figure 1: Building PoH in a way that maximizes human agency leads to high level design requirements as explained in the previous whitepaper. Those requirements can be broken down into derivative requirements for the three core components of PoH: root of trust, uniqueness verification and credential issuance, and authentication. These requirements are costly to implement in practice but important to empower individuals and prevent privacy-invasive tracking.

Three Stages of Proving Uniqueness

The process of proving unique humanness involves three distinct stages:

Stage 1: Root of Trust — Proving an individual is human and acquiring verifiable, high-integrity information. This information is then used in the second stage to establish uniqueness.

Stage 2: Uniqueness Verification and Credential Issuance — Verifying uniqueness based on the previously acquired information and issuing a unique human credential.

Stage 3: Authentication — Using the issued credential to prove one's unique human status to other parties.

Based on the initial design requirements, we can deduce derivative design requirements for each of these three stages:

Design Requirements for Root of Trust

  1. Universal Eligibility. Anyone needs to be eligible.
  2. Uniqueness Accuracy. In order to be able to establish uniqueness, there needs to be enough entropy to distinguish between O(10B) humans without falsely rejecting a single person.
  3. Fake Human Resistance. The humanness and uniqueness test needs to be very hard to bypass or spoof. This includes the ability for the issuer to act quickly if a compromise becomes known.
  4. Borderless and Interoperable. Citizens from different countries need to be able to prove to each other that they are humans; high-integrity uniqueness means countries need to be able to trust the PoH from citizens of other countries to not be bots, to prevent influence operations.
  5. Proof of Country. In order to trust PoH in certain high-stakes scenarios with national context (e.g., opining on presidential candidates) and to prevent global income arbitrage, it is important for people to be able to prove in which country their credential was issued.
  6. Multi-issuer. To ensure inclusivity and make sure no entity has the power to exclude someone from receiving a PoH, there need to be multiple entities as a root of trust—but importantly, they all need to tie into the same uniqueness set.
  7. Checks and Balances. It needs to be possible to validate one's PoH via another issuer, which can make it game-theoretically uneconomical to create fake identities.
  8. No Upload of Sensitive Information. To preserve privacy, no sensitive information should be uploaded.

Design Requirements for Uniqueness Verification

  1. Multi-party Uniqueness Check. The uniqueness check should be performed by multiple parties to prevent any single party from blocking anyone.
  2. Secure Compute. It must be as difficult as possible for any single party to adversarially inject fake identities or deviate from the comparison protocol.
  3. No Centralized Database with Sensitive Information. Sensitive information must not be aggregated or stored in any centralized database.
  4. Recovery. Enable the legitimate owner to reclaim their PoH following theft or sale.

Design Requirements for Authentication

  1. Unlinkable Pseudonymity. It should not be possible to identify who someone is or to track someone across different contexts.
  2. Illicit Transfer Prevention. The system needs to ensure that the person using the PoH credential is the one it was issued to, via a person-bound second factor for periodic reauthentication.
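One way to satisfy unlinkable pseudonymity is to derive a separate pseudonym (a "nullifier") per application context from a user-held secret, as in Semaphore-style zero-knowledge protocols. The sketch below substitutes a plain hash for the real zero-knowledge machinery and only illustrates the linkability properties; all names and values are hypothetical:

```python
import hashlib

def context_nullifier(identity_secret: bytes, context: str) -> str:
    """Derive a context-specific pseudonym: stable within one application,
    unlinkable across applications (assuming the hash is one-way)."""
    return hashlib.sha256(identity_secret + context.encode()).hexdigest()

secret = b"example-identity-secret"  # hypothetical; held only by the user

a1 = context_nullifier(secret, "app-A")
a2 = context_nullifier(secret, "app-A")
b1 = context_nullifier(secret, "app-B")

assert a1 == a2  # same context -> same pseudonym (enables one-per-person limits)
assert a1 != b1  # different contexts cannot be linked to one another
```

In a real deployment the user additionally proves in zero knowledge that the nullifier was derived from a credentialed secret, without revealing the secret itself.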

Ideal Root of Trust for PoH

Overview

When evaluated against the design requirements outlined above, most candidate PoH mechanisms fail for structural reasons. Below, we evaluate several approaches.

Online Accounts

The simplest attempt to establish PoH at scale uses existing accounts such as email, phone numbers, and social media. This method fails because one person can have multiple accounts. Further, accounts are not person-bound—they can be easily transferred—and CAPTCHAs are ineffective because AI agents are now capable of bypassing them. Current methods for preventing duplicate accounts, such as analyzing activity patterns, tend to fail when users have strong incentives to create multiple identities or commit fraud, as demonstrated by large-scale attacks targeting well-established financial services.

Credit Cards

Credit cards have been used in several contexts as a proxy for PoH. Although this method can increase friction for fraudsters, it is far from effective. Beyond not being private, credit cards allow fraudsters to create many duplicates—the simplest method is to pay for many accounts, and fraudsters can acquire large numbers of credit cards either through virtual credit cards or on the dark web. Advances in generative AI let automated agents mass-produce plausible credit identities. Furthermore, many people lack access to financial services. Even in the U.S., 6% of adults don’t have a bank account, and credit card ownership is not universal even among those with accounts. In less developed countries, the share without access is significantly higher. Therefore, credit cards are not an inclusive solution for PoH.

Official ID Verification (Know Your Customer)

Online services often request proof of ID to comply with KYC requirements. In theory, a similar mechanism could be used to issue a PoH based on government documents, but this faces multiple challenges (discussed in depth in Section X). More than 50% of the global population does not have an ID that can be verified digitally, and building KYC verification while preserving anonymity is inherently contradictory. Zero-knowledge proofs and digitally signed IDs can partially address the privacy concern, but NFC-readable IDs are far less prevalent and people can hold multiple government IDs, so perfect uniqueness cannot be achieved.

Web of Trust

The underlying idea is to verify identity claims in a decentralized manner—for example, PGP key-signing parties or projects like Proof of Humanity that use face photos and video calls. However, these systems heavily rely on individuals and are susceptible to human error and Sybil attacks. Staking assets can increase security but decreases inclusivity, and these systems carry privacy concerns (e.g., publishing face images) and susceptibility to fraud using deepfakes.

One could also use information about relationships between people to infer which users are real. Projects like EigenTrust, BrightID and soulbound tokens (SBTs) propose more sophisticated rules based on the observation that social relations can constitute a unique identifier. However, the required relationships are slow to build on a global scale, and it seems inevitable that AI, possibly assisted by humans acquiring multiple "real world" credentials, will be able to create such profiles at scale. Ultimately, these approaches require giving up the notion of a unique human entirely.

Biometrics

Approaches based on online accounts, social graphs, webs of trust, or financial credentials can be mimicked by AI systems and ultimately require accepting multiple identities per person. Biometric verification is the only class of mechanisms that can simultaneously satisfy all design requirements at global scale when implemented correctly. Biometrics are universal, enabling access irrespective of nationality, race, age, gender, or economic means. They provide a natural recovery mechanism and can be used for authentication, making the PoH credential person-bound. Among biometric modalities, iris recognition uniquely satisfies the accuracy, scalability, and privacy requirements, as shown in Fig. 2.

Figure 2: An overview of candidate PoH mechanisms evaluated against the core design requirements established in the previous whitepaper. Only biometrics satisfy all five requirements when implemented correctly.

Deep Dive: Why a Document-based Root of Trust is Not Ideal for Individual Empowerment

In a future shaped by highly capable machine intelligence, the foundations of economic and social power shift. As automation increases, control over bots as well as the ability to determine who counts as a unique human will become an increasing source of power. Any entity that controls PoH gains significant influence over access to platforms, economic participation, and collective decision-making. Therefore, any incentive misalignment between issuer and participants can lead to catastrophic failures. The inherent nature of how documents are issued suggests that any PoH system based on them will result in long-term incentive misalignment.

Despite benefits—such as legal incentives against hacking and the fact that one billion people already have a verifiable document—the structural limitations of document-based PoH make it unviable on a global scale.

Figure 3: Properties of document-based PoH evaluated against core design requirements. Five of eight requirements (red) cannot be met by document-based approaches.

Evaluating against design requirements:

Universal Accuracy

Establishing uniqueness using government documents is challenging. Names are insufficient for deduplication due to high collision rates (e.g., nearly 40,000 James Smiths in the US), and the birthday problem further complicates uniqueness checks. A document's numerical identifier is the only viable uniqueness signal, but this creates a vulnerability where individuals could acquire multiple PoH simply by reporting their document "lost" and obtaining a new one. If 1% of the US population participated, this could easily lead to tens of millions of "authentic human" bot accounts.
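A back-of-the-envelope birthday-problem calculation shows why even full name plus date of birth cannot serve as a deduplication key (using the approximate figures above):

```python
from math import comb

# Even "name + date of birth" fails as a unique key. Assume ~40,000 people
# named James Smith in the US and ~36,500 plausible birthdates (100 years).
n_people = 40_000
n_birthdates = 365 * 100

# Expected number of colliding pairs, by linearity of expectation:
# every pair of James Smiths shares a birthdate with probability 1/36,500.
expected_collisions = comb(n_people, 2) / n_birthdates
print(round(expected_collisions))  # ~21,917 pairs share both name and birthdate
```

With tens of thousands of expected collisions for a single common name, any document-free deduplication scheme must fall back on the document number itself.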

Universal Eligibility

Only about one in eight people possess documents that can be cryptographically verified. Basing PoH on documents would exclude many billions of people.

Fake Human Resistance

Some documents are cryptographically verifiable, making them relatively fraud resistant. However, document-based PoH incentivizes the theft of physical documents like passports. Stolen documents can be used to generate PoH or, in some implementations, be cloned without the owner's knowledge. As AI capabilities grow, the issuing infrastructure will need increasingly advanced capabilities—securing the root of trust, anomaly detection, cross-issuer verification, revocation mechanisms, and dynamic expiration dates. Developing these capabilities is likely better suited to publicly auditable and mutually verifiable companies rather than governments.

Borderless & Interoperable

Interoperability is relatively straightforward since verifiable documents are already standards-based. However, because governments could create fake documents and inject fake PoH credentials, foreign credentials are hard to trust. This replicates geographical borders on the internet and may preclude free exchange between people from different countries.

Proof of Country

Documents include the issuing country, which makes proof of location straightforward.

Multi-issuer

Wherever PoH becomes a prerequisite for interaction, the issuer gains leverage. Loss of PoH will eventually imply exclusion from large parts of the internet, including significant limits on freedom of speech. This can be mitigated if no single entity has a monopoly over issuance, which is not possible for government documents.

Checks & Balances

The ability to issue PoH credentials directly translates to power. For government documents, no second issuer exists for cross-checking and validation. For biometric-based PoH, there can be a diverse set of hardware devices from different manufacturers that can be cross-checked against each other.

No Upload of Sensitive Information

It is possible to verify the integrity of cryptographically verifiable documents without uploading sensitive information.

Deep Dive: Why Iris-based Root of Trust Maximizes Individual Empowerment

To our knowledge, a set of secure hardware devices from different companies (based in different countries) that issue a root of trust based on the entropy of the iris is the only root of trust that fulfills all design requirements, and the one that maximizes individual empowerment. Note: this is not where World ID is today, but it should get there eventually. Iris-based hardware—while able to fulfill all requirements—comes at the cost of being very capital intensive and operationally complex to scale.

Figure 4: Unlike document-based approaches (Fig. 3), an iris-based root of trust satisfies all core design requirements for PoH that maximizes individual empowerment.

Why Iris Biometrics Specifically

Among biometric modalities, iris recognition uniquely satisfies the accuracy, scalability, and privacy requirements of global uniqueness verification. There are two modes to consider: 1:1 authentication (comparing a user's template to a single enrolled template, like Face ID) and 1:N verification (comparing against a large set of templates to prevent duplicates). Global PoH requires the latter—comparing biometrics against eventually billions of previously verified humans. If the mechanism is not accurate enough, an increasing number of users will be incorrectly rejected. Face biometrics are not accurate enough and billions of people would be falsely denied a Proof of Human.

Figure 5: There are two different modes for biometrics. The simpler mode is 1:1 authentication, which involves comparing a user's template against a single previously enrolled template. This is commonly used in technologies such as Face ID, which compares an individual against a single facial template. However, for global Proof of Human, 1:N verification is required. This mode involves comparing a user's template against a large set of templates to ensure that there are no duplicate registrations.

Iris biometrics can achieve false match rates as low as 2.5×10⁻¹⁴ (one false match in 40 trillion), which is several orders of magnitude more accurate than the current state of the art in face recognition, while maintaining a suitable false non-match rate. The structure of the iris is remarkably stable over time, difficult to alter, and essentially unique to each individual. The iris texture is formed by random morphogenesis during gestation—an epigenetic development independent of personal factors—so even genetically identical individuals (identical twins or a person's two irises) have completely uncorrelated iris patterns.
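Iris codes are classically compared via a fractional Hamming distance against a decision threshold, following Daugman's approach. The sketch below is purely illustrative: the code length, noise rate, and threshold are assumptions for demonstration, not World's actual parameters, and real systems also mask occluded bits (eyelids, reflections):

```python
import random

def fractional_hamming_distance(code_a: list[int], code_b: list[int]) -> float:
    """Fraction of disagreeing bits between two binary iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

random.seed(0)
n_bits = 2048                                   # illustrative iris-code length
alice = [random.randint(0, 1) for _ in range(n_bits)]
bob   = [random.randint(0, 1) for _ in range(n_bits)]
# Re-scan of the same eye: each bit flips with ~5% probability (sensor noise).
alice_rescan = [b ^ (random.random() < 0.05) for b in alice]

THRESHOLD = 0.32  # illustrative decision threshold

# Same eye: distance near the noise rate, well below the threshold.
assert fractional_hamming_distance(alice, alice_rescan) < THRESHOLD
# Different eyes: uncorrelated codes disagree on ~50% of bits.
assert fractional_hamming_distance(alice, bob) > THRESHOLD
```

Because unrelated codes concentrate tightly around 50% disagreement, the threshold can be set so that false matches become astronomically rare while genuine re-scans still pass.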

Other biometric modalities exhibit fundamental limitations:

  1. Fingerprints can be unreliable since cuts or wear alter ridge patterns, and capturing high-quality images becomes difficult as fingerprints degrade over time. Using all ten fingerprints or combining modalities is vulnerable to combinatorial attacks.
  2. Facial recognition offers better liveness detection than DNA, but its accuracy is far below that of iris biometrics. At global scale with billions of people, the error rates would lead to double-digit percentage false rejection rates, falsely rejecting billions.
  3. DNA sequencing could in theory be highly accurate, but reveals extensive private information, collection is intrusive, difficult to scale, and there is no practical way to ensure liveness.

Figure 6: Overview of how different biometric modalities impact key considerations such as privacy, accuracy, scalability, and integrity (red indicates insufficient or problematic). Iris biometrics is the only modality that enables all of them.

Why Purpose-Built Hardware Is Needed

Meeting the design requirements necessitates guarantees that cannot be achieved through software or general-purpose hardware alone. Reliable verification depends on:

  1. Compute integrity: Image capture and processing must occur within a tamper-resistant environment and cannot be emulated, replayed, or modified. This requires hardware-backed signing, secure execution, and protections against compromised enrollment flows.
  2. Multispectral imaging and active liveness detection: Single-sensor consumer devices lack the signal diversity to reliably distinguish genuine presentations from high-quality spoofs in adversarial settings.
  3. High-resolution infrared imaging: Required to capture sufficient iris entropy across eye colors. Visible-spectrum cameras suffer from reflections and low contrast, particularly for darker irises.

While smartphones provide the most straightforward and scalable option, the correlation between image quality and biometric accuracy is well established. Both smartphones and existing imaging devices lack sufficient resolution to accurately capture iris biometrics, resulting in unacceptably high error rates. Furthermore, phones and existing iris cameras lack multi-angle and multispectral cameras as well as active illumination to detect presentation attacks with high confidence. A widely viewed video demonstrates an effective method for spoofing Samsung's iris recognition and underscores how easily such attacks succeed without sufficiently advanced hardware.

Additionally, an on-device trusted execution environment (TEE) is needed to guarantee that verifications originate from fully compliant devices rather than emulators. While some smartphones include specialized hardware (Apple's Secure Enclave, Google's Titan M), many do not, or their secure hardware can only be accessed by the device manufacturer. Without such protections, attackers could spoof both image capture and enrollment requests, creating unlimited fraudulent PoH credentials.

Walking Through the Design Requirements

Universal Accuracy

Global uniqueness requires false match rates on the order of 10⁻²⁰ to ensure no single person on Earth is mistakenly excluded. Iris-based algorithms achieve error rates on the order of 10⁻¹⁴ today. These can be further improved; if improvements aren't sufficient, iris entropy can be combined with face-based entropy, which would already achieve error rates below 10⁻²⁰ today.
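The gap between these two error rates becomes concrete when counting expected false matches over all pairwise comparisons, a simplification that ignores matching strategy and bit-error correlation:

```python
# Enrolling N people requires roughly N*(N-1)/2 pairwise comparisons in total.
# The expected number of false matches is that count times the per-comparison
# false match rate (FMR), by linearity of expectation.
N = 8_000_000_000          # target population
pairs = N * (N - 1) // 2   # ~3.2e19 comparisons

for fmr in (1e-14, 1e-20):
    print(f"FMR {fmr:.0e}: ~{pairs * fmr:.2g} expected false matches")
# At 1e-14, hundreds of thousands of people would be falsely flagged as
# duplicates; at 1e-20 the expected count drops well below one.
```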

Universal Eligibility

Iris biometrics surpass the inclusivity of alternatives like verifiable documents by many orders of magnitude. Most health conditions, including cataracts to a degree, do not impede iris biometrics. Specialized verification centers could facilitate alternative verification for individuals with severe eye conditions, e.g., via facial biometrics.

Fake Human Resistance

A purpose-built biometric camera can include multispectral cameras, a TEE, verifiable software, and multiple secure elements to make spoofing very expensive. Any root of trust issued by a particular hardware device can be revoked through a governance mechanism. Implementing incentive mechanisms for decentralized audits can raise the bar far beyond what hardware security alone could achieve.

Borderless & Interoperable

Biometrics and hardware are inherently borderless and it is straightforward to build interoperable infrastructure.

Proof of Country

Hardware devices can verify location via cell tower and GPS connectivity plus continuous audits. With ongoing reauthentication, continuous travel to spoof PoH country credentials becomes expensive.

Multi-Issuer

Multiple companies across different jurisdictions can build auditable and verifiable hardware devices to the same specifications. This prevents censorship.

Checks & Balances

Hardware devices from different companies can issue iris-based roots of trust to the same specifications, enabling cross-checks. With staked security deposits and periodic reauthentication across different devices, it can become game-theoretically uneconomical for a manufacturer to inject fake identities.

No Upload of Sensitive Information

It is possible to issue an iris-based root of trust without uploading sensitive information.

Therefore, we conclude that an iris-based root of trust fulfills all design requirements.

The Orb

Based on the conclusion that iris biometrics via purpose-built hardware is the ideal root of trust, Tools for Humanity (TFH) built the Orb: a high-security, open-source camera that anonymously issues an AI-safe PoH credential. The Orb is purpose-built to verify humanness and ensure uniqueness in a fraud resistant and inclusive way. The humanness check is performed locally on the device, without requiring any images to be stored or uploaded. World Foundation's vision is for device development, production, and operation to be decentralized over time so that no single entity will be in sole control of World ID issuance.

Top Level Requirements

The Orb's primary use case of verifying unique humanness led to the following requirements:

  1. Privacy: The Orb must process images locally on-device and then securely transfer them to the user's custody. This eliminates any requirement for storing images on the Orb or uploading them to a central backend.
  2. Security: The Orb must only verify World IDs of genuine humans, meaning it must be highly resistant to spoofing and tampering, even in adversarial environments.
  3. Transparency and Verifiability: The Orb must require minimal trust in its manufacturers and operators. There must be a way for the public to audit Orbs, including transparency into its design, and the state of software and cryptographic keys (see the Decentralization whitepaper).
  4. Scalability: The Orb must generate iris codes consistently and accurately enough to allow for 8+ billion unique World IDs without significant false matches or false non-matches.

The Orb was designed to maximize trust, user experience, and scalability, with minimal compromises on imaging quality and security. The device's key imaging components include wide-angle RGB/infrared (IR) imaging for capturing high-quality face images, a telephoto IR camera with steerable focus using custom lenses to capture high-quality iris images, 2-D Time of Flight (2D-ToF) for face depth mapping, high-resolution thermal imaging for liveness detection, and imaging at orders of magnitude higher resolution than the industry standard.

Additional software and hardware components to highlight include secure on-device biometric image processing, which ensures images are authentic and generates a signed iris code, and fraud and presentation attack detection (PAD) measures that ensure liveness and humanness (see below). Images are only analyzed in local memory and are deleted after verification.

Hardware Design

The Orb's design is open sourced so others can trust, learn from, and improve the design. The work is intended to serve as an inspiration and starting point for other protocol-compatible verification devices.

The Orb consists of two hemispheres separated by a retaining ring tilted at 23.5° (inspired by the angle of Earth's rotational axis). Once the shells are removed, the Orb divides into four core parts:

  1. Front: Optical system
  2. Middle: Mainboard and Powerboard
  3. Back: Computing unit and cooling
  4. Bottom: Speaker and docking chamber for an exchangeable battery

The optical system consists of several multispectral sensors behind a 2D mirror gimbal to capture high-resolution iris and face images, along with additional liveness signals. Key imaging components include:

  1. Multi-camera, multi-spectral optical system featuring wide-angle RGB/Infrared (IR) imaging for high-quality face images
  2. Telephoto IR camera with steerable focus using custom lenses for high-quality iris images
  3. 2-D Time of Flight (2D-ToF) for face depth mapping
  4. High-resolution thermal imaging for liveness detection
  5. Imaging at orders of magnitude higher resolution than industry standard

The mainboard holds a powerful computing unit enabling secure on-device image processing. The rear hemisphere contains the cooling system and compute module. An exchangeable battery can be inserted from the bottom for mobile operation; constant power via USB-C is also available for stationary use.

Figure 7: All relevant components of the Orb.

Security and Privacy

Presentation Attacks and Liveness Detection

The Orb's presentation attack detection (PAD) system runs in real time to distinguish genuine presentations from attack attempts. For enhanced privacy, all PAD checks run locally—images are never uploaded for PAD processing. The Orb layers multiple complementary checks across a diverse sensor suite to maximize the cost and complexity of attacks:

  1. Challenge-response and passive liveness checks ensure that biometric images on screens, printouts, or simulations are rejected
  2. A thermal sensor verifies the heat signature matches that of a live human
  3. Additional checks for fraudulent biometrics (e.g., patterned contact lenses) and obscured biometrics (e.g., looking away from the camera)
  4. Continuous hardening through internal red teaming and external programs
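The layered checks above compose as a conjunction: a presentation is accepted only if every independent signal passes, so an attacker must defeat all sensors simultaneously. A minimal sketch of this decision logic, with hypothetical signal names and thresholds:

```python
from typing import Callable

# Each check inspects the sensor signals and votes pass/fail independently.
# All names, fields, and thresholds here are illustrative assumptions.
Check = Callable[[dict], bool]

pad_checks: list[Check] = [
    lambda s: s["challenge_response_ok"],           # active liveness challenge
    lambda s: 30.0 <= s["thermal_c"] <= 40.0,       # human heat signature
    lambda s: not s["patterned_contact_detected"],  # fraudulent biometrics
    lambda s: s["gaze_on_camera"],                  # biometrics not obscured
]

def accept(signals: dict) -> bool:
    """Accept only if every layered PAD check passes."""
    return all(check(signals) for check in pad_checks)

live = {"challenge_response_ok": True, "thermal_c": 36.4,
        "patterned_contact_detected": False, "gaze_on_camera": True}
spoof = dict(live, thermal_c=22.0)  # e.g., a screen replay at room temperature

assert accept(live) and not accept(spoof)
```

Conjunctive composition means the attacker's cost scales with the hardest check, not the weakest one, which is why signal diversity across sensors matters.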

Hardware Security

Two unique cryptographic keys are permanently burned into the Orb's hardware: one provisioned into the main SoC during manufacturing, and another stored in a hardware secure element from which it cannot be exported. The Orb will not operate unless both keys are valid and their environments are intact, and no code can run without a valid cryptographic signature. Additional features include active device tamper monitoring, fraud detection, a secure element for authenticity, and an accessible SD card that lets auditors validate code.
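The resulting boot policy can be summarized as "no valid signature, no execution." The sketch below uses an HMAC as a stand-in for the asymmetric signatures a real secure element would verify; the key and firmware contents are hypothetical:

```python
import hashlib
import hmac

# Hypothetical stand-in for a key provisioned at manufacturing and held
# inside the secure element (a real device would use asymmetric signatures,
# with only the public key visible outside the secure element).
SECURE_ELEMENT_KEY = b"provisioned-at-manufacturing"

def sign(firmware: bytes) -> bytes:
    return hmac.new(SECURE_ELEMENT_KEY, firmware, hashlib.sha256).digest()

def boot(firmware: bytes, signature: bytes) -> bool:
    """Run firmware only if its signature verifies; constant-time compare."""
    return hmac.compare_digest(sign(firmware), signature)

firmware = b"orb-firmware-v1"
assert boot(firmware, sign(firmware))         # valid signature: code runs
assert not boot(b"tampered", sign(firmware))  # modified code: refused
```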

Third-Party Audits & Testing

The Orb sets a high bar for defending against scalable attacks. However, any hardware system interacting with the physical world requires continual enhancements. Contributor red teams test various attack vectors. Several audits (Theori, Trail of Bits, Least Authority) have been conducted, and a bug bounty program establishes incentives for external findings.

Distribution

Current

Wide-scale Orb distribution is paramount to bootstrapping PoH from an abstract protocol into something relying parties can actually use. Today, the operational model combines trained operators, large-scale partnerships, and TFH-led flagship locations. Dedicated operators provide predictable hours and high-quality sessions. TFH is integrating with established retail and venue partners to reach people where they already are: transit hubs, shopping corridors, campuses, and civic spaces. TFH continues to run flagship sites to set the bar on safety, privacy, and user experience, and to validate operational playbooks before handing them to partners. TFH is targeting mass deployments globally, prioritizing high-density areas and layering mobile and pop-up routes to close gaps.

Future

The target state is a way to verify your World ID within easy reach of every person across the globe. The network will grow from professional operators into a broad ecosystem including retail counters, service desks, campus groups, community organizations, and independent providers. As the technology standardizes and decentralizes, the operator role becomes more lightweight. The hardware footprint will follow the same arc: form factors enabling mobility and others optimized for unattended use. As consumer hardware evolves and begins to offer widely available secure execution and camera pipelines that meet liveness and imaging quality thresholds, PoH verification will extend beyond dedicated Orbs to consumer-level hardware.

Figure 8: A view of the planned distribution roadmap for the Orb as well as new device form factors to scale PoH verifications across the globe.

Verification of Uniqueness

As established earlier, uniqueness must be absolute—exactly one credential per person. The two hardest parts are making sure every human can receive a PoH and minimizing false acceptances of users who have previously verified. This section describes the generic architecture for achieving both, followed by World's specific implementation.

Determining whether a person has already verified requires global information from all prior verifications, so this process cannot happen locally on verification devices. Instead, a global uniqueness check service is required, complementing the verification hardware in the PoH issuance process. The critical properties are:

  1. Multi-party uniqueness check: The uniqueness check should be performed by multiple parties to prevent any single party from blocking anyone.
  2. Secure compute: It must be as difficult as possible for any single party to adversarially inject fake identities or deviate from the comparison protocol.
  3. No centralized database with sensitive information: Sensitive information must not be aggregated in any centralized database.
  4. Recovery: Enable the legitimate owner to reclaim their PoH following theft or sale.

One way to implement these requirements is through a secure multi-party computation (SMPC) protocol that verifies uniqueness in an anonymous manner without revealing biometric data to any entity. At enrollment, biometric information is processed locally in a TEE on a custom biometric camera and transformed into encrypted, statistically random fragments by verifiable software. Those fragments are sent to the user's phone. The user can then choose to send them to multiple independent node operators. Those nodes have private state and jointly determine whether someone has verified before—crucially, in such a manner where no party learns any statistically meaningful information about the underlying data whatsoever, except whether the entry is unique.
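As a toy illustration of the fragmenting step, the sketch below splits a stand-in iris code into XOR secret shares: each share on its own is uniform random noise, so no single node operator learns anything. Only the sharing step is shown; the real protocol never reconstructs the code, instead computing comparisons directly on the shares.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(code: bytes, n_parties: int) -> list[bytes]:
    """XOR secret sharing: n-1 random shares plus one correcting share."""
    shares = [secrets.token_bytes(len(code)) for _ in range(n_parties - 1)]
    final = reduce(xor_bytes, shares, code)
    return shares + [final]

def reconstruct(shares: list[bytes]) -> bytes:
    """Shown only to verify correctness; the MPC nodes never do this."""
    return reduce(xor_bytes, shares)

iris_code = b"\x5a\x3c\x99\xf0"        # stand-in for a real iris code
shares = split(iris_code, 3)
assert reconstruct(shares) == iris_code
# Any proper subset of shares is statistically independent of the code,
# so an individual node (or any colluding minority) learns nothing.
```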

Figure 9: A diagram of a secure multi-party uniqueness (SMPC) check. When a new person verifies through the biometric hardware device, their iris data is converted into a unique code and split into multiple encrypted, statistically random fragments. Each SMPC party independently compares the new fragment against their set of existing encrypted fragments, and together, determine the existence of potential duplicates without revealing the underlying data to any party. If all SMPC results indicate that the fragment is unique, a signed Proof of Human credential is returned to the user. This process ensures that each Proof of Human corresponds to a single, unique human while preserving user anonymity.

If the uniqueness check is successful, the encrypted fragments are stored in the respective SMPC node and a signed PoH credential is returned to the user. Importantly, the credential should not be used directly to prove humanness, in order to prevent tracking across applications.

As PoH becomes more widely relied upon for platform access, economic participation, and public discourse, losing access to one's credential becomes a serious problem. However, enabling recovery conflicts with the uniqueness of PoH, making credential recovery challenging.

Any recovery mechanism must meet strict criteria: it must preserve the owner's privacy, and only the legitimate owner can be allowed to trigger it. This places severe requirements on the root of trust for recovery. Relying solely on a document like a passport is insufficient, as it would be too easy to impersonate the owner.

Crucially, recovery must deactivate the old access and issue new access without resetting the PoH's unique property. This is necessary to prevent individuals from fraudulently presenting as multiple people or circumventing a block issued due to misuse.

One way to implement this recovery is to store the PoH key in an SMPC system, accessible via user-stored authentication keys. If these authentication keys are compromised, the user can undergo a trusted verification process (e.g., via a secure biometric camera). This process would temporarily enable the user to deactivate the compromised keys, add new ones, and thus regain access.

World’s Implementation: Creating a Unique Iris Code

This section details how World implements the generic uniqueness architecture described above, beginning with how the Orb generates a unique iris code.

Iris recognition was first developed in 1993 by John Daugman. Unlike 2D face images that are mostly defined by landmarks, feature proportions and shapes, iris images present rich and complex texture with semi-periodic variations in image intensity. As a result, they contain strong signals in both the spatial and frequency domains, and effective analysis must take both into account. Examples of iris images can be found on John Daugman's website.

Although the field has advanced since the turn of the millennium, it continues to be heavily influenced by legacy methods and practices. Historically, the morphology of the eye in iris recognition has been identified using classical computer-vision methods such as the Hough Transform or circle fitting. In recent years, deep learning has brought about significant improvements in the field of computer vision, providing tools for understanding and analyzing eye physiology with unprecedented depth.

Rather than using raw images directly, the Orb derives unique codes from iris texture patterns via frequency-based feature extraction (e.g., applying multi-scale Gabor wavelet filters and quantizing their phase response).
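The extraction step can be sketched as follows, with illustrative filter parameters that are not the Orb's actual ones: a complex Gabor kernel is convolved with a normalized iris strip, and the phase of the response is quantized into two bits per location (the sign of the real and imaginary parts).

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, sigma=2.0):
    """Complex 2D Gabor kernel (illustrative parameters only)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * x / wavelength)
    return envelope * carrier

def iris_code(strip: np.ndarray) -> np.ndarray:
    """Filter the strip and quantize the response phase to 2 bits/pixel."""
    k = gabor_kernel()
    # FFT-based (circular) convolution keeps the sketch dependency-free
    resp = np.fft.ifft2(np.fft.fft2(strip) * np.fft.fft2(k, s=strip.shape))
    return np.stack([resp.real > 0, resp.imag > 0]).astype(np.uint8)

rng = np.random.default_rng(0)
strip = rng.random((16, 128))   # stand-in for a normalized iris strip
code = iris_code(strip)         # shape (2, 16, 128): two phase bits/pixel
```

Two codes produced this way are then compared via Hamming distance over their bits.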

Although the iris code is essentially random data not known to reveal any information about the person, the Orb also anonymizes the iris code to create fragments for the AMPC system. This is done on the Orb so that the uniqueness check can later be performed without any exposure of personal or personally identifiable information.

Scale and Error Rates

A false match occurs when two different people are incorrectly judged to be the same. A false non-match occurs when two samples from the same person are incorrectly judged to be different. These two metrics are the key performance indicators for any biometric system and constrain its scale.

At the beginning of the project, World established that in order to scale to a billion people, the system must achieve a false match rate (FMR) — the probability of falsely matching two distinct identities — of 1×10⁻¹². At that rate, even when compared against a billion unique identities, a new genuine user has only a 1 in 1,000 chance of being falsely rejected and having to perform another signup. Reaching this number requires an FMR of 1×10⁻⁶ per iris (as detailed further in a blog post).

The verification system needs to reach these precision levels while maintaining a false non-match rate (FNMR) — the probability of accepting someone already in the system a second time — below 5×10⁻³.
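The arithmetic behind these targets can be checked directly, assuming the two irises of different people match independently:

```python
fmr_per_iris = 1e-6            # target false match rate for a single iris
fmr_both = fmr_per_iris ** 2   # both irises must falsely match: ~1e-12
n_enrolled = 1e9               # entries already in the uniqueness set

# Chance that a new genuine user falsely matches at least one existing
# entry and is therefore rejected: ~0.001, i.e. roughly 1 in 1000.
p_false_reject = 1 - (1 - fmr_both) ** n_enrolled
```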

These targets were exceeded in extensive, purpose-built test sets conducted prior to launch, where the project achieved an FMR of 2.25×10⁻¹⁴ at approximately 1×10⁻³ FNMR.

Limitations

Biometrics are probabilistic, and biometric verification has inherent error rates. In real-world operations, the measured false match rate of the system — the probability of confusing any two people as the same — is approximately 1×10⁻¹², or about 1 in a trillion. At a billion-person scale, this translates to roughly a 99.9% true acceptance rate and a 0.1% false rejection rate, which remains significantly better than known alternatives.

As PoH becomes more important to everyday life, universal accessibility will matter. Everyone should be able to verify if they choose to do so. Many common eye conditions, including cataracts, do not meaningfully affect the accuracy of iris biometrics but there are some more severe eye conditions that can affect accuracy. In time, specialized centers could support alternative verification for individuals with severe eye conditions, for example through facial biometrics. Such extensions, however, would need careful design to preserve the system’s integrity.

World’s Implementation: Anonymized Multi-Party Computation (AMPC)

AMPC is World's specific implementation of the generic SMPC architecture described previously. It is an open-source, multi-party computation system that anonymizes and securely protects MPC fragments of Orb-verified World IDs. AMPC is not only one of the largest MPC-based systems in production but also breaks new ground by leveraging high-end GPUs to significantly increase performance. These technologies set a new standard for privacy, security, and scalability in biometric verification.

Privacy Protections

AMPC offers additional privacy protections by eliminating the need to store iris codes and by avoiding plaintext Hamming distances during verification. It incorporates the latest advances in cryptographic multi-party protocols and ensures that no plaintext biometric data ever leaves the user's device. Iris data is cryptographically processed directly on the Orb, rendering a single iris code into multiple encrypted fragments that individually reveal nothing about the original. The fragments are end-to-end encrypted and transmitted to each compute node so that at no point is user data visible to any party.

One key privacy enhancement is how similarity comparisons are handled. Although iris codes are matched based on Hamming distance, AMPC reveals no information about the distances themselves. The system outputs only a single bit per invocation: whether the individual has previously verified.

In addition, masks used to filter out noise and highlight relevant features during verification are also broken into fragments, ensuring they never exist in plaintext. This eliminates another piece of information and further enhances privacy.
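For illustration, the comparison AMPC performs on secret shares is, in the clear, a masked fractional Hamming distance. The sketch below shows the plaintext computation with hypothetical codes and an illustrative decision threshold; in AMPC both codes and masks exist only as encrypted fragments and only the thresholded bit is revealed.

```python
import numpy as np

def masked_hamming(code_a, mask_a, code_b, mask_b) -> float:
    """Fraction of disagreeing bits among those both masks deem valid."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return float(disagreements.sum() / valid.sum())

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 256, dtype=np.uint8)      # hypothetical iris code
flips = (rng.random(256) < 0.05).astype(np.uint8)  # ~5% capture noise
b = a ^ flips                                     # second capture, same eye
mask = np.ones(256, dtype=np.uint8)               # stand-in: all bits valid

d = masked_hamming(a, mask, b, mask)
is_same_person = d < 0.32   # illustrative threshold, not World's actual one
```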

Decentralization and Transparency

AMPC marks an important step toward decentralization and transparency. Independent third parties operate the compute nodes: Nethermind, a blockchain and research engineering company, operates an independent database in which the anonymized data is stored, alongside the University of Erlangen-Nuremberg (FAU) and the UC Berkeley Center for Responsible Decentralized Intelligence (RDI). Two additional institutions — the Korea Advanced Institute of Science and Technology (KAIST) and the University of Engineering and Technology in Peru (UTEC) — are set to join the network. Today, AMPC is operated exclusively by these independent, trusted organizations; neither the World Foundation nor Tools for Humanity serves as a party in AMPC.

A governance board has been established, including independent external domain experts, to coordinate and supervise updates, ensure accountability, and govern onboarding of third parties to operate compute nodes.

Roadmap

Future improvements aim at scaling the system and reducing compute requirements to make it easier for new third parties to join. Trusted execution environments (TEEs) are in development to minimize potential manipulation. Further, AMPC is open source, and the community is encouraged to review, contribute to, and build upon this work. Looking ahead, the World ID 4.0 protocol upgrade introduces changes that strengthen the uniqueness and privacy layer, including abstract on-chain accounts with multi-key support, a distributed OPRF network for nullifier computation, enforced one-time-use nullifiers, and credential recovery via designated Recovery Agents.

World’s Implementation: Semaphore Set Registration

With AMPC, it is possible to prove that an individual is a member of a set—but that single bit of information is not sufficient for the more complex interactions envisioned for PoH. To enable those interactions, World uses another privacy technology: Semaphore.

Semaphore is a generic, open-source privacy layer for Ethereum applications based on zk-SNARKs (zero-knowledge succinct non-interactive argument of knowledge). Using zero knowledge, Semaphore allows Ethereum users (or users of any chain capable of verifying Groth16 proofs) to prove their membership of a group and send signals (e.g., perform actions, cast votes) without revealing their original identity.

World's version of Semaphore is deployed as a smart contract on Ethereum, with a single set containing the hashes of the World ID secrets for all Orb-verified users. A commitment to this set is replicated to other chains using state bridges so that corresponding verifier contracts can be deployed there.

Individuals interact with the protocol through a wallet containing a Semaphore key pair specific to World ID. Semaphore does not use an ordinary elliptic curve key pair, but leverages a digital signature scheme using a ZKP primitive. The World ID secret is a series of random bytes. The signature is a ZKP that proves the person holds a secret that, when hashed, corresponds to an entry in the identity set. Specifically, the hash function is Poseidon over the BN254 scalar field. The hash of the World ID secret is not revealed or disclosed after initial enrollment.
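The enrollment-side commitment can be sketched as follows. For readability this stand-in uses SHA-256 and a plain set lookup; the actual protocol uses Poseidon over the BN254 scalar field, and membership is proven inside a Groth16 circuit against a Merkle tree rather than checked directly.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Stand-in hash; World uses Poseidon over the BN254 scalar field."""
    return hashlib.sha256(b"".join(parts)).digest()

# Enrollment: the wallet generates a random World ID secret locally.
world_id_secret = secrets.token_bytes(32)
identity_commitment = h(world_id_secret)   # only this hash joins the set

identity_set = {identity_commitment}       # toy stand-in for the Merkle set

# Authentication: a ZKP proves knowledge of a secret whose hash is in the
# set. Here the check is done in the clear purely for illustration.
assert h(world_id_secret) in identity_set
assert h(secrets.token_bytes(32)) not in identity_set  # other secrets fail
```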

Authenticating Using Proof of Human

Generic Architecture

A robust authentication mechanism is needed to prove that an action originates from a human. This mechanism must be privacy-preserving and prevent cross-context tracking:

  1. Unlinkable Pseudonymity: It should not be possible to identify who someone is or to track someone across different contexts.
  2. Illicit Transfer Prevention: The system needs to ensure the person using the PoH credential is the one it was issued to, via a person-bound second factor for periodic reauthentication.

One way to address unlinkable pseudonymity is through a combination of self-custody (user-held key material) and zero-knowledge proofs. Credentials are held locally by users and can be presented without revealing identity or linking activity across contexts. A public, tamper-resistant registry enables verification and revocation without exposing personal data.
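One common way to achieve unlinkable pseudonymity — used, in spirit, by nullifier schemes such as Semaphore's — is to derive a deterministic per-context value from the user's secret. The sketch below uses SHA-256 as a stand-in and hypothetical context identifiers; in a real ZK system the nullifier is computed inside the circuit so the secret itself is never revealed.

```python
import hashlib
import secrets

def nullifier(secret: bytes, context: bytes) -> str:
    """Deterministic per-context pseudonym derived from the user's secret."""
    return hashlib.sha256(secret + context).hexdigest()

secret = secrets.token_bytes(32)
n_vote = nullifier(secret, b"election-2025")   # hypothetical context IDs
n_forum = nullifier(secret, b"forum-signup")

assert nullifier(secret, b"election-2025") == n_vote  # re-use is detectable
assert n_vote != n_forum                              # contexts stay unlinked
```

Because each application sees only its own context's pseudonym, duplicate actions within one context are detectable while activity across contexts cannot be correlated.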

Additionally, a second factor is required. The initial verification alone is insufficient to maintain integrity over time. Without continuous authentication, a credential holder could temporarily delegate access to a malicious actor while retaining the ability to reclaim it—making short-term rental economically attractive. Continuous authentication, like face-based checks against embeddings from the initial verification, can be performed locally on the user's device. By requiring frequent reauthentication, the original owner cannot hand off their credential for extended periods. For high-stakes use cases, users can return to a purpose-built biometric camera for high-assurance authentication (an "anonymous notary").

Figure 10: During enrollment (orange), a biometric camera captures images to confirm the user is a real human, then provides signed and encrypted template fragments to the user's device before permanently deleting the images. A SMPC network performs an encrypted uniqueness check to ensure the user has not previously enrolled, and the app publishes the proof of human commitment to a public registry. During verification (green), a relying party requests proof of humanness from the user's app, which generates and returns a zero-knowledge proof (ZKP) paired with a nullifier. The relying party verifies this proof against the public registry, confirming the user is a unique human without revealing their identity.

World’s Implementation: Personal Custody Package

World's specific implementation of the generic authentication architecture begins with the Personal Custody Package (PCP). The data used by the Orb to determine a person is a real human is immediately packaged into a PCP, encrypted, sent to the user's device, and permanently deleted from the Orb.

World publicly announced personal custody in March 2024. At a high level, personal custody ensures that images taken by the Orb are not exposed to World Foundation's (or any third party's) backend systems, but are stored locally and readily available for the user.

As detailed in the Protocol whitepaper, PCP has evolved over time to generally describe a self-custodial credential. There are now multiple types of PCPs, and as the World protocol becomes more established, the PCP format is expected to be publicly documented and standardized for interoperability between credential issuers and the protocol.

Currently, the Orb-specific PCP contains:

  1. Iris and face embeddings generated by the Orb
  2. Raw iris and face images
  3. MPC fragments for the AMPC system

Many data fields are individually encrypted with a key specific to the use case—for example, each MPC fragment is encrypted with the public key of the specific AMPC party. The entire PCP is then encrypted on the Orb with a key provided by the user, ensuring only the user can access it on their own device.
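The layered encryption can be sketched as below. The party names are hypothetical and a toy XOR stream cipher stands in for real authenticated (and, for the per-party fields, public-key) encryption; the point is the structure: inner per-field keys, outer user-held key.

```python
import hashlib
import json
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy hash-counter keystream; NOT real cryptography."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

toy_decrypt = toy_encrypt   # an XOR stream cipher is its own inverse

# Inner layer: each MPC fragment encrypted for one (hypothetical) AMPC party.
party_keys = {p: secrets.token_bytes(32) for p in ("node_a", "node_b", "node_c")}
fields = {p: toy_encrypt(k, b"fragment-for-" + p.encode())
          for p, k in party_keys.items()}

# Outer layer: the whole package wrapped with a key only the user holds.
user_key = secrets.token_bytes(32)
package = toy_encrypt(
    user_key, json.dumps({p: f.hex() for p, f in fields.items()}).encode())

# Only the user can open the outer layer; each party only its own field.
inner = json.loads(toy_decrypt(user_key, package))
assert toy_decrypt(party_keys["node_a"],
                   bytes.fromhex(inner["node_a"])) == b"fragment-for-node_a"
```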

The PCP provides two primary functions:

  1. Face Auth: A trusted face image from the verification process is compared against a selfie generated by the device.
  2. Uniqueness enrollment: The included MPC fragments can be checked to determine whether the user has enrolled in the uniqueness service. Users can only enroll once to receive a unique PoH credential.
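Face Auth-style checks typically reduce to a similarity comparison between embeddings. A minimal sketch, with random vectors standing in for real face embeddings and an illustrative threshold:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
enrolled = rng.normal(size=128)                  # trusted embedding from the PCP
selfie = enrolled + 0.1 * rng.normal(size=128)   # new on-device capture (noisy)
impostor = rng.normal(size=128)                  # unrelated person

SIM_THRESHOLD = 0.8   # illustrative; real thresholds are tuned on eval data
assert cosine_similarity(enrolled, selfie) > SIM_THRESHOLD
assert cosine_similarity(enrolled, impostor) < SIM_THRESHOLD
```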

How to Address Common Concerns of PoH

Poorly implemented PoH creates severe risks. However, in a rigorous implementation, as outlined in the preceding sections, those risks can be contained:

Concern: PoH is a privacy risk and enables surveillance

Poorly designed PoH systems can be privacy invasive, enabling tracking across applications. However, a well-designed PoH implementation can be strongly privacy-preserving. Without PoH, systems must infer legitimacy indirectly through continuous tracking: behavioral monitoring, device fingerprinting, cross-service correlation, and identity checks. These approaches require persistent visibility into user behavior and create strong incentives for surveillance. PoH shifts the model from invasive monitoring to a privacy-preserving proof, and it can be implemented private-by-design: secure multi-party computation protects privacy when establishing uniqueness, multiple issuers minimize the risk of censorship, and zero-knowledge proofs with unlinkable nullifiers preserve anonymity when proving humanness to others. This way, no cross-service profile can be established and surveillance is prevented.

Concern: PoH requires a centralized database of people

This is the case for traditional identity systems. However, in a well-designed PoH system, uniqueness can be verified using encryption techniques that avoid exposing biometric or identifying data and that distribute trust across multiple independent parties, eliminating the need for a centralized database.

Concern: PoH is a centralizing force

PoH can be designed such that the opposite is the case. Without PoH, influence concentrates among actors who leverage bots, coordinated networks, or purchased accounts. This centralizes power in the hands of those with resources to manufacture participation. PoH inverts this dynamic by making participation human-bounded, which prevents authentic voices from being drowned out and empowers individuals. At the same time, any PoH will go through an implementation and bootstrapping phase that in almost all cases leads to temporary centralization, which needs to be iteratively eliminated over time. Initially, a small group of people builds the first version and holds decision-making authority. The class of PoH systems we advocate for can and must be progressively protected against incompetence and malice on the part of this initial group. The World decentralization whitepaper describes one potential implementation.2

Concern: PoH leads to a black market for credentials

When PoH becomes critical, malicious actors will be strongly motivated to amass credentials, which would undermine PoH's effectiveness. While complete prevention of PoH delegation to bad actors is likely impossible, several measures can significantly increase the difficulty and economic cost of such actions.

Key security and recovery mechanisms include:

  1. Strong Authentication: Using phone-based methods similar to FaceID, combined with regular reauthentication, can ensure that the credential remains under the control of its rightful owner.
  2. Recovery Mechanisms: Enable the legitimate owner to reclaim their PoH following theft or sale, reducing the long-term utility of illicit transfers.
  3. Geographic Association: Optionally disclosing the country of issuance of a PoH can help prevent arbitrage based on income disparities.

Furthermore, as long as each person can only acquire a single PoH, renting it out carries significant risk: if the malicious actor uses the PoH against the terms of use of applications, the original owner risks being locked out of essential applications.

Ultimately, if PoH achieves the importance we anticipate, the delegation or misuse of another person's PoH may reasonably be made illegal, mirroring existing laws against the misuse of a passport.

World ID: Bootstrapping PoH

World is an effort at implementing PoH as described above, with the design goal to maximize individual empowerment.

This whitepaper evaluates multiple approaches to achieving a global PoH mechanism and concludes that biometrics — and specifically iris biometrics — best satisfy the core requirements of inclusivity, uniqueness, person-bound credentials, decentralization, and anonymity. To operationalize iris verification at global scale, TFH built the Orb, a custom, open-source imaging and processing device that prioritizes privacy (local processing and Personal Custody Packages), robust liveness and presentation-attack detection, and auditable security features. In addition to the Orb, Anonymized Multi-Party Computation (AMPC) enables uniqueness checks without exposing biometric data, while Semaphore and zero-knowledge proofs let holders present only the facts they consent to share. These design choices together make a privacy-preserving, auditable pathway to a verified World ID. Iris biometrics offer orders-of-magnitude lower false-match rates than face recognition, and the Orb’s engineering and operational model are aimed at producing a reliable person-bound credential that scales. The result is a practical technical foundation for systems and services that need to know “is this a unique human?” while preserving user anonymity and removing central points of trust.

Proactively Implementing PoH

Bureaucracies tend to address major problems only after significant damage has occurred, when the crisis itself focuses attention on what matters most. However, a reactive approach to PoH is highly undesirable and would likely lead to surveillance. It not only results in avoidable negative consequences but also increases the likelihood of a less considered and simpler implementation, which would likely compromise efficacy, privacy, and freedom of speech. By the time a crisis makes PoH seem necessary—potentially involving extreme outcomes like meaningful increases in societal unrest or other threats to democracy such as outcome-altering election interference—the resulting PoH would most likely be implemented in haste. A rush would likely prioritize speed and functionality above all else, compromising critical elements such as individual empowerment, privacy, and resilience to adversarial actors, making the final system far less beneficial than a proactive solution in ways that cannot be addressed iteratively afterwards (a local optimum).

The World project is a proactive effort to address these challenges. In order to bootstrap PoH ahead of time and avoid some of the negative consequences, World employed one particular approach to counteract the described dynamics: it used its token as an economic bootstrapping mechanism while PoH is new and adoption is nascent, providing an incentive to verify. Analogous mechanisms have been used historically to grow networks at a smaller scale: for example, in PayPal’s early years, the company invested in user incentives to rapidly expand user adoption and achieve network scale, which was critical to its transition from a niche startup to a global platform. The token further gives all network participants a native ownership share for their participation, because the overall system becomes more useful as more people are verified.3

Multiple Uniqueness Credentials

In practice, supporting multiple forms of uniqueness can be useful in the short term to accelerate adoption and improve coverage. Therefore, World ID supports not only PoH credentials through the Orb but also credentials through passports and face-based verification.

However, when evaluated against the requirements for global PoH, these alternatives exhibit structural limitations. Cryptographically verifiable identity documents with embedded chips (e.g., NFC-enabled IDs) suffer from inhomogeneous availability and heterogeneous security infrastructure, and they bind to the document issuance process rather than to the person themselves. Uniqueness degrades through issuance by multiple authorities (e.g., dual citizenship) and through re-issuance or replacement, and robust recovery is incompatible with privacy without introducing avenues for serious government surveillance. In more extreme cases, not all governments can be trusted not to create fake IDs (potentially to disseminate disinformation in other countries or even their own).

Similarly, face-based verification on consumer devices provides limited assurance and does not scale to global uniqueness. Camera quality and sensor diversity limit entropy and therefore the ability to distinguish lookalikes, and it is practically impossible to distinguish real people from high-resolution displays showing deepfakes. Compute integrity on consumer devices is also insufficient to prevent attacks.

For these reasons, while multiple credential types should coexist as transitional or complementary mechanisms, purpose-built biometric hardware appears to be the most beneficial long term approach that satisfies the full set of requirements for robust and privacy-preserving PoH.

World ID Today

Most of the security and privacy measures outlined previously—such as the secure hardware device (the Orb), Secure Multi-Party Computation (SMPC), Zero-Knowledge Proofs (ZKPs), and face authentication—are already integrated into World ID. This is complemented by numerous other privacy-enhancing features and ongoing development to further strengthen the system, including World ID 4.0.

World ID is progressing towards global adoption. At the time of this writing, there are about 40M people on World App (the first of ideally many World ID authenticator apps) and about 18M people verified as human with an Orb. At the same time, current projections for advanced machine intelligence require accelerating adoption in order to enable the benefits of PoH when they are needed.

Other Resources

The World roadmap is a dynamic and evolving blueprint that is subject to change and refinement through input and decisions from the World community. Whether you are a developer, a user, an enthusiast, or simply someone interested in the future of decentralized systems, please reach out through the appropriate channel:

  1. Join the community discussion on X or Discord.
  2. Contribute to open source repositories on GitHub.
  3. Visit the World developer documentation.
  4. Reach out directly to the World team for support questions.
  5. View live on-chain data on the Dune Dashboard.

Footnotes

  1. SBTs are not designed to be a PoH mechanism. Rather, they complement applications where proving relationships rather than unique humanness is needed. However, they are sometimes mentioned in this context and are therefore relevant to discuss.

  2. https://whitepaper.world.org/#advancing-decentralization

  3. For more information on WLD, please see this.