
3D Facial Recognition Technology - How It Works

How 3D Face Recognition Works

The following outlines the major steps by which the technology captures, reconstructs, extracts, and matches a 3D mesh of the face.

Face Capture, using structured light in the near-infrared range:

  • The L-1 3D face reader’s camera projects an invisible structured light pattern onto the face
  • The light pattern is distorted by the surface geometry of the face
  • The camera precisely records the pattern distortion

3D Reconstruction Process (real-time reconstruction of the 3D facial surface):

  • The distorted pattern is input into a 3D reconstruction algorithm
  • A 3D mesh of the face is created by means of triangulation
  • The resulting face geometry is measurable in sub-millimeters
  • The 3D reconstructed image is NOT stored in the database

Feature Extraction and Matching

  • A biometric template is extracted from the 3D facial geometry (skull curvature, etc.)
  • The template is based on the unique rigid tissues of the skull which are unchanging over time
  • The resulting numeric template is stored in an ordinary database
  • Identification is performed by matching the biometric template against the enrollment database
  • Verification is performed by matching the biometric template against a template stored on a smart card
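The identification and verification steps above can be sketched in code. This is a simplified illustration only: it treats a template as a plain numeric feature vector compared by Euclidean distance, whereas L-1's actual template format, matcher, and thresholds are proprietary. All names and the threshold value are assumptions for the sketch.

```python
import math

def match_score(template_a, template_b):
    """Euclidean distance between two numeric templates (lower = more similar)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(template_a, template_b)))

def identify(probe, enrollment_db, threshold=0.5):
    """1:N identification: return the best-matching enrolled ID, or None."""
    best_id, best_score = None, float("inf")
    for person_id, enrolled in enrollment_db.items():
        score = match_score(probe, enrolled)
        if score < best_score:
            best_id, best_score = person_id, score
    return best_id if best_score <= threshold else None

def verify(probe, card_template, threshold=0.5):
    """1:1 verification against a template stored on a smart card."""
    return match_score(probe, card_template) <= threshold
```

Note how the two modes differ only in scope: identification searches the whole enrollment database, while verification compares against the single template read from the card.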

Figure 1 – L-1 Core Technology development. The hardware and grey boxes are proprietary to L-1.

Face Capturing

The enterprise access division’s proprietary hardware for face capturing – the acquisition of facial data – works on the principle of structured, or coded, lighting. Structured lighting consists of projecting a pattern of known spatial structure onto the subject’s face. The structured light is distorted by the individual facial geometry, and these distortions are unambiguously determined by the shape of the scanned surface. By establishing the correspondence between elements of the original pattern and the pattern observed in the image, reconstruction algorithms can precisely restore the geometry of the recorded surface.
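The geometric principle behind recovering depth from the distorted pattern is standard structured-light triangulation: a pattern element shifts in the camera image by an amount inversely proportional to the distance of the surface point, given the baseline between projector and camera. The following sketch uses the textbook relation depth = focal length × baseline / disparity; it is a generic illustration, not L-1's proprietary reconstruction code, and the parameter values are assumptions.

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Textbook structured-light / stereo triangulation.

    A projected pattern element appears shifted by `disparity_px` pixels
    in the camera image; depth (in mm) is inversely proportional to that
    shift, scaled by the projector-camera baseline and the focal length.
    """
    if disparity_px <= 0:
        raise ValueError("pattern element not displaced: depth undefined")
    return focal_length_px * baseline_mm / disparity_px
```

Because depth varies continuously with the measured pixel shift, sub-pixel estimation of the pattern displacement is what makes sub-millimetre surface accuracy possible.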

Face capturing refers to the moment when the camera and the special light take a “picture” of the target. This module includes the software necessary to automate the acquisition process by means of computers. The software controls the hardware functionality and synchronizes all the necessary steps of the acquisition process. A simplified scheme of how the capturing works is shown in Figure 2 below.

Figure 2 – The digitizing equipment. (A) The special projector illuminates the face with an invisible structured light pattern; (C) The camera records the face and the distorted pattern, which contain the key information needed to reconstruct the three coordinates of all points belonging to the face’s surface.

3D Reconstruction

The second step is the reconstruction of the 3D surface, illustrated in Figure 3 below. This module uses a set of proprietary algorithms, designed for surface reconstruction and optimization, based on data received from the camera. After receiving the raw data (the distorted pattern on the target object), the 3D reconstruction algorithms perform image filtering (noise reduction) and then instantly reconstruct the 3D surface, smoothing and interpolating the data to avoid holes and optimizing the mesh.
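The noise-reduction step mentioned above can be illustrated with the simplest possible smoothing filter, a box (mean) filter over a 2D intensity grid. This is a generic sketch of the idea; L-1's actual filtering algorithms are proprietary and certainly more sophisticated.

```python
def box_filter(image, k=3):
    """Simple box (mean) filter for noise reduction on a 2D intensity grid.

    Each output pixel is the average of the k x k window around it,
    clipped at the image borders.
    """
    rows, cols, half = len(image), len(image[0]), k // 2
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [image[nr][nc]
                    for nr in range(max(0, r - half), min(rows, r + half + 1))
                    for nc in range(max(0, c - half), min(cols, c + half + 1))]
            out[r][c] = sum(vals) / len(vals)
    return out
```

Averaging spreads an isolated noisy sample over its neighbourhood, so single-pixel sensor noise is strongly attenuated before the pattern is decoded.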

The algorithm has to recognize the pattern projected onto the surface and calculate, by means of triangulation, all three coordinates of the sampled points on the surface. This results in the surface being described as a cloud of points. After this step, the system interpolates all the points by means of a mesh.
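The interpolation step, filling gaps in the sampled point cloud before a continuous mesh is built, can be sketched as a simple hole-filling pass over a 2D depth grid, where missing samples (holes) are estimated from valid neighbours. This toy version averages the four adjacent samples; it stands in for L-1's proprietary interpolation and mesh-optimization algorithms, which are not public.

```python
def fill_holes(depth_grid, passes=1):
    """Fill None entries in a 2D depth grid by averaging valid 4-neighbours.

    Returns a new grid; the input is left unchanged. Multiple passes
    progressively close larger holes.
    """
    rows, cols = len(depth_grid), len(depth_grid[0])
    grid = [row[:] for row in depth_grid]
    for _ in range(passes):
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] is None:
                    neigh = [grid[nr][nc]
                             for nr, nc in ((r - 1, c), (r + 1, c),
                                            (r, c - 1), (r, c + 1))
                             if 0 <= nr < rows and 0 <= nc < cols
                             and grid[nr][nc] is not None]
                    if neigh:
                        new[r][c] = sum(neigh) / len(neigh)
        grid = new
    return grid
```

Once every grid cell holds a depth value, connecting adjacent samples into triangles yields the watertight 3D mesh the text describes.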

Next, if the colour surface was captured by an L-1 3D face reader, the texture can be calculated and superimposed, after an automatic adaptation, onto the 3D mesh. This stage does not apply to devices using the 3D video unit, where the surface texture is not captured.

It is important to stress that the texture is NOT needed for recognition purposes. The output of this module is the optimized 3D surface or 3D mesh, suitable for further use in the recognition process.

Figure 3 – Flow scheme of the 3D reconstruction process.



© 2011, evolving management solutions - south africa