WO 98/08439    CA 02264029 1999-02-24    PCT/US97/14873

TITLE

APPARATUS FOR ACQUIRING IRIS IMAGES

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to a method and apparatus for illuminating the eye to obtain an image of the iris.

2. Background of the Invention

There are several methods, known as biometrics, for recognizing or identifying an individual. These methods include analyzing a signature, obtaining and analyzing an image of a fingerprint, and imaging and analyzing the retinal vascular patterns of a human eye. Recently the art has used the iris of the eye, which contains a highly detailed pattern that is unique for each individual and stable over many years, as a non-contact, non-obtrusive biometric. This technique is described in United States Patent No. 4,641,349 to Flom et al. and United States Patent No. 5,291,560 to Daugman. The systems described in these references require the person being identified to hold at least one of their eyes in a fixed position with respect to an imaging camera which takes a picture of the iris. While this procedure is satisfactory for some applications, it is not satisfactory for quick transactional activities such as using an automated teller machine, unobtrusive access control or automated dispensing. Other examples are immigration control, point of sale verification, welfare check dispensing, internet banking, bank loan or account opening and other financial transactions.

The iris identification techniques disclosed by Flom and Daugman require a clear, well-focused image of the iris portion of the eye. Once that image is obtained, a comparison of that image with a coded file image of the iris of the person to be identified can be accomplished quite rapidly.
However, prior to the present invention there has not been an optical system which could rapidly acquire a sufficiently clear image of an iris of the person to be identified unless that person positioned his eye in a fixed position relatively close to an imaging camera. There is a need for a system which will rapidly obtain a clear picture of the iris of a person or animal located remotely from the optical system and in an uncertain position. Such a system would be particularly useful to identify users of automated teller machines, as well as individuals seeking access to a restricted area or facility, or for other applications requiring user identification. The system could also be used to identify patients, criminal suspects and others who are unable or unwilling to be otherwise identified.

Automated teller machines, often called ATMs, are widely used for banking transactions. Users are accustomed to receiving relatively fast verification of their identity after inserting their identification card and entering an identification number. However, anyone who knows the identification number associated with a given card can use that card. Should a robber learn the identification number by watching the owner use the card, finding the number written on the card or otherwise, he can easily draw funds from the owner's account. Consequently, banks have been searching for other, more reliable ways of verifying the identity of ATM users.

Since the iris identification methods disclosed by Flom et al. have proved to be very reliable, the use of iris identification to verify the identity of ATM users, and for other remote user recognition or verification applications, has been proposed. However, for such use to be commercially available, there must be a rapid, reliable and unobtrusive way to obtain iris images of sufficient resolution to permit verification and recognition from an ATM user standing in front of the teller machine.
To require the user to position his head a predetermined distance from the camera, such as by using an eyepiece or other fixture, or even without fixturing, is impractical. Thus, there is a need for a system which rapidly locates the iris of an ATM user and obtains a quality image of the iris that can be used for verification and identification. This system should be suitable for use in combination with an access card or without such a card. The system should also be able to obtain such an image from users who are wearing eyeglasses, contact lenses, ski masks or other occluding apparel.

SUMMARY OF THE INVENTION

We provide a method and apparatus which can obtain a clear image of an iris of a person to be identified whose head is located in front of the portion of our optical system which receives light reflected from the iris. The system includes at least one camera, with or without ambient illumination, and preferably one or more illuminators. We also prefer to provide a pan/tilt mirror or gimbal device and at least one lens. Light reflected from the subject is captured by the gimbaled camera or mirror and directed through the lens to the camera. In a preferred embodiment, a narrow field of view (NFOV) camera receives the light reflected from the pan/tilt mirror through the lens, or directly via a gimbal-mounted camera. A second camera and preferably a third camera are provided to obtain a wide field of view (WFOV) image of the subject. In some cases the WFOV cameras may be superfluous, if the user is always known to be in the field of view of the NFOV camera or could be located by moving the NFOV camera. Images from these WFOV cameras are processed to determine the coordinates of the specific location of interest, such as the head and shoulders and the iris of a person to be identified.
Based upon an analysis of those images the pan/tilt mirror or gimbal is adjusted to receive light reflected from the iris or other area of interest and direct that reflected light to a narrow field of view camera. That camera produces an image of sufficient quality to permit iris identification.

The preferred embodiment contains a wide field of view illuminator which illuminates the face of the person to be identified. The illuminator preferably contains a plurality of infrared light emitting diodes positioned around the lens of the wide field of view camera or cameras.

We also prefer to provide two or more narrow field of view illuminators, each comprised of an array of light emitting diodes. These arrays are mounted so as to be rotatable about both a horizontal axis and a vertical axis. By using at least two arrays we are able to compensate for specular reflection and reflection from eyeglasses or contact lenses, or other artifacts which obscure portions of the iris.

We further prefer to construct the arrays so that one set of light emitting diodes has center lines normal to the base, a second set of light emitting diodes has center lines at an acute angle to the base, and a third set of light emitting diodes has center lines at an obtuse angle relative to the base. This gives the array a wider field of illumination. We further prefer to provide a control system which enables us to separately illuminate each group of light emitting diodes. The control system may permit selective activation of individual diodes. An alternative is to provide a single illuminator that is directed in a coordinated manner with the image steering device.

An image processor is provided to analyze the images from the wide field of view camera and thereby specify the location of a point or area of interest on the object or person being identified. A preferred technique for identifying the position of the user is stereographic image analysis.
Alternatively, visible or non-visible range imaging or distance finding devices such as ultrasonic, radar, spread spectrum microwave or thermal imaging or sensing, or other optical means, could be used.

The present system is particularly useful for verifying the identity of users of automated teller machines. The system can be readily combined with most conventional automated teller machines and many other financial transaction machines. Image acquisition and identification can generally be accomplished in less than five seconds, and in less than two seconds in many cases.

Other configurations of cameras, such as one NFOV camera, one WFOV and one NFOV camera, two NFOV cameras, multiple NFOV cameras and multiple WFOV cameras, can be utilized for other special purpose applications such as more or less restricted movement or position scenarios. Examples include iris imaging in a telephone booth or for hands-free telephone use, multiple iris imaging in a crowd of people, iris imaging of people in a moving or stationary vehicle, iris imaging of a race horse, or use at a point of sale site.

Other objects and advantages will become apparent from a description of certain present preferred embodiments shown in the drawings.

BRIEF DESCRIPTION OF THE FIGURES

Figure 1 is a front view of a present preferred embodiment of our device for obtaining images of irises.

Figure 2 is a side view of the embodiment of Figure 1.

Figure 3 is a top plan view of a first present preferred embodiment of our narrow field of view illuminator.

Figure 4 is a side view of the illuminator of Figure 3 with the area of illumination shown in chain line.

Figure 5 is a side view, similar to Figure 4, of a second present preferred embodiment of our narrow field of view illuminator.

Figure 6 is a top view of a first present preferred illuminator bracket.

Figure 7 is a side view showing the illuminator bracket of Figure 6 with the position of the illuminator shown in chain line.

Figure 8 is a top view of a
second present preferred illuminator bracket.

Figure 9 is a side view showing the illuminator bracket of Figure 8 with the position of the illuminator shown in chain line.

Figure 10 is a block diagram showing a preferred control architecture for the embodiment of Figure 1.

Figure 11 is a front view showing the eyes and glasses of the person to be identified, on which a reflection of the wide field of view illuminator appears.

Figure 12 is a front view showing the eyes and glasses of the person to be identified, on which a reflection from the narrow field of view illuminators appears.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring to Figures 1 and 2, a present preferred embodiment of our device is contained in housing 1, which is 14.5 inches wide, 15 inches high, and 18 inches deep. A housing of this size can be easily placed within or near an automated teller machine or other limited access machine or entry place. When incorporated within or placed near the housing of an automated teller machine, our unit is located behind a light transmissive bezel. Typically, the bezel would be smoked or other visually opaque glass, or a comparable plastic, which obscures our device from being easily seen. Our device would be positioned so as to be at about eye level of most users. In Figure 2 we show the head of a user to be identified in front of our device.

In the preferred embodiment of our device shown in Figures 1 and 2, we provide two wide field of view (WFOV) cameras 3, having lenses 2 and 4. The orientation and placement of lenses 2 and 4 and other components may be changed to accommodate available space or other applications. A wide field of view illuminator 6 surrounds the lenses. Hoods 5 and 7 are provided around lenses 2 and 4 to prevent light emitted from the illuminator 6 from passing directly into the cameras. The wide field of view illuminator is comprised of sets of light emitting diodes 10.
For ease of construction, these sets of diodes may be mounted on small circuit boards 12. These boards are then mounted on a housing 13 which surrounds lenses 2 and 4. A sufficient number of LED-containing circuit boards 12 are provided and positioned to illuminate the head of the person to be identified who is standing in front of our device, as shown in Figure 2. Consequently, we prefer that the wide field of view illuminator provide a field of illumination which encompasses a region of about two feet in diameter at a distance of about one foot from the illuminator. This field of illumination is indicated by the solid lines extending from the wide field of view illuminator 6 in Figure 2.

Portions of the wide field of view illuminator are positioned around the WFOV camera to provide nearly on-axis illumination. On-axis illumination, nearly on-axis illumination, and oblique illumination allow surface features to be imaged with minimal shadows, which can otherwise generate false edges or other artifacts. Such illumination of the eye produces a shadow-free image with good surface features. This type of lighting may cause the pupil to be bright, making the iris easier to locate, although this feature can be disabled if desired. Any shadows produced by other light sources and camera angles are minimized or washed out. Illumination control of this type can be used to stimulate parasympathetic autonomic nervous system reflexes that cause eye blinks or pupil variation or other reactions. These changes may be useful to determine subject awareness, to establish certain life signs, and to reduce pupil size for improving imaging resolution.

Light from the wide field of view illuminator is reflected from the user's face into the lenses 2 and 4 of the wide field of view cameras. This enables the WFOV cameras to create images from which an x, y, z coordinate location can be determined for one of the user's eyes.
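The stereo computation of an (x, y, z) eye location from two such cameras can be sketched as follows. This is an illustrative rectified-stereo triangulation, not the specific processing disclosed here; the camera parameters (focal length in pixels, baseline, principal point) are assumed example values.

```python
def eye_position_from_stereo(u_left, u_right, v, focal_px, baseline, cx, cy):
    """Triangulate an (x, y, z) location from the pixel coordinates of the
    same eye seen in two horizontally separated, rectified WFOV cameras.
    u_left, u_right: eye column in each image; v: shared row.
    Returns coordinates in the baseline's units, relative to the left camera."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("invalid match: disparity must be positive")
    z = focal_px * baseline / disparity      # depth by similar triangles
    x = (u_left - cx) * z / focal_px         # back-project through the pinhole model
    y = (v - cy) * z / focal_px
    return x, y, z
```

With an assumed 800-pixel focal length, a 0.12 m baseline and the principal point at (640, 360), a 40-pixel disparity places the eye 2.4 m from the cameras.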
The WFOV cameras may also be used to provide security video images for monitoring transactions or other activities. We take an image of the right eye; however, either the left eye or the right eye, or both eyes, can be selected. The WFOV cameras could utilize a number of techniques, such as stereo, stereo with structured light, and depth from focus with or without structured light, to determine both the x-y location of and the distance to the object of interest. We prefer to use stereo processing techniques which compare at least a portion of the images from the two wide field of view cameras to determine the x, y, z coordinate locations. We can also provide a gaze director 9 which assists in identifying the location and motion of the eye by attracting the attention of the user.

After we know the position of the selected eye, we must illuminate the iris portion of the eye and obtain an iris image which can be used for iris verification and recognition. We prefer to be able to obtain an image having approximately 200 pixels across the iris portion of the image so that the image recognition algorithms used to perform identification and verification can operate reliably.

The iris recognition algorithms compare features of the iris in the image with the same features in a file image. Verification is considered to have been made when there is a match of a predetermined number of the compared features. For some situations a match of at least 75% of the compared features may be required. It is therefore important that there be no specularities, spurious light reflections or dark shadows covering a significant portion of the image. When the user is wearing glasses, this can easily occur. To overcome this problem we provide at least two spaced-apart illuminators for use in conjunction with the NFOV camera. These illuminators, in conjunction with the WFOV camera, may be selectively switched in an orchestrated combination to enhance these image structures.
For example, a structured specularity can be created on eyeglasses to make eye finding easier, and then disabled for iris image acquisition.

In the embodiment shown in Figure 1, a single NFOV camera 16 is positioned behind mirror 18. Light emitted from one or more of the NFOV illuminators 21, 22 or 23 is reflected from the selected eye to an optical subsystem 30 which directs the reflected light to the NFOV camera. We can provide a sensor 14 which senses the level of ambient light surrounding the person to be identified. Information from this sensor can be used to determine what illumination, if any, must be provided by the illuminators 21, 22 and 23. Alternatively, any one or a plurality of the cameras themselves may be used for such light sensing.

The optical subsystem 30 in the embodiment shown in Figure 1 contains a pan/tilt mirror 32 attached to rod 33 which extends from motor 34. This enables the pan/tilt mirror to be rotated about a tilt axis corresponding to a centerline through rod 33. Motor 34 is mounted on arm 35 which is pivotably attached to base 36 by rod 37. This arm 35 can be moved around a pan axis corresponding to a centerline through rod 37. Light emitted from any of the NFOV illuminators 21, 22, or 23 is reflected from the subject iris to the pan/tilt mirror 32. That mirror is positioned to direct the reflected light to mirror 18, from which the light is reflected to NFOV camera 16. The lens of the NFOV camera can be moved to change the focus or zoom, and the aperture of the camera is adjustable. One can also mount NFOV camera 16 on a movable platform so that the NFOV camera can be turned toward the eye. Then the portions of the optical subsystem shown in Figure 1 may not be needed. Our preferred optical subsystem has five degrees of freedom: the pan axis, the tilt axis, the focus axis, the aperture axis and the zoom axis. A system with fewer degrees of freedom could also be used.
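Pointing such a subsystem at a located eye can be sketched with simple geometry. This is an idealized direct-pointing model assuming coordinates centered on the steering element; a real unit would fold in the actual mirror geometry via calibration.

```python
import math

def aim_angles(x, y, z):
    """Pan and tilt angles (degrees) that point the optical axis at a
    location (x, y, z) expressed relative to the steering element, plus
    the range, which can drive the focus axis. Idealized sketch only."""
    pan = math.degrees(math.atan2(x, z))                  # rotation about the vertical axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # rotation about the horizontal axis
    rng = math.sqrt(x * x + y * y + z * z)                # straight-line distance to the eye
    return pan, tilt, rng
```

An eye directly ahead at 2 m needs zero pan and tilt; an eye offset 1 m laterally at 1 m depth needs a 45 degree pan.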
The pan and tilt axes are used to position the pan/tilt mirror 32 so that the correct narrow field is imaged onto the sensing array of NFOV camera 16. The ability to control movements along the focus axis, aperture axis and zoom axis allows us to be certain that the imaged object is in focus. In some cases only the NFOV camera is needed, and the WFOV camera may optionally not be used.

The design of the optics (resolution, magnification and focusing) and the size of the imager dictate the distance between the camera and lens and the distance between the lens and the object to be imaged. The size of the imager is of paramount importance in defining the distance from lens to imager and contributes to the depth of focus. Those versed in the art will recognize that NFOV camera 16 may be solid state or of vidicon nature and that the sensing array size can vary among generic industry sizes of 1/4, 1/3, 1/2, 2/3 or 1 inch diagonal measurement. In an optical system, the introduction of a mirror in an optical path allows the path to be redirected without affecting the optical path length. The use of these mirrors allows the optical path to be folded back on itself, thus reducing the overall physical length needed to implement the optical design. Those skilled in the art will recognize that a gimbaled camera can also be used to perform image steering.

In the embodiment of Figure 1 the illuminators are positioned to provide fields of illumination illustrated by the dotted lines in Figure 2. These fields must be directed and sized so that light will be reflected from the eye of the user to the pan/tilt mirror 32. To achieve this result we provide three illuminators 21, 22 and 23, placed at different locations on the housing 1. These illuminators are oriented to direct light to the areas where the user's eye is most likely to be, based upon information received from the WFOV cameras or other position detectors that could be used.
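The choice of which illuminator to light for a given reported eye position can be sketched as a nearest-aim-point selection. This is a hypothetical stand-in for the controller's real pointing logic; the illuminator names and aim points are invented for illustration.

```python
def select_illuminator(eye_xyz, illuminators):
    """Choose which NFOV illuminator to light for a reported eye position.
    `illuminators` is a list of (name, aim_point_xyz) pairs; the one whose
    aim point lies closest to the eye is selected. Illustrative only."""
    def dist_sq(aim):
        # squared Euclidean distance from the illuminator's aim point to the eye
        return sum((a - e) ** 2 for a, e in zip(aim, eye_xyz))
    return min(illuminators, key=lambda item: dist_sq(item[1]))[0]
```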
The illuminators may be mounted in a permanent location and orientation, or placed on manually adjustable or motorized brackets such as those shown in Figures 6, 7, 8 and 9. The illuminators could also be attached to a sliding mechanism for translation along an axis.

The light source preferably is a light emitting diode or other device that emits infrared or near infrared light, or a combination of these. A lens and diffuser (not shown) can be used to ensure uniform illumination. We have found infrared light to be particularly useful because it penetrates eyeglasses and sunglasses more easily than visible light or colored light within the visible spectrum. Infrared light is also invisible to the user and extremely unobtrusive. Optical filters may be placed in the light path in front of the camera to reduce any undesirable ambient light wavelengths that corrupt the desired images. Different wavelength priority filters may be used to permit different wavelengths to optimize each camera's performance. For example, longer wave IR could be used for WFOV camera imaging and shorter IR could be used for NFOV camera imaging. Then the NFOV cameras would not respond to WFOV illumination. This "speed of light" processing can be used to great advantage. If desired, the LED light source could be strobed. Strobing provides the capability to freeze motion. Strobing also provides the capability to overwhelm ambient light by using a high intensity source for a brief period of time, and exposing the camera accordingly, to wash out the background ambient illumination which would otherwise cause interference.
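The wash-out effect of strobing can be sketched as an energy ratio at the sensor. This is an illustrative first-order model, not a disclosed design equation; units are arbitrary but must match between the two sources.

```python
def strobe_to_ambient_ratio(strobe_power, pulse_ms, ambient_power, exposure_ms):
    """Ratio of strobe-delivered to ambient-delivered light energy reaching
    the sensor during one exposure. A ratio much greater than one means the
    background ambient illumination is effectively washed out."""
    strobe_energy = strobe_power * min(pulse_ms, exposure_ms)  # strobe lasts only the pulse
    ambient_energy = ambient_power * exposure_ms               # ambient integrates the full exposure
    return strobe_energy / ambient_energy
```

For example, a 1 ms pulse one hundred times brighter than ambient, within a 10 ms exposure, delivers ten times the ambient energy.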
Strobing the NFOV and WFOV illuminators, perhaps at different times, and allowing the cameras to integrate photons over appropriate, possibly disparate, time slices permits optimum usage of the specific spatial and time characteristics of each device.

As shown in Figures 1, 3, 4 and 5, the NFOV illuminators are preferably comprised of a 6 x 6 array of light emitting diodes 20 mounted on a circuit board 24. If the light emitting diodes are mounted normal to the circuit board, the illuminator will illuminate an area of illumination having some diameter b, as indicated in Figure 4. We have discovered that the area of illumination can be increased, using the exact same components as are used for the illuminator of Figure 4, by repositioning or otherwise reconfiguring the light emitting diodes. This is shown in the embodiment of Figure 5 by larger diameter a. In that illuminator the top two rows of diodes are positioned at an acute angle relative to the board 24 and the lower two rows of diodes are mounted at an obtuse angle. Circular ring illuminators or other shapes may also be used, which could be placed around the NFOV lens. Other optical elements such as polarizers or birefringent analysis devices may be used to minimize artifacts or enhance other biologically relevant characteristics such as corneal curvature, eye separation, iris diameter, skin reflectance and scleral vasculature.

In Figures 6 and 7 we show a bracket which we have used to attach the NFOV illuminators 21, 22, and 23 to the housing 1. That bracket 30 has a U-shaped base 31 having a hole 32 through which a screw attaches the bracket to the housing. A first pair of gripper arms 33 and 34, with attached pin 42, are pivotably attached to one upright of the base. A similar second pair of gripper arms 35 and 36 are pivotably connected to the opposite upright through collar 37. The illuminator, indicated in chain line in Figure 7, is held between the gripper arms by screws or pins 38.
A series of holes (not visible) are provided along the uprights so that the gripper arms can be positioned at any selected one of several positions. A locking tab 39 extends from the collar 37 into an adjacent slot in the upright to prevent rotation of the collar. Set screw 38 is tightened against the pin 42 extending from gripper arms 35 and 36 to prevent rotation of the gripper arms.

A second bracket 50, which we have used to attach the NFOV illuminators 21, 22, and 23 to the housing 1, is shown in Figures 8 and 9. That bracket 50 has a base 51 which attaches to the housing. A rod 52 extends upward from the base 51. Collar 53 slides along rod 52 and can be held at any desired location on the rod by set screw 54. Rod 55 extends from collar 53 and holds carrier 56. This carrier is slidably attached to the rod 55 in the same manner as collar 53. The illuminator, indicated in chain line in Figure 9, is attached to the carrier 56 by fasteners or a snap fit on pins 57 extending from the carrier. Both this bracket 50 and the other illustrated bracket 30 permit the attached illuminator to be repositioned or adjusted along a pan axis and a tilt axis.

We prefer to connect each array of light emitting diodes through a distribution board 60 to an illumination controller 62, as shown in Figure 10. Since there can be one or more illuminator arrays, these arrays are designated as ILLUMINATOR 1 through ILLUMINATOR X in the drawing. The distribution board 60 and illumination controller 62 enable us to selectively light the WFOV illuminator 6 and the NFOV illuminators 21, 22 and 23 in the embodiment of Figure 1. Furthermore, we can selectively illuminate sets of light emitting diodes within each array, or selectively light individual diodes.

Each set of light emitting diodes may emit a different wavelength of light. It has been noted that the irises of different people respond better to certain wavelengths and worse to other wavelengths of light.
This could be accomplished using illuminators with different wavelength LEDs, or by populating a single illuminator with different wavelength LEDs next to each other. These could be strobed and the better image selected. Additionally, LEDs of different beam widths could be mounted side by side, or in different illuminators, for illumination intensity control or for specularity control: the tighter the beam width, the smaller the specularity.

We can also control the duration and the intensity of the light which is emitted. To prevent burnout of the illuminators caused by prolonged illumination, we can provide timers 63 for each illuminator, as indicated by the dotted blocks labeled "T" in Figure 10. The timers will cut the power to the array after a predetermined period of illumination. The WFOV cameras 3 provide images to an image processor 64, which we call the PV-1. That processor 64 tells the computer 65 the x, y, z coordinates of the selected eye or eyes of the person to be identified. The image processor may also assess the quality of the image and contain algorithms that compensate for motion of the subject. This processor may also perform image enhancement. The PC 65 has access to information from the ambient light level detector 69 and the NFOV camera. These data can be used to modify illumination strategies. The x, y, z coordinates for the expected position of the eye enable the computer to direct the illumination controller as to which illuminators should be lighted, and to direct the pan/tilt controller 66 to properly position the pan/tilt unit 67 so that a useful image of the iris can be obtained. These functions may be selected by results from the WFOV camera processing. Commands are sent from the computer 65 to the motors to change the location of the pan/tilt axes or to adjust focus.
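The per-array burnout timer described above can be sketched as a minimal controller channel. The class and its interface are illustrative assumptions, not the disclosed controller design; times are passed in explicitly so the behavior is deterministic.

```python
class IlluminatorChannel:
    """One illumination-controller channel with a burnout timer: power to
    the array is cut after max_on_s seconds of continuous illumination."""
    def __init__(self, max_on_s):
        self.max_on_s = max_on_s
        self._on_since = None   # time the channel was switched on, or None if off

    def switch_on(self, now):
        self._on_since = now

    def switch_off(self):
        self._on_since = None

    def is_lit(self, now):
        if self._on_since is None:
            return False
        if now - self._on_since >= self.max_on_s:
            self._on_since = None   # timer expired: cut power to protect the LEDs
            return False
        return True
```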
In the simplest case, one may consider that a WFOV image is acquired, the data is processed, and the result is then passed through the image processor 64 and computer 65 to the pan/tilt controller 66 or a gimbal controller. In order to minimize motion time and control settling time, there can be simultaneous motion in the optical subsystem along all five axes.

The pan/tilt controller 66 accepts macro level commands from the computer and generates the proper set points and/or commands for use by the illumination control or each axis supervisor. The intermediate continuous path set points for the axes are generated here and then sent to each axis supervisory controller. A command interpreter decodes the commands from the image analysis and formats responses using positioning information from the optical devices. A real time interrupt produces a known clock signal every n milliseconds. This signal is a requirement for the implementation of a sampled data system for the position controller of each axis, and allows synchronization via the supervisory controller for continuous path motion. A diagnostic subsystem performs health checks for the control system.

Besides the choreography of the five axes, the microprocessor controller must also provide illumination control. The illumination controller will accept commands, similar to the commands associated with motion control, to timely activate selected illuminators, or to activate them synchronously with camera frame taking.

Images from the WFOV cameras are transmitted as analog signals to the image processor 64. The image processor preferably contains two pyramid processors, a memory capable of storing at least two frames, one LUT/ALU device, a digitizer which digitizes the analog video signal, a Texas Instruments TMS320C-31 or C-32 processor, and a serial/parallel processor. The image is processed using the pyramid processors as described in United States Patent No. 5,359,574 to van der Wal.
The Texas Instruments processor computes disparities between images. The WFOV images define a region or point in the field of view of the WFOV cameras where the subject's right eye or left eye, or both, are located. Using stereo processing techniques on the disparities will result in x, y, z coordinates for points on the subject relative to the WFOV cameras. That information is then further processed to define an area of interest such as the head or an eye. The coordinates of the area of interest are used to direct the NFOV optical system. These position coordinates are transferred from the image processor to an NFOV image and iris image processor 65. This unit 65 contains a 486, PENTIUM or other microprocessor system and associated memory. In the memory are programs and algorithms for directing the optical platform and doing iris identification. Additionally, WFOV video images can be stored as a security video record.

The focus axis of the NFOV system may be controlled in an open loop fashion. In this case, the x, y, z coordinate from stereo processing defines, via table lookup or analytic computation, the focus axis position so that the lens properly focuses the NFOV camera on the object of interest. A closed loop focus method could also be used. In this case, NFOV video would be processed by image processor 64 to obtain a figure of merit defining whether the axis was in focus. From the figure of merit the axis could be commanded forward or backward and then a new image acquired. The process would continue in closed loop form until the image is in focus. Other information, such as iris size and location as measured in camera units, and eye separation from WFOV camera images, can be combined with stereo and other focus information into multivariate features that can be used to refine range information by fusion of direct or derived sensory information.
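The figure of merit for the closed loop focus method described above can be sketched as a sharpness measure. The Laplacian-energy metric used here is a common illustrative choice, not the specific measure disclosed; in the closed loop the axis would be stepped forward or backward in whichever direction raises this score.

```python
def focus_merit(image):
    """Focus figure of merit: mean squared response of a discrete
    Laplacian over the interior pixels. Sharper (in-focus) images score
    higher. `image` is a list of rows of grey levels."""
    h, w = len(image), len(image[0])
    total = 0.0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = (image[r - 1][c] + image[r + 1][c]
                   + image[r][c - 1] + image[r][c + 1]
                   - 4 * image[r][c])        # 4-neighbour Laplacian at (r, c)
            total += lap * lap
    return total / ((h - 2) * (w - 2))
```

A uniform (fully defocused) patch scores zero, while any edge content scores positively.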
This sensor fusion may encompass other information as well.

Since the object of interest, namely the eye, may be moving, there is a requirement that the NFOV camera track the trajectory seen by the WFOV camera. When motion ceases to blur the image, a quality image may be acquired via the NFOV camera and optics. By tracking the eye, the optics directing light to the NFOV camera are kept aligned so that when it is desired to obtain an iris-quality image, little or no additional motion may be required.

In this case, the x, y, z coordinates from analysis of the WFOV images are sent to the NFOV controller at some uniform sample rate (such as every 100 ms). A continuous path algorithm, such as described in Robotic Engineering: An Integrated Approach, by Klafter, Chmielewski and Negin (Prentice Hall, 1989), would be used to provide intermediate sets of {p,t,f,a,z} set points to the axes so that the axes remain in motion during the tracking phase. To define the last end position, either a macro level command can be given or the same {p,t,f,a,z} can be continually sent at the sample periods.

It is important to recognize that as the NFOV axes move, the associated imager may not have sufficient time to perform the required integration to get a non-blurred image. Additionally, depending on the camera used (interlaced or progressive scan), there may be field to field displacement or horizontal displacement of the image, all of which can be wholly or partially corrected by computation. Thus it is easily seen why the WFOV camera provides the information necessary for directing the NFOV stage. It should be noted that certain eye tracking algorithms (such as those based on specularity or iris configuration or pattern matching) may be capable of providing sufficient information, even if the image is slightly blurred due to focus or exhibits some blur caused by motion, to provide a reasonable estimate of the eye location in the NFOV camera.
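Generating intermediate {p,t,f,a,z} set points between samples can be sketched with linear interpolation. This is an assumption for illustration; the continuous path methods in the cited text are more general.

```python
def intermediate_setpoints(current, target, steps):
    """Generate `steps` evenly spaced intermediate set points between the
    current five-axis position and the target, so the axes stay in motion
    between uniform-rate position updates. `current` and `target` are
    (pan, tilt, focus, aperture, zoom) tuples."""
    return [tuple(c + (t - c) * k / steps for c, t in zip(current, target))
            for k in range(1, steps + 1)]
```

Sending these at the axis controllers' clock rate keeps motion continuous until the next WFOV update arrives.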
Hence, it is conceptually possible to use the WFOV data for coarse movement and the processed NFOV data (during motion) as additional information for finer resolution. This fusion of data can provide a better estimate than one WFOV camera image alone in positioning the NFOV image to acquire a quality iris image.

To acquire a quality iris image, the NFOV axes must settle to a point where the residual motion is less than that which can be detected by the imager. Once this occurs, any remaining images must be purged from the imager (typically there is a delay between an image being integrated and the readout via RS170) and the proper integration time allowed to acquire a non-blurred image. See Robotic Engineering: An Integrated Approach for a timing scenario. This can be accomplished in a number of ways, the simplest being a time delay which occurs after the cessation of motion until a good quality RS170 image is captured. Multiple iris images which may be partially obscured may be collected and fused into a single composite, less obscured iris image using normalization and fusion methods.

We have found that any light source will cause a reflection on eyeglasses of the person to be identified. The eyes as seen from the WFOV cameras 3 are shown in Figure 11. There is a reflection 70 from the WFOV illuminator 6 on both lenses 72 of the person's eyeglasses 74. The reflection 70 partially covers the iris 76 of the person's eye, making iris identification difficult if not impossible. To overcome this problem we use illuminators 21, 22 and 23 located off-axis from the optical axis of the NFOV camera 16. By carefully positioning and sometimes using only some of the light emitting diodes we can achieve adequate illumination without creating an obscuring reflection.
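The multi-frame fusion of partially obscured iris images mentioned above can be sketched with a toy example. This is an assumed per-pixel averaging scheme, not the patent's normalization and fusion method: each frame is a small 2-D array of intensities with `None` marking pixels obscured by a specular reflection, and the composite keeps the average of whatever unobscured observations exist at each pixel.

```python
# Illustrative sketch, assuming frames are already aligned and normalized;
# the patent does not specify this particular fusion rule.
def fuse_iris_images(images):
    """Combine aligned iris frames into a composite that is less
    obscured than any single input frame."""
    rows, cols = len(images[0]), len(images[0][0])
    composite = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # keep only pixels not blocked by a specularity in this frame
            samples = [img[r][c] for img in images if img[r][c] is not None]
            if samples:
                composite[r][c] = sum(samples) / len(samples)
    return composite

# Two frames, each with a different pixel lost to a reflection:
frame_a = [[100, None], [90, 80]]
frame_b = [[110, 70], [None, 80]]
merged = fuse_iris_images([frame_a, frame_b])
```

Each frame contributes the iris regions it sees clearly, so the composite is obscured only where every frame is obscured.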
This result is shown in Figure 12, where light from only a few light emitting diodes in the NFOV illuminators 21, 22 and 23 has created a reflection 80 that appears in the image. That reflection does not cover any part of the iris 76. In some cases, especially with glasses, the WFOV specularity makes finding the head and the eye more expeditious.

Multiple illuminators also enable us to determine the shape of eyeglasses worn by the subject in the images. We illuminate the eyeglasses sequentially using two spaced apart illuminators. The specularity will be in one position during the first illumination and in a different position during the second illumination. The amount of specularity change is then calculated to determine the appropriate eyeglass shape. From that information we can determine the minimum movement of illumination required to move the specularity off of the iris.

A calibration procedure must be used to correlate the center of the NFOV camera's field of view with pan/tilt and focus axis positions for a series of coordinates in 3 dimensional space as defined by the wide field of view. Given a set of WFOV coordinates {x,y,z} defining the position of a user's eye somewhere in the working volume in front of the cameras, a transformation or table lookup can be used to define the coordinates of the pan, tilt and focus {p,t,f} axes that make the center of the NFOV camera's field of view coincident with the x,y coordinates and in focus on the z plane. We prefer to use a series of targets to assist in calibration. These targets have partially filled circles corresponding to iris positions at known locations on a page. The targets are placed a known distance from the housing and the device is activated to attempt to find an iris and produce a calibration image.

When NFOV and WFOV cameras are used, they must be calibrated together.
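The {x,y,z} to {p,t,f} table lookup described above might look like the following sketch. The table entries here are invented example values, not calibration data from the patent; a real table would be populated by the target procedure just described, and a nearest-neighbor lookup (or an interpolating transformation) would supply the axis commands at run time.

```python
# Illustrative sketch with assumed calibration values, not measured data.
import math

# (x, y, z) in meters -> (pan, tilt, focus) in axis units; example entries
# that a calibration run against known targets would normally produce.
CAL_TABLE = {
    (0.0, 0.0, 1.0): (0.0, 0.0, 10.0),
    (0.2, 0.0, 1.0): (5.0, 0.0, 10.0),
    (0.0, 0.2, 1.5): (0.0, 4.0, 16.0),
}

def lookup_ptf(x, y, z):
    """Return the {p, t, f} commands for the calibrated point nearest
    to the requested WFOV coordinate."""
    nearest = min(CAL_TABLE, key=lambda k: math.dist(k, (x, y, z)))
    return CAL_TABLE[nearest]

# An eye reported near the second calibration target:
p, t, f = lookup_ptf(0.19, 0.01, 1.02)
```

A denser table, or interpolation between neighboring entries, trades calibration effort for pointing accuracy across the working volume.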
This may be accomplished by manual or automatic procedures; optical targets or projected targets may be used. An automatic procedure would require a calibration object to be automatically recognized by the computational support equipment operating on the camera images. Another possibility is to use the NFOV camera motion capabilities or other motion generation capability to project a target into the calibration volume, and then recognize the target.

Although we have shown certain present preferred embodiments of our compact image steering and focusing device and methods of using that device, it should be distinctly understood that our invention is not limited thereto but may be variously embodied within the scope of the following claims.