Patent 3050487 Summary

(12) Patent Application: (11) CA 3050487
(54) English Title: SYSTEM AND METHOD FOR STORING AND DISTRIBUTING CONSUMER INFORMATION
(54) French Title: SYSTEME ET METHODE DE STOCKAGE ET DE DISTRIBUTION DE RENSEIGNEMENTS SUR LES CONSOMMATEURS
Status: Compliant
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 21/62 (2013.01)
  • G06F 16/27 (2019.01)
(72) Inventors :
  • ORTIZ, EDISON U. (Canada)
  • POURTABATABAIE, ARYA (Canada)
  • KHANDAVILLI, AMBICA PAWAN (Canada)
  • SALTER, MARGARET INEZ (Canada)
  • RICHARDS, JORDAN ALEXANDER (Canada)
  • VINTILA, IUSTINA-MIRUNA (Canada)
  • WILKINSON, SARAH RACHEL WAIGH YEAN (Canada)
(73) Owners :
  • ROYAL BANK OF CANADA (Canada)
(71) Applicants :
  • ROYAL BANK OF CANADA (Canada)
(74) Agent: NORTON ROSE FULBRIGHT CANADA LLP/S.E.N.C.R.L., S.R.L.
(74) Associate agent:
(45) Issued:
(22) Filed Date: 2019-07-24
(41) Open to Public Inspection: 2020-01-24
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
62/702,871 United States of America 2018-07-24
62/806,394 United States of America 2019-02-15
62/824,697 United States of America 2019-03-27
62/702,684 United States of America 2018-07-24
62/839,408 United States of America 2019-04-26
16/424,242 United States of America 2019-05-28
16/503,154 United States of America 2019-07-03

Abstracts

English Abstract



A computer implemented system for controlling access to data associated with an entity includes a data storage device having a protected memory region, and one or more processors, at least one of which is operable in the protected memory region. The one or more processors are configured for: storing a secret key associated with the entity in a portion of the protected memory region associated with the entity; upon receiving entity data, storing the entity data in the portion of the protected memory region associated with the entity; and upon receiving an access grant signal, generating a smart contract, the smart contract defining the entity data to be accessed and a recipient of the entity data to be accessed.


Claims

Note: Claims are shown in the official language in which they were submitted.



WHAT IS CLAIMED IS:

1. A computer implemented system for controlling access to data associated
with an entity,
the system comprising:
a data storage device having a protected memory region;
one or more processors, at least one of which is operable in the protected
memory
region and configured for:
storing a secret key associated with the entity in a portion of the protected
memory
region associated with the entity;
upon receiving entity data associated with the entity, storing the entity data
in the
portion of the protected memory region associated with the entity; and
upon receiving an access grant signal, generating a smart contract, the smart
contract defining the entity data to be accessed and a recipient of the entity
data to be
accessed, the smart contract configured to trigger a message for communicating information associated with the entity data to a recipient device upon
satisfaction of at
least one verification condition.
2. The system of claim 1, wherein the system comprises a trusted execution environment including the protected memory region, the protected memory region inaccessible to the one or more processors when operating outside the trusted execution environment; wherein at least one processor configured to operate inside the trusted execution environment is configured for: generating the information associated with the entity data within the trusted execution environment, and passing the information associated with the entity data for communication outside the trusted execution environment.
3. The system of claim 1, wherein the information associated with the entity
data is a token
or is encrypted data based on the entity data.

4. The system of claim 1, wherein the entity data includes one or more token data objects received from or computed jointly in a multiparty protocol with an issuer computing device, the one or more token data objects generated using at least an issuer computing device private issuance key, the one or more token data objects each including one or more signed data elements representing at least one of the one or more characteristics of the entity.
5. The system of claim 1, wherein the entity data is generated by an issuer
computing
device using the public key associated with the entity and a key associated
with the issuer.
6. The system of claim 1, wherein the at least one processor operable in the protected memory region is configured for: generating a log of activity associated with the entity data.
7. The system of claim 1, wherein the one or more processors are configured to set and enforce access controls of the smart contract based on at least one key associated with the entity.
8. The system of claim 7, wherein the one or more processors are configured to
generate
one or more access tokens for accessing the portion of the protected memory
region
associated with the entity.
9. The system of claim 1, wherein the at least one verification condition is
met when the
one or more processors generate a verification request data message, and
receive a proof
data message from the recipient device.
10. The system of claim 9, wherein the verification request data message includes at least a nonce c0; and the client computing device processor is configured to:
compute t = x^(-1) mod p, where x is an attribute value from the one or more token data objects, and p is an order of the discrete log group; t is a modular inverse of x mod p;
uniformly sample a first random number, r1, and a second random number, r2, such that r1, r2 ∈ Zp;
compute [Image], where R is a commitment to random values r1 and r2, Cx is a commitment to attribute x, and h is a group generator;
compute c = H(Cx, R, c0), where c is a proof challenge, based at least on the Fiat-Shamir heuristic;
compute z1 = ct + r1 and z2 = -cty + r2, where z1 and z2 are proof responses based on a Sigma protocol; and
encapsulate and transmit the one or more proof data messages including R, z1 and z2 as data objects to the verifier computing device, such that the verifier computing device is able to compute c = H(Cx, R, c0) and confirm that [Image], the verifier computing device controlling provisioning of access to a secured resource responsive to the confirmation that [Image].
11. A method for controlling access to data associated with an entity, the method comprising:
storing a secret key associated with the entity in a portion of a protected memory region associated with the entity;
upon receiving entity data associated with the entity, storing the entity data in the portion of the protected memory region associated with the entity; and
upon receiving an access grant signal, generating a smart contract, the smart contract defining the entity data to be accessed and a recipient of the entity data to be accessed, the smart contract configured to trigger a message for communicating information associated with the entity data to a recipient device upon satisfaction of at least one verification condition.
12. The method of claim 11, comprising: generating the information associated
with the
entity data within a trusted execution environment, and passing the
information associated
with the entity data for communication outside the trusted execution
environment.
13. The method of claim 11, wherein the information associated with the entity
data is a
token or is encrypted data based on the entity data.
14. The method of claim 11, wherein the entity data includes one or more token data objects received from or computed jointly in a multiparty protocol with an issuer computing device, the one or more token data objects generated using at least an issuer computing device private issuance key, the one or more token data objects each including one or more signed data elements representing at least one of the one or more characteristics of the entity.
15. The method of claim 11, wherein the entity data is generated by an issuer
computing
device using the public key associated with the entity and a key associated
with the issuer.
16. The method of claim 11, comprising generating a log of activity associated
with the
entity data in the protected memory region.
17. The method of claim 11, comprising enforcing access controls of the smart
contract
based on at least one key associated with the entity.
18. The method of claim 17, comprising generating one or more access tokens for accessing the portion of the protected memory region associated with the entity.
19. The method of claim 11, wherein the at least one verification condition is
met when the
one or more processors generate a verification request data message, and
receive a proof
data message from the recipient device.
20. A computer readable medium or media having stored thereon machine interpretable instructions, which when executed, cause at least one processor to:
store a secret key associated with an entity in a portion of a protected memory region associated with the entity;
upon receiving entity data associated with the entity, store the entity data in the portion of the protected memory region associated with the entity; and
upon receiving an access grant signal, generate a smart contract, the smart contract defining the entity data to be accessed and a recipient of the entity data to be accessed, the smart contract configured to trigger a message for communicating information associated with the entity data to a recipient device upon satisfaction of at least one verification condition.

Description

Note: Descriptions are shown in the official language in which they were submitted.


SYSTEM AND METHOD FOR STORING AND DISTRIBUTING CONSUMER
INFORMATION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a non-provisional of and claims all benefit
including priority to
U.S. Provisional Patent Application 62/702,871, filed July 24, 2018.
[0002] This application is also a continuation-in-part of U.S. Patent
Application
16/424,242, filed May 28, 2019, which is a non-provisional of and claims
priority to:
[0003] U.S. Provisional Application No. 62/677,133 filed May 28, 2018;
[0004] U.S. Provisional Application No. 62/691,406 filed June 28, 2018;
[0005] U.S. Provisional Application No. 62/697,140 filed July 12, 2018;
[0006] U.S. Provisional Application No. 62/806,394 filed February 15, 2019;
and
[0007] U.S. Provisional Application No. 62/824,697 filed March 27, 2019.
[0008] This application is also a continuation-in-part of U.S. Patent
Application
16/503,154, filed July 3, 2019, which is a non-provisional of and claims
priority to:
[0009] U.S. Application No. 62/693,680, filed July 3, 2018;
[0010] U.S. Application No. 62/702,684, filed July 24, 2018; and
[0011] U.S. Application No. 62/839,408, filed April 26, 2019.
[0012] All of the above references are hereby incorporated by reference.
FIELD
[0013] The present disclosure generally relates to the field of consumer
information, and
more specifically, to storing and distributing entity (e.g. consumer)
information and entity
data.
BACKGROUND
[0014] Today, consumers may not have adequate control of, or access to, their own information, including transactional data relating to past purchases and other types of consumer information. In addition, consumer information may not be properly protected when vendors access the information for commercial purposes.
[0015] Improved systems and methods for storing and distributing consumer data
are
therefore desired.
SUMMARY
[0016] In accordance with an aspect, there is provided a system that is
configured to
give consumers access and control of their own information. The system may
include: a
data storage unit storing a user profile; and a processor configured with
computer readable
instructions stored in the data storage unit to: receive and store one or more
sets of
consumer data, each set of consumer data having metadata identifying a source of the
set of consumer data; categorize the one or more sets of consumer data;
present the one
or more sets of consumer data through a user interface to the consumer;
receive a user
request to transmit at least one of the one or more sets of consumer data to a
client;
transmit the at least one set of consumer data to the client; and make a
payment to the
consumer in view of the at least one set of consumer data transmitted to the
client.
[0017] In accordance with another aspect, there is provided a computer
implemented
system for controlling access to data associated with an entity, the system
comprising: a
data storage device having a protected memory region; one or more processors,
at least
one of which is operable in the protected memory region and configured for:
storing a
secret key associated with the entity in a portion of the protected memory
region
associated with the entity; upon receiving entity data, storing the entity
data in the portion
of the protected memory region associated with the entity; and upon receiving
an access
grant signal, generating a smart contract, the smart contract defining the
entity data to be
accessed and a recipient of the entity data to be accessed, the smart contract
configured
to trigger a message for communicating information associated with the entity
data to a
recipient device upon satisfaction of at least one verification condition.
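By way of a non-limiting illustration, the flow described in the preceding paragraph (store a per-entity secret key, store entity data in the entity's portion of the protected memory region, then generate a smart contract when an access grant signal is received) can be pictured in a few lines of Python. The class and field names below are hypothetical sketches only and do not reflect the enclave, key-management, or smart-contract details of the disclosure.

    # Illustrative sketch only; names are hypothetical and not part of the disclosure.
    import os
    import time
    from dataclasses import dataclass, field


    @dataclass
    class SmartContract:
        """Record naming the entity data to be accessed, its recipient, and the condition."""
        entity_id: str
        fields: list
        recipient: str
        verification_condition: str
        created_at: float = field(default_factory=time.time)


    class ProtectedRegion:
        """Stand-in for per-entity portions of a protected memory region."""

        def __init__(self):
            self._keys = {}   # entity_id -> secret key bytes
            self._data = {}   # entity_id -> {field name: value}

        def store_secret_key(self, entity_id):
            self._keys[entity_id] = os.urandom(32)

        def store_entity_data(self, entity_id, data):
            self._data.setdefault(entity_id, {}).update(data)

        def on_access_grant(self, entity_id, requested_fields, recipient, condition):
            # Generate the access record only for data actually held for this entity.
            held = [f for f in requested_fields if f in self._data.get(entity_id, {})]
            return SmartContract(entity_id, held, recipient, condition)


    region = ProtectedRegion()
    region.store_secret_key("consumer-1")
    region.store_entity_data("consumer-1", {"income": 90000, "postal_code": "M5V"})
    contract = region.on_access_grant("consumer-1", ["income"],
                                      recipient="insurer-A",
                                      condition="recipient identity verified")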
[0018] In accordance with another aspect, there is provided a method for
controlling
access to data associated with an entity, the system comprising: storing a
secret key
associated with the entity in a portion of a protected memory region
associated with the
entity; upon receiving entity data, storing the entity data in the portion of
the protected
memory region associated with the entity; and upon receiving an access grant
signal,
generating a smart contract, the smart contract defining the entity data to be
accessed and
a recipient of the entity data to be accessed, the smart contract configured
to trigger a
message for communicating information associated with the entity data to a
recipient
device upon satisfaction of at least one verification condition.
[0019] In accordance with another aspect, there is provided a computer
readable
medium or media having stored thereon machine interpretable instructions,
which when
executed, cause at least one processor to store a secret key associated with
the entity in a
portion of a protected memory region associated with the entity; upon
receiving entity data,
store the entity data in the portion of the protected memory region associated
with the
entity; and upon receiving an access grant signal, generate a smart contract,
the smart
contract defining the entity data to be accessed and a recipient of the entity
data to be
accessed, the smart contract configured to trigger a message for communicating information associated with the entity data to a recipient device upon satisfaction of at least
satisfaction of at least
one verification condition.
[0020] In various further aspects, the disclosure provides corresponding
systems and
devices, and logic structures such as machine-executable coded instruction
sets for
implementing such systems, devices, and methods.
[0021] In this respect, before explaining at least one embodiment in
detail, it is to be
understood that the embodiments are not limited in application to the details
of
construction and to the arrangements of the components set forth in the
following
description or illustrated in the drawings. Also, it is to be understood that
the phraseology
and terminology employed herein are for the purpose of description and should
not be
regarded as limiting.
[0022] Many further features and combinations thereof concerning embodiments
described herein will appear to those skilled in the art following a reading
of the instant
disclosure.
DESCRIPTION OF THE FIGURES
[0023] FIG. 1 shows an example system of storing and distributing consumer
information, in accordance with one embodiment.
[0024] FIG. 2 is an example block diagram of an example computing device,
according
to some embodiments.
[0025] FIG. 3 is an example flow chart representing a process performed by the example system, according to some embodiments.
[0026] FIG. 4 is a description of some guiding principles illustrating
aspects of a
personal information bank, according to some embodiments.
[0027] FIG. 5 shows example features of the system, according to some
embodiments.
[0028] FIG. 6 is a visualization of aspects of the personal information
bank according to
some embodiments.
[0029] FIG. 7 is an illustration of a unified customer experience in
relation to personal
data, according to some embodiments.
[0030] FIG. 8 is a series of pictograms showing scenarios in relation to
individuals at
different stages of life and contextual situations, according to some
embodiments.
[0031] FIG. 9 is an example timeline for an example use case, according to
some
embodiments.
[0032] FIG. 10 shows example data sharing options for several example use
cases,
according to some embodiments.
[0033] FIG. 11 also shows example data sharing options for several example use
cases,
according to some embodiments.
[0034] FIG. 12 is another example timeline for an example use case, according
to some
embodiments.
[0035] FIG. 13 is yet another example timeline for an example use case,
according to
some embodiments.
[0036] FIG. 14 is an example timeline for an example use case, according to
some
embodiments.
[0037] FIG. 15 is another example timeline for an example use case, according
to some
embodiments.
[0038] FIG. 16 is yet another example timeline for an example use case,
according to
some embodiments.
[0039] FIG. 17 is an illustration of an example rendering for a personal
assistant on a
mobile device, according to some embodiments.
[0040] FIG. 18 is a flow diagram of example data transfers, according to some
embodiments.
[0041] FIG. 19 is a flow diagram of example data transfers, according to some
embodiments.
[0042] FIG. 20 is an illustration depicting an example business data flow
for data
sharing, according to some embodiments.
[0043] FIG. 21 is an example table illustrating entities related to various
channels of
obtaining data, according to some embodiments.
[0044] FIG. 22 includes a description of various aspects of data
collection, according to
some embodiments.
[0045] FIGS. 23, 24 and 25 are schematic diagrams showing aspects of an
example
computer system and method for controlling data associated with an entity.
[0046] FIG. 26 is a schematic diagram illustrating a remote attestation
process between
a partner and a trust manager of the example platform according to some
embodiments.
[0047] FIG. 27 illustrates another schematic diagram of an example Clean Room
on the
platform for processing secure transaction data according to some embodiments.
[0048] FIG. 28 is a graphical representation of parties to a verification
event, according
to some embodiments.
[0049] FIG. 29 is an example OAuth-based method, according to some embodiments.
[0050] FIG. 30 is an example method diagram where a secure enclave master
verifier is
utilized, according to some embodiments.
[0051] FIG. 31 is a state diagram of a verify oracle, according to some
embodiments.
[0052] FIG. 32 is a system diagram providing additional detail in the
context of a verifier
hosted enclave, according to some embodiments.
[0053] FIG. 33 is a system diagram providing a simplified variation of the
system shown
in FIG. 32, according to some embodiments.
[0054] FIG. 34 is a method diagram providing an example issuer sequence where
the
prover computing system has a corresponding key pair, according to some
embodiments.
As described in later figures, the prover key is optional, but in some cases,
the prover key
pair helps prevent sharing or can be utilized to reduce an amount of data
required to be
held secret. The use of a key pair for the prover may be instrumental in
preventing
credential subletting, an abuse of the system whereby the prover shares some
of their
credentials with another for attribute impersonation.
[0055] FIG. 35 is a method diagram providing an example verification sequence,
where
the prover computing system has a corresponding key pair, according to some
embodiments.
[0056] FIG. 36 is a method diagram providing an example issuer sequence where
the
prover computing system does not have a corresponding key pair, according to
some
embodiments.
[0057] FIG. 37 is a method diagram providing an example verification sequence,
where
the prover computing system does not have a corresponding key pair, according
to some
embodiments.
[0058] FIG. 38 is a system diagram providing an example verification system
having a
third party hosted enclave including a transcript, according to some
embodiments.
[0059] FIG. 39 is an example C-based proof request description language,
according to
some embodiments.
DETAILED DESCRIPTION
[0060] It will be appreciated that numerous specific details are set forth
in order to
provide a thorough understanding of the exemplary embodiments described
herein.
However, it will be understood by those of ordinary skill in the art that the
embodiments
described herein may be practiced without these specific details. Furthermore,
this
description is not to be considered as limiting the scope of the embodiments
described
herein in any way, but rather as merely describing implementation of the
various example
embodiments described herein.
[0061] The embodiments are implemented using technological devices, including
computers, having specialized components and circuitry that are adapted for
improved
security and privacy of data sets. As noted herein, some embodiments are
directed to a
secure enclave data processor and uses thereof in conjunction with a computer
readable
memory having a protected memory region.
[0062] The secure enclave data processor interfaces with the protected memory
region
to securely store and encrypt data sets received from a particular data source
(e.g., from a
data issuing / official / validated / trusted organization) that may, in some
embodiments, be
encrypted with a key specific to the organization or data source. In an
embodiment, the
key may be pre-generated and associated with the organization or data source.
In another
embodiment, the system may include a key generator which performs a key
generation
ceremony when a new key is required to load data sets into the protected
memory region.
[0063] Embodiments of methods, systems, and apparatus are described through
reference to the drawings.
[0064] It will be appreciated that numerous specific details are set forth
in order to
provide a thorough understanding of the exemplary embodiments described
herein.
However, it will be understood by those of ordinary skill in the art that the
embodiments
described herein may be practiced without these specific details. In other
instances, well-
known methods, procedures and components have not been described in detail so
as not
to obscure the embodiments described herein. Furthermore, this description is
not to be
considered as limiting the scope of the embodiments described herein in any
way, but
rather as merely describing implementation of the various example embodiments
described herein.
[0065] Disclosed herein is a system for storing, protecting and
distributing consumer
information. Entity data such as consumer information herein may refer to
various types of
information related to a consumer, such as name, age, occupation, salary,
interests,
marital status, address, professional association, political affiliation and
so on. Consumer
information may also include financial data, transactional data and social
network data.
The term "consumer information" may be used interchangeably with the term
"consumer
data" throughout the disclosure.
[0066] Once stored, the consumer information may be protected. In some cases, the consumer information may be categorized and classified. The stored information relating to a consumer may be viewed and managed by the consumer through a user interface provided by the system. The consumer can choose to share part of the consumer data, such as transaction history, with one or more clients of the system. For example, a consumer may monetize a transaction history with a particular vendor by choosing to sell it to a client who may be in the same industry as the vendor. The consumer may also provide his or her history of shopping at Nike to get a great experience at Nike, or at a competitor of Nike. The information and data shared with clients of the system may or may not be anonymous. In some embodiments, the data shared with a client (e.g. Nike) may be partially anonymous, in that the client would not know anything else about the consumer other than the shared data. For example, if the consumer chooses to use anonymous credentials when sharing data with Nike, Nike can provide the consumer with offers related to a shopping experience, without having to know everything else about the consumer.
[0067] FIG. 1 is a schematic block diagram of a physical environment for a
system 100
for storing and distributing consumer information.
[0068] System 100 may be software (e.g., code segments compiled into machine
code),
hardware, embedded firmware, or a combination of software and hardware,
according to
various embodiments.
[0069] System 100 is configured to receive one or more data sets
representative of
consumer data. In some embodiments, some of the one or more sets of consumer
data
may be received from one or more vendor systems 130 or one or more user
devices 135
through network 150. System 100 may connect with one or more client systems
140 to
share one or more sets of consumer data with the client(s), when instructed or
allowed by
the consumers.
[0070] A vendor system 130 may be a system at or connected to a vendor that interacts with the consumer in some capacity. For example, a vendor may be a physical store, a restaurant, a social network website, a media outlet, a brand, a workplace, and so on. In some embodiments, a vendor system 130 may have pre-registered with system 100 to share information regarding one or more consumers, provided that the consumers have given explicit consent to share the information. Consent may be given when a consumer signs up with system 100 as a user and has selected the specific vendor as one source of consumer information. Each time a consumer interacts with the vendor, some type of electronic data may be stored by vendor system 130. The electronic data may include transactional data generated during a purchase transaction, fitness data generated by a fitness device worn by the consumer, items ordered by the consumer at a restaurant, books bought by the consumer at a bookstore, and so on. If the consumer is registered with system 100 and the vendor has been selected as a vendor that can share information regarding the consumer, then the electronic data may be transmitted to system 100 for storage and further processing.
[0071] A consumer may operate a user device 135 such as a mobile phone or a
tablet.
The user device 135 may be pre-registered with system 100 to share information
regarding
the consumer. For example, a mobile device 135 may have information regarding when the consumer browses the internet, the websites visited frequently by the consumer,
and the
mobile applications most frequently used by the consumer, and so on. The
information
may be transmitted to system 100 if the consumer has given explicit consent.
[0072] A client system 140 may be a system configured to receive information
from
system 100. In some embodiments, a client system 140 may need to pre-register
with
system 100 prior to receiving any consumer information. Examples of client
systems 140
may include financial institutions, stores, e-commerce websites, social
network websites,
and so on. Client systems 140 may in some embodiments be vendor systems 130.
[0073] A processor or processing device 101 can execute instructions in memory
109 to
configure various components or units 111, 113, 115, 117. A processing device
101 can
be, for example, any type of general-purpose microprocessor or
microcontroller, a digital
signal processing (DSP) processor, an integrated circuit, a field programmable
gate array
(FPGA), a reconfigurable processor, or any combination thereof.
[0074] Each communication interface 105 enables the system 100 to communicate
with
other components, to exchange data with other components, to access and
connect to
network resources, to serve applications, and perform other computing
applications by
connecting to a network (or multiple networks) capable of carrying data
including the
Internet, Ethernet, plain old telephone service (POTS) line, public switch
telephone
network (PSTN), integrated services digital network (ISDN), digital subscriber
line (DSL),
coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WiMAX),
SS7 signaling
network, fixed line, local area network, wide area network, and others,
including any
combination of these.
[0075] Each I/O unit 107 enables the system 100 to interconnect with one or
more input
devices, such as a keyboard, mouse, camera, touch screen and a microphone, or
with one
or more output devices such as a display screen and a speaker.
[0076] Data storage 108 can be, for example, one or more NAND flash memory
modules of suitable capacity, or may be one or more persistent computer
storage devices,
such as a hard disk drive, a solid state drive, and the like. In some
embodiments, data
storage 108 comprises a secure data warehouse configured to host user profile
data.
[0077] Memory 109 may include a suitable combination of computer memory such
as,
for example, static random-access memory (SRAM), random-access memory (RAM),
read-only memory (ROM), electro-optical memory, magneto-optical memory,
erasable
programmable read-only memory (EPROM), and electrically-erasable programmable
read-
only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
[0078] A user profile unit 111 may be configured to store information about a
consumer,
including consumer data received from vendor system 130, user devices 135
and/or client
systems 140. In some embodiments, the information may be further categorized
or
classified based on one or more schemes. For example, one or more sets of
consumer
data may contain metadata or tags indicating a source of the consumer data.
The source of a consumer data set may be the vendor system 130 or user device 135; for example, a source of a consumer data set may be Nike.
[0079] The consumer data may also be categorized based on a default set of
categories
such as food, clothing, social media, personal information, financials, and so
on. System
100 may be configured to classify the consumer data based on the identified
source (e.g.
vendor) of consumer data, or other indicators in a transaction such as items
purchased in
a transaction.
[0080] Once categorized, the one or more sets of consumer data may be stored
in the
data storage 108 and associated with the corresponding user profile. In some
embodiments, the one or more sets of consumer data may be part of the
corresponding
user profile.
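As a rough illustration of the tagging and categorization described above, the logic could resemble the sketch below; the category map, metadata fields, and profile structure are invented for this sketch and are not taken from the disclosure.

    # Illustrative only: categorize consumer data sets by their source metadata.
    DEFAULT_CATEGORIES = {
        "nike_store": "clothing",
        "opentable": "food",
        "fitness_tracker": "health",
    }

    user_profile = {"consumer_id": "consumer-1", "data_sets": []}


    def categorize(data_set: dict) -> dict:
        """Attach a category based on the data set's source tag (falling back to 'other')."""
        source = data_set.get("metadata", {}).get("source", "unknown")
        return {**data_set, "category": DEFAULT_CATEGORIES.get(source, "other")}


    def store_in_profile(data_set: dict) -> None:
        """Categorize the data set, then associate it with the consumer's profile."""
        user_profile["data_sets"].append(categorize(data_set))


    store_in_profile({"metadata": {"source": "opentable"}, "items": ["dinner for two"]})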
[0081] A fee exchange unit 113 may be configured to determine when a fee is to be paid to a party and to facilitate the fee payment accordingly. For example, fee exchange unit 113 may be configured to determine that a fee payment is required when a client system 140 requests to sign up with system 100 for receiving consumer information. As another example, fee exchange unit 113 may be configured to determine that a fee payment to system 100 or a consumer is required when a client system 140 has received, or is about to receive, requested consumer information of the consumer. Fee exchange unit 113 may track fee payments and approve pending consumer data sharing requests in response to the fee payments.
[0082] A user interface unit 115 may be configured to include an API unit
configured for
providing or facilitating an interface, such as a user interface, to connect
to external
databases and systems (e.g. user device 135), so that a consumer may access,
view and
manage his or her consumer information. Through the user interface, the
consumer may
send requests for sharing one or more sets of consumer data to one or more
client
systems 140.
[0083] A client interface unit 117 may be configured to include an API unit
configured for
providing or facilitating an interface, such as a user interface, to connect
to external
databases and systems (e.g. client systems 140), so that a client system may
access, view
and manage shared consumer information. Through the user interface, the client
system
140 may send requests for one or more sets of consumer data from one or more
consumers and make fee payments for said requests, if required.
[0084] In some embodiments, system 100 may include an API unit (not illustrated) configured for providing or facilitating an interface, such as a user interface, for system administrators. The interface may allow one or more administrators to configure the settings of system 100, such as, for example, fee payment schemes for one or more client systems 140.
[0085] FIG. 2 is a schematic block diagram of an example computing device 200
implementing system 100, according to some embodiments. As depicted, computing device 200 includes at least one processor 202, memory 204, at least one I/O
interface
206, and at least one network interface 208.
[0086] Each
processor 202 may be a microprocessor or microcontroller, a digital signal
processing (DSP) processor, an integrated circuit, a field programmable gate
array
(FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or combinations thereof.
[0087] Memory 204 may include a computer memory that is located either
internally or
externally such as, for example, random-access memory (RAM), read-only memory
(ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-
optical memory, erasable programmable read-only memory (EPROM), and
electrically-
erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM).
[0088] Each I/O interface 206 enables computing device 200 to interconnect
with one or
more input devices, such as a keyboard, mouse, camera, touch screen and a
microphone,
or with one or more output devices such as a display screen and a speaker.
[0089] A networking interface 208 may be configured to receive and transmit
data sets
representative of the machine learning models, for example, to a target data
storage or
data structures. The target data storage or data structure may, in some
embodiments,
reside on a computing device or system such as a mobile device.
[0090] FIG. 3 shows an example process 300 performed by system 100. At step 301, a system may receive one or more sets of consumer data. In some embodiments, each set of consumer data may have metadata identifying a source of the set of consumer data. At step 302, the system may store the one or more sets of consumer data in the user profile. At step 303, the system may categorize the one or more sets of consumer data. At step 304, the system may present the one or more sets of consumer data through a user interface to the consumer. At step 305, the system may receive a user request to transmit at least one of the one or more sets of consumer data to a client. At step 306, the system may transmit the at least one set of consumer data to the client in response to the user request. At step 307, the system may make a payment to the consumer in view of the at least one set of consumer data transmitted to the client.
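For readers who prefer code to prose, the following hypothetical Python sketch compresses steps 301 through 307 into a single routine; the class, the categorization rule, and the flat payment amount are stand-ins rather than details from the disclosure.

    # Hypothetical, simplified rendering of process 300 (steps 301-307).
    class System100:
        def __init__(self):
            self.profiles = {}  # consumer id -> list of consumer data sets

        def run(self, consumer, incoming, client, requested_ids):
            profile = self.profiles.setdefault(consumer, [])
            profile.extend(incoming)                                  # 301-302: receive and store
            for ds in profile:                                        # 303: categorize by source tag
                ds.setdefault("category", ds.get("metadata", {}).get("source", "other"))
            presented = list(profile)                                 # 304: present through the UI
            selected = [ds for ds in presented
                        if ds.get("id") in requested_ids]             # 305: user share request
            shared = {"client": client, "data": selected}             # 306: transmit to the client
            payment = {"consumer": consumer,
                       "amount": 5 * len(selected)}                   # 307: pay the consumer
            return shared, payment


    shared, payment = System100().run(
        "consumer-1",
        [{"id": "tx-1", "metadata": {"source": "bookstore"}}],
        client="client-A",
        requested_ids={"tx-1"},
    )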
[0091] Referring now to FIG. 4, which is a description of some guiding
principles
illustrating aspects of a personal information bank, according to some
embodiments. For
example, consumers (e.g. users) retain ownership of consumer information and
no data is
collected, stored, used or shared without explicit opt-in from consumers.
[0092] FIG. 5 shows example features of system 100, such as control and security of consumer data, and rewards for participating in system 100. For example, the consumers can deposit online and offline data to a trusted and secure data storage provided by system 100; the consumers can also choose which data to share with various vendors, and which brand offers they wish to receive. The consumers can get rewarded for sharing consumer data, for example with monetary compensation. The consumers may also access unique experiences with chosen brands, based on the shared consumer data.
[0093] FIG. 6 is a visualization of aspects of the personal information
bank according to
some embodiments. For example, system 100 may include a secure data storage
108
known as Personal Information Bank, which may include an enterprise data
warehouse. A
consumer may choose to share his or her data and store the data at the
Personal
Information Bank. The consumer can monetize the shared data by sharing it
with selected companies (e.g. client systems 140) such as universities,
marketers and
brands, and other businesses.
[0094] FIG. 7 is an illustration of a unified customer experience in
relation to personal
data, according to some embodiments. As shown, consumer data may include
personal
data, which may include one or more of: loyalty program data, mobile (e.g.
investing)
application data, medical data, location data, digital consumption data,
telecom mobile
data, health and fitness data, utilities data, shopping data and usage data.
[0095] FIG. 8 is a series of pictograms showing scenarios in relation to
individuals at
different stages of life and contextual situations, according to some
embodiments.
Consumers in various stages of life may use system 100 to store and distribute
personal
information and derive benefits therefrom.
[0096] FIG. 9 is an example timeline for an example use case, according to
some
embodiments. A consumer "Kevin Holland" may choose to share certain personal
information such as name, age, relationship status, occupation and income. He
may also
share consumer data with vendor systems 130 such as restaurants, bookstores
and video
streaming websites. System 100 may collect or receive various consumer data
such as
transactional data with book stores, third party data on various social media
websites, data
from video streaming websites and data from certain marketing teams in order
to generate
targeted promotions or offers for consumer Kevin. Kevin may be presented with
one or
more of such offers and he may choose to accept an offer and participate in an experience. Throughout this process, fee payments may be requested and
accepted by
system 100 from client systems 140 for participating in system 100 and
receiving Kevin's
consumer data.
[0097] FIGs. 10 and 11 show example data sharing options for several example
use
cases, according to some embodiments. As shown, through a user interface
provided by
user interface unit 115, a consumer may choose one or more types of consumer
information for sharing with one or more client systems 140. For example, a
consumer
may choose to share home address, income and transaction history by clicking
on one or
more radio buttons. The consumer may also choose to share social media data
from GPS
on user device 135, OpenTable and video streaming websites. The consumer may
choose to share health data such as prescription history and exercise
tracking. The
consumer may further choose to share purchase data such as online purchases
from
Amazon and EBay. In addition, the consumer may choose to accept marketing
offers from
client systems 140 in one or more industries such as food, insurance, retail
and health. In
addition, the consumer may specify types of rewards and incentives for data
sharing, such
as research labs, community data feed, commercial data feed for monetary
incentive and
market place offers for personalized products or services.
[0098] FIG. 12 is an example timeline for an example use case by consumer "Jason Wallace", according to some embodiments. The consumer Jason Wallace may choose to share one or more data sets of consumer information, similar to the consumer Kevin Holland as described above. In this example, Jason may receive targeted offers from Starbucks as he has shared information regarding his coffee purchase habits with Starbucks. Starbucks may pay a channel access fee to system 100 for marketing through system 100. Starbucks may also pay a fee for making an offer to Jason. A local coffee boutique, similarly, may access Jason's coffee purchase data and make an offer to Jason with payment of appropriate fees to system 100. The fee payments to system 100 may in some cases be split with Jason. Jason may also accept the offer from Starbucks or the local coffee boutique and experience a great cup of coffee, while accepting a monetary payment from system 100 for accepting the offer.
[0099] FIG. 13 is yet another example timeline for an example use case, according to some embodiments. A client "Reina Lin" may operate a small business such as a coffee shop, which has a client system 140. The client system 140 may pay a fee to access system 100 in order to collect certain transactional data from consumers. Reina may select criteria for a marketing campaign and create a distribution list based on the criteria. System 100 may match Reina's coffee shop with a consumer (e.g. Jason) based on Jason's shared information. Reina's client system 140 may pay a certain amount of fees to system 100 for accessing the consumer data and making an offer to consumers.
[00100] FIG. 14 is an example timeline for an example use case, according to
some
embodiments. James Sutton, a consumer, may also make use of system 100 based
on
his life experience and interests. James may make a donation to a children's
hospital and
the donation transaction may be stored in system 100. System 100 may recognize
a trend
towards charitable donations when accessing financial data, and alert client
system 140 of
a children's hospital regarding James. James may receive targeted charity
campaigns
from the children's hospital, which may receive further donations from James
as a result of
the targeted campaigns. James may also receive a reward from the hospital for being a
frequent donor.
[00101] FIG. 15 is another example timeline for an example use case, according
to some
embodiments. A consumer "Kelly Smith" may store and grant permissions to her
transaction data on system 100. System 100 may recognize shopping patterns in
her
shopping data, define a monthly loyalty value, and offer the value to Kelly. Kelly may opt into sharing her data in aggregated form (e.g. anonymized and grouped with
other
consumers' data), which may be offered to retailers. Retailers may pay a fee
in order to
access the aggregated consumer data. In return, Kelly may receive a reward,
such as loyalty program points, for agreeing to share her data in aggregated form.
[00102] FIG. 16 is yet another example timeline for an example use case,
according to
some embodiments. A couple "Mr. and Mrs. Singh" may be married and have a
combined
income of 150,000 a year. They may be looking for cheaper home insurance and request a home insurance quote through system 100. The couple may choose to share household income data and transactional data with home insurance companies, and select top criteria for home insurance. System 100 may open up Mr. Singh's data profile for home insurance companies, which may pay a fee to access Mr. Singh's information through system 100. The home insurance companies may bid against each other to generate the best offer for Mr. Singh, and pay a fee for presenting the best offers to Mr. Singh. System
100 may
select and present the most suitable offers to Mr. Singh and if the offer is
accepted by Mr.
Singh, receive a further fee payment from the company that has received the
acceptance
from the consumer.
[00103] FIG. 17 is an illustration of an example rendering for a personal
assistant on a
mobile device, according to some embodiments. The mobile device may be an
example of
user device 135. The mobile device may have a mobile application installed for
accessing
system 100. The mobile application may be configured to show a user interface
and
present a consumer's data as stored and managed by system 100. The mobile
application
may have features such as transaction data integration, pattern recognition,
third-party
data integration and consumer data analysis.
[00104] FIGs. 18 and 19 show flow diagrams of example data transfers, according to some embodiments. A consumer may interact with one or more data brokers such as websites, healthcare analytics, list brokers, and so on. The data may be consumed by data users such as banks, marketers, media, government, lawyers, individuals, law enforcement, employers, and product and service providers. The data users may also generate consumer data that are collected by data collectors such as the internet, medical providers, public entities, retailers, telecom and mobile network providers and financial institutions.
[00105] FIG. 20 is an illustration depicting an example business data flow for
data
sharing, according to some embodiments. As shown, data from consumers or
customers
may be shared, via system 100, with data brokers, who may sell the data to
brands, who
may contract marketing agencies to publish ads and distribute the data to
various
channels.
[00106] FIG. 21 is an example table illustrating entities related to various
channels of
obtaining data, according to some embodiments.
[00107] FIG. 22 includes a description of various aspects of data collection,
according to
some embodiments.
[00108] FIGS. 23, 24 and 25 show aspects of an example computer system and
methods
for controlling access to data. In some embodiments, the aspects of the
computer system
can be applied to any of the systems described herein or otherwise.
[00109] In some embodiments, the data is associated with an entity such as a
consumer
or individual. In some embodiments, the entity may be a company such as a
financial
institution or a car rental company. In some embodiments, the entity may be
any other
person or group of persons which may have associated data that they wish to
control.
[00110] In some embodiments, the system may store received entity data in a
secure
area (sometimes referred to as the "Virtual Clean Room" or VCR), where the
entity data is
then decrypted and used to re-encrypt or generate derivative data or tokens
for sharing
with a verified recipient. The received entity data cannot be accessed,
decrypted or read
by any other user, system or process except with the proper permissions within the Clean Room. In some embodiments, the owner of the computer hosting the platform may be unable to view or infer anything about input or output data.
[00111] In some embodiments, the Clean Room is implemented within one or more
secure enclaves within a Trusted Execution Environment (TEE) of a processor
(e.g., a
CPU), where data models may be trained and executed to conduct any level of
analytics. Key management capabilities are also in place to ensure proper
encryption and
decryption of the data stored within the Clean Room.
[00112] Embodiments described herein are directed to technical solutions
adapted to
overcome technical challenges associated with improved privacy and security.
In
particular, systems, methods, and computer readable media are described that
utilize
secure processing technologies, such as secure enclaves, in relation to the
operation of an
improved machine learning data architecture that has enhanced privacy and
security
measures.
[00113] As described above, these enhanced privacy and security measures lead
to
increased technical challenges as, for example, encryption and decryption
requirements
reduce total computing resources available in various situations. Computing
resources
may be constrained due to requirements that particular aspects need to be
conducted
using only secure processors and data elements may require to be stored only
in
encrypted formats while outside of secure processing environments.
[00114] FIG. 1 is a block diagram illustrating an example electronic
transaction platform
100 for receiving and processing secure consumer data, over a network 150,
according to
some embodiments. The entity data may be received from other system(s) or
devices
130, 140, 135, which may include bank system(s), trusted systems (e.g. government licensing / identification management systems), merchant system(s) and the like. FIGS. 23, 24 and 25 provide schematic diagrams of an example Clean Room which may be implemented on platform 100 or other systems.
[00115] A processing device 101 can execute instructions in memory 109 to
configure
various components or units such as the VCR custodian, VCR core, and common
platform
components in FIGS. 23-25. A processing device 101 can be, for example, a
microprocessor or microcontroller, a digital signal processing (DSP)
processor, an
integrated circuit, a field programmable gate array (FPGA), a reconfigurable
processor, or
any combination thereof. Processing device 101 may include memory 109, data
storage
108, and other storage 111. In some embodiments, processing device 101
includes a
secure area known as a trusted execution environment (TEE) 103. TEE 103 may
include
memory 109 and data storage 108, and is an isolated environment in which
various units
and applications may be executed and data may be processed and stored.
Applications
running within TEE 103 may leverage the full power of processing device 101
while being
protected from components and applications in a main operating system.
Applications and
data within TEE 103 are protected against unwanted access and tampering, even
against
the owner of processing device 101. In some cases, different applications and
data
storage within TEE 103 may be separately isolated and protected from each
other, if
needed.
[00116] In some embodiments, the protected memory region of the TEE 103 (e.g.,
secure
data warehouse 108) is isolated through the use of encryption. In this
example, the
encryption keys are stored within the TEE 103 itself so that it can access
data as required
but the underlying data is not accessible by other components, such as an
operating
system operating on the server or a kernel process. In an alternate
embodiment, the
isolation is conducted through the use of physical or electrical circuit
isolation from the
other components. In yet another alternate embodiment, both physical and
encryption
isolation are utilized.
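The encryption-based isolation described above can be pictured with the hedged sketch below; it models the enclave boundary as a simple Python object, assumes the `cryptography` package's AES-GCM primitive, and is not based on any particular enclave SDK.

    # Illustrative sealing sketch; the "enclave" boundary here is only a Python object.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM


    class SealedStore:
        """Data at rest is ciphertext; the key never leaves this object (the 'TEE')."""

        def __init__(self):
            self._key = AESGCM.generate_key(bit_length=256)  # held only inside the TEE
            self._pages = {}                                 # label -> (nonce, ciphertext)

        def seal(self, label, plaintext: bytes):
            nonce = os.urandom(12)
            self._pages[label] = (nonce, AESGCM(self._key).encrypt(nonce, plaintext, None))

        def unseal(self, label) -> bytes:
            nonce, ciphertext = self._pages[label]
            return AESGCM(self._key).decrypt(nonce, ciphertext, None)


    store = SealedStore()
    store.seal("entity-data", b"transaction history")
    assert store.unseal("entity-data") == b"transaction history"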
[00117] As components and data of platform 100 are kept within TEE 103, they
are well
guarded against unauthorized access and tampering due to the isolation and
security
afforded by TEE 103. Therefore partner systems 115 have confidence that their
consumer
data would not be inadvertently leaked or accessed by others. As will be
described below,
each partner may verify that platform 100 within TEE 103 is secure and tamper-
free prior
to transmitting any data to platform 100 (e.g., through attestation
processes). Therefore,
partner systems 115 have a high level of trust in platform 100 and would be
more willing to
send their consumer data to platform 100 for processing and in turn, receiving
targeted
recommendations and offers to current and prospective customers.
[00118] Data storage 108 can be, for example, one or more NAND flash memory
modules of suitable capacity, or may be one or more persistent computer
storage devices,
such as a hard disk drive, a solid state drive, and the like. In some
embodiments, data
storage 108 comprises a secure data warehouse configured to host encrypted
data.
[00119] Memory 109 may include a combination of computer memory such as, for
example, static random-access memory (SRAM), random-access memory (RAM), read-
only memory (ROM), electro-optical memory, magneto-optical memory, erasable
programmable read-only memory (EPROM), and electrically-erasable programmable
read-
only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
[00120] In some embodiments, data within the TEE can be stored in a data
storage 108,
memory 109, or some combination thereof.
[00121] Data storage 108 may comprise a secure data warehouse configured to
store
information associated with the TEE 103, such as cryptographic keys for remote attestation, encryption and decryption. Data storage 108 may also store
confidential
information such as consumer data including transaction data. Storage 108
and/or other
storage 111 may be provided using various types of storage technologies, and
may be
stored in various formats, such as relational databases, non-relational
databases, flat files,
spreadsheets, extended markup files, etc. Data storage 108 can include, for
example, a
computer readable cache memory for loading the protected memory region, among
others,
as well as the protected memory region itself. Where the data storage 108 is
configured
for two-way access, the data storage 108 may store corresponding public keys
corresponding to specific data sources for encrypting the data prior to access
requested by
computing devices associated with the specific data sources.
[00122] The data storage 108, in some embodiments, maintains an isolated
machine
learning data model architecture that is trained based on data sets received
by the TEE
103, which may or may not be stored after processing on data storage 108. For
example,
if data is not stored on data storage 108 after processing and training,
performance can be
improved as less overall storage is required. This is useful where the data
sets are
particularly large or voluminous. In another embodiment, data sets are stored
on data
storage 108 in the protected memory region for future usage or time-spanning
analysis.
[00123] The data storage 108, can also store output data structures, which can
be
interacted with through recommendation engine 120, the output data structures
storing
field values that are generated by processing by a data processing subsystem.
In some
embodiments, the data processing subsystem of the TEE 103 includes a stored
function
that is generated based on an aggregate of the data sets received from the
corresponding
partner computing devices.
[00124] Each I/O unit 107 enables the platform 100 to interconnect with one or
more
input devices, such as a keyboard, mouse, camera, touch screen and a
microphone, or
with one or more output devices such as a display screen and a speaker. The
I/O unit 107
can be used to receive instructions to prepare for loading / unloading data
into data
storage 108, and may require the provisioning of a specific access key
required to access
or otherwise decrypt or validate data for loading into data storage 108.
[00125] The I/O unit 107 can also receive as a data structure, an instruction
set, or a
query string, the query data message that triggers the data processing
subsystem to
generate various output data structures.
[00126] Each communication interface 105 enables the platform 100 to
communicate
with other components, to exchange data with other components, to access and
connect
to network resources, to serve applications, and perform other computing
applications by
connecting to a network (or multiple networks) capable of carrying data
including the
Internet, Ethernet, plain old telephone service (POTS) line, public switched
telephone
network (PSTN), integrated services digital network (ISDN), digital subscriber
line (DSL),
coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WiMAX),
SS7 signaling
network, fixed line, local area network, wide area network, and others,
including any
combination of these.
[00127] The platform 100 may be operable to register and authenticate users
(using a
login, unique identifier, and password for example) prior to providing access
to
applications, a local network, network resources, other networks and network
security
devices. The platform 100 may serve one user or multiple users. In some
embodiments,

users' credential information is stored within TEE 103, making it secure and ensuring
and ensuring
a high level of trust from partners.
[00128] In some embodiments, one or more aspects of the system can be
implemented
on a cloud computing platform such as Azure TM or any other suitable
commercial or private
platform.
[00129] FIG. 23 shows aspects of an example Virtual Clean Room on platform 100 for controlling entity data. During a registration process, at 1, a client device associated with an entity can be authenticated and can request that the system create a client or entity special space (CSS) via a secure enclave. In some embodiments, a public/private key pair is generated. The public key (Pk) is transmitted to and stored on the client device, and the private or secret key (Sk) is stored in the client special space (e.g. a portion of a protected memory region associated with the entity).
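By way of illustration, a minimal Python sketch of this registration flow is shown below. The helper name create_client_special_space, the use of an RSA key pair, and the in-memory dictionary standing in for the protected memory region are assumptions made for illustration only.

```python
# Minimal sketch: provision a client special space (CSS), keep Sk inside the
# protected region, and return Pk to the client device. Illustrative only.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

PROTECTED_REGION = {}   # stand-in for enclave-protected storage, keyed by entity id

def create_client_special_space(entity_id: str) -> bytes:
    """Provision a CSS for the entity and return the public key (Pk) for the client device."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # The secret key (Sk) never leaves the protected region.
    PROTECTED_REGION[entity_id] = {"secret_key": private_key, "entity_data": {}}
    return private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

if __name__ == "__main__":
    pk_pem = create_client_special_space("entity-001")
    print(pk_pem.decode().splitlines()[0])   # -----BEGIN PUBLIC KEY-----
```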
[00130] At 2, the client device may authenticate with a service provider or issuer device. The issuer device may be, for example, a device associated with a government agency, an insurance company, a financial institution, or any other party which may provide information about the entity which may require proof or validation (e.g. birth certificate information, health care insurance, payment account information, etc.). In some embodiments, the issuer device can include a menu or can otherwise list tokens or other entity data which may be stored in the CSS. In some embodiments, the client device can select which entity / personal information is to be stored in the CSS.
[00131] At 3, the client device provides an address (e.g. a URL, address,
pointer, etc.) or
otherwise provides information for the service provider to connect to the CSS.
In some
embodiments, the client device also provides the entity's public key to the
issuer device. A
digital version of the entity information is created by the issuer device and
is transmitted to
the CSS. In some embodiments, the entity information is generated using the
client's
public key and/or the issuer's private key. In some embodiments, the issuer's
private key is
used to sign or otherwise provide a cryptographic verification (e.g.
signature) that the entity
data was generated by the issuer device.
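A hedged sketch of step 3 from the issuer's side is shown below: the issuer produces a digital record of the entity information, signs it with its private key, and encrypts it under the client's public key before transmitting it to the CSS. The key types, padding choices and field names are illustrative assumptions.

```python
# Illustrative issuer-side record creation: sign with the issuer's private key,
# encrypt under the client's public key. A real deployment would use hybrid
# encryption, since RSA-OAEP can only encrypt small payloads directly.
import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def issue_record(issuer_key: rsa.RSAPrivateKey, client_pub_pem: bytes, claims: dict) -> dict:
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = issuer_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    client_pub = serialization.load_pem_public_key(client_pub_pem)
    ciphertext = client_pub.encrypt(
        payload,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
    )
    return {"ciphertext": ciphertext, "issuer_signature": signature}

if __name__ == "__main__":
    issuer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    client_pub_pem = client_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    record = issue_record(issuer_key, client_pub_pem, {"birth_date": "1993-05-14"})
    print(len(record["ciphertext"]), len(record["issuer_signature"]))
```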

[00132] In some embodiments, entity data not relying on the issuer or other
authoritative
source can be received. For example, a user device can upload a picture of the
entity data
such as a photo of a driver's license into the CSS using their public key. In
other
embodiments, other verified or unverified entity data can be uploaded into the
CSS by the
user device or any other device associated with a user. For example, heart
rate
information from a heart monitoring device, etc.
[00133] At 4, the system can be configured to monitor, log, and/or audit any
CSS activity
including creation of the CSS, addition of data, access of data,
deletion/editing of data and
the like.
[00134] With reference to FIG. 24, upon receipt of an access grant signal such as an instruction from an entity device to grant, to a recipient, access to one or more components of the entity data, the system can generate a smart contract. A smart contract can be configured to define the entity data to be accessed and an identifier for the recipient of the entity data to be accessed. In some embodiments, the smart contract is configured to trigger a message for communicating information associated with the entity data to a recipient device upon satisfaction of one or more verification conditions.
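A minimal sketch of such a smart contract record follows, assuming a simple dataclass representation; the field names and condition format are illustrative and not taken from this description.

```python
# Illustrative structure for a data-access smart contract: which entity data,
# which recipient, and which verification conditions must hold.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AccessContract:
    entity_id: str                       # whose data is covered
    data_fields: List[str]               # which components of the entity data may be accessed
    recipient_id: str                    # identifier of the recipient granted access
    verification_conditions: List[Callable[[Dict], bool]] = field(default_factory=list)

    def conditions_satisfied(self, context: Dict) -> bool:
        return all(cond(context) for cond in self.verification_conditions)

# Example: grant a verifier access to a proof-of-age field only once it has authenticated.
contract = AccessContract(
    entity_id="entity-001",
    data_fields=["proof_of_age"],
    recipient_id="verifier-042",
    verification_conditions=[lambda ctx: ctx.get("recipient_authenticated", False)],
)
print(contract.conditions_satisfied({"recipient_authenticated": True}))   # True
```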
[00135] Entitlement. In some embodiments, where the entity consents, the record may enforce fine-grained access controls on elements of the contract. In some embodiments, access is signed based on keys from User 1 (i.e. the entity).
[00136] In some embodiments, an agent orchestrator process executed by the processors is configured to coordinate the workflow within the virtual clean room. In some embodiments, the agent orchestrator makes a function call to a data access manager process or otherwise facilitates minting of access tokens for the compute and data nodes as encoded in the access controls. In some embodiments, the data access manager process or functions perform the system's enforcement obligations.
[00137] In some embodiments, the compute nodes trigger a rule-based query to
extract
the personal information stored in the CSS of the Data Node (4c) per the smart
contract. In
some embodiments, extracted personal information data is either encrypted with a key managed by the special space or it is a token.

[00138] In some embodiments, the smart contract can provide timed access to
the entity
data. For example, access can expire, or access can be granted after a defined
period of
time or after the occurrence of an event (e.g. access details of a will after
a person dies).
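For example, a timed-access check of the kind described above might be sketched as follows, with the helper name and validity-window fields being illustrative assumptions.

```python
# Illustrative timed-access check: access opens only after a trigger time and lapses later.
from datetime import datetime, timedelta, timezone
from typing import Optional

def access_allowed(not_before: datetime, expires_at: datetime,
                   now: Optional[datetime] = None) -> bool:
    """Return True only inside the contract's validity window."""
    now = now or datetime.now(timezone.utc)
    return not_before <= now < expires_at

opens_at = datetime.now(timezone.utc) + timedelta(days=1)   # e.g. access opens tomorrow
lapses_at = opens_at + timedelta(days=30)                   # and expires 30 days later
print(access_allowed(opens_at, lapses_at))                  # False until the window opens
```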
[00139] Again, all events can be captured in an audit log.
[00140] A Remote Attestation mechanism may be used to authenticate and
establish a
secure communication channel, whereby a remote client (e.g. a partner system
115) may
ensure they are communicating with a particular piece of code running in
enclave mode on
an authentic non-compromised processor of platform 100. Remote Attestation can
also be
used by the enclave to send a public key to the client in a non-malleable way.
This
mechanism relies on highly non-trivial group signatures, but is also based on
highly peer-
reviewed research.
[00141] In some embodiments, the client or the partner system may include a Python script containing modules for establishing a secure encryption channel with the platform 100 and converting input data into a canonical form to be consumed by the Clean Room 300.
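A hedged sketch of such a client-side script is shown below: it canonicalizes input records (stable key order, normalized strings) and opens a TLS channel to the platform. The host name, port and record fields are placeholders rather than values from this description.

```python
# Illustrative client-side script: canonicalize records and send them over TLS.
import json
import socket
import ssl
from typing import List

PLATFORM_HOST, PLATFORM_PORT = "cleanroom.example.com", 443   # placeholder endpoint

def canonicalize(record: dict) -> bytes:
    """Normalize strings and key order so every partner submits the same shape."""
    normalized = {k.lower(): (v.strip().lower() if isinstance(v, str) else v)
                  for k, v in record.items()}
    return json.dumps(normalized, sort_keys=True, separators=(",", ":")).encode()

def send_records(records: List[dict]) -> None:
    context = ssl.create_default_context()                    # verifies the server certificate
    with socket.create_connection((PLATFORM_HOST, PLATFORM_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=PLATFORM_HOST) as tls:
            for record in records:
                tls.sendall(canonicalize(record) + b"\n")

if __name__ == "__main__":
    print(canonicalize({"Name": "  Alice ", "Amount": 12.5}))
```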
[00142] Remote Attestation may constitute the root of a client's trust in the
analytics
service. There are three ways it may be integrated with key exchange:
1. Perform Remote Attestation each time, or at least once per client. In this case the enclave will not have a long-lived public key, and would directly place a Diffie-Hellman message on Remote Attestation's payload.
2. Enclave to present a Remote Attestation transcript. Remote Attestation is by nature an interactive protocol, designed to convince only the verifier it interacts with. However, if all verifier challenges are produced deterministically using a strong hash function, the protocol is turned into a non-interactive one, through which a single execution can convince any number of verifiers. This transformation is known as the Fiat-Shamir Heuristic (see the sketch following this list). A thus transformed protocol can be carried out by the untrusted enclave host itself. The enclave authenticates by presenting this protocol transcript, similar to a public key certificate, and signing a challenge and its new Diffie-Hellman message with the key corresponding to the public key embedded in the Remote Attestation transcript.
3. Certificates: Clients can delegate Remote Attestation verification to a 3rd party and consume certificates issued by them. This is not a very promising option.
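As a small illustration of the Fiat-Shamir transformation referenced in option 2, the following sketch derives each verifier challenge deterministically by hashing the transcript exchanged so far; the transcript encoding is an illustrative assumption.

```python
# Illustrative Fiat-Shamir challenge derivation: any verifier replaying the same
# transcript recomputes the same challenge, making the recorded transcript
# publicly checkable without interaction.
import hashlib

def fiat_shamir_challenge(transcript_parts, challenge_bytes: int = 32) -> bytes:
    """Derive a deterministic challenge from everything exchanged so far."""
    h = hashlib.sha256()
    for part in transcript_parts:
        h.update(len(part).to_bytes(4, "big"))   # length-prefix to avoid ambiguity
        h.update(part)
    return h.digest()[:challenge_bytes]

transcript = [b"enclave-measurement", b"prover-first-message"]
print(fiat_shamir_challenge(transcript).hex())
```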
[00143] The client or partner system may authenticate to platform 100. Authentication may help control the in-flow of data and limits, though by no means eliminates, the likelihood of injecting garbage data into the system or mounting sensitivity attacks. These attacks merit a short exposition: injecting garbage can be done in order to either take the system down or deliberately generate false analytics results from which the attacker may benefit; sensitivity attacks are more subtle.
[00144] An attacker may observe how the end result of analytics changes
relative to
changes in the input they provide and through observing the output provided to
them infer
more information about data provided by other parties than intended by the
designers. In
some embodiments, in order to counter potential attacks, offer presentment needs to be carefully crafted and the information presented to client institutions may be limited.
[00145] In some embodiments, a library like OpenSSL may be implemented with the following considerations: best enclave-design practices call for simplicity and minimalism. Therefore, functionality that can be securely delegated to an untrusted component should be delegated as such. In the context of SSL, transformation of native representations of algebraic objects (such as public keys and ciphertexts) into standard ones and policy checks are such tasks.
[00146] As discussed earlier, the service authenticates to the client in a way
that diverges
from what is practiced in 2-way SSL connections. That is, the SSL
specification as
implemented may allow for modularly switching to a user-defined authentication
protocol.
[00147] FIG. 26 shows a schematic diagram illustrating a remote attestation
process
between a partner system 115 and a trust manager utility 127 of Security and
Encryption
unit 125. At step 410, a Certificate Manager utility 128 can issue a Public
Key Certificate

129 for each partner. The certificate 129 may be used to prove to the Trust
Manager 127
that incoming data is authentic.
[00148] At step 420, upon request from a partner system 115, trust manager 127
may
initiate a remote attestation process with the partner system 115 to verify
the authenticity
of platform 100. The request from partner system 115 may include a nonce N (a
non-
predictable random value) that has been generated for the purpose of remote
attestation.
Trust manager 127 receives the request including the nonce N, and in turn
sends the
nonce and a request to a Trusted Platform Module (TPM) 135 on platform 100 for
key
attestation.
[00149] A TPM 135 is designed to provide hardware-based security-related functions. A TPM 135 may include multiple physical security mechanisms to make it tamper resistant, such that other software and components are unable to tamper with the functions of the TPM 135.
[00150] TPM key attestation uses an Endorsement Key (EK) that is unique to each TPM 135 and is generated at manufacturing. The trust in the EK is based on the secure and tamper-proof storage of the EK in the TPM 135 and on the fact that the EK's certificate chains to the TPM manufacturer's issuing Certificate Authority (CA). That is, the EK's certificate can be cryptographically verified by the TPM manufacturer's issuing CA. One or more Attestation Identity Keys (AIKs) may be generated by the TPM 135 and signed with the EK. The AIK can be verified by a trusted Certificate Authority.
[00151] In some embodiments, the request from Trust Manager 127 to a TPM 135
includes one or more current Platform Configuration Register (PCR) values of
platform
100. The request may optionally include a TPM version number or any other
information
required for TPM 135 to sign the PCR values. PCR values are used primarily to
store
system measurements and cannot be arbitrarily overwritten. PCR values may be
hash
values which are computationally infeasible to forge. Some PCR values may be reset to
reset to
a default value, which would require proper permission.
[00152] TPM 135 receives the request from Trust Manager 127 and proceeds to
sign the
PCR values with an Attestation Identity Key (AIK), then sends a Signed
Response
including the nonce, the PCR values and the AIK back to Trust Manager 127.
Trust

Manager 127 then sends the Signed Response to partner system 115, which may
have a
Partner Portal 116 installed thereon for analyzing and verifying the Signed
Response.
[00153] Partner system 115 receives the Signed Response and verifies that the signed data is authentic and trustworthy by verifying that the PCR values and the AIK signature are accurate. For example, partner system 115 may verify that the AIK is valid through a trusted Certificate Authority. For another example, partner system 115 may verify the PCR values are trustworthy by comparing the values to stored values in a database which maps PCR values to a trust level. Partner system 115 may further verify that the PCR values are current by checking that the nonce in the Signed Response corresponds to the nonce sent by the partner in its initial request for attestation.
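A hedged sketch of the partner-side checks described above follows: confirm the nonce matches the attestation request, compare the reported PCR values against known-good measurements, and verify the AIK signature. The data layout and the use of an Ed25519 key for the AIK are simplifying assumptions; an actual TPM quote has its own structure and signature scheme.

```python
# Illustrative partner-side verification of a signed attestation response.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

TRUSTED_PCRS = {("a1b2c3", "d4e5f6")}          # placeholder known-good PCR sets

def verify_signed_response(response: dict, expected_nonce: str,
                           aik_public_key: ed25519.Ed25519PublicKey) -> bool:
    payload = json.dumps(
        {"nonce": response["nonce"], "pcrs": response["pcrs"]}, sort_keys=True).encode()
    try:
        aik_public_key.verify(bytes.fromhex(response["signature"]), payload)
    except InvalidSignature:
        return False
    if response["nonce"] != expected_nonce:    # freshness: ties the quote to this request
        return False
    return tuple(response["pcrs"]) in TRUSTED_PCRS   # measurements map to a trusted state

if __name__ == "__main__":
    aik = ed25519.Ed25519PrivateKey.generate()
    payload = json.dumps({"nonce": "n-123", "pcrs": ["a1b2c3", "d4e5f6"]}, sort_keys=True).encode()
    response = {"nonce": "n-123", "pcrs": ["a1b2c3", "d4e5f6"],
                "signature": aik.sign(payload).hex()}
    print(verify_signed_response(response, "n-123", aik.public_key()))   # True
```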
[00154] In some embodiments, instead of PCR values, another hash value may be
used,
such as a hash value of software code of platform 100, where the hash code
represents a
current state of platform 100.
[00155] Once partner system 115 is satisfied, based on the Signed Response, that the Clean Room 300 running on platform 100 is authentic and trustworthy, an SSL/TLS handshake may occur at step 430 in order to establish a secure communication channel.
[00156] At step 440, encrypted data may be transmitted from partner system 115
to
platform 100 using the secure communication channel. In some embodiments, a
public-
private key pair may be used to encrypt the data. As described herein,
Security and
Encryption unit 125 may send an access key (public key) to partner system 115
using the
communication channel. The partner may use the access key to encrypt all data
being
transmitted on the communication channel. When Clean Room 300 receives the
encrypted
data through the communication channel, a corresponding private key may be
used to
decrypt the data, so that the data may be cleaned, normalized and processed
accordingly.
Partner portal 116 (see FIG. 5) may store the public key(s) assigned to
partner system 115
in a partner keystore. Clean Room 300 may store the corresponding private key
to each
public key in a keystore 130. Keystore 130 may store a plurality of private
keys, each
corresponding to a public key that is assigned to a partner. A partner system
115 may be
assigned one or more public keys for encrypting data.

[00157] In some embodiments, since arbitrary-length strings may make encrypted
data
identifiable, data sets may be pre-processed prior to transmission. For
example, one or
more data strings may be padded to a specific length, such as a maximum length
allowed
by the system. In other embodiments, data strings may be broken down into a
predefined
structure, and each atomic component may be hashed or encrypted prior to
transmission.
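For example, padding every field to an agreed maximum length before encryption could be sketched as follows; the 64-byte maximum and the one-byte length prefix are illustrative assumptions.

```python
# Illustrative fixed-length padding so ciphertext sizes do not leak which value was encrypted.
MAX_FIELD_LENGTH = 64

def pad_field(value: str, max_len: int = MAX_FIELD_LENGTH) -> bytes:
    raw = value.encode("utf-8")
    if len(raw) > max_len - 1:
        raise ValueError("field longer than the agreed maximum")
    # One length byte followed by the data, then zero padding up to max_len.
    return bytes([len(raw)]) + raw + b"\x00" * (max_len - 1 - len(raw))

def unpad_field(padded: bytes) -> str:
    return padded[1:1 + padded[0]].decode("utf-8")

print(len(pad_field("Alice")), unpad_field(pad_field("Alice")))   # 64 Alice
```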
[00158] FIG. 27 shows another schematic diagram of an example Clean Room 300
for
processing secure transaction data according to some embodiments. Clean Room
300
may include a Data Manager 134 configured to send the public key of one or more
enclaves to
a partner portal 116 for encryption of data at the partner portal. The
enclaves 133a, 133b,
133n may be referred to as destination enclaves as each enclave may be
selected by Data
Manager 134 to be a destination of encrypted data from partner portal 116. A
file system
such as Hadoop File System (HDFS) may be included in Clean Room 300 to manage the
encrypted data stored by the enclaves 133a, 133b, 133n.
[00159] In some embodiments, a partner portal 116 may initiate a communication

channel 215 through TLS or VPN with Data Manager 134 for sending data to Clean
Room
300. The partner portal 116 may first transmit to Data Manager 134 a request
indicating
that data is to be transmitted to Clean Room. In some embodiments, the request
may
include information representative of an amount of data to be transmitted.
Based on the
data request, Data Manager 134 may select one or more destination enclaves
133a, 133b,
133n for receiving the incoming data from partner portal 116.
[00160] In some embodiments, Data Manager 134 may select the destination
enclaves
based on the amount of data to be ingested by each enclave, such that each
selected
destination enclave is specified to receive a specific amount of data from
partner portal
116 through this communication session. In addition, Data Manager 134 may
select a
public key for each of the destination enclaves and send the one or more public
keys, each
corresponding to a selected destination enclave, to partner portal 116, so the
partner portal
can encrypt raw data using the appropriate public key prior to transmission of
encrypted
data via communication channel 215. For example, Data Manager 134 can send
information representative of an upper limit of data amount to be received by
each
destination enclave and corresponding public key (e.g. "MaxSize,
PublicKeyID"), so

partner portal 116 can encrypt the appropriate amount of incoming data for
each
destination enclave, in a manner that is consistent with the requirements of
the destination
enclaves.
[00161] Once partner portal 116 receives the information representative of
data amount,
destination enclave(s) and public key(s) from Data Manager 134, partner portal
116 may
proceed to encrypt the raw data. For example, partner portal 116 may randomly generate a 256-bit Data Encryption Key (DEK) for each destination enclave and encrypt some raw data with the respective DEKs using AES-256 in CBC or GCM mode. Partner
portal 116
may generate DEKs based on the number of destination enclaves and
corresponding
number of public keys. A different DEK may be generated for each destination
enclave,
and thus for each public key associated with the destination enclave. Partner
portal 116
may then encrypt each of the DEKs using an appropriate public key based on the

corresponding destination enclave for which the DEK is generated. Next,
partner portal
116 may send the encrypted data along with the encrypted key (e.g. encrypted
DEK) to
Data Manager 134 via communication channel 215.
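A hedged sketch of this envelope-encryption flow is shown below: a fresh 256-bit DEK per destination enclave, AES-256-GCM for the bulk data, and the DEK wrapped under that enclave's public key. RSA-OAEP is used for the wrapping purely for illustration; the platform could equally use an ECDH-based key encapsulation.

```python
# Illustrative envelope encryption: per-enclave DEK, AES-256-GCM bulk encryption,
# DEK wrapped under the destination enclave's public key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

def encrypt_for_enclave(enclave_public_key, raw_data: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)      # fresh per-enclave data encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, raw_data, None)
    wrapped_dek = enclave_public_key.encrypt(dek, OAEP)   # only the enclave can unwrap it
    return {"nonce": nonce, "ciphertext": ciphertext, "wrapped_dek": wrapped_dek}

if __name__ == "__main__":
    enclave_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bundle = encrypt_for_enclave(enclave_key.public_key(), b"partner transaction records")
    # Inside the destination enclave: unwrap the DEK, then decrypt the payload.
    dek = enclave_key.decrypt(bundle["wrapped_dek"], OAEP)
    print(AESGCM(dek).decrypt(bundle["nonce"], bundle["ciphertext"], None))
```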
[00162] In some embodiments, the communication channel 215 may be a VPN
communication channel, in which case partner portal 116 and Clean Room 300
have both
been verified to be authentic.
[00163] In some embodiments, the communication channel 215 may be established
and
maintained under TLS, similar to the TLS channel between a partner system 115
and a
trust manager utility 127 of Security and Encryption unit 125, as described
above in
relation to FIG. 4.
[00164] A client system 119 may submit a query 118 to resource manager 1100 on
Clean
Room 300. The query may be a data query sent through communication session
216. In
some embodiments, a client system 119 must be an authorized party to Clean
Room 300
in order to send data queries; to this end, resource manager 1100 may be
configured to
interact with the client system to ensure that the client system is an
authorized party and
has proper permission for the query. Resource manager 1100 may return an
answer to

the client system in response to the query, once the client system has been
verified to
have the proper permission for the query.
[00165] In order to send the data query, the client system may initiate an
authenticated
TLS communication session 216 with resource manager 1100. The communication
session 216 may be established and maintained in a manner similar to the TLS
channel
between a partner system 115 and a trust manager utility 127 of Security and
Encryption
unit 125, as described above in relation to FIG. 26.
[00166] Through the TLS communication protocol, resource manager 1100 can
verify
that the client system is an authorized party to Clean Room 300. Once the
client system
has been verified as an authorized party, resource manager 1100 may transmit,
and
display at the client system, one or more data analytics to which the client
system has
access. The client system may elect one or more options from the displayed
data
analytics options. Some of the data analytics may require additional
information, which the
client system may be configured to supply. The client system may then send the
complete
data query to resource manager 1100.
[00167] Resource manager 1100 may receive the data query from the client
system, and
proceed to send the query to application manager 1124 in order to launch the
data
analytics based on the data query from the client system. Application manager
1124 may
be an application configured to generate one or more enclaves 133a, 133b, 133n
in order
to run analytics on the encrypted data using the enclaves. In some
embodiments, one or
more worker nodes may be used to perform the required data analytics.
[00168] In some embodiments, one or more data analytic operations may be open
for
inspection and/or signed by all authorized parties participating in Clean Room
300 to
assure the authorized parties that the Clean Room is secure and intact.
[00169] In some embodiments, enclaves 133a, 133b, 133n may have authenticated and encrypted communication between data/documents stored thereon. For example, between one or more pairs of enclaves 133a, 133b, 133n, a TLS communication channel may be established to ensure secure communication and exchange of data between the enclaves.

[00170] In some embodiments, the system includes a trusted execution environment including the protected memory region, the protected memory region being inaccessible to the one or more processors when operating outside the trusted execution environment. The processor(s) configured to operate inside the trusted execution environment can be configured for: generating information associated with the entity data within the trusted execution environment, and passing the information associated with the entity data for communication outside the trusted execution environment. For example, the information associated with the entity can be a token, and/or can be data re-encrypted with a key associated with a recipient, and/or can be data derived from the entity data. For example, if the entity data is a person's age, the derived data can be a token or other data message indicating that the person is over 21, which provides the information required by the recipient without disclosing the person's actual age or any other information.
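A minimal sketch of generating such derived information inside the trusted execution environment follows: the raw attribute (a date of birth) stays inside the protected region, and only a signed "over 21" assertion crosses the boundary. The signing key handling and message format are illustrative assumptions.

```python
# Illustrative TEE-side derivation: only the derived claim and its signature
# leave the protected region; the underlying date of birth does not.
import json
from datetime import date
from cryptography.hazmat.primitives.asymmetric import ed25519

ENCLAVE_SIGNING_KEY = ed25519.Ed25519PrivateKey.generate()   # held inside the TEE

def derive_over_21_assertion(date_of_birth: date, today: date) -> dict:
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day))
    claim = {"claim": "age_over_21", "value": age >= 21, "asserted_on": today.isoformat()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": ENCLAVE_SIGNING_KEY.sign(payload).hex()}

print(derive_over_21_assertion(date(1993, 5, 14), date(2019, 7, 24))["claim"])
```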
[00171] With reference to FIG. 25, in some embodiments, the system is configured to verify the recipient device before providing access to the personal information. In some embodiments, the recipient (User 2) device forms a communication channel with the Verifier (in this case, the Client Special Space).
[00172] At 2, the Verifier makes a "Proof Request".
[00173] At 3, assuming User 2 has an identity attribute to attest to their identity, that proof is sent back to the Client Special Space.
[00174] At 4, the special space shares the personal information after
appropriate
verification.
[00175] Embodiments described herein are directed to computer systems and
devices
directed to providing a cryptographic platform for generating and transmitting
messages that
are adapted to assert attributes about various objects (e.g., user profiles)
without indicating
any more than is actually required, and corresponding methods and computer
readable
media storing machine-interpretable instruction sets for performing the
methods.
[00176] The computer systems and devices, in accordance with some embodiments,
are
adapted to a high-volume, scalable system, which dynamically responds to data
credential

requests of one or more users or one or more computer systems requesting
identity /
credential proofs.
[00177] In some embodiments, the assertions are conducted using mobile
endpoints
(e.g., user devices) which may have limited computational performance and
resources,
and accordingly, an improved cryptographic approach and system is proposed
that
enables the assertion functionality through the passing of cryptographically
generated
messages between devices. An improvement associated with the proposed
cryptographic
approach of some embodiments is that it is able to operate in a secure and
scalable way,
even on limited computational resources (e.g., those available on an
unenhanced
smartphone).
[00178] Prior approaches required large numbers of large messages being sent,
which
made the approaches impractical where resources were limited. The approach
proposed
herein requires fewer messages and streamlines the amount of cryptographic
computations
required to make these assertions. For example, Belenkiy describes an approach
which
requires a large number of computational steps, which can have deleterious
impacts on
performance.
[00179] Credential verification, when conducted manually, is a tedious process
prone to
falsification and also over-provisioning of information. In an example, Alice
is a law-abiding
26 year old, and she would like an alcoholic beverage. Before selling beer to
Alice, Bob
wants to make sure of two things: She is legally allowed to drink, meaning 21
years of age
or more, and that she is not a problem customer.
[00180] Alice thinks the conditions are fair, and they both know presenting
her ID card
would prove that she does satisfy them. She could provide her driver's
license, which
shows her name and date of birth. She would like to not disclose anything to
him other
than the fact that she satisfies the conditions. However, by providing her
driver's license,
Bob ends up knowing more than he needs to know (e.g., age and specific date of
birth as
opposed to the fact that she is above 21 years of age and is not the problem
customer).
Further, aside from visual inspection of the license, Bob has practical
difficulties in verifying
that the driver's license is not a fake driver's license.

[00181] Accordingly, a challenge involves providing a robust credential
verification
whereby Alice is able to prove to Bob that she does satisfy Bob's customer
policy, while
revealing nothing other than the fact to him. As an example, consider a policy
of being
older than 21. That is all Bob needs to know. He does not and should not know
that Alice
is in fact 26.
[00182] The system is configured to adduce stripped down credentials to meet
Bob's
customer policy without exposing additional information. In particular,
cryptographic
techniques are utilized that undertake specific steps and computational
approaches to
provide a secure, yet computationally efficient mechanism for proof
generation.
[00183] Accordingly, an issuer device issues one or more signed token data
objects,
which are stored on a client's device for later usage. Upon encountering a
situation where
verification is required, the client's device is configured to dynamically
generate proof data
messages which are then provided to the verifier's computing device (e.g., the
verifier's
smart phone, a point of sale device, an access control system, a mantrap
gate). The verifier is able to conduct a verification check using the proof data message to see only that the conditions required in the original verification check message are satisfied, without being provided the actual underlying characteristics. As the proof data messages are generated using the token data objects, the verifier is able to validate that such a proof data message is associated with a trusted issuer.
[00184] There are two different types of proofs that are proposed in some
embodiments,
these being exact match proofs (non-zeroness protocol; e.g., this person
either matches
someone on a whitelist or doesn't match anyone on a blacklist), and
conditional proofs
(e.g., based on an inequality condition being matched, such as over 18 years
old?).
[00185] As described in various embodiments herein, improved cryptographic
protocols
are proposed that, relative to prior approaches, reduce an overall
cryptographic complexity
without a significant reduction in security. Accordingly, the proofs can be
generated more
quickly, which improves convenience, especially where a system is being
established for
mass adoption and client device characteristics are highly variable across the
users (e.g.,
some users may be using devices with extremely limited capabilities).

[00186] An enhanced solution is described herein that is adapted for
protecting a client's
personal information and only providing what is needed by leveraging a
client's special
space using a secure enclave and a blockchain solution, in accordance with
some
embodiments.
[00187] A blockchain infrastructure and the secure enclave each store data
sets
representing aspects of signed attributes and, in some embodiments, a proof
response
logic. The blockchain infrastructure can include distributed ledger technologies in combination with cascading encryption to provide an immutable ledger. In some embodiments, the proof requests and responses can be conducted using intelligent connected devices such as a mobile device, or wearable devices (e.g., a smartwatch that is connected to a mobile device across Bluetooth low energy).
[00188] In an example embodiment, there are multiple authoritative issuers who
are able
to provide signed attributes (e.g., for storage in secure enclaves or on a
distributed ledger
blockchain data structure). Secure enclaves can be utilized, or other types of
hardware
protected spaces are usable.
[00189] A registration mechanism and method is utilized to initialize and
populate the
attributes using public and secret (private) encryption keys. Issuer devices
create attribute
data records that are generated using a combination of a client's public key
and an issuer's
secret key (e.g., using digital signatures or encryption / decryption). The
attributes can be
made publicly available, for example, on a blockchain, whereby the attributes
can be
signed by an issuer's secret key but encrypted using the client's public key.
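As a small illustration, signed attribute records could be published to an append-only, hash-chained log standing in for the blockchain mentioned above; the record fields and chaining scheme in the sketch below are assumptions for illustration.

```python
# Illustrative hash-chained, append-only log of opaque (signed, encrypted) attribute records.
import hashlib
import json

class AttributeLedger:
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> str:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        block_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.blocks.append({"prev_hash": prev_hash, "record": record, "hash": block_hash})
        return block_hash

    def verify_chain(self) -> bool:
        prev_hash = "0" * 64
        for block in self.blocks:
            body = json.dumps(block["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if block["prev_hash"] != prev_hash or block["hash"] != expected:
                return False
            prev_hash = block["hash"]
        return True

ledger = AttributeLedger()
ledger.append({"issuer": "bank-01", "client_pub": "opaque", "ciphertext": "opaque", "signature": "opaque"})
print(ledger.verify_chain())   # True
```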
[00190] A verification mechanism and method is provided whereby a
communications
channel can be established with an authenticated verifier device, which
initiates a proof
request, which triggers a process to establish a proof response that is
transmitted to the
verifier.
[00191] An example use case includes a specially configured age verifier
terminal, which
for example, can include a graphical user interface rendering visual and coded
objects
such as a quick response code that can be scanned by a mobile device. Upon
scanning
the quick response code, the verification mechanism is invoked, and the mobile
device

may share data sets on a backend communications network such as the Internet.
The proof response can be transferred to the verifier device based on identifiers or information stored on the age verifier terminal or encoded within the quick response code, with the age verifier terminal returning true or false such that both a verifier, such as a cashier, and the customer are able to visually confirm the result. The proof response rendering, for example, may be restricted to a true/false determination (e.g., additional private information is not disclosed or rendered).
[00192] Embodiments described herein are directed to computer systems and
devices
directed to providing a cryptographic platform for generating and transmitting
messages that
are adapted to assert attributes about various objects (e.g., user profiles)
without indicating
any more than is actually required, and corresponding methods and computer
readable
media storing machine-interpretable instruction sets for performing the
methods.
[00193] There are computing devices that interoperate with one another in
concert with
the cryptographic platform, including devices associated with issuers,
verifiers, and clients.
The issuers are trusted entities which provide cryptographically validated
credential
messages that are issued to the client devices for storage thereon.
[00194] The cryptographically validated credential messages are then
presentable to a
verifier (e.g., a third party organization) that seeks to validate that
identity or aspects of the
identity of the user associated with the client device. The cryptographically
validated
credential messages are configured such that the user is able to validate such
identity or
aspects without providing additional information associated with the user that
is not
requested (e.g., as opposed to presenting all the information on a driver's
license).
[00195] The credential assertion platform is a high-volume, scalable system
which
dynamically responds to data credential requests of one or more users or one
or more
computer systems requesting identity / credential proofs.
[00196] In some embodiments, the assertions are conducted using mobile
endpoints
(e.g., user devices) which may have limited computational performance and
resources,
and accordingly, an improved cryptographic approach and system is proposed
that

enables the assertion functionality through the passing of cryptographically
generated
messages between devices.
[00197] An improvement associated with the proposed cryptographic approach of
some
embodiments is that it is able to operate in a secure and scalable way, even
on limited
computational resources (e.g., those available on an unenhanced smartphone).
[00198] For example, a device with limited computational resources can include
basic
smartphones, which may be one or more generations out of date, and also have
limited
amounts of on-board memory (e.g., 1-4 GB of memory) and storage (e.g., 8-64 GB
of solid
state memory). The transfer protocols as between the client devices and the
verifier
devices may also have limited bandwidth (e.g., through near-field
communications (NFC),
Bluetooth, limiting communications to only several Mbit/s).
[00199] Prior approaches required large numbers of large messages being sent,
which
made the approaches impractical where resources were limited. The approach
proposed
herein requires fewer messages and streamlines the amount of cryptographic
computations
required to make these assertions.
[00200] As described herein, an improved cryptographic mechanism and protocol
is
proposed that reduces an overall number of data messages and/or cryptographic
steps
required to be taken to generate the proof data messages. For example, the
method of
Belenkiy requires 4 randomizations, 3 group multiplications and 7 group
exponentiations,
which includes elliptic curve exponentiations that are computationally
expensive (e.g.,
involves more than 256 operations on 512 long integers). In a proposed non-
zeroness
approach of some embodiments, a field inversion is provided, which itself is
an expensive
operation, but reduces a considerable number of group exponentiations.
[00201] The proof data messages are designed to have a "soundness" attribute
whereby
a malicious verifier is unable to find out from the proof data message more
information than
what is being provided in the proof data message (e.g., can't find out the
underlying
characteristic values).

[00202] A computer implemented identity brokerage solution is described in
accordance
with various embodiments. The identity brokerage solution is adapted to
address problems
with identity and attribute verification, using computer implemented
cryptographic
approaches to provide a robust mechanism for conducting verifications while
reducing the
provisioning of extraneous information (e.g., information not required for the
verification).
[00203] Credential verification, when conducted manually, is a tedious process
prone to
falsification and also over-provisioning of information.
[00204] FIG. 28 is a graphical representation of parties to a verification
event, according
to some embodiments. The parties to a verification event include a prover
(e.g., the entity
seeking to prove the entity's characteristics and/or identity, for example,
through
programmatic mobile application client having a token stored thereon), a
verifier (e.g., the
entity seeking to verify the prover's characteristics and/or identity in
accordance with a
policy), and an issuer (e.g., the entity, such as a financial institution,
which has a
relationship with the prover and can attest to the prover's characteristics
and/or identity,
whose representations are trusted by the verifier).
[00205] In accordance with various embodiments, the prover should be able to hide as many attributes as the prover seeks, and to prove statements that follow from their attributes while the verifier gains zero knowledge of the underlying attributes: "I've lived in the same city over the last 5 years."
[00206] The prover's client holds credentials that are digitally signed by the
issuer
("tokens"). An example token are those provided by U-Prove specifications. A U-
Prove
token can include a credential similar to a PKI certificate with cryptographic
wrapping of
attributes to aid in reducing unwanted tracking of users.
[00207] For example, a token may have various artifacts wrapped therein and
may
include information, such as issuer parameters, including issuer public key
information
(e.g., coupled to an issuer's private key) that can be used for signing or
encrypting elements
of information stored thereon to prove the veracity of such signature or to
protect sensitive
information. The issuer signature can be used by the prover or verifier to
verify issuer
parameters being relied upon, and the token itself, in some embodiments, may
have one

or more data fields storing information such as token usage restrictions,
validity period,
and token metadata.
[00208] In some embodiments, the token is jointly created using a combination
of issuer
information and prover information. For example, there may be information
stored thereon
that is established in conjunction and hidden from the issuer, such as contact
information,
encryption key, or verifier supplied nonces, etc.
[00209] During issuance of a token, an issuer may authenticate the existence
and access
/ control that the prover has over the prover's device.
[00210] Tokens include attributes that can be converted from a natural form to
a
sequence of large numbers (field elements) suitable for public key operations.
These
public key operations include anonymous credentials protocols.
[00211] Attributes are organized in a tree. An attribute can either come with
a value, in
which case it is called a leaf attribute, or bundle a number of sub-attributes, in which case it
in which case it
is called a parent attribute.
[00212] For example, consider a geographic location attribute. That would be
most
naturally divided up into a latitude sub-attribute and a longitude sub-
attribute. Thus, a
credential token can be considered as consisting of a single root attribute
containing all others
as descendants.
[00213] Regardless of whether an attribute is disclosed, committed to, or
hidden, the
prover may wish to communicate metadata about it to the verifier. The most
important
such property is an attribute's name. The number "170" in an attribute would
mean nothing
without the name "height" attached. Additionally, such numeric attributes
require units as
context. The number "170" is absurd if considered in inches but plausible when
in
centimeters.
[00214] It is important to disclose this metadata even when attributes are
being
committed to. Consider the non-trivial example of heights and units. Consider
an attraction
park that refuses to admit people taller than 180 cm on a rollercoaster.
Without the proper

context provided, a 188 cm tall person can abuse a height attribute of 74 inches and successfully prove 74 < 180, thereby putting him and others in danger.
[00215] In some embodiments, the token can include fields that additionally
give the
users an ability to decide if they want to hide an attribute's metadata. For
example, even if
hidden, an attribute attesting to a negative syphilis test can carry social
stigma.
[00216] An attribute will be serialized into one "raw attribute" (a number or
string) if the
user chooses its metadata to depend on its parent's. If not, it will be
serialized into two, the
first representing its metadata and the second representing the value.
[00217] Every attribute's metadata contains an array called "subAttributes". If
the array is
empty, the attribute is considered to be a leaf attribute. Each sub attribute
has a
corresponding entry in the array. If the sub attribute is encoded
independently, the entry
will be an integer, denoting how many raw attributes the sub attribute and all
of its
descendants (subtree) together will take. If it is encoded dependently, the
subAttributes
entry will be all of its metadata.
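A hedged sketch of these serialization rules follows, using a simple tree node with a per-node flag indicating whether its metadata is encoded dependently (folded into the parent's metadata) or independently (emitted as its own raw attribute). The field names and dict-based metadata encoding are assumptions; the example mirrors the worked example in the following paragraphs.

```python
# Illustrative flattening of an attribute tree into "raw attributes" per the rules above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Attribute:
    name: str
    type: str = ""
    unit: str = ""
    value: Optional[int] = None                  # present only on leaf attributes
    children: List["Attribute"] = field(default_factory=list)
    dependent: bool = True                        # metadata folded into the parent's entry?

def metadata_of(attr: Attribute) -> dict:
    sub = [metadata_of(c) if c.dependent else len(raw_attributes(c, include_metadata=True))
           for c in attr.children]
    return {"name": attr.name, "type": attr.type, "unit": attr.unit, "subAttributes": sub}

def raw_attributes(attr: Attribute, include_metadata: bool) -> list:
    out = [metadata_of(attr)] if include_metadata else []
    if not attr.children:                         # leaf: its value is one raw attribute
        out.append(attr.value)
    for child in attr.children:
        out.extend(raw_attributes(child, include_metadata=not child.dependent))
    return out

# Worked example from the following paragraphs: home coordinates plus height.
token = Attribute(name="", children=[
    Attribute(name="homeAddress", type="polarCoordinates", children=[
        Attribute(name="longitude", type="polarCoordinate", unit="mRad", value=897),
        Attribute(name="latitude", type="polarCoordinate", unit="µRad", value=624770,
                  dependent=False),
    ]),
    Attribute(name="height", type="length", unit="cm", value=188),
])
for raw in raw_attributes(token, include_metadata=True):
    print(raw)                                    # five raw attributes in total
```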
[00218] This example describes a token for an individual residing at 35.796682 N, 51.416549 E, and 188 cm tall. In radians, the coordinates are 0.624769962188 N and 0.897388070061 E.
[00219] The token from this example will serialize into the following, each bullet point representing one raw attribute:
• {subAttributes: [{name: "homeAddress", type: "polarCoordinates", subAttributes: [{name: "longitude", type: "polarCoordinate", unit: "mRad", subAttributes: []}, 2]}, {name: "height", type: "length", unit: "cm", subAttributes: []}]}
• 897
• {name: "latitude", type: "polarCoordinate", unit: "µRad", subAttributes: []}
• 624770
• 188
[00220] A proof request is issued from the verifier to the prover's client,
asking the prover
to give the verifier cryptographic assurance that according to some issuer
trusted by the
verifier, the prover's attributes satisfy a certain (arbitrary) policy (e.g.
older than 21, as far
as provisioning alcohol is concerned), and these proof requests typically
contain one or
more challenge messages. A proof request can include a nonce, types of
conditions, etc.,
and these conditions may be encapsulated as inequalities (e.g., intUserAge >
18), or
logical statements (e.g., intUserID not equal to 22412). One or more lookup
reference
data structures may also be passed, which can include blacklists, whitelists,
values for
various constants (e.g., MINIMUMDRINKINGAGE).
[00221] A proof is provided by the prover through the client as a response to the
verifier's
request, which includes cryptographic assurance that the prover's credentials
satisfy the
verifier's proof request, the cryptographic assurance being as good
as the
issuer's word. The proof is a data message that encapsulates various
information (e.g.,
proof responses directed to a sigma protocol). The data message includes
sufficient
information such that the verifier is able to receive the data message, and
conduct steps to
validate and verify that such proof responses are indeed acceptable. In
processing proof
responses, the proof data message can include aspects indicative of the
identity of an
issuer, and a potential step is the validation by the verifier that such
issuer is indeed
trustworthy as a source of credential authentication.
[00222] The proof responses can be processed to generate gatekeeping control
signals,
which, for example, in an example embodiment, may be as simple as a device
that
operates a lightbulb whenever someone is validated as being of age (e.g., so
that a

bouncer at a bar is able to know that this person should be allowed in), or as
complex as
control mechanisms that automatically open doors, enable access to online
accounts (e.g.,
on a web portal), etc. Accordingly, the verifier systems can include physical
and electronic
mechanisms which can generate alerts, notifications, actuation / control
signals, digital or
electronic signals, among others.
[00223] Factors for assessing identity brokerage solutions include how light
the required
infrastructure is (e.g., it may be important to reduce the need for
specialized hardware,
centralized devices, or complex distributed systems that make deployment and
entry
difficult), a level of computational efficiency, a simplicity of cryptography,
a level of un-
linkability between various parties (e.g., the issuer should not be able to
aggregate
additional data about the client, even in collusion with verifiers), and a
flexibility and level of
minimalism of disclosed information.
[00224] Any solution requiring the issuer to be online at verification time
risks exposing
additional information about the client to the issuer. This is especially
concerning in cases
where the issuer and the verifier collude to track client activities.
[00225] Reduced complexity is desirable as a solution may be less likely to
suffer
implementation flaws, be more easily understood, and less likely to
theoretically break due
to reliance on unusual hardness assumptions. If computational operations
have
optimized / low-level implementations, the solution may be able to operate
using less
computing resources and/or time.
[00226] The identity protocols, ideally, should require little time, take
little power, have
few rounds of message transmission, and pass messages having small sizes
and/or
overhead. This is especially important where the parties implement portions of
the identity
brokerage solution on mobile devices to handle one or more verification
events. The
mobile devices have limited computational, storage, and interface
capabilities.
[00227] The parties hold corresponding public / secret (e.g., private) key
pairs. The public
keys can be utilized to determine the veracity of information signed using the
private keys,
and to encrypt information that can be decrypted using the corresponding
private key.

[00228] The private keys can be utilized to sign information and to decrypt
information
that has been encrypted using the corresponding public key, and in some cases,
produce
Zero-Knowledge Proofs of Knowledge. Each secret key is maintained by the
corresponding computing device associated with the corresponding entity.
[00229] The parties each have corresponding computing systems, which are used
to
electronically communicate amongst one another (e.g., through a network) and
to perform
various cryptographic activities, including signing, verifying signatures,
encrypting
information, decrypting information and various anonymous credential issuance,
proof and
verification protocol implementations. Each verification event is associated
with validating
whether all logical conditions of the proof request are satisfied. A positive
determination
may lead to access / service / goods being provisioned to the prover. A
negative
determination may lead to access / service / goods not being provisioned to
the prover.
[00230] A specific technological implementation of providing identity
assertions with
minimal disclosure is described in various embodiments. Three separate
approaches are
described, along with variations thereof. These approaches include (1) an OAuth token
based design, (2) a secure enclave based design, and (3) an anonymous
credentials
based design.
[00231] In some embodiments, a proposed approach is provided in an anonymous
credentials based design whereby a client receives token data structure(s)
that are stored
on data storage, and asynchronously, the client gets a verifier request from a
verifier. The
verifier may, for example, have a list of issuers that the verifier trusts.
Certain organizations may be trusted for certain information, such as a bank
for
employment or financial status, a university for educational attainment
characteristics,
among others. The client generates a proof (e.g., encapsulated as a proof data
message)
based on the token and the verifier request, and the proof can be established
as either a
non-zeroness proof or a conditional proof. Token objects can be received from
or
computed jointly in a multiparty protocol with an issuer computing device.
[00232] For a non-zeroness proof, the proof generation approach can include a first modular inverse, two randomization steps, two group exponentiations, and a group multiplication. In particular, the steps in an example non-limiting embodiment can be established as:
[00233] (1) Receive a verification request data message from the verifier computing device, the verification request data message including at least a nonce ω.
[00234] (2) Compute t = x^{-1} mod p, where x is the attribute value from the token, and p is the order (e.g. size, number of elements) of the discrete log group (e.g. elliptic curve, Diffie-Hellman group) according to the cryptographic standards the parties choose to use; t is the modular inverse of x mod p.
[00235] (3) Sample a first random number r_1 and a second random number r_2, such that r_1, r_2 ∈ Z_p.
[00236] (4) Compute R = C_x^{r_1} h^{r_2}, where R is effectively a commitment to random values r_1 and r_2, C_x is a commitment to attribute x, and h is a group generator taken from the cryptographic specifications (e.g. elliptic curve, Diffie-Hellman group). A commitment is a representation of a value that is both hiding and binding: hiding in the sense that the recipient of the commitment cannot find out anything about what the value of the commitment is, and binding in the sense that the sender later cannot pretend that it was a commitment to another value than it originally was.
[00237] (5) Compute c = H(C_x, R, ω), where c is the proof challenge, following the Fiat-Shamir Heuristic.
[00238] (6) Compute z_1 = c·t + r_1 and z_2 = −c·t·y + r_2, where z_1 and z_2 are proof responses in a sigma protocol.
[00239] (7) Encapsulate and transmit one or more proof data messages including R, z_1 and z_2 as data objects to the verifier computing device, such that the verifier computing device is able to compute c = H(C_x, R, ω) and confirm that g^c·R = C_x^{z_1} h^{z_2}, the verifier computing device controlling provisioning of access to a secured resource responsive to the confirmation that g^c·R = C_x^{z_1} h^{z_2}.
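By way of illustration, the following is a minimal, runnable Python sketch of steps (1) through (7), using a small multiplicative group modulo a safe prime in place of the elliptic curve or Diffie-Hellman group the parties would actually agree on (the subgroup order q below plays the role of p in the text). The group generation, generator selection and challenge encoding are illustrative assumptions rather than parameters described here; a deployment would rely on a vetted cryptographic library.

```python
# Toy non-zeroness sigma protocol: prove the committed attribute x in Cx = g^x h^y
# is non-zero, and verify g^c * R == Cx^z1 * h^z2. Illustrative parameters only.
import hashlib
import secrets

def _is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probable-prime test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def _toy_group(bits: int = 128):
    """Return (P, q, g, h) with P = 2q + 1 and g, h in the order-q subgroup."""
    while True:
        q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if _is_probable_prime(q) and _is_probable_prime(2 * q + 1):
            P = 2 * q + 1
            g = pow(secrets.randbelow(P - 2) + 2, 2, P)   # squaring lands in the subgroup
            h = pow(secrets.randbelow(P - 2) + 2, 2, P)
            if g != 1 and h != 1 and g != h:
                return P, q, g, h

P, q, g, h = _toy_group()

def challenge(Cx: int, R: int, omega: int) -> int:
    """Fiat-Shamir challenge c = H(Cx, R, omega), reduced into Z_q."""
    data = b"|".join(str(v).encode() for v in (Cx, R, omega))
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_nonzero(x: int, y: int, omega: int):
    """Prover side, following steps (2)-(7)."""
    Cx = pow(g, x, P) * pow(h, y, P) % P
    t = pow(x, -1, q)                            # step (2): modular inverse (Python 3.8+)
    r1, r2 = secrets.randbelow(q), secrets.randbelow(q)   # step (3)
    R = pow(Cx, r1, P) * pow(h, r2, P) % P       # step (4)
    c = challenge(Cx, R, omega)                  # step (5)
    z1 = (c * t + r1) % q                        # step (6)
    z2 = (-c * t * y + r2) % q
    return Cx, (R, z1, z2)                       # step (7): proof message

def verify_nonzero(Cx: int, proof, omega: int) -> bool:
    """Verifier side: recompute c and check g^c * R == Cx^z1 * h^z2."""
    R, z1, z2 = proof
    c = challenge(Cx, R, omega)
    return pow(g, c, P) * R % P == pow(Cx, z1, P) * pow(h, z2, P) % P

if __name__ == "__main__":
    omega = secrets.randbelow(q)                 # verifier-supplied nonce
    Cx, proof = prove_nonzero(x=26, y=secrets.randbelow(q), omega=omega)
    print(verify_nonzero(Cx, proof, omega))      # True
```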

[00240] The verifier independently validates the received proof and makes a determination of whether or not to grant access.
[00241] In some embodiments, the verifier is a verifier computing system that
automatically grants access to one or more secured resources, such as a
physical access
entry (e.g., mantrap, revolving doors, locked gateway, locked cabinet), and in
other
embodiments, the system grants access to one or more virtual resources (e.g.,
administrator access on a computer system, logging into accounts, access to
secured
sections of webpages), among others.
[00242] In another example, a comparison protocol may be established (e.g., to
prove
some condition whereby a ≤ b). This can be utilized to establish proof
messages whereby
it is necessary to indicate that a person is of a particular age, that a
person has a particular
minimum creditworthiness, a person has a minimum educational attainment, among

others.
[00243] Consider G to be a discrete log group of prime order p and g and h be
generators
with unknown discrete logs.
[00244] Let numbers q and l be such that q − l = 2^N, and let a and b be two whole numbers such that l ≤ a ≤ b < q.
[00245] Consider commitments A = g^a h^{m_a} and B = g^b h^{m_b} to a and b, respectively.
[00246] To prove that a ≤ b, the following steps can be taken:
[00247] (1) Prover computes C = BA^{-1} = g^{b−a} h^{m_b−m_a} = g^c h^{m_c}.
[00248] (2) Prover produces bit commitments A_i = g^{a_i} h^{m_{a_i}}, B_i = g^{b_i} h^{m_{b_i}}, C_i = g^{c_i} h^{m_{c_i}} for i ∈ {1, ..., N − 1}, where a_i, b_i and c_i are the i-th bits of a − l, b − l and c, respectively. m_{a_i}, m_{b_i} and m_{c_i} are sampled randomly.
[00249] (3) Prover computes A_0 = g^{a_0} h^{m_{a_0}} = A ∏_{i=1}^{N−1} A_i^{−2^i}, and likewise B_0 = g^{b_0} h^{m_{b_0}} = B ∏_{i=1}^{N−1} B_i^{−2^i} and C_0 = g^{c_0} h^{m_{c_0}} = C ∏_{i=1}^{N−1} C_i^{−2^i}.

[00250] (4) For each i ∈ {0, ..., N − 1}, the prover does the following:
[00251] (4.1) Randomly sample r_{a_i}, d'_{a_i} and z'_{a_i}.
[00252] (4.2) Compute R_{a_i, a_i} = h^{r_{a_i}} and R_{a_i, 1−a_i} = h^{z'_{a_i}} (A_i g^{−(1−a_i)})^{−d'_{a_i}}.
[00253] (4.3) Compute d_{a_i} = H(A_i, R_{a_i,0}, R_{a_i,1}).
[00254] (4.4) Compute z_{a_i} = (d_{a_i} − d'_{a_i}) m_{a_i} + r_{a_i}.
[00255] (4.5) Assign z_{a_i, a_i} = z_{a_i}, z_{a_i, (1−a_i)} = z'_{a_i}, d'_{a_i, a_i} = d_{a_i} − d'_{a_i} and d'_{a_i, (1−a_i)} = d'_{a_i}.
[00256] (4.6) Repeat steps 4.1 through 4.5 for B and C.
[00257] (5) Prover sends all A_i, R_{a_i,0}, R_{a_i,1}, d'_{a_i,0}, z_{a_i,0}, z_{a_i,1}, B_i, R_{b_i,0}, R_{b_i,1}, d'_{b_i,0}, z_{b_i,0}, z_{b_i,1}, C_i, R_{c_i,0}, R_{c_i,1}, d'_{c_i,0}, z_{c_i,0}, z_{c_i,1}.
[00258] (6) Verifier checks that A = ∏_{i=0}^{N−1} A_i^{2^i}, B = ∏_{i=0}^{N−1} B_i^{2^i}, and BA^{−1} = ∏_{i=0}^{N−1} C_i^{2^i}.
[00259] (7) For each i ∈ {0, 1, ..., N − 1} the verifier checks that:
[00260] (7.1) A_i^{d'_{a_i,0}} R_{a_i,0} = h^{z_{a_i,0}}
[00261] (7.2) (A_i g^{−1})^{H(A_i, R_{a_i,0}, R_{a_i,1}) − d'_{a_i,0}} R_{a_i,1} = h^{z_{a_i,1}}
[00262] (7.3) Check the same conditions for B and C
[00263] Note: It may be that either a or b is known to the verifier. In such
a case there is
no need to decompose the known number and commitment C will have the same mask

exponent as that of the unknown parameter.
[00264] In some embodiments, the client computing device (e.g., the prover) does not send A_0, B_0 and C_0, to reduce the size of its messages. In that case, in step 6, instead of verifying a relation between the bit commitments, the verifier derives A_0, B_0 and C_0

independently. This aspect may be particularly useful in low data throughput
situations or
where storage space is very limited.
[00265] The comparison method of some embodiments reduces the problem of
comparison to three bit decompositions. As such, the computational burden on
the prover
consists of about 12N-3 group exponentiations.
[00266] In contrast, the method of Belenkiy involves two bit decompositions
and N-1
equality maps each consisting of four 2-variable equations and a total of six
distinct
variables.
[00267] As such, it is estimated that each equality map requires at least 8
group
exponentiations.
[00268] Using the efficient Bit Decomposition implementations of some proposed

embodiments, the two decompositions will require a total of 8N-2 group
exponentiations.
Accordingly, it is estimated that Belenkiy's method requires 16N-10 group
exponentiations.
This demonstrates that for N ≥ 2, the proposed method for the comparison protocol is more efficient, and this superiority becomes increasingly important as the numbers to be
compared scale up.
[00269] In particular, the scale up may occur if the credential verification
system is
utilized across a large number of users.
[00270] In some embodiments, the system includes a credential parsing engine
provided
to receive one or more credentials which, in combination, validate one or more
characteristics of an identity profile of a prover entity.
[00271] A proof generation engine is provided that receives, from a verifier
computing
system, the one or more proof request data structures and the one or more
credentials;
and for each logical condition provided in the one or more proof request data
structures,
parses the one or more characteristics of the identity profile to determine
whether the logical
condition has been satisfied.

[00272] One or more proof output data structures storing signatures or zero
knowledge
proofs of satisfaction of a subset or all of the one or more logical
conditions are returned by
the system (e.g., in the form of data fields). A secure encryption engine and
a secure
processing enclave may be included, in accordance with various embodiments.
[00273] A proof generation engine, in some embodiments, resides at or is
coupled to a
data center of a financial institution, and wherein parsing the one or more
characteristics of
the identity profile includes invoking an electronic comparison against a
stored user profile
of the financial institution corresponding to the prover entity. The example
implementations
are not restricted to such a topology, and other topologies are contemplated,
including a
cloud / distributed resources based proof generation engine.
[00274] In other embodiments, the proof generation engine is coupled to the
secure
processing enclave, which may also be coupled to a verifier computing device.
[00275] In another embodiment, the proof generation engine lies within the
prover's user
device, thus user data will never be provided to the verifier and the issuer
will not be
informed of the transaction taking place.
[00276] In another aspect, the electronic comparison against the stored user
profile of the
financial institution corresponding to the prover entity includes querying one
or more
attributes of the stored user profile and comparing the queried one or more
attributes
against the one or more logical conditions to determine whether individual
logical
conditions of the one or more logical conditions have been satisfied. The
characteristics
and attributes of the user profile can be established and stored on
the portable
client computing device as one or more token data objects that can be received
from or
computed jointly in a multiparty protocol with an issuer computing device.
[00277] The one or more token data objects are generated (e.g., as signed objects or
encrypted objects) using at least an issuer computing device private issuance key. The
one or more token data objects each include one or more signed data elements
representing at least one of the one or more characteristics of the client associated with
the portable client computing device.
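A minimal sketch of generating a signed token data object is shown below, assuming an Ed25519 issuance key and the Python cryptography package purely as a stand-in; the actual signature scheme, serialization, and field names may differ.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()          # issuer's private issuance key

def issue_token(characteristic: dict) -> dict:
    """Sign one characteristic of the client as a token data object."""
    payload = json.dumps(characteristic, sort_keys=True).encode("utf-8")
    signature = issuer_key.sign(payload)
    return {"payload": payload, "signature": signature}

token = issue_token({"attribute": "age_over_18", "value": True})

# The portable client device (or a verifier) can later check the signed data element
# against the issuer's public key; verify() raises on failure.
issuer_key.public_key().verify(token["signature"], token["payload"])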
[00278] In another aspect, the verifier computing system is configured to
encapsulate the
one or more credentials along with the one or more proof request data
structures in a
single data container transmitted to the proof generation engine.
[00279] FIG. 29 is an example OAuth token based method, according to some
embodiments. The OAuth based method portrayed does not address the issue of
unlinkability, and in this example, the prover computing device electronically
communicates to the verifier OAuth tokens containing the verifier's proof request, which
the verifier computing system can use to formulate a query message to be transmitted to
the issuer computing system, and receive a yes/no answer in response. A benefit of this
approach is that it requires relatively light infrastructure, is very efficient, and uses simple,
flexible cryptography.
[00280] However, the issuer computing system needs to be available (e.g., online) to be
able to process the request. In response to a proof request, the prover confers an OAuth
token (not to be confused with credentials) that the verifier can use to query the issuer and
be assured that the prover does indeed satisfy their policy.
[00281] The verifier is provided tokens containing the verifier's proof
request which can
be used to query a computing system associated with an issuer, receiving an
answer, such
as a yes or no response (e.g., a Boolean variable such as TRUE / FALSE or 0 / 1).
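The fragment below is a deliberately simplified sketch of this flow, standing in for real OAuth infrastructure: the issuer mints an HMAC-protected token, the prover confers it on the verifier, and the verifier's query to the (online) issuer yields a yes/no answer. All keys, record layouts, and function names are illustrative assumptions.

import hmac, hashlib, json

ISSUER_KEY = b"issuer-only-secret"                  # held by the issuer alone
ISSUER_RECORDS = {"prover-123": {"age": 27}}        # issuer's view of the prover

def issuer_issue_token(prover_id: str) -> dict:
    """Issuer mints a bearer token binding the prover's identifier."""
    body = json.dumps({"sub": prover_id}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def issuer_answer(token: dict, proof_request: dict) -> bool:
    """Issuer validates the token the verifier forwards and answers yes/no."""
    expected = hmac.new(ISSUER_KEY, token["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    prover_id = json.loads(token["body"])["sub"]
    record = ISSUER_RECORDS.get(prover_id, {})
    return record.get(proof_request["attribute"], 0) >= proof_request["gte"]

token = issuer_issue_token("prover-123")            # prover obtains and confers this
print(issuer_answer(token, {"attribute": "age", "gte": 18}))  # True -> "yes" to verifier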
[00282] Secure Enclave
[00283] A challenging technical problem occurs in implementing a system where
the
verifier is able to ensure the prover has the correct credentials, while
preserving their
privacy. In some embodiments, a secure enclave based approach is described. In
order to
implement a secure enclave, Intel Software Guard Extensions™ (SGX) can be utilized,
among others.
[00284] There are different mechanisms for public key cryptography. The Intel SGX SDK,
for example, natively supports ECDH for key encapsulation and ECDSA for digital
signatures over the PRIME256V1 (also known as SECP256R1) curve. Other approaches,
such as Schnorr signatures, would serve just as well for a digital signature scheme.
256-bit base fields may potentially provide sufficient security.
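As a minimal sketch of these primitives, assuming the Python cryptography package as a stand-in for the SGX SDK, the fragment below performs ECDH key agreement (key encapsulation) and an ECDSA signature over SECP256R1 (PRIME256V1).

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a P-256 key pair.
enclave_key = ec.generate_private_key(ec.SECP256R1())
client_key = ec.generate_private_key(ec.SECP256R1())

# ECDH key encapsulation: both sides derive the same symmetric key material.
shared = enclave_key.exchange(ec.ECDH(), client_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                   info=b"kem").derive(shared)       # can key the AES-GCM channel below

# ECDSA over the same curve for digital signatures.
message = b"attested response"
signature = enclave_key.sign(message, ec.ECDSA(hashes.SHA256()))
enclave_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))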
[00285] For symmetric cryptography, the Intel SGX SDK™ natively supports 128-bit
AES-GCM. This primitive can be used for authenticated encryption. It remains to be seen if
larger AES key sizes are necessary; in that case, Galois/Counter Mode cannot be used.
[00286] Hashing can be performed using SHA-2, as this is natively supported by the Intel
SGX™ SDK. As it produces 256-bit outputs, it would also be useful in case of a migration to
larger AES key sizes, both as a means of key derivation and as a MAC building block.
[00287] The secure enclave approach improves computational efficiency and
minimizes a
trusted computing base, rendering it more amenable to formal analysis. The
verifier may
include a verify oracle, which is a trusted software/hardware component hosted
by an
untrusted third party. It is allowed to view a prover's attributes in the
clear and attest that
they satisfy a certain predicate queried by the verifier.
[00288] An example registration protocol is provided as follows. First, a prover generates
their public key. The issuer hands the prover a random string r; the prover generates sk_P'
and computes pk_P = f(sk_P) for sk_P = (sk_P', r) and a common-knowledge collision-resistant
function f. In order for the registration to be accepted, the prover should prove to the issuer
in zero knowledge that it knows a corresponding sk_P. The (semi-honest) issuer's
contribution to key generation is to keep a malicious prover from stealing credentials
belonging to a previously revealed private key.
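A minimal sketch of this registration step follows, assuming SHA-256 as the common-knowledge collision-resistant function f and eliding the zero-knowledge proof of knowledge of sk_P; the names are illustrative.

import os, hashlib

def issuer_start_registration() -> bytes:
    """Issuer hands the prover a random string r."""
    return os.urandom(32)

def prover_generate_keypair(r: bytes) -> tuple[bytes, bytes]:
    """Prover derives sk_P = (sk_P', r) and pk_P = f(sk_P)."""
    sk_p_prime = os.urandom(32)
    sk_p = sk_p_prime + r
    pk_p = hashlib.sha256(sk_p).digest()
    return sk_p, pk_p

r = issuer_start_registration()
sk_p, pk_p = prover_generate_keypair(r)
# The prover would now prove, in zero knowledge, knowledge of sk_P such that
# f(sk_P) = pk_P before the issuer accepts the registration.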
[00289] In regard to credential subletting, it may be beneficial for the issuer to require
that the prover incorporate some important secret about their account (not even
known by the issuer) into the private key, such that the secret can be inferred from the
private key. This will discourage provers from sharing credentials with one
another. Alice
may be willing to let Bob use some credential issued to her by her bank, but
not be
comfortable giving him complete control over her bank account.
[00290] Another possible technique to approach this is to issue credentials to specific
devices, using private keys that the device can create for an application and use to sign
on the application's behalf, without ever sharing the key with the application.
[00291] An example issuer protocol is described:
[00292] The issuer generates a signature on the prover's attributes using an
arbitrary
signature scheme that is secure against existential forgery. For the
construction to be
secure, the signature should also involve a description of the token's data
structure.
[00293] More formally, the prover and the issuer agree on a string a_P representing the
prover's attributes. The issuer knows the prover as the owner of pk_P, satisfying a_P. The
issuer sends back to the prover a signature σ_I = sig(sk_I; pk_P || a_P).
[00294] It is not strictly speaking necessary for the prover to have a public
key at all.
However, if the issuer enforces limits on how often it would register a new
public key for a
client, provers will be discouraged from subletting their credentials to one
another. This
stands in opposition to keyless credentials, where disclosing the secrets to a
credential
doesn't cost the original owner anything.
[00295] An example protocol for showing verification by the oracle is
provided.
[00296] Let the prover and the verifier both trust a verification oracle known by the key
pair (sk_O, pk_O).
[00297] The verifier chooses a random challenge c and sends it to the prover. A proof
request predicate P is agreed upon. The prover composes the string
d = (pk_I || sk_P || a_P || σ_I || c || P) and sends enc(pk_O; d) to the oracle.
[00298] The oracle decrypts d and checks that the following propositions are satisfied:
sigver(pk_I; σ_I; f(sk_P) || a_P)
[00299] P(pk_I, a_P)
[00300] In case of successful verification, the oracle outputs σ_O = sig(sk_O; c || P);
otherwise it outputs ⊥.
[00301] The verifier only needs to check that sigver(pk_O; σ_O; c || P) holds.
[00302] Note that for propositions P that do not depend on anything outside a_P
(e.g., time), there is no freshness requirement; therefore, the challenge c can simply be
taken to be the empty string in such cases.
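The fragment below sketches the oracle's check under simplifying assumptions: Ed25519 stands in for the generic signature scheme, SHA-256 stands in for f, decryption of d is elided, and the predicate P is a plain callable over the attributes (the issuer-identity component of P is omitted). All names and data values are illustrative.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

oracle_key = Ed25519PrivateKey.generate()           # (sk_O, pk_O)
issuer_key = Ed25519PrivateKey.generate()           # (sk_I, pk_I)

def oracle_check(pk_issuer, sigma_i: bytes, sk_p: bytes, a_p: bytes,
                 c: bytes, predicate):
    """Check sigver(pk_I; sigma_I; f(sk_P) || a_P) and P, then sign c || P."""
    try:
        pk_issuer.verify(sigma_i, hashlib.sha256(sk_p).digest() + a_p)
    except InvalidSignature:
        return None                                  # stands in for the ⊥ output
    if not predicate(a_p):
        return None
    return oracle_key.sign(c + b"P:age>=18")         # sigma_O = sig(sk_O; c || P)

sk_p = b"prover-secret-key-material"
a_p = b"age=27"
sigma_i = issuer_key.sign(hashlib.sha256(sk_p).digest() + a_p)   # issuer credential
sigma_o = oracle_check(issuer_key.public_key(), sigma_i, sk_p, a_p,
                       b"challenge", lambda a: int(a.split(b"=")[1]) >= 18)
# The verifier then only needs sigver(pk_O; sigma_O; c || P):
oracle_key.public_key().verify(sigma_o, b"challenge" + b"P:age>=18")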
[00303] To examine the security of the approach, the following collusion scenarios are
considered:
[00304] Malicious issuer and prover to break soundness: This attack can be trivially
mounted and, in some embodiments, there is no attempt to prevent it. The issuer can
always issue bogus, adaptively chosen, untruthful credentials for an accomplice prover. In
practice, such a problem is best thwarted by strong and transparent authentication and
KYC practices by issuers, as well as careful choice of trusted issuers by verifier
consortiums based on thorough vetting.
[00305] Malicious issuer and verifier to break zero-knowledge: Zero-knowledge in this
context means that an adversary controlling all issuers and verifiers cannot pinpoint which
of the trusted issuers implied by the query issued the credential in use, nor which of the
credentials obtained from that issuer it is. For this, the analysis makes use of the CCA2
property of the encryption scheme used in Acquire Proof queries.
[00306] More formally, consider the following game, where the adversary is
allowed to
make polynomially many of the following queries, interleaved with polynomial
computations:
[00307] Create Honest Oracle: Generate (sk_O, pk_O) and add pk_O to the set O_honest known to
the adversary.
[00308] Confer Credential: Send (σ_I = sig(sk_I; pk_P || a_P), pk_I) for arbitrary a_P and arbitrary key
pairs (sk_I, pk_I) and (sk_P, pk_P).
[00309] Request Challenge: Send arbitrary pk_O ∈ O_honest, P and c to the challenger. The
challenger picks a random element d from the set
D = { (pk_I || sk_P || a_P || σ_I || c || P) | P(pk_I, a_P) = ⊤ } and
sends enc(pk_O; d) back to the adversary.
[00310] The adversary wins if D is non-empty and he can guess the value of d with non-
negligible advantage over a random choice.
[00311] A simulation argument naturally follows from this intuition and is therefore
omitted.
[00312] Malicious prover to break soundness: The adversary is allowed polynomially
many queries from the following list, arbitrarily interleaved with one another and with
polynomial-time computations.
[00313] Create Honest Issuer: Create a new key pair (sk_I, pk_I) and add pk_I to the set I_honest
available to the adversary.
[00314] Create Honest Oracle: Create a new key pair (sk_O, pk_O) and add pk_O to the set
O_honest available to the adversary.
[00315] Initiate Registration: Receive a random string r from an honest
issuer.
[00316] Finish Registration: Send (r, pk_P, π) to an honest issuer that has sent r in a past
Initiate Registration query. If π non-interactively proves knowledge of sk_P' such that
pk_P = f(sk_P', r), the issuer will later accept Acquire Credentials queries from the adversary.
[00317] Finish Honest Registration: Create an honest prover to respond to an already
initiated registration process. sk_P will be hidden from the adversary, but pk_P will be known
and added to the set P_honest.
[00318] Acquire Credentials: Acquire σ_I = sig(sk_I; pk_P || a_P) for arbitrary a_P, with the additional
requirement that pk_P has already been registered with the owner of sk_I. Add (pk_I, a_P) to the
set A.
[00319] Acquire Proof: Submit enc(pk_O; d), for an arbitrary but well-formed d,
to an honest oracle with key pk_O and acquire the output σ_O.
[00320] Acquire Honest Proof Request: Send arbitrary (P, c, pk_O) to an honest prover and
receive enc(pk_O; d) if the prover has a credential attesting to P, and ⊥ otherwise. Add c to
the set C_outsourced.
[00321] Forge: The adversary outputs some σ_O, and the game ends. She wins if:
[00322] sigver(pk_O; σ_O; c || P) for some c and P;
[00323] c ∉ C_outsourced;
[00324] pk_O ∈ O_honest;
[00325] ∄(pk_I, a_P) ∈ A: pk_I ∈ I_honest, P(pk_I, a_P);
[00326] ∀pk_I ∉ I_honest, a_P: ¬P(pk_I, a_P).
[00327] There are no queries regarding corrupted or corruptible Issuers and
Oracles
since such parties can be simulated by the adversary herself.
[00328] In FIG. 30, the issuer computing system signs attributes with a cryptographic
technique, and the verifier computing system sends the prover computing device a challenge and
proof request.
[00329] In response, the prover computing device sends the encrypted credential, challenge
and proof request to a master verifier computing device. The master verifier signs the
challenge and proof request and returns the signature to the verifier computing device.
[00330] This approach, while requiring additional infrastructure relative to
the approach of
FIG. 29, satisfies many of the conditions for an ideal verification. The
issuer computing
system does not obtain further information (e.g., the identity of the
verifier) from the
verification event.
[00331] FIG. 31 is a state diagram of a verify oracle, according to some
embodiments.
The verify oracle can be implemented using software, hardware, or a
combination thereof.
For example, the states may be transitioned through the use of a processor
configured to
transition between one or more states, and to perform steps described below in

conjunction with machine-interpretable instruction sets.
[00332] A Verify Oracle supports three states:
[00333] Blank: At this state, only the initRemoteAttestation call would be accepted. Then,
the first remote attestation message will be generated and the enclave goes to the Remote
Attestation state.
[00334] Remote Attestation: At this state, the enclave accepts either a reset
call or a
finishRemoteAttestation call. Upon a reset call, the enclave clears all of its
state data, as if
it were killed and reloaded. Upon a finishRemoteAttestation call, the enclave
consumes a
Remote Attestation challenge message. The enclave produces a Remote
Attestation
message 3, generates the necessary key pairs and outputs the Remote
Attestation
message and the public keys. If any of this fails, it performs a reset
operation.
[00335] Ready: This is the state wherein the enclave can actually evaluate
policies over
attributes. It can receive a checkProofRequest call, or a reset call.
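A minimal sketch of this three-state machine is shown below; the call names mirror the text, while the message contents, key generation, and policy evaluation are placeholders only.

from enum import Enum, auto

class State(Enum):
    BLANK = auto()
    REMOTE_ATTESTATION = auto()
    READY = auto()

class VerifyOracle:
    def __init__(self):
        self.state = State.BLANK

    def reset(self):
        """Clear all state data, as if the enclave were killed and reloaded."""
        self.state = State.BLANK

    def init_remote_attestation(self) -> bytes:
        if self.state is not State.BLANK:
            raise RuntimeError("only accepted in the Blank state")
        self.state = State.REMOTE_ATTESTATION
        return b"remote-attestation-msg1"            # placeholder message

    def finish_remote_attestation(self, challenge_msg: bytes) -> bytes:
        if self.state is not State.REMOTE_ATTESTATION:
            raise RuntimeError("only accepted in the Remote Attestation state")
        try:
            # consume the challenge, generate key pairs, build msg3 (placeholders)
            self.state = State.READY
            return b"remote-attestation-msg3"
        except Exception:
            self.reset()                             # "if any of this fails, reset"
            raise

    def check_proof_request(self, request: bytes) -> bool:
        if self.state is not State.READY:
            raise RuntimeError("only accepted in the Ready state")
        return True                                   # placeholder policy evaluation

oracle = VerifyOracle()
msg1 = oracle.init_remote_attestation()
msg3 = oracle.finish_remote_attestation(b"challenge-from-attestation-service")
assert oracle.check_proof_request(b"policy-bytes")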
[00336] Trust by Provers and Verifiers is assumed in all the previously described models
as a common reference. Also, for obvious performance concerns, it is vital to be able to
perform Remote Attestation on an enclave non-interactively. As such, the enclave's host
can perform a publicly verifiable remote attestation with its enclave and publish the
transcript of it. In order to do so, she may employ the Fiat-Shamir heuristic using the
collision-resistant function H(.) modeled as a Random Oracle. If the Remote Attestation
Verifier would normally use a probabilistic polynomial-time algorithm m2 ← A(m1; r) to
generate the second message, in this scenario the second message would be derived
through m2 ← A(m1; H(m1)).
[00337] A proof request can be defined in accordance with variations of the
following
examples.
[00338] The language describing policies should be simple to interpret so as
not to
expose the system to security risks.
[00339] In order to prevent the execution from leaking information about the
attributes,
the language should preclude programs with data-dependent access patterns,
runtime and
execution paths. Here, a C-like language called StraightTalk is described as
an example,
and it is only capable of describing straight-line programs:
[00340]
(policy) ::= (token-definition) (statement)* (expression)
(token-definition) ::= (token) '(' (variable-definition)* ')'
(variable-definition) ::= (type) (identifier-list) ';'
(identifier-list) ::= (identifier)
                    | (identifier-list) ',' (identifier)
(type) ::= (basic-type)
         | (basic-type) '[' (integer) ']'
(basic-type) ::= 'unsigned' 'int' '[' (integer) ']'
              | 'int' '[' (integer) ']'
              | 'float'
(statement) ::= (variable-definition)
             | (expression) ';'
(argument-list) ::= ε
                  | (nonempty-argument-list)
(nonempty-argument-list) ::= (expression)
                          | (nonempty-argument-list) ',' (expression)
(expression) ::= (expression) '=' (expression)
              | (expression) (binary-operator) (expression)
              | (unary-operator) (expression)
              | (function-like-operator) '(' (argument-list) ')'
              | '(' (expression) ')'
              | (string)
              | (base64)
              | (identifier) '[' (integer) ']'
              | (identifier) '[' (integer) ']' '[' (integer) ']'
              | (identifier)
              | (number)
(unary-operator) ::= ...
[00341]
(binary-operator) ::= '==' | '+' | ...
[00342]
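To illustrate the straight-line evaluation model (not the actual compiled form of StraightTalk), the sketch below executes a fixed sequence of statements with no data-dependent branching and returns a single boolean; the policy representation and attribute names are assumptions for illustration.

from typing import Dict, List, Tuple

# Each statement is (target, operator, operand_a, operand_b); operands name
# attributes, earlier targets, or literal integers.
Policy = List[Tuple[str, str, str, str]]

def _value(env: Dict[str, int], operand: str) -> int:
    return env[operand] if operand in env else int(operand)

def evaluate(policy: Policy, attributes: Dict[str, int]) -> bool:
    """Execute every statement unconditionally; no branches, fixed runtime."""
    env = dict(attributes)
    for target, op, a, b in policy:
        x, y = _value(env, a), _value(env, b)
        env[target] = {"+": x + y, "-": x - y, ">=": int(x >= y),
                       "==": int(x == y), "&&": int(bool(x) and bool(y))}[op]
    return bool(env[policy[-1][0]])

# "age >= 18 && country_code == 124" flattened into straight-line form:
policy = [("t0", ">=", "age", "18"),
          ("t1", "==", "country_code", "124"),
          ("t2", "&&", "t0", "t1")]
print(evaluate(policy, {"age": 27, "country_code": 124}))   # True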
[00343] FIG. 32 is a system diagram providing additional detail in the context
of a verifier
hosted enclave, according to some embodiments. As shown in FIG. 32, the
verifier
enclave stores a secret key which is utilized in a flow of signed messages.
The key
encapsulation process, in various embodiments, includes 2-way or 1-way
authenticated
public key encryption.
[00344] FIG. 33 is a simplified diagram providing additional detail in the context of a
verifier hosted enclave, according to some embodiments. In FIG. 33, the verifier receives
the proof request and the proofs directly from the prover or prover device, and the secure
enclave transmits a proof verification message to the verifier.
[00345] In this example, the secure enclave is adapted for processing
encrypted
credentials, challenges, and proof requests. The secure enclave can be a
processor or a
secured memory location that is configured for maintaining a verifier secret
key utilized to
generate a first signed message.
[00346] The verifier computing device receives, from a prover computing
device, a
second signed message including at least an enclosed issuer signed message
representing one or more encrypted containers storing at least one or more
characteristics
of an identity profile of a prover entity along with a message authentication
code based at
least on the proof request data structure.
[00347] The verifier computing device then transmits the second signed
message, the
proof request data structure, and the one or more encrypted containers to the
secure
enclave.
[00348] The verifier computing device then receives a response data message
from the
secure enclave indicative of whether all of the one or more logical conditions
were satisfied
by the at least one or more characteristics of the identity profile of the
prover entity. In
some embodiments, the secure enclave is configured to provide publicly
verifiable remote
attestation with a verifiable threat model and a verifiable proof of security.
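A compact sketch of the enclave-side handling is shown below, assuming the message authentication code over the proof request data structure is an HMAC under a key shared with the enclave; container decryption, verification of the enclosed issuer signature, and the condition checks themselves are elided as placeholders, and all names are illustrative.

import hmac, hashlib, json

def enclave_handle(second_signed_message: dict, proof_request: dict,
                   encrypted_containers: list, mac_key: bytes) -> bool:
    """Accept only if the MAC binds the proof request; remaining checks are elided."""
    expected = hmac.new(mac_key,
                        json.dumps(proof_request, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, second_signed_message["mac"]):
        return False
    # Placeholder: decrypt the containers, verify the enclosed issuer signature,
    # and evaluate each logical condition against the recovered characteristics.
    return True

mac_key = b"key-shared-with-the-enclave"
request = {"conditions": [{"attribute": "age", "gte": 18}]}
message = {"mac": hmac.new(mac_key, json.dumps(request, sort_keys=True).encode(),
                           hashlib.sha256).hexdigest()}
print(enclave_handle(message, request, [], mac_key))   # True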
[00349] A remote attestation protocol involves a zero knowledge proof with a
prover and
a verifier, the enclave being the prover. A direct run of this protocol by
both Identity
Brokerage parties (prover and verifier) may compromise efficiency. Therefore,
a
mechanism is implemented using the Fiat-Shamir heuristic, and the enclave's
maintainer is
configured to run an instance of remote attestation in a publicly verifiable
manner.
[00350] Instead of using actual random inputs, the remote attestation verifier
(the
enclave's maintainer) replaces every randomness with the output of a
pseudorandom
function applied to the entire protocol transcript up until that moment, and
an arbitrary
initial nonce. Thus, by presenting the transcript of this protocol instance,
the enclave's
maintainer can efficiently convince the identity brokerage parties that the
enclave is a
trustworthy one.
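The sketch below illustrates the derandomization: each random input the remote attestation verifier would normally draw is replaced by a hash (here SHA-256) of an arbitrary initial nonce and the transcript so far, so anyone holding the transcript can recompute and check the run. The names are illustrative.

import hashlib

def transcript_randomness(nonce: bytes, transcript: list[bytes], nbytes: int = 32) -> bytes:
    """Derive the 'random' input for the next verifier message from the transcript."""
    h = hashlib.sha256(nonce)
    for message in transcript:
        h.update(message)
    return h.digest()[:nbytes]

nonce = b"arbitrary-initial-nonce"
transcript = [b"m1: enclave attestation message"]
r = transcript_randomness(nonce, transcript)
# m2 <- A(m1; r) is then computed with this derived r instead of true randomness,
# so the published transcript is publicly re-checkable.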
[00351] In some embodiments, the verifier enclave or a third party hosted system tracks
and records transcripts of an exchange, which are exposed to the public. For example, it may
be the responsibility of a verifier computing system to run a remote attestation protocol
with its enclave once, whereby the enclave communicates its public key, which is then
stored on a transcript exposure module, which may be hosted by any one of the
computing systems associated with any one of the parties or by a third party hosted
system. In order to establish the honesty of the transcript, all the randomness used in the
verifier's cryptography is to be created using a pseudorandom function (hash, block
cipher, etc.) involving all or some of the information available to the verifier's computing
device at the time of a credential validation transaction.
[00352] The secure enclave processor maintains a verification transcript in relation to its
own credentials; as the enclave is wholly trusted by both the prover and the verifier, it
should itself be strongly vetted.
[00353] Chip manufacturers provide mechanisms to verify an enclave involving multi-
round interactive protocols. Remote attestation is a protocol based on bilinear group
signatures, whereby an enclave proves to a third party that it is running on a legitimate
Intel SGX platform, and that it is running a particular program.
[00354] FIG. 34 is a method diagram providing an example issuer sequence where
the
prover computing system has a corresponding key pair, according to some
embodiments.
[00355] FIG. 35 is a method diagram providing an example verification sequence, where
the prover computing system has a corresponding key pair, according to some
embodiments.
[00356] FIG. 36 is a method diagram providing an example issuer sequence where
the
prover computing system does not have a corresponding key pair, according to
some
embodiments.
[00357] FIG. 37 is a method diagram providing an example verification
sequence, where
the prover computing system does not have a corresponding key pair, according
to some
embodiments.
[00358] FIG. 38 is a system diagram providing an example verification system having a
third party hosted enclave, according to some embodiments.
[00359] FIG. 39 is an example C-based proof request description language,
according to
some embodiments. An example proof request is shown in FIG. 39, and other
policies are
possible. In some embodiments, the policies can be compiled from a simple C-
like
language only capable of writing straight-line non-branching programs.
[00360] In some embodiments, aspects of the present application may provide electronic
functionality for customers (persons) to store their personal information securely in an
electronic vault, to share information with another person or entity in a secure and private
manner, and to grant access to another person or entity; for example, for estate planning, a
customer can give specific access to a will or other documents to lawyers or other family
members.
[00361] In some embodiments, similar to a safety deposit box, the system may
give the
user the ability to store and access sensitive personal information such as
[00362] Government: Drivers License, Health Care, Nexus, Passport
[00363] Health: Health records
[00364] Personal: Will, Insurance Information, Pension
[00365] Home: Home info, House Title
[00366] Car: Car info, Car Ownership, Digital Car Keys
[00367] And the like.
[00368] The embodiments of the devices, systems and methods described herein
may be
implemented in a combination of both hardware and software. These embodiments
may
be implemented on programmable computers, each computer including at least one

processor, a data storage system (including volatile memory or non-volatile
memory or
other data storage elements or a combination thereof), and at least one
communication
interface.
[00369] Program code is applied to input data to perform the functions
described herein
and to generate output information. The output information is applied to one
or more output
devices. In some embodiments, the communication interface may be a network
communication interface. In embodiments in which elements may be combined, the

communication interface may be a software communication interface, such as
those for
inter-process communication. In still other embodiments, there may be a
combination of
communication interfaces implemented as hardware, software, and combination
thereof.
[00370] Throughout the foregoing discussion, numerous references will be made
regarding servers, services, interfaces, portals, platforms, or other systems
formed from
computing devices. It should be appreciated that the use of such terms is
deemed to
represent one or more computing devices having at least one processor
configured to
execute software instructions stored on a computer readable tangible, non-
transitory
medium. For example, a server can include one or more computers operating as a
web
server, database server, or other type of computer server in a manner to
fulfill described
roles, responsibilities, or functions.
[00371] The technical solution of embodiments may be in the form of a software
product.
The software product may be stored in a non-volatile or non-transitory storage
medium,
which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a
removable hard disk. The software product includes a number of instructions
that enable a
computer device (personal computer, server, or network device) to execute the
methods
provided by the embodiments.
[00372] The embodiments described herein are implemented by physical computer
hardware, including computing devices, servers, receivers, transmitters,
processors,
memory, displays, and networks. The embodiments described herein provide
useful
physical machines and particularly configured computer hardware arrangements.
[00373] Applicant notes that the described embodiments and examples are
illustrative
and non-limiting. Practical implementation of the features may incorporate a
combination
of some or all of the aspects, and features described herein should not be
taken as
indications of future or existing product plans. Applicant partakes in both
foundational and
applied research, and in some cases, the features described are developed on
an
exploratory basis.
[00374] Although the embodiments have been described in detail, it should be
understood that various changes, substitutions and alterations can be made
herein.
[00375] Moreover, the scope of the present application is not intended to be
limited to the
particular embodiments of the process, machine, manufacture, composition of
matter,
means, methods and steps described in the specification.
[00376] As can be understood, the examples described above and illustrated are

intended to be exemplary only.

Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.


Title Date
Forecasted Issue Date Unavailable
(22) Filed 2019-07-24
(41) Open to Public Inspection 2020-01-24

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $100.00 was received on 2023-06-05


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2024-07-24 $100.00
Next Payment if standard fee 2024-07-24 $277.00

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee $400.00 2019-07-24
Maintenance Fee - Application - New Act 2 2021-07-26 $100.00 2021-07-12
Maintenance Fee - Application - New Act 3 2022-07-25 $100.00 2022-05-25
Maintenance Fee - Application - New Act 4 2023-07-24 $100.00 2023-06-05
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
ROYAL BANK OF CANADA
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Document Description   Date (yyyy-mm-dd)   Number of pages   Size of Image (KB)
Cover Page 2020-01-10 2 65
Abstract 2019-07-24 1 16
Description 2019-07-24 62 2,632
Claims 2019-07-24 5 154
Drawings 2019-07-24 39 1,397