Patent 3073549 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent: (11) CA 3073549
(54) English Title: METHODS AND SYSTEMS FOR SECURE DATA COMMUNICATION
(54) French Title: PROCEDES ET SYSTEMES POUR UNE COMMUNICATION DE DONNEES SECURISEE
Status: Granted
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 9/28 (2006.01)
  • H04W 12/041 (2021.01)
  • H03M 7/00 (2006.01)
  • H04L 9/30 (2006.01)
(72) Inventors :
  • KUANG, RANDY (Canada)
(73) Owners :
  • QUANTROPI INC. (Canada)
(71) Applicants :
  • QUANTROPI INC. (Canada)
(74) Agent: SMART & BIGGAR LP
(74) Associate agent:
(45) Issued: 2021-06-08
(86) PCT Filing Date: 2018-10-23
(87) Open to Public Inspection: 2019-05-02
Examination requested: 2020-02-14
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/CA2018/051339
(87) International Publication Number: WO2019/079890
(85) National Entry: 2020-02-14

(30) Application Priority Data:
Application No. Country/Territory Date
15/796,577 United States of America 2017-10-27
62/662,819 United States of America 2018-04-26

Abstracts

English Abstract


A computer-implemented method, which comprises: receiving an input message
comprising N-bit input segments, N
being an integer greater than one; converting the N-bit input segments into
corresponding N-bit output segments using a 2N-by-2N
one-to-one mapping stored in a non-transitory storage medium; and generating
an output message comprising the N-bit output segments.
Also, a computer-implemented method for a recipient to validate a message
received from a sender, the message including a first part
and a second part. This method comprises receiving a token from a witnessing
entity; obtaining a first data element by joint processing
of the first part of the message and the token; obtaining a second data
element by joint processing of the second part of the message
using a key associated with the sender; and validating the message by
comparing the first and second data elements.



French Abstract

La présente invention concerne un procédé mis en œuvre par ordinateur, qui consiste : à recevoir un message d'entrée comprenant des segments d'entrée à N bits, N étant un nombre entier supérieur à un ; à convertir les segments d'entrée à N bits en segments de sortie à N bits correspondants à l'aide d'un mappage 2N-par-2N un-à-un stocké dans un support d'informations non transitoire ; et à générer un message de sortie comprenant les segments de sortie à N bits. L'invention concerne également un procédé mis en œuvre par ordinateur permettant à un destinataire de valider un message reçu d'un émetteur, le message comprenant une première partie et une seconde partie. Ce procédé consiste à recevoir un jeton d'une entité témoin ; à obtenir un premier élément de données par traitement conjoint de la première partie du message et du jeton ; à obtenir un second élément de données par traitement conjoint de la seconde partie du message à l'aide d'une clé associée à l'expéditeur ; et à valider le message par comparaison des premier et second éléments de données.

Claims

Note: Claims are shown in the official language in which they were submitted.


CLAIMS
1. A communication system, comprising:
- a first apparatus comprising a first processing entity coupled to a first
memory; and
- a second apparatus comprising a second processing entity coupled to a
second memory;
- wherein the first and second apparatuses communicate over a communication
channel;
- wherein the first memory stores first computer-readable instructions and
a first mapping
between 2N possible input indexes and 2N possible output indexes, N being an
integer
greater than one;
- wherein the second memory stores second computer-readable instructions and a
second
mapping between 2N possible input indexes and 2N possible output indexes, the
second
mapping being derivable from the first mapping;
- wherein the first processing entity is configured for:
- obtaining a first bit stream;
- subdividing the first bit stream into a plurality of N-bit first input
segments;
- for each of the N-bit first input segments, determining a first input
index as a value
represented by the N bits of a particular one of the N-bit first input
segments,
determining a first output index based on the first input index and the first
mapping,
and setting bits of a corresponding N-bit first output segment so as to
represent the
value of the first output index; and
- causing a second bit stream formed using each corresponding N-bit first
output
segment to be transmitted over the communication channel;
- wherein the second processing entity is configured for:
- receiving the second bit stream;
- subdividing the second bit stream into a plurality of N-bit second input
segments;
- for each of the N-bit second input segments, determining a second input
index as a
value represented by the N bits of a particular one of the N-bit second input
segments, determining a second output index based on the second input index
and
the second mapping, and setting bits of a corresponding N-bit second output
segment so as to represent the value of the second output index so as to
recover a
corresponding one of the N-bit input segments; and
- outputting a third bit stream formed using each corresponding N-
bit second output
segment.
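For illustration only, here is a minimal Python sketch of the encode/decode round trip described in claim 1. The value of N, the mapping itself and the transport over the channel are placeholders, and the second mapping is taken to be the inverse (transpose) of the first, as contemplated in claims 3 and 4; this is a reading of the claim, not the claimed implementation.

    import secrets

    N = 4                               # system size; the claim only requires N > 1
    SIZE = 1 << N                       # 2**N possible indexes

    # First mapping: a one-to-one mapping between 2**N input and 2**N output indexes.
    first_mapping = list(range(SIZE))
    secrets.SystemRandom().shuffle(first_mapping)

    # Second mapping, derivable from the first (here: its inverse, i.e. the transpose
    # of the corresponding permutation matrix).
    second_mapping = [0] * SIZE
    for i, o in enumerate(first_mapping):
        second_mapping[o] = i

    def recode(bit_stream: str, mapping: list) -> str:
        """Subdivide the stream into N-bit segments and remap each segment's value."""
        out = []
        for k in range(0, len(bit_stream), N):
            index = int(bit_stream[k:k + N], 2)            # input index = value of the segment
            out.append(format(mapping[index], f"0{N}b"))   # output segment encodes the output index
        return "".join(out)

    first_stream = "0001010101011100"
    second_stream = recode(first_stream, first_mapping)    # transmitted over the channel
    third_stream = recode(second_stream, second_mapping)   # recovered by the second apparatus
    assert third_stream == first_stream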
2. The communication system defined in claim 1, wherein the first
processing entity is further
configured to send the second mapping to the second apparatus for storage in
the second
memory.
3. The communication system defined in claim 2, wherein the first
processing entity is further
configured to derive the second mapping from the first mapping.
4. The communication system defined in claim 3, wherein when the first
mapping is represented
as a first matrix and the second mapping is represented as a second matrix,
the first and second
matrices being transposes of one another.
5. The communication system defined in claim 1, wherein the first
processing entity is further
configured to send a transmitted mapping to the second apparatus, the
transmitted mapping
being the first mapping or the second mapping.
6. The communication system defined in claim 5, wherein the first
processing entity is further
configured to send a flag to the second apparatus, the flag indicating whether
the transmitted
mapping is the first mapping or a transpose of the first mapping.
7. The communication system defined in claim 6, wherein the second
processing entity is further
configured to process the flag and to store in the second memory the
transmitted mapping
when the flag indicates that the transmitted mapping is the transpose of the
first mapping.
8. The communication system defined in claim 6, wherein the second
processing entity is further
configured to process the flag and to store in the second memory a transpose
of the transmitted
mapping when the flag indicates that the transmitted mapping is the first
mapping.
9. The communication system defined in claim 5, wherein the first
processing entity is further
configured for encrypting data indicative of the transmitted mapping prior to
sending it to the
second apparatus.
10. The communication system defined in claim 9, wherein to encrypt the data
indicative of the
transmitted mapping prior to sending it to the second apparatus, the first
processing entity is
configured for using a private key of a private key / public key pair to
encrypt the data
indicative of the transmitted mapping, the private key being uniquely known to
the first
apparatus, the public key being made available to the second apparatus.
11. The communication system defined in claim 5, wherein the first and second
processing
entities are configured for carrying out a handshaking protocol with one
another to securely
transmit the transmitted mapping from the first apparatus to the second
apparatus.
12. The communication system defined in any one of claims 1 to 11, wherein the
first mapping is
stored in the first memory by a permutation matrix, wherein determining the
first output index
based on the first input index and the first mapping comprises creating a 2N-length input vector
with all zeroes except for a "1" in a single position corresponding to the first input index,
multiplying the permutation matrix and the input vector to obtain a 2N-length output vector
having all zeroes except for a "1" in a single position, wherein the first output index is set to
equal the position in which the "1" appears in the output vector.
13. The communication system defined in any one of claims 1 to 12, wherein the
first and second
mappings are each one-to-one mappings.
14. The communication system defined in any one of claims 1 to 13, wherein for
plural successive
N-bit first input segments, the first processing entity is configured for:
- determining a first input index for each of the first N-bit input
segments;
- creating a succession of arrays of size 2N, each array corresponding to
one of the N-bit first
input segments and having all zeroes except for a "1" in a single position
corresponding to
the first input index;
- applying each of the arrays to inputs of a 2N-input, 2N-output switch
fabric that implements
the mapping, thereby to obtain a succession of output arrays at the outputs of
the switch
fabric, each of the arrays having all zeroes except for a "1" in a single
position;
- setting the output index for each of the N-bit output segments to
correspond to the position
in which the "1" appears in a corresponding one of the output arrays.
15. The communication system defined in any one of claims 1 to 14, wherein N
is at least as great
as 14.
16. The communication system defined in any one of claims 1 to 15, the first
processing entity
being further configured for encrypting the first bit stream with an encryption
key before said
subdividing, the second processing entity being further configured for
decrypting the third bit
stream with a decryption key corresponding to the encryption key, thereby to
recover the first
bit stream.
17. The communication system defined in any one of claims 1 to 16, the first
processing entity
being further configured for encrypting the second bit stream with an encryption
key before said
causing, the second processing entity being further configured for decrypting
the received bit
stream with a decryption key corresponding to the encryption key prior to said
subdividing.
18. The communication system defined in any one of claims 1 to 17,
wherein the first and
second apparatuses are mobile communication devices.
19. A method of implementing a secure network amongst a plurality of
communication
devices, comprising:
- determining a mapping between 2N possible input indexes and 2N possible
output
indexes, where N > 1;
- securely embedding the mapping within a memory of each of the
communication
devices;
- configuring one or more processors in each of the communication devices
to execute a
data encoding method when transmitting data to other ones of the communication
devices; and
- configuring one or more processors in each of the communication devices
to execute a
data decoding method when processing data received from other ones of the
communication devices;
- the data encoding and decoding methods comprising:
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input
segments;
- producing a plurality of N-bit output segments corresponding to
respective ones of the
N-bit input segments, wherein a particular one of the N-bit output segments is
produced
from a particular one of the N-bit input segments by:
- determining an input index as a value represented by the N bits of the
particular one
of the N-bit input segments;
- determining an output index based on the input index and the mapping; and
- setting the bits of the particular one of the N-bit output segments so as
to represent
the value of the output index; and
- outputting an output bit stream formed using the N-bit output segments.
20. A data protection method comprising:
- storing in a computer memory a permutation mapping that maps each
possible 2N-bit
input array having a 1 in a single position and Os in each of the 2N-1 other
positions to
a respective 2N-bit output array having a 1 in a corresponding single position
and Os in
each of the 2N-1 other positions, N being an integer greater than one;
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input segments
stored in the
computer memory;
- producing, by one or more processors, a plurality of N-bit output
segments
corresponding to respective ones of the N-bit input segments, the N-bit output
segments
stored in the computer memory,
- wherein a particular one of the N-bit output segments is produced from a
particular one
of the N-bit input segments by the one or more processors by:
- determining a 2N-bit input array from the N bits of the particular one of
the N-bit
input segments;
- determining a 2N-bit output array based on the 2N-bit input array and
the permutation mapping; and
- setting the bits of the particular one of the N-bit output segments so as
to represent
the value of the 2N-bit output array; and
- outputting an output bit stream formed using the N-bit output segments.
21. The method defined in claim 20, wherein determining the 2N-bit input
array from the N
bits of the particular one of the N-bit input segments comprises obtaining a
decimal value M
corresponding to the particular one of the N-bit input segments and setting
the (M+1)th bit of the
input array to 1.
22. The method defined in claim 20 or 21, wherein determining the 2N-bit
output array from
the 2N-bit input array and the permutation mapping comprises carrying out a
multiplication
between the 2N-bit input array and a 2N-by-2N sparse matrix corresponding to
the permutation
mapping.
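Read together, claims 20 to 22 describe an array formulation that can be sketched in a few lines of Python; the mapping values below are arbitrary examples chosen for illustration, not data from the patent.

    N = 3
    SIZE = 1 << N

    # Illustrative permutation mapping, stored as a sparse 2**N-by-2**N matrix of 0s and 1s:
    # row r has its single 1 in column perm[r].
    perm = [5, 0, 3, 7, 2, 6, 1, 4]
    matrix = [[1 if c == perm[r] else 0 for c in range(SIZE)] for r in range(SIZE)]

    def encode_segment(segment_bits: str) -> str:
        m = int(segment_bits, 2)
        input_array = [0] * SIZE
        input_array[m] = 1                      # claim 21: set the (M+1)th bit of the input array
        # Claim 22: multiply the 2**N-bit input array by the sparse matrix.
        output_array = [sum(input_array[r] * matrix[r][c] for r in range(SIZE))
                        for c in range(SIZE)]
        # Set the N output bits to represent the value of the output array.
        return format(output_array.index(1), f"0{N}b")

    print(encode_segment("010"))                # '011': input index 2 maps to perm[2] == 3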
23. The method defined in claim 22, wherein the output bit stream comprises
a decoded version
of data encoded in the input bit stream, the method further comprising
encoding the data in the
input bit stream using a transpose of the sparse matrix.
24. The method defined in any one of claims 20 to 23, wherein the input bit
stream comprises
encoded data, wherein the output bit stream comprises a decoded version of the
data that was
encoded in the input bit stream, and wherein the permutation mapping is a
transpose of a
permutation mapping used to encode the data in the input bit stream.
25. The method defined in any one of claims 20 to 24, wherein the output
bit stream comprises
an encoded version of data in the input bit stream, further comprising sending
the permutation
mapping to a remote apparatus for decoding of the data in the output bit
stream.
26. The method defined in any one of claims 20 to 25, wherein the output
bit stream comprises
an encoded version of data in the input bit stream, further comprising sending
the permutation
mapping or a transpose of the permutation mapping to a remote apparatus for
decoding of the data
in the output bit stream.
27. The method defined in claim 26, further comprising sending a flag to
the remote apparatus,
the flag indicating whether the mapping sent to the remote apparatus is the
permutation mapping
or a transpose of the permutation mapping.
28. The method defined in any one of claims 20 to 27, wherein determining
the 2N-bit output
array from the 2N-bit input array and the permutation mapping comprises
obtaining the 2N-bit
output array with all Os except for a 1 in a single position by combining the
2N-bit input array and
a matrix representation of the permutation mapping.
29. The method defined in any one of claims 20 to 28, further comprising:
- determining a respective equivalent input decimal value of each of a
plurality of successive
ones of the N-bit input segments;
- creating a succession of arrays of size 2N, each array corresponding to
one of the N-bit
input segments and having all Os except for a 1 in a single position
corresponding to the
respective input decimal value;
- applying each of the 2N-bit arrays to inputs of a 2N-input, 2N-output switch
fabric that
implements the permutation mapping, thereby to obtain a succession of 2N-bit
output arrays
at the outputs of the switch fabric, each of the arrays having all Os except
for a 1 in a single
position;
- setting each of the N-bit output segments to have a respective equivalent
decimal value
corresponding to the position in which the 1 appears in a corresponding one of
the output
arrays.
30. The method defined in any one of claims 20 to 29, wherein N is at least
as great as 9.
31. The method defined in claim 30, wherein N is not a power of 2.
32. The method defined in any one of claims 20 to 31, further comprising
changing the value
of N during operation.
33. A non-transitory computer-readable storage medium storing instructions
which, when
executed by at least one processor, cause the at least one processor to
execute a method that
comprises:
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input segments
stored in the
computer memory, N being an integer greater than one;
- producing, by one or more processors, a plurality of N-bit output
segments
corresponding to respective ones of the N-bit input segments, the N-bit output
segments
stored in the computer memory,
- wherein a particular one of the N-bit output segments is produced from a
particular one
of the N-bit input segments by the one or more processors by:
- determining a 2N-bit input array from the N bits of the particular one of
the N-bit
input segments;
- determining a 2N-bit output array based on the 2N-bit input array and a
stored
permutation mapping that maps each possible 2N-bit input array having a 1 in a
single position and Os in each of the 2N-1 other positions to a respective 2N-bit
output array having a 1 in a corresponding single position and Os in each of
the 2N-1 other positions; and
- setting the bits of the particular one of the N-bit output
segments so as to represent
the value of the 2N-bit output array; and
- outputting an output bit stream formed using the N-bit output
segments.
34. The non-transitory computer-readable storage medium defined in claim
33, wherein N is
at least as great as 9 and is not a power of 2.
35. The non-transitory computer-readable storage medium defined in claim 33
or 34, wherein
determining the 2N-bit input array from the N bits of the particular one of
the N-bit input segments
comprises obtaining a decimal value M corresponding to the particular one of
the N-bit input
segments and setting the (M+1)th bit of the input array to 1.
36. The non-transitory computer-readable storage medium defined in any one
of claims 33 to
35, wherein determining the 2N-bit output array from the 2N-bit input array
and the permutation
mapping comprises carrying out a multiplication between the 2N-bit input array
and a 2N-by-2N
sparse matrix corresponding to the permutation mapping.
37. The communication system defined in any one of claims 1 to 18, wherein
N is at least as
great as 9.
38. The communication system defined in claim 37, wherein N is not a power
of 2.
39. The communication system defined in claim 37, wherein the value of N is
changed
dynamically.
40. The communication system defined in any one of claims 1 to 18 or 37 to
39, wherein the
first mapping is stored in the first memory by a permutation matrix, wherein
determining the first
output index based on the first input index and the first mapping comprises creating a 2N-length
input vector with all zeroes except for a "1" in a single position corresponding to the first input
index, and carrying out a position switching operation based on the positions of the 1 values in the
permutation matrix to obtain a 2N-length output vector having all zeroes except for a "1" in a single
position, wherein the first output index is set to equal the position in which the "1" appears
in the output vector.
41. The method defined in any one of claims 20 to 32, wherein determining the 2N-bit output
array from the 2N-bit input array and the permutation mapping comprises carrying out a position
switching operation based on the positions of the 1 values in a 2N-by-2N sparse matrix representing
the permutation mapping.
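The position-switching operation of claims 40 and 41 can be pictured as follows, in a sketch with toy values: rather than carrying out the full vector-matrix multiplication, the single 1 of the input vector is moved directly to the column recorded for that row of the permutation matrix.

    N = 3
    SIZE = 1 << N
    perm = [5, 0, 3, 7, 2, 6, 1, 4]   # for each row, the column holding that row's single 1

    def switch_position(first_input_index: int) -> int:
        # 2**N-length input vector: all zeroes except a "1" at the first input index.
        input_vector = [0] * SIZE
        input_vector[first_input_index] = 1
        # Position switching based on where the 1 values sit in the permutation matrix.
        output_vector = [0] * SIZE
        output_vector[perm[input_vector.index(1)]] = 1
        return output_vector.index(1)           # the first output index

    assert switch_position(2) == 3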

Description

Note: Descriptions are shown in the official language in which they were submitted.


METHODS AND SYSTEMS FOR SECURE DATA COMMUNICATION
FIELD
The present invention relates in general to data protection and, in
particular, to data encoding
and decoding.
BACKGROUND
Current data encryption technologies rely on the solution of complex numerical
problems
that present a formidable challenge to solve. Yet, when armed with a "key" to
the solution, a
legitimate user can easily gain access to the original, unencrypted data. This
is the principle behind
technologies such as AES (Advanced Encryption Standard), according to which
data can be safely
transmitted in encrypted form. However, the security provided by AES and other
encryption
technologies lasts only as long as a malicious party that intercepts the
encrypted data does not have
enough computing power and enough target data available to actually solve the
problem without the
required key.
To hedge against the inevitable increases in computing power at the disposal
of malicious
parties worldwide (and which is poised to increase further still with the
advent of quantum
computers), those with a need for secure communications typically seek to
increase the complexity
of the numerical problems being presented for solution. However, one side
effect of this escalation in
problem complexity is that a legitimate user, i.e., one with the required key,
must also now expend
increasingly significant resources to protect and decrypt the data. Thus,
while the resources needed
by a legitimate user are still designed to be less than the resources needed
to solve the problem by
brute force, they present a non-negligible burden on various performance
parameters such as
throughput and energy consumption.
As such, a highly secure yet computationally economical data protection
solution would be
welcomed by the industry.
SUMMARY
According to a broad aspect, there may be provided a computer-implemented
method,
comprising:
- receiving an input message comprising N-bit input segments, N being an
integer greater
than one;
- converting the N-bit input segments into corresponding N-bit output
segments using a
2N-by-2N one-to-one mapping stored in a non-transitory storage medium; and
- generating an output message comprising the N-bit output segments.
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method that comprises:
- receiving an input message comprising N-bit input segments, N being an
integer greater
than one;
- converting the N-bit input segments into corresponding N-bit output
segments using a
2N-by-2N one-to-one mapping stored in a non-transitory storage medium; and
- generating an output message comprising the N-bit output segments.
According to a further broad aspect, there may be provided a computer-
implemented method,
comprising:
- receiving an input message comprising N-bit input segments, N being an
integer greater
than one;
- converting the N-bit input segments into corresponding N-bit output
segments using a
2N-by-2N one-to-one mapping stored in a non-transitory storage medium;
- reassembling the N-bit output segments into M-bit input segments, M being
an integer
greater than one and different from N;
- converting the M-bit input segments into corresponding M-bit output
segments using a
2M-by-2M one-to-one mapping stored in a non-transitory storage medium; and
- generating an output message comprising the M-bit output segments.
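A minimal Python sketch of this two-stage variant, assuming N = 4 and M = 8 and randomly generated mappings; handling of a stream whose length is not a multiple of both segment sizes is left out for brevity.

    import random

    rng = random.Random(2018)                    # illustrative seed

    def make_mapping(bits: int) -> list:
        mapping = list(range(1 << bits))         # 2**bits-by-2**bits one-to-one mapping
        rng.shuffle(mapping)
        return mapping

    def recode(stream: str, bits: int, mapping: list) -> str:
        return "".join(format(mapping[int(stream[i:i + bits], 2)], f"0{bits}b")
                       for i in range(0, len(stream), bits))

    N, M = 4, 8                                  # the aspect only requires M != N, both > 1
    map_n, map_m = make_mapping(N), make_mapping(M)

    message = "0101011110000101"                 # 16 bits, divisible by both N and M
    stage1 = recode(message, N, map_n)           # N-bit input segments -> N-bit output segments
    stage2 = recode(stage1, M, map_m)            # reassembled into M-bit segments and converted
    print(stage2)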
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method that comprises:
- receiving an input message comprising N-bit input segments, N being an
integer greater
than one;
- converting the N-bit input segments into corresponding N-bit output
segments using a
2N-by-2N one-to-one mapping stored in a non-transitory storage medium;
- reassembling the N-bit output segments into M-bit input segments, M being
an integer
greater than one and different from N;
- converting the M-bit input segments into corresponding M-bit output
segments using a
2M-by-2M one-to-one mapping stored in a non-transitory storage medium; and
- generating an output message comprising the M-bit output segments.
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method that comprises encoding
input segments of data
into output segments of data of the same size using a one-to-one mapping of
dimensionality greater
than the size of the segments.
According to a further broad aspect, there may be provided a method comprising
encoding
input segments of data into output segments of data of the same size using a
one-to-one mapping of
dimensionality greater than the size of the segments.
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method that comprises using a
permutation mapping to
encode individual first sets of bits of an input bit stream into corresponding
same-sized second sets of
bits of an output bit stream, the permutation mapping being such that, for
most of the possible
corresponding pairs of first and second sets, the relative proportion of ones
and zeroes is different
between the two sets in the pair.
According to a further broad aspect, there may be provided a computer-
implemented method
for a recipient to validate a message received from a sender, the message
including a first part and a
second part, the method comprising:
- receiving a token from a witnessing entity;
- obtaining a first data element by joint processing of the first part of
the message and the
token;
- obtaining a second data element by joint processing of the second part of
the message
using a key associated with the sender; and
- validating the message by comparing the first and second data elements.
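Purely to make the recipient-side flow concrete, the Python sketch below stands in for the unspecified "joint processing" steps with SHA-256 and HMAC-SHA-256; those primitives, and the argument names, are assumptions made for illustration and are not defined by this aspect.

    import hashlib
    import hmac

    def validate(first_part: bytes, second_part: bytes,
                 token: bytes, sender_key: bytes) -> bool:
        # First data element: joint processing of the first part of the message and the
        # token (illustrated here as a hash over their concatenation).
        first_element = hashlib.sha256(first_part + token).digest()
        # Second data element: joint processing of the second part of the message using
        # a key associated with the sender (illustrated here as an HMAC).
        second_element = hmac.new(sender_key, second_part, hashlib.sha256).digest()
        # The message is validated by comparing the first and second data elements.
        return hmac.compare_digest(first_element, second_element)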
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method for a recipient to
validate a message received
from a sender, the message including a first part and a second part, wherein
the method comprises:
- receiving a token from a witnessing entity;
- obtaining a first data element by joint processing of the first part of
the message and the
token;
- obtaining a second data element by joint processing of the second part of
the message
using a key associated with the sender; and
- validating the message by comparing the first and second data elements.
According to a further broad aspect, there may be provided a computer-
implemented method
executed by a witnessing entity, comprising:
- obtaining a code and a signature from a sender;
- consulting a first database to obtain entropy data associated with the
code;
- generating a token from the entropy data;
- storing in a second database an association between the token and the
signature; and
- transmitting a message comprising the token in response to receipt of a
request
identifying at least the signature.
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium comprising computer-readable instructions which, when
executed by a
processor, cause the processor to carry out a method that comprises:
- obtaining a code and a signature from a sender;
- consulting a first database to obtain entropy data associated with the
code;
- generating a token from the entropy data;
- storing in a second database an association between the token and the
signature; and
- transmitting a message comprising the token in response to receipt of a
request
identifying at least the signature.
According to a further broad aspect, there may be provided a computer-
implemented method
of synchronizing an Internet-enabled appliance with an application device,
comprising:
- at the application device:
- generating an encoding mapping based on (i) a previous encoding mapping
used to
communicate previously with the appliance and (ii) a seed;
- transmitting the encoding mapping to the appliance over a local
connection that
does not traverse the Internet;
- at the appliance:
- receiving the encoding mapping over the local connection; and
- using the encoding mapping to subsequently secure data exchanged with
the
application device over the Internet.
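As a sketch only: one way to realize "generating an encoding mapping based on a previous encoding mapping and a seed" is to hash the pair into PRNG state and shuffle, as below; this construction, and the variable assignment standing in for the local (non-Internet) transfer, are assumptions rather than part of the described aspect.

    import hashlib
    import random

    def derive_mapping(previous_mapping: list, seed: bytes) -> list:
        # Mix the previous encoding mapping and the seed into deterministic PRNG state.
        digest = hashlib.sha256(bytes(previous_mapping) + seed).digest()
        rng = random.Random(digest)
        mapping = list(previous_mapping)
        rng.shuffle(mapping)
        return mapping

    # Application device side: derive the new mapping from the previous one and a seed.
    previous_mapping = list(range(16))           # 2**N entries for an illustrative N = 4
    seed = b"\x07\x2a"
    new_mapping = derive_mapping(previous_mapping, seed)

    # Appliance side: the mapping received over the local connection secures
    # subsequent data exchanged with the application device over the Internet.
    appliance_mapping = new_mapping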
According to a further broad aspect, there may be provided an IoT system
comprising:
- an application device comprising processing and networking capabilities
configured
for:
- generating an encoding mapping based on (i) a previous encoding mapping
used to
communicate previously with the appliance and (ii) a seed; and
- transmitting the encoding mapping to the appliance over a local
connection that
does not traverse the Internet; and
- an appliance connected to the application device over the Internet and
comprising
processing and networking capabilities configured for:
- receiving the encoding mapping over the local connection; and
- using the encoding mapping to subsequently secure data exchanged with
the
application device over the Internet.
According to a further broad aspect, there may be provided a data protection
method
comprising:
- storing in a computer memory a permutation mapping that maps each
possible 2N-bit input array having a 1 in a single position and Os in each of the 2N-1
other
positions to a respective 2N-bit output array having a 1 in a corresponding
single
position and Os in each of the 2N-1 other positions, N being an integer
greater than
one;
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input segments
stored in
the computer memory;
- producing, by one or more processors, a plurality of N-bit output segments
corresponding to respective ones of the N-bit input segments, the N-bit output
segments stored in the computer memory,
- wherein a particular one of the N-bit output segments is produced from a
particular one of the N-bit input segments by the one or more processors by:
- determining a 2N-bit input array from the N bits of the particular
one of the
N-bit input segments;
- determining a 2N-bit output array based on the 2N-bit input array and
the
permutation mapping; and
- setting the bits of the particular one of the N-bit output segments
so as to
represent the value of the 2N-bit output array; and
- outputting an output bit stream formed using the N-bit output segments.
According to a further broad aspect, there may be provided a non-transitory
computer-
readable storage medium storing instructions which, when executed by at least
one
processor, cause the at least one processor to execute a method that
comprises:
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input
segments stored in
the computer memory, N being an integer greater than one;
- producing, by one or more processors, a plurality of N-bit output segments
corresponding to respective ones of the N-bit input segments, the N-bit output
segments stored in the computer memory,
- wherein a particular one of the N-bit output segments is produced from a
particular one of the N-bit input segments by the one or more processors by:
- determining a 2N-bit input array from the N bits of the particular
one of the
N-bit input segments;
- determining a 2N-bit output array based on the 2N-bit input array and a
stored permutation mapping that maps each possible 2N-bit input array
having a 1 in a single position and Os in each of the 2N-1 other positions to
a respective 21"-bit output array having a 1 in a corresponding single
position and Os in each of the 2N1 other positions; and
- setting the bits of the particular one of the N-bit output segments
so as to
represent the value of the 2N-bit output array; and
- outputting an output bit stream formed using the N-bit output segments.
According to a further broad aspect, there may be provided a method of
implementing a
secure network amongst a plurality of communication devices, comprising:
- determining a mapping between 2N possible input indexes and 2"
possible output
indexes, where N> 1;
- securely embedding the mapping within a memory of each of the
communication
devices;
- configuring one or more processors in each of the communication
devices to
execute a data encoding method when transmitting data to other ones of the
communication devices; and
- configuring one or more processors in each of the communication
devices to
execute a data decoding method when processing data received from other ones
of the communication devices;
- the data encoding and decoding methods comprising:
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit
input segments;
- producing a plurality of N-bit output segments corresponding
to respective
ones of the N-bit input segments, wherein a particular one of the N-bit
output segments is produced from a particular one of the N-bit input
segments by:
- determining an input index as a value represented by
the N bits of
the particular one of the N-bit input segments;
- determining an output index based on the input index and the
mapping; and
- setting the bits of the particular one of the N-bit
output segments
so as to represent the value of the output index; and
- outputting an output bit stream formed using the N-bit
output segments.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects will become apparent from the following description of
embodiments in conjunction with the accompanying drawings, wherein:
Fig. 1 is a block diagram showing a sending device, a receiving device and a
channel.
Fig. 2 is a flowchart showing steps in a data encoding process, in accordance
with a non-
limiting embodiment.
Fig. 3 is a conceptual block diagram, showing various steps in the data
encoding process, in
accordance with a non-limiting embodiment.
Figs. 4A to 4C are block diagrams showing various ways to carry out one of the
steps in Fig.
2, in accordance with various non-limiting embodiments.
Fig. 5 is a block diagram, showing various steps in the data encoding process,
in the context
of a specific example of a non-limiting embodiment.
Figs. 6A-6D are block diagrams showing distribution of the encoding and
decoding
mappings to the sending and receiving devices, respectively, in accordance
with various non-limiting
embodiments.
Figs. 7A-7C illustrate how certain embodiments may be applied to an application
loading
scenario for an operating system (OS).
Fig. 8 schematically shows the relative size of memory space and entropy
space.
Fig. 9 is a block diagram showing communication from one peer to another over
the Internet.
Fig. 10 is a schematic block diagram showing generation and exchange of an
initial secret
between two peers.
Fig. 11 is a block diagram showing communication from one peer to another over
the Internet
and over an alternate channel.
Fig. 12 is a block diagram further detailing the communication in Fig. 11,
showing specific
messages exchanged over the Internet and over the alternate channel.
Fig. 13 is a message flow diagram showing a message exchange among two peers,
Alice and
Bob, and a server.
Figs. 14A and 14B illustrate processes for generating permutation matrices
from an initial
secret.
Fig. 15 is a message flow diagram showing a message exchange among two peers,
Alice and
Bob, and a server that issues seeds, resulting in Alice and Bob both having
access to an encryption
key QK.
Fig. 16 is a variant of Fig. 15, in which the seeds are broadcast by the
server.
Figs. 17A-17C are variants of communication between two peers.
Fig. 18 is a schematic diagram that conceptually illustrates transactions in a
blockchain.
Fig. 19 is a schematic block diagram that illustrates a process of encoding an
initial secret
with a blockchain transaction as well as a process for retrieving the initial
secret from the transaction.
Fig. 20 is a schematic block diagram illustrating secure file storage on a
blockchain.
Fig. 21 is a table showing a relationship between the proportion of zeroes and
ones in an 8-bit
input segment and the likelihood of occurrence of an input segment having such
proportion.
Fig. 22 is a network diagram of an Internet-of-things (IoT) environment.
Fig. 23 is a schematic diagram showing an appliance in the IoT environment.
Fig. 24 is a block diagram illustrating a broker implementing a subscribe-
publish
communication paradigm in the IoT environment.
Fig. 25 is a schematic diagram showing a PUBLISH message containing data, for
use in the
IoT environment.
Fig. 26 is a schematic diagram illustrating transmission of a PUBLISH message
containing
data in the IoT environment.
Fig. 27 is a schematic diagram illustrating transmission of a PUBLISH message
containing a
seed in the IoT environment.
Fig. 28 is a block diagram of an application device in the IoT environment.
Fig. 29 is a schematic block diagram illustrating transmission of a message
containing a seed,
from the application device to the appliance in the IoT environment.
Fig. 30 is a schematic diagram showing a PUBLISH message containing a seed,
for use in
the IoT environment.
Fig. 31 is a schematic diagram showing an alternate message format for use in
an alternate
IoT environment.
Fig. 32 is a block diagram showing a security enhancement involving dynamic
spreading
using a stream cipher.
Fig. 33A is a block diagram of a communication system with the capability of
building an
Entropy History Table by two peers.
Fig. 33B is a flowchart illustrating a preliminary phase in the context of
building an Entropy
History Table by two peers.
Fig. 34 is a flowchart illustrating an update phase in the context of building
an Entropy
History Table by two peers.
Fig. 35 shows a system for communicating a witnessed message.
Fig. 36 is a diagram showing generation of a witnessed message.
Fig. 37 is a diagram showing receipt of a witnessed message.
Fig. 38 shows a system for publishing a witnessed blockchain transaction.
Fig. 39 is a diagram showing generation of a witnessed blockchain transaction.
Fig. 40 is a block diagram showing a security enhancement involving multi-
level
quantropization.
Fig. 41 is a block diagram showing a security enhancement involving bit
position shuffling.
Fig. 42 is a block diagram showing a security enhancement involving block
quantropization.
Fig. 43 is a block diagram showing a security enhancement involving dynamic
spreading.
The drawings are to be considered as illustrative of certain examples and are
not to be
considered limiting.
DETAILED DESCRIPTION
Fig. 1 shows a communication system that may be configured to incorporate an
example non-
limiting embodiment of the present invention. Specifically, the system
includes a sending device 110,
a receiving device 120 and a channel 130 between the sending device 110 and
the receiving device
120. The sending device 110 and the receiving device 120 may each be
implemented within a mobile
phone, smartphone, laptop, desktop, on-board vehicle computer, internet-
enabled appliance, etc.
The sending device 110 may release a signal 140 to the receiving device 120
over the channel
130. The signal 140 may be a modulated signal that carries digital data that
has been encoded by
elements of the sending device 110. The channel 130 may be any suitable
communication channel
physically implemented using any suitable medium including one or more of
wired, RF, fiber optic,
free-space optical, acoustic, etc. The channel 130 may be implemented
logically as a path over one or
more data networks including but not limited to an intranet, a virtual private
network and the
Internet.
The receiving device 120 may also release a signal (not shown) containing data
for the
sending device 110 onto the channel 130 such that both the sending device 110
and the receiving
device 120 are in bidirectional communication over the channel 130. However,
to simplify the
description herein below, a description of the generation of a signal by the
receiving device 120 for
transmission towards the sending device 110 will not be provided, as it would
closely match the
description of the generation of the signal 140.
In one embodiment, the sending device 110 uses electricity (e.g., DC or AC
electricity from a
battery, generator, inverter, power lines, photovoltaic cell or electrical
transformer) to effect a
transformation or change on an input signal carrying an input bit stream, in
order to produce an
output signal carrying an output bit stream. To this end, the sending device
110 includes a processing
entity 112 and a memory 114 that stores computer-readable instructions. The
memory 114 may be
implemented in a variety of ways, such as a magnetic disk, or solid state
memory, and may include
flash memory, SRAM, DRAM, phase-change memory and the like. The processing
entity 112 is
configured to execute the computer-readable instructions in the memory 114. In
doing so, the
processing entity 112 of the sending device 110 causes the sending device 110
to implement a variety
of processes, including data processes and control processes. Examples of a
processing entity may
include electronic components such as a computer processor on a microchip, or
a quantum computer.
An example of a process that may be implemented by the processing entity 112
includes a data
encoding process, described herein below in further detail. The data encoding
process may be
encoded as a subset 116 of the computer-readable instructions in the memory
114. An input/output
(I/O) 118 enables the processing entity 112 to communicate externally and may
include a screen
(e.g., touchscreen), keyboard/mouse, network interface device/card (e.g., to
support NFC, WiFi,
Ethernet or cellular / GSM / LTE communications), USB port(s), etc.
For its part, the receiving device 120 also uses electricity to effect a
transformation or change
on an input signal carrying an input bit stream, in order to produce an output
signal carrying an
output bit stream. To this end, the receiving device 120 includes a processing
entity 122 and a
memory 124 that stores computer-readable instructions. The processing entity
122 is configured to
execute the computer-readable instructions in the memory 124. In doing so, the
processing entity 122
of the receiving device 120 causes the receiving device 120 to implement a
variety of processes,
including data processes and control processes. Examples of a processing
entity may include
electronic components such as a computer processor on a microchip, or a
quantum computer. An
example of a process that may be implemented by the processing entity 122
includes a data decoding
process, described herein below in further detail. The data decoding process
may be encoded as a
subset 126 of the computer-readable instructions in the memory 124. An
input/output (I/O) 128
enables the processing entity 122 to communicate externally and may include a
screen (e.g.,
touchscreen), keyboard/mouse, network interface device/card (e.g., to support
NFC, WiFi, Ethernet
or cellular / GSM / LTE communications), USB port(s), etc.
Components of the sending device 110 (receiving device 120), such as the
processing entity
112 (122) and the memory 114 (124) and various other input and other output
devices, may be
connected by various interconnects, such as a bus. Such interconnects may
include a Peripheral
Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus
(USB), Firewire (IEEE
1394), an optical bus structure, and the like. In another embodiment,
components of the sending
device 110 and the receiving device 120 may be interconnected by a network.
For example, the
memory 114 (124) may be comprised of multiple physical memory units located in
different physical
locations interconnected by a network. Moreover, depending on the exact device
configuration and
type, the memory 114 (124) may be volatile (such as RAM, for example), non-
volatile (such as
ROM, flash memory, etc., for example) or some combination of the two. The
computer readable
instructions stored in the memory 114 (124) may be implemented as program
modules, such as
functions, objects, Application Programming Interfaces (APIs), data
structures, and the like, that
perform particular tasks or implement particular abstract data types.
Typically, the functionality of
the computer readable instructions may be combined or distributed as desired
in various
environments.
It should be appreciated that in some cases, the sending device 110 and the
receiving device
120 may be the same computer, the channel 130 may be internal circuitry of the
computer, the
processing entities 112, 122 may be the same processing entity, the memories
114, 124 may be a
common computer memory, and the subsets 116, 126 may be different subsets of
the common
computer memory.
It should also be appreciated that the term "bit stream" generally encompasses
a sequence of
binary digits. A "bit stream" may include a sequence of bits that represents a
stream of data,
transmitted continuously over a communications path. In some cases, the term
"bit stream" may
encompass a bit string or an ordered collection of bits that may be encoded
into, or reside on, a
computer-readable medium, such as the bits that define a file stored on a
solid state or magnetic
drive. In some cases, a "bit stream" may include part of a digital message that may be formatted as
that may be formatted as
an email, a text message, an instant message, an image, a video, a document in
a format such as
Word, PDF, etc. or in any other suitable way.
Data Encoding Process
A non-limiting embodiment of the data encoding process that may be implemented
by the
processing entity 112 of the sending device 110 in certain non-limiting
embodiments will now be
described with further reference to the flowchart in Fig. 2 and the conceptual
block diagram in Fig. 3,
for a received / obtained input bit stream 310.
At step 210 of the data encoding process, which may be caused by execution of
the
computer-readable instructions 116, the processing entity 112 determines a
system size. This system
size is denoted N and may be stored as a constant or variable in the memory
114. N is an integer at
least as great as 1 and typically would be higher for added security. For
example, in some
embodiments, N may be 2, 4, 6, 8 or at least as great as 10, while in other
embodiments, N may be at
least as great as 20. In specific example embodiments, which are non-limiting,
N may be 12, 13 or
14. There is no particular limitation on the magnitude of N, although it can
be expected that greater
security will be achieved with a higher value of N. Also, the system size may
be fixed, or may be
dynamically changed over time as will be described later on.
At step 220 of the data encoding process, which may be caused by execution of
the
computer-readable instructions 116, the processing entity 112 encodes input
segments of data into
output segments of data of the same size using a one-to-one mapping of
dimensionality greater than
the size of each segment. In an embodiment, the processing entity 112 obtains
data indicative of a
one-to-one mapping between 2N possible input indexes and 2N possible output
indexes where, it is
recalled, N represents the system size. To disambiguate this mapping from
other mappings described
elsewhere, the mapping obtained at step 220 will be referred to herein as the
"encoding mapping".
The encoding mapping may be expressed in different ways. For example, the
encoding
mapping may be expressed as {P(x,y)}, which is a set of values at coordinates
P(x,y) that can be
stored in the memory 114, wherein for each x (between 0 and 2N-1) there is a
single y (also between
0 and 2N-1) such that P(x,y)=1, and wherein for each y there is a single x
such that P(x,y)=1, and
wherein P(x,y)=0 for all other combinations of x and y. The encoding mapping
may thus represent a
one-to-one association between each of the values from 0 to 2N-1 and another
(typically but not
necessarily) different value between 0 and 2N-1.
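To make {P(x,y)} concrete, the following Python sketch (with a toy system size) stores the mapping as a permutation array in which entry x gives the single y for which P(x,y)=1; the values are generated at random purely for illustration.

    import secrets

    N = 3                                   # toy system size for illustration
    SIZE = 1 << N                           # 2**N

    # perm[x] is the single y with P(x, y) = 1; every other P(x, y) is 0.
    perm = list(range(SIZE))
    secrets.SystemRandom().shuffle(perm)

    def P(x: int, y: int) -> int:
        return 1 if perm[x] == y else 0

    # One-to-one association: exactly one 1 per row (each x) and per column (each y).
    assert all(sum(P(x, y) for y in range(SIZE)) == 1 for x in range(SIZE))
    assert all(sum(P(x, y) for x in range(SIZE)) == 1 for y in range(SIZE))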
Conceptually, the encoding mapping takes the form of a 2N-by-2N matrix "P", where each row
and each column contains a single "1", and the rest of the matrix elements (of which there are
2N x 2N - 2N) are "0". Such a matrix may be referred to as a "binary permutation matrix", and
may be stored in the memory 114.
In yet another example, the encoding mapping may take the form of a switch
fabric input-
output correspondence table that associates each of 2N switch inputs to one of
2N switch outputs. The
switch fabric input-output correspondence table may be stored in the memory
114. Other ways of
representing the encoding mapping, such that it may be stored in the memory
114 and suitably
accessed and interpreted by the processing entity 112, are within the scope of
the present invention.
As with the system size (N), the encoding mapping may be fixed, or may be
dynamically changed
over time as will be described later on.
At step 230 of the data encoding process, which may be caused by execution of
the
computer-readable instructions 116, the processing entity 112 subdivides or
separates the input bit
stream 310 in Fig 3 into a plurality of input bit segments 320 in Fig 3, such
that each of the input bit
segments 320 is configured to have the system size, i.e., each of the input
bit segments 320 is N bits
long. This can be referred to as a disassembly process. It is noted that
whereas a customary way to
separate binary data is in groups of 8 bits (i.e., bytes), the value of N need
not be 8 or a multiple
thereof, and therefore the input bit segments 320 resulting from execution of
step 230 are not
necessarily a multiple of 8 bits long.
At step 240 of the data encoding process, which may be caused by execution of
the
computer-readable instructions 116, the processing entity 112 produces a
plurality of output bit
segments 330, where each of the output bit segments 330 corresponds to a
respective one of the input
bit segments 320. Each of the output bit segments 330 is configured to have
the system size and is
therefore N bits long, i.e., just like the input bit segments 320. Moreover,
the contents of a given one
of the output bit segments 330 is related to the contents of the respective
one of the input bit
segments 320 by the encoding mapping as will be described herein below.
Specifically, for a
particular N-bit input bit segment, an input index is determined as the
(decimal) value of the
particular N-bit input bit segment. Then, an output index is determined based
on the input index and
the encoding mapping. Finally, the corresponding N-bit output segment is set
to the binary
representation of this output index. This can be referred to as a
quantropization process.
There may be various ways to determine an output index based on an input index
and the
encoding mapping in a practical implementation. In one example, with reference
to Fig. 4A, if one
considers that the encoding mapping is represented by the set {P(x,y)}, a look-up table 402 may
store the various combinations of x and y that give P(x,y)=1. The look-up table 402 could be
stored in the memory 114, wherein each of 2N input indexes (the "x" column) is associated
with a single output
index (the "y" column). Thus, step 240 can involve accessing the look-up table
402 in the memory
114, based on an input index derived as the value of a particular N-bit input
bit segment, so as to
obtain an output index.
In another example, with reference to Fig. 4B, if one considers that the
encoding mapping is
represented by a 2N-by-2N binary permutation matrix P stored in the memory 114 (in this example,
114 (in this example,
N=3), and if one creates a vector "g- as the 1 x 2N (i.e., 1 x 8) vector that
contains a "1- element in
the column that represents the input index, i.e., the (decimal) value of a
particular N-bit input bit
segment, then step 240 can involve calling a subroutine that executes a vector-
matrix multiplication
between g and P. The subroutine may be stored as computer-readable
instructions stored in the
memory 114 and executable by the processing entity 112. The outcome of the
vector-matrix
multiplication will be a 1 x 2N (i.e., 1 x 8) vector "g*" that contains a "1"
element in the column that
represents the output index.
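The vector-matrix formulation of Fig. 4B can be sketched with NumPy as follows, using N=3 and an arbitrary binary permutation matrix standing in for the one shown in the figure.

    import numpy as np

    N = 3
    SIZE = 1 << N

    # A 2**N-by-2**N binary permutation matrix P (arbitrary example values).
    P = np.eye(SIZE, dtype=int)[[5, 0, 3, 7, 2, 6, 1, 4]]

    input_index = 2                          # decimal value of a particular N-bit input segment
    g = np.zeros(SIZE, dtype=int)
    g[input_index] = 1                       # 1 x 2**N vector with a single "1" at the input index

    g_star = g @ P                           # vector-matrix multiplication
    output_index = int(np.argmax(g_star))    # column in which the "1" appears in g*
    print(format(output_index, f"0{N}b"))    # the corresponding N-bit output segment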
In yet another example, with reference to Fig. 4C, a hardware or software
switch fabric 400
having 2N inputs 410 and 2N outputs 420 may be implemented within the sending
device 110 and the
encoding mapping may be represented as a switch fabric input-output
correspondence table 401 for
the switch fabric 400, as may be stored in the memory 114. The inputs 410
correspond to bit
positions in successive 2Nx1 input arrays (each containing only one "1"
element at the position of the
input index, the rest being zeroes) formed from corresponding N-bit input bit
segments 320. The
result is a succession of 2Nx1 output arrays at the outputs 420 of the switch
fabric 400. Each such
output array contains only one -1" element at the position of the output
index, the rest being zeroes.
The decimal value of the position of the output index is converted into binary
form, thus yielding the
corresponding one of the N-bit output bit segments 330.
The foregoing is repeated for multiple ones of the input bit segments 320 and
it will be
appreciated that the manipulations and calculations can be achieved highly
efficiently using a
computer, notably by the processing entity 112.
At step 250 of the data encoding process, which may be caused by execution of
the
computer-readable instructions 116, the processing entity 112 may concatenate
or combine multiple
(e.g., successive) ones of the output bit segments 330 into an output bit
stream 340, which may itself
then be organized into bytes (or any other convenient grouping) and sent out
onto the channel 130,
e.g., they may be carried by the signal 140 after modulation into a modulated
signal. The term
"modulated signal" may include a signal that has one or more of its
characteristics set or changed in
such a manner as to encode information in the signal.
Of course, additional encoding and encryption of the input bit stream 310
(i.e., prior to
execution of the data encoding process) and/or the output bit stream 340
(i.e., after execution of the
data encoding process) could be added to render the overall data encoding
procedure even more secure.
A specific non-limiting example of the data encoding process is now described
with
reference to Fig. 5. It is seen that an input data stream includes a sequence
of hex bytes whose values
are, from right to left, 57, 85, 33, 30, etc. The hex bytes can be broken down
into bits, thus creating
an input bit stream. The input bit stream is then broken down (disassembled)
into segments of size N,
where in this example, N=14. The first 14-bit input bit segment is
00010101010111, which has a
decimal value of 1367, the next 14-bit input bit segment is 00000011001110,
which has a decimal
value of 206, and so on. The encoding mapping is first applied to decimal
value 1367. In this case,
the encoding mapping is represented by the set {P(x,y)}, where P(x,y) = 1 for any x between 0 and 2N-1 (=16383), but for only one value of y for each such x; in all other cases
P(x,y)=0; and mutatis
mutandis for y and x. In this particular example, the relevant value of x is
1367, and it is noted that
P(1367,y) = 0 for all values of y except one. Let such value of y be 802.
Thus, P(1367,802) = 1.
Then, 802 is converted into a 14-bit binary value, which works out to
00001100100010.
It is seen that the input bit segment had seven "1"s and the output bit
segment only has four
"1"s. This shows that the encoding mapping does not merely represent a
permutation of the bits in
the input bit segment. In fact, a given density of "1"s in the input bit
segment may produce any given
density of "1"s in the corresponding output bit segment, depending on the
specifics of the encoding
mapping.
Next, the encoding mapping is applied to decimal value 206. Let P(206,y) = 0
for all values
of y except 14332. Thus, the value of 14332 is obtained as the "column" index for "row" index 206.
Then, the decimal value 14332 is converted into a 14-bit binary value, which
amounts to
11011111111100. It is seen that there is again a difference in the density of
the -1"s in the input bit
segment and in the output bit segment. The bits in the output bit segments may
then be concatenated
in various ways and grouped again in, say, hex bytes, in this case yielding,
from right to left, 22, 03,
FF, AD.
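The flow of this example can be sketched in Python as follows. Only the two index pairs quoted above (1367 to 802, and 206 to 14332) are populated; a real encoding mapping covers all 2^14 input indices, so the identity fallback below is an artifact of the sketch, not of the actual mapping.

    N = 14
    mapping = {1367: 802, 206: 14332}   # illustrative excerpt of the mapping where P(x, y) = 1

    def quantropize(bit_stream: str) -> str:
        out_segments = []
        for i in range(0, len(bit_stream), N):
            x = int(bit_stream[i:i + N], 2)        # decimal value of the 14-bit input segment
            y = mapping.get(x, x)                  # identity fallback, for the sketch only
            out_segments.append(format(y, f"0{N}b"))
        return "".join(out_segments)

    input_bits = "00010101010111" + "00000011001110"   # decimal 1367 and 206
    print(quantropize(input_bits))   # "00001100100010" followed by "11011111111100"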
The change in the bit pattern between a particular N-bit input bit segment and
the
corresponding N-bit output bit segment can be manifested, in a computer
system, as a difference in
voltage, current or other physical characteristics of the signal used to store
and/or transmit the bit
pattern in question.
As such, what has been described is a method of communication that uses a one-
to-one
mapping to encode individual sets of bits of an input bit stream into
corresponding same-sized sets of
bits of an output bit stream, wherein the relative proportion of ones and
zeroes in a given set of bits
of the input bit stream is often different from the relative proportion of
ones and zeroes in the
corresponding set of bits in the output bit stream.
More particularly, and with reference to Fig. 21, there is illustrated the
situation where N=8.
The relative proportion of ones and zeroes in an 8-bit input (or output) segment could range
from 0:8, 1:7, 2:6, 3:5, 4:4, 5:3, 6:2, 7:1 and 8:0, corresponding to a
"weight" of 0.000, 0.125, 0.250,
0.375, 0.500, 0.625, 0.750, 0.875 and 1.000. The number of combinations in
each case is shown in
the second column as 1, 8, 28, 56, 70, 56, 28, 8, 1, corresponding to a
probability of 1/256, 8/256,
etc. The total number of combinations is of course 2^8 = 256. Given an
arbitrary mapping, each input
port is equally likely to map to any output port. As such, if one randomly
selects an input-output pair,
the chance of them having the same weight is:
the probability of the input segment having weight 0.000 multiplied by the probability of the output segment having weight 0.000, plus the probability of the input segment having weight 0.125 multiplied by the probability of the output segment having weight 0.125, and so on for each of the remaining weights.
This results in a total probability of 0.077606, or 7.7606%, of an 8-bit input
segment
preserving its weight as it is mapped by a 256-by-256 mapping. Conversely,
this means that at least
90% of the possible input segments will have a different proportion of ones
and zeroes when one
looks at the output segment to which it is mapped. In other words, when the
size of the input and
output segments is 8, at least 90% of the possible corresponding pairs of
input and output segments
will have different relative proportions of ones and zeroes between the input
and output segments. Of
course, when N is higher, differences will arise in an even greater percentage
of cases. Clearly, the
permutation mapping P does not merely result in a permutation of the 8 bits in
the input segment.
Rather, the permutation mapping is of a dimensionality greater than the size
of the input and output
segments. In particular, when the input and output segments are of size N, the
permutation mapping
P can be of a size 2N by 2N, if not greater.
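As an illustrative check of the discussion above for N=8, the chance that a randomly selected input-output pair preserves its weight may be computed under the assumption that each weight occurs with probability equal to its number of combinations divided by 256 (a minimal Python sketch, using only the standard library):

    from math import comb

    N = 8
    total = 2 ** N
    # Sum, over each possible weight, of P(input has that weight) times
    # P(output has that weight), per the enumeration given above.
    p_same_weight = sum((comb(N, k) / total) ** 2 for k in range(N + 1))
    print(p_same_weight)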
In a still further example, with the 2N-by-2N mapping stored in the memory, an
ordered set
of input bits is obtained. The processing entity 112 breaks down
(disassembles) the ordered set of
input bits into a plurality of N-bit input segments. Then, N-bit output
segments corresponding to
respective ones of the N-bit input segments are produced by: (i) expanding each of the N-bit input segments into a 2N-bit input segment, (ii) applying the 2N-by-2N mapping to the 2N-bit input segment to determine a 2N-bit output segment, and (iii) compressing the 2N-bit output segment into an N-bit output
segment. Then an ordered set of output bits is formed using the resulting N-
bit output segments and
the ordered set of output bits can be released onto a physical medium or
stored in the memory.
Thus, the above has shown how N-bit input segments of an input message are
converted into
corresponding N-bit output segments of an output message using a 2N-by-2N one-
to-one mapping
stored in a non-transitory storage medium.
Data Decoding Process
The signal 140 travels along the channel 130 and reaches the receiving device
120. Certain
pre-processing may be required to account for channel loss, distortion and
other factors. Ultimately,
however, the receiving device 120 executes a data decoding process that is
virtually identical to the
data encoding process performed by the sending device 110, except that it is
performed with a
decoding mapping rather than with an encoding mapping. The decoding mapping is
the inverse of the
encoding mapping so as to recover the originally encoded data.
Thus, for example, where the encoding mapping is represented by {P(x,y)}, the decoding mapping may be represented by {Q(x,y)} = {P-1(x,y)} = {P(y,x)}. This can be referred to as a dequantropization process.
Alternatively, if the encoding mapping applied during the data encoding
process is
represented by a binary permutation matrix P, the decoding mapping to be
applied during the data
decoding process may be represented by Q = PT (the transpose of P). It is
noted that due to the
structure of P being a binary permutation matrix, PT x P = I. In other words,
if the N-bit input bit
segment to the data encoding process was an "unencoded bit segment" (with the
N-bit output bit
segment resulting from application of P to the unencoded bit segment being an "
encoded bit
segment"), and if the N-bit input bit segment to the data decoding process is
the encoded bit segment,
and furthermore if the binary permutation matrix Q applied by the data
decoding process is the
transpose of P, then the N-bit output bit segment resulting from the data
decoding process will be the
original, unencoded bit segment. That is, the original N-bit input bit
segments are recovered after the
decoding process. They can then be assembled into a string of bits that can
then be suitably resized
(e.g., as bytes).
Due to the relationship between P and Q (i.e., Q = PT), it follows that Q
could be used in the
data encoding process and P could be used in the data decoding process, since
QT = P. Thus, overall,
the encoding and decoding mappings may be represented as a pair of symmetric
permutation
matrices P, Q, each being the transpose of the other, one of which is used in
encoding and the other
of which is used in decoding. Alternatively, the encoding and decoding
mappings may be expressed
as a single binary permutation matrix B together with an additional indicator
(or "flag") being
provided to the sending device 110 and the receiving device 120 to instruct
one of the two devices to
use B and the other to use BT.
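As an illustrative check (a NumPy sketch in which a randomly generated permutation matrix stands in for P and N=4 is assumed), the properties described above, namely that applying P and then PT recovers every original segment and that PT x P = I, can be verified as follows:

    import numpy as np

    N = 4
    rng = np.random.default_rng(42)
    P = np.eye(2 ** N, dtype=np.uint8)[rng.permutation(2 ** N)]  # encoding mapping
    Q = P.T                                                      # decoding mapping

    def apply_mapping(index: int, M: np.ndarray) -> int:
        g = np.zeros(M.shape[0], dtype=np.uint8)
        g[index] = 1
        return int(np.argmax(g @ M))

    for x in range(2 ** N):
        assert apply_mapping(apply_mapping(x, P), Q) == x        # encode then decode recovers x
    print(np.array_equal(Q @ P, np.eye(2 ** N, dtype=np.uint8))) # True: PT x P = I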
It is seen that both the data encoding and decoding processes do not
significantly add to the
computational burden of the processing entities 112, 122 in the sending and
receiving devices 110,
120. This is due to the fact that the data encoding and decoding processes
involve operations such as
(i) index mapping from input index to output index or (ii) sparse vector-
matrix multiplication, either
of which may be efficiently handled by modern computer systems and processing
entities. As such,
CPU usage and energy costs of encoding and decoding may be kept low,
potentially resulting in
longer battery life, less latency and freeing up processing resources for
other tasks. This contrasts
with more complex numerical methods such as prime number factorization.
With reference to Figure 8, it is seen that for a given bit system size of N
bits, each set of N
original input bits can be said to reside in memory in a "bit space" 810 of
size N. The information
carried by these N original input bits can be said to exist in an "information
space" 820 of size 2'.

That is to say, N bits can be used to express 2N values or other information
elements. Now, it is noted
that the size of the corresponding encoding mapping (e.g., permutation matrix
or switch fabric input-
output correspondence table or look-up table) is 2N by 2N. In other words,
the dimensionality of the
encoding mapping is 2N. This means that, if one represents the encoding
mapping as a binary
permutation matrix, there are (2N)! (= 2N * (2N - 1) * (2N - 2) * ... * 3 * 2 * 1) different possible permutation
matrices, yet only one of them represents the selected encoding mapping being
used by the sending
device 110 for the particular set of N original input bits. An N-bit output
bit segment produced
through application of the selected encoding mapping can be said to exist in
an "entropy space" 830
that is so vast that, from a practical perspective, encryption is no longer
needed to guarantee security,
even for relatively small values of N. The process of converting an N-bit
input segment into an N-bit
output segment using an encoding mapping may be referred to as
"quantropization", and the inverse
process may be referred to as "dequantropization". The input-output
configuration of a particular 2N-
by-2N encoding mapping used in quantropization can be referred to as an
"entropy state", which can
be represented in some non-limiting embodiments as a states array S[2N], although
one may use a
multitude of other ways to express the encoding mapping.
For example, even in just a 12-bit system, a malicious third party would need
to guess which
of the 2^12 factorial possible permutation matrices was used (which is greater than 10^13019).
Furthermore, this assumes that the malicious party knows it is a 12-bit system
to begin with, which
knowledge might not be easily available to the malicious party. This may be
considered a
computationally intractable problem for many of today's systems, including
quantum computers, and
has the potential to dissuade malicious parties from trying to "hack" the
system. As such, while each
possible 2N input bit stream does indeed map deterministically to a single 2N
output bit stream, this
mapping does not arise from mere permutation of the bits in the input bit
stream; rather it results
from selection of an encoding mapping in a much larger space (i.e., the
selection of one 2Nx2N matrix
in the space of 2Nx2N matrices), which makes it more economical for a hacker
to "guess" the N-bit
value of the original input bit segment corresponding to an observed N-bit
output bit segment, than to
try to find the "correct" encoding mapping over any reasonable observation
period. Since "guessing"
by an outside party cannot be prevented even in a maximally secure system, one
can conclude that by
reducing a hacker's effectiveness to guessing, the level of data protection
provided by the presently
described embodiments is practically equivalent to the level of data
protection provided by a
maximally secure system, yet at a low computational cost.
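For instance, the magnitude quoted above for a 12-bit system can be estimated with a few lines of Python, using the log-gamma function rather than attempting to evaluate the factorial itself:

    import math

    N = 12
    # log10((2^N)!) computed via lgamma, since (4096)! is far too large to evaluate directly.
    digits = math.lgamma(2 ** N + 1) / math.log(10)
    print(f"(2^{N})! is roughly 10^{int(digits)}")   # about 10^13019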
This combination of low computational complexity, low latency and enhanced
security
against intrusion / hacking, provided by certain embodiments described herein,
may help achieve
high-bandwidth, real-time, quantum-secure communications.
Another way to describe the above is with reference to the conceptual diagram
of Figure 9,
which shows two peers, Alice and Bob, each of which can be a laptop,
smartphone, Internet-enabled
appliance, vehicle electronic control unit (ECU), web server, etc. Alice has
the role of the "sending
device" 110 of Fig. 1 and, through execution of the subset of instructions
116, implements a plurality
of functional modules (e.g., a disassembler 920A, a quantropizer 930A, a
packager 940A and a
control module 980A), which will be described herein below. Similarly, Bob has
the role of the
"receiving device" 120 of Fig. 1 and, through execution of the subset of
instructions 126, implements
a plurality of functional modules (e.g., a depackager 950B, a dequantropizer
960B, an assembler
970B and a control module 980B), which will be described herein below.
Specifically, Alice receives information segments (e.g., plain text input
bits), which are then
disassembled by the disassembler 920A into segments of a system size N (e.g.,
step 230 in Fig. 2).
The N-bit input segments are processed by the quantropizer 930A into
corresponding N-bit output
segments according to an encoding mapping (e.g., a permutation matrix P) of
size 2N by 2N (e.g., step
240 in Fig. 2). The resulting N-bit output segments are packaged into a
suitable format by a packager
940A (e.g., step 250 in Fig. 2), which may group N-bit output segments
together or break them down
further before transmission to Bob over a network 990 (e.g., the Internet).
Additionally, Bob includes
the depackager 950B, which depackages the received quantropized data into N-
bit input segments.
The dequantropizer 960B applies the decoding mapping (e.g., PT) to obtain N-bit output bit segments. These are then assembled into suitable information segments (e.g.,
plain text output bits)
by the assembler 970B.
Thus far, it has been assumed that Alice and Bob use matching encoding and
decoding
mappings. In this regard, control module 980A is responsible for determining
the encoding mapping
(e.g., permutation matrix P) to be used by the corresponding quantropizer 930A
(e.g., steps 210 and
220 in Fig. 2), whereas control module 980B is responsible for determining the
decoding mapping
(e.g., permutation matrix PT) to be used by the corresponding dequantropizer
960B. How the control
modules 980A, 980B determine and agree on the encoding and decoding mappings
to be used for
quantropization and dequantropization is now described in further detail.
Agreement on encoding / decoding mapping between sending and receiving devices

Recalling that Alice and Bob are mapped to Fig. 1 such that Alice is the
sending device
110 and Bob is the receiving device 120, the permutation matrix P is used by
the sending device 110
and the transpose of this matrix, namely PT, is used by the receiving device
120. However, since PT is
derivable from P, it is possible for both devices to be advised of just the
permutation matrix P as well
as to be advised of a flag (or other indicator) that tells the sending device
110 to use P in its data
encoding process (i.e., without transposing it), and that tells the receiving
device 120 to use PT in its
data decoding process. Of course, the reverse is also possible (i.e., using PT
in the data encoding
process and P in the data decoding process).
In the case of the encoding mapping being represented by a set {P(x,y)}, the
sending device
110 uses {P(x,y)} and the receiving device 120 uses {P-1(x,y)}. Again, since {P-1(x,y)} is derivable from {P(x,y)} (as it equals {P(y,x)}), it is possible for both devices to
be advised of just the set
{P(x,y)} as well as to be advised of a flag (or other indicator) that tells
the sending device 110 to use
{P(x,y)} in its data encoding process, and that tells the receiving device 120
to use {P(y,x)} in its
data decoding process. Of course, the reverse is also possible (i.e., using
{P(y,x)} in the data
encoding process and {P(x,y)} in the data decoding process).
The following sections describe various embodiments for informing the sending
device 110
and the receiving device 120 of the appropriate encoding or decoding mapping
to use. For the sake of
simplicity, the encoding mapping is represented by a binary permutation matrix
P and the decoding
mapping is represented by a binary permutation matrix PT; however, lookup tables or the previously described sets {P(x,y)}, {P(y,x)} could also have been used. Also, in each of
the below embodiments,
rather than transmitting the specific matrix (P or PT) to be used by a
particular recipient of the
transmission, it is possible to include a common permutation matrix (P) in the
transmission along
with a flag or other indicator that specifies whether the intent is to
transmit P or PT to the recipient.
First initialization method
In one embodiment, shown in Fig. 6A, the sending device 110 generates the
encoding
mapping P and stores it in the memory 114. The sending device may send PT to
the receiving device
120 in a signal 615 during an initialization phase. To this end, a handshaking
protocol may be
executed by the sending device 110 and the receiving device 120. This
handshaking protocol may be
securely implemented using current encryption technologies such as AES or DES.
Alternatively, the
handshaking protocol may be securely implemented using the techniques taught
herein with a pre-
defined system size M and default encoding (and decoding) mapping that is
known and/or made
available to the sending device 110 and the receiving device 120. For example,
where M=3, the
sending device 110 may use an 8x8 binary permutation matrix R for the purposes
of encoding data
that carries PT, and the receiving device may use RT in its data decoding
process for the purposes of
recovering PT from the data encoded using R.
Thus, one embodiment of a data protection method could include encrypting the
data
indicative of the default mapping prior to sending it to the receiving device,
e.g., using a private key
of a private key / public key pair, where the private key being uniquely known
to the sending device,
and where the public key is known by or made available to the receiving
device.
Second initialization method
In another embodiment, shown in Fig. 6B, a central server 600 may securely
communicate
with the sending device 110 and/or the receiving device 120 in order to
distribute the permutation
matrix P during an initialization phase in signals 625 and 635. This secure
communication may be
implemented using current encryption technologies such as AES or DES, because
it only requires the
transmission of enough data to define a 2N-by-2N permutation matrix.
Alternatively, the server 600
may communicate with each of the sending device 110 and the receiving device
using the techniques
taught herein with a pre-defined system size M and default encoding (and
decoding) mapping that is
a shared secret between the server 600 and the sending device 110, and between
the server 600 and
the receiving device 120 (it is noted that different shared secrets may be
used by the server 600 when
communicating with the sending device 110 and the receiving device 120 during
the initialization
phase). The central server 600 is equipped with a processor connected to a
memory and an I/O,
wherein the processor executes computer-readable instructions stored in the
memory in order to
perform the above specified functions.
Third initialization method
In another embodiment, shown in Fig. 6C, the sending device 110 and the
receiving device
120 may each store a respective table 610 of pre-determined permutation
matrices (P1, P2, ... in the
case of the sending device 110 and PT1, PT2, ... in the case of the receiving
device 120) that is
securely installed in the respective memory 114, 124, with each permutation
matrix in the table being
referred to by a unique code (1, 2, ...). Thus, when the sending device 110 is
using a particular
permutation matrix in its data encoding process, rather than send the
corresponding permutation
matrix transpose to the receiving device 120, the sending device 110 may send
a message 620
specifying the code corresponding to that permutation matrix in the table 6102
stored in the memory
124 at the receiving device 120. This allows the transmission between the
sending device 110 and the
receiving device 120 to be unencrypted (i.e., plaintext) because the code
corresponding to one of the
permutation matrices in the memory 124 is meaningless to an outside observer
that does not have the
table 6102. This embodiment only requires the tables 6101 and 6102 to be
securely populated
beforehand.
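Purely as an illustrative sketch (the table contents are generated at random, N=3 is assumed, and all names are hypothetical), the code-based table lookup of this embodiment might be modelled as follows:

    import numpy as np

    N = 3
    rng = np.random.default_rng(7)

    def random_permutation_matrix(size: int) -> np.ndarray:
        return np.eye(size, dtype=np.uint8)[rng.permutation(size)]

    # Securely provisioned beforehand on both devices (illustrative content only).
    sender_table   = {code: random_permutation_matrix(2 ** N) for code in (1, 2, 3)}
    receiver_table = {code: P.T for code, P in sender_table.items()}

    code = 2                        # the code itself may travel in plaintext
    P_enc = sender_table[code]      # used in the sending device's encoding process
    P_dec = receiver_table[code]    # used in the receiving device's decoding process
    assert np.array_equal(P_dec, P_enc.T)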
As an extension of the above embodiments, the sending device 110 and/or the
receiving
device 120 may choose to change the permutation matrix, or may be prompted to
do so by an
external party (e.g., a user, a central server). This can result in the
transmission of a new permutation
matrix (appropriately protected or encrypted), or of a code corresponding to a
new permutation
matrix (which could be sent in plaintext). Alternatively, the order in which
the permutation matrix
changes can be programmed such that the mere request for a change in the
permutation matrix may
prompt the sending device 110 and the receiving device 120 to navigate ("hop")
through its
respective table 6101, 6102 of permutation matrices in a predefined way.
Furthermore, there is no
need for a subsequent permutation matrix to have the same system size. The
system size may be
specified as an independent variable, and there may be different levels of
sophistication, such as
different tables of permutation matrices for different system sizes, such that
a code may be valid
when received for different system sizes but would refer to different
permutation matrices of
different sizes, but known / made available to both parties due to their prior
storage in the table of
permutation matrices.
After initially sharing the permutation matrix between the sending device and the receiving device, both sides can agree to form a session permutation matrix by using the encoding/decoding mechanism described in this document. Using the newly formed permutation matrix for a session's data transmission can improve the data transmission security.
A change in session permutation matrix (which may involve a new system size)
may be
triggered by providing an indication to the sending device 110 and/or the
receiving device 120. The
indication may indicate a separation between N-bit input segments (and/or N-
bit output segments) to
be processed using the previous (old) permutation matrix and the N-bit input
segments (and/or N-bit
output segments) to be processed using the new permutation matrix. In one
embodiment, the
indication may specify the number of bits (or segments) to which a first
permutation matrix applies
before a change of permutation matrix (which may involve a new system size) is
required. In another
embodiment, the indication may signal an instantaneous change to a new system
size or permutation
matrix. In yet another embodiment, the indication may signal a time at which
the sending device or
the receiving device is to switch over to a new system size or permutation
matrix. Also, the
indication may provide a set of conditions which, when met, trigger a change
in the permutation
matrix, with the understanding that such conditions are being monitored by the
sending and/or
receiving devices 110, 120. An example of a condition is elapsed time,
absolute time (e.g., change of
day), number of bits processed, a detected hacking attempt, an interaction
with a user through the
I/O, etc. The indication may be controlled by the sending device 110 and
transmitted from the
sending device 110 to the receiving device 120, or it may be controlled by a
server (e.g., central
server 600) and sent to both the sending device 110 and the receiving device
120. The indications
may be encoded into a physical signal.
Fourth initialization method
In another embodiment, shown in Fig. 6D, all sending devices and receiving
devices that are
susceptible of communicating with one another are produced in the same secure
facility 650.
Consider for example devices such as vehicles, satellites and smartphones that
are produced in a
secure plant. The vehicles, satellites, or smartphones (here illustrated as
Vehicle A, Vehicle B and
Vehicle C in a non-limiting example) can thus be provided with sending devices
(110A, 110B, 110C,
respectively) embedded with the same encoding mapping P and with receiving
devices (120A, 120B,
120C, respectively) embedded with the same decoding mapping PT. In this way, Vehicle A, Vehicle B
Vehicle A, Vehicle B
and Vehicle C can communicate amongst themselves (and/or with a central server
at the secure

facility 650 that is also aware of P and PT), and this communication remains
secure once Vehicle A,
Vehicle B and Vehicle C have left the plant 650 and entered the marketplace.
There is no need for
any particular vehicle, satellite, or smartphone to perform handshaking, and
there is no need for an
initialization phase. Even in the case of a 12-bit system, the difficulty of
hacking such a system is so
extreme (as there are 2^12 factorial ((2^12)! > 1E+13019) possible permutation
matrices) that it is
envisaged that a single encoding mapping would provide adequate data
protection throughout the life
of the vehicles, satellites, or smartphones. The mapping may therefore be
securely embedded in the
memory of each of the devices in such a way that it prevents any external
access or tampering.
Fifth initialization method
In another embodiment, the encoding mapping is locally generated based on an
initial secret.
This may follow the general process shown in Figure 10 and outlined below:
• Stage 1010: an initial secret S is shared between two peers (Alice and Bob).
• Stage 1020: Each of Alice and Bob locally generates an initial encoding mapping (e.g., permutation matrix P) based on the initial secret S.
There are numerous ways of sharing the initial secret S at Stage 1010. Three variants will now
Three variants will now
be described in greater detail.
Initial secret sharing (variant 1 of Stage 1010)
Reference is made to Figure 11, which is similar to Figure 9 in that it shows
two peers, Alice
and Bob, communicating over the Internet 990. Alice implements a control
module 980A and peer
Bob includes a control module 980B. The control modules 980A, 980B are
initialized and tuned so
as to allow proper synchronization of the encoding mapping and decoding
mapping being used by
Alice and Bob. In the embodiment of Figure 11, Alice has the ability to
communicate with Bob not
only over the Internet 990 but also over an alternate channel 1100. In an
embodiment, the alternate
channel 1100 is an out-of-band (OOB) channel, which is out-of-band in that it does not utilize the Internet 990. An example of an OOB channel 1100 may be a cellular link
established over the public
switched telephone network (PSTN), or an NFC link.
To carry out Stage 1010 in the embodiment of Figure 11, a process may be
followed as now
described with reference to the diagram in Figure 12. The flow of operation is
as follows:
(1) Firstly, Alice generates an identifier QID and sends a message 1210
containing the identifier
QID to Bob.
(2) Next, Bob returns a message 1220 containing the identifier QID to Alice,
as a form of
acknowledgement.
(3) Then, Alice generates or obtains the initial secret S, which may but need
not be a locally
generated random number.
(4) Alice then generates a code 1230 (e.g., a QR code or hash code) from the
identifier QID and
the initial secret S.
(5) Next, Alice sends a message 1240 containing the code 1230 to Bob over the
alternate / out-
of-band channel 1100.
(6) Bob then receives the message 1240 and decodes the initial secret S from
the code 1230
based on Bob's prior knowledge of the identifier QID.
At this point, Alice and Bob each have the initial secret S and can proceed to
Stage 1020, which
is described later on.
Initial secret sharing (variant 2 of Stage 1010)
An alternate way of carrying out Stage 1010 (i.e., sharing of the initial
secret S between Alice
and Bob) will now be described with reference to the diagram in Figure 13.
This embodiment does
not use an out-of-band channel between Alice and Bob, but rather uses three
parties to achieve the
initial secret sharing, namely Alice, Bob, and a network element 1300 (e.g., a
server). The server
1300 has the ability to communicate with both Alice and Bob over the Internet
990. The flow of
operation is as follows:
(1) Both Alice and Bob send a respective hello message 1310 to the server 1300
before they
start to talk to one another. The server 1300 replies with a respective reply
message 1312
containing the server's public key PUKs to Alice and Bob. It is noted that the server's public key PUKs is part of a public/private key pair and that the server 1300 is therefore imputed to know its own private key PRKs;
(2a) Alice generates a token T and a public/private key pair (PUKA, PRKA), splits PUKA into two parts: PUK_1A + PUK_2A; uses the server's public key PUKs to encrypt PUK_2A and the token T; then sends the result to the server 1300 in a message 1320;
(2b) The server 1300 decrypts the message 1320 with its private key PRKs and
retrieves
PUK_2A and the token T, then records the token T, PUK_2A, and Alice's IP
address Alice-IP.
It is noted that token T is stored in a memory in association with Alice's IP
address Alice-IP.
(2c) Alice then sends a message 1330 Sec-Req(T, PUK_1A) to Bob;
(3a) Bob receives T and PUK_1A in the message 1330. Bob also generates a private/public key pair (PUKB, PRKB), which may be done before receiving the message 1330;
(3b) Bob encrypts token T, Bob's public key PUKB and Alice's IP address Alice-
IP with the
server's public key PUKs and sends the resulting encrypted message 1340 to the
server 1300.
The server 1300 decrypts the message 1340 with its private key PRKs and
retrieves the token
T, Bob's public key PUKB, and Alice's IP address Alice-IP. The server 1300
then looks for
token T and verifies that Alice's IP address Alice-IP is indeed the IP address
associated with
token T;
(3c) The server 1300 encrypts part 2 of Alice's public key, PUK_2A, with Bob's public key PUKB and sends it back to Bob in a message 1350;
(3d) Bob decrypts the message 1350 with Bob's private key PRKB and retrieves PUK_2A; Bob assembles PUK_1A and PUK_2A to create the totality of Alice's public key PUKA;
(4a) Bob generates an initial secret S and encrypts it with Alice's public key
PUKA.
(4b) Bob sends a message 1360 Sec-Resp(PUKA (S)) back to Alice, containing the
initial
secret S encrypted with Alice's public key PUKA (that Bob was able to assemble
at step
(3d));
(5) Alice decrypts the message 1360 with Alice's private key PRKA and
retrieves the secret
S.
At this point, Alice and Bob each have the initial secret S and can proceed to
Stage 1020,
which is described later on.
Initial secret sharing (variant 3 of Stage 1010)
Blockchains have generated interest in a variety of fields as a decentralized
data storage
mechanism with reliable redundant validation. An example application includes
the exchange of
cryptocurrencies (e.g., Bitcoins), which are transferred via transactions
linked on a blockchain.
Another example application includes the settlement of smart contracts,
whereby rights and
responsibilities of contracting parties are similarly transferred via
transactions on a blockchain. In
this embodiment, a blockchain is used by Alice to share the initial secret S
with Bob.
Conceptually, a blockchain is a digital ledger in which transactions are
recorded
chronologically and publicly. From a technology point of view, and with
reference to Figure 18, a
blockchain 1800 is a continuously growing list of records, called blocks 1810,
which are linked and
secured using cryptography. A "block" is a container data structure and lists
one or more transactions
1820.
Participants to a transaction in a particular blockchain-enabled environment
have an address,
which is derivable from a participant's "public key" (e.g., by way of a hash
function involving the
public key and other information about the network). With continued reference
to Figure 18, a
participant's public key 1830 is known to other participants in the blockchain-
enabled environment.
The participant also has a private key 1840, which is used to sign
transactions. The participant's
public key 1830 is used by such other participants to verify the signature of
a received transaction.
An algorithm, such as elliptic curve cryptography, can be used to generate one
or more public keys
from the private key. In one embodiment, such algorithm may involve converting
the private key to a
binary representation, identifying the bits in this binary representation that
have a value of 1, and
summing an exponentially multiplied generator variable to arrive at the final
public key. While the
process of public key generation is quite straightforward, reversing the
process is computationally
intense. The private key 1840 and the public key 1830 can be large integer
numbers, but since these
numbers can be very large, they tend to be represented using a separate format
consisting of letters
and numbers (e.g., Wallet Import Format (WIF)).
The private key and the one or more public keys may be stored by a "wallet". A
wallet 1850
can be a software client of the blockchain-enabled environment that is
associated with a given
participant. The wallet 1850 can be implemented as computer-readable
instructions carried out by a
processor in a mobile phone or desktop computer, for example. In addition to
storing the
participant's private key 1840 and the one or more associated public keys
1830, the wallet 1850 may
be configured to allow the participant to receive and send blocks in which
transactions are listed.
With continued reference to Figure 18 and with additional reference to Figure
19, a
transaction 1820 includes the sender's address 1910, a recipient's address
1920, transaction content
1930 and a signature 1940. The transaction content 1930 and the participant's
private key 1840 are
used to generate the signature 1940. A hash message authentication code (HMAC)
can also be
generated and inserted in the transaction 1820 to allow authenticity of the
transaction content 1930 to
be verified.
In this application of the blockchain to initial secret sharing, the
transaction content 1930 is
an encrypted version of the initial secret S. Specifically, once Alice has
obtained the initial secret S
and desires to share it with Bob, Alice's control module 980A first uses
Alice's private key PRKA
plus Bob's public key PUKB to generate a shared key Ks. The shared key Ks has
the special property
of being derivable from both the combination of Alice's private key PRKA plus Bob's public key PUKB, and from the combination of Alice's public key PUKA plus Bob's
private key PRKB.
Control module 980A uses the shared key Ks to encrypt the initial secret S
(resulting in a "shared
secret" SS) and creates a transaction on the blockchain. The transaction is
from Alice and destined
for Bob. When Bob detects the transaction from Alice, Bob's control module
980B uses Bob's
private key PRKB plus Alice's public key PUKA to generate the same shared key
Ks. Bob then uses
this shared key Ks to decrypt the initial secret S from the shared secret SS.
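The description above does not tie the shared key Ks to any particular key-agreement algorithm. Purely as one non-limiting illustration, an X25519 Diffie-Hellman exchange (shown here with the Python "cryptography" package) exhibits the stated property that the same Ks is obtained from either combination of private and public keys:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each peer holds its own private key and knows the other's public key.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Ks derived from (PRKA, PUKB) equals Ks derived from (PRKB, PUKA).
    ks_alice = alice_priv.exchange(bob_priv.public_key())
    ks_bob = bob_priv.exchange(alice_priv.public_key())
    assert ks_alice == ks_bob   # the shared key Ks, which may then encrypt S into SS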
At this point, Alice and Bob each have the initial secret S and can proceed to
Stage 1020,
which is described herein below.
It should be noted that the initial secret S should have a certain minimum
length. Generally
speaking, the size of the initial secret S can be based on the system size N,
and may correspond to N
* 2N bits. For example, in the case of N=10, the length of the initial secret
"S" can be 10 * 210 =
10,240 bits = 1,280 bytes. However, this is not to be viewed as a limitation,
as other lengths can be
used.
The above three variants have described ways in which the initial secret S can
be shared
between Alice and Bob at Stage 1010 of the process for tuning the control
modules 980A, 980B.
Stage 1020 is now described in greater detail.
Specifically, each of Alice and Bob locally generates an initial encoding
mapping (e.g.,
permutation matrix P) based on the initial secret S known to both parties.
This can be further
understood with reference to Figures 14A and 14B, where the control modules
980A, 980B are
configured to generate the permutation matrix P (as well as its transpose PT),
by executing a process
referred to as "EntroGen(*)" whose argument is, in this case, the initial
secret S.
In Figure 14A, the EntroGen(*) process implements an algorithm 1410 for
generating P and
PT, which can take on many forms, such as an array shuffle algorithm,
including the Key Scheduling
Algorithm (KSA), Fisher-Yates Shuffle, etc., with information states from 0
to (2N-1). In a non-
limiting embodiment, the 2N-by-2N permutation matrix P can be expressed by a "states array" S[2N].
Alternatively, and with reference to Figure 14B, the EntroGen(*) process
involves passing
the initial secret S through a hash module 1420 in order to generate an
intermediate secret S*, which
is then processed by the appropriate (e.g., array shuffle) algorithm 1410. The
hash module 1420
ensures that a small difference in S would produce a vastly different
permutation matrix P.
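As a non-limiting sketch of an EntroGen(*)-style process, with SHA-256 standing in for the hash module 1420 and Python's built-in shuffle (a Fisher-Yates variant) standing in for the array shuffle algorithm 1410:

    import hashlib
    import random

    def entro_gen(initial_secret: bytes, n: int) -> list:
        """Derive a states array S[2^n], one representation of P, from the initial secret S."""
        digest = hashlib.sha256(initial_secret).digest()     # intermediate secret S* (Fig. 14B)
        prng = random.Random(int.from_bytes(digest, "big"))  # deterministic, seeded by S*
        states = list(range(2 ** n))                         # information states 0 .. 2^n - 1
        prng.shuffle(states)                                 # Fisher-Yates style shuffle
        return states

    # Both peers run the same procedure on the same S and obtain the same mapping.
    assert entro_gen(b"initial secret S", 8) == entro_gen(b"initial secret S", 8)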
Sixth initialization method
With reference to Fig. 33A, each of Alice and Bob maintains an "Entropy
History Table"
3310, 3320 in the respective memory. The Entropy History Table keeps track of
recently configured
entropy "records-. Each such entropy record includes a plurality of entries,
including an ID of the
peer associated with the record (under the "App ID" column, noting that this
is not the identity of the
peer where the Entropy History Table is stored, but rather the identity of
another peer with which
such peer communicates), a time indicator (e.g., time stamp or time frame or
sequence number, under
the "time" column), an entropy state (under the "QE" column) and a hash (under
the "Hash"
column). It is recalled that a given entropy state is uniquely associated with
an encoding mapping /
permutation matrix. As each entropy record is associated with a time frame or
sequence value, it
becomes possible to readily find a previous entropy state and configure the
associated permutation
matrix; this can be done by both Alice and Bob independently, if provided with
the same time
indicator. It is noted that with each time indicator is associated an entropy
state, and the entropy state
associated with one time indicator is dependent on the entropy state
associated with a previous time
indicator. The hash is produced by passing the entropy state through a hash
function, which may be a
one-way function.
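Purely as an illustrative sketch, with hypothetical field names mirroring the columns described above and SHA-256 standing in for the unspecified one-way hash function, an Entropy History Table keyed by the hash of each entropy state might be modelled as follows:

    import hashlib

    def state_hash(states: list) -> str:
        # One-way hash of an entropy state (SHA-256 is an assumption of this sketch).
        return hashlib.sha256(bytes(states)).hexdigest()

    entropy_history = {}   # hash -> record, mirroring the table's columns

    def record_state(app_id: str, time_indicator: int, states: list) -> str:
        h = state_hash(states)
        entropy_history[h] = {"App ID": app_id, "time": time_indicator, "QE": states}
        return h

    h0 = record_state("peer-Bob", 0, list(range(8)))   # initial entropy state E0 at t0
    print(entropy_history[h0]["time"])                 # a received hash retrieves its record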
In order to build the Entropy History Table, Alice and Bob participate in a
two-phase
exchange. The first phase is a preliminary phase. The second phase is an
update phase. The
preliminary phase is now described with reference to Fig. 33B and steps 3310
through 3370.
Step 3310: Alice generates a public-private key pair. Alice's public key is
denoted PUKA.
Step 3320: Alice sends Bob a message comprising Alice's public key PUKA.
This information
may reach Bob via a third party, i.e., not necessarily directly sent to Bob by
Alice.

Step 3330: Bob selects or generates an initial entropy state E0 and
encrypts it with Alice's public
key PUKA, which gives PU(E0). The initial entropy state E0 may be represented by a states array S0[2N].
Step 3340: Bob sends PU(E0), the encrypted version of the initial entropy
state E0, to Alice's ID.
Step 3350: Alice decrypts the initial entropy state E0 using the private
key, which is known to
Alice but not Bob. Both peers now share the initial entropy state E0.
Step 3360: Alice and Bob produce the hash H0 and update their Entropy History Table with H0 and E0 for the current time indicator t0.
Step 3370: Alice and/or Bob set a trigger to trigger the update phase at a
later time, possibly a
random time in the future. In a non-limiting example, the trigger may be
implemented
as a timer.
At this point, Alice and Bob each have the initial entropy state E0. The
update phase is now
described with reference to Fig. 34 and steps 3402 through 3432.
Step 3402: One of the triggers is set off. This could be Alice's trigger or
Bob's trigger. Let it be
assumed, for the purposes of the present discussion, that it is Alice's
trigger that is
received (e.g., a timer that expires first). This means that it is time to
determine the
entropy state for a new time indicator t1 (denoted E1) and the corresponding
hash (to
be denoted H1).
Step 3404: Alice obtains the initial entropy state E0 and the hash H0 for the current time indicator t0.
Step 3406: Alice obtains the encoding mapping P (and PT) uniquely
associated with the entropy
state E0.
Step 3408: Alice requests a random number, denoted g.
Step 3410: Alice creates a new entropy state E1 from g and the current entropy state, which is the initial entropy state E0. For example, this could involve an XOR operation such as E1 = g (XOR) E0. Of course, this is merely an example and should not be viewed
as a
limitation, as many other operations are possible.
Step 3412: Alice generates hash H1 from the new entropy state E1 using a
desired technique.
Step 3414: Alice updates its Entropy History Table with the new entropy state E1 and the associated hash H1 for the new time indicator t1.
Step 3416: Alice computes or otherwise obtains the quantropized version of
g, namely P(g).
Step 3418: Alice sends the hash H0 and P(g) to Bob. It is noted that g is
quantropized and
therefore P(g) can be sent in plaintext, although it is also feasible to
encrypt it.
Step 3420: Bob receives the hash H0 and P(g). Based on the hash H0, Bob consults the Entropy History Table to find the corresponding entropy state E0.
Step 3422: Bob obtains the encoding mapping P (and PT) associated with
entropy state E0
obtained at step 3420.
Step 3424: Bob applies the decoding mapping PT to P(g). It is only in the
case where Bob has the
same Entropy History Table as Alice that this operation will give g. That is
to say, if
Bob does not have the same Entropy History Table, Bob may not find a match to
the
hash H0, or if a match is somehow found, it will not correspond to entropy
state E0,
which means that the result of Step 3422 will not result in PT, but rather a
different
mapping that will not allow g to be extracted.
Step 3426: Bob creates a new entropy state E1 from g and the current
entropy state E0 in the same
manner as was done by Alice. For example, as previously described, this could
involve an XOR operation such as E1 = g (XOR) E0. Of course, this is merely an
example and should not be viewed as a limitation, as many other types of
operations
are possible.
Step 3428: Bob generates the hash H1 from the new entropy state E1 using a
desired technique.
Step 3430: Bob sends the hash H1 and Bob's ID to Alice.
Step 3432: Alice receives the hash H1 and recognizes Bob's ID and sees that
Bob sent the
expected hash H1 for time indicator t1.
Both sides are now considered to have their Entropy History Tables updated. A
further
trigger may be set and the update phase carried out again.
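As a non-limiting sketch of the update computation, with entropy states represented as byte strings, SHA-256 standing in for the hash function, and XOR used only because it is the example operation given above:

    import hashlib
    import secrets

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    e0 = secrets.token_bytes(32)          # current entropy state E0 (illustrative encoding)
    g  = secrets.token_bytes(32)          # random number g requested at step 3408
    e1 = xor_bytes(g, e0)                 # E1 = g (XOR) E0
    h1 = hashlib.sha256(e1).hexdigest()   # hash H1 recorded for the new time indicator t1

    # Bob, after dequantropizing P(g) to recover g, reproduces E1 and H1 from his copy of E0.
    assert hashlib.sha256(xor_bytes(g, e0)).hexdigest() == h1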
Another method to update the Entropy History Tables would be, after initial
after initial
synchronization, to use the most recent entropy state (e.g., Ex) as a
symmetric cryptographic key that
encrypts the new entropy state (e.g., Ex+1).
From the foregoing, it will be noticed that each entropy state depends on the
previous entropy
state, which depends on the one before that. As such, a chain of entropy
states is created, and each
such entropy state Ex defines an encoding mapping P. Moreover, as long as the
peers agree on the
time indicator, each peer will know which entropy state, and therefore which
encoding mapping, to
use.
Implementation scenarios after agreement on encoding / decoding mapping
between sending
and receiving devices
It should be appreciated that the aforementioned quantropization techniques
for converting
N-bit input segments into N-bit output segments using an encoding mapping
associated with a
particular "entropy state" may be used in various implementation scenarios,
from transmitting a
small amount of information such as a single encryption key, to transmitting a
large amount of
information such as real-time streaming application data, to locally
generating an encryption key
without transmission to the other party.
Figures 17A-17C conceptually illustrate various possible implementation
scenarios related to
the foregoing. In all implementation scenarios, Alice is equipped with a
quantropization module
982A that includes the necessary functional modules related to quantropization
(and
dequantropization), as previously discussed. Similarly, Bob is equipped with
quantropization module
982B that includes the necessary functional modules related to
dequantropization (and
quantropization). It is assumed that Alice and Bob have reached agreement as
to which encoding and
decoding mapping to use; this can be achieved according to any of the
aforementioned initialization
methods (including maintaining an Entropy History Table and agreeing on the
current time
indicator). Also, optionally, Alice may be equipped with an application module
981A and Bob may
be equipped with an application module 981B. The application modules 981A and
981B
communicate "application data" such as a message between two users.
First implementation scenario: encoding mapping used by Alice to encode an
encryption key
for transmission to Bob
With reference to Fig. 17A, Alice's application module 981A communicates
application data
with Bob's application module 981B using, e.g., a conventional encryption
scheme. An example of
such an encryption scheme can be the Advanced Encryption Standard (AES), which
may be
considered secure if a sufficiently long encryption key is used; however, the
issue becomes how to
make Alice and Bob aware of the same encryption key without exposing this
encryption key to a
security risk.
To this end, Alice's application module 981A determines the desired encryption
key and invokes
Alice's quantropization module 982A at step 1702. The quantropization module
982A applies the
encoding mapping to the encryption key to produce a quantropized key and
returns it to the
application module 981A at step 1704. Alice's application module 981A then
sends the quantropized
key to Bob's application module 981B over the network 990. Bob's application
module 981B then
invokes Bob's quantropization module 982B with the quantropized key (step
1706), which
dequantropizes the quantropized key (using the decoding mapping) so as to
obtain the original
encryption key and returns it to the application module 981B (step 1708). From
this point on, the
application modules 981A and 981B can use the encryption key (which has been
securely transmitted
from Alice to Bob) to encrypt application data using AES, TLS (Transport Layer
Security), SSL
(Secure Sockets Layer) or any other suitable cryptographic protocol. While
this description has dealt
with a key being sent from Alice to Bob, the opposite could be true as well.
Second implementation scenario: encoding mapping used by Alice to encode
application data
for transmission to Bob
With reference to Fig. 17B, Alice's application module 981A sends application
data to
Alice's quantropization module 982A (step 1712). Alice's quantropization
module 982A
quantropizes the application data (using the encoding mapping) and returns
quantropized data back
to Alice's application module 981A (step 1714), which sends the quantropized
data to Bob's
application module 981B over the network 990. Upon receipt of the quantropized
data, Bob's
application module 981B sends it to Bob's quantropization module 982B for
dequantropization (step
1716). Quantropization module 982B dequantropizes the application data (using
the decoding
mapping) and returns the original data to Bob's application module 981B (step
1718), where it is
interpreted and processed.
In a variant of the second implementation scenario, instead of Alice's
application module
981A receiving quantropized application data returned to it from
quantropization module 982A and
forwarding it to Bob's application module 981B, Alice's quantropization module
982A could send
the quantropized application data directly to Bob's quantropization module
982B over the network
990.
Third implementation scenario: encoding mapping used by Alice and Bob to
individually
generate identical encryption keys
With reference to Fig. 17C, Alice's application module 981A communicates
application data
with Bob's application module 981B using, e.g., a conventional encryption
scheme. An example of
such an encryption scheme can be the Advanced Encryption Standard (AES), which
may be
considered secure if a sufficiently long encryption key is used; however, the
issue becomes how to
make Alice and Bob aware of the same encryption key without exposing this
encryption key to a
security risk.
In this particular implementation scenario, the encryption key is not
transmitted from Alice to
Bob. Rather, Alice and Bob locally generate the encryption key based on the
encoding mapping (e.g.,
permutation matrix P) of which they are both aware.
Specifically, this process of locally generating the encryption key may be
described with
reference to the following stages:
• Stage 1030: Each of Alice and Bob obtains a common seed from a seed source. The seed may be a randomly generated quantity.
• Stage 1040: Each of Alice and Bob creates an encryption key using the encoding mapping (e.g., permutation matrix P) and the seed.
• Stage 1050: Alice and Bob are now ready to encrypt their communications using the encryption key.
The aforementioned stages are now described in the context of two possible
variants, having
regard to Figures 15 and 16. Shown in both of these figures is the server
1300, which may have the
ability to generate random numbers and to communicate with both Alice and Bob
over the Internet
990.
The first variant, shown in Figure 15, is referred to as the request-response
variant. It is assumed
that Bob wants to establish communication with Alice. The flow of operation is
as follows (it is
understood that control modules 980A and 980B within Alice and Bob are
configured to carry out
these actions):
(7a) Bob sends a message 1510 requesting a seed from the server 1300. The
message 1510
specifies Alice's client identifier (denoted QID), Alice's IP address (denoted
Alice-IP), and
other information such as a number of bytes.
(7b) The server 1300 generates a seed (denoted SEED). The seed SEED may be
generated by
a truly random quantum process at the server 1300. The server 1300 records
Alice's client
identifier QID and Alice's IP address Alice-IP in association with the seed
SEED. The server
1300 responds to Bob with a message 1520 containing the seed SEED for
communicating
with Alice.
(8) Bob receives the message 1520 containing the seed SEED and implements a
key
generation process to generate an encryption key (denoted QK) using, as
inputs, the seed
SEED and the permutation matrix P (which was obtained previously). For
example, the seed
SEED can be treated as an information input of a matrix multiplication with
the permutation
matrix P. The output will then be the encryption key QK. Due to the randomness of the seed SEED, the resulting encryption key QK is also random.
(9a) Alice sends a message 1530 requesting a seed from the server 1300. The
message 1530
specifies Alice's client identifier QID, Alice's IP address Alice-IP, and
other information
such as a number of bytes.
(9b) The server 1300 consults its records (e.g., in memory) and verifies
whether Alice's IP
address Alice-IP matches the address that Bob had indicated (in message 1510)
should be
associated with Alice's identifier QID for this session. If there is a match,
the server 1300
responds with a message 1540 containing the previously generated seed SEED; if
there is no
match, the server 1300 may respond with a warning message (not shown) and/or
Alice's
request is rejected.
(9c) Alice receives the message 1540 containing the seed SEED and generates
the same
encryption key QK as was generated by Bob, using, as inputs to the key
generation process,
the seed SEED and the permutation matrix P.

At this point, Alice and Bob have possession of the encryption key QK. This
process of
transforming a seed (such as SEED) into an encryption key (such as QK) using a
permutation matrix
(such as P) can be repeated on-demand during the course of secure
communications.
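Purely as an illustrative sketch, with N=8 assumed so that each seed byte is one input index, and with a randomly shuffled states array standing in for the shared mapping P (index mapping being equivalent, for a permutation matrix, to the matrix multiplication described above), the transformation of a seed into the key QK might look as follows:

    import secrets

    N = 8
    states = list(range(2 ** N))              # stand-in for the shared mapping P
    secrets.SystemRandom().shuffle(states)

    def seed_to_key(seed: bytes) -> bytes:
        # Each seed byte is an input index; its mapped output index becomes a byte of QK.
        return bytes(states[b] for b in seed)

    seed = secrets.token_bytes(32)            # SEED, which may even travel in plaintext
    qk = seed_to_key(seed)                    # QK, recoverable only by holders of P
    print(qk.hex())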
It should be noted that the messages 1520 and 1540 containing the seed SEED do
not need to
be transmitted with utmost security (e.g., they can even be transmitted in
plaintext), which reduces
the complexity of key distribution. In other words, if a third party obtains
knowledge of the seed SEED,
this does not allow the third party to obtain the encryption key QK. This
makes distribution of the
encryption key QK feasible and secure for today's internet needs, while
avoiding the physical
constraints of traditional quantum key distribution.
The second variant of Stages 1030-1050, shown in Figure 16, is referred to as
a "broadcast"
variant. As such, in this variant, the server 1300 may be a broadcast server
for true quantum random
seeds. The flow of operation is as follows:
(7a) Alice sends a message 1610 to the server 1300 indicating its desire or
intention to listen
for seeds. Bob sends a similar such message 1620.
(7b) Both Alice and Bob listen for seeds from the server 1300. At some point,
the server
1300 issues broadcast messages 1630 containing seeds SEED1, SEED2, etc. An
example
format of a message containing a seed may be: 4 bytes of seedID / timestamp,
1K bytes of
seed, in addition to a signature so a receiver can verify the integrity of the
seed. Other
formats are of course possible.
(7c) Alice and Bob each receive various ones of the broadcast messages 1630,
and store the
seeds SEED1, SEED2, etc. contained in such messages.
(8a) Alice communicates with Bob (message flow 1640) to agree on which seedID
/
timestamp to use. The use of a timestamp or seedID is to avoid having to
exchange the
actual seeds.
(8b) Alice and Bob each generate the encryption key QK using, as inputs to the
key
generation process, the agreed-upon seed and the permutation matrix P or its
transpose.
It will be appreciated that at this point, Alice and Bob have possession of
the encryption key
QK.
Having thus completed Stages 1030-1050 (by way of either the first or second
variant, for
example), Alice and Bob have possession of the encryption key QK. This key
cannot be easily
determined by a malicious third party, even if the seed used to generate the
encryption key QK is
intercepted. This is because the encryption key QK is generated using the
permutation matrix P (or
its transpose PT). The distribution of the encryption key QK may thus be
considered secure, and if the
encryption key QK is sufficiently long, one can now proceed to transmit data
using AES (with the
encryption key QK) while retaining a high level of security.
General application use cases
A non-limiting example use case of the present technology is to enable
secure
communication over the internet (e.g., amongst smartphones, automatic teller
machines, corporate
servers, data centers, satellites, e-commerce servers, vehicles, IoT devices
including smart home
appliances, etc.).
Embodiments of the invention can be applied to any layer of communication, such as any layer of the OSI (Open Systems Interconnection) reference model, because the technique operates on primitive N-bit data segments. The near end device takes an input bit stream and applies the encoding mapping (e.g., permutation matrix P) to N-bit segments to produce an output bit stream of N-bit segments, finally modulating the output bit stream before sending it to the far end device over the channel 130; upon receipt, the far end device performs the decoding process by applying the decoding mapping (e.g., permutation matrix PT) to obtain back the original bit stream. This physical layer implementation can be applied to long haul/metro point-to-point transmissions for highly secure data communications.
If implemented in the application layer, then the encoding process may be
executed by a
sending application and the decoding process may be executed in a receiving
application. The
sending application may share the permutation matrix P with the receiving
application, and when the
sending application plans to send data, it performs the permutation switching
with the given segment
size N and then the output bit segments will be converted into a regular bit
stream or byte stream to
send; at the receiving application, it will apply permutation switching with
PT to bring back the
original data. In this implementation, there is no need to change any existing information infrastructure; it functions just like plug-and-play.
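A minimal sketch of the permutation switching just described, assuming a segment size N that divides the stream length evenly (padding is omitted) and using a lookup table in place of the permutation matrix P; the function names are illustrative only.

import random

N = 4  # assumed segment size for the example

def invert(table):
    """Build the inverse mapping, which plays the role of the transpose PT."""
    inv = [0] * len(table)
    for i, value in enumerate(table):
        inv[value] = i
    return inv

def switch(data, table, n_bits=N):
    """Permutation switching: split the byte stream into n_bits segments,
    map each segment through 'table', and repack the result into bytes."""
    bits = ''.join(f'{b:08b}' for b in data)
    mapped = ''.join(
        f'{table[int(bits[i:i + n_bits], 2)]:0{n_bits}b}'
        for i in range(0, len(bits), n_bits)
    )
    return bytes(int(mapped[i:i + 8], 2) for i in range(0, len(mapped), 8))

P = list(range(2 ** N))
random.Random(7).shuffle(P)
PT = invert(P)

# The sending application applies P; the receiving application applies PT.
original = b"plug-and-play"
assert switch(switch(original, P), PT) == original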
Embodiments of the invention can also be implemented as software only, such as directly in applications, or as a software-embedded system, such as inside a network protocol stack, to be enabled and disabled, as well as in browser add-ons, etc. Certain embodiments can also be implemented in data communication devices such as network nodes, routers, and wireline or wireless base stations. As well, certain embodiments can be implemented into smartphone transmission/receiving units, or mobile applications. Data security for internet information communications may thus be improved.
Further implementations include vehicle-to-vehicle (V2V) communications for
smart driving
to provide highly secure communications between vehicles.
Certain embodiments can be used to build a secure space for cloud data storage
by providing
an integrated encoding and decoding device in front of cloud data storage devices. All data would be encoded by the device using the permutation matrix P before being sent to the storage devices and decoded by the device using the corresponding permutation matrix PT before being sent out of the cloud.
Other non-limiting examples of use cases of the present technology may be to:
- Enable secure communication between autonomous vehicles and a central
server.
- Enable secure communication between satellites and a ground station.
- Enable secure communication amongst vehicles on the road (V2V private
network).
- Enable secure data storage.
A further non-limiting example use case pertains to an individualized
operating system (OS)
obtained by embedding a permutation switching sublayer into the OS to enhance
data security and
prevent data breaches. The permutation switching sublayer utilizes a
permutation switching matrix as
described above. The permutation switching matrix can be automatically
determined by each
individual system, based on system identifiers. As a result, malicious software cannot be run in such an individualized system.
Fig. 7A illustrates a typical process for an OS to load an application into memory and then run it. In this case, malware might execute in a computing system if the malware is present in the system; this type of occurrence gives the malware a "write once and run everywhere" attribute which can cause great damage to users.
In contrast, an individualized OS consists of the typical OS with an embedded application localizer (Fig. 7B) and an embedded application loader (Fig. 7C). The typical OS generates the installed system permutation matrix based on the OS license and the hardware system characteristics, and either securely stores it in the system or re-produces it at run-time every time the system starts. A user application must be "localized", with the user's authorization, by the embedded application localizer, which applies permutation switching (using the permutation matrix P) to the application executables and generates localization IDs that are stored in the localized application images.
Only an application that has been localized (see Fig. 7C) can be loaded; permutation switching with the permutation matrix PT is applied to bring back the original executable, which is then loaded into memory and run. At the same time, the localization can be verified for correctness.
It will be appreciated that automatically downloaded malware will be prevented
from executing
in an individualized OS system.
Specific application use case: Internet of Things (IoT)
Reference is now made to Figure 22, which shows an example Internet of Things
(IoT)
environment 2200. The IoT environment 2200 generally refers to an ensemble of
IoT-enabled objects
2210, typically with some data reporting or control functionality, that are in
communication with an
application device 2220 over a network such as the Internet 990. Examples of
IoT-enabled objects
2210 can include physical devices such as smart home appliances (washers,
dryers, vacuums, ovens,
refrigerators, thermostats, cameras, etc.), clothing, toys, healthcare
devices, vehicle parts and various
other items embedded with electronics, sensors (optional), software and a
network communication
interface. The application device 2220 is interested in sending messages to,
and receiving messages
from, the various IoT-enabled objects 2210. Examples of the application device
2220 could be a
smartphone, workstation, laptop or mainframe server. The IoT environment 2200
may include
further interconnected network elements 2270, 2280 having various other
functionalities.
A computer-implemented method of synchronizing an Internet-enabled object 2210
(or
Internet-enabled appliance) with the application device 2220 may be
implemented in this
environment. Such a method could include a portion carried out at the
application device 2220 and a
portion carried out at the Internet-enabled appliance. Among other things, the
application device
2220 may generate an encoding mapping based on (i) a previous encoding mapping
used to
communicate previously with the appliance 2210 and (ii) a seed (e.g., from a
remote server over the
Internet); and transmit the encoding mapping to the appliance 2210 over a
local connection that does
not traverse the Internet. For its part, the appliance 2210, upon receiving
the encoding mapping over
the local connection, may use the encoding mapping to subsequently secure data
exchanged with the
application device 2220 over the Internet. The application device 2220 may
also receive encoded
data from the appliance 2210 and decode the data using a decoding mapping
derivable from the
encoding mapping.
In the example of one of the IoT-enabled objects 2210 being a home appliance,
and with
reference to Figure 23, the appliance 2300 may be conventional in many
respects except that it also
includes a microprocessor-enabled IoT communication device 2310. The IoT
communication device
2310 includes a processor 2312, a memory 2314 and a network communication
interface 2316. An
external sensor 2318 may also be provided, or the sensor 2318 may be embedded
elsewhere in the
appliance 2300 with the output of the sensor 2318 being supplied to the IoT
communication device
2310. The IoT communication device 2310 may be powered from the same power
source as the rest
of the appliance 2300, or it may include its own power source (e.g., a
battery). The IoT
communication device 2310 may also include other components that are not
specifically illustrated.
In this example, the memory 2314 stores computer-readable instructions that
are executed by
the processor 2312. In some embodiments, the computer-readable instructions
have a small code
footprint, sometimes as small as 30 kilobytes (30KB), so as to facilitate
usage with battery-powered
appliances. By executing the computer-readable instructions in the memory
2314, the processor 2312
can carry out a variety of processes, including a communications process 2320,
a data security
process 2330, a sensing process 2340 (which is optional) and a data processing
process 2350. In other
embodiments, the aforementioned functions may be implemented by the existing
hardware and
software of the appliance 2300, i.e., in the absence of a dedicated
microprocessor-enabled IoT
communication device. Additional processes may be encoded in the computer-
readable instructions
stored in the memory 2314 and executable by the processor 2312.
In the example of an IoT-enabled object being the application device 2220, and
with
reference to Figure 28, the application device 2220 does not require a
dedicated microprocessor-
enabled IoT communication device, as existing hardware and software of the
application device 2220
can be utilized to provide the requisite functionality. In this regard, it is
useful to note that the
application device 2220 includes, inter alia, a processor 2812, a memory 2814,
a network
communication interface 2816 and a user interface 2818 (which can include
various input/output
devices such as a screen, keyboard, loudspeaker, etc.).
In this example, the memory 2814 stores computer-readable instructions that
are executed by
the processor 2812. Execution of the computer-readable instructions may cause
the processor 2812 to
carry out an operating system 2880 and various applications (or "apps") 2882.
One such app may
exhibit functionality for monitoring and/or controlling the appliance 2300. To
this end, carrying out
the app may involve executing a variety of processes, including a
communications process 2820, a
data security process 2830, a data processing process 2850 and a user
interface process 2860.
Additional processes and apps may be encoded in the computer-readable
instructions stored in the
memory 2814 and executable by the processor 2812.
The aforementioned processes will now be described in further detail, as they
apply to the
appliance 2300 or the application device 2220.
The sensing process 2340 (executed by the processor 2312 of the IoT
communication device
2310 of the appliance 2300) may include steps of obtaining a reading from the
sensor 2318 and
storing the reading in the memory 2314. Readings from the sensor 2318 may be
obtained at regular
intervals or on demand upon receipt of a command. Examples of the sensor 2318
include a
thermometer, an inclinometer, a light sensor, a hygrometer, a carbon monoxide
sensor, an
accelerometer, a pressure sensor, an image sensor (e.g., CCD) and a detection
and ranging sensor
(radar, lidar, sonar), to name a few non-limiting possibilities.
The data processing process 2350 (executed by the processor 2312 of the IoT
communication
device 2310 of the appliance 2300) may include a step of processing the sensor
data in the memory
2314 in order to produce consumable data for transmission to an external
party. The consumable data
may take the form of a stream of bytes in a certain format demanded by the
external party. By way of
non-limiting example, the sensing process 2340 may produce and store a
temperature reading once
every 5 minutes and the data processing process 2350 may determine an average
temperature over 1
hour by averaging the 12 most recent entries stored in the memory 2314.
The user interface process 2860 (executed by the processor 2812 of the
application device
2220) may include a step of interacting with a user of the application device
2220 so as to obtain an
indication of a request from the user. For example, the user may request a
reading from a remote
appliance (such as the appliance 2300) or the user may request to exert
control of such appliance.
The user interface process 2860 may also include a step of presenting
information to the user in a
particular format, such as graphically.
For its part, the data processing process 2850 (executed by the processor 2812
of the
application device 2220) may include a step of processing various data before
displaying it via the
user interface 2818. The data processing process 2850 may thus perform various
manipulations on
the data in the memory 2814, including graphical and mathematical
manipulations.
Turning now to the communication process 2320 (executed by the processor 2312
of the IoT
communication device 2310 of the appliance 2300), this process implements
certain media access
control (MAC) and physical (PHY) specifications for communicating with a
nearby network access
point 2360 via the network communication interface 2316. In a home
environment, network
connectivity may be achieved wirelessly with a home router over Wi-Fi. Other
low-level protocols
for communication between the appliance 2300 and the nearby access point 2360
include Bluetooth,
near-field communication (NFC) and Zigbee, to name a few non-limiting
possibilities. The home
router 2360, which is typically connected to a service provider modem (not
shown), then provides
the IoT communication device 2310 with access to the Internet.
For its part, the communication process 2820 (executed by the processor 2812
of the
application device 2220) may also implement certain media access control (MAC)
and physical
(PHY) specifications for communicating with a nearby network access point 2360
via the network
communication interface 2816. However, this type of low-level communication
for providing a data
connection to the Internet may instead be handled by the operating system of
the application device
2220. Such low-level communication may be conventional, and thus it will not
be described here.
Additionally, both the communication process 2320 and the communication
process 2820
also implement certain higher-layer protocols for communicating with entities
on the Internet. A non-
limiting example of such a higher-layer protocol is MQTT (Message Queuing
Telemetry Transport),
which works on top of the TCP/IP protocol. Other non-limiting examples of a
higher-layer protocol
for communicating with the server over the Internet include Advanced Message
Queuing Protocol
(AMQP), Streaming Text Oriented Messaging Protocol (STOMP), IETF Constrained
Application
Protocol, XMPP, DDS, OPC UA, and Web Application Messaging Protocol (WAMP).
The MQTT protocol follows a publish/subscribe messaging model. Instead of the
traditional
client-server model, where a client communicates directly with the server, the
publish/subscribe
model provides for two types of "client" participants that communicate
messages through a broker.
Each message is ascribed a "topic". Each client can be a "publisher" client
for certain topics and/or a
"subscriber" client for certain other topics. The publisher clients are
decoupled from the subscriber
clients by the broker, which is known by both the publisher and subscriber
clients (e.g., at a pre-
determined URL). The broker filters all incoming messages and distributes them
according to topics.
The broker is considered the heart of any publish/subscribe messaging model
such as is
implemented by the MQTT protocol. Depending on the concrete implementation,
the broker can
handle up to thousands of concurrently connected IoT-enabled objects. The
broker is primarily
responsible for receiving all messages, filtering them according to topic,
determining who is
interested in which topics and then sending the filtered messages to all
subscribed clients. The broker
also holds the session of all persisted clients including subscriptions and
missed messages.
Considering now how the MQTT protocol may be mapped to the IoT environment
2200 in
Fig. 22, and with additional reference to Figure 23, the IoT-enabled objects
2210 and the application
device 2220 can be considered the MQTT clients. In particular, the appliance
2300 can be a publisher
client of certain topics and a subscriber client to others. Similarly, the
application device 2220 can be
a publisher client of certain topics (including those topics to which the
appliance 2300 subscribes)
and can be a subscriber client to certain other topics (including those topics
published by the
appliance 2300). One of the network elements, in this case network element
2280, can play the role
of the MQTT broker. In a non-limiting example, the MQTT broker 2280 can be
implemented as a
server on the Internet having a pre-determined IP address. Each of the MQTT
clients 2220, 2300 runs
a suitable MQTT protocol library and connects to the MQTT broker 2280 over a
suitable network
connection. The MQTT protocol runs on top of TCP/IP and therefore the MQTT
clients 2220,
2300 and the MQTT broker 2280 each implement a TCP/IP stack.
An MQTT connection itself is always between one MQTT client and the MQTT
broker
2280: no MQTT client is connected to another MQTT client directly. The
connection is initiated
through an MQTT client sending a CONNECT message to the MQTT broker 2280. The
MQTT
broker 2280 responds with a CONNACK message and a status code. Once the
connection is
established, the MQTT broker 2280 will keep it open as long as the client does
not send a
DISCONNECT command or the connection is otherwise lost.
After an MQTT client is connected to the MQTT broker 2280, it can publish
messages. The
MQTT protocol implements topic-based filtering of the messages by the broker,
so each message
must contain a topic, which will be used by the MQTT broker 2280 to forward
the message to
interested MQTT clients.
A PUBLISH message can include the name of a topic and a payload. A SUBSCRIBE
message includes the name of a topic. The MQTT broker 2280 notes which MQTT
clients have
subscribed to which topics. Thus, when the broker receives a PUBLISH message
from an MQTT
client for a particular topic, the MQTT broker 2280 routes the PUBLISH message
to those MQTT
clients that have subscribed to the particular topic.
Figure 24 shows an example of the subscription, publishing and brokering
concepts of the
MQTT protocol, applied to the IoT environment of Fig. 22, and carried out by
the communication
process of each respective IoT-enabled object (in the case of the appliance
2300, this is the
communication process 2320 and in the case of the application device 2220,
this is the
communication process 2820). Specifically, the appliance 2300 may be
configured to capture sensed
data (e.g., via the sensing process 2340) and may have the capability of
publishing messages
containing this sensed data under topic "X". In addition, topic X is of
interest to the application
device 2220. Accordingly, as represented by 2410, the application device 2220
sends a SUBSCRIBE
message (topic = X) to the broker 2280. As represented by 2420, the broker
2280 registers the
application device 2220's interest in topic X. For its part, as represented by
2430, the appliance 2300
sends a PUBLISH message (topic = X) to the broker 2280 containing the sensed
data (DATA). The
broker 2280 recognizes that the application device 2220 is interested in topic
X and therefore, as
represented by 2440, routes the PUBLISH message (topic = X) with the data
(DATA) to the
application device 2220.
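The exchange of Figure 24 could be exercised with any MQTT client library; the following sketch assumes the paho-mqtt package (1.x callback API), a hypothetical broker address and topic name, and omits authentication, TLS and error handling.

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder for the broker's pre-determined address
TOPIC = "X"

def on_connect(client, userdata, flags, rc):
    # Step 2410: the application device registers its interest in topic X.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Step 2440: the broker routes PUBLISH messages for topic X here.
    print(f"received on {msg.topic}: {msg.payload!r}")

application_device = mqtt.Client()
application_device.on_connect = on_connect
application_device.on_message = on_message
application_device.connect(BROKER, 1883)
application_device.loop_start()

# Step 2430: the appliance publishes its sensed data under topic X.
appliance = mqtt.Client()
appliance.connect(BROKER, 1883)
appliance.publish(TOPIC, payload=b"DATA")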
With reference to Figure 25, there is shown a format of an MQTT message 2500,
in this case
a PUBLISH message. At a minimum, all MQTT messages include a fixed header
field 2510 with a
control header field 2512 and a packet length field 2514. The control header
field 2512 indicates the
type of MQTT message (in this case a PUBLISH message). As MQTT messages are
variable-length
messages, the packet length field 2514 indicates the length of the MQTT
message 2500. Then,
depending on the message type, there may be a header field 2520 and a payload
field 2530. Such
fields do exist for a PUBLISH message, but their length and content vary. In
particular, the header
field 2520 includes a topic name length field 2522, a topic name field 2524
and a message ID field
2526 (also referred to as a time stamp). The topic name field 2524 indicates the topic to which the PUBLISH message 2500 relates, while the topic name length field 2522 indicates the length of the topic name field 2524. As for the message ID field 2526, this may
indicate a sequential
number issued by the source of the message 2500, which could be unique for
each message or unique
for each message having the same topic.
Finally, the payload field 2530 contains the application message to be
published. The MQTT
protocol is data-agnostic, and how the payload is structured depends on the
use case. It is completely
up to the sender whether it wants to send binary data, textual data or even full-
fledged XML or JSON.
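Purely as an illustration of the fields discussed above, the sketch below packs a topic name, message ID and payload behind a control header and a length field. It mirrors the layout described in the text rather than the exact MQTT wire encoding (which, for instance, uses a variable-length remaining-length field), and the function name is hypothetical.

import struct

def build_publish(topic, message_id, payload):
    topic_bytes = topic.encode("utf-8")
    header_and_payload = (
        struct.pack(">H", len(topic_bytes))   # topic name length field 2522
        + topic_bytes                          # topic name field 2524
        + struct.pack(">H", message_id)        # message ID / time stamp field 2526
        + payload                              # payload field 2530
    )
    control_header = bytes([0x30])             # PUBLISH message type (field 2512)
    packet_length = struct.pack(">I", len(header_and_payload))  # simplified field 2514
    return control_header + packet_length + header_and_payload

example = build_publish("QEP", 1, b"\x5a\x3c")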
The objects in an IoT environment suffer from security issues similar to those
of
conventional servers, workstations and smartphones, except that firewall,
security update and anti-
malware systems used for the latter types of devices are generally unsuitable
for the typically
smaller, less capable, IoT-enabled objects. As such, a solution is to provide
the IoT-enabled objects
with data security processes, as now described in greater detail.
The data security processes 2330, 2830 function as a layer above the
respective
communication process 2320, 2820. For example, the communication processes
2320, 2820 are
responsible for functionality such as modulation and demodulation, antenna /
gain control and
connection maintenance (e.g., retransmissions at the TCP level). For their
part, the data security
processes 2330, 2830 are responsible for the following three functions:
Function 1: Determining and distributing the encoding and decoding
mappings;
Function 2: Quantropization and dequantropization using the encoding and
decoding mappings;
and
Function 3: Remote tuning of the encoding and decoding mappings.
These three functions will now be described in greater detail.
Function 1 of the Data Security Process: Determining and distributing the
encoding and
decoding mappings
In order to allow communication to take place in the entropy space between the
appliance
2300 and the application device 2220, each of these devices needs to know the
encoding mapping
that it will use to quantropize its data, and the decoding mapping that it will use to dequantropize received data. The data security
process 2330 carried out by the appliance 2300 and the data security process
2830 carried out by the
application device 2220 are responsible for determining the encoding and
decoding mappings and
coordinating their distribution. In the present embodiment, the data security
process 2830 of the
application device 2220 is responsible for determining the permutation matrix
P of dimensionality 2N
by 2N and distributing it to the data security process 2330 of the appliance
2300. As such, the
processor 2812 of the application device 2220 may handle a greater
computational load than the
processor 2312 of the appliance 2300.
One example way in which the permutation matrix P is determined and
distributed is now
described with reference to Figure 27 which further illustrates one of the IoT-
enabled clients in the
IoT environment 2200, in this case network element 2270, having the role of a
"seed distributor".
The seed distributor 2270 may be configured to generate seeds using a random
process as is known
in the industry.
At step 2700, the data security process 2830 of the application device 2220
obtains an initial
seed ISEED. For example, the initial seed ISEED may have been stored in the
memory 2814 of the
application device 2220.
At step 2710, the data security process 2830 of the application device 2220
generates an
encoding mapping based on the initial seed ISEED. To this end, the data
security process 2830 may
execute the EntroGen(*) process. It is recalled that the EntroGen(*) process
produces an encoding
mapping based on the value of the argument. Thus, for example, calling
EntroGen(ISEED) produces
an initial permutation matrix PI.
At step 2720, the seed distributor 2270 generates a random seed QSEED. This
may be a true
random number in the mathematical sense, but this is not a requirement. It may
simply be a bit string
of any particular size that would be difficult for an outside party to guess.
At step 2730, the seed distributor 2270 sends the random seed QSEED to the
application
device 2220 via the broker 2280. That is to say, the seed distributor 2270
places the random seed
QSEED into the payload of a PUBLISH message 2735, for which the topic is
chosen to be "SEED"
(by way of example). Meanwhile, the application device 2220 is assumed to
subscribe to the topic
"SEED". The PUBLISH message 2735 reaches the broker 2280, which recognizes
that the
application device 2220 subscribes to the topic "SEED" and accordingly routes
the PUBLISH
message 2735 to the application device 2220. The version of the PUBLISH
message 2735 routed by
the broker 2280 is given the reference numeral 2735* in order to distinguish
it from the PUBLISH
message 2735 received by the broker 2280, since there will be processing by
the broker 2280.
With reference to Figure 30, there is shown the PUBLISH message 2735
transmitted by the
seed distributor 2270 towards the application device 2220 via the broker 2280.
The PUBLISH
message 2735 has structural elements corresponding to those previously
described with respect to the
message 2500, namely a fixed header field 3010, a header field 3020
(containing the topic name
"SEED") and a payload field 3030. In the present embodiment, the topic "SEED"
signals
transmission of the random seed QSEED within the payload field 3030. The
payload field 3030 may
also include a seed ID (which is an identifier of the seed being published), a
hash-based message
authentication code (HMAC, which is optional) and a time stamp (which can be
measured in terms
of relative packet order or in terms of elapsed time, for example).
Returning now to Figure 27, at step 2740, the PUBLISH message 2735 ultimately
reaches the
application device 2220, where the contents of the payload field 3030 of
the PUBLISH message
2735 are extracted by the communication process 2820. In short, this is how the
data security process
2830 of the application device 2220 obtains the random seed QSEED.
At step 2750, the data security process 2830 generates a key QK by applying
the initial
permutation matrix PI to the received random seed QSEED. Then, the data
security process 2830
calls the EntroGen(*) process again, but with the key QK as the argument. This
results in the
permutation matrix P. It is noted that by applying the initial permutation
matrix PI to the received
random seed QSEED before calling EntroGen(*), even if an outsider knows the
EntroGen(*) process
and the random seed QSEED, they still cannot generate the permutation matrix
P.
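A sketch of steps 2700 to 2750 is given below. The internals of EntroGen(*) are not reproduced; a deterministic shuffle seeded by its argument stands in for it, and the seed values, segment size and lengths are assumptions made only for the example.

import hashlib
import random

N = 8  # assumed segment size

def entro_gen(argument):
    """Stand-in for EntroGen(*): derive a permutation over the 2**N values from 'argument'."""
    rng = random.Random(hashlib.sha256(argument).digest())
    table = list(range(2 ** N))
    rng.shuffle(table)
    return table

def apply_mapping(table, data):
    """Apply the permutation to each N-bit (here, byte-aligned) segment."""
    return bytes(table[b] for b in data)

ISEED = b"initial seed held by the application device"              # step 2700
P_I = entro_gen(ISEED)                                               # step 2710: initial mapping PI
QSEED = bytes(random.Random(99).randrange(256) for _ in range(32))   # step 2720: seed distributor
QK = apply_mapping(P_I, QSEED)                                       # step 2750: key QK
P = entro_gen(QK)                                                    # step 2750: permutation matrix P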
At step 2760, the application device 2220 carries out the process of
distributing the
permutation matrix P to the IoT-enabled devices with which it wants to
securely communicate. In
the present embodiment, this includes the appliance 2300. The application
device 2220 establishes a
link 2770 with the appliance 2300 over a secure private network, such as local
private WiFi, NFC or
Bluetooth. After the appliance 2300 securely receives the permutation matrix
P, both IoT-enabled
devices (the application device 2220 and the appliance 2300) are "in" the same
entropy state, and the
application device 2220 can from now on securely manage the appliance 2300
over the public
Internet without security concerns.
Step 2760 can be carried out whenever the application device 2220 wishes to
perform a reset
or re-synchronization of the encoding mapping with the appliance 2300 or other
IoT-enabled devices
with which it communicates. Re-synchronization can be motivated by a variety
of factors, such as the
passage of a certain amount of time or a perceived security breach. It is
noted that the seed distributor
2270 may publish random seeds on a regular basis (e.g., as PUBLISH messages
under the topic
"SEED"), and the application device 2220, in subscribing to the topic "SEED-,
may receive these
seeds (via the broker 2280) and use them for resynchronization purposes as
required.
Function 2 of the Data Security Process: Quantropization and dequantropization
using the
encoding and decoding mappings
With reference to Figure 26, there is shown a PUBLISH message 2600 transmitted
by the
appliance 2300 towards the application device 2220 via the broker 2280. The
message 2600 has
structural elements corresponding to those previously described with respect
to the message 2500,
namely a fixed header field 2610, a header field 2620 (containing the topic
name) and a payload field
2630. In the present embodiment, the topic "QEP" (which stands for "Quantum Entropy
Processing")
signals transmission of a quantropized bit string in the payload field 2630.
The quantropized bit
string corresponds to a bit string carrying sensor data or any other kind of
information provided by
the data processing process 2350, and having undergone quantropization.
The corresponding bits in the information space may include additional
information such as a
seed ID (e.g., an identifier of the seed from which the current version of the
encoding mapping was
generated), a hash-based message authentication code and a time stamp (which
can be measured in
terms of relative packet order or in terms of elapsed time, for example). In
one embodiment, all this
information can be concatenated and quantropized before being placed into the
payload field 2630 of
the PUBLISH message 2600. In another embodiment, only some of this information
(e.g., the sensor
data, the seed ID) is quantropized and then is concatenated with other (non-
quantropized)
information (e.g., HMAC and time stamp) when being placed into the payload
field 2630 of the
PUBLISH message 2600.
Quantropization is achieved by the data security process 2330 of the appliance
2300 applying
an encoding mapping that is stored in the memory 2314. For exemplary purposes,
and as described
above in connection with Function 1, the encoding mapping is represented by a
permutation matrix P
of dimensionality 21' by 2N, while the decoding mapping is represented by the
matrix PT. The
decoding mapping is assumed to be also known to the application device 2220
(see Function 1),
which is further assumed to subscribe to the topic "QEP" and is tasked with
carrying out the
corresponding dequantropization process to retrieve the sensor data or other
information.
More particularly, the data security process 2330 of the appliance 2300
includes a step of
breaking down (disassembling) an input bit string 2650 received from the data
processing process
2350 into N-bit segments which are mapped to corresponding N-bit output
segments. Specifically,
for each given N-bit segment in the input bit string, the value (e.g., decimal value) represented by this N-bit segment is mapped by the data security process 2330 to a new value (e.g., decimal value) under operation of P, and then the binary digits used to represent this new value (e.g., decimal value) become the corresponding N-bit output segment for the given N-bit segment. The data security process 2330 of the appliance 2300 then invokes (e.g., calls) the communication process 2320.
Specifically, the communication process 2320 creates the PUBLISH message 2600
by
populating the header field 2620 with the topic name "QEP" and by populating
the payload field
2630 with the aforementioned N-bit output segment, denoted 2660. The PUBLISH
message 2600 is
then sent onto the Internet by the appliance 2300 and reaches the broker 2280.
The broker 2280
receives the PUBLISH message sent by the appliance 2300, recognizes the topic
"QEP" in the header
field 2620, recognizes that the application device 2220 subscribes to the
topic -QEP" and routes the
PUBLISH message 2600 towards the application device 2220. The routed PUBLISH
message is
given reference numeral 2600* to distinguish it from the PUBLISH message 2600
received by the
broker 2280, since there will be processing by the broker 2280. However, the
content of the two
messages 2600, 2600* is identical. It is noted that no change to the MQTT
protocol is required for
communication of sensitive data to take place in the entropy space so that the
quantropized data
reaches the application device 2220.
At the application device 2220, the communication process 2820 of the
application device
2220 determines that the topic of the received PUBLISH message 2600* is "QEP",
extracts the
contents of the payload field 2630* (namely the segment 2660) and invokes
(e.g., calls) the data
security process 2830. By virtue of the topic being "QEP", the data security
process 2830 knows that
the received data is quantropized and proceeds to a step of dequantropizing
the N-bit segment 2660
in the payload field 2630* of the received PUBLISH message 2600*.
Specifically, for each given N-
bit segment, the value represented by the N-bit segment is mapped by the data
security process 2830
to a new value under operation of the permutation matrix PT, and then the
binary digits used to
represent this new value form the corresponding N-bit output segment. The N-
bit output segment
may be combined with earlier such segments and then assembled by the data
security process 2830
into an output bit string 2670 of suitably sized characters (e.g., bytes) that
may be further processed
by the data processing process 2850 to derive useful information and lead to
actions being taken. It
should be noted that due to the encoding and decoding mappings being
transposes of one another, bit
strings 2650, 2670 are identical.
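Under the same assumptions as the earlier sketches (byte-aligned segments, a lookup table standing in for P), the round trip can be checked directly: applying P and then its inverse, which plays the role of PT, returns the original bit string, so 2650 and 2670 coincide.

import random

N = 8
P = list(range(2 ** N))
random.Random(3).shuffle(P)
PT = [0] * len(P)
for i, value in enumerate(P):
    PT[value] = i  # inverse permutation, the role played by the transpose PT

bit_string_2650 = b"temperature=21.5C"
quantropized = bytes(P[b] for b in bit_string_2650)    # appliance side
bit_string_2670 = bytes(PT[b] for b in quantropized)   # application device side
assert bit_string_2650 == bit_string_2670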
Function 3 of the Data Security Process: Remote tuning of the encoding and
decoding
mappings
Although step 2760 described above provides a way to re-synchronize the
encoding mapping
between the application device 2220 and the appliance 2300 in physical
proximity to one another, it
may be in some instances be desirable to reset or re-synchronize the encoding
mapping when the
application device 2220 is not in the vicinity of the appliance 2300. This
remote re-synchronization is
now described with additional reference to Figure 29. A further topic is
created for re-
synchronization, under the name "ENTROSYNC". It is assumed that the IoT-
enabled devices
(including the application device 2220 and the appliance 2300) are aware of
the existence of the topic
"ENTROSYNC" and subscribe to this topic.
At step 2910, the communication process 2820 of the application device 2220
receives a new
random seed NEW_QSEED from the seed distributor 2270 (via the broker 2280). As
described
earlier, this could be a result of regular transmissions by the seed
distributor 2270 of PUBLISH
messages under the topic "SEED". Alternatively, a special request-response
sequence can be
designed to result in the release of a seed on demand.
At step 2920, the data security process 2830 of the application device 2220
generates a new
key QK1 by applying the current permutation matrix P to the new random seed
NEW_QSEED. The
application device 2220 then calls the EntroGen(*) process with the new key
QK1 as an argument, in
order to generate a new permutation matrix P_NEW in preparation for exchanging
with the appliance
2300 once it has also been synchronized.
At step 2930, the application device 2220 sends a PUBLISH message 2935 under
the topic
"ENTROSYNC" and includes, in the payload, the value of the random seed
NEW_QSEED that was
used to generate the new key QK1 and the new permutation matrix P_NEW. The
PUBLISH message
2935 reaches the broker 2280, which routes it to the appliance 2300. The
routed PUBLISH message
is given reference numeral 2935* to distinguish it from the PUBLISH message
2935, since there will
be processing by the broker 2280.
At step 2940, the communication process 2320 of the appliance 2300 receives
the PUBLISH
message 2935, recognizes the topic "ENTROSYNC", and then extracts the new
random seed
NEW_QSEED from the payload and sends it to the data security process 2330.
At step 2950, the data security process 2330 of the appliance 2300 generates
the same new
key QK1 by applying the current permutation matrix P to the received new
random seed
NEW_QSEED. The appliance 2300 then calls the EntroGen(*) process with the new
key QK1 as an
argument, in order to generate the same new permutation matrix P_NEW in
preparation for
exchanging with the application device 2220.
At this point, both the application device 2220 and the appliance 2300 have
knowledge of the
same new permutation matrix P_NEW. Of course, the transpose P_NEWT can be
easily derived from
P_NEW if it is not already known. Thus, the data security process 2330 of the
appliance 2300 can
use an encoding mapping represented by the permutation matrix P_NEW while the
data security
process 2830 of the application device 2220 can use a decoding mapping
represented by P_NEWT.
Similarly, the data security process 2830 of the application device 2220 can
use an encoding
mapping represented by the permutation matrix P_NEWT while the data security
process 2330 of the
appliance 2300 can use a decoding mapping represented by P_NEW.
Alternative protocols to MQTT
Turning now to Figure 31, there is shown an alternative message format, which
is applicable
in the case of the AMQP protocol. In particular, there is shown an AMQP
message 3110 for seed
delivery and an AMQP message 3120 for delivery of quantropized information. In
particular, the
general message format of an AMQP message includes a header field 3130, a
delivery annotations
field 3140, a message annotations field 3150, a properties field 3160, an
application properties field
3170, an application data field 31g0 and a footer field 3190.
The properties field 3160 and the application properties field 3170 may be
used to convey
some of the information described earlier in this document. For example, the
properties field 3160 of
the AQMP message 3010 for seed delivery may include a type flag (which
identifies the type of
message), whereas the application properties field 3170 may include the seed
itself (e.g., QSEED),
the HMAC and the time stamp. Also, the properties field 3160 of the AMQP
message 3120 for
delivery of quantropized information may include a type flag (which identifies
the type of message),
the seed identifier, the HMAC and the time stamp, whereas the application data
field 3180 may
contain the quantropized data itself.
Specific application use case: Secure blockchain data storage
It should be appreciated that a user can use a blockchain to securely store
sensitive data (such
as the initial secret for a cryptographic exchange, confidential government or
corporate data,
surveillance video, any other sensitive data) for its own private use. For
example, consider a user in a
blockchain network that has a wallet with a private key and a public key. The
user can store the
sensitive data on a local device (e.g., smartphone, desktop) and, as a backup,
on the blockchain. This
can be done by using the public key to encrypt the sensitive data, and
recording it as a transaction on
the blockchain. Other users cannot decrypt the sensitive data without the
private key. Only the user
can decrypt the sensitive data from the blockchain using the private key. In
this way, only the wallet
needs to be backed up securely by the user, not the data, which remains
securely located in the cloud.
This can be used for cloud-based secure backup storage. As such, if the user
is using a new device
(for example, a new smartphone after the previous one was lost or stolen), the
user obtains a backup
copy of the wallet (including the private key). Then the encrypted sensitive
data is downloaded from
the blockchain and decrypted using the private key to obtain the sensitive
data in decrypted form.
This is now described in greater detail with additional reference to Fig. 20,
which shows a
user device 2010 in a blockchain-enabled environment. The user device 2010 can
be a smartphone or
computer, which is configured to be capable of carrying out quantropization as
described herein and
to this end includes a control module 2015 for determining the correct
encoding and decoding
mappings to use. The user device 2010 wishes to upload a file using the
blockchain so as to, for
example, store the file for future access. The file can be sliced into one or
more file blocks. A server
2020 is also present in this blockchain-enabled environment and is configured
for broadcasting seeds
from time to time. In addition, the server 2020 maintains a historical record
of the seeds it has
broadcast, together with their time ranges. As seeds SEED1, SEED2, etc. are
received from the
server 2020, the control module 2015 of the user device 2010 can generate a
succession of keys
QK1, QK2, etc. with the aid of one or more permutation matrices P1, P2, etc.
as has been described
previously. That is to say (see the sketch following this list):
= P1(SEED1) → QK1
= EntroGen(QK1) → P2
= P2(SEED2) → QK2
= EntroGen(QK2) → P3
= Etc.
When the user is ready to upload the file, the metadata of the file is first
encrypted (using
AES) with the current key QKi; the result is denoted AES(metadata). Then the
encrypted metadata is
written on the blockchain as a transaction with a user ID, the timestamp and a
signature. In addition,
a hashed message authentication code (HMAC) is also added, which is produced
from the key and
the message cipher. HMAC can only be verified by peers that have been paired,
which provides
robustness in the face of a quantum computer. In other words, in the event
that the user's private key
being used to carry out the transactions on the blockchain is cracked (e.g.,
by a quantum computer)
or stolen, the user can still identify authentic and fake transactions by
verifying the HMAC along
with each transaction record. Also, a checksum value related to the file data
may be produced and
added as part of the transaction. This can be used once the file is downloaded
to test for data
integrity.
Each file block FID is then encrypted using the same technique and with the
same key QKi
(unless changes have occurred in the meantime). The entire encrypted file
block is hashed, e.g., using
SHA256. Then this hash is uploaded to the blockchain, along with User ID,
sequence number,
HMAC, timestamp and signature of the user.
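A sketch of the upload path under stated assumptions: AES-GCM is used as one concrete AES mode (the text only specifies AES), the third-party cryptography package provides the cipher, and the field names are illustrative only.

import hashlib
import hmac
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(qk, plaintext):
    """Encrypt with the current key QKi and attach an HMAC computed from the
    key and the ciphertext, plus a timestamp."""
    nonce = os.urandom(12)
    cipher = nonce + AESGCM(qk).encrypt(nonce, plaintext, None)
    return {
        "cipher": cipher,
        "hmac": hmac.new(qk, cipher, hashlib.sha256).hexdigest(),
        "timestamp": int(time.time()),
    }

QKi = os.urandom(32)  # current key from the QK succession above
metadata_tx = encrypt_record(QKi, b'{"file": "backup.bin", "blocks": 3}')
block_tx = encrypt_record(QKi, b"...contents of file block FID...")
block_tx["sha256"] = hashlib.sha256(block_tx["cipher"]).hexdigest()  # hash recorded on chain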
Consider now that the user wants to download the file, say, onto a second user
device 2050.
The second user device 2050 can be another smartphone or computer which,
similar to user device
2010, is configured to be capable of carrying out quantropization as described
herein and to this end
includes a control module 2055 for determining the correct encoding and
decoding mappings to use.
The second user device 2050 first downloads the encrypted metadata from the
blockchain. The
integrity of the file metadata will be verified by the HMAC. The metadata can
be decrypted by AES
using the correct key QKi generated with the correct seed SEEDi. The correct
value of "i" can be
retrieved from the server 2020 based on the timestamp stored in the file. The
decrypted metadata will
tell the second user device 2050 which file blocks (FIDs) to download (and
from which storage
nodes) as well as their sequence. Then the second user device 2050 will
download all the encrypted
file blocks and decrypt them using the same mechanism. The integrity of each
file block will be
verified by HMAC. Finally, the device will concatenate the file blocks to
reconstruct the original file.
The file can be verified by the checksum value.
Specific application use case: Quantum-resistant signatures
With reference now to Figure 35, there is shown an example system for
transmission of a
"witnessed message" in accordance with a non-limiting embodiment. The
witnessed message may in
some embodiments comprise a blockchain transaction. Specifically, the system
includes two peers,
Alice and Bob, as well as a witnessing entity, referred to as "William".
William represents a
computing device that could be accessible to Alice and Bob over a network 990
such as the internet.
This could be in the context of an online service provided by William, and
which charges a fee for
each witnessed transaction, for example. It is assumed that Alice and William
have securely built up
a common Entropy History Table (as described above), including entropy
records. It is recalled that
an entropy record includes a plurality of entries, which may include an ID of
the peer associated with
the record, a time indicator, an entropy state and a hash.
It is further assumed that a common entropy record is identified between Alice
and William.
The common entropy record may be identified by, for example, agreeing on the
time indicator. The
common entropy record identifies an entropy state, which for the purposes of
the present example,
will be referred to as the shared entropy state at time "t" (denoted Et), and
which has a corresponding
hash Ht.
In a particular non-limiting embodiment, steps involved in Alice generating a
witnessed
message WM from an original message M are now described with reference to the
diagram in Figure
36. It is assumed that Alice maintains a public-private key pair. Alice's
public key PU can be used by
Bob and William as an identifier of Alice.
= Alice initiates a token generation protocol TokenGen(*) to generate a
token T from the entropy
state Et. By way of non-limiting example, TokenGen may involve two steps:
1. Use a pseudo random number generator (such as RC4 PRGA) to generate a
pseudo-random
number R based on the entropy state Et;
2. Use a hash function (such as MD5 or SHA-2) to generate the token T from the
random
number R.
= Alice generates a signature SIG. This is done by first creating a digest
D of a combination of the
message M and the token T. In a simple example, the digest D can be generated
as a hash
function applied to a concatenation of the message M and the token T. Then,
the digest D is
encrypted with Alice's private key to create the signature SIG.
= Alice creates a witnessed message WM with the message M and the signature
SIG. This can be
done by concatenating the message M and the signature SIG (a sketch combining these steps appears after this list).
= Meanwhile, Alice informs William that a token was generated, and supplies
William with Alice's
ID, the signature SIG (which, it is recalled, was generated from the message
M and the token T),
as well as the hash Ht corresponding to the (current) entropy state Et
(obtained from the Entropy
History Table).
= Based on Alice's ID, William obtains Alice's public key PU (for example,
this could be stored in
a database, which may be published online). Based on Alice's public key PU,
William retrieves
the corresponding hash, and verifies that the retrieved hash matches the hash
received from
Alice, namely Ht. If not, then this would mean that Alice is not using the same
Entropy History
Table as William, which could signal a security breach.
= Assuming that there is a match between the hash retrieved from the
Entropy History Table and
the hash obtained from Alice, William retrieves the corresponding entropy
state (which should be
the current entropy state Et) and initiates the same token generation protocol
TokenGen(*) to
generate the same token T as Alice from the same entropy state Et, that is, T = TokenGen(Et).
= William updates a "token ledger" 3675 with the token T and the signature
SIG, in association
with Alice's public key PU. The token ledger 3675 may be implemented as a data
structure, such
as a table, linked list, etc., stored in a database and physically residing in
memory. The token
ledger 3675 may also log other details such as a message ID or session ID.
= William then informs Alice that the token ledger has been updated.
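The sketch below combines Alice's steps above. RSA with PKCS#1 v1.5 is used only as one concrete public-key scheme (the text does not fix one), SHA-256 stands in for both the PRNG and hash steps of TokenGen(*), and the third-party cryptography package is assumed to be available.

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def token_gen(entropy_state):
    # TokenGen(*): a PRNG step followed by a hash, both stood in by SHA-256 here.
    pseudo_random = hashlib.sha256(b"PRGA" + entropy_state).digest()
    return hashlib.sha256(pseudo_random).digest()

E_t = b"shared entropy state at time t"       # from the common Entropy History Table
H_t = hashlib.sha256(E_t).hexdigest()         # hash reported to William

alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
T = token_gen(E_t)                            # token T
M = b"the original message"
D = hashlib.sha256(M + T).digest()            # digest D of the message and the token
SIG = alice_key.sign(D, padding.PKCS1v15(), hashes.SHA256())   # sign() hashes D again internally
WM = M + SIG                                  # witnessed message: concatenation of M and SIG

# William recomputes T = TokenGen(Et) after checking H_t, then records the
# (signature, token) association in the token ledger 3675.
token_ledger = {SIG: T}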
At this point, Alice sends the witnessed message WM to recipient Bob. It is
recalled that the
witnessed message WM includes a first part (the message M) and a second part
(the signature SIG) -
the two pans could be concatenated or combined in different nays to form the
witnessed message
WM.
Generally speaking, Bob validates the witnessed message WM received from Alice
by
carrying out the following:
- receiving a token (T) from William (the witnessing entity); this could be
done in response to
providing at least the second part of the message to William;
- obtaining a first data element (D) by joint processing of the first part
of the message and the
token (such as hashing a combination of the first part of the message and the
token);
- obtaining a second data element by joint processing of the second part of
the message using a
key associated with Alice (e.g., decrypting the second part of the message
using Alice's
public key, whereby Alice's private key would have been used to encrypt the
second part of
the message and, more specifically to encrypt a result of joint processing, by
Alice, of the
first part of the message and the token, thereby to create the second part of
the message); and
- validating the witnessed message by comparing the first and second data
elements; note that
if the validating is unsuccessful, a warning of some kind (e.g., signal) may
be issued by Bob.
It is noted that the first and second data elements match only when the token
received from
William matches an original token used by Alice in creating the second part.
The token created by William and the original token used by Alice in creating
the second part
are both created from the same entropy data (E). This entropy data was
obtained by William
consulting a ledger based on a code (such as a hash H derived by executing a
one-way function on
the entropy data) received from Alice.
In a particular non-limiting embodiment, steps involved in Bob validating /
verifying the
integrity of the witnessed message WM are now described with reference to the
diagram in Fig. 37 (an illustrative sketch follows the list below).
= Bob extracts the signature SIG from the witnessed message WM.
= Bob provides the signature SIG to William, together with Alice's public
key PU (or any other
identifier of Alice that would allow William to obtain Alice's public key PU).
= William obtains the token T by consulting the token ledger 3675 based on
at least one of Alice's
public key PU and the signature SIG received from Bob.
= William sends the token T to Bob.
= Bob verifies integrity of the witnessed message WM using the token T. In
particular:
o Bob extracts the message M from the witnessed message WM received from
Alice,
combines it with the token T received from William, and creates an expected
digest D.
o Meanwhile, Bob uses Alice's public key PU to decrypt the signature SIG,
thereby to
obtain a decrypted digest D'.
o Bob then compares the expected digest D to the decrypted digest D'. If
there is a match,
this will confirm that the message M in its current form was indeed sent by
Alice as
witnessed by William. If not, this could mean that the message was tampered with,
or that it
was not actually sent by Alice.
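Continuing the previous sketch (and reusing its WM, token_ledger and alice_key), Bob's checks could look as follows; verify() with Alice's public key stands in for the decrypt-and-compare step described above.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

SIG_LEN = 256                                    # length of a 2048-bit RSA signature
received_M, received_SIG = WM[:-SIG_LEN], WM[-SIG_LEN:]

T_from_william = token_ledger[received_SIG]      # William consults the token ledger 3675
expected_D = hashlib.sha256(received_M + T_from_william).digest()

try:
    alice_key.public_key().verify(
        received_SIG, expected_D, padding.PKCS1v15(), hashes.SHA256())
    print("witnessed message validated")
except InvalidSignature:
    print("warning: message tampered with or not sent by Alice")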
The above-described process prevents a malicious party that gains access to
Alice's private
key from generating a valid transaction, even if the malicious party also
gains access to the token T.
This is because the malicious party does not have access to the Entropy
History Table, and therefore
the malicious party does not know the current entropy state Et or its
corresponding hash Ht. Thus, the
malicious party cannot successfully instruct William to generate the correct
token T for use in the
receiving process.
Stated differently, Alice and William must generate the same token because Bob
relies only
on William for the token. If the tokens are different, Bob will be unable to
verify the message. It is
assumed that Alice and William share a privileged/secured relationship.
Specifically, William will
only generate tokens when directed by Alice. As such, so long as a secured
channel can be
established between Alice and William, this approach may be more difficult to
crack than a
conventional approach, even for quantum computers.
It is noted that in the above scenario, William (the witnessing entity) is
configured to obtain a
code (such as a hash) and a signature from Alice, consult a first database
(Entropy History table) to
obtain entropy data associated with the code, generate a token from the
entropy data, store in a
second database (token ledger 3675) an association between the token and the
signature (and
possibly also the identifier of Alice, such as a public key) and, later,
transmit a message comprising
the token in response to receipt of a request identifying at least the
signature. However, the first
database is only consulted when there is a match between code obtained from
Alice and a code
associated with the identifier of Alice. The entropy data may represent a
configuration of a one-to-
one switch with 2N input ports and 2N output ports.
Specific application use case: Quantum-resistant blockchain transactions
With reference now to Figure 38, there is shown an example system for
publishing of a
"witnessed blockchain transaction" in accordance with a non-limiting
embodiment. Specifically, the
system includes two peers, Alice and Bob, as well as a witness William.
William represents a
computing device that could be accessible to Alice and Bob over a network such
as the internet. It is
assumed that Alice and William establish and maintain a common Entropy History
Table. In a non-
limiting example, Alice and William agree on a common entropy record in the
Entropy History
Table. The common entropy record may be identified by, for example, agreeing
on a time indicator.
For the purposes of the present example, the common entropy record is
associated with the shared
entropy state Et which, according to the Entropy History Table, has a
corresponding hash Ht. It is
further assumed that Alice maintains a public-private key pair, and that
Alice's public key PU can be
used by Bob and William as an identifier of Alice.
Steps involved in Alice generating a witnessed blockchain transaction WB from
an original
transaction payload/message B are now described with reference to the diagram
in Fig. 39.
Alice initiates a token generation protocol TokenGen(*) to generate a token T
from the
current entropy state Et, that is, T = TokenGen(Et).
Alice generates a signature SIG. This is done by encrypting a combination
(e.g.,
concatenation) of the transaction payload/message B and the token T with
Alice's private key.
Alice creates a witnessed blockchain transaction WB with the transaction ID,
the transaction
payload/message B and the signature SIG.
Meanwhile, Alice informs William that a token was generated, and supplies
William with
Alice's ID, the signature SIG (which was generated from the transaction payload/message B and the
token T), as well as
the hash Ht corresponding to the current entropy state Et (obtained from the
Entropy History Table).
Based on Alice's ID, William obtains Alice's public key PU. Based on Alice's
public key
PU, William retrieves the corresponding hash, and verifies that the retrieved
hash matches the hash
received from Alice, namely Ht. If not, then this would mean that Alice is not
using the same Entropy
History Table as William, which could signal a security breach.
Assuming that there is a match between the hash retrieved from the Entropy
History Table
and the hash obtained from Alice, William retrieves the corresponding entropy state Et and initiates the same token generation protocol TokenGen(*) to generate the same token T as Alice from the same entropy state Et, that is, T = TokenGen(Et).
William updates a "token ledger" 3975 with the token T and the signature SIG,
in association
with Alice's public key PU. The token ledger 3975 may be implemented as a data
structure, such as a
table, linked list, etc., and may be stored in a database physically residing in a computer memory. The token ledger 3975 may also log other details such as transaction
ID or session ID.
William informs Alice that the token ledger has been updated.
At this point, Alice publishes the witnessed blockchain transaction WB to
recipient Bob. It is
recalled that the witnessed blockchain transaction WB includes the transaction
ID, the transaction
payload/message B and the signature SIG. Steps involved in Bob verifying the
integrity of the
witnessed blockchain transaction WB are similar to those already described
with reference to the
diagram in Fig. 37.
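A hedged Python sketch of Alice's side of this flow is given below; the HMAC stands in for the private-key encryption step described above, and all key material, values and function names are illustrative only.

    import hashlib
    import hmac

    def token_gen(entropy_state: bytes) -> bytes:
        # Same illustrative TokenGen(*) stand-in used by both Alice and William.
        return hashlib.sha256(b"token" + entropy_state).digest()

    def make_witnessed_transaction(tx_id, payload: bytes, entropy_state: bytes,
                                   signing_key: bytes) -> dict:
        # Alice's side of the flow: T = TokenGen(Et); SIG is computed over the
        # concatenation of the payload B and the token T (an HMAC is used here
        # purely as a stand-in for encryption with Alice's private key); the
        # witnessed blockchain transaction WB carries the transaction ID, the
        # payload B and the signature SIG, but not the token itself.
        token = token_gen(entropy_state)
        sig = hmac.new(signing_key, payload + token, hashlib.sha256).hexdigest()
        return {"tx_id": tx_id, "payload": payload, "signature": sig}

    Et = b"shared-entropy-state"            # agreed via the Entropy History Table
    wb = make_witnessed_transaction("tx-001", b"pay 5 units to Bob", Et,
                                    signing_key=b"alice-secret")
    print(wb)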
Security enhancements
It has been shown that the N-bit output bit segments produced through
application of an
encoding mapping as described herein exist in a space that is so vast that,
from a practical
perspective, encryption is no longer needed to guarantee security, even for
relatively small values of
N. Nevertheless, the security provided by the use of an encoding mapping as
described herein can be
further enhanced. For example, one can use more than one permutation matrix in
series, or avoid the
use of common encoding lengths such as 8 bits (with ASCII) or 16 bits (with
Unicode), in order to
reduce the presence of a statistically significant fingerprint in the output
bit stream that could help
reverse engineer the input bit stream. Also, one can prepend bits into the input bit stream to alter the "frame" into which the input bits of the original bit stream are placed.
Other security features help reduce the presence of a statistically significant fingerprint in the
output bit stream. Examples are now described in greater detail.
First security enhancement: multi-level quantropization

With reference to Fig. 40, there is shown a multi-level system with two system
sizes, N1 and
N2. At the sender-side, an input bit stream (e.g., provided in 8-bit or 16-bit
characters) is first
disassembled by a first disassembler 4010 into segments of size N1. These
segments are then
quantropized by a first quantropizer 4020 using a first permutation matrix P1
of size N1. The
resulting N1-bit segments are reassembled by a reassembler 4030, which
produces segments of size
N2. The reassembler 4030 may include a packager, which creates 8-bit or 16-bit
characters, and then
a disassembler, which breaks up the 8-bit or 16-bit characters into N2-bit
segments. These N2-bit
segments are then quantropized by a second quantropizer 4040 using a second
permutation matrix P2
of size N2. The resulting N2-bit segments are then packaged by a packager 4050
into 8-bit or 16-bit
characters before being transmitted to the recipient. The recipient side
essentially conducts the same
operations but in reverse order, starting with N2-bit dequantropization (using the transpose of permutation matrix P2) and then N1-bit dequantropization (using the transpose of permutation matrix P1).
Thus, it has been shown how N1-bit input segments of an input message are converted into corresponding N1-bit output segments using a 2^N1-by-2^N1 one-to-one mapping stored in a non-transitory storage medium; then, the N1-bit output segments are reassembled into N2-bit input segments, which are then converted into corresponding N2-bit output segments using a 2^N2-by-2^N2 one-to-one mapping. The 2^N1-by-2^N1 one-to-one mapping and the 2^N2-by-2^N2 one-to-one mapping are stored in a non-transitory storage medium. N1 and N2 can be arbitrary integers greater than 1, and N1 is different from N2.
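Solely to illustrate the two-level arrangement, the following Python sketch applies two random permutations standing in for P1 and P2, with N1 = 4 and N2 = 6; the packaging into 8-bit or 16-bit characters between the levels is omitted and all helper names are hypothetical.

    import random

    def to_bits(data: bytes):
        return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

    def split_indexes(bits, n):
        # Pad with zeros to a multiple of n, then read each n-bit group as an index.
        bits = bits + [0] * (-len(bits) % n)
        return [int("".join(map(str, bits[i:i + n])), 2) for i in range(0, len(bits), n)]

    def join_bits(indexes, n):
        return [(idx >> (n - 1 - i)) & 1 for idx in indexes for i in range(n)]

    def quantropize(bits, perm, n):
        # Map each n-bit input index to an n-bit output index through the permutation.
        return join_bits([perm[idx] for idx in split_indexes(bits, n)], n)

    def invert(perm):
        inv = [0] * len(perm)
        for i, p in enumerate(perm):
            inv[p] = i
        return inv

    N1, N2 = 4, 6
    rng = random.Random(0)
    P1 = list(range(2 ** N1)); rng.shuffle(P1)   # stands in for permutation matrix P1
    P2 = list(range(2 ** N2)); rng.shuffle(P2)   # stands in for permutation matrix P2

    message = to_bits(b"hello")
    level1 = quantropize(message, P1, N1)        # first-level (N1-bit) quantropization
    level2 = quantropize(level1, P2, N2)         # reassembled into N2-bit segments, then quantropized

    # Recipient: invert both levels in reverse order using the inverse permutations.
    recovered = quantropize(quantropize(level2, invert(P2), N2), invert(P1), N1)
    assert recovered[:len(message)] == message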
Second security enhancement: bit position shuffling
With reference to Fig. 41, there is shown a system with a bit shuffler and a
quantropizer.
Specifically, at the sender-side, an input bit stream (e.g., provided in 8-bit
or 16-bit characters) is first
disassembled by a first disassembler 4110 into segments of size N1. These segments are then bit-shuffled by a bit shuffler 4120 of size N1. The bit shuffler 4120 maps input bit positions to output bit positions. As such, the resulting N1-bit segment corresponding to a particular N1-bit input segment will have the same number of ones and the same number of zeros as the particular N1-bit input segment, only at different positions. When N1 is not equal to 8 or 16, this will greatly reduce any byte-level statistical fingerprint carried over from the input bit stream, as consecutive instances of the same bit pattern in the input will not produce the same bit pattern in the output. The N1-bit segments are then
disassembled by a disassembler 4130, which produces segments of size N2. These
N2-bit segments
are then quantropized by a quantropizer 4140 using a permutation matrix P of
size N2. The resulting
N2-bit segments are then packaged by a packager 4150 into 8-bit or 16-bit
characters before being
transmitted to the recipient. The recipient side essentially conducts the same
operations but in reverse
order, starting with N2-bit dequantropization (using the transpose of permutation matrix P)
and then N1-bit
deshuffling.
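A minimal Python sketch of the bit shuffler stage follows, with N1 = 12 and a randomly chosen position map; the downstream quantropizer and the packaging into 8-bit or 16-bit characters shown in Fig. 41 are omitted, and the function names are illustrative.

    import random

    def shuffle_bits(segment, positions):
        # positions[i] is the output position of input bit i, so the counts of
        # ones and zeros are preserved while their positions change.
        out = [0] * len(segment)
        for i, bit in enumerate(segment):
            out[positions[i]] = bit
        return out

    def deshuffle_bits(segment, positions):
        # Inverse operation used at the recipient side.
        return [segment[positions[i]] for i in range(len(segment))]

    N1 = 12
    rng = random.Random(1)
    positions = list(range(N1)); rng.shuffle(positions)

    segment = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
    shuffled = shuffle_bits(segment, positions)
    assert sorted(shuffled) == sorted(segment)        # same number of ones and zeros
    assert deshuffle_bits(shuffled, positions) == segment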
Third security enhancement: block quantropization
With reference to Fig. 42, there is shown a block quantropization system with
a block of four
quantropizers 4200, 4202, 4204, 4206. At the sender-side, an input bit stream
(e.g., provided in 8-bit
or 16-bit characters) is first disassembled by a disassembler 4210 into
batches of four segments of
size N. The first such segment in each batch is sent to the first quantropizer
4200 for quantropization
using a first permutation matrix P1 of size N. The second segment in each
batch is sent to the second
quantropizer 4202 for quantropization using a second permutation matrix P2 of
size N. The third
segment in each batch is sent to the third quantropizer 4204 for
quantropization using a third
permutation matrix P3 of size N. Finally, the fourth segment in each batch is
sent to the fourth
quantropizer 4206 for quantropization using a fourth permutation matrix P4 of
size N. The resulting
batch of four N-bit segments is reassembled by a packager 4020, which
transforms the sequence of
four sets of N bits into 8-bit or 16-bit characters for transmission to the
recipient. The recipient side
essentially conducts the same operations but in reverse order. It should be
understood that the
number of quantropizers in the block is not limited to four, and could be a
greater or smaller number.
Fourth security enhancement: dynamic spreading
For any communication in which the message contains structure (e.g. English
words,
language), one may seek to remove such structure prior to quantropization for
at least two security-
related reasons:
1. Certain messages (e.g., SMS or IM) are known to start with certain
text strings (such as "Hi",
"Hey" or "Hello"). Such predictability of a vast majority of messages
increases the chances
that the permutation matrix can be reverse-engineered (this is the "known
plaintext attack");
2. Certain messages do not take advantage of all the 2^N unique input ports of the permutation
switch. This is indeed true for an 8-bit system (N=8) and in the case of
English text, which
uses 26 letters (52 including capitals, plus a few punctuation marks)
according to a
highly skewed probability distribution. As such, where there is an uneven
usage of the 2^N possible bit arrangements when encoding the message, the difficulty of cracking a 2^N x 2^N
permutation matrix reduces to the difficulty of cracking a much smaller
permutation matrix.
Accordingly, dynamic spreading of the input data may be used.
With reference to Figs. 32 and 43, there is shown a quantropization system
that includes a
disassembler 4310, a dynamic spreader 4320 (also referred to as a "scrambler")
and a quantropizer
4330. The dynamic spreader 4320 processes L-bit input segments in batches of 2^L such segments. The dynamic spreader 4320 keeps track of the position (i.e., 0 to 2^L−1) of each L-bit input segment 4340 within each batch of 2^L such input segments. The dynamic spreader 4320 also maintains 2^L mutually exclusive templates 4350 for bit-toggling, one for each position (i.e., 0 to 2^L−1). The 2^L mutually exclusive templates 4350 could be, for example, the 2^L possible combinations of zeros and ones possible with L bits. The mutually exclusive templates 4350 could be stored in a memory of the dynamic spreader 4320. The dynamic spreader 4320 then combines the 0th L-bit input segment with the 0th template to yield the 0th L-bit "intermediate" segment. Next, the dynamic spreader 4320 combines the 1st L-bit input segment with the 1st template to yield the 1st L-bit intermediate segment, and so on. By "combining", this could include a variety of invertible functions, such as exclusive or (XOR).
As a result, the dynamic spreader 4320 will produce an ordered sequence of 2^L L-bit intermediate segments 4360, which are fed, in sequence, to the quantropizer 4330 via the disassembler 4310. The disassembler 4310 converts the sequence of 2^L L-bit intermediate segments 4360 into a sequence of N-bit segments 4340 ready for quantropization.
The quantropizer 4330 conducts a quantropization process using an encoding
mapping (e.g.,
characterized by a permutation matrix P). In some embodiments, the system
size, N, of the
quantropizer 4330 may be the same as the system size, L, of the dynamic
spreader 4320. However, in
general, the dynamic spreader 4320 and the quantropizer 4330 may operate with
different system
sizes. At the receiving end, a dequantropization process is conducted by a
dequantropizer and then a
dynamic despreading operation is performed by a dynamic despreader. Depending
on the nature of
the combining performed by the dynamic spreader 4320, the dynamic despreader may perform
exactly the same operation (e.g., XOR) as the dynamic spreader 4320, or a
different inverse
operation.
In some embodiments, the templates 4350 used by the dynamic spreader 4320 may
be fixed.
In other embodiments, the templates 4350 used by the dynamic spreader 4320 may
be dynamic,
which means that the templates 4350 may be adjusted over time. In an example,
a given template
may be XORed with a "salt", and this salt also has to be applied inversely at
the dynamic despreader.
It will be seen that use of the dynamic spreader 4320 causes a set of
identical input strings to
map to different bit segments at the input of the quantropizer 4330, leading
to different output
segments being transmitted to the recipient, which therefore makes it more
difficult for someone
intercepting the message to infer the encoding mapping being performed by the
quantropizer 4330.
As such, spreading an original message amounts to mapping different
instantiations of same-valued
input segments in the original message into different-valued output segments in the intermediate original message.
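A minimal Python sketch of the dynamic spreader follows, taking L = 4, XOR as the combining function, and the 2^L possible L-bit values as the templates; these choices are illustrative only.

    def spread(segments, templates):
        # XOR each L-bit input segment with the template assigned to its
        # position within the batch of 2**L segments.
        return [seg ^ templates[i % len(templates)] for i, seg in enumerate(segments)]

    # XOR is its own inverse, so the dynamic despreader performs the same operation.
    despread = spread

    L = 4
    templates = list(range(2 ** L))        # one L-bit template per batch position

    segments = [0b1010] * 8                # identical input segments, e.g. repeated text
    intermediate = spread(segments, templates)
    print(intermediate)                    # identical inputs now differ before quantropization
    assert despread(intermediate, templates) == segments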
In an alternative embodiment, a quantropization system may include a quantum
entropy
processor with a disassembler, a quantropizer and a packager. Also provided is
a dynamic spreader,
which in this embodiment could be a stream cipher (such as RC4, for example).
The dynamic
spreader and the quantropizer are initialized with the appropriate entropy
state Et, which allows the
quantropizer to generate the appropriate permutation matrix P and serves as a
seed for the dynamic
spreader. The entropy state Et can be obtained by consulting an Entropy
History Table.
Conclusion
Various operational embodiments are provided herein, including embodiments in
which one
or more of the operations described may correspond to computer readable
instructions stored on one
or more computer readable media, which if executed by a computing device, will
cause the
computing device to perform the operations described. The order in which some
or all of the
operations are described should not be construed to imply that these
operations are necessarily
order-dependent. Alternative ordering will be appreciated by one skilled in
the art having the benefit
of this description. Further, it will be understood that not all operations
are necessarily present in
each embodiment provided herein.
Various aspects and embodiments can also be described by the following
additional clauses:
1. A communication system, comprising:
- a first apparatus comprising a first processing entity coupled to a first
memory; and
- a second apparatus comprising a second processing entity coupled to a
second memory;
- wherein the first and second apparatuses communicate over a communication
channel;
- wherein the first memory stores first computer-readable instructions and
a first mapping
between 2^N possible input indexes and 2^N possible output indexes, N being
an integer
greater than one;
- wherein the second memory stores second computer-readable instructions
and a second
mapping between 2^N possible input indexes and 2^N possible output indexes,
the second
mapping being derivable from the first mapping;
- wherein the first processing entity is configured for:
- obtaining a first bit stream;
- subdividing the first bit stream into a plurality of N-bit first input
segments;
- for each of the N-bit first input segments, determining a first input
index as a value
represented by the N bits of a particular one of the N-bit first input
segments,
determining a first output index based on the first input index and the first
mapping,
and setting bits of a corresponding N-bit first output segment so as to
represent the
value of the first output index; and
- causing a second bit stream formed using each corresponding N-bit first
output
segment to be transmitted over the communication channel;
- wherein the second processing entity is configured for:
- receiving the second bit stream;
- subdividing the second bit stream into a plurality of N-bit second input
segments;
- for each of the N-bit second input segments, determining a second input
index as a
value represented by the N bits of a particular one of the N-bit second input
segments, determining a second output index based on the second input index
and
the second mapping, and setting bits of a corresponding N-bit second output
segment so as to represent the value of the second output index so as to
recover a
corresponding one of the N-bit input segments; and
- outputting a third bit stream formed using each corresponding N-bit
second output
segment.
2. The communication system defined in clause 1, wherein the first processing
entity is further
configured to send the second mapping to the second apparatus for storage in
the second
memory.
3. The communication system defined in clause 2, wherein the first processing
entity is further
configured to derive the second mapping from the first mapping.
4. The communication system defined in clause 3, wherein when the first
mapping is represented as
a first matrix and the second mapping is represented as a second matrix, the
first and second
matrices being transposes of one another.
5. The communication system defined in clause 1, wherein the first processing
entity is further
configured to send the transmitted mapping to the second apparatus, the
transmitted mapping
being the first mapping or the second mapping.
6. The communication system defined in clause 4, wherein the first processing
entity is further
configured to send a flag to the second apparatus, the flag indicating whether
the transmitted
mapping is the first mapping or a transpose of the first mapping.
7. The communication system defined in clause 5, wherein the second processing
entity is further
configured to process the flag and to store in the second memory the
transmitted mapping when
the flag indicates that the transmitted mapping is the transpose of the first
mapping.

8. The communication system defined in clause 5, wherein the second processing
entity is further
configured to process the flag and to store in the second memory a transpose
of the transmitted
mapping when the flag indicates that the transmitted mapping is the first
mapping.
9. The communication system defined in clause 5, wherein the first processing
entity is further
configured for encrypting data indicative of the transmitted mapping prior to
sending it to the
second apparatus.
10. The communication system defined in clause 9, wherein to encrypt the data
indicative of the
transmitted mapping prior to sending it to the second apparatus, the first
processing entity is
configured for using a private key of a private key / public key pair to
encrypt the data indicative
of the transmitted mapping, the private key being uniquely known to the first
apparatus, the
public key being made available to the second apparatus.
11. The communication system defined in clause 5, wherein the first and second
processing entities
are configured for carrying out a handshaking protocol with one another to
securely transmit the
transmitted mapping from the first apparatus to the second apparatus.
12. The communication system defined in clause 1, wherein the first mapping is
stored in the first
memory by a permutation matrix, wherein determining the first output index
based on the first
input index and the first mapping comprises creating a 2^N-length input vector
with all zeroes
except for a "1" in a single position corresponding to the first input index,
multiplying the
permutation matrix and the input vector to obtain a 2^N-length output vector
having all zeroes
except for a "1" in a single position, wherein the first output index is set
to equal the position in
which the "1- appears in the output vector.
13. The communication system defined in clause 1, wherein the first and second
mappings are each
one-to-one mappings.
14. The communication system defined in clause 1, wherein for plural
successive N-bit first input
segments, the first processing entity is configured for:
- determining a first input index for each of the first N-bit input
segments;
- creating a succession of arrays of size 2N, each array corresponding to
one of the N-bit
first input segments and having all zeroes except for a "1" in a single
position
corresponding to the first input index;
- applying each of the arrays to inputs of a 2^N-input, 2^N-output switch
fabric that
implements the mapping, thereby to obtain a succession of output arrays at the
outputs of
the switch fabric, each of the arrays having all zeroes except for a "1" in a
single
position;
- setting the output index for each of the N-bit output segments to
correspond to the
position in which the "1" appears in a corresponding one of the output arrays.
15. The communication system defined in clause 1, wherein N is at least as
great as 14.
16. The communication system defined in clause 1, the first processing entity
being further
configured for encrypting the first bit stream with an encryption key before said
subdividing, the
second processing entity being further configured for decrypting the third bit
stream with a
decryption key corresponding to the encryption key, thereby to recover the
first bit stream.
17. The communication system defined in clause 1, the first processing entity
being further
configured for encrypting the second bit stream with an encryption key before said
causing, the
second processing entity being further configured for decrypting the received
bit stream with a
decryption key corresponding to the encryption key prior to said subdividing.
18. The communication system defined in clause 1, wherein the first and second
apparatuses are
mobile communication devices.
19. A data protection method comprising:
- using electricity to store, in a computer memory, a mapping between 2^N possible input indexes and 2^N possible output indexes;
- using electricity to obtain an input bit stream;
- using electricity to subdivide the input bit stream into a plurality of N-
bit input segments
stored in the computer memory, N being an integer greater than one;
- using electricity to produce a plurality of N-bit output segments
corresponding to
respective ones of the N-bit input segments and to store the N-bit output
segments in the
computer memory, wherein a particular one of the N-bit output segments is
produced
from a particular one of the N-bit input segments by:
- determining an input index as a value represented by the N bits of the
particular one
of the N-bit input segments;
- determining an output index based on the input index and the mapping;
- setting the bits of the particular one of the N-bit output segments so as
to represent
the value of the output index;
- using electricity to output an output bit stream formed using the N-bit
output segments;
- using electricity to convert the output bit stream into an output signal;
and
- using electricity to release the output signal onto a physical medium.
20. A method of implementing a secure network amongst a plurality of
communication devices,
comprising:
- determining a mapping between 2^N possible input indexes and 2^N possible output indexes, where N > 1;
- securely embedding the mapping within a memory of each of the
communication
devices;
- configuring one or more processors in each of the communication devices
to execute a
data encoding method when transmitting data to other ones of the communication devices; and
- configuring one or more processors in each of the communication devices
to execute a
data decoding method when processing data received from other ones of the
communication devices;
- the data encoding and decoding methods comprising:
- obtaining an input bit stream;
- subdividing the input bit stream into a plurality of N-bit input
segments;
- producing a plurality of N-bit output segments corresponding to
respective ones of
the N-bit input segments, wherein a particular one of the N-bit output
segments is
produced from a particular one of the N-bit input segments by:
- determining an input index as a value represented by the N bits of the
particular
one of the N-bit input segments;
- determining an output index based on the input index and the mapping; and
- setting the bits of the particular one of the N-bit output segments so as
to
represent the value of the output index; and
- outputting an output bit stream formed using the N-bit output segments.
Finally, although the disclosure has been shown and described with respect to
one or more
implementations, equivalent alterations and modifications will occur to others
skilled in the art based
upon a reading and understanding of this specification and the annexed
drawings. The disclosure
includes all such modifications and alterations and is limited only by the
scope of the following
claims. In particular regard to the various functions performed by the above
described components
(e.g., elements, resources, etc.), the terms used to describe such components
are intended to
correspond, unless otherwise indicated, to any component which performs the
specified function of
the described component (e.g., that is functionally equivalent), even though
not structurally
equivalent to the disclosed structure which performs the function in the
herein illustrated exemplary
implementations of the disclosure. In addition, while a particular feature of
the disclosure may have
been disclosed with respect to only one of several implementations, such
feature may be combined
with one or more other features of the other implementations as may be desired
and advantageous for
any given or particular application.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, as well as the definitions for Patent, Administrative Status, Maintenance Fee and Payment History, should be consulted.

Administrative Status

Title Date
Forecasted Issue Date 2021-06-08
(86) PCT Filing Date 2018-10-23
(87) PCT Publication Date 2019-05-02
(85) National Entry 2020-02-14
Examination Requested 2020-02-14
(45) Issued 2021-06-08

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $210.51 was received on 2023-09-25


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2024-10-23 $100.00
Next Payment if standard fee 2024-10-23 $277.00

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee 2020-02-14 $400.00 2020-02-14
Request for Examination 2023-10-23 $200.00 2020-02-14
Registration of a document - section 124 2020-07-22 $100.00 2020-07-22
Final Fee 2021-07-15 $416.16 2021-04-13
Back Payment of Fees 2021-10-08 $100.00 2021-10-08
Unpaid Maintenance Fee before Grant, Late Fee and next Maintenance Fee 2021-10-25 $350.00 2022-02-15
Late Fee for failure to pay new-style Patent Maintenance Fee 2022-02-15 $150.00 2022-02-15
Maintenance Fee - Patent - New Act 4 2022-10-24 $100.00 2022-10-24
Maintenance Fee - Patent - New Act 5 2023-10-23 $210.51 2023-09-25
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
QUANTROPI INC.
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents


Document Description  Date (yyyy-mm-dd)  Number of pages  Size of Image (KB)
Abstract 2020-02-14 2 77
Claims 2020-02-14 10 394
Drawings 2020-02-14 46 510
Description 2020-02-14 59 3,403
Representative Drawing 2020-02-14 1 10
International Search Report 2020-02-14 3 161
National Entry Request 2020-02-14 3 101
Prosecution/Amendment 2020-02-14 15 608
Drawings 2020-02-15 46 567
Description 2020-02-15 62 3,572
Claims 2020-02-15 5 200
Examiner Requisition 2020-03-05 4 191
Modification to the Applicant-Inventor / Completion Fee - PCT 2020-03-03 4 104
Office Letter 2020-03-13 1 167
Cover Page 2020-04-09 2 45
Amendment 2020-07-22 24 1,109
Claims 2020-07-22 7 288
Description 2020-07-22 62 3,568
Amendment 2020-09-01 14 546
Claims 2020-09-01 10 395
Withdrawal from Allowance 2020-09-28 1 47
Office Letter 2020-09-28 1 194
Examiner Requisition 2020-12-01 3 150
Amendment 2020-12-10 7 215
Claims 2020-12-10 10 394
Examiner Requisition 2021-02-05 3 153
Amendment 2021-02-12 15 695
Description 2021-02-12 62 3,568
Drawings 2021-02-12 46 570
PCT Correspondence 2021-04-12 4 125
Final Fee 2021-04-13 5 124
Office Letter 2021-04-27 2 184
Representative Drawing 2021-05-14 1 5
Cover Page 2021-05-14 1 42
Electronic Grant Certificate 2021-06-08 1 2,527
Office Letter 2021-11-05 2 185
Maintenance Fee Payment 2022-02-15 1 33
Maintenance Fee Correspondence 2022-04-06 4 207