Patent 3075627 Summary

Third-party information liability

Some of the information on this Web page has been provided by external sources. The Government of Canada is not responsible for the accuracy, reliability or currency of the information supplied by external sources. Users wishing to rely upon this information should consult directly with the source of the information. Content provided by external sources is not subject to official languages, privacy and accessibility requirements.

Claims and Abstract availability

Any discrepancies in the text and image of the Claims and Abstract are due to differing posting times. Text of the Claims and Abstract are posted:

  • At the time the application is open to public inspection;
  • At the time of issue of the patent (grant).
(12) Patent Application: (11) CA 3075627
(54) English Title: INTEGRATED DOCUMENT EDITOR
(54) French Title: EDITEUR DE DOCUMENT INTEGRE
Status: Examination Requested
Bibliographic Data
(51) International Patent Classification (IPC):
  • G06F 3/0481 (2022.01)
  • G06F 3/0484 (2022.01)
  • G06F 40/166 (2020.01)
  • G06F 3/04817 (2022.01)
  • G06F 3/04883 (2022.01)
(72) Inventors :
  • ZEEVI, ELI (United States of America)
(73) Owners :
  • ZEEVI, ELI (United States of America)
(71) Applicants :
  • ZEEVI, ELI (United States of America)
(74) Agent: ROWAND LLP
(74) Associate agent:
(45) Issued:
(86) PCT Filing Date: 2018-09-18
(87) Open to Public Inspection: 2019-03-21
Examination requested: 2023-12-29
Availability of licence: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): Yes
(86) PCT Filing Number: PCT/US2018/051400
(87) International Publication Number: WO2019/055952
(85) National Entry: 2020-03-11

(30) Application Priority Data:
Application No. Country/Territory Date
62/559,269 United States of America 2017-09-15

Abstracts

English Abstract

A computing device includes a memory and a touch screen including a display medium for displaying a representation of at least one graphic object stored in the memory, the graphic object having at least one parameter stored in the memory, and a surface for determining an indication of a change to the at least one parameter. In response to indicating the change, the computing device is configured to automatically change the at least one parameter in the memory and automatically change the representation of the one or more graphic objects in the memory, and the display medium is configured to display the changed representation of one or more graphic objects with the changed parameter.


French Abstract

Un dispositif informatique comprend une mémoire et un écran tactile pourvu d'un support d'affichage conçu pour afficher une représentation d'au moins un objet graphique stocké dans la mémoire. L'objet graphique contient au moins un paramètre stocké dans la mémoire et une surface de détermination d'une indication d'une modification dudit au moins un paramètre. Le dispositif informatique est configuré pour, en réponse à l'indication de la modification, modifier automatiquement ledit au moins un paramètre dans la mémoire et modifier automatiquement la représentation desdits un ou plusieurs objets graphiques dans la mémoire. Le support d'affichage est configuré pour afficher la représentation desdits un ou plusieurs objets graphiques modifiée avec le paramètre modifié.

Claims

Note: Claims are shown in the official language in which they were submitted.



CLAIMS

1. A computing device, comprising:
a memory, for storing vector graphics, the vector graphics comprising a
plurality of
graphic objects, each graphic object having at least one location stored in
the
memory and one or more parameters, each parameter being changeable by one or
more functions, and
a touch screen, including:
a display medium for displaying a representation of the vector graphics,
and
a surface for detecting an indication of a change in at least one of the
one or more parameters of at least one of the plurality of graphic objects;
responsive to detecting the indication, the computing device is configured to
automatically change:
the at least one parameter,
a geometrical feature within the vector graphics based on the changed
at least one parameter, and
the representation of the vector graphics based on the changed
geometrical feature;
wherein the display medium is configured to display the changed
representation of the vector graphics.
2. The computing device of claim 1, wherein the representation of the vector
graphics on the display medium is a two-dimensional vector image.
3. The computing device of claim 1, wherein the representation of the vector
graphics on the display medium is a three-dimensional vector image.
4. The computing device of claim 1, wherein the indication comprises one or
more
gestures.



5. The computing device of claim 1, wherein the indication of a change
comprises an
indication of a command to change the at least one parameter.
6. The computing device of claim 5, wherein the indication of a command
comprises
a drawing of a letter or a selection of an icon.
7. The computing device of claim 5, further configured to automatically
identify a
portion of the representation of the at least one graphic object having the at
least one
parameter to be changed.
8. The computing device of claim 7, further configured to cause the display
medium
to zoom in the portion of the representation of the at least one graphic
object having
the at least one parameter to be changed while the at least one parameter is
being
changed and to zoom out after the at least one parameter has been changed.
9. The computing device of claim 7, wherein the indication of a change
comprises a
gesture for indicating the portion of the representation.
10. The computing device of claim 7, wherein one or more functions are
configured
to automatically identify a touching gesture on the display medium as an
increase or
as a decrease in a value of at least one of the one or more parameters to be
changed.
11. The computing device of claim 7 wherein one or more functions are
configured to
automatically identify a tapping gesture on the display medium as an increase
or as
a decrease in a value of at least one of the one or more parameters to be
changed.
12.-13. (Cancelled).
14. The computing device of claim 5, wherein the indication of command
comprises
an indication of a command to change a length of a line between two locations
of the
at least one graphic object, and wherein a change to apply to the length is
automatically identified based on detection of a change in position starting
at or
proximate one of the locations as represented on the display medium.
15. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to change an angle of a line between two
locations of the at least one graphic object, and wherein a change to apply to
the
angle is automatically identified based on detection of a change in position
starting at
or proximate one of the locations as represented on the display medium.
16. (Cancelled).
17. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to apply a radius to a line or to change
a
radius of an arc between two locations of the at least one graphic object, and
wherein the radius to apply to the line or the change in the radius to apply
to the arc
is automatically identified based on detection of a change in position
starting at or
proximate a position within the line or the arc as represented on the display
medium.
18. (Cancelled).
19. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to make a line of the at least one
graphic
object parallel to another line of the at least one graphic object.
20. (Cancelled).
21. The computing device of claim 5, wherein the indication of a command to
change
the at least one parameter comprises an indication of a command to add a
fillet to an
inside surface of a corner of the at least one graphic object or an arc to an
outside
surface of a corner of the at least one graphic object, and wherein a change
to apply
to a radius of the arc or of the fillet is automatically identified based on
detection of a
change in position starting at or proximate a position within the arc or the
fillet as
represented on the display medium.



22. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to add a chamfer to the at least one
graphic
object, and wherein a change to apply to at least one of a width, a height and
an
angle of the chamfer is automatically identified based on detection of a
change in
position starting at or proximate at least one location of the chamfer as
represented
on the display medium.
23. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to trim a portion of the at least one
graphic
object.
24. (Cancelled).
25. The computing device of claim 5, wherein the indication of a command
comprises an indication of a command to unsnap an intersection of two parts of
the
at least one graphic object.
26. A method, comprising:
displaying, on a display medium of a computing device, a representation of
vector graphics, the vector graphics comprising a plurality of graphic
objects, each
graphic object having at least one location stored in the memory and one or
more
parameters, each parameter being changeable by one or more functions;
detecting an indication of a change in at least one of the one or more
parameters of at least one of the plurality of graphic objects; wherein in
response to
detecting the indication:
automatically changing the at least one parameter;
automatically changing a geometrical feature within the vector graphics based
on the changed at least one parameter;
automatically changing the representation of the vector graphics based on the
changed geometrical feature; and
displaying the changed representation of the vector graphics on the display
medium.



27. The method of claim 26, wherein the representation of the vector graphics
on the
display medium is a two-dimensional vector image.
28. The method of claim 26, wherein the representation of the vector graphics
on the
display medium is a three-dimensional vector image.
29. The method of claim 26, wherein said detecting a change in the at least
one
parameter comprises detecting one or more gestures.
30. The method of claim 26, wherein the indication comprises an indication of
a
command to change the at least one parameter.
31. The method of claim 30, wherein the indication of a command comprises a
drawing of a letter or a selection of an icon.
32. The method of claim 30, further comprises automatically identifying a
portion of
the representation of the at least one graphic object having the at least one
parameter to be changed.
33. The method of claim 32, further comprises zooming in the displayed
representation of the portion of the representation of the at least one
graphic object
having the at least one parameter to be changed while the at least one
parameter is
being changed and zooming out after the at least one parameter has been
changed.
34. The method of claim 32, wherein the indication comprises a gesture for
indicating
the portion of the representation.
35. The method of claim 32, wherein the indication comprises a touching
gesture to
indicate an increase or a decrease in a value of at least one of the one or
more
parameters to be changed.



36. The method of claim 32, wherein the indication comprises a tapping gesture
to
indicate an increase or a decrease in a value of at least one of the one or
more
parameters to be changed.
37.-38. (Cancelled).
39. The method of claim 30, wherein the indication of a command comprises an
indication of a command to change a length of a line between two locations of
the at
least one graphic object, and wherein a change to apply to the length is
automatically identified based on detecting a change in position starting at
or
proximate one of the locations as represented on the display medium.
40. The method of claim 30, wherein the indication of a command comprises an
indication of a command to change an angle of a line between two locations of
the at
least one graphic object, and wherein a change to apply to the angle is
automatically
identified based on detecting a change in position starting at or proximate
one of the
locations as represented on the display medium.
41. (Cancelled).
42. The method of claim 30, wherein the indication of a
command comprises an indication of a command to apply a radius to a line or to
change a radius of an arc between two locations of the at least one graphic
object,
and wherein the radius to apply to the line or the change to apply to the
radius of the
arc is automatically identified based on detecting a change in position
starting at or
proximate a position within the line or the arc as represented on the display
medium.
43. (Cancelled).
44. The method of claim 30, wherein the indication of a command comprises an
indication of a command to make a line of the at least one graphic object
parallel to
another line of the at least one graphic object.



45. (Cancelled).
46. The method of claim 30, wherein the indication of a command comprises an
indication of a command to add a fillet to an inside surface of a corner of
the at least
one graphic object, or an arc to an outside surface of a corner of the at
least one
graphic object, and wherein a change to apply to a radius of the arc or of the
fillet is
automatically identified based on detecting a change in position starting at
or
proximate a position within the arc or the fillet as represented on the
display medium.
47. The method of claim 30, wherein the indication of a command comprises an
indication of a command to add a chamfer to the at least one graphic object,
and
wherein a change to apply to at least one of a width, a height and an angle of
the
chamfer is automatically identified based on detecting a change in position
starting at
or proximate at least one location of the chamfer as represented on the
display
medium.
48. The method of claim 30, wherein the indication of a command comprises an
indication of a command to trim a portion of the at least one graphic object.
49. (Cancelled)
50. The method of claim 30, wherein the indication of a command comprises an
indication of a command to unsnap an intersection of two parts of the at least
one
graphic object.
51. A computing device, comprising:
a memory,
a touch screen, including:
a display medium for displaying a representation of one or more text
characters or graphic objects stored in the memory, and
a surface for detecting user input, and
one or more processing units, configured to invoke a command mode and a
data entry mode,



said command mode is invoked when a command associated with at least
one text character or graphic object stored at one or more data locations in
the
memory is identified, and
said data entry mode is invoked when a command to insert or paste one or
more text characters or graphic objects at one or more insertion locations in
the
memory is identified;
responsive to detecting a gesture on the surface to indicate at least one of
said one or more data locations:
said computing device is configured to apply said command to said at least
one text character or graphic object or to change at least one parameter of
said at
least one graphic object,
wherein said data entry mode is disabled in said command mode to allow for
unconfined input of said gesture within the user input.
52. The computing device of claim 51, wherein to apply the command to the at
least
one graphic object in said command mode, comprises to at least one of select,
copy,
delete, move or change an attribute of, said stored at least one graphic
object,
wherein said attribute comprises one of color, shade, size, style or line
thickness,
and wherein to change at least one parameter comprises to change at least one
of
line length, line angle, arc radius or to change or add line segmentation.
53. The computing device of claim 51, wherein said command mode is disabled in
said data entry mode:
to allow for unconfined input of a drawn shape on the surface within the user
input, wherein the drawn shape is indicative of a graphic object to be
inserted at said
one or more insertion locations, or
to indicate said one or more insertion locations.
54. The computing device of claim 51, wherein in said data entry mode:
responsive to a finger or a writing tool being lifted from the surface for a
predetermined period of time, said computing device is configured to insert or
paste
one or more text characters or graphic objects at said one or more insertion
locations, wherein said one or more insertion locations are automatically
determined.


55A. The computing device of claim 51, wherein in said data entry mode, one or
more functions are configured to insert or paste one or more text characters
or
graphic objects at said one or more insertion locations, wherein said one or
more
insertion locations are automatically determined.
55B. The computing device of claim 51, wherein said command mode is
automatically invoked after one or more text characters or graphic objects are
inserted or pasted at said one or more insertion locations.
56. The computing device of claim 51, wherein responsive to detecting a speed
change between a first user selected position and a second user selected
position
while drawing said gesture in said command mode or a shape in said data entry
mode on the surface:
the computing device is configured to automatically zoom in or zoom out
proximate to said second user selected position, as said speed change is
decreased
or increased.
57. The computing device of claim 51, further comprising:
responsive to detecting no movement at a user selected position on the
surface, for a predetermined period of time:
the computing device is configured to zoom in gradually up to a maximal
predetermined zoom percentage proximate to said user selected position.
58. The computing device of claim 51, wherein said command mode is a default
mode.
59. The computing device of claim 51, further comprising:
responsive to detecting a continued tapping at a user selected position on the
surface:
the computing device is configured to automatically zoom out gradually down
to a minimal predetermined zoom percentage proximate to said user selected
position.

60. The computing device of claim 53, wherein one or more functions are
configured
to automatically estimate a length of a graphic object to be inserted at said
one or
more insertion locations as the shape is being input on the surface.
61. The computing device of claim 52, wherein the at least one graphic object
comprises an arc, and wherein the arc is automatically shifted when a location
of the
arc is at or proximate another location of a line in the memory, such that the
arc is
tangent to the line at the location of the arc.
62. The computing device of claim 51, wherein to apply the command to the
stored
at least one text character in said command mode comprises to at least one of
select, delete, remove, replace, move, copy, cut and change an attribute of,
the
stored at least one text character, and wherein said attribute comprising font
type,
size, style or color, or bold, italic, underline, double underline, dotted
underline,
strikethrough, double strikethrough, capitalized, small caps, or all caps.
63. The computing device of claim 1, wherein the at least one parameter to be
changed is automatically identified based on at least one portion of the
indication
being proximate or at the geometrical feature as represented on the display
medium.
64. The computing device of claim 3, wherein the at least one parameter to be
changed is automatically identified based on at least one portion of the
indication
being within the geometrical feature as represented within the vector image.
65. The computing device of claim 64, wherein the at least one portion of the
indication comprises a change in position.
66. The computing device of claim 65, wherein a direction of the change in
position
is indicative of an increase or a decrease of a value of at least one of the
one or
more parameters to be changed.

67. The computing device of claim 66, wherein one or more functions,
configured to
increase or decrease the value, are automatically identified based on
detection of the
direction of the change in position.
68. The computing device of claim 1, wherein the indication comprises a
touching
gesture indicative of an increase or a decrease in a value of at least one of
the one
or more parameters to be changed.
69. The computing device of claim 1, wherein the indication comprises a
tapping
gesture indicative of an increase or a decrease in a value of at least one of
the one
or more parameters to be changed.
70. The computing device of claim 1, further configured to automatically
change at
least one location of at least one graphic object of the vector graphics based
on at
least one of the changed one or more parameters.
71. The computing device of claim 5, wherein the indication of a change
further
comprises a change in position.
72. The computing device of claim 71, wherein a direction of the change in
position
is indicative of an increase or a decrease in a value of at least one of the
one or
more parameters to be changed.
73. The computing device of claim 72, wherein one or more functions,
configured to
automatically change the value, are automatically identified based on
detection of
the direction of the change in position.
74. The computing device of claim 1, wherein each graphic object being within
or
having said location connected to or proximate at least one other graphic
object.
75. The computing device of claim 1, wherein at least one of said plurality of
graphic
objects remains unchanged after the geometrical feature has been changed.

76. The method of claim 26, further comprising:
automatically identifying the at least one parameter to be changed based on
at least one portion of the indication being proximate or at the geometrical
feature as
represented on the display medium.
77. The method of claim 28, further comprising:
automatically identifying the at least one parameter to be changed based on
at least one portion of the indication being within the geometrical feature as
represented within the vector image.
78. The method of 77, wherein the at least one portion of the indication
comprises a
change in position.
79. The method of claim 78, wherein a direction of the change in position is
indicative of an increase or a decrease in a value of at least one of the one
or more
parameters to be changed.
80. The method of claim 79, further comprising:
automatically identifying one or more functions for automatically changing the
value
based on detecting the direction of the change in position.
81. The method of claim 26, wherein the indication comprises a touching
gesture
indicative of an increase or a decrease in a value of at least one of the one
or more
parameters to be changed.
82. The method of claim 26, wherein the indication comprises a tapping gesture
indicative of an increase or a decrease in a value of at least one of the one
or more
parameters to be changed.
83. The method of claim 26, further comprising:
automatically changing at least one location of at least one graphic object of
the
vector graphics based on at least one of the changed one or more parameters.

84. The method of claim 30, wherein said indicating a change further comprises
a
change in position.
85. The method of claim 84, wherein a direction of the change in position is
indicative of an increase or a decrease in a value of at least one of the one
or more
parameters to be changed.
86. The method of claim 85, further comprising:
automatically identifying one or more functions for automatically changing the
value based on detecting the direction of the change in position.
87. The method of claim 26, wherein each graphic object being within or having
said
location connected to or proximate at least one other graphic object.
88. The method of claim 26, wherein at least one of said plurality of graphic
objects
remains unchanged after the geometrical feature has been changed.
89. A computing device, comprising:
a memory,
a touch screen, including:
a display medium for displaying a representation of one or more
graphic objects stored in the memory, and
a surface for detecting user input, and
one or more processing units, configured to invoke a command mode,
said command mode is invoked when a command to change at least one
parameter of at least one graphic object stored at one or more data locations
in the
memory is identified;
responsive to detecting a gesture on the surface to indicate at least one of
said one or more data locations:
said computing device is configured to change the at least one
parameter,

wherein, a command to insert or paste one or more graphic objects in
the memory is disabled in said command mode to allow for unconfined input
of said gesture on the surface.
90. The computing device of claim 89, wherein said to change at least one
parameter comprises to change at least one of line length, line angle, arc
radius, or
to add or change line segmentation.
91. A method, comprising:
displaying, on a display medium of a computing device, a representation of
one or more graphic objects stored in a memory of the computing device;
invoking a command mode; wherein said command mode is invoked when a
command to change at least one parameter of at least one graphic object stored
at
one or more data locations in the memory is identified;
detecting a gesture to indicate at least one of said one or more data
locations;
wherein, in response to detecting the gesture:
changing the at least one parameter;
wherein, a command to insert or paste one or more graphic objects in the
memory is disabled in said command mode to allow for unconfined input of said
gesture.
92. The method of claim 91, wherein said to change at least one parameter
comprises to change at least one of line length, line angle, arc radius, or to
add or
change line segmentation.
93. A computing device, comprising:
a memory,
a touch screen, including:
a display medium for displaying a representation of one or more text
characters or graphic objects stored in the memory, and
a surface for detecting user input, and
one or more processing units, configured to invoke a data entry mode,

said data entry mode is invoked when a command to insert or paste one or
more text characters or graphic objects at one or more insertion locations in
the
memory is identified;
responsive to detecting a shape or a gesture being input on the surface to
indicate said one or more insertion locations:
said computing device is configured to insert said one or more text
characters or graphic objects at said one or more insertion locations,
wherein, a command to apply to a stored text character or graphic
object in the memory or to change a parameter of said stored graphic object is
disabled in said data entry mode to allow for unconfined input of said shape
or
gesture.
94. A method, comprising:
displaying, on a display medium of a computing device, a representation of
one or more text characters or graphic objects stored in a memory of the
computing
device;
invoking a data entry mode; wherein said data entry mode is invoked when a
command to insert or paste at least one text character or graphic object at
one or
more insertion locations in the memory is identified;
detecting a shape or a gesture being input to indicate said one or more
insertion locations; wherein in response to detecting the shape or the
gesture:
inserting said at least one text character or graphic object at said one or
more
insertion locations;
wherein, a command to apply to a stored text character or graphic object in
the memory or to change a parameter of said stored graphic object is disabled
in
said data entry mode to allow for unconfined input of said shape or gesture.

Description

Note: Descriptions are shown in the official language in which they were submitted.


CA 03075627 2020-03-11
WO 2019/055952
PCT/US2018/051400
INTEGRATED DOCUMENT EDITOR
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This
Application claims the benefit of U.S. Provisional Patent
Application 62/559,269, filed September 15, 2017, the contents of which are
herein
incorporated by reference.
BACKGROUND
[0002] The
disclosed embodiments relate to document creation and editing.
More specifically, the disclosed embodiments relate to integration of
recognition of
information entry with document creation. Handwritten data entry into computer

programs is known. The most widespread use has been in personal digital
assistant
devices. Handwritten input to devices using keyboards is not widespread for
various
reasons. For example, character transcription and recognition are relatively
slow,
and there are as yet no widely accepted standards for character or command
input.
SUMMARY
[0003]
According to the disclosed embodiments, methods and systems are
provided for incorporating handwritten information, particularly corrective
information,
into a previously created revisable text or graphics document, for example
text data,
image data or command cues, by use of a digitizing recognizer, such as a
digitizing
pad, a touch screen or other positional input receiving mechanism as part of a

display. In a data entry mode, a unit of data is inserted by means of a
writing pen or
like scribing tool and accepted for placement at a designated location,
correlating x-y
location of the writing pen to the actual location in the document, or
accessing
locations in the document memory by emulating keyboard keystrokes (or by the
running of code/programs). In a recognition mode, the entered data is
recognized as
legible text with optionally embedded edit or other commands, and it is
converted to
machine-readable format. Otherwise, the data is recognized as graphics (for
applications that accommodate graphics) and accepted into an associated image
frame. Combinations of data, in text or in graphics form, may be concurrently
recognized. In a specific embodiment, there is a window of error in location
of the
writing tool after initial invocation of the data entry mode, so that actual
placement of
the tool is not critical, since the input of data is correlated by the initial
x-y location of
the writing pen to the actual location in the document. In addition, there is
an
allowed error as a function of the pen's location within the document (i.e.,
with
respect to the surrounding data). In a command entry mode, handwritten symbols

selected from a basic set common to various application programs may be
entered
and the corresponding commands may be executed. In specific embodiments, a
basic set of handwritten symbols and/or commands that are not application-
dependent and that may be user-intuitive are applied. This handwritten command

set allows for the making of revisions and creating documents without having
prior
knowledge of commands for a specific application.
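As an illustrative aid only, and not part of the original specification, the following Python sketch shows one way a small, application-independent table of handwritten symbols could be mapped to editing commands; the symbol names and actions are hypothetical.

SYMBOL_COMMANDS = {
    # Hypothetical, application-independent symbol set; names are illustrative only.
    "caret": "insert text at the marked location",
    "strike": "delete the marked text",
    "loop_arrow": "move the marked text to another location",
}


def execute_symbol(symbol, target):
    """Execute one recognized handwritten symbol as an editing command."""
    try:
        action = SYMBOL_COMMANDS[symbol]
    except KeyError:
        raise ValueError(f"symbol {symbol!r} is not in the basic command set") from None
    return f"{action}: {target!r}"


if __name__ == "__main__":
    print(execute_symbol("caret", "integrated"))   # an insertion command
    print(execute_symbol("strike", "document"))    # a deletion command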
[0004] In a
specific embodiment, such as in use with a word processor, the
disclosed embodiments may be implemented when the user invokes a Comments
Mode at a designated location in a document and then the handwritten
information
may be entered via the input device into the native Comments field, whereupon
it is
either converted to text or image or to the command data to be executed, with
a
handwriting recognizer operating either concurrently or after completion of
entry of a
unit of the handwritten information. Information recognized as text is then
converted
to ciphers and imported into the main body of the text, either automatically
or upon a
separate command. Information recognized as graphics is then converted to
image
data, such as a native graphics format or as a JPEG image and imported into
the
main body of the text at the designated point, either automatically or upon a
separate
command. Information interpreted as commands can be executed, such as editing
commands, which control addition, deletion or movement of text within the
document, as well as font type or size change or color change. In a further
specific
embodiment, the disclosed embodiments may be incorporated as a plug-in module
for the word processor program and invoked as part of the system, such as the
use
of a macro or as invoked through the Track Changes feature.
[0005] In an
alternative embodiment, the user may manually indicate, prior to
invoking the recognition mode, the nature of the input, whether the input is
text,
graphics or command. Recognition can be further improved by providing a step-
by-
step protocol prompted by the program for setting up preferred symbols and for

learning the handwriting patterns of the user.
[0006] In at
least one aspect of the disclosed embodiments, a computing
device includes a memory and a touch screen including a display medium for
displaying a representation of at least one graphic object stored in the
memory, the
graphic object having at least one parameter stored in the memory, a surface
for
determining an indication of a change to the at least one parameter, wherein,
in
response to indicating the change, the computing device is configured to
automatically change the at least one parameter in the memory and
automatically
change the representation of the one or more graphic objects in the memory,
and
wherein the display medium is configured to display the changed representation
of
one or more graphic objects with the changed parameter.
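The following Python sketch, illustrative only and with hypothetical class and parameter names, mirrors this aspect: changing a stored parameter of a graphic object updates the memory and regenerates the representation that the display would show.

from dataclasses import dataclass, field


@dataclass
class GraphicObject:
    """A stored graphic object with named, changeable parameters."""
    location: tuple                                   # anchor location in the document
    parameters: dict = field(default_factory=dict)    # e.g. {"length": 40.0, "angle": 0.0}


class VectorGraphicsMemory:
    """Holds graphic objects and a displayable representation of them."""

    def __init__(self):
        self.objects = []

    def representation(self):
        # A trivial textual "display": one entry per object and its parameters.
        return [f"object@{obj.location}: {obj.parameters}" for obj in self.objects]

    def apply_change(self, obj, name, value):
        # Change the parameter in memory, then regenerate the representation
        # so the display can show the object with the changed parameter.
        obj.parameters[name] = value
        return self.representation()


if __name__ == "__main__":
    memory = VectorGraphicsMemory()
    line = GraphicObject(location=(10, 10), parameters={"length": 40.0, "angle": 0.0})
    memory.objects.append(line)
    print(memory.apply_change(line, "length", 55.0))  # changed representation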
[0007] In another aspect of the disclosed embodiments, a method
includes
displaying, on a display medium of a computing device, a representation of at
least
one graphic object stored in a memory, each graphic object having at least one

parameter stored in the memory, indicating a change to the at least one
parameter, and
in response to indicating the change, automatically changing the at least one

parameter in the memory and automatically changing the representation of the
at
least one graphic object in the memory, and displaying the changed
representation
of the at least one graphic object on the display medium.
[0008] These and other features of the disclosed embodiments will be
better
understood by reference to the following detailed description in connection
with the
accompanying drawings, which should be taken as illustrative and not limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 is a block schematic diagram illustrating basic functional blocks and data flow according to one embodiment of the disclosed embodiments.
[0010] Figure 2 is a flow chart of an interrupt handler that reads handwritten information in response to writing pen taps on a writing surface.
[0011] Figure 3 is a flow chart of a polling technique for reading handwritten information.
[0012] Figure 4 is a flow chart of operation according to a representative embodiment of the disclosed embodiments wherein handwritten information is incorporated into the document after all handwritten information is concluded.
[0013] Figure 5 is a flow chart of operation according to a representative embodiment of the disclosed embodiments, wherein handwritten information is incorporated into the document concurrently during input.
[0014] Figure 6 is an illustration example of options available for displaying handwritten information during various steps in the process according to the disclosed embodiments.
[0015] Figure 7 is an illustration of samples of handwritten symbols / commands and their associated meanings.
[0016] Figure 8 is a listing that provides generic routines for each of the first 3 symbol operations illustrated in Figure 7.
[0017] Figure 9 is an illustration of data flow for data received from a recognition functionality element processed and defined in an RHI memory.
[0018] Figure 10 is an example of a memory block format of the RHI memory suitable for storing data associated with one handwritten command.
[0019] Figure 11 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the first embodiment illustrating the emulating of keyboard keystrokes.
[0020] Figure 12 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the first embodiment using techniques to emulate keyboard keystrokes.
[0021] Figure 13 is an example of data flow of the embedded element of Figure 1 and Figure 38 according to the second embodiment illustrating the running of programs.
[0022] Figure 14 is a flow chart representing subroutine D of Figure 4 and Figure 5 according to the second embodiment illustrating the running of programs.
[0023] Figure 15 through Figure 20 are flow charts of subroutine H referenced in Figure 12 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8.
[0024] Figure 21 is a flow chart of subroutine L referenced in Figure 4 and Figure 5 for concluding the embedding of revisions for a Microsoft Word type document, according to the first embodiment using techniques to emulate keyboard keystrokes.
[0025] Figure 22 is a flow chart of an alternative to subroutine L of Figure 21 for concluding revisions for MS Word type document.
[0026] Figure 23 is a sample flow chart of the subroutine I referenced in Figure 12 for copying a recognized image from the RHI memory and placing it in the document memory via a clipboard.
[0027] Figure 24 is a sample of code for subroutine N referenced in Figure 23 and Figure 37, for copying an image from the RHI memory into the clipboard.
[0028] Figure 25 is a sample of translated Visual Basic code for built-in macros referenced in the flow charts of Figure 26 to Figure 32 and Figure 37.
[0029] Figure 26 through Figure 32 are flow charts of subroutine J referenced in Figure 14 for the first three symbol operations illustrated in Figure 7 and according to the generic routines illustrated in Figure 8 for MS Word.
[0030] Figure 33 is a sample of code in Visual Basic for the subroutine M referenced in Figure 4 and Figure 5, for concluding embedding of the revisions for MS Word, according to the second embodiment using the running of programs.
[0031] Figure 34 is a sample of translated Visual Basic code for useful built-in macros in comment mode for MS Word.
[0032] Figure 35 provides examples of recorded macros translated into Visual Basic code that emulates some keyboard keys for MS Word.
[0033] Figure 36 is a flow chart of a process for checking if a handwritten character to be emulated as a keyboard keystroke exists in a table and thus can be emulated and, if so, for executing the relevant line of code that emulates the keystroke.
[0034] Figure 37 is a flow chart of an example for subroutine K in Figure 14 for copying a recognized image from RHI memory and placing it in the document memory via the clipboard.
[0035] Figure 38 is an alternate block schematic diagram to the one illustrated in Figure 1, illustrating basic functional blocks and data flow according to another embodiment of the disclosed embodiments, using a touch screen.
[0036] Figure 39 is a schematic diagram of an integrated edited document made with the use of a wireless pad.
[0037] Figures 40A-40D illustrate an example of user interaction with the touch screen to insert a line.
[0038] Figures 41A-41C illustrate an example of use of the command to delete an object.
[0039] Figures 42A-42D illustrate an example of user interaction with the touch screen to change line length.
[0040] Figures 43A-43D illustrate an example of user interaction with the touch screen to change line angle.
[0041] Figures 44A-44D illustrate an example of user interaction with the touch screen to apply a radius to a line or to change the radius of an arc.
[0042] Figures 45A-45D illustrate an example of user interaction with the touch screen to make a line parallel to another line.
[0043] Figures 46A-46D illustrate an example of user interaction with the touch screen to add a fillet or an arc to an object.
[0044] Figures 47A-47D illustrate an example of user interaction with the touch screen to add a chamfer.
[0045] Figures 48A-48F illustrate an example of use of the command to trim an object.
[0046] Figures 49A-49D illustrate an example of user interaction with the touch screen to move an arced object.
[0047] Figures 50A-50D illustrate an example of use of the "no snap" command.
[0048] Figures 51A-51D illustrate another example of use of the "No Snap" command.
[0049] Figures 52A-52D illustrate another example of use of the command to trim an object.
[0050] Figure 53 is an example of a user interface with icons.
[0051] Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube on the touch screen.
[0052] Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere on the touch screen.
[0053] Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp on the touch screen.
[0054] Figures 55A-55B illustrate examples of a user interface menus for text editing, selection mode.
[0055] Figure 56 illustrates an example of a gesture to mark text in command mode.
[0056] Figure 57 illustrates another example of a gesture to mark text in command mode.
[0057] Figures 58A-58B illustrate an example of automatically zooming a text while drawing the gesture to mark text.
DETAILED DESCRIPTION
[0058] Referring to Figure 1, there is a block schematic diagram of an
integrated document editor 10 according to a first embodiment, which
illustrates the
basic functional blocks and data flow according to that first embodiment. A
digitizing
pad 12 is used, with its writing area (e.g., within margins of an 8-1/2" x 11"
sheet) to
accommodate standard sized papers that corresponds to the x-y location of the
edited page. Pad 12 receives data from a writing pen 10 (e.g., magnetically,
or
mechanically by way of pressure with a standard pen). Data from the digitizing
pad
12 is read by a data receiver 14 as bitmap and/or vector data and then stored
corresponding to or referencing the appropriate x-y location in a data
receiving
memory 16. Optionally, this information can be displayed on the screen of a
display
25 on a real-time basis to provide the writer with real-time feedback.
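As a hedged illustration of this data path, and not the actual implementation, the sketch below stores pen samples in a data receiving memory keyed by the x-y location reported by the pad or touch screen; the class name and cell size are hypothetical.

from collections import defaultdict


class DataReceivingMemory:
    """Stores pen data from the pad/touch screen, referenced by x-y location."""

    def __init__(self):
        # Each entry keeps the raw samples captured near one writing position.
        self._by_location = defaultdict(list)

    def store(self, x, y, sample):
        # Quantize to a coarse grid (hypothetical 10-unit cells) so samples
        # written near the same spot cluster under one key.
        key = (int(x // 10), int(y // 10))
        self._by_location[key].append(sample)

    def read_all(self):
        return dict(self._by_location)


if __name__ == "__main__":
    mem = DataReceivingMemory()
    mem.store(102.4, 57.9, {"pressure": 0.8})
    mem.store(102.6, 58.1, {"pressure": 0.7})
    print(mem.read_all())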
[0059] Alternatively, and as illustrated in Figure 38, a touch screen 11
(or other positional input receiving mechanism as part of a display) with its
receiving
and displaying mechanisms integrated, receives data from the writing pen 10,
whereby the original document is displayed on the touch screen as it would
have
been displayed on a printed page placed on the digitizing pad 12 and the
writing by
the pen 10 occurs on the touch screen at the same locations as it would have
been
written on a printed page. Under this scenario, the display 25, pad 12 and
data
receiver 14 of Figure 1 are replaced with element 11, the touch screen and
associated electronics of Figure 38, and elements 16, 18, 20, 22, and 24 are
discussed hereunder with reference to Figure 1. Under the touch screen display

alternative, writing paper is eliminated.
[0060] When a printed page is used with the digitizing pad 12,
adjustments in registration of location may be required such that locations on
the
printed page correlates to the correct x-y locations for data stored in the
data
receiving memory 16.
[0061] The correlation between locations of the writing pen 10
(on the
touch screen 11 or on the digitizing pad 12) and the actual x-y locations in
the
document memory 22 need not be perfectly accurate, since the location of the
pen
is with reference to existing machine code data. In other words, there is a
window of error around the writing point that can be allowed without loss of
useful
information, because it is assumed that the new handwritten information (e.g.,

revisions) must always correspond to a specific location of the pen, e.g.,
near text,
drawing or image. This is similar to, but not always the same as, placing a
cursor at
an insertion point in a document and changing from command mode to data input
mode. For example, the writing point may be between two lines of text but
closer to
one line of text than to the other. This window of error could be continuously

computed as a function of the pen tapping point and the data surrounding the
tapping point. In case of ambiguity as to the exact location where the new
data are
intended to be inserted (e.g., when the writing point overlaps multiple
possible
locations in the document memory 22), the touch screen 11 (or the pad 12) may
generate a signal, such as a beeping sound, requesting the user to tap closer
to the
point where handwritten information needs to be inserted. If the ambiguity is
still not
resolved (when the digitizing pad 12 is used), the user may be requested to
follow an
adjustment procedure.
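A minimal sketch of this idea, assuming a simple distance test (the function name and tolerance are hypothetical): the tap is matched to the nearest candidate document location within the error window, and None is returned when two locations are comparably close, which is where the device would signal the user to tap closer.

import math


def resolve_insertion_point(tap, candidates, tolerance):
    """Return the candidate location nearest the pen tap, or None if ambiguous."""
    dists = sorted((math.dist(tap, c), c) for c in candidates if math.dist(tap, c) <= tolerance)
    if not dists:
        return None                                     # nothing within the error window
    if len(dists) > 1 and dists[1][0] - dists[0][0] < 0.25 * tolerance:
        return None                                     # ambiguous: prompt (e.g. beep) to tap closer
    return dists[0][1]


if __name__ == "__main__":
    text_lines = [(0, 100), (0, 120)]                   # two possible insertion locations
    print(resolve_insertion_point((5, 104), text_lines, tolerance=15))  # -> (0, 100)
    print(resolve_insertion_point((5, 110), text_lines, tolerance=15))  # -> None (ambiguous)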
[0062] If desired, adjustments may be made such that the
writing area
on the digitizing pad 12 will be set to correspond to a specific active window
(for
example, in multi-windows screen), or to a portion of a window (i.e., when the
active
portion of a window covers partial screen, e.g., an invoice or a bill of an
accounting
program QuickBooks), such that the writing area of the digitizing pad 12 is
efficiently
utilized. In situations where a document is a form (e.g., an order form), the
paper
document can be a pre-set to the specific format of the form, such that the
handwritten information can be entered at specific fields of the form (that
correspond
to these fields in the document memory 22). In addition, in operations that do
not
require archiving of the handwritten paper documents, handwritten information
on
the digitizing pad 12 may be deleted after it is integrated into the document
memory
22. Alternatively, multi-use media that allow multiple deletions (that clear
the
handwritten information) can be used, although the touch screen alternative
would
be preferred over this alternative.
[0063] A recognition functionality element 18 reads information
from the
data receiving memory 16 and writes the recognition results or recognized
handwritten elements into the recognized handwritten information (RHI) memory
20.
Recognized handwritten information elements, (RHI elements) such as
characters,
words, and symbols, are stored in the RHI memory 20. Location of an RHI
element
in the RHI memory 20 correlates to its location in the data receiving memory
16 and
in the document memory 22. After symbols are recognized and interpreted as
commands, they may be stored as images or icons in, for example, JPEG format
(or
they can be emulated as if they were keyboard keys. This technique will be
discussed hereafter.), since the symbols are intended to be intuitive. They
can be
useful for reviewing and interpreting revisions in the document. In addition,
the
recognized handwritten information prior to final incorporation (e.g.,
revisions for
review) may be displayed either in handwriting (as is or as revised machine
code
handwriting for improved readability) or in standard text.
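Illustratively, and as an assumption rather than the memory block format of Figure 10, a recognized handwritten element could be kept as a small record whose location field ties it back to the data receiving memory and the document memory:

from dataclasses import dataclass
from typing import Literal


@dataclass
class RHIElement:
    """One recognized handwritten element stored in the RHI memory (hypothetical layout)."""
    kind: Literal["character", "word", "symbol", "graphic"]
    location: tuple        # correlates with the data receiving and document memories
    payload: object        # e.g. an ASCII string, or image bytes for graphics


class RHIMemory:
    def __init__(self):
        self.elements = []

    def add(self, element):
        self.elements.append(element)

    def at(self, location):
        # Look up everything recognized at (or keyed to) a document location.
        return [e for e in self.elements if e.location == location]


if __name__ == "__main__":
    rhi = RHIMemory()
    rhi.add(RHIElement(kind="word", location=(12, 40), payload="integrated"))
    print(rhi.at((12, 40)))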
[0064] An embedded criteria and functionality element 24 reads
the
information from the RHI memory 20 and embeds it into the document memory 22.
Information in the document memory 22 is displayed on the display 25, which is
for
example a computer monitor or a display of a touch screen. The embedded
functionality determines what to display and what to be embedded into the
document
memory 22 based on the stage of the revision and selected user
criteria/preferences.
[0065] Embedding the recognized information into the document
memory 22 can be either applied concurrently or after input of all handwritten

information, such as after revisions, have been concluded. Incorporation of
the
handwritten information concurrently can occur with or without user
involvement.
The user can indicate each time a handwritten command and its associated text
and/or image has been concluded, and then it can be incorporated into the
document memory 22 one at a time. (Incorporation of handwritten information
concurrently without user involvement will be discussed hereafter.) The
document
memory 22 contains, for example, one of the following files: 1) A word
processing
file, such as a MS Word file or a Word Perfect file, 2) A spreadsheet, such as
an
Excel file, 3) A form such as a sales order, an invoice or a bill in
accounting software
(e.g., QuickBooks), 4) A table or a database, 5) A desktop publishing file,
such as a
QuarkXPress or a PageMaker file, or 6) A presentation file, such as a MS Power

Point file.
[0066] It should be noted that the document could be any kind
of
electronic file, word processing document, spreadsheet, web page, form, e-
mail,
database, table, template, chart, graph, image, object, or any portion of
these types
of documents, such as a block of text or a unit of data. In addition, the
document
memory 22, the data receiving memory 16 and the RHI memory 20 could be any
kind of memory or memory device or a portion of a memory device, e.g., any
type of
RAM, magnetic disk, CD-ROM, DVD-ROM, optical disk or any other type of
storage.
It should be further noted that one skilled in the art will recognize that the

elements/components discussed herein (e.g., in Figures 1, 38, 9, 11, 13), such
as
the RHI element may be implemented in any combination of electronic or
computer
hardware and/or software. For example, the disclosed embodiments could be
implemented in software operating on a general-purpose computer or other types
of
computing / communication devices, such as hand-held computers, personal
digital
assistant (PDA)s, cell phones, etc. Alternatively, a general-purpose computer
may
be interfaced with specialized hardware such as an Application Specific
Integrated
Circuit (ASIC) or some other electronic components to implement the disclosed
embodiments. Therefore, it is understood that the disclosed embodiments may be

carried out using various codes of one or more software modules forming a
program
and executed as instructions/data by, e.g., a central processing unit, or
using
hardware modules specifically configured and dedicated to perform the
disclosed
embodiments. Alternatively, the disclosed embodiments may be carried out using
a
combination of software and hardware modules.
[0067] The recognition functionality element 18 encompasses one
or
more of the following recognition approaches:
1- Character recognition, which can for example be used in cases where the
user clearly spells each character in capital letters in an effort to minimize
recognition
errors,
2- A holistic approach where recognition is globally performed on the whole
representation of the words and there is no attempt to identify characters
individually.
(The main advantage of the holistic methods is that they avoid word
segmentation.
Their main drawback is that they are related to a fixed lexicon of words
description:
since these methods do not rely on letters, words are directly described by
means of
features. Adding new words to the lexicon typically requires human training or
the
automatic generation of a word description from ASCII words.)

3- Analytical strategies that deal with several levels of representation
corresponding to increasing levels of abstractions. (Words are not considered
as a
whole, but as sequences of smaller size units, which must be easily related to
characters in order to make recognition independent from a specific
vocabulary.)
[0068] Strings of words or symbols, such as those described in
connection with Figure 7 and discussed hereafter, can be recognized by either
the
holistic approach or by the analytical strategies, although character
recognition may
be preferred. Units recognized as characters, words or symbols are stored into
the
RHI memory 20, for example in ASCII format. Units that are graphics are stored
into
the RHI memory as graphics, for example as a JPEG file. Units that could not
be
recognized as a character, word or a symbol are interpreted as images if the
application accommodates graphics and optionally, if approved by the user as
graphics and stored into the RHI memory 20 as graphics. It should be noted
that
units that could not be recognized as character, word or symbol may not be
interpreted as graphics in applications that do not accommodate graphics
(e.g.,
Excel); in this scenario, user involvement may be required.
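The routing just described can be summarized by the following sketch; it is illustrative only, and the function and the list used as a stand-in for the RHI memory are hypothetical.

def store_recognized_unit(rhi_memory, unit, kind, location, supports_graphics=True):
    """Route one recognized unit into the RHI memory.

    Characters, words and symbols are kept as text (e.g. ASCII); graphics are
    kept as image data; units that could not be recognized fall back to
    graphics only when the target application accommodates graphics,
    otherwise the caller must involve the user.
    """
    if kind in ("character", "word", "symbol"):
        rhi_memory.append(("text", location, str(unit)))
    elif kind == "graphic" or (kind == "unrecognized" and supports_graphics):
        rhi_memory.append(("image", location, unit))
    else:
        raise ValueError("unrecognized unit in a text-only application: ask the user")


if __name__ == "__main__":
    rhi = []
    store_recognized_unit(rhi, "A", "character", (12, 40))
    store_recognized_unit(rhi, b"...image bytes...", "unrecognized", (30, 80))
    print(rhi)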
[0069] To improve the recognition functionality, data may be read from
the document memory 22 by the recognition element 18 to verify that the
recognized
handwritten information does not conflict with data in the original document
and to
resolve/minimize as much as possible recognized information retaining
ambiguity.
The user may also resolve ambiguity by approving/disapproving recognized
handwritten information (e.g., revisions) shown on the display 25. In
addition,
adaptive algorithms (beyond the scope of this disclosure) may be employed.
Thereunder, user involvement may be relatively significant at first, but as
the
adaptive algorithms learn the specific handwritten patterns and store them as
historical patterns, future ambiguities should be minimized as recognition
becomes
more robust.
[0070] Figure 2 through Figure 5 are flow charts of operation
according
to an exemplary embodiment and are briefly explained herein below. The text in
all
of the drawings is herewith explicitly incorporated into this written
description for the
purposes of claim support. Figure 2 illustrates a program that reads the
output of the
digitizing pad 12 (or of the touch screen 11) each time the writing pen 10
taps on
and/or leaves the writing surface of the pad 12 (or of the touch screen 11).
Thereafter data is stored in the data receiving memory 16 (Step E). Both the
recognition element and the data receiver (or the touch screen) access the
data
receiving memory. Therefore, during read/write cycle by one element, the
access by
the other element should be disabled.
[0071] Optionally, as illustrated in Figure 3, the program
checks every
few milliseconds to see if there is new data to read from the digitizing pad
12 (or of
the touch screen 11). If so, data is received from the digitizing recognizer
and stored
in the data receiving memory 16 (E). This process continues until the user
indicates
that the revisions are concluded, or until there is a timeout.
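A minimal sketch of the polling variant follows, assuming a pad driver exposing a read_new() method (an assumed interface, not a real driver API) and using a lock so that the receiver and the recognition element do not access the data receiving memory at the same time.

import threading
import time


class FakePad:
    """Stand-in for the digitizing pad / touch screen driver (assumed interface)."""

    def __init__(self, samples):
        self._samples = iter(samples)

    def read_new(self):
        # Returns the next pending sample, or None when nothing new is available.
        return next(self._samples, None)


def poll_pad(pad, data_receiving_memory, lock, interval=0.005, timeout=0.05):
    """Check the pad every few milliseconds and store new data, until a timeout."""
    idle = 0.0
    while idle < timeout:
        sample = pad.read_new()
        if sample is None:
            idle += interval
        else:
            idle = 0.0
            with lock:                 # recognition element must not read mid-write
                data_receiving_memory.append(sample)
        time.sleep(interval)


if __name__ == "__main__":
    memory, lock = [], threading.Lock()
    poll_pad(FakePad([((10, 20), "stroke-1"), ((11, 21), "stroke-2")]), memory, lock)
    print(memory)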
[0072] Embedding of the handwritten information may be executed
either all at once according to procedures explained with Figure 4, or
concurrently
according to procedures explained with Figure 5.
[0073] The recognition element 18 recognizes one unit at a time,
e.g., a
character, a word, graphic or a symbol, and makes them available to the RHI
processor and memory 20 (C). The functionality of this processor and the way
in
which it stores recognized units into the RHI memory will be discussed
hereafter with
reference to Figure 9. Units that are not recognized immediately are either
dealt with
at the end as graphics, or the user may indicate otherwise manually by other
means,
such as a selection table or keyboard input (F).
Alternatively, graphics are
interpreted as graphics if the user indicates when the writing of graphics
begins and
when it is concluded. Once the handwritten information is concluded, it is
grouped
into memory blocks, whereby each memory block contains all (as in Figure 4) or

possibly partial (as in Figure 5) recognized information that is related to
one
handwritten command, e.g., a revision. The embedded function (D) then embeds
the recognized handwritten information (e.g., revisions) in "for review" mode.
Once
the user approves/disapproves revisions, they are embedded in final mode (L)
according to the preferences set up (A) by the user. In the examples
illustrated
hereafter, revisions in MS Word are embedded in Track Changes mode all at
once.
Also, in the examples illustrated hereafter, revisions in MS Word that are
according
to Figure 4 may, for example, be useful when the digitizing pad 12 is separate
from
the rest of the system, whereby handwritten information from the digitizing
pad
internal memory may be downloaded into the data receiving memory 16 after the
revisions are concluded via a USB or other IEEE or ANSI standard port.
[0063] Figure 4 is a flow chart of the various steps, whereby
embedding
"all" recognized handwritten information (such as revisions) into the document
memory 22 is executed once "all" handwritten information is concluded. First,
the
Document Type is set up (e.g., Microsoft Word or QuarkXPress), with software
version and user preferences (e.g., whether to incorporate revisions as they
are
available or one at a time upon user approval/disapproval), and the various
symbols
preferred by the user for the various commands (such as for inserting text, for

deleting text and for moving text around) (A). The handwritten information is
read
from the data receiving memory 16 and stored in the memory of the recognition
element 18 (B).
Information that is read from the receiving memory 16 is
marked/flagged as read, or it is erased after it is read by the recognition
element 18
and stored in its memory; this will insure that only new data is read by the
recognition
element 18.
[0064] Figure 5 is a flow chart of the various steps whereby
embedding
recognized handwritten information (e.g., revisions) into the document memory
22 is
executed concurrently (e.g., with the making of the revisions). Steps 1 - 3
are
identical to the steps of the flow chart in Figure 4 (discussed above). Once a
unit,
such as a character, a symbol or a word is recognized, it is processed by the
RHI
processor 20 and stored in the RHI memory. A processor (GMB functionality 30
referenced in Figure 9) identifies it as either a unit that can be embedded
immediately or not. It is checked if it can be embedded (step 4.3); if it can
be (step
5), it is embedded (D) and then (step 6) deleted or marked/updated as an
embedded
(G). If it cannot be embedded (step 4.1), more information is read from the
digitizing
pad 12 (or from the touch screen 11). This process of steps 4 - 6 repeats and
continues so long as handwritten information is forthcoming. Once all data is
embedded (indicated by an End command or a simple timeout), units that could
not
be recognized are dealt with (F) in the same manner discussed for the flow
chart of
Figure 4. Finally, once the user approves/disapproves revisions, they are
embedded
in final mode (L) according to the preferences chosen by the user.
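As an informal illustration of the concurrent flow of Figure 5 (steps 4 - 6), the loop below sketches the idea in Python; recognize_next_unit, group_into_memory_block and embed_block are hypothetical names, not the actual subroutines of the figures.

    def embed_concurrently(recognizer, rhi_memory, document, timed_out):
        while not timed_out():                                   # or until an End command
            unit = recognizer.recognize_next_unit()              # next recognized unit, if any
            if unit is None:
                continue
            block = rhi_memory.group_into_memory_block(unit)     # GMB functionality 30
            if block is not None and block.can_be_embedded:      # step 4.3
                document.embed_block(block)                      # step 5: embed (D)
                rhi_memory.mark_embedded(block)                  # step 6: delete or flag (G)
        # units that could not be recognized are then handled as graphics (F)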
[0065] Figure 6 is an example of the various options and
preferences
available to the user to display the handwritten information in the various
steps for
MS Word. In "For Review" mode the revisions are displayed as "For Review"
pending approval for "Final" incorporation.
Revisions, for example, can be
embedded in a "Track Changes" mode, and once approved/disapproved (as in
"Accept/Reject changes"), they are embedded into the document memory 22 as
"Final". Alternatively, symbols may be also displayed on the display 25. The
symbols are selectively chosen to be intuitive, and, therefore, can be useful
for quick
review of revisions. For the same reason, text revisions may be displayed
either in
handwriting as is, or as revised machine code handwriting for improved
readability;
in "Final" mode, all the symbols are erased, and the revisions are
incorporated as an
integral part of the document.
[0066] An example of a basic set of handwritten
commands/symbols
and their interpretation with respect to their associated data for making
revisions in
various types of documents is illustrated in Figure 7.
[0067] Direct access to specific locations is needed in the
document
memory 22 for read/write operations.
Embedding recognized handwritten
information from the RHI memory 20 into the document memory 22 (e.g., for
incorporating revisions) may not be possible (or limited) for after-market
applications.
Each of the embodiments discussed below provides an alternate "back door"
solution to overcome this obstacle.
Embodiment One: Emulating Keyboard Entries:
[0068] Command information in the RHI memory 20 is used to
insert or
revise data, such as text or images in designated locations in the document
memory
22, wherein the execution mechanisms emulate keyboard keystrokes, and when
available, operate in conjunction with running pre-recorded and/or built-in
macros
assigned to sequences of keystrokes (i.e., shortcut keys). Data such as text
can be
copied from the RHI memory 20 to the clipboard and then pasted into designated

locations in the document memory 22, or it can be emulated as keyboard
keystrokes.
This embodiment will be discussed hereafter.
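For illustration, one possible way to emulate keystrokes on Windows is sketched below in Python using the pywin32 package and the WScript.Shell SendKeys mechanism; the window title and the key sequences shown are assumptions, not part of the disclosed design.

    import win32com.client

    shell = win32com.client.Dispatch("WScript.Shell")
    shell.AppActivate("Document1 - Word")   # hypothetical title of the target application window
    shell.SendKeys("{DOWN 3}")              # emulate three Arrow Down keys to move the insertion point
    shell.SendKeys("+{RIGHT}")              # emulate Shift+Arrow Right to select text
    shell.SendKeys("^c")                    # emulate Cntrl-C to copy the selection to the clipboard
    shell.SendKeys("^v")                    # emulate Cntrl-V to paste at the insertion point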
Embodiment Two: Running Programs:
[0069] In applications such as Microsoft Word, Excel and
WordPerfect, where programming capabilities, such as VB Scripts and Visual
Basic
are available, the commands and their associated data stored in the RHI memory
20
are translated to programs that embed them into the document memory 22 as
intended. In this embodiment, the operating system clipboard can be used as a
buffer for data (e.g., text and images). This embodiment will also be
discussed
hereafter.
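As a hedged sketch of Embodiment Two, the Python fragment below drives Microsoft Word through its automation interface (pywin32 assumed); in VBA the same object-model calls would appear directly in a macro. The document path and inserted text are hypothetical.

    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    word.Visible = True
    doc = word.Documents.Open(r"C:\temp\draft.docx")     # hypothetical document path

    word.Selection.GoTo(What=1, Which=1, Count=2)        # wdGoToPage=1, wdGoToAbsolute=1: go to page 2
    word.Selection.MoveDown(Unit=5, Count=3)             # wdLine=5: three lines down
    word.Selection.MoveRight(Unit=1, Count=10)           # wdCharacter=1: ten characters right
    word.Selection.TypeText("text taken from the RHI memory")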
[0070] Information associated with a handwritten command as
discussed in Embodiment One and Embodiment Two is either text or graphics
(image), although it could be a combination of text and graphics. In
either
embodiment, the clipboard can be used as a buffer.
For copy operations in the RHI memory:
[0082] When a unit of text or image is copied from a specific location
indicated
in the memory block in the RHI memory 20 to be inserted in a designated
location in
the document memory 22.
For Cut/Paste and for Paste operations within the document memory:
[0083] For moving text or image around within the document memory 22,
and
for pasting text or image copied from the RHI memory 20.
[0071] A key benefit of Embodiment One is its usefulness in a large
array
of applications, with or without programming capabilities, to execute
commands,
relying merely on control keys, and when available built-in or pre-recorded
macros.
When a control key, such as Arrow Up or a simultaneous combination of keys,
such
as Cntrl-C, is emulated, a command is executed.
[0072] Macros cannot be run in Embodiment Two unless
translated to
actual low-level programming code (e.g., Visual Basic Code). In contrast,
running a
macro in a control language native to the application (recorded and/or built-
in) in
Embodiment One is simply achieved by emulating its assigned shortcut key(s).
Embodiment Two may be preferred over Embodiment One, for example in MS Word,
if a Visual Basic Editor is used to create codes that include Visual Basic
instructions
that cannot be recorded as macros.
[0073] Alternatively, Embodiment Two may be used in conjunction
with
Embodiment One, whereby, for example, instead of moving text from the RHI
memory 20 to the clipboard and then placing it in a designation location in
the
document memory 22, text is emulated as keyboard keystrokes. If desired, the
keyboard keys can be emulated in Embodiment Two by writing a code for each key that, when executed, emulates a keystroke. Alternatively, Embodiment One may
be
implemented for applications with no programming capabilities, such as
QuarkXPress, and Embodiment Two may be implemented for some of the
applications that do have programming capabilities. Under this scenario, some
applications with programming capabilities may still be implemented in
Embodiment
One or in both Embodiment One and Embodiment Two.

[0074] Alternatively, x-y locations in the data receiving memory
16 (as
well as designated locations in the document memory 22), can be identified on
a
printout or on the display 25, and if desired, on the touch screen 11, based
on: 1)
recognition/identification of a unique text and/or image representation around
the
writing pen, and 2) searching for and matching the recognized/identified data
around
the pen with data in the original document which may be converted into the
bitmap
and/or vector format that is identical to the format in which handwritten information
is stored in
the data receiving memory 16. Then handwritten information along with its x-y
locations correspondingly indexed in the document memory 22 is transmitted to
a
remote platform for recognition, embedding and displaying.
[0075] The data representation around the writing pen and the
handwritten information are read by a miniature camera with attached circuitry
that is
built-in the pen. The data representing the original data in the document
memory 22
is downloaded into the pen's internal memory prior to the commencement of
handwriting,
either via a wireless connection (e.g., Bluetooth) or via physical connection
(e.g.,
USB port).
[0076] The handwritten information along with its identified x-
y locations
is either downloaded into the data receiving memory 16 of the remote platform
after
the handwritten information is concluded (via physical or wireless link), or
it can be
transmitted to the remote platform via wireless link as the x-y location of
the
handwritten information is identified. Then, the handwritten information is
embedded
into the document memory 22 all at once (i.e., according to the flow chart
illustrated
in Figure 4), or concurrently (i.e., according to the flow chart illustrated
in Figure 5).
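The matching step can be pictured with the toy Python sketch below, which finds where a small bitmap imaged around the pen best matches the document bitmap by a brute-force sum-of-squared-differences search; a practical system would use a far more robust matcher, and the grayscale array encoding is an assumption.

    import numpy as np

    def locate_pen_patch(document_bitmap: np.ndarray, pen_patch: np.ndarray):
        """Return (row, col) of the best match of pen_patch inside document_bitmap."""
        dr, dc = document_bitmap.shape
        pr, pc = pen_patch.shape
        best_score, best_pos = None, None
        for r in range(dr - pr + 1):
            for c in range(dc - pc + 1):
                window = document_bitmap[r:r + pr, c:c + pc].astype(float)
                score = float(((window - pen_patch) ** 2).sum())
                if best_score is None or score < best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos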
[0077] If desired, the display 25 may include pre-set patterns
(e.g.,
engraved or silk-screened) throughout the display or at selected location of
the
display, such that when read by the camera of the pen, the exact x-y location
on the
display 25 can be determined. The pre-set patterns on the display 25 can be
useful
to resolve ambiguities, for example when the identical information around
locations
in the document memory 22 exists multiple times within the document.
[0078] Further, the tapping of the pen in selected locations of
the touch
screen 11 can be used to determine the x-y location in the document memory
(e.g.,
when the user makes yes-no type selections within a form displayed on the
touch
screen). This, for example, can be performed on a tablet that can accept input
from a
pen or any other pointing device that functions as a mouse and writing
instrument.
[0079] Alternatively (or in addition to a touch screen), the
writing pen
can emit a focused laser/IR beam to a screen with thermal or optical sensing,
and
the location of the sensed beam may be used to identify the x-y location on
the
screen. Under this scenario, the use of a pen with a built-in miniature camera
is not
needed. When a touch screen or a display with thermal/optical sensing (or when

preset patterns on an ordinary display) is used to detect x-y locations on the
screen,
the designated x-y location in the document memory 22 can be determined based
on: 1) the detected x-y location of the pen 10 on the screen, and 2)
parameters that
correlate between the displayed data and the data in the document memory 22
(e.g.,
application name, cursor location on the screen and zoom percent).
[0080] Alternatively, the mouse could be emulated to place the
insertion point at designated locations in the document memory 22 based on the
X-Y
locations indicated in the Data receiving memory 16. Then information from the
RHI
memory 20 can be embedded into the document memory 22 according to
Embodiment One or Embodiment Two. Further, once the insertion point is at a
designated location in the document memory 22, selection of text or an image
within
the document memory 22 may be also achieved by emulating the mouse pointer
click operation.
Use of the Comments insertion feature:
[0081] The Comments feature of Microsoft Word (or similar
comment-
inserting feature in other program applications) may be employed by the user
or
automatically in conjunction with either of the approaches discussed above,
and then
handwritten information from the RHI memory 20 can be embedded into designated

Comments fields of the document memory 22. This approach will be discussed
further hereafter.
Use of the Track Changes Feature:
[0082] Before embedding information into the document memory
22,
the document type is identified and user preferences are set (A). The user may

select to display revisions with the Track Changes feature. The Track Changes Mode of

Microsoft Word (or similar features in other applications) can be invoked by
the
user or automatically in conjunction with either or both Embodiment One and
Embodiment Two, and then handwritten information from the RHI memory 20 can be

embedded into the document memory 22. After all revisions are incorporated
into
the document memory 22, they can be accepted for the entire document, or they
can
be accepted /rejected one at a time upon user command. Alternatively, they can
be
accepted/rejected at the making of the revisions.
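A minimal Python sketch of driving the Track Changes feature through Word automation (pywin32 assumed) follows; the document path and the inserted revision text are hypothetical.

    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    doc = word.Documents.Open(r"C:\temp\draft.docx")      # hypothetical path

    doc.TrackRevisions = True                             # revisions are shown "For Review"
    word.Selection.TypeText("revision from handwritten input")

    doc.AcceptAllRevisions()                              # accept for the entire document, or:
    # for rev in doc.Revisions: rev.Reject()              # accept/reject one at a time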
[0083] The insertion mechanism may also be a plug-in that
emulates
the Track Changes feature. Alternatively, the Track Changes Feature may be
invoked after the Comments Feature is invoked such that revisions in the
Comments
fields are displayed as revisions, i.e., "For Review". This could in
particular be useful
for large documents reviewed/revised by multiple parties.
[0084] In another embodiment, the original document is read and

converted into a document with known accessible format (e.g., ASCII for text
and
JPEG for graphics) and stored into an intermediate memory location. All
read/write
operations are performed directly on it. Once revisions are completed, or
before
transmitting to another platform, it can be converted back into the original
format and
stored into the document memory 22.
[0085] As discussed, revisions are written on a paper document
placed
on the digitizing pad 12, whereby the paper document contains/resembles the
machine code information stored in the document memory 22, and the x-y
locations
on the paper document corresponds to the x-y locations in the document memory
22. In an alternative embodiment, the revisions can be made on a blank paper
(or
on another document), whereby, the handwritten information, for example, is a
command (or a set of commands) to write or revise a value/number in a cell of
a
spreadsheet, or to update new information in a specific location of a
database; this
can be useful, for example, in cases where an action to update a spreadsheet, a
table
or a database is needed after reviewing a document (or a set of documents). In
this
embodiment, the x-y location in the Receiving Memory 16 is immaterial.
RHI processor and memory blocks
[0086] Before discussing the way in which information is
embedded
into the document memory 22 in greater detail with reference to the flow
charts, it is
necessary to define how recognized data is stored in memory and how it
correlates
to locations in the document memory 22. As previously explained, embedding the

recognized information into the document memory 22 can be either applied
concurrently or after all handwritten information has been concluded. The
Embed
function (D) referenced in Figure 4 reads data from memory blocks in the RHI
memory 20 one at a time, which corresponds to one handwritten command and its
associated text data or image data. The Embed function (D) referenced in
Figure 5
reads data from memory blocks and embeds recognized units concurrently.
[0087] Memory blocks: An example of how a handwritten
command and its associated text or image is defined in the memory block 32 is
illustrated in Figure 10. This format may be expanded, for example, if
additional
commands are added, i.e., in addition to the commands specified in the Command

field. The parameters defining the x-y location of recognized units (i.e.,
InsertionPoint1 and InsertionPoint2 in Figure 10) vary as a function of the
application. For example, the x-y locations/insertion points of text or image
in MS
Word can be defined with the parameters Page#, Line# and Column# (as
illustrated
in Figure 10). In the application Excel, the x-y locations can be translated
into the
cell location in the spreadsheet, i.e., Sheet#, Row# and Column#. Therefore,
different formats for x-y InsertionPoint1 and x-y InsertionPoint2 need to be
defined to
accommodate a variety of applications.
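A possible in-memory model of a memory block 32, with the application-dependent insertion points of Figure 10, is sketched below in Python; the field names follow the figure, but the exact layout is an assumption and not the patented format.

    from dataclasses import dataclass
    from typing import Optional, Union

    @dataclass
    class WordInsertionPoint:        # x-y location for an MS Word type document
        page: int
        line: int
        column: int

    @dataclass
    class SheetInsertionPoint:       # x-y location translated to a spreadsheet cell
        sheet: int
        row: int
        column: int

    InsertionPoint = Union[WordInsertionPoint, SheetInsertionPoint]

    @dataclass
    class MemoryBlock:
        command: str                                  # e.g., "insert_text", "delete_text", "move_text"
        insertion_point_1: InsertionPoint
        insertion_point_2: Optional[InsertionPoint]   # e.g., the target of a "move" command
        data: Optional[object] = None                 # recognized text (ASCII) or image (JPEG bytes)
        identity_flag: bool = False                   # set when the block is ready to be embedded
        embedded: bool = False                        # set once the block has been embedded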
[0088] Figure 9 is a chart of data flow of recognized units.
These are discussed below.
[0089] FIFO (First In First Out) Protocol: Once a unit is
recognized it is stored in a queue, awaiting processing by the processor of
element
20, and more specifically, by the GMB functionality 30. The "New Recog" flag
(set to
"One" by the recognition element 18 when a unit is available), indicates to
the RU
receiver 29 that a recognized unit (i.e., the next in the queue) is available.
The "New
Recog" flag is reset back to "Zero" after the recognized unit is read and
stored in the
memory elements 26 and 28 of Figure 9 (e.g., as in step 3.2. of the
subroutines
illustrated in Figure 4 and Figure 5). In response, the recognition element
18: 1)
makes the next recognized unit available to read by the RU receiver 29, and 2)
sets
the "New Recog" flag back to "One" to indicate to the RU receiver 29 that the
next
unit is ready. This process continues so long as recognized units are
forthcoming.
This protocol insures that the recognition element 18 is in synch with the
speed with
which recognized units are read from the recognition element and stored in the
RHI
memory (i.e., in memory elements 26 and 28 of Figure 9). For example, when
handwritten information is processed concurrently, there may be more than one
memory block available before the previous memory block is embedded into the
document memory 22.
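The handshake can be pictured with the Python sketch below, which models the "New Recog" flag as a one-slot channel between the recognition thread and the RU receiver; a queue.Queue would serve equally well, and all names are illustrative.

    import threading

    class RecognizedUnitChannel:
        """One-slot handshake modeled on the "New Recog" flag of Figure 9."""
        def __init__(self):
            self._lock = threading.Lock()
            self._empty = threading.Event()     # set once the receiver has taken the unit
            self._full = threading.Event()      # plays the role of the "New Recog" flag
            self._empty.set()
            self._unit = None

        def publish(self, unit):                # recognition element 18 side
            self._empty.wait()                  # never overwrite an unread unit
            with self._lock:
                self._unit = unit
                self._empty.clear()
                self._full.set()                # flag = "One": a recognized unit is available

        def consume(self):                      # RU receiver 29 side
            self._full.wait()                   # block until a recognized unit is available
            with self._lock:
                unit = self._unit
                self._full.clear()              # flag back to "Zero" after the unit is read and stored
                self._empty.set()
            return unit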
[0090] In a similar manner, this FIFO technique may also be
employed between elements 24 and 22 and between elements 16 and 18 of Figure 1

and Figure 38, and between elements 14 and 12 of Figure 1, to ensure that
independent processes are well synchronized, regardless of the speed by which
data is available by one element and the speed by which data is read and
processed
by the other element.
[0091] Optionally, the "New Recog" flag could be implemented in
h/w (such as within an IC), for example, by setting a line to "High" when a
recognized
unit is available and to "Low" after the unit is read and stored, i.e., to
acknowledge
receipt.
[0092] Process 1: As a unit, such as a character, a symbol or a
word is recognized: 1) it is stored in Recognized Units (RU) Memory 28, and 2)
its
location in the RU memory 28 along with its x-y location, as indicated in the
data
receiving memory 16, is stored in the XY-RU Location to Address in RU table
26.
This process continues so long as handwritten units are recognized and
forthcoming.
[0093] Process 2: In parallel to Process 1, the grouping into
memory blocks (GMB) functionality 30 identifies each recognized unit such as a

character, a word or a handwritten command (symbols or words), and stores them
in
the appropriate locations of memory blocks 32. In operations such as "moving
text
around", "increasing fonts size" or "changing color", an entire handwritten
command
must be concluded before it can be embedded into the document memory 22. In
operations such as "deleting text" or "inserting new text", deleting or
embedding the
text can begin as soon as the command has been identified and the deletion (or

insertion of text) operation can then continue concurrently as the user
continues to
write on the digitizing pad 12 (or on the touch screen 11).
[0094] In this last scenario, as soon as the recognized unit(s) is
incorporated into (or deleted from) the document memory 22, it is deleted from
the
RHI memory 20, i.e., from the memory elements 26, 28 and 32 of Figure 9. If
deletion is not desired, embedded units may be flagged as
"incorporated/embedded"
or moved to another memory location (as illustrated in step 6.2 of the flow
chart in
Figure 5). This should insure that information in the memory blocks is
continuously
current with new unincorporated information.
[0095] Process 3: As unit(s) are grouped into memory blocks, 1)
the identity of the recognized units (whether they can be immediately
incorporated or

not) and 2) the locations of the units that can be incorporated in the RHI
memory are
continuously updated.
[0096] 1. As units are grouped into memory blocks, a flag (i.e., "Identity-
Flag")
is set to "One" to indicate when unit(s) can be embedded. It should be noted
that
this flag is defined for each memory block and that it could be set more than
one
time for the same memory block (for example, when the user strikes through a
line of
text). This flag is checked in steps 4.1 - 4.3 of Figure 5 and is reset to
"Zero" after
the recognized unit(s) is embedded, i.e., in step 6.1 of the subroutine in
Figure 5,
and at initialization. It should be noted that the "Identity" flag discussed
above is
irrelevant when all recognized units associated with a memory block are
embedded
all at once; under this scenario and after the handwritten information is
concluded,
recognized, grouped and stored in the proper locations of the RHI memory, the
"All
Units" flag in step 6.1 of Figure 4 will be set to "One" by the GMB
functionality 30 of
Figure 9, to indicate that all units can be embedded.
[0097] 2. As units are grouped into memory blocks, a pointer for memory
block, i.e., the "Next memory block pointer" 31, is updated every time a new
memory
block is introduced (i.e., when a recognized unit(s) that is not yet ready to
be
embedded is introduced; when the "Identity" flag is Zero), and every time a
memory
block is embedded into the document memory 22, such that the pointer will
always
point to the location of the memory block that is ready (when it is ready) to
be
embedded. This pointer indicates to the subroutines Embedd1 (of Figure 12) and

Embedd2 (of Figure 14) the exact location of the relevant memory block with
the
recognized unit(s) that is ready to be embedded (as in step 1.2 of these
subroutines).
[0098] An example of a scenario under which the "next memory
block pointer" 31 is updated is when a handwritten input related to changing
font size
has begun, then another handwritten input related to changing colors has begun

(Note that these two commands cannot be incorporated until after they are
concluded), and then another handwritten input for deleting text has begun
(Note
that this command may be embedded as soon as the GMB functionality identifies
it).
[0099] The value in the "# of memory blocks" 33 indicates the
number of memory blocks to be embedded. This element is set by the GMB
functionality 30 and used in step 1.1 of the subroutines illustrated in Figure
12 and
Figure 14. This counter is relevant when the handwritten information is
embedded
all at once after its conclusion, i.e., when the subroutines of Figure 12 and
Figure 14
are called from the subroutine illustrated in Figure 4 (i.e., it is not
relevant when they
are called from the subroutine in Figure 5; its value then is set to "One",
since in this
embodiment, memory blocks are embedded one at a time).
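The bookkeeping described above can be summarized by the small Python sketch below (using the MemoryBlock model sketched earlier); it only illustrates how the pointer 31, the counter 33 and the Identity flag interact, and is not the actual GMB functionality.

    class GroupingBookkeeping:
        """Tracks which memory blocks are ready to embed."""
        def __init__(self):
            self.memory_blocks = []
            self.next_block_pointer = 0        # "Next memory block pointer" 31
            self.block_count = 0               # "# of memory blocks" 33

        def add_block(self, block):
            self.memory_blocks.append(block)
            self.block_count += 1

        def mark_ready(self, block):
            block.identity_flag = True         # the handwritten command is concluded
            self.next_block_pointer = self.memory_blocks.index(block)

        def mark_embedded(self, block):
            block.identity_flag = False        # reset, as in step 6.1 of Figure 5
            block.embedded = True
            self.block_count -= 1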
Embodiment One
[0100] Figure 11 is a block schematic diagram illustrating the
basic functional blocks and data flow according to Embodiment One. The text of

these and all other figures is largely self-explanatory and need not be
repeated
herein. Nevertheless, the text thereof may be the basis of claim language used
in
this document.
[0101] Figure 12 is a flow chart example of the Embed
subroutine D referenced in Figure 4 and Figure 5 according to Embodiment One.
The following is to be noted.
[0102] 1. When this subroutine is called by the routine illustrated in
Figure 5
(i.e., when handwritten information is embedded concurrently): 1) memory block

counter (in step 1.1) is set to 1, and 2) memory block pointer is set to the
location in
which the current memory block to be embedded is located; this value is
defined in
memory block pointers element (31) of Figure 9.
[0103] 2. When this subroutine is called by the subroutine illustrated in
Figure
4 (i.e., when all handwritten information is embedded after all handwritten
information is concluded): 1) memory block pointer is set to the location of
the first
memory block to be embedded, and 2) memory block counter is set to the value
in #
of memory blocks element (33) of Figure 9.
[0104] In operation, memory blocks 32 are fetched one at a time
from the RHI memory 20 (G) and processed as follows:
Memory blocks related to text revisions (H):
[0105] Commands are converted to keystrokes (35) in the same
sequence as the operation is performed via the keyboard and then stored in
sequence in the keystrokes memory 34. The emulate keyboard element 36 uses
this data to emulate the keyboard, such that the application reads the data as
it was
received from the keyboard (although this element may include additional keys
not
available via a keyboard such as the symbols illustrated in Figure 7, e.g. for

insertion of new text in MS Word document). The clipboard 38 can handle
insertion
of text, or text can be emulated as keyboard keystrokes. The lookup tables 40
determine the appropriate control key(s) and keystroke sequences for pre-
recorded
and built-in macros that, when emulated, execute the desired command. These
keyboard keys are application-dependent and are a function of parameters, such
as
application name, software version and platform. Some control keys, such as
the
arrow keys, execute the same commands in a large array of applications;
however,
this assumption is excluded from the design in Figure 11, i.e., by the
inclusion of the
lookup table command-keystrokes in element 40 of Figure 11. Although, in the
flow
charts in Figures 15 - 20, it is assumed that the following control keys
execute the
same commands (in the applications that are included): "Page Up", "Page Down",

"Arrow Up", "Arrow Down", "Arrow Right" and "Arrow Left" (For moving the
insertion
point within the document), "Shift + Arrow Right" (for selection of text), and
"Delete"
for deleting a selected text. Element 40 may include lookup tables for a large
array
of applications, although it could include tables for one or any desired
number of
applications.
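The command-keystrokes lookup of element 40 can be pictured as a simple table keyed by application and command, as in the Python sketch below; the entries are illustrative examples only, not a complete or authoritative table.

    COMMAND_KEYSTROKES = {
        ("MS Word", "delete_selected_text"): ["{DELETE}"],
        ("MS Word", "select_char_right"):    ["+{RIGHT}"],   # Shift + Arrow Right
        ("MS Word", "move_insertion_up"):    ["{UP}"],
        ("QuarkXPress", "delete_item"):      ["^k"],         # Cntrl-K deletes an item
        ("MS Word", "open_hyperlink"):       ["^k"],         # same keys, different command
    }

    def keystrokes_for(application, command):
        try:
            return COMMAND_KEYSTROKES[(application, command)]
        except KeyError:
            raise ValueError("no keystroke sequence recorded for %r in %r" % (command, application))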
Memory blocks related to new image (I):
[0106] The image (graphic) is first copied from the RHI memory
20, more specifically, based on information in the memory block 32, into the
clipboard 38. Its designated location is located in the document memory 22 via
a
sequence of keystrokes (e.g., via the arrow keys). It is stored (i.e., pasted
from the
clipboard 38 by the keystrokes sequence: Cntr-V) into the document memory 22.
If
the command involves another operation, such as "Reduce Image Size" or "Move
image", the image is first identified in the document memory 22 and selected.
Then
the operation is applied by the appropriate sequences of keystrokes.
[0107] Figure 15 through Figure 20, the flow charts of the
subroutines H referenced in Figure 12, illustrate execution of the first three
basic text
revisions discussed in connection with Figure 8 for MS Word and other
applications. These flow charts are self-explanatory and are therefore not
further
described herein but are incorporated into this text. The following points are
to be
noted with reference to the function Start0fDocEmb1 illustrated in the flow
chart of
Figure 15:
[0108] 1. This function is called by the function SetPointeremb1,
illustrated in
Figure 16.
[0109] 2. Although, in many applications, the shortcut keys combination
"Cntrl+Home" will bring the insertion point to the start of the document
(including MS
Word), this routine was written to execute the same operation with the arrow
keys.
[0110] 3. Designated x-y locations in the document memory 22 in this
subroutine are defined based on Page#, Line# & Column#; other subroutines are
required when the x-y definition differs.
[0111] Once all revisions are embedded, they are incorporated
in final mode according to the flow chart illustrated in Figure 21 or
according to the
flow chart illustrated in Figure 22. In this implementation example, the Track

Changes feature is used to "Accept All Changes" which embed all revisions as
an
integral part of the document.
[0112] As discussed above, a basic set of keystroke sequences
can be used to execute a basic set of commands for creation and revision of a
document in a large array of applications. For example, the arrow keys can be
used
for jumping to a designated location in the document. When these keys are used
in
conjunction with the Shift key, a desired text/graphic object can be selected.
Further,
clipboard operations, i.e., the typical combined keystroke sequences Cntrl-X
(for
Cut), Cntrl-C (for Copy) and Cntrl-V (for Paste), can be used for basic
edit/revision
operations in many applications. It should be noted that, although a
relatively small
number of keyboard control keys are available, the design of an application at
the
OEM level is unlimited in this regard. (See for example Figures 1-5). It
should be
noted that the same key combination could execute different commands. For
example, deleting an item in QuarkXPress is achieved by the keystrokes Cntrl-
K,
whereas the keystrokes Cntrl-K in MS Word open a hyperlink. Therefore, the
ConvertText1 function H determines the keyboard keystroke sequences for
commands data stored in the RHI memory by accessing the lookup table command-
keystrokes command-control-key 40 of Figure 11.
The Use of Macros:
[0113] Execution of handwritten commands in applications such
as Microsoft Word, Excel and Word Perfect is enhanced with the use of macros.

This is because sequences of keystrokes that can execute desired operations
may
simply be recorded and assigned to shortcut keys. Once the assigned shortcut
key(s) are emulated, the recorded macro is executed. Below are some useful
built-in
macros for Microsoft Word. For simplification, they are grouped based on the
operations used to embed handwritten information (D).
[0114] Bringing the insertion point to a specific location in the document:
CharRight, CharLeft, LineUp, LineDown, StartOfDocument, StartOfLine,
EndOfDocument, EndOfLine, EditGoto, GotoNextPage, GotoNextSection,
GotoPreviousPage, GotoPreviousSection, GoBack
[0115] Selection:
CharRightExtend, CharLeftExtend, LineDownExtend, LineUpExtend,
ExtendSelection, EditFind, EditReplace
[0116] Operations on selected text/graphic:
EditClear, EditCopy, EditCut, EditPaste,
CopyText, FontColors, FontSizeSelect, GrowFont, ShrinkFont, GrowFontOnePoint,
ShrinkFontOnePoint, AllCaps, SmallCaps, Bold, Italic, Underline,
UnderlineColor,
UnderlineStyle, WordUnderline, ChangeCase, DoubleStrikethrough, Font,
FontColor,
FontSizeSelect
[0117] Displaying revisions:
Hidden, Magnifier, Highlight, DocAccent, CommaAccent, DottedUnderline,
DoubleUnderline, DoubleStrikethrough, HtmlSourceRefresh, InsertFieldChar (for
enclosing a symbol for display), ViewMasterDocument, ViewPage, ViewZoom,
ViewZoom100, ViewZoom200, ViewZoom75
[0118] Images:
InsertFrame, InsertObject, InsertPicture, EditCopyPicture, EditCopyAsPicture,
EditObject, InsertDrawing, InsertFrame, InsertHorizontalLine
[0119] File operations:
FileOpen, FileNew, FileNewDefault, DocClose, FileSave, SaveTemplate
[0120] If a macro has no shortcut key assigned to it, it can be
assigned by the following procedure:
[0121] Clicking on the Tools menu and selecting Customize causes the Customize form to appear. Clicking on the Keyboard button brings up
the
dialog box Customize Keyboard. In the Categories box all the menus are listed,
and
in the Commands box all their associated commands are listed. Assigning a

shortcut key to a specific macro can be simply done by selecting the desired
built-in
macro in the command box and pressing the desired shortcut keys.
[0122] Combinations of macros can be recorded as a new
macro; the new macro runs whenever the sequence of keystrokes that is assigned
to
it is emulated. In the same manner, a macro in combination with keystrokes
(e.g., of
arrow keys) may be recorded as a new macro. It should be noted that recording
of
some sequences as a macro may not be permitted.
[0123] The use of macros, as well as the assignment of a
sequence of keys to macros can also be done in other word processors, such as
WordPerfect.
[0124] Emulating a keyboard key 36 in applications with built-in
programming capability, such as Microsoft Word, can be achieved by running
code
that is equivalent to pressing that keyboard key. Referring to Figure 35 and
Figure
36, details of this operation are presented. The text thereof is incorporated
herein by
reference. Otherwise, emulating the keyboard is a function that can be
performed in
conjunction with Windows or other computer operating systems.
Embodiment Two
[0125] Figure 13 is a block schematic diagram illustrating the
basic functional blocks and data flow according to Embodiment Two. Figure 14
is a
flow chart example of the Embed function D referenced in Figure 4 and in
Figure 5
according to Embodiment Two. Memory blocks are fetched from the RHI memory 20
(G) and processed. Text of these figures is incorporated herein by reference.
The
following should be noted with Figure 14:
[0126] 1. When this subroutine is called by the routine illustrated in
Figure 5
(i.e., when handwritten information is embedded concurrently): 1) memory block

counter (in step 1.1 below) is set to 1, and 2) memory block pointer is set to
the
location in which the current memory block to be embedded is located; this
value is
defined in memory block pointers element (31) of Figure 9.
[0127] 2. When this subroutine is called by the subroutine illustrated
in Figure
4 (i.e., when all handwritten information is embedded after all handwritten
information is concluded): 1) memory block Pointer is set to the location of
the first
memory block to be embedded, and 2) memory block counter is set to the value
in #
of memory blocks element (33) of Figure 9.
[0128] A set of programs executes the commands defined in the
memory blocks 32 of Figure 9, one at a time. Figure 26 through Figure 32, with
text
incorporated herein by reference, are flow charts of the subroutine J
referenced in
Figure 14. The programs depicted execute the first three basic text revisions
discussed in Figure 8 for MS Word. These sub-routines are self-explanatory and
are
not further explained here, but the text is incorporated by reference.
[0129] Figure 33 is the code in Visual Basic that embeds the information in Final Mode, i.e., "Accept All Changes" of the Track Changes,
which
embeds all revisions to be an integral part of the document.
[0130] Each of the macros referenced in the flow charts of
Figure 26 through Figure 32 needs to be translated into executable code such
as VB
Script or Visual Basic code. If there is uncertainty as to which method or
property to
use, the macro recorder typically can translate the recorded actions into
code. The
translated code for these macros to Visual Basic is illustrated in Figure 25.
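As an informal example of what such a translation amounts to, the Python fragment below performs a select-then-delete-then-insert revision through the Word object model (pywin32 assumed) instead of emulating shortcut keys; the counts and the replacement text are hypothetical.

    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    sel = word.Selection

    sel.MoveRight(Unit=1, Count=5, Extend=1)   # wdCharacter=1, wdExtend=1: select five characters
    sel.Delete()                               # delete the selected text
    sel.TypeText("replacement text")           # insert new text at the insertion point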
[0131] The clipboard 38 can handle the insertion of text into the
document memory 22, or text can be emulated as keyboard keystrokes. (Refer to
Figures 35-36 for details). As in Embodiment One, an image operation (K) such
as
copying an image from the RHI memory 20 to the document memory 22 is executed
as follows: an image is first copied from the RHI memory 20 into the clipboard 38. Its
designated location is located in the document memory 22. Then it is pasted
via the
clipboard 38 into the document memory 22.
[0132] The selection of a program by the program selection and
execution element 42 is a function of the command, the application, software
version, platform, and the like. Therefore, the ConvertText2 J selects a
specific
program for command data that are stored in the RHI memory 20 by accessing the

lookup command-programs table 44. Programs may also be initiated by events,
e.g., when opening or closing a file, or by a key entry, e.g., when bringing
the
insertion point to a specific cell of a spreadsheet by pressing the Tab key.
[0133] In Microsoft Word, the Visual Basic Editor can be used
to create very flexible, powerful macros that include Visual Basic
instructions that
cannot be recorded from the keyboard. The Visual Basic Editor provides
additional
assistance, such as reference information about objects and properties or an
aspect
of its behavior.
Working with the Comment feature as an insertion mechanism
[0134] Incorporating the handwritten revisions into the document
through the Comment feature may be beneficial in cases where the revisions are

mainly insertion of new text into designated locations, or when a plurality of
revisions in
various designated locations in the document need to be indexed to simplify
future
access to revisions; this can be particularly useful for large documents under
review
by multiple parties. Each comment can be further loaded into a sub-document
which
is referenced by a comment # (or a flag) in the main document. The Comments
mode can also work in conjunction with Track Changes mode.
[0135] For Embodiment One: Insert Annotation can be achieved
by emulating the keystrokes sequence Alt+Cntrl+M. The Visual Basic translated
code for the recorded macro with this sequence is "Selection.Comments.Add
Range:=Selection.Range", which could be used to achieve the same result in
embodiment 2.
[0136] Once in Comment mode, revisions in the RHI memory 20
can be incorporated into the document memory 22 as comments. If the text
includes
revisions, the Track Changes mode can be invoked prior to insertion of text
into a
comment pane.
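A short Python sketch of inserting recognized text as a comment through Word automation (pywin32 assumed) follows; in Embodiment One the same result comes from emulating Alt+Cntrl+M, and the comment text used here is hypothetical.

    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    doc = word.ActiveDocument

    doc.Comments.Add(Range=word.Selection.Range,
                     Text="revision text taken from the RHI memory 20")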
[0137] Useful built-in macros for use in the Comment mode of MS Word:
GotoCommentScope ;highlight the text associated with a comment reference
mark
GotoNextComment, ;jump to the next comment in the active document
GotoPreviousComment ;jump to the previous comment in the active document
InsertAnnotation ;insert comment
DeleteAnnotation ;delete comment
ViewAnnotation ;show or hide the comment pane
[00135] The above macros can be used in Embodiment One by
emulating their shortcut keys or in Embodiment Two with their translated code
in
Visual Basic. Figure 34 provides the translated Visual Basic code for each of
these
macros.
Spreadsheets, forms and Tables
[00136] Embedding handwritten information in a cell of a spreadsheet or
a field in a form or a table can either be for new information or it could be
for revising
existing data (e.g., deletion, moving data between cells or for adding new data in
data in
a field). Either way, after the handwritten information is embedded in the
document
memory 22, it can cause the application (e.g., Excel) to change parameters
within
the document memory 22, e.g., when the embedded information in a cell is a
parameter of a formula in a spreadsheet which when embedded changes the output

of the formula, or when it is a price of an item in a Sales Order which when
embedded changes the subtotal of the Sales Order; if desired, these new
parameters may be read by the embed functionality 24 and displayed on the
display
25 to provide the user with useful information such as new subtotals, spell
check
output, stock status of an item (e.g., as a sales order is filled in).
[0138] As discussed, the x-y location in the document memory 22 for word processing type documents can, for example, be defined by page#,
line# and character# (see figure 10, x-y locations for InsertionPoint1 and
InsertionPoint2). Similarly, the x-y location in the document memory 22 for a
form,
table or a spreadsheet can for example be defined based on the location of a
cell /
field within the document (e.g., column #, Row # and Page # for a
spreadsheet).
Alternatively, it can be defined based on number of Tabs and/or Arrow keys
from a
given known location. For example, a field in a Sales Order in the accounting
application QuickBooks can be defined based on the number of Tab from the
first
field (i.e., "customer; job") in the form.
[0139] The embed functionality can read the x-y information (see
step 2 in flow charts referenced in Figures 12 and 14), and then bring the
insertion
point to the desired location according to Embodiment One (see example flow
charts
referenced in Figures 15-16), or according to Embodiment Two (see example flow

charts for MS Word referenced in Figure 26). Then the handwritten information
can
be embedded. For example, for a Sales Order in QuickBooks, emulating the
keyboard keys combination "Cntrl+J" will bring the insertion point to the
first field,
customer; job; then, emulating three Tab keys will bring the insertion point
to the
"Date" field, or emulating eight Tab keys will bring the insertion point to
the field of
the first "Item Code".
[0140] The software application QuickBooks has no macros or
programming capabilities. Forms (e.g., Sales Order, a Bill, or a Purchase
Order) and
Lists (e.g., Chart of Accounts and customer; job list) in QuickBooks can
either be
invoked via pull-down menus via the toolbar, or via a shortcut key. Therefore,
Embodiment One could be used to emulate keyboard keystrokes to invoke specific

form or a specific list. For example, invoking a new invoice can be achieved
by
emulating the keyboard keys combination "Cntrl+N" and invoking the chart of
accounts list can be achieved by emulating the keyboard keys combination
"Cntrl+A". Invoking a Sales Order, which has no associated shortcut key
defined,
can be achieved by emulating the following keyboard keystrokes:
1. "Alt+C" ;brings the pull-down menu from the toolbar menu related to
"Customers"
2. "Alt+0" ;Invokes a new sales order form
[0141] Once a form is invoked, the insertion point can be
brought to the specified x-y location, and then the recognized handwritten
information (i.e., command(s) and associated text) can be embedded.
[0142] As far as the user is concerned, he can either write the
information (e.g., for posting a bill) on a pre-set form (e.g., in conjunction
with the
digitizing pad 12 or touch screen 11) or specify commands related to the
operation
desired. Parameters, such as the type of entry (a form, or a command), the
order for
entering commands, and the setup of the form are selected by the user in step
1
"Document Type and Preferences Setup" (A) illustrated in Figure 4 and in
Figure 5.
[0143] For example, the following sequence of handwritten commands will post a bill for purchase of office supply at OfficeMax on
03/02/05, for
a total of $45. The parameter "office supply", which is the account associated
with
the purchase, may be omitted if the vendor OfficeMax has already been set up
in
QuickBooks. Information can be read from the document memory 22 and based on
this information the embed functionality 24 can determine if the account has
previously been set up or not, and report the result on the display 25. This,
for
example can be achieved by attempting to cut information from the "Account"
field
(i.e., via the clipboard), assuming the account is already set up. The data in
the
clipboard can be compared with the expected results, and based on that,
generating
output for the display.
Bill
03/02/05
OfficeMax
$45

Office supply
[00143] In applications such as Excel, either or both Embodiment One
and Embodiment Two can be used to bring the insertion point to the desired
location
and to embed recognized handwritten information.
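A minimal Python sketch of writing a recognized value directly into a spreadsheet cell through Excel automation (pywin32 assumed) follows; the workbook path, sheet and cell are hypothetical.

    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    wb = excel.Workbooks.Open(r"C:\temp\order.xlsx")   # hypothetical workbook path
    ws = wb.Worksheets(1)                              # Sheet# taken from the memory block
    ws.Cells(4, 2).Value = 45                          # Row#/Column# translated from the x-y location
    wb.Save()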
APPLICATIONS EXAMPLES
Wireless Pad
[00144] A wireless pad can be used for transmission of an integrated
document to a computer and optionally receiving back information that is
related to
the transmitted information. It can be used, for example, in the following
scenarios:
1- Filling out a form at a doctor's office
2- Filling out an airway bill for shipping a package
3- Filling out an application for a driver's license at the DMV
4- Serving a customer at a car rental agency or at a retail store.
5- Taking notes at a crime scene or at an accident site
6- Order taking off-site, e.g., at conventions.
[0144] Handwritten information can be inserted in designated locations in a pre-designed document such as an order form, an application, a
table or
an invoice, on top of a digitizing pad 12 or using a touch screen 11 or the
like. The
pre-designed form is stored in a remote or a close-by computer. The
handwritten
information can be transmitted via a wireless link concurrently to a receiving

computer. The receiving computer will recognize the handwritten information,
interpret it, and store it in machine code in the pre-designed document. Optionally, the receiving computer will prepare a response and transmit it
back to
the transmitting pad (or touch screen), e.g., to assist the user.
[0145] For example, information filled out on the pad 12 in an
order form at a convention can be transmitted to an accounting program or a
database residing in a close-by or remote server computer as the information
is
written. In turn, the program can check the status of an item, such as cost,
price and
stock status, and transmit information in real-time to assist the order taker.
When
the order taker indicates that the order has been completed, a sales order or
an
invoice can be posted in the remote server computer.
[0146] Figure 39 is a schematic diagram of an Integrated Edited
Document System shown in connection with the use of a Wireless Pad. The
Wireless Pad comprises a digitizing pad 12, display 25, data receiver 48,
processing
circuitry 60, transmission circuitry I 50, and receiving circuitry II 58. The
digitizing
pad receives tactile positional input from a writing pen 10. The transmission
circuitry
I 50 takes data from the digitizing pad 12 via the data receiver 48 and
supplies it to
receiving circuitry I 52 of a remote processing unit. The receiving circuitry
II 58
captures information from display processing 54 via transmission circuitry II
56 of the
remote circuitry and supplies it to processing circuitry 60 for the display
25. The
receiving memory I 52 communicates with the data receiving memory 16 which
interacts with the recognition module 18 as previously explained, which in
turn
interacts with the RHI processor and memory 20 and the document memory 22 as
previously explained. The embedded criteria and functionality element 24
interacts
with the elements 20 and 22 to modify the subject electronic document and
communicate output to the display processing unit 54.
Remote Communication
[00148] In a communication between two or more parties at different
locations, handwritten information can be incorporated into a document,
information
can be recognized and converted into machine-readable text and image and
incorporated into the document as "For Review". As discussed in connection
with
Figure 6 (as an exemplary embodiment for MS Word type document), "For review"
information can be displayed in a number of ways. The "For Review" document
can
then be sent to one or more receiving parties (e.g., via email). The receiving
party
may approve portions or all of the revisions and/or revise further in
handwriting (as
the sender has done) via the digitizing pad 12, via the touch screen 11 or via
a
wireless pad. The document can then be sent again "for review". This process
may
continue until all revisions are incorporated/concluded.
Revisions via Fax
[00149] Handwritten information on a page (with or without machine-
printed information) can be sent via fax, and the receiving facsimile machine
enhanced as a Multiple Function Device (printer/fax, character recognizing
scanner)
can convert the document into a machine-readable text/image for a designated
application (e.g., Microsoft Word).
Revisions vs. original information can be
distinguished and converted accordingly based on designated revision areas
marked
on the page (e.g., by underlining or circling the revisions). Then it can be
sent (e.g.,
via email) "For Review" (as discussed above, under "Remote Communication").
Integrated Document Editor with the use of a Cell Phone
[00150] Handwritten information can be entered on a digitizing pad 12
whereby locations on the digitizing pad 12 correspond to locations on the cell
phone
display. Alternatively, handwritten information can be entered on a touch
screen that
is used as a digitizing pad as well as a display (i.e., similar to the touch
screen 11
referenced in Figure 38). Handwritten information can either be new
information, or
revision of existing stored information (e.g., a phone number, contact
name, to do
list, calendar events, an image photo, etc.). Handwritten information can be
recognized by the recognition element 18, processed by the RHI element 20 and
then embedded into the document memory 22 (e.g., in a specific memory location
of
a specific contact information). Embedding the handwritten information can,
for
example, be achieved by directly accessing locations in the document memory
(e.g.,
specific contact name); however, the method by which recognized handwritten
information is embedded can be determined at the OEM level by the manufacturer
of
the phone.
Use of the Integrated Document Editor in authentication of handwritten
information
[00151] A unique representation such as a signature, a stamp, a finger
print or any other drawing pattern can be pre-set and fed into the recognition
element
18 as units that are part of a vocabulary or as a new character. When
handwritten
information is recognized as one of these pre-set units to be placed in, e.g., a
specific expected x-y location of the digitizing pad 12 (Figure 1) or touch
screen 11
(Figure 38), an authentication or part of an authentication will pass. The
authentication will fail if there is no match between the recognized unit and
the pre-
set expected unit. This can be useful for authentication of a document (e.g.,
an
email, a ballot or a form) to ensure that the writer / sender of the document
is the
intended sender. Other examples are for authentication and access of bank
information or credit reports. The unique pre-set patterns can be either or
both:
1) stored in a specific platform belonging to the user and/or 2) stored in a
remote
database location. It should be noted that the unique pre-set patterns (e.g.,
a
signature) do not have to be disclosed in the document. For example, when an
authentication of a signature passes, the embedded functionality 24 will, for
example
embed the word "OK" in the signature line / field of the document.
[0147] Computing devices and methods discussing automatic
computation of document locations at which to automatically apply user
commands
communicated by user input on a touch screen of a computing device are
discussed
in US Patent no. 9,582,095, in US patent application no. 15/391,710, which is a continuation of US patent no. 9,582,095, and in US patent application no. 13/955,288.
[0148] The disclosed embodiments further relate to simplified
user interaction with displayed representations of one or more graphic
objects. The
simplified user interaction may utilize a touch screen of a computing device,
and
may include using gestures to indicate desired change(s) in one or more
parameters
of the graphic objects. The parameters may include one or more of a line
length, a
line angle or arc radius, a size, surface area, or any other parameter of a
graphic
object, stored in memory of the computing device or computed by functions of
the
computing device. Changes in these one or more parameters are computed by
functions of the computing device based on the user interaction on the touch
screen,
and these computed changes may be used by other functions of the computing
device to compute changes in other graphic objects.
[00154] As mentioned above, the document could be any kind of
electronic file, word processing document, spreadsheet, web page, form, e-
mail,
database, table, template, chart, graph, image, objects, or any portion of
these types
of documents, such as a block of text or a unit of data. It should be
understood that
the document or file may be utilized in any suitable application, including
but not
limited to, computer aided design, gaming, and educational materials.
[0149] It is an object of the disclosed embodiments to allow
users to quickly edit Computer Aided Design (CAD) drawings on the go or on
site
following a short interactive on-screen tutorial; there is no need for
skills/expertise
such as those needed in operating CAD drawing applications, for example, AutoCAD® software. In addition, the disclosed embodiments may provide a
significant time saving by providing simpler and faster user interaction,
while revision
iterations with professionals are avoided. Typical users may include, but are not limited
to construction builders and contractors, architects, interior designers,
patent
attorneys, inventors, and manufacturing plant managers.
[0150] It is a further object of the disclosed embodiments to
allow users to use the same set of gestures provided for editing CAD drawings
to
edit graphics documents in a variety of commonly used document formats, such
as
in doc and docx formats. It should be noted that some of the commands commonly

used in CAD drawing applications, for example AutoCAD software, such as the
command to apply a radius to a line or to add a chamfer, are not available in
word
processing applications or in desktop publishing applications.
[0151] It is a further object of the disclosed embodiments to
allow users to create CAD drawing and graphics documents, based on user
interaction on a touch screen of a computing device, in a variety of document
formats, including CAD drawings formats such as DXF format and doc and docx
formats, using the same gestures.
[00158]    It is yet a further object of the disclosed embodiments to allow users to interact with a three-dimensional representation of graphic objects on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, which in turn, will cause functions of the computing device to automatically effect the indicated changes.
[00159]    These, other embodiments, and other features of the disclosed embodiments herein will be better understood by reference to the set of accompanying drawings (Figures 40A-58B), which should be taken as an illustrative example and not limiting. Figures 40A-52D, Figures 54A-54F, and Figures 56-58A may be viewed as a portion of a tutorial of an app to familiarize users with the use of the gestures discussed in these drawings.
[00160]    While the disclosed embodiments from Figures 41A through Figure 52D are described with reference to user interaction with two-dimensional representations of graphic objects, it should be understood that the disclosed embodiments may also be implemented with reference to user interaction with three-dimensional representations of graphic objects.
[00161]    First, the user selects a command (e.g., a command to change line length, discussed in Figures 42A-42D), by drawing a letter or by selecting an icon which represents the desired command. Second, the computing device identifies the command. Then, responsive to user interaction with a displayed

representation of a graphic object on the touch screen to indicate a desired change in one or more parameters (such as in line length), the computing device automatically causes the desired change in the indicated parameter and, when applicable, also automatically effects changes in locations of the graphic object and further, as a result, in other graphic objects in memory in which the drawing is stored.
[00162]    A desired (gradual or single) change in a parameter of a graphic object, being an increase or a decrease in its value (and/or in its shape, when the shape of the graphic object is the parameter, such as a change from a straight line object to a segmented line object, or a gradual change from one shape to another, such as from a circle/sphere to an ellipse and vice versa), may be indicated by changes in positional locations along a gesture being drawn on the touch screen (as illustrated, for example, in Figures 42A-42B), during which the computing device gradually and automatically applies the desired changes as the user continues to draw the gesture. From the user perspective, it would seem as if the value of the parameter is changing at the same time as the gesture is being drawn.
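A minimal sketch of how such a gradual change might be driven by successive gesture positions is shown below; the handler name, its arguments, and the mapping of gesture travel to a length change are assumptions made only for illustration.

```python
def on_gesture_move(drawing, line, start_length, start_pos, current_pos, scale=1.0):
    """Hypothetical handler called for each new touch position along a gesture.

    start_length is the line length captured when the gesture began; the current
    length is recomputed from the total gesture travel, so the displayed value
    appears to change while the gesture is still being drawn.
    """
    dx = current_pos[0] - start_pos[0]
    dy = current_pos[1] - start_pos[1]
    travel = (dx ** 2 + dy ** 2) ** 0.5 * scale   # gesture travel mapped to a length change
    sign = 1 if dx >= 0 else -1                   # movement direction: increase or decrease
    new_length = max(0.0, start_length + sign * travel)
    drawing.change_parameter(line, "length", new_length)
```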
[00163]    The subject drawing or a portion thereof stored in the device memory (herein defined as "graphics vector") may be displayed on the touch screen as a two-dimensional representation (herein defined as "vector image"), with which the user may interact in order to communicate desired changes in one or more parameters of a graphic object, such as in line length, line angle, or arc radius. As discussed above, the computing device automatically causes these desired changes in the graphic object, and when applicable, also in its locations, and further in parameters and locations of other graphic objects within the graphics vector which may be caused as a result of the changes in the graphic object indicated by the user. The graphics vector may alternatively be represented on the touch screen as a three-dimensional vector image, so as to allow the user to view/review the effects of a change in a parameter of a graphic object in an actual three-dimensional representation of the graphics vector, rather than attempting to visualize the effects while viewing a two-dimensional representation.
[00164]    Furthermore, the user may interact with a three-dimensional vector image on the touch screen to indicate desired changes in one or more parameters of one or more graphic objects, for example, by pointing/touching or tapping at geometrical features of the three-dimensional representation, such as on surfaces or at corners, which will cause the computing device to automatically
change one or more parameters of one or more graphic objects of the graphics vector. Such user interaction with geometrical features may, for example, be along surface length, width or height, along edges of two connecting surfaces (e.g., along an edge connecting the top surface and one of the side surfaces), within surface(s) inside or outside a beveled/trimmed corner, a sloped surface (e.g., of a ramp), or within an arced surface inside or outside an arced corner.
[00165]    The correlation between user interaction with a geometrical feature of the three-dimensional vector image on the touch screen and changes in size and/or geometry of the vector graphics stored in the device memory may be achieved by first using one or more points/locations in the vector graphics stored (and defined in the xyz coordinate axis system) in the device memory (referred to herein as "locations"), and correlating them with the geometrical features of the vector image with which the user may interact to communicate desired changes in graphic objects. A location herein is defined such that changes in that location, or in a stored or computed parameter of a line (straight, arced, or segmented) extending/branching from that location, such as length, radius or angle, herein defined as "variable", can be used as the variable (or as one of the variables) in function(s) capable of computing changes in size and/or geometry of the vector graphics as a result of changes in that variable. User interaction may be defined within a region of interest, being the area of the geometrical feature on the touch screen within which the user may gesture/interact; this region may, for example, be an entire surface of a cube, or the entire cube surface with an area proximate to the center excluded. In addition, responsive to detecting finger movements in a predefined/expected direction (or in one of predefined/expected directions), or predefined/expected touching and/or tapping within this region, the computing device automatically determines/identifies the relevant variable and automatically carries out its associated function(s) to automatically effect the desired change(s) communicated by the user.
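The correlation described above can be pictured as a lookup from a region of interest to a variable and its associated function(s). The registry below is a hypothetical sketch; the region names, variable names, and update methods are illustrative only and are not part of the disclosure.

```python
# Illustrative mapping: region of interest -> (variable name, update function).
feature_registry = {
    "cube_corner":    ("edge_length",   lambda g, v: g.set_uniform_size(v)),
    "sphere_surface": ("radius",        lambda g, v: g.set_radius(v)),
    "ramp_base_edge": ("incline_angle", lambda g, v: g.set_incline(v)),
}

def handle_region_interaction(graphics_vector, region, new_value):
    """Identify the variable tied to the touched region and run its update function."""
    variable, update = feature_registry[region]
    graphics_vector.variables[variable] = new_value   # change the stored variable
    update(graphics_vector, new_value)                # recompute size/geometry from it
```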
[00166]    For example, a position of either of the edges/corners of a rectangle or of a cube is a location that may be used as a variable in a function (or in one of the functions) capable of computing a change in the geometry of the rectangle or of the cube as a result of a change in that variable. Similarly, the length of a line between two edges/corners (i.e., between two locations) of the cube or the angle between two connected surfaces of the cube may be used as the variable. Or, the
center point of a circle or of a sphere may be used as the "location" from which the radius of the circle or of the sphere is extending; the radius in this example may be a variable of a function capable of computing the circumference and surface area of the circle or the circumference, surface and volume of the sphere, as the user interacts with (e.g., touches) the sphere. Similarly, a length of a line extending from the center point of a vector graphics having a symmetrical geometry, such as a cube or a tube, or the location at the end of the line extending from the center point, may be used as a variable (or one of the variables) of a function (or of one of the functions) capable of computing changes in the size of the symmetrical vector graphics or changes in its geometry, as the user interacts with the symmetrical vector image. Or, in a three-dimensional vector graphics with symmetry in one or more of its displayed surfaces, such as in the surface of a base of a cone, two locations may be defined, the first at the center point of the surface at the base, and the second being the edge of the line extending from that location to the top of the cone; the variables in this example may be the first location and the length of the line extending from the first location to the top of the cone, which can be used in function(s) capable of computing changes in the size and geometry of the cone, as the user interacts with the vector image representing the cone. Or, a complex or non-symmetrical graphics vector, represented on the touch screen as a three-dimensional vector image, with which the user may interact to communicate changes in the graphics vector, may be divided into a plurality of partial graphics vectors in the device memory (represented as one vector image on the touch screen), each represented by one or more functions capable of computing changes in its size and geometry, whereby the size and geometry of the graphics vector may be computed by the computing device based on the sum of the partial graphics vectors.
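For the sphere example, the functions tied to the radius variable are ordinary geometric formulas; a small Python sketch (the function name is assumed) is shown below.

```python
import math

def sphere_metrics(radius: float) -> dict:
    """Derived quantities recomputed whenever the radius variable changes."""
    return {
        "circumference": 2 * math.pi * radius,
        "surface_area": 4 * math.pi * radius ** 2,
        "volume": (4.0 / 3.0) * math.pi * radius ** 3,
    }
```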
[00167]    In one embodiment, responsive to a user "pushing" (i.e., in effect touching) or tapping at a geometrical feature of a displayed representation of a graphics vector (i.e., at the vector image), the computing device automatically increases or decreases the size of the graphics vector or of one or more parameters represented on the graphic feature. For example, touching or tapping at a displayed representation of a corner of a cube or at a surface of a ramp will cause the computing device to automatically decrease or increase the size of the cube (Figures 54A-54B) or of the decline/incline angle of the ramp, respectively.
[00168]    Similarly, responsive to touching or tapping anywhere at a displayed representation of a sphere, the computing device automatically decreases or increases the radius of the sphere, respectively, which in turn decreases or increases, respectively, the circumference, surface area and volume of the sphere. Or, responsive to continued "squeezing" (i.e., holding/touching) a geometrical feature of a vector image representing a feature in a graphics vector, such as the side edges of a top of a tube or of a cube, the computing device automatically brings the outside edge(s) of that graphics vector together gradually as the user continues squeezing/holding the geometrical feature of the vector image. Similarly, responsive to the user tapping at or holding/touching the top surface of the geometrical feature, the computing device automatically and gradually brings the outside edges of the geometrical feature outward or inward, respectively, as the user continues tapping at or touching the top surface of the vector image. Or, responsive to touching at, or in proximity to, a center point of a top surface (note that the region of interest here is proximate to the center, which is excluded from the region of interest in the prior example), the computing device automatically creates a wale (or other predetermined shape) with a radius centered at that center point, and continued touching or tapping (anywhere on the touch screen) will cause the computing device to automatically and gradually decrease or increase the radius of the wale, respectively.
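One possible per-tick handler for the sphere interaction described above is sketched below; the event names ('touch_held', 'tap'), the step size, and the dictionary layout are assumptions used only for illustration.

```python
import math

def on_sphere_interaction(sphere: dict, event: str, step: float = 0.5):
    """Hypothetical per-tick handler: holding shrinks the sphere, tapping grows it."""
    if event == "touch_held":
        sphere["radius"] = max(0.0, sphere["radius"] - step)   # decrease while touching
    elif event == "tap":
        sphere["radius"] += step                               # increase while tapping
    r = sphere["radius"]
    # Refresh the derived quantities tied to the radius variable.
    sphere["circumference"] = 2 * math.pi * r
    sphere["surface_area"] = 4 * math.pi * r ** 2
    sphere["volume"] = (4.0 / 3.0) * math.pi * r ** 3
```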
[00169]    In another embodiment, first, responsive to a user indicating a desired command, the computing device identifies the command. Then, the user may gesture at a displayed geometrical feature of a vector image to indicate desired changes in the vector graphics. For example, responsive to continued 'pushing' (i.e., touching) or tapping at a displayed representation of a surface of a corner, after the user has indicated a command to add a fillet (at the surface of the inside corner) or an arc (at the surface of the outside corner) and the computing device has identified the command, the computing device automatically rounds the corner (if the corner is not yet rounded), and then causes an increase or a decrease in the value of the radius of the fillet/arc (as well as in locations of the adjacent line objects), as the user continues touching or tapping, respectively, at the fillet/arc surface (or anywhere on the touch screen). Or, after the computing device identifies a command to change line length (e.g., after the user touches a distinct icon representing the command), responsive to finger movement to the right or to the left (indicative of a desired
change in width from the right edge or from the left edge of the surface of the cube, respectively) anywhere on a surface of the displayed cube, followed by continued touching or tapping (anywhere on the touch screen), the computing device automatically decreases or increases the width of the cube, respectively, from the right edge or from the left edge of the surface, as the user continues touching or tapping. Similarly, responsive to a finger movement up or down on the surface of the cube followed by continued touching or tapping anywhere on the touch screen, the computing device automatically decreases or increases the height of the cube, respectively, from the top edge or from the bottom edge of the surface, as the user continues touching or tapping. Further, responsive to tapping or touching a point proximate to an edge along two connected surfaces of a graphic image of a cube, the computing device automatically increases or decreases the angle between the two connected surfaces. Or, after the computing device identifies a command to insert a blind hole and a point on a surface of the graphic image at which to insert the blind hole (e.g., after detecting a long press at that point, indicating the point on the surface at which to drill the hole), responsive to continued tapping or touching (anywhere on the touch screen), the computing device gradually and automatically increases or decreases the depth of the hole, respectively, in the graphics vector and updates the vector image. Similarly, responsive to identifying a command to drill a through hole at a user-indicated point on a surface of the vector image, the computing device automatically inserts a through hole in the vector graphics and updates the vector image with the inserted through hole. Further, responsive to tapping or touching at a point along the circumference of the hole, the computing device automatically increases or decreases the radius of the hole. Or, responsive to touching the inside surface of the hole, the computing device automatically invokes a selection table/menu of standard threads, from which the user may select a desired thread to apply to the outside surface of the hole.
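The width/height example above follows a two-step pattern: an initial finger movement selects the dimension and anchor edge, and continued touching or tapping then shrinks or grows that dimension. A sketch of that interpretation, with assumed names, event labels and step size, follows.

```python
def interpret_resize_gesture(cube: dict, movement: str, event: str, step: float = 1.0):
    """Sketch of the two-step cube resize described above (names are illustrative).

    'movement' is the initial finger movement on a cube surface ('left', 'right',
    'up' or 'down'); subsequent touching shrinks and tapping grows the chosen
    dimension from the corresponding edge.
    """
    axis, anchor = {
        "right": ("width", "right_edge"),
        "left":  ("width", "left_edge"),
        "up":    ("height", "top_edge"),
        "down":  ("height", "bottom_edge"),
    }[movement]
    if event == "touch_held":
        cube[axis] = max(0.0, cube[axis] - step)   # decrease while touching
    elif event == "tap":
        cube[axis] += step                         # increase while tapping
    return axis, anchor                            # the anchor edge stays fixed during resize
```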
[00170]    Figures 40A-40D relate to a command to insert a line. They illustrate the interaction between a user and a touch screen, whereby a user draws a line 3705 free-hand between two points A and B (Figure 40B). In some embodiments, an estimated distance of the line 3710 is displayed while the line is being drawn. Responsive to the user's finger being lifted from the touch screen (Figure 40C), the computing device automatically inserts a straight-line object in the device memory, at memory locations represented by points A and B on the touch

screen, where the drawing is stored, and displays the straight-line object
3715 along
with its actual distance 3720 on the touch screen.
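A minimal sketch of the insert-line behavior on finger lift might look as follows; the drawing structure and helper name are assumptions for illustration, not the actual implementation.

```python
import math

def on_finger_lift(drawing: dict, point_a, point_b):
    """Sketch for Figures 40A-40D: replace the free-hand stroke with a straight line.

    On finger lift, a straight-line object between A and B is stored in memory
    together with its actual length, which can then be displayed on the screen.
    """
    length = math.dist(point_a, point_b)                 # actual distance between A and B
    line = {"kind": "line", "start": point_a, "end": point_b, "length": length}
    drawing["objects"].append(line)                      # insert the straight-line object
    return line
```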
[00171]    Figures 41A-41C relate to a command to delete an object. The user selects the desired object 3725 by touching it (Figure 41A) and then may draw a command indicator 3730, for example, the letter 'd', to indicate the command 'Delete' (Figure 41B). In response, the computing device identifies the command and deletes the object (Figure 41C). It should be noted that the user may indicate the command by selecting an icon representing the command, by an audible signal, and the like.
[00172]    Figures 42A-42D relate to a command to change line length. First, the user selects the line 3735 by touching it (Figure 42A) and then may draw a command indicator 3740, for example, a letter, to indicate the desired command (Figure 42B). It should be noted that selecting line 3735 prior to drawing the command indicator 3740 is optional, for example, to view its distance or to copy or cut it. Then, responsive to each of the gradual changes in user-selected positional locations on the touch screen starting from point 3745 of line 3735, the computing device automatically causes each of the respective gradual changes in line length stored in the device memory and updates the length on display box 3750 (Figures 42B-42D).
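A simple sketch of how each new touch position could be turned into an updated line length is shown below; the fixed endpoint, dictionary layout, and function name are illustrative assumptions.

```python
def update_line_length(line: dict, fixed_end, dragged_pos):
    """Sketch of the change-line-length interaction in Figures 42A-42D.

    The end opposite the dragged one stays fixed; each new touch position along the
    gesture produces a new length, which is stored and shown in the display box.
    """
    dx = dragged_pos[0] - fixed_end[0]
    dy = dragged_pos[1] - fixed_end[1]
    line["length"] = (dx ** 2 + dy ** 2) ** 0.5
    line["end"] = dragged_pos        # the moving endpoint follows the gesture
    return line["length"]            # value to show in the on-screen display box
```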
[00173]    Figures 43A-43D relate to a command to change line angle. The user may optionally first select line 3755 (Figure 43A) and then may draw a command indicator 3760, for example, the letter 'a', to indicate the desired command (Figure 43B). Then, in a similar manner to changing line length, responsive to each of the gradual changes in user-selected positional locations (up or down) on the touch screen starting from the edge 3765 of line 3755, the computing device automatically causes each of the respective gradual changes in line angle stored in the device memory and updates the angle of the line, for example, relative to the x-axis, in the device memory, and also updates the angle on display box 3770 (Figures 43B-43D).
[00174]    It should be noted that if the user indicates both commands, to change line length and to change line angle, prior to drawing the gesture discussed in the two paragraphs above (for example, by selecting two distinct icons, each representing one of the commands), then the computing device will automatically cause gradual changes in length and/or angle of the line based on the direction of movement of the gesture, and accordingly will update the values of either or both the
length and the angle on the display box at each of the gradual changes in user-selected positional locations on the touch screen.
[00175]    Figures 44A-44D relate to a command to apply a radius to a line or to change the radius of an arc between A and B. The user may optionally first select the displayed line or arc, being line 3775 in this example (Figure 44A), and then may draw a command indicator 3780, for example, the letter 'R', to indicate the desired command (Figure 44B). Then, in a similar manner to changing line length or line angle, responsive to each of the gradual changes in user-selected positional locations on the touch screen across the displayed line/arc 3785, starting from a position along the displayed line/arc 3775, the computing device automatically causes each of the respective gradual changes in the radius of the line/arc in the drawing stored in the device memory and updates the radius of the arc on display box 3790 (Figures 44C-44D).
[00176]    Figures 45A-45D relate to a command to make a line parallel to another line. First, the user may draw a command indicator 3795, for example, the letter 'N', to indicate the desired command and then touch a reference line 3800 (Figure 45A). The user then selects target line 3805 (Figure 45B) and lifts the finger (Figure 45C). Responsive to the finger being lifted, the computing device automatically alters the target line 3805 in the device memory to be parallel to the reference line 3800 and updates the displayed target line on the touch screen (Figure 45D).
[00177]    Figures 46A-46D relate to a command to add a fillet (at a 2D representation of a corner or at a 3D representation of an inside surface of a corner) or an arc (at a 3D representation of an outside surface of a corner). First, the user may draw a command indicator 3810 to indicate the desired command and then touch corner 3815 to which to apply a fillet (Figure 46A). In response, the computing device converts the sharp corner 3815 into rounded corner 3820 (having a default radius value) and zooms in on that corner (Figure 46B). Then, responsive to each of the gradual changes in user-selected positional locations on the touch screen across the displayed arc 3825 at a position along it, the computing device causes each of the respective gradual changes in the radius of the arc stored in the device memory and in its locations in memory represented by A and B, such that the arc is tangent to the adjacent lines 3830 and 3835 (Figure 46C). Next, the user touches the screen and in response the computing device zooms out the drawing to its original zoom
percentage (Figure 46D). Otherwise, the user may indicate additional changes
in the
radius, even after the finger is lifted.
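The tangency condition mentioned above fixes where the fillet arc must meet the two adjacent lines for a given radius. The sketch below computes those tangent points from the corner and one point along each adjacent line; the helper name and inputs are assumptions made for illustration only.

```python
import math

def fillet_tangent_points(corner, p1, p2, radius):
    """Return the points where a fillet arc of the given radius touches each line.

    'corner' is the shared corner point; 'p1' and 'p2' are points along the two
    adjacent lines. The arc is tangent to both lines when it meets each one at the
    same setback distance from the corner.
    """
    v1 = (p1[0] - corner[0], p1[1] - corner[1])
    v2 = (p2[0] - corner[0], p2[1] - corner[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    angle = math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))   # angle between the two lines
    setback = radius / math.tan(angle / 2.0)                  # corner-to-tangent-point distance

    def along(v, n):
        return (corner[0] + v[0] / n * setback, corner[1] + v[1] / n * setback)

    return along(v1, n1), along(v2, n2)
```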
[00178]    Figures 47A-47D relate to a command to add a chamfer. First, the user may draw a command indicator 3840 to indicate the desired command and then touch the desired corner 3845 to which to apply a chamfer/bevel (Figure 47A). In response, the computing device trims the corner between two locations represented by A and B on the touch screen, and sets the height H and width W at default values, and as a result also the angle a (Figure 47B). Then, responsive to each of the gradual changes in user-selected positional locations on the touch screen (in parallel motions to line 3850 and/or line 3855), the computing device causes gradual changes in the width W and/or height H, respectively, as stored in the device memory, as well as in locations A and B as stored in memory, and updates their displayed representation (Figure 47C). Next, the user touches the screen and in response the computing device zooms out the drawing to its original zoom percentage (Figure 47D). Otherwise, the user may indicate additional changes in parameters W and/or H, even after the finger is lifted.
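For the chamfer, the trimmed width W and height H determine locations A and B and, as a result, the chamfer angle. The sketch below assumes, purely for illustration, that the two adjacent lines are axis-aligned; names and layout are hypothetical.

```python
import math

def chamfer_angle(width: float, height: float) -> float:
    """Chamfer angle implied by the trimmed width W and height H, in degrees."""
    return math.degrees(math.atan2(height, width))

def apply_chamfer(corner, width: float, height: float):
    """Trim an axis-aligned corner: A sits 'width' along one line, B 'height' along the other."""
    a = (corner[0] - width, corner[1])
    b = (corner[0], corner[1] - height)
    return a, b, chamfer_angle(width, height)
```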
[00179]    Figures 48A-48F relate to the command to trim an object. First, the user may draw a command indicator 3860 to indicate the desired command (Figure 48A). Next, the user touches target object 3865 (Figure 48B) and then reference object 3870 (Figure 48C); it should be noted that these steps are optional. The user then moves reference object 3870 to indicate the desired trim in target object 3865 (Figures 48D-48E). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the desired trim 3875 to target object 3865 (Figure 48F).
[00180]    Figures 49A-49D relate to a command to move an arced object. First, the user may optionally select object 3885 (Figure 49A) and then draw a command indicator 3880 to indicate the desired command, and then touches the displayed target object 3885 (Figure 49B) (at this point the object is selected), and moves it until edge 3890 of the arc 3885 is at or proximate to edge 3895 of line 3897 (Figure 49C). Then, responsive to the finger being lifted from the screen, the computing device automatically moves the arc 3885 such that it is tangent to line 3897 where the edges meet (Figure 49D).
[00181]    Figures 50A-50D relate to the 'No Snap' command. First, the user may touch command indicator 3900 to indicate the desired command
(Figure 50A), and then the user may touch the desired intersection 3905 to unsnap (Figure 50B). Then, responsive to the finger being lifted from the touch screen, the computing device automatically applies the no-snap 3910 at intersection 3905 and zooms in on the intersection (Figure 50C). Touching again causes the computing device to zoom out the drawing to its original zoom percentage (Figure 50D).
[00182]    Figures 51A-51D illustrate another example of use of the 'No Snap' command. First, the user may touch command indicator 3915 to indicate the desired command (Figure 51A). Next, the user may draw a command indicator 3920, for example, a letter, to indicate the desired command to change line length (Figure 51B). Then, responsive to each of the gradual changes in user-selected positional locations on the touch screen, starting from the edge 3925 of line 3930 and ending at position 3935 on the touch screen, across line 3940, the computing device automatically unsnaps intersection 3945 or prevents the intersection 3945 from being snapped, if the snap operation is set as a default operation by the computing device.
[00183]    Figures 52A-52D illustrate another example of use of the command to trim an object. First, the user may draw a command indicator 3950 to indicate the desired command (Figure 52A). Next, the user moves reference object 3955 to indicate the desired trim in target object 3960 (Figures 52B-52C). Then, responsive to the user's finger being lifted from the touch screen, the computing device automatically applies the desired trim 3965 to target object 3960 (Figure 52D).
[00184]    Commands to copy and cut graphic objects may be added to the set of gestures discussed above, and carried out, for example, by selecting one or more graphic objects (as shown, for example, in Figure 42A), and then the user may draw a command indicator or touch an associated distinct icon on the touch screen to indicate the desired command, to copy or cut. The command to paste may also be added, and may be carried out, for example, by drawing a command indicator, such as a letter representing the paste command (or by touching a distinct icon representing the command), and then pointing at a position on the touch screen, which represents a location in memory at which to paste the clipboard content. The copy, cut and paste commands may be useful, for example, in copying a portion of a CAD drawing representing a feature such as a bathtub and pasting it at another location of the drawing representing a second bathroom of a renovation site.
[00185]    Figure 53 is an example of a user interface with icons corresponding to the available user commands discussed in the Figures above, and a 'Gesture Help' by each distinct icon indicating a letter/symbol which may be drawn to indicate a command, instead of selecting the icon representing the command.
[00186]    Figures 54A-54B illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a cube. Responsive to a user touching corner 3970 of vector image 3975, representing a graphics vector of a cube (Figure 54A), for a predetermined period of time, the computing device interprets/identifies the touching at corner 3970 as a command to proportionally decrease the dimensions of the cube. Then, responsive to continued touching at corner 3970, the computing device automatically and gradually decreases the length, width and height of the cube in the vector graphics, displayed at 3977, 3980 and 3985, respectively, at the same rate, and updates the displayed length 3990, width 3950 and height 4000 in vector image 4005 (Figure 54B).
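A per-tick sketch of the proportional decrease described for Figures 54A-54B is shown below; the rate, time step and dictionary layout are assumed values used only for illustration.

```python
def shrink_cube_proportionally(cube: dict, rate: float = 0.02, dt: float = 1.0) -> dict:
    """While the corner is held, shrink length, width and height at the same rate.

    Called once per tick of continued touching, so the cube stays proportional as it
    shrinks and the displayed dimensions can be refreshed after each call.
    """
    factor = max(0.0, 1.0 - rate * dt)
    for key in ("length", "width", "height"):
        cube[key] *= factor
    return cube
```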
[00187]    Figures 54C-54D illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a sphere. Responsive to continued touching at point 4010 or anywhere on the vector image 4015 of a sphere (Figure 54C), representing a graphics vector of the sphere, for a predetermined period of time, the computing device interprets/identifies the touching at point 4010 as a command to decrease the radius of the sphere. Then, responsive to continued touching at point 4010, the computing device automatically and gradually decreases the radius of the vector graphics of the sphere, and updates the vector image 4017 (Figure 54D) on the touch screen.
[00188]    Figures 54E-54F illustrate an example of before and after interacting with a three-dimensional representation of a vector graphics of a ramp. Responsive to a user touching at point 4020 or any point along edge 4025 of base 4030 of the vector image 4035 of a ramp (Figure 54E), representing a graphics vector of the ramp, for a predetermined period of time, the computing device interprets/identifies the touching as a command to increase incline angle 4040 and decrease distance 4045 of base 4030 in the graphic object, such that distance 4050 along the ramp remains unchanged. Then, responsive to continued touching at point 4020, the computing device automatically and gradually increases incline angle 4040 and decreases distance 4045 of base 4030 in the graphics vector, such that distance 4050 along the height of the ramp remains unchanged, and updates displayed

incline angle 4040 and distance 4045 to incline angle 4055 and distance 4060 in vector image 4065 (Figure 54F). Similarly, responsive to tapping at point 4020, the computing device may be configured to automatically and gradually decrease incline angle 4040 and increase distance 4045, such that distance 4050 along the ramp will remain unchanged.
[00189]    Figures 55A-55B illustrate examples of user interface menus for the text-editing selection mode, discussed below.
[00190]    Figure 56 is an example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to underline, for example by touching icon 4055 representing the command. Then, responsive to the user drawing line 4060 free-hand between A and B, from the right to the left or from the left to the right, to indicate the locations in memory at which to underline text, the computing device automatically underlines the text at the indicated locations and displays a representation of the underlined text on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference.
[00191]    Figure 57 is another example of a gesture to mark text in command mode. First, the user indicates a desired command, such as a command to move text, for example by touching icon 4065 representing the command. Then, responsive to the user drawing a zigzagged line 4070 free-hand between A and B, from the right to the left or from the left to the right, to indicate the locations in memory at which to select text to be moved, the computing device automatically selects the text at the indicated locations in memory and highlights it on the touch screen as the user continues drawing the gesture or when the finger is lifted from the touch screen, depending on the user's predefined preference. At this point, the computing device automatically switches to data entry mode. Next (not shown), responsive to the user pointing at a position on the touch screen, indicative of a location in memory at which to paste the selected text, the computing device automatically pastes the selected text, starting from that indicated location. Once the text is pasted, the computing device will automatically revert to command mode.
[00192]    In one embodiment, the computing device invokes command mode or data entry mode; command mode is invoked when a command intended to be applied to text or graphics already stored in memory and displayed on the touch screen is identified, and data entry mode is invoked when a command to insert or
paste text or graphics is identified. In command mode, data entry mode is disabled to allow for unrestricted/unconfined user input, on the touch screen of the computing device, in order to indicate locations of displayed text/graphics at which to apply user pre-defined command(s), and in data entry mode, command mode is disabled to enable pointing at positions on the touch screen indicative of locations in memory at which to insert text, insert a drawn shape such as a line, or paste text or graphics. Command mode may be set to be the default mode.
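The mode handling described above can be summarized as a small state machine; the sketch below uses hypothetical class, method, and command names and is not taken from the disclosure.

```python
class EditorMode:
    """Illustrative mode handling: command mode by default, data entry for insert/paste."""
    COMMAND, DATA_ENTRY = "command", "data_entry"

    def __init__(self):
        self.mode = self.COMMAND          # command mode as the default mode

    def on_command_identified(self, command: str):
        if command in ("insert_text", "insert_shape", "paste"):
            self.mode = self.DATA_ENTRY   # pointing now indicates an insertion location
        else:
            self.mode = self.COMMAND      # marking gestures now indicate target locations

    def allows_marking_gesture(self) -> bool:
        return self.mode == self.COMMAND  # data entry is disabled in command mode

    def allows_insertion_point(self) -> bool:
        return self.mode == self.DATA_ENTRY
```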
[00193]    When in command mode, the drawing by the user on displayed text or graphics (defined herein as "marking gesture") to indicate locations in memory (at which to apply pre-defined command(s)) will not be interpreted by the computing device as a command to insert a line, and stopping movement while drawing the marking gesture or simply touching a position on the touch screen will not be interpreted by the computing device as a position indicative of a location in memory where to insert text or graphics, since in this mode, data entry mode is disabled. In one embodiment, however, when in data entry mode, the computing device will interpret such a position as indicative of an insertion location in memory only after the finger is lifted from the touch screen, to further improve robustness/user friendliness; the benefit of this feature with respect to control over a zooming functionality is further discussed below. The user may draw the marking gesture free-hand on displayed text on the touch screen to indicate desired locations of text characters in memory where a desired command, such as bold, underline, move or delete, should be applied, or on displayed graphics (i.e., on a vector image) to indicate desired locations of graphic objects in memory where a desired command, such as select, delete, replace, change object color, color shade, size, style, or line thickness, should be applied.
[00194]    Prior to drawing the marking gesture, the user may define a command by selecting a distinct icon representing the command from a bar menu on the touch screen, illustrated for example in Figure 53. Alternatively, the user may define a desired command by drawing a letter/symbol which represents the command; under this scenario, however, both command mode and data entry mode may be disabled while drawing the letter/symbol, to allow for unconfined free-hand drawing of the letter/symbol anywhere on the touch screen, such that the drawing of a letter/symbol will not be interpreted as the marking gesture, or as a drawn feature,
such as a drawn line, to be inserted, and a finger being lifted from the touch
screen
will not be interpreted as inserting or pasting data.
[00195]    It should be noted that the drawing of the marking gesture on displayed text/graphics to indicate the desired locations in memory at which to apply user-indicated commands to text/graphics can be achieved in a single step, and if desired, in one or more time interval breaks, if for example the user lifts his/her finger from the touch screen for up to a predetermined period of time, or under other predetermined conditions, such as between double taps, during which the user may, for example, wish to review a portion in another document before deciding whether to continue marking additional displayed text/graphics from the last indicated location prior to the time break or on other displayed text/graphics, or to simply conclude the marking. It should be further noted that the marking gesture may be drawn free-hand in any shape, such as in a zigzag (Figure 57), a line across (Figure 56), or a line above or below displayed text/graphics. The user may also choose to display the marking gesture as it is being drawn, and to draw back along the gesture (or anywhere along it) to undo applied command(s) to text/graphics indicated by previously marked area(s) of displayed text/graphics.
[00196]    In another embodiment, especially useful in, but not limited to, text editing, responsive to a gesture being drawn on the touch screen to mark displayed text or graphics while in command mode and no command was selected prior to drawing the gesture, the computing device automatically invokes selection mode, selects the marked/indicated text/graphics on the touch screen as the finger is lifted from the touch screen, and automatically invokes a set of icons, each representing a distinct command, arranged in menus and/or tooltips by the selected text/graphics (Figures 55A-55B). In these examples, when the user selects one or more of the displayed icons, the computing device automatically applies the corresponding command(s) to the selected text. The user may exit the selection mode by simply dismissing the screen, in response to which the computing device will automatically revert to command mode. The computing device will also automatically revert to command mode after the selected text is moved (if the user had indicated a command to move text, pointed at a position on the touch screen representing the location in memory at which to move the selected text, and then lifted his/her finger). As in command mode, data entry mode is disabled while in selection mode to allow for unrestricted/unconfined drawing of the marking gesture
to mark displayed text or graphics. Selection mode may be useful, for example, when the user wishes to focus on a specific portion of text and perform some trial and error prior to concluding the edits on that portion of text. When the selected text is a single word, the user may, for example, indicate a command to suggest a synonym, capitalize the word, or change its font to all caps.
[00197]    Figures 58A-58B illustrate an example of automatically zooming text while drawing the gesture to mark text, as discussed below.
[00198]    In another embodiment, while in command mode or in data entry mode, or while drawing the marking gesture during selection mode (prior to the finger being lifted from the touch screen), responsive to detecting a decrease or an increase in speed between two positions on the touch screen while the marking gesture, or a shape such as a line to be inserted, is being drawn, the computing device automatically zooms in or zooms out, respectively, a portion of the displayed text/graphic on the touch screen which is proximate to the current position along the marking gesture or the drawn line. In addition, responsive to detecting a user-selected position on the touch screen with no movement for a predetermined period of time while in either command mode or data entry mode, the computing device automatically zooms in on a portion of the displayed text/graphic on the touch screen which is proximate to the selected position and further continues to gradually zoom in up to a maximal predetermined zoom percentage as the user continues to point at that selected position; this feature may be useful especially near or at the start and end points along the gesture or along the drawn line, as the user may need to see more details in their proximity so as to point closer at the desired displayed text character/graphic object or its location; naturally, the finger is at rest at the starting point (prior to drawing the gesture or the line) as well as while at a potential end point. As discussed, in one embodiment, when in data entry mode, the finger (or writing tool) being at rest on the touch screen will not be interpreted as the insertion location in memory at which to insert text/graphics until after the finger (or writing tool) is lifted from the touch screen, and therefore, the user may let his/her finger rest periodically (to zoom in) while approaching the intended position. Furthermore, responsive to detecting continued tapping, the computing device may be configured to automatically zoom out as the user continues tapping.
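As one possible reading of the speed-driven zoom described above, the sketch below zooms in when the gesture slows below a threshold and zooms out when it speeds up; the thresholds, step size, and function name are assumed values for illustration only.

```python
import math

def zoom_from_gesture_speed(prev_pos, curr_pos, dt, zoom,
                            min_zoom=1.0, max_zoom=4.0,
                            slow=50.0, fast=300.0, step=0.1):
    """Adjust the zoom level from the gesture speed between two touch positions.

    Slowing below 'slow' px/s zooms in near the current position for precision;
    speeding above 'fast' px/s zooms back out, within the predetermined limits.
    """
    speed = math.dist(prev_pos, curr_pos) / dt
    if speed < slow:
        zoom = min(max_zoom, zoom + step)   # slow movement: zoom in gradually
    elif speed > fast:
        zoom = max(min_zoom, zoom - step)   # fast movement: zoom out gradually
    return zoom
```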
[00199]    The disclosed embodiments may further provide a facility that allows a user to specify customized gestures for interacting with the displayed
representations of the graphic objects. The user may be prompted to select one or more parameters to be associated with a desired gesture. In some aspects, the user may be presented with a list of available parameters, or may be provided with a facility to input custom parameters. Once a parameter has been specified, the user may be prompted to associate desired gesture(s), indicative of change(s) in the specified parameter, with a geometrical feature within the vector image. In some aspects, the user may be prompted to input a desired gesture indicative of an increase in the value of the specified parameter and then to input another desired gesture indicative of a decrease in the value of the specified parameter; in other aspects, the user may be prompted to associate desired gesture(s) indicative of change(s) in its shape (when the shape/geometry of the graphic object(s) is the specified parameter); and in other aspects, the user may be prompted to associate direction(s) of movement of a drawn gesture with a feature within the geometrical feature, and the like. Then, the computing device may associate the custom parameter(s) with one or more functions, or the user may be presented with a list of available functions, or the user may be provided with a facility to specify custom function(s), such that when the user inputs the specified gesture(s) within other, similar geometrical features within the same vector image or within another vector image, the computing device will automatically effect the indicated changes in the vector graphics, represented by the vector image, in memory of the computing device.
[00200]    It is noted that the embodiments described herein can be used individually or in any combination thereof. It should be understood that the foregoing description is only illustrative of the embodiments. Various alternatives and modifications can be devised by those skilled in the art without departing from the embodiments. Accordingly, the present embodiments are intended to embrace all such alternatives, modifications and variances that fall within the scope of the appended claims.
[00201]    Various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, all such and similar modifications of the teachings of the disclosed embodiments will still fall within the scope of the disclosed embodiments.

[00202]    Various features of the different embodiments described herein are interchangeable, one with the other. The various described features, as well as any known equivalents, can be mixed and matched to construct additional embodiments and techniques in accordance with the principles of this disclosure.
[00203]    Furthermore, some of the features of the exemplary embodiments could be used to advantage without the corresponding use of other features. As such, the foregoing description should be considered as merely illustrative of the principles of the disclosed embodiments and not in limitation thereof.

Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer , as well as the definitions for Patent , Administrative Status , Maintenance Fee  and Payment History  should be consulted.

Administrative Status

Title Date
Forecasted Issue Date Unavailable
(86) PCT Filing Date 2018-09-18
(87) PCT Publication Date 2019-03-21
(85) National Entry 2020-03-11
Examination Requested 2023-12-29

Abandonment History

There is no abandonment history.

Maintenance Fee

Last Payment of $100.00 was received on 2024-01-10


 Upcoming maintenance fee amounts

Description Date Amount
Next Payment if small entity fee 2024-09-18 $100.00
Next Payment if standard fee 2024-09-18 $277.00

Note : If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following

  • the reinstatement fee;
  • the late payment fee; or
  • additional fee to reverse deemed expiry.

Patent fees are adjusted on the 1st of January every year. The amounts above are the current amounts if received by December 31 of the current year.
Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Year Due Date Amount Paid Paid Date
Application Fee 2020-03-11 $200.00 2020-03-11
Maintenance Fee - Application - New Act 2 2020-09-18 $50.00 2020-09-15
Maintenance Fee - Application - New Act 3 2021-09-20 $50.00 2021-09-15
Maintenance Fee - Application - New Act 4 2022-09-19 $50.00 2022-08-30
Request for Examination 2023-09-18 $408.00 2023-12-29
Late Fee for failure to pay Request for Examination new rule 2023-12-29 $150.00 2023-12-29
Maintenance Fee - Application - New Act 5 2023-09-18 $100.00 2024-01-10
Late Fee for failure to pay Application Maintenance Fee 2024-01-10 $150.00 2024-01-10
Owners on Record

Note: Records showing the ownership history in alphabetical order.

Current Owners on Record
ZEEVI, ELI
Past Owners on Record
None
Past Owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents

Document Description    Date (yyyy-mm-dd)    Number of pages    Size of Image (KB)
Abstract 2020-03-11 2 64
Claims 2020-03-11 15 535
Drawings 2020-03-11 54 1,342
Description 2020-03-11 51 2,643
Representative Drawing 2020-03-11 1 9
Patent Cooperation Treaty (PCT) 2020-03-11 2 79
International Search Report 2020-03-11 1 56
Amendment - Claims 2020-03-11 7 314
Statement Amendment 2020-03-11 2 80
National Entry Request 2020-03-11 12 272
Voluntary Amendment 2020-03-11 77 3,497
Correspondence 2020-03-11 3 104
Cover Page 2020-04-30 2 39
Maintenance Fee Payment 2020-09-15 1 33
Maintenance Fee Payment 2021-09-15 1 33
Maintenance Fee Payment 2022-08-30 1 33
RFE Fee + Late Fee / Amendment 2023-12-29 13 470
Maintenance Fee Payment 2024-01-10 1 33
Claims 2023-12-29 7 362
Office Letter 2024-03-28 2 189
Description 2020-03-12 46 3,523
Claims 2020-03-12 21 981
Drawings 2020-03-12 54 1,974