Patent Summary 2934343

(12) Patent: (11) CA 2934343
(54) French Title: PROCEDE ET SYSTEME PERMETTANT DE DETERMINER LA COMPATIBILITE DE SYSTEMES INFORMATIQUES
(54) English Title: METHOD AND SYSTEM FOR DETERMINING COMPATIBILITY OF COMPUTER SYSTEMS
Status: Granted and Issued
Bibliographic Data
(51) International Patent Classification (IPC):
  • H04L 41/02 (2022.01)
  • G06F 15/00 (2006.01)
(72) Inventors:
  • YUYITUNG, TOM (Canada)
  • HILLIER, ANDREW D. (Canada)
(73) Owners:
  • CIRBA IP INC.
(71) Applicants:
  • CIRBA IP INC. (Canada)
(74) Agent: CPST INTELLECTUAL PROPERTY INC.
(74) Associate agent:
(45) Issued: 2021-09-07
(22) Filed: 2007-04-23
(41) Open to Public Inspection: 2007-11-01
Examination requested: 2016-06-29
Licence available: N/A
Dedicated to the Public: N/A
(25) Language of filing: English

Patent Cooperation Treaty (PCT): No

(30) Application Priority Data:
Application No. Country/Territory Date
60/745,322 (United States of America) 2006-04-21

Abstracts

French Abstract

Des analyses de compatibilité et de consolidation peuvent être effectuées sur un groupe de systèmes pour évaluer la compatibilité 1-1 de chaque paire source-cible, évaluer la compatibilité multidimensionnelle d'ensembles de transfert spécifiques et déterminer la meilleure solution de consolidation sur la base de diverses contraintes parmi lesquelles figurent les résultats de compatibilité des ensembles de transfert. Ces analyses peuvent être effectuées conjointement ou de manière indépendante. Ces analyses sont fondées sur des données de systèmes collectées concernant la configuration technique, les facteurs de fonctionnement et les charges desdits systèmes. Des ensembles de règles différentielles et des algorithmes de compatibilité de charges sont utilisés pour évaluer la compatibilité de systèmes. Les résultats de compatibilité relatifs à la configuration technique, au fonctionnement et aux charges sont combinés pour créer une évaluation de compatibilité globale. Ces résultats sont représentés visuellement au moyen de cartes de marquage codées en couleur.


English Abstract

Compatibility and consolidation analyses can be performed on a collection of systems to evaluate the 1-to-1 compatibility of every source-target pair, evaluate the multi-dimensional compatibility of specific transfer sets, and to determine the best consolidation solution based on various constraints including the compatibility scores of the transfer sets. The analyses can be done together or be performed independently. These analyses are based on collected system data related to their technical configuration, business factors and workloads. Differential rule sets and workload compatibility algorithms are used to evaluate the compatibility of systems. The technical configuration, business and workload related compatibility results are combined to create an overall compatibility assessment. These results are visually represented using color coded scorecard maps.

Claims

Note: The claims are shown in the official language in which they were submitted.


Claims:

1. A computer implemented method for placing source systems on target systems, the method comprising:
evaluating one or more source systems against other source systems and against one or more target systems using at least one rule set that evaluates parameters of the systems to determine whether the systems can or can not be placed together on a specific target system, wherein the evaluating comprises one or more of: a 1-to-1 compatibility analysis, an N-to-1 compatibility analysis, or an N-by-N compatibility analysis; and
placing the source systems onto the target systems in accordance with technical, business, and workload constraints determined in the compatibility analysis.

2. The method of claim 1, further comprising:
generating a plurality of potential workload placement solutions; and
selecting an optimal workload placement solution, from the plurality of potential workload placement solutions.

3. The method of claim 2, wherein the optimal workload placement solution is selected according to one or more criteria, the one or more criteria comprising at least one of a compatibility score, and a number of transfers.

4. The method of claim 1, further comprising ranking a plurality of target systems as candidates to host one or more source workloads according to compatibility scores and a resultant target resource utilization and selecting a best candidate.

5. The method of claim 1, wherein at least one compatibility parameter relates to the presence of licensed software, the method further comprising determining a combination of workloads running particular software applications that when combined onto the target system minimizes the software licensing costs.

6. The method of claim 1, further comprising determining at least one optimal combination of workloads corresponding to virtual machines operable on the target system.

7. The method of claim 6, wherein a virtual machine corresponds to an abstraction of a physical system, enabling an operating system to be independently installed on each virtual system residing on a same physical system.

8. The method of claim 6, wherein a virtual machine corresponds to a logical subdivision of an operating system installed on a physical system, enabling isolation in operation without executing a different operating system.

9. The method of claim 1, further comprising determining at least one optimal combination of applications operable independently on the target system.

10. The method of claim 9, wherein the optimal combination of applications coexist in a same operating system environment.

11. The method of claim 9, wherein the optimal combination of applications is determined using a multi-dimensional analysis of inter-compatibility between a plurality of workloads and the target system.

12. A computer readable medium comprising computer executable instructions for placing source systems on target systems, the computer executable instructions comprising instructions for performing the method of any one of claims 1 to 11.

13. A system for placing source systems on target systems, the system comprising a processor and memory, the memory comprising computer executable instructions for operating the system by:
evaluating one or more source systems against other source systems and against one or more target systems using at least one rule set that evaluates parameters of the systems to determine whether the systems can or can not be placed together on a specific target system, wherein the evaluating comprises one or more of: a 1-to-1 compatibility analysis, an N-to-1 compatibility analysis, or an N-by-N compatibility analysis; and
placing the source systems onto the target systems in accordance with technical, business, and workload constraints determined in the compatibility analysis.

14. The system of claim 13, further comprising instructions for:
generating a plurality of potential workload placement solutions; and
selecting an optimal workload placement solution, from the plurality of potential workload placement solutions.

15. The system of claim 14, wherein the optimal workload placement solution is selected according to one or more criteria, the one or more criteria comprising at least one of a compatibility score, and a number of transfers.

16. The system of claim 13, further comprising instructions for ranking a plurality of target systems as candidates to host one or more source workloads according to compatibility scores and a resultant target resource utilization and selecting a best candidate.

17. The system of claim 13, wherein at least one compatibility parameter relates to the presence of licensed software, further comprising instructions for determining a combination of workloads running particular software applications that when combined onto the target system minimizes the software licensing costs.

18. The system of claim 13, further comprising instructions for determining at least one optimal combination of workloads corresponding to virtual machines operable on the target system.

19. The system of claim 18, wherein a virtual machine corresponds to an abstraction of a physical system, enabling an operating system to be independently installed on each virtual system residing on a same physical system.

20. The system of claim 18, wherein a virtual machine corresponds to a logical subdivision of an operating system installed on a physical system, enabling isolation in operation without executing a different operating system.

21. The system of claim 13, further comprising instructions for determining at least one optimal combination of applications operable independently on the target system.

22. The system of claim 21, wherein the optimal combination of applications coexist in a same operating system environment.

23. The system of claim 21, wherein the optimal combination of applications is determined using a multi-dimensional analysis of inter-compatibility between a plurality of workloads and the target system.

24. A system for determining placement of source computer systems on target computer systems, the system configured to execute operations causing the system to:
collect data for a collection of computer systems, the collection of computer systems comprising a plurality of source systems and a plurality of target systems;
determine a placement of at least one source system from the collection of computer systems on at least one target system from the collection of computer systems by employing the following operations:
evaluate compatibility between a specific source system from the plurality of source systems and a specific target system from the plurality of target systems by evaluating one or more rules that operate against attributes or data relating to the source and target systems being evaluated;
evaluate compatibility between the specific source system from the plurality of source systems and one or more other source systems either already placed on the specific target system, or being evaluated for placement onto the specific target system, to determine if the specific source system can be placed with those other source systems on the specific target system, by evaluating one or more rules that operate against attributes or data relating to the source systems;
evaluate compatibility between the specific source system and the specific target system by evaluating the impact on resource utilization of the specific target system of placing the specific source system on the specific target system, in combination with the one or more other source systems, either already placed on the specific target system, or being evaluated for placement onto the specific target system; and
issue instructions to place the at least one source system on the at least one target system in accordance with the determined placement.

25. The system of claim 24, wherein the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems.

26. The system of claim 25, wherein the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are capable of being user-defined.

27. The system of claim 26, further configured to factor in user-entered attributes of any of the systems in the collection of computer systems.

28. The system of claim 25, wherein the rule-based compatibility analysis evaluates technical considerations.

29. The system of claim 28, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

30. The system of claim 25, wherein the rule-based compatibility analysis evaluates business considerations.

31. The system of claim 30, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

32. The system of claim 25, wherein the rule-based compatibility analysis is capable of evaluating both technical and business considerations between source and target systems and between source systems and other source systems.

33. The system of claim 24, wherein the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are capable of being user-defined.

34. The system of claim 33, further configured to factor in user-entered attributes of any of the systems in the collection of computer systems.

35. The system of claim 33, wherein the rule-based compatibility analysis evaluates technical considerations.

36. The system of claim 35, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

37. The system of claim 33, wherein the rule-based compatibility analysis evaluates business considerations.

38. The system of claim 37, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

39. The system of claim 33, wherein the rule-based compatibility analysis is capable of evaluating both technical and business considerations between source and target systems and between source systems and other source systems.

40. The system of claim 39, wherein the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems.

41. The system of claim 24, wherein the rule-based compatibility analysis evaluates technical considerations.

42. The system of claim 41, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

43. The system of claim 24, wherein the rule-based compatibility analysis evaluates business considerations.

44. The system of claim 43, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

45. The system of claim 24, wherein the rule-based compatibility analysis is capable of evaluating both technical and business considerations between source and target systems and between source systems and other source systems.

46. The system of claim 45, wherein benchmarks are used to normalize CPU utilization data between source and target systems in order to account for differing CPU performance for different systems.

47. The system of claim 24, wherein the system is located remotely from the collection of computer systems.

48. The system of claim 24, wherein the placement takes into account a pre-existing source-target transfer set, and any placements are incremental to the transfer set.

49. The system of claim 24, wherein the at least one target system onto which the at least one source system is placed comprises a new computer system.

50. The system of claim 24, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

51. The system of claim 25, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

52. The system of claim 33, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

53. The system of claim 45, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

54. The system of claim 24, wherein:
the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems;
the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are capable of being user-defined;
the rule-based compatibility analysis is capable of evaluating both technical and business considerations between source and target systems and between source systems and other source systems, wherein benchmarks are used to normalize CPU utilization data between source and target systems in order to account for differing CPU performance for different systems; and
the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

55. A computer implemented method for placing a source system on a target system, the method comprising:
collecting data for a collection of computer systems, the collection of computer systems comprising a plurality of source systems and a plurality of target systems;
determining a placement of at least one source system from the collection of computer systems on at least one target system from the collection of computer systems by employing the following operations:
evaluating compatibility between a specific source system from the plurality of source systems and a specific target system from the plurality of target systems by evaluating one or more rules that operate against attributes or data relating to the source and target systems being evaluated;
evaluating compatibility between the specific source system from the plurality of source systems and one or more other source systems either already placed on the specific target system, or being evaluated for placement onto the specific target system, to determine if the specific source system can be placed with those other source systems on the specific target system, by evaluating one or more rules that operate against attributes or data relating to the source systems;
evaluating compatibility between the specific source system and the specific target system by evaluating the impact on resource utilization of the specific target system of placing the specific source system on the specific target system, in combination with the one or more other source systems, either already placed on the specific target system, or being evaluated for placement onto the specific target system; and
issuing instructions to place the at least one source system on the at least one target system in accordance with the determined placement.

56. The method of claim 55, wherein the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems.

57. The method of claim 56, wherein the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are user-defined.

58. The method of claim 57, further comprising factoring in user-entered attributes of any of the systems in the collection of computer systems.

59. The method of claim 56, wherein the rule-based compatibility analysis evaluates technical considerations.

60. The method of claim 59, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

61. The method of claim 56, wherein the rule-based compatibility analysis evaluates business considerations.

62. The method of claim 61, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

63. The method of claim 56, wherein the rule-based compatibility analysis evaluates both technical and business considerations between source and target systems and between source systems and other source systems.

64. The method of claim 55, wherein the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are user-defined.

65. The method of claim 64, further comprising factoring in user-entered attributes of any of the systems in the collection of computer systems.

66. The method of claim 64, wherein the rule-based compatibility analysis evaluates technical considerations.

67. The method of claim 66, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

68. The method of claim 64, wherein the rule-based compatibility analysis evaluates business considerations.

69. The method of claim 68, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

70. The method of claim 64, wherein the rule-based compatibility analysis evaluates both technical and business considerations between source and target systems and between source systems and other source systems.

71. The method of claim 70, wherein the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems.

72. The method of claim 55, wherein the rule-based compatibility analysis evaluates technical considerations.

73. The method of claim 72, wherein the technical considerations comprise an evaluation of operating system, OS version, patches, application settings, or hardware devices.

74. The method of claim 55, wherein the rule-based compatibility analysis evaluates business considerations.

75. The method of claim 74, wherein the business considerations comprise an evaluation of physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, or software licensing agreements.

76. The method of claim 55, wherein the rule-based compatibility analysis evaluates both technical and business considerations between source and target systems and between source systems and other source systems.

77. The method of claim 76, wherein benchmarks are used to normalize CPU utilization data between source and target systems in order to account for differing CPU performance for different systems.

78. The method of claim 55, wherein the method is implemented remotely from the collection of computer systems.

79. The method of claim 55, wherein the placement takes into account a pre-existing source-target transfer set, and any placements are incremental to the transfer set.

80. The method of claim 55, wherein the at least one target system onto which the at least one source system is placed comprises a new computer system.

81. The method of claim 55, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

82. The method of claim 56, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

83. The method of claim 64, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

84. The method of claim 76, wherein the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

85. The method of claim 55, wherein:
the at least one source system being placed on the at least one target system comprises a new computer system that is not currently running on a target system in the collection of computer systems;
the rules relating to the evaluation of source and target systems, and the rules relating to the evaluation of source systems against other source systems, are user-defined;
the rule-based compatibility analysis evaluates both technical and business considerations between source and target systems and between source systems and other source systems, wherein benchmarks are used to normalize CPU utilization data between source and target systems in order to account for differing CPU performance for different systems; and
the evaluating operations are performed in two or more iterations using the specific source system and different specific target systems, and wherein a specific iteration is selected to determine placement according to at least one predetermined criterion.

86. A non-transitory computer readable medium comprising computer-executable instructions for placing a source system on a target system, comprising instructions for:
collecting data for a collection of computer systems, the collection of computer systems comprising a plurality of source systems and a plurality of target systems;
determining a placement of at least one source system from the collection of computer systems on at least one target system from the collection of computer systems by employing the following operations:
evaluating compatibility between a specific source system from the plurality of source systems and a specific target system from the plurality of target systems by evaluating one or more rules that operate against attributes or data relating to the source and target systems being evaluated;
evaluating compatibility between the specific source system from the plurality of source systems and one or more other source systems either already placed on the specific target system, or being evaluated for placement onto the specific target system, to determine if the specific source system can be placed with those other source systems on the specific target system, by evaluating one or more rules that operate against attributes or data relating to the source systems;
evaluating compatibility between the specific source system and the specific target system by evaluating the impact on resource utilization of the specific target system of placing the specific source system on the specific target system, in combination with the one or more other source systems, either already placed on the specific target system, or being evaluated for placement onto the specific target system; and
issuing instructions to place the at least one source system on the at least one target system in accordance with the determined placement.

Description

Note: The descriptions are shown in the official language in which they were submitted.


METHOD AND SYSTEM FOR DETERMINING COMPATIBILITY OF COMPUTER SYSTEMS

TECHNICAL FIELD

[0001] The present invention relates to information technology infrastructures and has particular utility in determining compatibility of computer systems in such infrastructures.

BACKGROUND

[0002] As organizations have become more reliant on computers for performing day to day activities, so too has the reliance on networks and information technology (IT) infrastructures increased. It is well known that large organizations having offices and other facilities in different geographical locations utilize centralized computing systems connected locally over local area networks (LAN) and across the geographical areas through wide-area networks (WAN).

[0003] As these organizations grow, the amount of data to be processed and handled by the centralized computing centers also grows. As a result, the IT infrastructures used by many organizations have moved away from reliance on centralized computing power and towards more robust and efficient distributed systems. Distributed systems are decentralized computing systems that use more than one computer operating in parallel to handle large amounts of data. Concepts surrounding distributed systems are well known in the art and a complete discussion can be found in, e.g., "Distributed Systems: Principles and Paradigms"; Tanenbaum, Andrew S.; Prentice Hall; Amsterdam, Netherlands; 2002.

[0004] While the benefits of a distributed approach are numerous and well understood, significant practical challenges have arisen in managing such systems to optimize efficiency and to avoid redundancies and/or under-utilized hardware. In particular, one challenge occurs due to the sprawl that can occur over time as applications and servers proliferate. Decentralized control and decision making around capacity, the provisioning of new applications and hardware, and the perception that the cost of adding server hardware is generally inexpensive, have created environments with far more processing capacity than is required by the organization.

[0005] When cost is considered on a server-by-server basis, the additional cost of having underutilized servers is often not deemed to be troubling. However, when multiple servers in a large computing environment are underutilized, having too many servers can become a burden. Moreover, the additional hardware requires separate maintenance considerations and separate upgrades, and demands incidental attention that could instead be directed more cost effectively for the organization. Heat production and power consumption can also be a concern. Even considering only the cost of having redundant licenses, removing even a modest number of servers from a large computing environment can save a significant amount of cost on a yearly basis.

[0006] As a result, organizations have become increasingly concerned with such redundancies and how they can best achieve consolidation of capacity to reduce operating costs. The cost-savings objective can be evaluated on the basis of consolidation strategies such as, but not limited to: virtualization strategies, operating system (OS) level stacking strategies, database consolidation strategies, application stacking strategies, physical consolidation strategies, and storage consolidation strategies.

[0007] Virtualization involves virtualizing a physical system as a separate guest OS instance on a host machine. This enables multiple virtualized systems to run on a single physical machine, e.g. a server. Examples of virtualization technologies include VMware™, Microsoft Virtual Server™, IBM LPAR™, Solaris Containers™, Zones™, etc.

[0008] OS-level application stacking involves moving the applications and data from one or more systems to the consolidated system. This can effectively replace multiple operating system instances with a single OS instance, e.g. system A running application X and system B running application Y are moved onto system C running application Z such that system C runs applications X, Y and Z, and systems A and B are no longer required. This strategy is applicable to all operating system types, e.g. Windows™, Linux™, Solaris™, AIX™, HPUX™, etc.

[0009] Database stacking combines one or more database instances at the database server level, e.g. Oracle™, Microsoft SQL Server™, etc. Database stacking combines data within a database instance, namely at the table level. Application stacking combines one or more database instances at the application server level, e.g. J2EE™ application servers, WebLogic™, WebSphere™, JBoss™, etc.

[0010] Physical consolidation moves physical systems at the OS level to multi-system hardware platforms such as Blade Servers™, Dynamic System Domains™, etc. Storage consolidation centralizes system storage through storage technologies such as Storage Area Networks (SAN), Network Attached Storage (NAS), etc.

[0011] The consolidation strategies to employ and the systems and applications to be consolidated are to be considered taking into account the specific environment. Consolidation strategies should be chosen carefully to achieve the desired cost savings while maintaining or enhancing the functionality and reliability of the consolidated systems. Moreover, multiple strategies may often be required to achieve the full benefits of a consolidation initiative.

[0012] Complex system configurations, diverse business requirements, dynamic workloads and the heterogeneous nature of distributed systems can cause incompatibilities between systems. These incompatibilities limit the combinations of systems that can be consolidated successfully. In enterprise computing environments, the virtually infinite number of possible consolidation permutations, which include suboptimal and incompatible system combinations, makes choosing appropriate consolidation solutions difficult, error-prone and time consuming.

[0013] It is therefore an object of the following to obviate or mitigate the above-described disadvantages.

SUMMARY

[0014] In one aspect, a method for determining compatibilities for a plurality of computer systems is provided comprising generating a configuration compatibility score for each pair of the plurality of systems based on configuration data obtained for each of the plurality of systems; generating a workload compatibility score for each pair of the plurality of systems based on workload data obtained for each of the plurality of systems; and generating a co-habitation score for each pair of the plurality of systems using the respective configuration compatibility score and workload compatibility score, the co-habitation score indicating an overall compatibility for each system with respect to the others of the plurality of systems.
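
To make the scoring in the above aspect concrete, the following is a minimal sketch of how a per-pair co-habitation score could be derived from a configuration compatibility score and a workload compatibility score. The weighting scheme and the 0-100 score range are illustrative assumptions rather than the prescribed formula; the program described later allows the importance factors to be edited (see Figure 36).

```python
# Illustrative sketch only: combine per-pair configuration and workload
# compatibility scores into an overall co-habitation score. The importance
# factors and the 0-100 score range are assumptions for illustration.

def cohabitation_score(config_score: float,
                       workload_score: float,
                       config_weight: float = 0.5,
                       workload_weight: float = 0.5) -> float:
    """Weighted combination of configuration and workload scores (0-100)."""
    total = config_weight + workload_weight
    return (config_weight * config_score + workload_weight * workload_score) / total

# Example: strong configuration match but constrained workload headroom.
print(cohabitation_score(95.0, 60.0))  # 77.5
```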

[0015] In another aspect, a computer program is provided for determining compatibilities for a plurality of computer systems. The program comprises an audit engine for obtaining information pertaining to the compatibility of the plurality of computer systems; an analysis engine for generating a compatibility score for each pair of the plurality of systems based on the information that is specific to respective pairs; and a client for displaying the compatibility score on an interface.

[0016] In yet another aspect, a method for determining configuration compatibilities for a plurality of computer systems is provided comprising obtaining configuration data for each of the plurality of computer systems; assigning a weight to one or more parameters in the configuration data indicating the importance of the parameter to the compatibility of the plurality of systems; generating a rule set comprising one or more of the parameters; and computing a configuration score for each pair of the plurality of systems according to the weights in the rule set.

[0017] In yet another aspect, a method for determining workload compatibilities for a plurality of computer systems is provided comprising obtaining workload data for each of the plurality of systems; computing a stacked workload value for each pair of the plurality of systems at one or more time instances according to the workload data; and computing a workload score for each pair of the plurality of systems using the stacked workload values.
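
As a rough sketch of the workload stacking described in the preceding aspect, the utilization of a source-target pair can be summed at each common time instance and the stacked values scored against the target's capacity. The use of the peak stacked value, the linear penalty, and the 100-point scale are illustrative assumptions, not the specific algorithm used by the program.

```python
# Illustrative sketch only: stack two systems' CPU utilization time series and
# score the pair against an assumed target capacity. The use of the peak
# stacked value and a 100-point scale are assumptions for illustration.

from typing import Sequence

def workload_score(source_util: Sequence[float],
                   target_util: Sequence[float],
                   target_capacity: float = 100.0) -> float:
    """Score workload compatibility of a source-target pair (higher is better)."""
    # Stacked workload value at each common time instance.
    stacked = [s + t for s, t in zip(source_util, target_util)]
    peak = max(stacked)
    # Full score when the stacked peak fits within capacity; degrade linearly beyond it.
    if peak <= target_capacity:
        return 100.0
    return max(0.0, 100.0 - (peak - target_capacity))

# Example: hourly CPU utilization samples for a source and a target system.
print(workload_score([20, 35, 50], [30, 40, 70]))  # stacked peak 120 -> 80.0
```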
[0018] In yet another aspect, a graphical interface for displaying compatibility scores for a plurality of computer systems is provided comprising a matrix of cells, each cell corresponding to a pair of the plurality of computer systems, each row of the matrix indicating one of the plurality of computer systems and each column of the matrix indicating one of the plurality of computer systems, each cell displaying a compatibility score indicating the compatibility of the respective pair of the plurality of systems indicated in the corresponding row and column, and computed according to predefined criteria.

[0019] In yet another aspect, a method of evaluating differences between a first data set and a second data set for one or more computer systems is provided comprising obtaining the first data set and the second data set; selecting a parameter according to a differential rule definition; comparing the parameter in the first data set to the parameter in the second data set; determining if a difference in the parameter exists between the data sets; if the difference exists, applying a weight indicative of the relative importance of the difference in the parameter according to the differential rule definition; and providing an evaluation of the difference according to the weight.
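
The differential rule evaluation in the aspect above, and the rule definition in the aspect that follows, can be pictured with a short sketch. The rule structure (a parameter plus a weight) follows the description; the sample rules, the dictionary-based data sets and the convention of deducting weights from 100 to form a configuration score are illustrative assumptions.

```python
# Illustrative sketch only: apply differential rule definitions (parameter +
# weight) to two system data sets and derive a configuration compatibility
# score. The sample rules, dict-based data sets and the "100 minus total
# weight" scoring convention are assumptions for illustration.

SAMPLE_RULES = [
    {"parameter": "os_name",     "weight": 50},  # mismatched OS is a major issue
    {"parameter": "os_version",  "weight": 20},
    {"parameter": "patch_level", "weight": 5},
]

def evaluate_differences(first: dict, second: dict, rules=SAMPLE_RULES):
    """Return the weighted differences and a simple configuration score."""
    findings = []
    for rule in rules:
        param = rule["parameter"]
        if first.get(param) != second.get(param):
            findings.append((param, first.get(param), second.get(param), rule["weight"]))
    score = max(0, 100 - sum(weight for _, _, _, weight in findings))
    return findings, score

source = {"os_name": "Windows", "os_version": "2003", "patch_level": "SP1"}
target = {"os_name": "Windows", "os_version": "2003", "patch_level": "SP2"}
print(evaluate_differences(source, target))  # ([('patch_level', 'SP1', 'SP2', 5)], 95)
```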
[0020] In yet another aspect, a computer readable differential rule definition for evaluating differences between a first data set and a second data set for one or more computer systems is provided comprising a parameter for the one or more computer systems; and a weight for the parameter indicative of the importance of a difference in the parameter; wherein the differential rule definition is used by a computer application to perform an evaluation of the difference according to the weight.

[0021] In yet another aspect, a method for determining a consolidation solution for a plurality of computer systems is provided comprising obtaining a data set comprising a compatibility score for each pair of the plurality of computer systems, each compatibility score being indicative of the compatibility of one of the plurality of computer systems with respect to another of the plurality of computer systems; determining one or more candidate transfer sets each indicating one or more of the computer systems capable of being transferred to a target computer system; selecting a desired one of the one or more candidate transfer sets; and providing the desired one as a consolidation solution.

[0022] In yet another aspect, there is provided a method for determining compatibilities for a plurality of computer systems comprising: obtaining at least one transfer set indicating one or more of the computer systems capable of being transferred to a target computer system; evaluating compatibilities of the one or more computer systems against the target computer system to obtain a first compatibility score; evaluating compatibilities of each of the one or more computer systems against each other to obtain a second compatibility score; and computing an overall compatibility score for the transfer set using the first and second compatibility scores.

[0023] In yet another aspect, there is provided a computer program for determining compatibilities for a plurality of computer systems comprising an audit engine for obtaining information pertaining to the compatibility of the plurality of computer systems; an analysis engine for generating a compatibility score for each pair of the plurality of systems based on the information that is specific to respective pairs; and a client for displaying the compatibility score on an interface, the interface being configured to enable a user to specify parameters associated with a map summarizing the compatibility scores and to define and initiate a transfer of one or more of the computer systems to a target computer system.
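
As a rough sketch of the transfer set scoring described above, the first score can come from evaluating each source against the common target and the second from evaluating the sources against each other, with the two combined into an overall score. A pair-scoring function is assumed to exist (for example, the rule-based and workload evaluations sketched earlier); taking the minimum of the pairwise results and averaging the two dimensions are illustrative assumptions.

```python
# Illustrative sketch only: score a transfer set (several sources moving to one
# target). pair_score is assumed to return a 0-100 compatibility score for any
# two systems; min-aggregation and equal averaging are illustrative choices.

from itertools import combinations
from typing import Callable, Sequence

def transfer_set_score(sources: Sequence[str], target: str,
                       pair_score: Callable[[str, str], float]) -> float:
    # First score: each source evaluated against the common target.
    source_target = min(pair_score(s, target) for s in sources)
    # Second score: sources evaluated against each other (N-by-N).
    source_source = min((pair_score(a, b) for a, b in combinations(sources, 2)),
                        default=100.0)
    # Overall compatibility score for the transfer set.
    return (source_target + source_source) / 2.0

# Example with a toy scoring table.
scores = {("S1", "S3"): 90, ("S2", "S3"): 80, ("S1", "S2"): 70}
lookup = lambda a, b: scores.get((a, b), scores.get((b, a), 100.0))
print(transfer_set_score(["S1", "S2"], "S3", lookup))  # (80 + 70) / 2 = 75.0
```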
BRIEF DESCRIPTION OF THE DRAWINGS

[0024] An embodiment of the invention will now be described by way of example only with reference to the appended drawings wherein:

[0025] Figure 1 is a block diagram of an analysis program for evaluating the compatibility of computer systems to identify consolidation solutions.
[0026] Figure 2 is a more detailed diagram of the analysis program depicted in Figure 1.
[0027] Figure 3 is a block diagram illustrating a sample consolidation solution comprised of multiple transfers.
[0028] Figure 4 is an example of a rule-based compatibility analysis map.
[0029] Figure 5 is an example of a workload compatibility analysis map.
[0030] Figure 6 is an example of an overall compatibility analysis map.
[0031] Figure 7 is a schematic block diagram of an underlying architecture for implementing the analysis program of Figure 1.
[0032] Figure 8 is a table mapping audit data sources with consolidation strategies.
[0033] Figure 9 is a process flow diagram of the compatibility and consolidation analyses.
[0034] Figure 10 is a process flow diagram illustrating the loading of system data for analysis.
[0035] Figure 11 is a high level process flow diagram for a 1-to-1 compatibility analysis.
[0036] Figure 12 is a table showing example rule sets.
[0037] Figure 13 is a table showing example workload types.
[0038] Figure 14 is a process flow diagram for the 1-to-1 compatibility analysis.
[0039] Figure 15 is a flow diagram illustrating operation of the rule engine analysis.
[0040] Figure 16 shows an example rule set.
[0041] Figure 17 is a flow diagram of the 1-to-1 rule-based compatibility analysis.
[0042] Figure 18 is a flow diagram illustrating the evaluation of a rule set.
[0043] Figure 19 is an example of the rule-based compatibility analysis result details.
[0044] Figure 20 is a flow diagram of the workload data extraction process.
[0045] Figure 21 is a flow diagram of the 1-to-1 workload compatibility analysis.
[0046] Figure 22 is an example of the workload compatibility analysis result details.
[0047] Figure 23 is an example of the overall compatibility analysis result details.
[0048] Figure 24(a) is a high level process flow diagram of the multi-dimensional compatibility analysis.
[0049] Figure 24(b) is a flow diagram showing the multi-dimensional analysis.
[0050] Figure 24(c) is a flow diagram showing use of a rule set in an N-to-1 compatibility analysis.
[0051] Figure 24(d) is a flow diagram showing use of a rule set in an N-by-N compatibility analysis.
[0052] Figure 25 is a process flow diagram of the multi-dimensional workload compatibility analysis.
[0053] Figure 26 is an example multi-dimensional compatibility analysis map.
[0054] Figure 27 is an example of the multi-dimensional compatibility analysis result details for a rule set.
[0055] Figure 28 is an example of the multi-dimensional workload compatibility analysis result details.
[0056] Figure 29 is a process flow diagram of the consolidation analysis.
[0057] Figure 30 is a process flow diagram of an auto fit algorithm used by the consolidation analysis.
[0058] Figure 31 is an example of a consolidation solution produced by the consolidation analysis.
[0059] Figure 32 shows an example hierarchy of analysis folders and analyses.
[0060] Figure 33 shows the screen for creating and editing the analysis input parameters.
[0061] Figure 34 shows an example of the rule set editor screen.
[0062] Figure 35 shows an example of the screen for editing workload settings.
[0063] Figure 36 shows an example screen to edit the importance factors used to compute the overall compatibility score.
[0064] Figure 37 is an example 1-to-1 compatibility map.
[0065] Figure 38 is an example of configuration compatibility analysis details.
[0066] Figure 39 is an example 1-to-1 compatibility map for business constraints.
[0067] Figure 40 is an example 1-to-1 workload compatibility map.
[0068] Figure 41 is an example workload compatibility report.
[0069] Figure 42 is an example workload details report with stacked workloads.
[0070] Figure 43 is an example of a 1-to-1 overall compatibility map.
[0071] Figure 44 is an example of the 1-to-1 overall compatibility details report.
[0072] Figure 45 shows the workload details of the overall compatibility report.
[0073] Figure 46 shows example transfers on a compatibility map with net effect off.
[0074] Figure 47 shows example transfers on a compatibility map with net effect on.
[0075] Figure 48 is an example multi-dimensional compatibility details report.
[0076] Figure 49 shows an example of the consolidation analysis (auto fit) input screen.
[0077] Figure 50 is an example overall compatibility map for the consolidation solution.
[0078] Figure 51 is an example consolidation summary report.
[0079] Figure 52 shows the example transfers that comprise the consolidation solution.

DETAILED DESCRIPTION OF THE DRAWINGS

Analysis Program Overview

[0080] A block diagram of an analysis program 10 for determining compatibilities in a computing environment 12 is provided in Figure 1. The analysis program 10, accessed through a computer station 14, gathers data 18 pertaining to a collection of systems 16 to be consolidated. The analysis program 10 uses the gathered data 18 to evaluate the compatibility of the computer systems and provide a roadmap 20 specifying how the original set of systems can be consolidated to a smaller number of systems 22.

[0081] The following provides an overview of the principles and functionality related to the analysis program 10 and its environment depicted in Figure 2.

System Data Parameters

[0082] A distinct data set is obtained for each system 16 to contribute to the combined system data 18 shown in Figure 2. Each data set comprises one or more parameters that relate preferably to technical 24, business 26 and workload 28 characteristics or features of the respective system 16. The parameters can be evaluated by scrutinizing program definitions, properties, objects, instances and any other representation or manifestation of a component, feature or characteristic of the system 16. In general, a parameter is anything related to the system 16 that can be evaluated, quantified, measured, compared, etc.

[0083] Examples of technical parameters relevant to the consolidation analysis include the operating system, OS version, patches, application settings, hardware devices, etc.

[0084] Examples of business parameters of systems relevant to the consolidation analysis include the physical location, organization department, data segregation requirements, owner, service level agreements, maintenance windows, hardware lease agreements, software licensing agreements, etc.

[0085] Examples of workload parameters relevant to the consolidation analysis include various resource utilization and capacity metrics related to the system processor, memory, disk storage, disk I/O throughput and network bandwidth utilization.

System and Entity Models

[0086] The system data parameters associated with a system 16 comprise the system model used in the analyses.

[0087] In the following examples, a source system refers to a system from which applications and/or data are to be moved, and a target server or system is a system to which such applications and/or data are to be moved. For example, an underutilized environment having two systems 16 can be consolidated to a target system (one of the systems) by moving applications and/or data from the source system (the other of the systems) to the target system.
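
A compact way to picture the system model described above is a record holding the technical, business and workload parameters gathered for each system 16. This is a minimal sketch; the specific field names and the choice of a Python dataclass are illustrative assumptions rather than the program's actual data model.

```python
# Illustrative sketch only: a per-system model grouping technical, business and
# workload parameters, mirroring the kinds of data described in [0082]-[0085].
# Field names are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemModel:
    name: str
    technical: Dict[str, str] = field(default_factory=dict)   # e.g. OS, version, patches
    business: Dict[str, str] = field(default_factory=dict)    # e.g. location, owner, SLA
    workload: Dict[str, List[float]] = field(default_factory=dict)  # e.g. CPU %, disk I/O

source = SystemModel(
    name="S1",
    technical={"os_name": "Windows", "os_version": "2003"},
    business={"location": "Toronto", "department": "Finance"},
    workload={"cpu_util_pct": [20.0, 35.0, 50.0]},
)
print(source.technical["os_name"])  # Windows
```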

[0088] The computer systems 16 may be physical systems, virtual systems or hypothetical models. In contrast to actual physical systems, hypothetical systems do not currently exist in the computing environment 12. Hypothetical systems can be defined and included in the analysis to evaluate various types of "what if" consolidation scenarios. Hypothetical targets can be used to simulate a case where the proposed consolidation target systems do not exist in the environment 12, e.g. for adding a system 16. Similarly, hypothetical source systems can be used to simulate the case where a new application is to be introduced into the environment 12 and "forward consolidated" onto existing target systems 16.

[0089] Hypothetical systems can be created through data imports, cloning from actual system models, manual specification by users, etc. The system model can be minimal (sparse) or include as much data as an actual system model. These system models may also be further modified to address the analysis requirements.

[0090] The compatibility analysis can also be generalized to evaluate entities beyond physical, virtual or hypothetical systems. For example, entities can be components that comprise systems, such as applications and database instances. By analysing the compatibility of database instances and database servers with database stacking rule sets, database consolidation can also be assessed. Similarly, application consolidation can be evaluated by analyzing application servers and instances with application stacking rules. The entity could also be a logical application system, and technical data can pertain to functional aspects and specifications of the entity.

[0091] It will therefore be appreciated that a "system" or "computer system" hereinafter referred to can encompass any entity which is capable of being analysed for any type of compatibility and should not be considered limited to existing or hypothetical physical or virtual systems, etc.

Consolidation and Transfers

[0092] Consolidation as described above can be considered to include one or more "transfers". The actual transfer describes the movement of a single source entity onto a target, wherein the specification identifies the source, target and transfer type. The transfer type (or consolidation strategy) describes how a source entity is transferred onto a target, e.g. virtualization, OS stacking, etc.
[0093] A transfer set 23 (see Figure 3) can be considered one or more transfers that involve a common target, wherein the set specifies one or more source entities, the target and a transfer type.

[0094] A consolidation solution (or roadmap) is one or more transfer sets 23 based on a common pool of source and target entities. As can be seen in Figure 2, the consolidation roadmap can be included in the analysis results 20. Each source or target entity is referenced at most one time by the transfer sets that comprise the solution.

[0095] Figure 3 shows how an example pool 24 of 5 systems (S1, S2, S3, S4 and S5) can be consolidated through 2 transfer sets 23: stack S1 and S2 onto S3, and stack S4 onto S5. The transfer sets 23 include 3 transfers, and each system 16 is referenced by the transfer sets 23 only once. In the result, a consolidated pool 26 of 2 systems is achieved.
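
The pool-and-transfer-set example in the preceding paragraph can be captured in a small data structure. This is a minimal sketch; the class layout and the helper logic are illustrative assumptions, with the source, target and transfer-type fields taken from the description of a transfer set.

```python
# Illustrative sketch only: represent the Figure 3 example (stack S1 and S2
# onto S3, stack S4 onto S5) as transfer sets and compute the consolidated pool.
# The class layout is an assumption for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class TransferSet:
    sources: List[str]      # source entities being moved
    target: str             # common target entity
    transfer_type: str      # consolidation strategy, e.g. "OS stacking"

pool = ["S1", "S2", "S3", "S4", "S5"]
solution = [
    TransferSet(sources=["S1", "S2"], target="S3", transfer_type="OS stacking"),
    TransferSet(sources=["S4"], target="S5", transfer_type="OS stacking"),
]

consolidated_sources = {s for ts in solution for s in ts.sources}
consolidated_pool = [s for s in pool if s not in consolidated_sources]
print(consolidated_pool)  # ['S3', 'S5'] -- 5 systems reduced to 2
```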
[0096] It will be appreciated that the principles described herein support many transformation strategies and consolidation is only one example.

Compatibility Analyses

[0097] The following discusses compatibilities between systems 16 based on the parameters to determine if efficiencies can be realized by consolidating either entire systems 16 or aspects or components thereof.

[0098] The analyses employ differential rule sets 28 to evaluate and quantify the compatibility of systems 16 with respect to technical configuration and business related factors comprised in the gathered system data 18. Similarly, workload compatibility of a set of systems 16 is assessed using workload stacking and scoring algorithms 30. The results of configuration, business and workload compatibility analyses are combined to produce an overall compatibility score for a set of systems 16.

CA 02934343 2016-06-29
13
1 [0099] In addition to compatibility scores, the analysis provides
details that account for the
2 actual scores. The scores can be presented in color coded maps 32, 34 and
36 that illustrate
3 patterns of the compatibility amongst the analyzed systems as shown in
Figures 4, 5 and 6
4 respectively.
Analysis Modes
6 [00100] A collection of systems 16 to be consolidated can be analyzed in
one of three modes:
7 1-to-1 compatibility, multi-dimensional compatibility and consolidation
analyses. These
8 analyses share many common aspects but can be performed independently.
9 [00101.] The 1-to-1 compatibility analysis evaluates the
compatibility of every possible
source-target pair combination in the collection of systems 16 on a 1-to-1
basis. This analysis is
11 useful in assessing single transfer consolidation candidates. In
practice, it may be prudent to
12 consolidate systems 16 incrementally and assess the impact of each
transfer before proceeding
13 with additional transfers.
14 [00102] The multi-dimensional compatibility analysis evaluates the
compatibility of transfer
sets that can involve multiple sources being transferred to a common target.
The analysis
16 produces a compatibility score for each specified transfer set 23 by
evaluating the compatibility
17 of the systems 16 that comprise the transfer set 23.
18 [00103] The consolidation analysis searches for a consolidation solution
that minimizes the
19 number of remaining source and target entities after the proposed
transfers are applied, while
meeting requisite compatibility constraints. This analysis employs the multi-
dimensional
21 compatibility analysis described above to evaluate the compatibility of
postulated transfer sets.
22 [00104] The analysis program 10 performs consolidation analyses for
virtualization and
23 stacking strategies as will be explained in greater detail below,
however, it will be appreciated
24 that other consolidation strategies may be performed according to
similar principles.
26
1 Analysis Program Architecture
2 [00105] A block diagram of the analysis program 10 is shown in Figure 7.
The flow of data
3 18 through the program 10 begins as an audit engine 40 pulls audit data
18 from audited
4 environments 42. The data works its way up to the web client 44 which
displays an output on a
user interface, e.g. on computer 14. The program 10 is preferably a client-
server application that
6 is accessed via the web client 44.
7 [00106] An audit engine 40 communicates over one or more communication
protocols with
8 audited environments 42 comprised of the actual systems 16 being
analysed. The audit engine 40
9 typically uses data acquisition adapters to communicate directly with the
end points (e.g. servers)
or through software systems that manage the end points (e.g. management
frameworks 46 and/or
11 agent instrumentation 48 and/or direct access 50).
12 [00107] Alternatively, system data 18 can be imported from third party
tools, e.g. inventory
13 applications, performance monitoring tools etc., or can be obtained
through user data entry.
14 Examples of such third-party data having file formats that can be
imported include comma
separated values (CSV), extensible markup language (XML) and well formatted
text files such as
16 those generated by the UNIXTM system activity reporter (SAR).
[00108] The audit engine 40 uses a set of audit request templates 52
that define the data 18 to
18 acquire from the systems 16. Once collected, the data 18 is stored in an
audit data repository 54.
19 Data 18 referenced by the analysis rule sets 28 and required by the
workload analysis 30 are
extracted and stored in separate data caches 56 and 58 respectively.
21 [00109] Aliases 60, differential rule sets 28, workload data types 30
and benchmark
22 specifications comprise some of the analysis-related metadata 62
definitions used by the program
23 10. Aliases 60 extract and normalize system data 18 from a variety of
data sources to a common
24 model for analysis. Rule sets 28 examine system compatibility with
respect to technical
configuration and business-related factors. The workload definitions 30
specify the system
26 resource parameters and benchmarks for analyzing workload compatibility.
1 [00110] An analysis engine 64 is also provided, which comprises
compatibility and
2 consolidation analysis engines 66 and 68 respectively. The compatibility
analysis evaluates the
3 compatibility of systems 16 through rule sets 28 and workload stacking
algorithms 30. The
4 consolidation analysis engine 68 leverages the compatibility analysis and
employs constraint-
based optimization algorithms to find consolidation solutions that allow
the environment 12 to
6 operate with fewer systems 16.
7 [00111] The program 10 has a report engine 70 that utilizes report
templates 72 for generating
8 reports that convey the analysis results. Typically, the program 10
includes a web interface layer
9 74 that allows web client 44 users to enter settings, initiate an audit
or analysis, view reports etc.
10 Analysis Data Sources
11 [00112] The audit data 18 can be acquired using tools such as the table
76 shown in Figure 8
12 that illustrate the various types of configuration settings that are of
interest and from which
13 sources they can be obtained. Figure 8 also provides a mapping to where
the sample workload
14 data can be obtained. In Figure 8, a number of strategies 78 and sub-
strategies 80 map to various
15 configuration and workload sources, collectively referred to by numeral
82. As discussed with reference to Figure 8, the strategies 78 may relate to database
consolidation, OS-level
17 stacking, application server stacking, virtualization, and many others.
Each strategy 78 includes
18 a set of sub-strategies 80, which in turn map to specific rule sets 28.
The rule sets 28, which will
19 be explained in greater detail below, determine whether or not a
particular setting or system
criterion/criteria have been met and thus how different one system 16 is to
the next. The rule
21 sets 28 can also indicate the cost of remediating such differences.
22 [00113] The table 76 lists the supported consolidation strategies and
the relevant data sources
23 that should be audited to perform the corresponding consolidation
analysis. In general,
24 collecting more basis data 18 improves the analysis results. The table
76 enables the analysis
program 10 to locate the settings and information of interest based on the
strategy 78 or sub-
26 strategy 80 (and in turn the rule set 28) that is to be used to evaluate
the systems 16 in the
27 environment 12. The results can be used to determine source/target
candidates for analysing the
28 environment for the purpose of, e.g. consolidation, compliance measures
etc.
1 Analysis Process Overview
2 [00114] Referring now to Figure 9, a process flow diagram illustrates the
data flow for
3 performing the compatibility and consolidation analyses discussed above.
The flow diagram
4 outlines four processes: a data load and extraction process (A), a 1-to-1
compatibility analysis
process (B), a multi-dimensional compatibility analysis process (C), and a
consolidation analysis
6 process (D).
7 [00115] In process A, the system data 18 collected via audits or imports
as discussed above is
8 prepared for use by the analyses. The compatibility and consolidation
analyses processes B, C
9 and D can be performed independently. The analyses share a common
analysis input
specification and get system data 18 from the data repository 54 and caches 56
and 58. The
11 multi-dimensional compatibility and consolidation analyses take
additional inputs in the form of
12 a consolidation solution and auto fit input parameters 84 and 86
respectively.
13 [00116] The 1-to-1 compatibility analysis process B evaluates the
compatibility of each
14 system pair on a 1-to-1 basis. In contrast, the multi-dimensional
analysis process C evaluates the
compatibility of each transfer set 23 in the consolidation solution that was
specified as part of its
16 input.
17 [00117] The consolidation analysis process D searches for the best
consolidation solution that
18 fulfills the constraints defined by the auto fit input 86. The
consolidation analysis employs the
19 multi-dimensional compatibility analysis C to assess potential transfer
set candidates.
Data Load and Extraction
21 [00118] A process flow diagram for the data load and extraction process
A is illustrated in
22 Figure 10. System data including technical configuration, business
related and workload
23 collected through audits, data import and user input are prepared for
use by the analyses
24 processes B, C and D.
[00119] When system data 18 and attributes are loaded into the analysis
program 10, they are
26 stored in the audit data repository 54 and system attribute table 55,
respectively. As well, system
1 data 18 referenced by rule set items 28, workload types 30 and benchmarks
are extracted and
2 loaded into their respective caches 56, 58. Alias specifications 60
describe how data can be
3 extracted and if necessary, normalized from a variety of data sources.
4 [00120] The data repository 54 and caches 56 and 58 thus store audited
data 18, system
attributes, the latest rule set data, historical workload data and system
workload benchmarks.
6 1-to-1 Compatibility Analysis
7 [00121] A high level flow diagram of the 1-to-1 compatibility analysis is
shown in Figure 11.
8 The 1-to-1 compatibility analysis can take into account analysis input,
including input regarding
9 the systems 16 to be analyzed, rule set related parameters, workload
related parameters,
workload benchmarks and importance factors 88 used to compute overall scores.
11 [00122] The compatibility analysis evaluates the compatibility of every
specified system as
12 source-target pairs on a 1-to-1 basis. This analysis produces a
compatibility score for each
13 system pair so that analyzing a collection of ten (10) systems 16
produces 10x10 scores. The
14 compatibility analysis is based on the specified rule sets and workload
types.
[00123] An analysis may be based upon zero or more rule sets and zero or more
workload
16 types, such that at least one rule set or workload type is selected.
Example rule sets 28 and
17 corresponding descriptions are shown in Figure 12, and example workload
types 30 and
18 corresponding descriptions are shown in Figure 13.
19 [00124] The selection of rule sets 28 and workload types 30 for an
analysis depends on the
systems 16 and the consolidation strategy to analyze. For example, to assess
the consolidation of
21 a set of UNIXTM systems 16, an analysis may employ the UNIXTM
application stacking, location,
22 maintenance window and ownership rule sets 28, and CPU, memory, disk
space, disk I/O and network I/O workload types 30.
24 1-to-1 Compatibility Analysis Process Flow
[00125] A process flow diagram of the 1-to-1 compatibility analysis is shown
in Figure 14.
26 The analysis generally comprises four stages.
1 [00126] In the first stage, data referenced by the selected rule sets 28
and workload types 30
2 for the specified date range are retrieved from the data repository 54
and caches 56, 58 for each
3 system 16 to be analyzed. This analysis data is saved as a snapshot and
can be used for
4 subsequent analyses.
[00127] In the second stage, technical and business related compatibility may be analyzed using the specified rule sets 28 and weights. Next, workload compatibility is evaluated based on the specified workload types 30 and input parameters. Finally, the overall
compatibility scores are
8 computed for each pair of systems 16.
9 [00128] Upon completion of the compatibility analysis, the results 20 are
provided to the user.
The results 20 include rule item and workload data snapshots, 1-to-1
compatibility score maps
11 for each rule set 28 and workload type 30 as well as an overall score
map. Analysis details for
12 each map may also be provided.
13 [00129] As noted above, the differential rule sets 28 are used to
evaluate the compatibility of
14 systems as they relate to technical and business related constraints.
The rule set 28 defines
which settings are important for determining compatibility. The rule set 28
typically defines a
16 set of rules which can be revised as necessary based on the specific
environment 12. The rule set
17 28 is thus preferably compiled according to the systems 16 being
analysed and prior knowledge
18 of what makes a system 16 compatible with another system 16 for a
particular purpose. As will
19 be discussed below, the rule sets 28 are a form of metadata 62.
Differential Rule Sets
21 [00130] Further detail regarding the differential rules and differential
rule sets 28 is now
22 described making reference to Figures 15 and 16, as also described in co-
pending U.S. Patent
23 Application No. 11/535,308 filed on September 26, 2006, and entitled
"Method for Evaluating
24 Computer Systems", the contents of which are incorporated herein by
reference.
[00131] With respect to the following description of the rule sets 28 and the
general
26 application of the rule sets 28 for detecting system incompatibilities
by evaluating differences
27 between data parameters of systems 16, the following alternative
nomenclature may be used. A
1 target system refers to a system being evaluated, and a baseline system
is a system to which the
2 target system is being compared. The baseline and target systems may be
the same system 16 at
3 different instances in time (baseline = prior, target = now) or may be
different systems 16 being
4 compared to each other. As such, a single system 16 can be evaluated
against itself to indicate
changes with respect to a datum as well as how it compares to its peers. It
will be appreciated
6 that the terms "source system" and "baseline system" are herein generally
synonymous, whereby
7 a source system is a type of baseline system.
8 [00132] Figure 1 illustrates the relationships between system data
18 and the analysis
9 program 10. Data 18 is obtained from the source and target computer
systems 16 and is used to
analyze the compatibility between the systems 16. In this example, the
parameters are evaluated
11 to determine system compatibilities for a consolidation strategy. A
distinct data set 18 is
12 preferably obtained for each system 16 (or instance in time for the same
system 16 as required).
13 [00133] Rule sets 28 are computer readable and storable so that
they may be accessed by the
14 program 10 and modified if necessary, for use in evaluating the computer
systems 16.
[00134] Rule sets 28 are groupings of rules that represent higher-level
considerations such as
16 business objectives or administrative concerns that are taken into
account when reporting on or
analysing the systems 16. In Figure 15, six rules 43, A, B, C, D, E and F
are grouped into three
18 rule sets 28, Rule Set 1, 2 and 3. It will be appreciated that there may
be any number of rules in
19 any number of rule sets 28 and those shown in Figure 15 are for
illustrative purposes only.
[00135] Rules evaluate data parameters according to rule definitions to
determine
21 incompatibilities due to differences (or contentious similarities)
between the baseline and target
22 systems. The rule definitions include penalty weights that indicate the
importance of the
23 incompatibility as they relate to the operation of the systems 16. The
penalty weights are applied
24 during an evaluation if the incompatibility is detected. The evaluation
may include the
computation of a score or generation of other information indicative of the nature of the
of the
26 incompatibilities between the baseline and target systems.
27 [00136] Rules comprised by a rule set 28 may reference common
parameters but perform
28 different tests to identify different forms of incompatibilities that
may have different levels of
importance. For example, a version four operating system versus a version
three operating
2 system may be considered less costly to remedy and thus less detrimental
than a version five
3 operating system compared to a version one operating system. As can be
seen, even though the
4 operating systems are different in both cases, the nature of the
difference can also be considered
5 and different weights and/or remedies applied accordingly.
6 [00137] Rules can also test for similarities that indicate
contentions which can result in
7 incompatibilities between systems. For example, rules can check for name
conflicts with respect
8 to system names, database instance names, user names, etc.
9 [00138] The flow of data for applying exemplary rule sets 28 is
shown in Figure 15. In this
10 example, the system data gathered from a pair of systems 16 are
evaluated using three rule sets.
11 The rule engine 90 (see also Figure 7) evaluates the data parameters of
the systems 16 by
applying rule sets 1, 2 and 3 which comprise the exemplary rules A,
B, C, D, E and F. The
13 evaluation of the rules results in compatibility scores and zero or more
matched rule items for
14 each rule set 28. These results can be used for subsequent analyses,
such as combining with
15 workload compatibility results to obtain overall compatibility scores.
16 Rule Set Specification
17 [00139] Each rule set 28 has a unique rule set identifier (UUID),
rule set name, rule set
18 description, rule set version, rule set type (e.g. controlled by system
10 or by user), and a rule set
19 category (e.g. generic rule set categorization such as business,
configuration etc.).
20 [00140] As described above, each rule set 28 is also comprised of
one or more rules.
21 Rule Definition
22 [00141] A rule is conceptually a form of metadata 62 that
specifies a parameter, tests to
23 detect an incompatibility between the baseline and target systems 16,
the relative importance of
24 the incompatibility, and the costs associated with the remediating the
incompatibility. Each rule
is defined by a fixed number of fields as listed in Table 1 below.
Field              Description
Name               Rule name
Description        Rule description
Data Query Type    Query type to get data parameter (e.g. URI, Attribute, Alias)
Query Value        Data query specification based on query type
Baseline Test      Baseline test specification
Target Test        Target test specification
Weight             Rule penalty weight
Mutex Flag         Y/N - This flag is used at multiple levels as described below
Match Flag         Rule match name referenced by suppress flag
Suppress Flag      Rule dependency expression to determine whether to suppress rule
Remediation Costs  Estimated remediation costs for rule item if true
Enabled Flag       True/False

Table 1 - Rule Item Field Specification
2 [00142] The name and description fields are lay descriptions of
the condition or discrepancy
3 detected by the rule. These fields are used to provide management-level
summaries when
4 processing rule sets 28. The fields can provide as much or as little
information as required by the
application.
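[00142a] For illustration, the rule item fields of Table 1 could be carried in software roughly as in the following sketch. The class and attribute names, and the example alias name "operatingSystem", are hypothetical and chosen only to mirror the table; they are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RuleItem:
    """A differential rule item, mirroring the fields of Table 1."""
    name: str                            # lay name of the condition detected
    description: str                     # lay description of the condition
    data_query_type: str                 # "UriQuery", "AliasQuery" or "AttrQuery"
    query_value: str                     # URI, alias or attribute name, per the query type
    baseline_test: Optional[str]         # test against the value on the baseline system
    target_test: Optional[str]           # test against the value on the target system
    weight: float                        # penalty weight (100 = absolute constraint)
    mutex_flag: bool = False             # penalize once (True) or per matched instance (False)
    match_flag: Optional[str] = None     # symbolic flag set when the rule matches
    suppress_flag: Optional[str] = None  # expression of match flags that suppresses this rule
    remediation_cost: float = 0.0        # estimated cost of fixing the detected discrepancy
    enabled: bool = True

# Example: penalize different operating systems (compare Rule 1 of Figure 16).
os_rule = RuleItem(
    name="Different OS",
    description="Source and target have different operating systems",
    data_query_type="AliasQuery",
    query_value="operatingSystem",       # hypothetical alias name
    baseline_test=None,
    target_test="!=",
    weight=70,
    match_flag="DIFF_OS",
)
```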
6 Rule Query Specification
7 [00143] The data query type and value identify the data parameter
to be evaluated by the
8 rule. The query type specifies whether the rule applies to audited data
directly (UriQuery),
9 normalized values (AliasQuery) or attributes (AttrQuery). The query value
specifies the
parameter specification based on the query type. For UriQuery, AliasQuery and
AttrQuery
11 types, query values specify URI, alias and attribute names,
respectively.
12 [00144] The URI value conforms to a URI-like format used for
accessing parameters from
13 the native audit data model. The URI can specify the module, object and
property of the data
14 object or property that is being requested. The optional URI fragment
(i.e. the portion after the
"#" symbol) specifies the specific object instance (table row) that is being
evaluated, with "*"
16 denoting a wildcard that matches all instances.
1 Rule Match Test
2 [00145] If specified, the baseline field represents the literal
value that would need to match
3 the value of the object/property on the source system in order for the
rule to match. For objects
4 and object instances, the keywords "absent" and "present" are preferably
used to match cases
where that object is absent or present respectively. Similar to the baseline
field, the target field
6 allows a literal match against the value of the object/property on the
target system. The target
7 field also supports the absent/present specifiers. For numeric
properties, relational operators (>,
8 <,=, !=) can be used to cause the rule to trigger if the target value has
the specified relationship
9 with the source value.
[00146] In order to apply a rule to a target/baseline pair, the following
test specification can
11 be followed as shown in Table 2.
     Target    Baseline   Description
1                         Values are different
2    !=                   Values are different
3              !=         Values are different
4    >         ANY        Target > Baseline
5    <         ANY        Target < Baseline
6    =                    Values are the same
7              =          Values are the same
8                         Values are similar
9                         Values are similar
10   X         Y          Target = X and Baseline = Y
11   X                    Target = X and Baseline != X
12             Y          Target != Y and Baseline = Y

Table 2 - Target/Baseline Test Specification
13 [00147] The rule test specifications can be extended to include such
things as regular
14 expression pattern matching, logical test expressions (using AND, OR),
etc. as described below.
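[00147a] A minimal sketch of how the target/baseline tests of Table 2 might be applied to a pair of values follows. It is an interpretation for illustration only, not the rule engine 90 itself, and it covers only the literal, absent/present, "ANY" and relational cases described above.

```python
def matches(test, value, other):
    """Evaluate a single test cell from Table 2 against a value.

    test  -- the cell contents, e.g. None, "ANY", "absent", "present", "=", "!=", ">", "<",
             or a literal value to match
    value -- the target (or baseline) value being tested
    other -- the opposite value (baseline when testing the target, and vice versa)
    """
    if test is None or test == "ANY":    # empty or ANY cell: no constraint from this column
        return True
    if test == "absent":
        return value is None
    if test == "present":
        return value is not None
    if test == "=":
        return value == other
    if test == "!=":
        return value != other
    if test in (">", "<"):               # relational test against the opposite value
        return value > other if test == ">" else value < other
    return value == test                 # otherwise treat the cell as a literal to match

def rule_triggers(target_test, baseline_test, target_value, baseline_value):
    """The rule matches (an incompatibility is detected) when both cells are satisfied."""
    return (matches(target_test, target_value, baseline_value)
            and matches(baseline_test, baseline_value, target_value))

# Row 4 of Table 2: trigger when Target > Baseline (e.g. a directional memory rule).
print(rule_triggers(">", "ANY", 8192, 4096))   # True
```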
Rule Weight
16 [00148] The weight field specifies the relative importance of that
property and combination
17 of source/target values (if specified) in regard to the overall context
of the comparison. Higher
18 values indicate that the condition detected by the rule has a high
impact on the target
1 environment 12, with 100% being an "absolute constraint" that indicates
complete
2 incompatibility.
3 Mutex Flag
4 [00149] The mutex flag field can be used to avoid multiple
penalties that would otherwise
skew the scores. A "Y" in the mutex flag field specifies that multiple matches
of the same rule
6 43 will incur only a single penalty on the overall score (as specified in
the weight field), as
7 opposed to multiple accumulating penalties (which is the default
behaviour).
8 [00150] The mutex flag can be interpreted at multiple levels. When
comparing a target and
9 source system, should the rule specifier expand to a list (e.g. software
list), the rule is evaluated
for each instance. In this case, if the flag is a "Y", the score is penalized
by the rule weight a
11 maximum of one time, even if the rule was true for multiple instances.
If the flag is "N", the
12 score should be penalized for each instance that was evaluated to be
true. Furthermore, when
13 computing multi-dimensional compatibility scores (multiple sources
transferred to a single
14 target), the calculation is based on the union of the rule items that
were true. In this case, the flag
is used to determine whether to penalize the multi-stack score once or for
each unique rule
16 instance. The multi-dimensional compatibility analysis is described in
detail below.
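[00150a] The effect of the mutex flag on scoring can be illustrated with a short sketch; the helper name penalty_for_rule is not from the disclosure, and a simple accumulating penalty is assumed purely for illustration.

```python
def penalty_for_rule(weight, matched_instances, mutex):
    """Total penalty contributed by one rule.

    weight            -- the rule's penalty weight
    matched_instances -- how many instances (e.g. software list entries) the rule matched
    mutex             -- if True, penalize at most once regardless of the number of matches
    """
    if matched_instances == 0:
        return 0
    return weight if mutex else weight * matched_instances

# A patch-list rule with weight 5 that matched 7 differing patches:
print(penalty_for_rule(5, 7, mutex=True))    # 5  - single penalty
print(penalty_for_rule(5, 7, mutex=False))   # 35 - accumulating penalties
```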
17 Rule Dependency
18 [00151] The match flag field enables an optional symbolic flag to
be "set" when a rule
19 matches, and which can subsequently be used to suppress other rules
(through the "Suppress
Flags" field). This effectively allows rule dependencies to be modeled in the
rule set 28. The
21 suppress flag field allows symbolic flags (as specified in the "Match
Flag" field) to be used to
22 suppress the processing of rules. This allows specific checks to be
skipped if certain higher-level
23 conditions exist. For example, if the operating systems are different,
there is no need to check
24 the patches. It should be noted that this allows any arbitrary logic to
be constructed, such as how
logic can be built from NAND gates.
26 [00152] The remediation cost field is preferably optional. The
remediation field represents
27 the cost of "fixing" the system(s) (i.e. eliminating the condition or
discrepancy detected by the
1 rule 43). When analyzing differences between (or changes to) IT systems
this is used to
2 represent hardware/software upgrade costs, administrative costs and other
costs associated with
3 making the required changes to the target systems. The calculations
behind this field vary based
4 on the nature of the system and the parameter that would need to be
added, upgraded etc.
[00153] Each rule can include a true/false enable flag for enabling and
disabling the rule
6 item.
7 Rule Set Example
8 [00154] Figure 16 provides an example rule set 28, which includes
a number of rules. The
9 following refers to the number indicated in the leftmost column of Figure
16.
[00155] Rule 1 scrutinizes the normalized (AliasQuery) representation of
the operating
11 systems (e.g. WindowsTM, SolarisTM, AIXTM, LinuxTM, etc.) on both the
source and target
12 systems and heavily penalizes cases where these are different as evident
from the high weight
13 factor (70%). Rule 2 penalizes systems that have different operating
system versions (e.g.
14 WindowsTM NT vs WindowsTM 2000), and is suppressed (i.e. not processed)
in cases where the
systems have different overall operating systems (as detected in the previous
rule). Rule 3
16 detects if systems are in different time zones. Rule 4 penalizes
combinations of systems where
17 the target has less memory than the source (this is what is referred to
as a directional rule, which
18 can give differing results if sources and targets are reversed, e.g.
asymmetric results). Rule 5
19 operates directly against audit data and detects cases where the
operating system patch level
differs. This rule is not processed if either the operating system or the
operating system version
21 are different (since this renders the comparison of patches
meaningless).
22 [00156] Rule 6 scrutinizes the lists of all patches applied to the
source and target systems and
23 penalizes cases where they differ. The mutex flag is set, indicating
that the penalty is applied
24 only once, no matter how many patch differences exist. This rule is
ignored in cases where
either the operating system or operating system version are different. Rule 7
penalizes system
26 combinations of servers that are running the same OS but are configured
to run a different
27 number of kernel bits (e.g. 64-bit vs 32-bit). Rule 8 penalizes
combinations where there are
1 kernel parameters defined on the source that are not defined on the
target. This rule is not
2 applied if the operating systems are different.
3 [00157] Rule 9 scrutinizes a specific kernel setting (SHMMAX, the
setting that specifies how
4 much shared memory a system can have) and penalizes combinations where it
is set to a lower
5 value on the target than it is on the source system. Rule 10 penalizes
combinations of systems
6 that are running different database version, e.g. OracleTM 9 vs. OracleTM
8. Rule 11 penalizes
7 combinations of systems that are running different versions of OracleTM.
Rule 11 is suppressed
8 if the more specific Rule 10 is true. It should be noted that the
remediation cost is relatively
9 high, owing to the fact that it will take a software upgrade to eliminate
this discrepancy. In some
10 cases the remediation cost can be low where the upgrade is less
expensive. Rule 12 penalizes
11 combinations of systems that are running different versions of Apache.
It should be noted that
the remediation cost is relatively low, as Apache is an open source
product and the cost of
13 upgrade is based on the hourly cost of a system administrator and how
long it will take to
14 perform the upgrade.
[00158] Rule 13 scrutinizes a Windows-specific area of the audit
data to determine if the
16 source and target systems are running different service pack levels. It
should be noted that this
17 rule closely mirrors rule 5, which uses a rule specifier that
scrutinizes the UNIXTm/LinuxTm area
18 of the audit data. Rule 14 scrutinizes the lists of all hotfixes applied
to the source and target
19 systems and penalizes cases where they differ. This rule closely mirrors
rule 6, which scrutinizes
20 patches on UNIXTM and LinuxTM. Rule 15 detects differing startup
commands between systems.
21 Rule 16 is a rule to detect differing Paths between systems, and rule 17
detects differing System
22 Paths between systems.
23 [00159] Rule 18 penalizes system combinations where there are
services installed on the
24 source that are not installed on the target. This rule has the mutex
flag set, and will therefore
25 only penalize a system combination once, no matter how many services are
missing. Rule 19
26 penalizes system combinations where there are services started on the
source that are not started
27 on the target. It should be noted that both the weight and the
remediation cost are lower than the
28 previous rule, owing to the fact that it is generally easier and less
expensive to start a service than
1 install it. Finally, rule 20 penalizes combinations where the target
system is missing the virus
2 scanner software.
3 [00160] It will be appreciated that the above described rules and rule
set 28 are shown for
4 illustrative purposes only and that any combination of rules can be used
to achieve specific goals.
For example, rules that are applicable to the OS can be grouped together to
evaluate how a
6 system 16 compares to its peers. Similarly, rules pertaining to database,
Java applications etc.
7 can also be grouped.
8 [00161] As discussed above, Figure 12 provides a table listing several
additional example rule
9 sets and associated descriptions.
[00162] The system consolidation analysis computes the compatibility of a set
of systems 16
11 based not only on technical and workload constraints as exemplified
above, but also business
12 constraints. The business constraints can be expressed in rule sets 28,
similar to the technical
13 constraints discussed above.
14 Rule Specification Enhancements
[00163] In another embodiment, the rule test specification shown above in
Table 2 can be
16 extended to support the "NOT" expression, e.g. "!X" where "!" indicates
not equal to; and to
17 support regular expression patterns, e.g. "rx:Pattern" or "!rx:Pattern".
An example is shown in
18 Table 3 below.
Target    Baseline    Description
X         !Y          Target = X and Baseline != Y
!X                    Target != X
          !Y          Baseline != Y
rx:P1     !rx:P2      Target matches P1 and Baseline does not match P2
rx:P1                 Target matches P1
          !rx:P2      Baseline does not match P2
X         rx:P2       Target = X and Baseline matches P2
!X        rx:P2       Target != X and Baseline matches P2

Table 3: Extended Rule Test Specification
1 [00164] In yet another embodiment, an advanced rule set specification may
also be used to
2 support more flexible rule logic by considering the following rule item
specification shown in
3 Table 4.
Field Description
Name As before
Description As before
Data Query Type As before
Query Value As before
Test Test expression
Weight As before
Mutex Flag As before
Match Flag As before
Match Test Rule dependency expression to determine whether
to perform the corresponding match action
Match Action Action (run/suppress/stop) to execute based on
match test result
Remediation Costs As before
Enabled Flag As before
4 Table 4: Advanced Rule Item Specification
[00165] The Test, Match Test and Match Action fields above are incorporated to include a test expression and a match test with a corresponding match action to be executed according to the match test result.
[00166] In the advanced rule specification, the following elements are supported: variables, such as target, source, etc.; regular expressions, e.g. regex("pattern"); constants, such as quoted string values "lala", "1", etc.; test operators: =, !=, <, <=, >, >=, !~; logical operators: AND (&&), OR (||); and logical operator precedence, ( ) for nested expressions.
[00167] For example:
[00168] Target != Source;
[00169] Target >= Source && > "1024";
[00170] Target = regex("Windows") && Source != regex("Solaris|AIX"); and
[00171] (Target = "X" && Source = "Y") || (Target = "A" && Source = "B").
[00172] The test expression supports the following elements: match flag names; logical operators: AND (&&), OR (||); test operator: NOT (!); and logical operator precedence, ( ) for nested expressions. Examples assuming match flags of A, B, C, D, etc. are as follows:
[00173] A
[00174] A || B
[00175] A && B
[00176] (A && !B) || C
8 [00177] The match action specifies the action to perform if the match
test rule is true.
9 Possible actions are: run, evaluate the rule item; suppress, do not
evaluate the rule item; and stop.
[00178] Where basic and advanced rule sets are available for the same analysis
program, there
11 are a number of options for providing compatibility. The rule set
specification can be extended
12 to include a property indicating the minimum required rule engine
version that is compatible
13 with the rule set. In addition, the basic rule sets can be automatically
migrated to the advanced
14 rule set format since the advanced specification provides a super set of
functionality relative to
the basic rule set specification. It will be appreciated that as new rules and
rule formats are
16 added, compatibility can be achieved in other ways so long as legacy
issues are considered where
17 older rule versions are important to the analysis.
18 1-to-1 Rule-based Compatibility Analysis
19 [00179] An exemplary process flow for a rule-based compatibility
analysis is shown in greater
detail in Figures 17 and 18. When analyzing system compatibility, the list of
target and source
21 systems 16 are the same. The compatibility is evaluated in two
directions, e.g. for a Server A
22 and a Server B, migrating A to B is considered as well as migrating B to
A.
23 [00180] Turning first to Figure 17, for each rule set R (R = 1 to M
where M is the number of
24 rule sets) and for each target system T (T = 1 to N where N is the
number of systems), the rule
engine 90 first looks at each source system S (S = 1 to N). If the
source=target then the
1 configuration compatibility score for that source is set to 100, no
further analysis is required and
2 the next pair is analyzed. If the source and target are different, the
rules are evaluated against the
3 source/target pair to compute the compatibility score, remediation cost
and to compile the
4 associated rule details. Estimated remediation costs are optionally
specified with each rule item.
As part of the rule evaluation and subsequent compatibility score calculation,
if a rule is true, the
6 corresponding cost to address the deficiency is added to the remediation
cost for the pair of
7 systems 16 being analysed.
8 [00181] The evaluation of the rules is shown in Figure 18. The evaluation
of the rules
9 considers the snapshot data 18 for the source system and the target
system, as well as the
differential rule set 28 that is being applied. For each rule in the set 28, the
data referenced by the
11 rule is obtained for both the target and source. The rule is evaluated
by having the rule engine 90
12 compare the data. If the rule is not true (i.e. if the systems 16 are
the compatible according to the
13 rule definition) then the data 18 is not considered in the compatibility
score and the next rule is
14 evaluated. If the rule is true, the rule details are added to an
intermediate result. The
intermediate result includes all true rules.
16 [00182] Preferably, a suppression tag is included with each rule. As
discussed above, the
17 suppression tag indicates other rules that are not relevant if that rule
is true. The suppression flag
18 allows the program 10 to avoid unnecessary computations. A mutex flag is
also preferably used
19 to avoid unfairly reducing the score for each true rule when the rules
are closely affected by each
other.
21 [00183] Once each rule has been evaluated, a list of matched rules is
created by removing
22 suppressed rule entries from the intermediate results based on rule
dependencies, which are
23 defined by rule matching and suppression settings (e.g. match flags and
suppression tags). The
24 compatibility score for that particular source/target pair is then
computed based on the matched
rules, weights and mutex settings. Remediation costs are also calculated based
on the cost of
26 updating/upgrading etc. and the mutex settings.
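[00183a] To make the above concrete, the following sketch walks a small rule set over a source/target pair: it collects the true rules, removes entries suppressed by match flags, and derives a score and remediation cost. It is an illustrative interpretation only; a simple additive penalty model is assumed, mutex handling is omitted for brevity, and none of the function names are from the disclosure.

```python
def evaluate_rules(rules, source_data, target_data):
    """Return the rules that are true for this source/target pair (the intermediate result)."""
    true_rules = []
    for rule in rules:
        if not rule["enabled"]:
            continue
        s = source_data.get(rule["query"])
        t = target_data.get(rule["query"])
        if rule["test"](t, s):            # rule is true => an incompatibility was detected
            true_rules.append(rule)
    return true_rules

def matched_rules(true_rules):
    """Remove suppressed entries based on the match/suppress flags of the true rules."""
    set_flags = {r["match_flag"] for r in true_rules if r.get("match_flag")}
    return [r for r in true_rules
            if not (r.get("suppress_flag") and r["suppress_flag"] in set_flags)]

def score_pair(rules, source_data, target_data):
    """Compute a 0-100 compatibility score and remediation cost (illustrative additive model)."""
    matched = matched_rules(evaluate_rules(rules, source_data, target_data))
    penalty = sum(r["weight"] for r in matched)
    cost = sum(r.get("remediation_cost", 0) for r in matched)
    return max(0, 100 - penalty), cost, matched

# Two toy rules: different OS (weight 70) and, only if the OS matches, different OS version.
rules = [
    {"enabled": True, "query": "os", "test": lambda t, s: t != s,
     "weight": 70, "match_flag": "DIFF_OS", "remediation_cost": 5000},
    {"enabled": True, "query": "os_version", "test": lambda t, s: t != s,
     "weight": 30, "suppress_flag": "DIFF_OS", "remediation_cost": 1000},
]
src = {"os": "Linux", "os_version": "4"}
tgt = {"os": "Linux", "os_version": "5"}
score, cost, matched = score_pair(rules, src, tgt)
print(score, cost, [r["weight"] for r in matched])   # 70 1000 [30] - only the version rule matched
```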
27 [00184] Turning back to Figure 17, the current target is then evaluated
against all remaining
28 sources and then the next target is evaluated. As a result, an N x N map
32 can be created that
1 shows a compatibility score for each system against each other system.
The map 32 can be
2 sorted by grouping the most compatible systems. The sorted map 32 is
comprised of every
3 source/target combination and thus provides an organized view of the
compatibilities of the
4 systems 16.
5 [00185] Preferably, configuration compatibility results are then
generated for each rule set 28,
comprising the map 32 (e.g. Figure 4) and, for each source-target pair, details pertaining to the configuration compatibility scoring weights, remediation costs and applicable rules. The
8 details can preferably be pulled for each source/target pair by selecting
the appropriate cell 92
9 (see Figure 19).
10 1-to-1 Workload Compatibility Analysis
11 [00186] The workload compatibility analysis evaluates the compatibility
of each source-target
12 pair with respect to one or more workload data types 30. The analysis
employs a workload
13 stacking model to combine the source workloads onto the target system.
The combined
14 workloads are then evaluated using threshold and a scoring algorithm to
calculate a compatibility
15 score for each workload type.
16 Workload Data Types and Benchmarks
17 [00187] System workload constraints must be assessed when considering
consolidation to
18 avoid performance bottlenecks. Workload types representing particularly
important system
19 resources include % CPU utilization, memory usage, disk space used, disk
I/O throughput and network I/O throughput. The types of workload analyzed can be extended
to support additional
21 performance metrics. As noted above, example workload types are
summarized in the table
22 shown in Figure 13.
1 [00188] Each workload type can be defined by the properties listed in
Table 5 below:
Property           Description                                              Examples
Name               Workload key name                                        CPU_Utilization
Display Name       Workload display name for UI                             CPU Utilization
Benchmark Type     Benchmark type corresponding to workload type            cpu
Alias Name         Alias to get workload values from repository             CpuDays
Alias File         Alias file containing above alias                        cpu_workload_alias.xml
Description        Short description of workload type                       CPU Utilization
Unit               Unit of workload value
Test as percent?   Boolean flag indicating whether to test the workload     true
                   against a threshold as a percentage (true) or as an
                   absolute value (false)

Table 5: Workload Type Definition
3 [00189] Workload values can be represented as percentages (e.g. %CPU
used) or absolute
values (e.g. disk space used in MB, disk I/O in MB/sec).
[00190] The term workload benchmark refers to a measure of a system's
capability that may
6 correspond to one or more workload types. Workload benchmarks can be
based on industry
7 benchmarks (e.g. CINT2000 for processing power) or the maximum value of a
system resource
(e.g. total disk space, physical memory, network I/O bandwidth, maximum disk I/O rate).
9 Benchmarks can be used to normalize workload types that are expressed as
a percentage (e.g.
%CPU used) to allow direct comparison of workloads between different systems
16.
11 [00191] Benchmarks can also be used to convert workload types 30 that
are expressed as
12 absolute values (e.g. disk space used in MB) to a percentage (e.g. %
disk space used) for
13 comparison against a threshold expressed as a percentage. Each benchmark
type can be defined
14 by the following in Table 6:
Property         Description                                                Example
Name             Benchmark name                                             cpu
Default value    Default benchmark value if not resolved by the other       <none>
                 means (optional)
Alias name       Alias to get benchmark value for specific system           cpuBenchmark
                 (optional)
Alias file       File containing alias specified above                      benchmark_alias.xml
Attribute name   System attribute value to lookup to get benchmark value    CINT2000
                 (optional)
Use alias first  Boolean flag indicating whether to try the alias or the    true
                 attribute lookup first

Table 6: Workload Benchmark Definition
2 [00192] System benchmarks can normalize workloads as follows. For systems
X and Y, with
3 CPU benchmarks of 200 and 400 respectively (i.e. Y is 2x more powerful
than X), if systems X
4 and Y have average CPU utilizations of 10% and 15% respectively, the
workloads can be
normalized through the benchmarks as follows. To normalize X's workload to Y,
multiply X's
6 workload by the benchmark ratio X/Y, i.e. 10% x 200/400 = 5%.
7 [00193] Stacking X onto Y would then yield a total workload of 5% + 15% =
20%.
8 Conversely, stacking Y onto X would yield the following total workload:
10% + 15% x 400/200
9 =40%.
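[00193a] The benchmark normalization of paragraphs [00192] and [00193] reduces to a simple ratio, as the following sketch shows; the function names are illustrative only.

```python
def normalize(workload_pct, source_benchmark, target_benchmark):
    """Express a source system's %-based workload in terms of the target's capacity."""
    return workload_pct * source_benchmark / target_benchmark

def stack(target_pct, source_pcts, source_benchmarks, target_benchmark):
    """Combine normalized source workloads onto the target workload."""
    return target_pct + sum(normalize(w, b, target_benchmark)
                            for w, b in zip(source_pcts, source_benchmarks))

# Systems X (benchmark 200, 10% CPU) and Y (benchmark 400, 15% CPU):
print(stack(15, [10], [200], 400))   # 20.0 -> stacking X onto Y
print(stack(10, [15], [400], 200))   # 40.0 -> stacking Y onto X
```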
Workload Data Model
11 [00194] As discussed above, workload data is collected for each system
16 through various
12 mechanisms including agents, standard instrumentation (e.g. Windows
Performance MonitorTM,
13 UNIXTM System Activity Reporter), custom scripts, third party
performance monitoring tools,
14 etc. Workload data is typically collected as discrete time series data.
Higher sample frequencies
provide better accuracy for the analysis (5 minute interval is typical). The
workload data values
16 should represent the average values over the sample period rather than
instantaneous values. An
17 hour of CPU workload data for three fictional systems (A, B and C) is
listed below in Table 7:
Timestamp %CPU used (A) %CPU used (B) %CPU used (C)
03/01/07 00:00:00 10 0 0
03/01/07 00:05:00 12 0 0
03/01/07 00:10:00 18 10 4
03/01/07 00:15:00 22 10 6
03/01/07 00:20:00 25 15 7
03/01/07 00:25:00 30 25 8
03/01/07 00:30:00 30 35 12
03/01/07 00:35:00 35 39 15
03/01/07 00:40:00 39 45 19
03/01/07 00:45:00 41 55 21
03/01/07 00:50:00 55 66 28
03/01/07 00:55:00 80 70 30
1 Table 7: Sample Workload Data (Time series)
2 [00195] Data from different sources may need to be normalized to common
workload data
3 types 30 to ensure consistency with respect to what and how the data is
measured. For example,
4 %CPU usage may be reported as Total %CPU utilization, %CPU idle, %CPU
system, %CPU
user, %CPU I/O, etc. Disk utilization may be expressed in different units such
as KB, MB,
6 blocks, etc.
7 [00196] The time series workload data can be summarized into hourly
quartiles. Specifically,
8 the minimum, 1st quartile, median, 3rd quartile, maximum, and average
values are computed for
9 each hour. Based on the workload data from the previous example, the
corresponding
summarized workload statistics are listed below in Table 8:
System   Hour   %CPU (Avg)   %CPU (Min)   %CPU (Q1)   %CPU (Q2)   %CPU (Q3)   %CPU (Max)
A        0      33.1         10           20          30          40          80
B        0      30.8         0            10          30          50          70
C        0      12.5         0            5           10          20          30

Table 8: Summarized Workload Data (Quartiles) for Hour 0
3
4 [00197] The compatibility analysis for workload uses the hourly
quartiles. These statistics
allow the analysis to emphasize the primary operating range (e.g. 3rd
quartile) while reducing
6 sensitivity to outlier values.
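[00197a] The hourly summarization of paragraph [00196] can be sketched with the Python statistics module; note that the module's quartile convention may differ slightly from the one used to produce Table 8, so the quartile figures may not match to the digit.

```python
import statistics
from collections import defaultdict

def summarize_hourly(samples):
    """samples: list of (hour, value) pairs for one system and workload type.

    Returns, per hour: (average, minimum, Q1, median, Q3, maximum)."""
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    summary = {}
    for hour, values in by_hour.items():
        q1, q2, q3 = statistics.quantiles(values, n=4)   # quartile boundaries
        summary[hour] = (statistics.mean(values), min(values), q1, q2, q3, max(values))
    return summary

# Hour 0 of system A from Table 7 (5-minute %CPU samples):
system_a = [(0, v) for v in [10, 12, 18, 22, 25, 30, 30, 35, 39, 41, 55, 80]]
print(summarize_hourly(system_a)[0])   # average ~33.1, min 10, max 80; quartiles near 20/30/40
```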
7 Workload Data Extraction
8 [00198] Workload data is typically collected and stored in the workload
data cache 58 for
9 each system 16 for multiple days. At least one full day of workload data
should be available for
the analysis. When analyzing workloads, users can specify a date range to
filter the workload
11 data under consideration. A representative day is selected from this
subset of workload data for
12 the analysis. The criteria for selecting a representative day should be
flexible. A preferable
13 default assessment of the workload can select the worst day as the
representative day based on
14 average utilization. A less conservative assessment may consider the Nth
percentile (e.g. 95th)
day to eliminate outliers. Preferably, the worst days (based on daily average)
for each system
16 and for each workload type are chosen as the representative days.
17 [00199] The data extraction process flow for the workload compatibility
analysis is shown in
18 Figure 20. Preferably, the workload data cache 58 includes data obtained
during one or more
19 days. For each system 16 in the workload data set, for each workload
data type 30, get the
workload data for the specified date range, determine the most representative
day of data, (e.g. if
21 it is the worst day) and save it in the workload data snapshot. In the
result, a snapshot of a
22 representative day of workload data is produced for each system 16.
1 Workload Stacking
2 [00200] To evaluate the compatibility of one or more systems with respect
to server
3 consolidation, the workloads of the source systems are combined onto the
target system. Some
4 types of workload data are normalized for the target system. For example,
the %CPU utilization
5 is normalized using the ratio of target and source CPU processing power
benchmarks. The
6 consolidated workload for a specific hour in the representative day is
approximated by
7 combining the hourly quartile workloads.
8 [00201] There are two strategies for combining the workload quartiles,
namely original and
9 cascade. The original strategy simply adds like statistical values (i.e.
maximum, third quartile,
10 medians, etc.) of the source systems to the corresponding values of the
target system. For
11 example, combining the normalized CPU workloads of systems A and B onto
C (from Table 8),
12 the resulting consolidated workload statistics for hour 0 are:
[00202] Maximum = MaxA + MaxB + MaxC = 180%
[00203] 3rd Quartile = Q3A + Q3B + Q3C = 110%
[00204] Median = Q2A + Q2B + Q2C = 70%
[00205] 1st Quartile = Q1A + Q1B + Q1C = 35%
[00206] Minimum = MinA + MinB + MinC = 10%
18 [00207] The resulting sums can exceed theoretical maximum
capacity/benchmark values (e.g.
19 >100% CPU used).
20 [00208] The cascade strategy processes the statistical values in
descending order, starting with
21 the highest statistical value (i.e. maximum value). The strategy adds
like statistical values as
22 with original, but may clip the resulting sums if they exceed a
configurable limit and cascades a
23 portion of the excess value to the next statistic (i.e. the excess of
sum of the maximum values is
cascaded to the 3rd quartile). The relevant configuration properties for the
cascade calculation are
25 listed below in Table 9.
Property                          Description                                          Example
wci.cascade_overflow              Boolean flag indicating whether to apply cascade     true|false
                                  (true) or original (false) strategy
wci.cascade_overflow_proportion   Fraction of overflow to cascade to next value        0.5
wci.clip_type                     Flag indicating whether to use benchmark value       limit|max
                                  (max) or analysis workload threshold (limit) as
                                  the clipping limit
wci.clip_limit_ratio              If clip_type is limit, this is the ratio to apply    2
                                  to workload limit to determine actual clipping
                                  limit

Table 9: Workload Cascade Configuration
[00209] The following example applies the cascade calculation to the example workload data from Table 7 with the following settings:
4 [00210] wci.cascade_overflow = true
[00211] wci.cascade_overflow_proportion = 0.5
6 [00212] wci.clip_type = max
7 [00213] The max clip type indicates that the clipping level is the
maximum value of the
8 workload type. In this example, the clipping level is 100% since the
workload type is expressed
9 as a percentage (%CPU). The overflow proportion indicates that 0.5 of the
excess amount above
the clipping level should be cascaded to the next statistic. The consolidated
workload statistics
11 are computed as follows:
[00214] MaxOriginal = MaxA + MaxB + MaxC
[00215] = 180%
[00216] MaxClipped = Minimum of (MaxOriginal, clipping level)
[00217] = Minimum of (180, 100)
[00218] = 100%
[00219] MaxExcess = MaxOriginal - MaxClipped
[00220] = 180% - 100%
[00221] = 80%
[00222] MaxOverflow = MaxExcess * wci.cascade_overflow_proportion
[00223] = 80% * 0.5
[00224] = 40%
[00225] Q3Original = Q3A + Q3B + Q3C
[00226] = 110%
[00227] Q3Cascade = Q3Original + MaxOverflow
[00228] = 110% + 40%
[00229] = 150%
[00230] Q3Clipped = Minimum of (Q3Cascade, clipping level)
[00231] = Minimum of (150, 100)
[00232] = 100%
[00233] Q3Excess = 50%
[00234] Q3Overflow = 25%
[00235] Q2Original = 70%
[00236] Q2Cascade = Q2Original + Q3Overflow
[00237] = 70% + 25%
[00238] = 95%
[00239] Q2Clipped = Minimum of (Q2Cascade, clipping level)
[00240] = Minimum of (95, 100)
[00241] = 95%
[00242] Q2Overflow = 0%
[00243] Q1Original = 35%
[00244] Q1Cascade = 35%
[00245] Q1Clipped = 35%
[00246] Q1Overflow = 0%
[00247] MinOriginal = 10%
[00248] MinClipped = 10%
12 [00249] The consolidated statistics for the above example are summarized
in the following
13 Table 10. The clipped values are net results of the analysis, namely the
new answer.
Statistic Original Cascade Clipped Excess Overflow
Max 180 180 100 80 40
Q3 110 150 100 50 25
Q2 70 95 95 0 0
Q1 35 35 35 0 0
Min 10 10 10 0 0
14 Table 10: Cascaded Statistics (Example 1)
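[00249a] The cascade combination can also be expressed compactly. The sketch below (the function name is illustrative) reproduces the Clipped column of Table 10 under the Example 1 settings, i.e. clipping at the workload maximum of 100% and cascading half of the excess.

```python
def cascade_combine(stats_per_system, clipping_level, overflow_proportion):
    """Combine like statistics in descending order (Max, Q3, Q2, Q1, Min), clipping each
    sum at the clipping level and cascading a proportion of the excess to the next statistic.

    stats_per_system: list of dicts with keys 'max', 'q3', 'q2', 'q1', 'min'."""
    order = ["max", "q3", "q2", "q1", "min"]
    combined, overflow = {}, 0.0
    for stat in order:
        original = sum(s[stat] for s in stats_per_system)
        cascaded = original + overflow
        clipped = min(cascaded, clipping_level)
        excess = cascaded - clipped
        overflow = excess * overflow_proportion
        combined[stat] = clipped
    return combined

# Normalized hour-0 statistics of systems A, B and C (Table 8), Example 1 settings:
systems = [
    {"max": 80, "q3": 40, "q2": 30, "q1": 20, "min": 10},   # A
    {"max": 70, "q3": 50, "q2": 30, "q1": 10, "min": 0},    # B
    {"max": 30, "q3": 20, "q2": 10, "q1": 5,  "min": 0},    # C
]
print(cascade_combine(systems, clipping_level=100, overflow_proportion=0.5))
# clipped values: Max 100, Q3 100, Q2 95, Q1 35, Min 10 - the Clipped column of Table 10
```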
[00250] Similarly, the following example applies cascade calculation to the
example workload
16 data from Table 7 with the following settings:
1 [00251] wci.cascade_overflow = true
2 [00252]
wci.cascade_overflow_proportion = 0.5
3 [00253] wci.clip_type = limit
4 [00254] wci.clip_limit_ratio = 2
[00255] This example specifies a limit clip type to indicate that the clipping
level is based on
6 the analysis threshold for the workload type. The clip_limit_ratio
specifies the ratio to apply to
7 the threshold to calculate the actual clipping level. For instance, if
the threshold is 80% and the
8 clip limit ratio is 2, the clipping level is 160%. The consolidated
workload statistics based on the
9 above settings listed below in Table 11.
Statistic Original Cascade Clipped Excess Overflow
Max 180 180 160 20 10
Q3 110 120 120 0 0
Q2 70 70 70 0 0
Q1 35 35 35 0 0
Min 10 10 10 0 0
Table 11: Cascaded Statistics (Example 2)
11 Workload Compatibility Scoring
12 [00256] Workload compatibility scores quantify the compatibility of
consolidating one or
13 more source systems onto a target system. The scores range from 0 to 100
with higher scores
14 indicating better compatibility. The scores are computed separately for
each workload type 30
and are combined with the system configuration and business-related
compatibility scores to
16 determine the overall compatibility scores for the systems 16. The
workload scores are based on
17 the following: combined system workload statistics at like times and
worst case, user-defined
18 workload thresholds, penalty calculation, score weighting factors, and
workload scoring formula.
1 [00257] Workloads are assessed separately for two scenarios: like-times
and worst case. The
2 like times scenario combines the workload of the systems at like times
(i.e. same hours) for the
3 representative day. This assumes that the workload patterns of the
analyzed systems are
4 constant. The worst case scenario time shifts the workloads for one or
more systems 16 to
5 determine the peak workloads. This simulates the case where the workload
patterns of the
6 analyzed systems may occur earlier or be delayed independently. The
combined workload
7 statistics (maximum, 31d quartile, median, 1st quartile and minimum) are
computed separately for
8 each scenario.
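[00257a] The disclosure does not spell out the exact time-shifting used for the worst case scenario; one plausible reading, assumed here purely for illustration, is that each system may contribute its peak value for a statistic regardless of the hour. The sketch below contrasts that assumption with the like-times combination.

```python
def combine_like_times(hourly, stat):
    """Sum a statistic at like hours across systems and return the peak combined hour.

    hourly: list (one entry per system) of dicts mapping hour -> {stat: value}."""
    hours = hourly[0].keys()
    return max(sum(system[h][stat] for system in hourly) for h in hours)

def combine_worst_case(hourly, stat):
    """Assume each system's peak hour may coincide: sum the per-system peaks (an assumption
    about the time-shifting described for the worst-case scenario)."""
    return sum(max(system[h][stat] for h in system) for system in hourly)

# Two systems whose 3rd-quartile CPU peaks occur in different hours:
a = {0: {"q3": 40}, 1: {"q3": 10}}
b = {0: {"q3": 15}, 1: {"q3": 50}}
print(combine_like_times([a, b], "q3"))   # 60 (hour 1)
print(combine_worst_case([a, b], "q3"))   # 90 (40 + 50)
```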
9 [00258] For a specific analysis, workload thresholds are specified for
each workload type.
10 The workload scores are penalized as a function of the amount the
combined workload exceeds
the threshold. Through the workload type definition (Table 5), the
workload data (Table 7) and
12 corresponding thresholds can be specified independently as percentages
or absolute values. The
13 workload data type 30 is specified through the unit property and the
threshold data type is
14 specified by the test as percent flag. The common workload/threshold
data type permutations
15 are handled as follows.
16 [00259] If the workload is expressed as a percentage and test as percent
is true (e.g. %CPU),
17 normalize workload percentage using the benchmark and compare as
percentages.
18 [00260] If the workload is expressed as an absolute value and test as
percent is true (e.g. disk
19 space), convert the workload to a percentage using benchmark and compare
as percentages.
20 [00261] If workload unit is expressed as an absolute value and test as
percent is false (e.g. network I/O), compare workload value against threshold as absolute
values.
22 [00262] A penalty value ranging from 0 to 1 can be calculated for each
workload statistic and
23 for each scenario as a function of the threshold and the clipping level.
The penalty value is
24 computed as follows:
[00263] If Workload <= Threshold,
[00264] Penalty = 0
[00265] If Workload >= Clipping Level,
[00266] Penalty = 1
[00267] If Threshold < Workload < Clipping Level,
[00268] Penalty = (Workload Value - Threshold) / (Clipping Level - Threshold)
[00269] Using Example 2 from above (threshold = 80%, clipping level = 160%),
the sliding
6 scale penalty values are computed as follows:
[00270] PenaltyMax = (160 - 80) / (160 - 80)
[00271] = 1
[00272] PenaltyQ3 = (120 - 80) / (160 - 80)
[00273] = 0.5
[00274] PenaltyQ2 = 0 [since 70 < 80]
[00275] PenaltyQ1 = 0 [since 35 < 80]
[00276] PenaltyMin = 0 [since 10 < 80]
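[00276a] The sliding-scale penalty of paragraphs [00262] to [00268] can be written directly; the sketch below (with an illustrative function name) reproduces the Example 2 penalty values.

```python
def penalty(workload, threshold, clipping_level):
    """Sliding-scale penalty between 0 and 1 for one combined workload statistic."""
    if workload <= threshold:
        return 0.0
    if workload >= clipping_level:
        return 1.0
    return (workload - threshold) / (clipping_level - threshold)

# Example 2: threshold 80%, clipping level 160%, combined (clipped) statistics from Table 11.
for stat, value in [("Max", 160), ("Q3", 120), ("Q2", 70), ("Q1", 35), ("Min", 10)]:
    print(stat, penalty(value, 80, 160))   # 1.0, 0.5, 0.0, 0.0, 0.0
```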
[00277] The workload score is composed of the weighted penalty values. Penalty
weights can
16 be defined for each statistic and scenario as shown in Table 12 below.
Statistic      Scenario      Property                     Example
Maximum        Like Times    wci.score.max_like_times     0.2
Maximum        Worst Times   wci.score.max_worst_times    0.1
3rd Quartile   Like Times    wci.score.q3_like_times      0.4
3rd Quartile   Worst Times   wci.score.q3_worst_times     0.3
Median         Like Times    wci.score.q2_like_times      0
Median         Worst Times   wci.score.q2_worst_times     0
1st Quartile   Like Times    wci.score.q1_like_times      0
1st Quartile   Worst Times   wci.score.q1_worst_times     0
Minimum        Like Times    wci.score.min_like_times     0
Minimum        Worst Times   wci.score.min_worst_times    0

Table 12: Score Weighting Factors
3 [00278] The weights are used to compute the workload score from the
penalty values. If the
4 sum of the weights exceeds 1, the weights should be normalized to 1.
[00279] The actual score is computed for a workload type by subtracting the
sum of the
6 weighted penalties from 1 and multiplying the result by 100:
7 [00280] Score = 100 * (1 - Sum (Weight * Penalty))
[00281] Using the previous example and assuming that the like times are the same as the worst times, the score is calculated as follows:
[00282] Score = 100 * (1 - (WeightMax Worst * PenaltyMax Worst + WeightMax Like * PenaltyMax Like +
[00283] WeightQ3 Worst * PenaltyQ3 Worst + WeightQ3 Like * PenaltyQ3 Like +
[00284] WeightQ2 Worst * PenaltyQ2 Worst + WeightQ2 Like * PenaltyQ2 Like +
[00285] WeightQ1 Worst * PenaltyQ1 Worst + WeightQ1 Like * PenaltyQ1 Like +
[00286] WeightMin Worst * PenaltyMin Worst + WeightMin Like * PenaltyMin Like))
[00287] = 100 * (1 - (0.1*1 + 0.2*1 + 0.3*0.5 + 0.4*0.5))
[00288] = 35
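[00288a] Combining the Table 12 weights with the penalty values gives the workload score. The sketch below assumes, per paragraph [00278], that the weights already sum to no more than 1; the key names are illustrative only.

```python
def workload_score(penalties, weights):
    """penalties/weights: dicts keyed by (statistic, scenario), e.g. ("max", "worst")."""
    total = sum(weights.get(key, 0.0) * p for key, p in penalties.items())
    return 100 * (1 - total)

weights = {("max", "like"): 0.2, ("max", "worst"): 0.1,
           ("q3", "like"): 0.4, ("q3", "worst"): 0.3}
# Like times assumed equal to worst times, with the Example 2 penalties (Max 1, Q3 0.5):
penalties = {("max", "like"): 1.0, ("max", "worst"): 1.0,
             ("q3", "like"): 0.5, ("q3", "worst"): 0.5}
print(round(workload_score(penalties, weights), 1))   # 35.0
```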
1 1-to-1 Workload Compatibility Analysis Process Flow
2 [00289] A flow chart illustrating a workload compatibility analysis is
shown in Figure 21.
3 When analyzing 1-to-1 workload compatibility, the list of target and
source systems 16 is the
4 same. The compatibility is evaluated in two directions, e.g. for Server A
and Server B, migrating
A to B is considered as well as migrating B to A.
6 [00290] The workload analysis considers one or more workload types, e.g.
CPU busy, the
7 workload limits 94, e.g. 75% of the CPU being busy, and the system
benchmarks 96, e.g. relative
8 CPU power. Each system 16 in the workload data set is considered as a
target (T = 1 to N) and
9 compared to each other system 16 in the data set 18 as the source (S = 1
to N). The analysis
engine 64 first determines if the source and target are the same. If yes, then
the workload
11 compatibility score is set to 100 and no additional analysis is required
for that pair. If the source
12 and target are different, the system benchmarks are then used to
normalize the workloads (if
13 required). The normalized source workload histogram is then stacked on
the normalized target
14 system.
[00291] System benchmarks can normalize workloads as follows. For systems X
and Y, with
16 CPU benchmarks of 200 and 400 respectively (i.e. Y is 2x more powerful
than X), if systems X
17 and Y have average CPU utilization of 10% and 15% respectively, the
workloads can be
18 normalized through the benchmarks as follows. To normalize X's workload
to Y, multiply X's
19 workload by the benchmark ratio X/Y, i.e. 10% x 200/400 = 5%. Stacking X
onto Y would then
yield a total workload of 5% + 15% = 20%. Conversely, stacking Y onto X would
yield the
21 following total workload: 10% + 15% x 400/200 = 40%.
22 [00292] Using the stacked workload data, the workload compatibility
score is then computed
23 for each workload type as described above.
24 [00293] Each source is evaluated against the target, and each target is
evaluated to produce an
N x N map 34 of scores, which can be sorted to group compatible systems (see
Figure 5).
Preferably, a workload compatibility result is generated that includes
the map 34 and workload
27 compatibility scoring details and normalized stacked workload histograms
that can be viewed by
28 selecting the appropriate cell 92 (see Figure 22). The workload
compatibility results are then
1 combined with the rule-based compatibility results to produce the overall
compatibility scores,
2 described below.
1-to-1 Overall Compatibility Score Calculation
[00294] The results of the rule and workload compatibility analyses are combined to compute an overall compatibility score for each server pair. These scores preferably range from 0 to 100, where higher scores indicate greater compatibility, with 100 indicating complete or 100% compatibility.
[00295] As noted above, the analysis input can include importance factors. For each rule set 28 and workload type 30 included in the analysis, an importance factor 88 can be specified to adjust the relative contribution of the corresponding score to the overall score. The importance factor 88 is an integer, preferably ranging from 0 to 10. A value of 5 has a neutral effect on the contribution of the component score to the overall score. A value greater than 5 increases the importance whereas a value less than 5 decreases the contribution.
[00296] The overall compatibility score for the system pair is computed by combining the individual compatibility scores using a formula specified by an overlay algorithm which performs a mathematical operation such as multiply or average, and the score is recorded.
[00297] Given the individual rule and workload compatibility scores, the overall compatibility score can be calculated by using the importance factors as follows for a "multiply" overlay:
O = 100 * [(100 - (100 - S1) * F1 / 5) / 100] * [(100 - (100 - S2) * F2 / 5) / 100] * ... * [(100 - (100 - Sn) * Fn / 5) / 100]
[00298] where O is the overall compatibility score, n is the total number of rule sets 28 and workload types 30 included in the analysis, Si is the compatibility score of the ith rule set 28 or workload type 30, and Fi is the importance factor of the ith rule set 28 or workload type 30.
[00299] It can be appreciated that setting the importance factor 88 to zero eliminates the contribution of the corresponding score to the overall score. Also, setting the importance factor to a value less than 5 reduces the score penalty by 20% to 100% of its original value.
[00300] For example, a compatibility score of 90 implies a score penalty of 10 (i.e. 100 - 90 = 10). Given an importance factor of 1, the adjusted score is 98 (i.e. 100 - 10*1/5 = 100 - 2 = 98). On the other hand, setting the importance factor to a value greater than 5 increases the score penalty by 20% to 100% of its original value. Using the above example, given a score of 90 and an importance factor of 10, the adjusted score would be 80 (i.e. 100 - 10*10/5 = 100 - 20 = 80). The range of importance factors 88 and their impact on the penalty scores are summarized below in Table 13.
Importance Factor | Effect on Score Penalty | Original Score | Original Score Penalty | Adjusted Score Penalty | Adjusted Score
0  | -100% | 90 | 10 | 0  | 100
1  | -80%  | 90 | 10 | 2  | 98
2  | -60%  | 90 | 10 | 4  | 96
3  | -40%  | 90 | 10 | 6  | 94
4  | -20%  | 90 | 10 | 8  | 92
5  | 0     | 90 | 10 | 10 | 90
6  | +20%  | 90 | 10 | 12 | 88
7  | +40%  | 90 | 10 | 14 | 86
8  | +60%  | 90 | 10 | 16 | 84
9  | +80%  | 90 | 10 | 18 | 82
10 | +100% | 90 | 10 | 20 | 80
Table 13: Importance Factors
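A minimal Python sketch of the importance-factor adjustment and the "multiply" overlay described above is shown below; the function names are assumptions made for illustration, and the sample calls reproduce two rows of Table 13.

    # The penalty (100 - score) is scaled by importance_factor / 5, where a
    # factor of 5 is neutral; adjusted component scores are then multiplied.
    def adjust_score(score, importance_factor):
        return 100 - (100 - score) * importance_factor / 5

    def overall_score(components):
        # components: list of (score, importance_factor) per rule set/workload type
        overall = 100.0
        for score, factor in components:
            overall *= adjust_score(score, factor) / 100
        return overall

    print(adjust_score(90, 1))   # 98.0 (matches Table 13)
    print(adjust_score(90, 10))  # 80.0 (matches Table 13)
    print(overall_score([(90, 5), (80, 5)]))  # about 72: 100 * 0.90 * 0.80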
[00301] If more systems 16 are to be examined, the above process is repeated. When overall compatibility analysis scores for all server pairs have been computed, the map 36 is displayed graphically (see Figure 6) and each cell 92 is linked to a scorecard 98 that provides further information (Figure 23). The further information can be viewed by selecting the cell 92. A sorting algorithm is then preferably executed to configure the map 36 as shown in Figure 6.
Visualization and Mapping of Compatibility Scores
[00302] As mentioned above, the 1-to-1 compatibility analysis of N systems computes N x N compatibility scores by individually considering each system 16 as a consolidation source and as a target. Preferably, the scores range from 0 to 100 with higher scores indicating greater system compatibility. The analysis will thus also consider the trivial cases where systems 16 are consolidated with themselves, which would be given a maximum score, e.g. 100. For display and reporting purposes, the scores are preferably arranged in an N x N map form.
Rule-based Compatibility Analysis Visualization
[00303] An example of a rule-based compatibility analysis map 32 is shown in Figure 4. The compatibility analysis map 32 provides an organized graphical mapping of system compatibility for each source/target system pair on the basis of configuration data. The map 32 shown in Figure 4 is structured having each system 16 in the environment 12 listed both down the leftmost column and along the uppermost row. Each row represents a consolidation source system, and each column represents the possible consolidation target. Each cell 92 contains the score corresponding to the case where the row system is consolidated onto the column (target) system 16.
[00304] The preferred output shown in Figure 4 arranges the systems 16 in the map 32 such that 100% compatibility exists along the diagonal, where each server is naturally 100% compatible with itself. The map 32 is preferably displayed such that each cell 92 includes a numerical score and a shade of a certain colour. As noted above, the higher the score (from zero (0) to one hundred (100)), the higher the compatibility. The scores are pre-classified into predefined ranges that indicate the level of compatibility between two systems 16. Each range maps to a corresponding colour or shade for display in the map 32. For example, the following ranges and colour codes can be used: score = 100, 100% compatible, dark green; score = 75-99, highly compatible, green; score = 50-74, somewhat compatible, yellow; score = 25-49, low compatibility, orange; and score = 0-24, incompatible, red.
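A small Python sketch of the example score-to-colour classification listed above follows; the thresholds mirror the example ranges and, as noted in the next paragraph, would be adjustable in practice.

    # Classify a compatibility score into the example colour bands above.
    def classify(score):
        if score == 100:
            return "dark green (100% compatible)"
        if score >= 75:
            return "green (highly compatible)"
        if score >= 50:
            return "yellow (somewhat compatible)"
        if score >= 25:
            return "orange (low compatibility)"
        return "red (incompatible)"

    print(classify(100), classify(82), classify(26), sep=" | ")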
[00305] The above ranges are only one example. Preferably, the ranges can be adjusted to reflect more conservative and less conservative views on the compatibility results. The ranges can be adjusted using a graphical tool similar to a contrast slider used in graphics programs. Adjustment of the slider would correspondingly adjust the ranges and in turn the colours. This allows the results to be tailored to a specific situation.
[00306] It is therefore seen that the graphical output of the map 32 provides an intuitive mapping between the source/target pairs in the environment 12 to assist in visualizing where compatibilities exist and do not exist. In Figure 4, it can be seen that a system pair having a score = 100 indicates complete compatibility between the two systems 16 for the particular strategy being observed, e.g. based on a chosen rule set(s) 28. It can also be seen that a system pair with a relatively lower score such as 26 is relatively less compatible for the strategy being observed.
[00307] The detailed differences shown in Figure 19 can be viewed by clicking on the relevant cell 92. Selecting a particular cell 92 accesses the detailed differences table 100 shown in Figure 19, which shows the important differences between the two systems, the rules and weights that were applied, and preferably a remediation cost for making the servers more compatible. As shown in Figure 19, a summary differences table 102 may also be presented when selecting a particular cell 92, which lists the description of the differences and the weight applied for each difference, to give a high level overview of where the differences arise.
System Workload Compatibility Visualization
[00308] An example workload compatibility analysis map 34 is shown in Figure 5. The map 34 is the analog of the map 32 for workload analyses. The map 34 includes a similar graphical display that indicates a score and a colour or shading for each cell to provide an intuitive mapping between candidate source/target server pairs. The workload data is obtained using tools such as the table 76 shown in Figure 8 and corresponds to a particular workload factor, e.g. CPU utilization, network I/O, disk I/O, etc. A high workload score indicates that the candidate server pair being considered has a high compatibility for accommodating the workload on the target system. The specific algorithms used in determining the score are discussed in greater detail below. The servers are listed in the upper row and leftmost column and each cell 92 represents the compatibility of its corresponding server pair in the map. It can be appreciated that a relatively high score in a particular cell 92 indicates a high workload compatibility for consolidating to the target server, and likewise, relatively lower scores, e.g. 42, indicate lower workload compatibility for a particular system pair.
[00309] The workload analysis details shown in Figure 22 can be viewed by clicking on the relevant cell 92. Selecting a particular cell 92 accesses the information about the workload analysis that generated the score shown in Figure 22, which shows the key stacked workload values, the workload benchmarks that were applied, and preferably workload charts for each system separately and stacked together.
Overall Compatibility Visualization
[00310] An example overall compatibility analysis map 36 is shown in Figure 6. The map 36 comprises a similar arrangement as the maps 32 and 34, which lists the servers in the uppermost row and leftmost column to provide 100% compatibility along the diagonal. Preferably the same scoring and shading convention is used by all types of compatibility maps. The map 36 provides a visual display of scoring for candidate system pairs that considers the rule-based compatibility maps 32 and the workload compatibility maps 34.
[00311] The score provided in each cell 92 indicates the overall compatibility for consolidating systems 16. It should be noted that in some cases two systems 16 can have a high configuration compatibility but a low workload compatibility and thus end up with a reduced or relatively low overall score. It is therefore seen that the map 36 provides a comprehensive score that considers the compatibility of the systems 16 not only at the settings level but also in their utilization. By displaying the configuration maps 32, business maps, workload maps 34 and overall map 36 in a consolidation roadmap, a complete picture of the entire system can be ascertained in an organized manner. The maps 32, 34 and 36 provide a visual representation of the compatibilities and provide an intuitive way to evaluate the likelihood that systems can be consolidated, to analyse compliance and to drive remediation measures to modify systems 16 so that they can become more compatible with other systems 16 in the environment 12. It can therefore be seen that a significant amount of quantitative data can be analysed in a convenient manner using the graphical maps 32, 34 and 36, and associated reports and graphs (described below).
[00312] For example, if a system pair is not compatible only for the reason that certain critical software upgrades have not been implemented, this information can be uncovered by the map 32, and then investigated, so that the upgrades can be implemented, referred to herein as remediation. Remediation can be determined by modeling the cost of implementing the upgrades, fixes, etc. that are defined in the rule sets. If remediation is then implemented, a subsequent analysis may then show the same server pair to be highly compatible and thus suitable candidates for consolidation.
[00313] The overall analysis details 98 shown in Figure 23 can be viewed by clicking on the relevant cell 92. Selecting a particular cell 92 accesses the information about the rule-based and workload analyses that generated the score shown in Figure 23, which shows the key differences and stacked workload values and charts.
Sorting Examples
[00314] The maps 32, 34 and 36 can be sorted in various ways to convey different information. For example, sorting algorithms such as a simple row sort, a simple column sort and a sorting by group can be used.
[00315] A simple row sort involves computing the total scores for each source system (by row), and subsequently sorting the rows by ascending total scores. In this arrangement, the highest total scores are indicative of source systems that are the best candidates to consolidate onto other systems.
[00316] A simple column sort involves computing the total scores for each target system (by column) and subsequently sorting the columns by ascending total score. In this arrangement, the highest total scores are indicative of the best consolidation target systems.
[00317] Sorting by group involves computing the difference between each system pair, and arranging the systems to minimize the total difference between each pair of adjacent systems in the map. The difference between a system pair can be computed by taking the square root of the sum of the squares of the differences of the pair's individual compatibility scores against each other system in the analysis. In general, the smaller the total difference between two systems, the more similar the two systems are with respect to their compatibility with the other systems. The group sort promotes the visualization of the logical breakdown of an environment by producing clusters of compatible systems 16 around the map diagonal. These clusters are indicative of compatible regions in the environment 12. In virtualization analysis, these are often referred to as "affinity regions."
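One reading of the group-sort distance described above is sketched below in Python; whether the pair's scores against each other are included is not stated, so this sketch compares the pair only against the remaining systems (an assumption), and the score map is hypothetical.

    import math

    # score_map[x][y] is the compatibility score of source x onto target y.
    def group_distance(score_map, a, b):
        others = [s for s in score_map if s not in (a, b)]
        return math.sqrt(sum((score_map[a][s] - score_map[b][s]) ** 2 for s in others))

    scores = {
        'A': {'A': 100, 'B': 90, 'C': 20},
        'B': {'A': 85, 'B': 100, 'C': 25},
        'C': {'A': 30, 'B': 20, 'C': 100},
    }
    print(group_distance(scores, 'A', 'B'))  # 5.0  - A and B score the other systems similarly
    print(group_distance(scores, 'A', 'C'))  # 70.0 - A and C differ, so they would not be adjacent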
Analysis Results - User Interaction
[00318] It can also be seen that users can customize and interact with the analysis program 10 during the analysis procedure to sort map scores, modify colour coding (as discussed above), show/specify source/targets, adjust weights and limits etc., and to show workload charts. This interaction enables the user to modify certain parameters in the analysis to take into account differences in objectives and different environments to provide a more accurate analysis.
Multi-Dimensional Compatibility Analysis
[00319] The high level process flow of the multi-dimensional compatibility analysis is illustrated in Figure 24(a). In addition to the common compatibility analysis input, this analysis takes a consolidation solution as input. In contrast to the 1-to-1 compatibility analysis that evaluates the compatibility of each system pair, this multi-dimensional compatibility analysis evaluates the compatibility of each transfer set 23 specified in the consolidation solution.
[00320] The multi-dimensional compatibility analysis extends the original 1-to-1 compatibility analysis that assessed the transfer of a single source entity to a target. As with the 1-to-1 compatibility analysis, the multi-dimensional analysis produces an overall compatibility scorecard 98 based on technical, business and workload constraints. Technical and business compatibility are evaluated through one or more rule sets 28. Workload compatibility is assessed through one or more workload types 30.
[00321] This produces multi-dimensional compatibility analysis results, which include multi-dimensional compatibility scores, maps and details based on the proposed transfer sets 23.
[00322] For each transfer set 23, a compatibility score is computed for each rule set 28 and workload type 30. An overall compatibility score for the transfer set 23 is then derived from the individual scores. For example, consider an analysis comprised of 20 systems, 3 rule sets, 2 workload types and 5 transfer sets 23:
• Systems: S1, S2, S3, ... S20
• Analyzed with rule sets: R1, R2, R3
• Analyzed with workload types: W1, W2
• Transfer sets:
  o T1 (S1, S2, S3 stacked onto S4)
  o T2 (S5, S6, S7, S8, S9 stacked onto S10)
  o T3 (S11, S12, S13 stacked onto S14)
  o T4 (S15 stacked onto S16)
  o T5 (S17 stacked onto S18)
• Unaffected systems: S19, S20
[00323] For the above example, the multi-dimensional compatibility analysis would comprise 5 overall compatibility scores (one for each transfer set), 15 rule-based compatibility scores (5 transfer sets x 3 rule sets), and 10 workload compatibility scores (5 transfer sets x 2 workload types).
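A minimal sketch of how the transfer sets in this example could be represented, and of the resulting score counts, is shown below; the dictionary layout is an assumption for illustration, not the data model of the disclosure.

    transfer_sets = {
        'T1': {'target': 'S4',  'sources': ['S1', 'S2', 'S3']},
        'T2': {'target': 'S10', 'sources': ['S5', 'S6', 'S7', 'S8', 'S9']},
        'T3': {'target': 'S14', 'sources': ['S11', 'S12', 'S13']},
        'T4': {'target': 'S16', 'sources': ['S15']},
        'T5': {'target': 'S18', 'sources': ['S17']},
    }
    rule_sets = ['R1', 'R2', 'R3']
    workload_types = ['W1', 'W2']

    print(len(transfer_sets))                        # 5 overall compatibility scores
    print(len(transfer_sets) * len(rule_sets))       # 15 rule-based compatibility scores
    print(len(transfer_sets) * len(workload_types))  # 10 workload compatibility scores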
[00324] The systems 16 referenced in the transfer sets 23 of the consolidation solution correspond to the systems 16 specified in the analysis input. Typically, the consolidation solution is manually specified by the user, but may also be based on the consolidation analysis, as described later.
[00325] In addition to evaluating the compatibility of the specified transfer sets, the compatibility analysis can evaluate the incremental effect of adding other source systems (specified in the analysis input) to the specified transfer sets. From the above example consisting of systems S1 to S20, the compatibility of the source systems S5 to S20 can be individually assessed against the transfer set T1. Similarly, the compatibility of the source systems S1 to S4 and S11 to S20 can be assessed with respect to the transfer set T2.
[00326] Similar to the 1-to-1 compatibility analysis, this analysis involves 4 stages. The first stage gets the system data 18 required for the analysis to produce the analysis data snapshot. The second stage performs a multi-dimensional compatibility analysis for each rule set 28 for each transfer set 23. Next, the workload compatibility analysis is performed for each workload type 30 for each transfer set 23. Finally, these analysis results are combined to determine the overall compatibility of each transfer set.
[00327] The multi-dimensional rule-based compatibility analysis differs from the 1-to-1 compatibility analysis in that, since a transfer set can include multiple sources (N) to be transferred to the target, the analysis may evaluate the compatibility of sources amongst each other (N-by-N) as well as each source against the target (N-to-1), as will be explained in greater detail below. The multi-dimensional workload and overall compatibility analysis algorithms are analogous to their 1-to-1 analysis counterparts.
Multi-dimensional Rule-based Compatibility Analysis
[00328] To assess the compatibility of transferring multiple source entities (N) to a target (1), the rule-based analysis can compute a compatibility score based on a combination of N-to-1 and N-by-N compatibility analyses. An N-to-1 intercompatibility analysis assesses each source system against the target. An N-by-N intracompatibility analysis evaluates each source system against each of the other source systems. This is illustrated in a process flow diagram in Figure 24(b).
[00329] Criteria used to choose when to employ an N-to-1, N-by-N or both compatibility analyses depend upon the target type (concrete or malleable), consolidation strategy (stacking or virtualization), and the nature of the rule item.
[00330] Concrete target models are assumed to be rigid with respect to their configurations and attributes, such that source entities to be consolidated are assumed to be required to conform to the target. To assess transferring source entities onto a concrete target, the N-to-1 inter-compatibility analysis is performed. Alternatively, malleable target models are generally adaptable in accommodating source entities to be consolidated. To assess transferring source entities onto a malleable target, the N-to-1 inter-compatibility analysis can be limited to the aspects that are not malleable.
[00331] When stacking multiple source entities onto a target, the source entities and targets coexist in the same operating system environment. Because of this inherent sharing, there is little flexibility in accommodating individual application requirements, and thus the target is deemed to be concrete. As such, the multi-dimensional analysis considers the N-to-1 inter-compatibility between the source entities and the target as the primary analysis mechanism, but, depending on the rule sets in use, may also consider the N-by-N intra-compatibility of the source entities amongst each other.
[00332] When virtualizing multiple source entities onto a target, the source entities are often transferred as separate virtual images that run on the target. This means that there is high isolation between operating system-level parameters, and causes virtualization rule sets to generally ignore such items. What is relevant, however, is the affinity between systems at the hardware, storage and network level, and it is critical to ensure that the systems being combined are consistent in this regard. In general, this causes the multi-dimensional analysis to focus on the N-by-N compatibility within the source entities, although certain concrete aspects of the target systems (such as processor architecture) may still be subjected to (N-to-1) analysis.
N-to-1 Intercompatibility Score Calculation
[00333] N-to-1 intercompatibility scores reflect the compatibility between N source entities and a single target as defined by a transfer set 23, as shown in Figure 24(c). This analysis is performed with respect to a given rule set and involves: 1) separately evaluating each source entity against the target with the rule set to compile a list of the union of all matched rule items; 2) for each matched rule item, using the rule item's mutex (mutually exclusive) flag to determine whether to count duplicate matched rule items once or multiple times; and 3) computing the score based on the product of all the penalty weights associated with the valid matched rule items:
[00334] S = 100 * (1 - w1) * (1 - w2) * (1 - w3) * ... * (1 - wn)
[00335] where S is the score and wi is the penalty weight of the ith matched item.
N-to-1 Score Example
[00336] For example, assuming a transfer set T1 comprises systems S1, S2 and S3 stacked onto S16, the union of matched rule items is based on evaluating S1 against S16, S2 against S16 and S3 against S16. Assuming this analysis produces a list of matched items comprising those shown below in Table 14:

#  Source  Target  Rule Item                   Source Value  Target Value  Mutex  Weight
1  S1      S16     Different Patch Levels      SP2           SP3           Y      0.03
2  S1      S16     Different Default Gateways  10.0.0.1      192.168.0.1   N      0.02
3  S2      S16     Different Patch Levels      SP2           SP3           Y      0.03
4  S3      S16     Different Patch Levels      SP4           SP3           Y      0.03
5  S3      S16     Different Default Gateways  10.0.0.1      192.168.0.1   N      0.02
6  S3      S16     Different Boot Settings     TRUE          FALSE         N      0.01

Table 14: Matched Items - Multi-Dimensional N-to-1 example
[00337] Although the target and source values vary, items 1, 3 and 4 apply to the same rule item and are treated as duplicates due to the enabled mutex flag, so that the penalty weight is applied only once. Items 2 and 5 apply to the same rule item and are exact duplicates (same values), so the penalty weight is applied only once, even though the mutex flag is not enabled for this item. As such, the compatibility score is computed as follows:
[00338] S = 100 * (1 - 0.03) * (1 - 0.02) * (1 - 0.01) = 94
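The scoring and de-duplication just illustrated can be sketched as follows in Python; the data layout is an assumption, and the treatment of non-mutex exact duplicates follows the reading given in the example above.

    # matched_items: (rule item, source value, target value, mutex flag, weight)
    def n_to_1_score(matched_items):
        seen = set()
        score = 100.0
        for rule, src, tgt, mutex, weight in matched_items:
            # Mutex items are counted once per rule item; otherwise only exact
            # duplicates (same rule item and same values) are counted once.
            key = rule if mutex else (rule, src, tgt)
            if key in seen:
                continue
            seen.add(key)
            score *= (1 - weight)
        return score

    # The matched items of Table 14 (S1, S2 and S3 evaluated against S16):
    matches = [
        ('Different Patch Levels',     'SP2',      'SP3',         True,  0.03),
        ('Different Default Gateways', '10.0.0.1', '192.168.0.1', False, 0.02),
        ('Different Patch Levels',     'SP2',      'SP3',         True,  0.03),
        ('Different Patch Levels',     'SP4',      'SP3',         True,  0.03),
        ('Different Default Gateways', '10.0.0.1', '192.168.0.1', False, 0.02),
        ('Different Boot Settings',    'TRUE',     'FALSE',       False, 0.01),
    ]
    print(round(n_to_1_score(matches)))  # 94, matching the example above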
N-by-N Intracompatibility Score Calculation
[00339] N-by-N intracompatibility scores reflect the compatibility amongst N source entities with respect to a given rule set, as shown in Figure 24(d). This analysis involves: 1) separately evaluating each source entity against the other source entities with the rule set to compile a list of the union of all matched rule items; 2) for each matched rule item, using the rule item's mutex (mutually exclusive) flag to determine whether to count duplicate matched rule items once or multiple times; and 3) computing the score based on the product of all the penalty weights associated with the valid matched rule items:
[00340] S = 100 * (1 - w1) * (1 - w2) * (1 - w3) * ... * (1 - wn)
[00341] where S is the score and wi is the penalty weight of the ith matched item.
N-by-N Score Example
[00342] For example, assuming a transfer set T1 comprises systems S1, S2 and S3 stacked onto S16, the union of matched rule items is based on evaluating S1 against S2, S2 against S1, S2 against S3, S3 against S2, S1 against S3 and S3 against S1. Assuming this analysis produces a list of matched items comprising those shown below in Table 15:

#   Source  Target  Rule Item                   Source Value  Target Value  Mutex  Weight
1   S1      S3      Different Patch Levels      SP2           SP4           Y      0.03
2   S3      S1      Different Patch Levels      SP4           SP2           Y      0.03
3   S2      S3      Different Patch Levels      SP2           SP4           Y      0.03
4   S3      S2      Different Patch Levels      SP4           SP2           Y      0.03
5   S1      S2      Different Default Gateways  10.0.0.1      192.168.0.1   N      0.02
6   S2      S1      Different Default Gateways  192.168.0.1   10.0.0.1      N      0.02
7   S3      S2      Different Default Gateways  10.0.0.1      192.168.0.1   N      0.02
8   S2      S3      Different Default Gateways  192.168.0.1   10.0.0.1      N      0.02
9   S1      S3      Different Boot Settings     TRUE          FALSE         N      0.01
10  S3      S1      Different Boot Settings     FALSE         TRUE          N      0.01
11  S2      S3      Different Boot Settings     TRUE          FALSE         N      0.01
12  S3      S2      Different Boot Settings     FALSE         TRUE          N      0.01

Table 15: Matched Items - Multi-Dimensional N-by-N example
[00343] Items 1-4, 5-8 and 9-12, respectively, are duplicates as they apply to the same rule items and have the same values. The compatibility score is computed as follows:
[00344] S = 100 * (1 - 0.03) * (1 - 0.02) * (1 - 0.01) = 94
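A short Python sketch of the N-by-N evaluation order is given below; it assumes the ordered-pair enumeration described in the example and would reuse the same de-duplicated scoring as the N-to-1 sketch above.

    from itertools import permutations

    # Ordered (source, target) pairs amongst the source entities of a transfer set.
    def n_by_n_pairs(sources):
        return list(permutations(sources, 2))

    print(n_by_n_pairs(['S1', 'S2', 'S3']))
    # [('S1','S2'), ('S1','S3'), ('S2','S1'), ('S2','S3'), ('S3','S1'), ('S3','S2')]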
Multi-dimensional Workload Compatibility Analysis
[00345] A procedure for stacking the workload of multiple source systems on a target system is shown in Figure 25. The multi-stacking procedure considers the workload limits that are specified using the program 150, the per-system workload benchmarks (e.g. CPU power), and the data snapshot containing the workload data for the source and target systems 16 that comprise the transfer sets 23 to analyze. The analysis may evaluate transfer sets 23 with any number of sources stacked on a target for more than one workload type 30.
[00346] For each workload type 30, each transfer set 23 is evaluated. For each source in the transfer set 23, the system benchmarks are used to normalize the workloads as discussed above, and the source workload is stacked on the target system. Once every source in the set is stacked on the target system, the workload compatibility score is computed as discussed above. The above is repeated for each transfer set 23. A multi-stack report may then be generated, which gives a workload compatibility scorecard for the transfer sets along with workload compatibility scoring details and normalized multi-stacked workload charts.
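A minimal Python sketch of the multi-stacking step is shown below; it simplifies the per-interval histogram stacking to a single utilization value per system, which is an assumption made for brevity, and the values are illustrative.

    # Each source workload is normalized to the target via the benchmark ratio
    # and added to the target's own workload.
    def multi_stack(target_util, target_bench, sources):
        # sources: list of (utilization %, benchmark) for each source system
        stacked = target_util
        for util, bench in sources:
            stacked += util * bench / target_bench
        return stacked

    # Three sources stacked onto a target with benchmark 400:
    print(multi_stack(15, 400, [(10, 200), (20, 100), (30, 400)]))  # 15+5+5+30 = 55.0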
[00347] Sample multi-dimensional compatibility analysis results are shown in Figures 26 to 28. Figure 26 shows a compatibility score map 110 with the analyzed transfer sets. Figures 27 and 28 show the summary 112 and charts 114 from the multi-stack workload compatibility analysis details.
Consolidation Analysis
[00348] The consolidation analysis process flow is illustrated as D in Figure 9. Using the common compatibility analysis input and additional auto fit inputs, this analysis seeks the consolidation solution that maximizes the number of transfers while still fulfilling the several pre-defined constraints. The consolidation analysis repeatedly employs the multi-dimensional compatibility analysis to assess potential transfer set candidates. The result of the consolidation analysis comprises the consolidation solution and the corresponding multi-dimensional compatibility analysis.
[00349] A process flow of the consolidation analysis is shown in Figure 29.
[00350] The auto fit input includes the following parameters: transfer type (e.g. virtualize or stacking), minimum allowable overall compatibility score for proposed transfer sets, minimum number of source entities to transfer per target, maximum number of source entities to transfer per target, and quick vs. detailed search for the best fit. Target systems can also be designated as malleable or concrete models.
[00351] As part of a compatibility analysis input specification, systems can be designated for consideration as a source only, as a target only, or as either a source or a target. These designations serve as constraints when defining transfers in the context of a compatibility analysis. The analysis can be performed on an analysis with pre-existing source-target transfers. Analyses containing systems designated as source-only or target-only (and no source or target designations) are referred to as a "directed analysis."
[00352] The same transfer type may be assumed for all automatically determined transfers within an analysis. The selected transfer type affects how the compatibility analysis is performed. The minimum overall compatibility score dictates the lowest allowable score (sensitivity) for the transfer sets to be included in the consolidation solution. Lowering the minimum allowable score permits a greater degree of consolidation and potentially more transfers. The minimum and maximum limits for source entities to be transferred per target (cardinality) define additional constraints on the consolidation solution. The quick search performs a simplified form of the auto fit calculation, whereas the detailed search performs a more exhaustive search for the optimal solution. This distinction is provided for quick assessments of analyses containing a large number of systems to be analyzed.
[00353] The transfer auto fit problem can be considered as a significantly more complex form of the classic bin packing problem. The bin packing problem involves packing objects of different volumes into a finite number of bins of varying volumes in a way that minimizes the number of bins used. The transfer auto fit problem involves transferring source entities onto a finite number of targets in a way that maximizes the number of transfers. The basis by which source entities are assessed to "fit" onto targets is the highly nonlinear compatibility scores of the transfer sets. As a further consideration, which can increase complexity, some entities may be either sources or targets. The auto fit problem is a combinatorial optimization problem that is computationally expensive to solve through a brute force search of all possible transfer set permutations. Although straightforward to implement, this exhaustive algorithm is impractical due to its excessive computational and resource requirements for medium to large data sets. Consequently, this class of problem is most efficiently solved through heuristic algorithms that yield good but likely suboptimal solutions.
[00354] There are four variants of the heuristic auto fit algorithm that searches for the best consolidation solution:
[00355] Quick Stack - quick search for a stacking-based consolidation solution;
[00356] Detailed Stack - more comprehensive search for a stacking-based consolidation solution;
[00357] Quick Virtualization - quick search for a virtualization-based consolidation solution; and
[00358] Detailed Virtualization - more comprehensive search for a virtualization-based consolidation solution.
[00359] The auto fit algorithms are iterative and involve the following common phases:
Compile Valid Source and Target Candidates
[00360] The initial phase filters the source and target lists by eliminating invalid entity combinations whose 1-to-1 compatibility scores are less than the minimum allowable compatibility score. It also filters out entity combinations based on the source-only or target-only designations.
Set up Auto Fit Parameters
[00361] The auto fit algorithm search parameters are then set up. The parameters can vary for each algorithm. Example search parameters include the order by which sources and targets are processed and the criteria for choosing the best transfer set 23.
Compile Candidate Transfer Sets
[00362] The next phase compiles a collection of candidate transfer sets 23 from the available pool of sources and targets. The candidate transfer sets 23 fulfill the auto fit constraints (e.g. minimum allowable score, minimum transfers per transfer set, maximum transfers per transfer set). The collection of candidate transfer sets may not represent a consolidation solution (i.e. referenced sources and targets may not be mutually exclusive amongst transfer sets 23). The algorithms vary in the criteria employed in composing the transfer sets. In general, the detailed search algorithms generate more candidate transfer sets than the quick searches in order to assess more transfer permutations.
Choose Best Candidate Transfer Set
[00363] The next phase compares the candidate transfer sets 23 and chooses the "best" transfer set 23 amongst the candidates. The criteria employed to select the best transfer set 23 vary amongst the algorithms. Possible criteria include the number of transfers, the compatibility score, the general compatibility of the entities referenced by the set, and whether the transfer set target is target-only.
Add Transfer Set to Consolidation Solution
[00364] Once a transfer set is chosen, it is added to the intermediate consolidation solution. The entities referenced by the transfer set are removed from the list of available sources and targets and the three preceding phases are repeated until the available sources or targets are consumed.
Compile Consolidation Solution Candidates
[00365] Once all the sources or targets are consumed or ruled out, the consolidation solution is considered complete and added to a list of candidate solutions. Additional consolidation solutions can be compiled by iterating from the second phase with variations to the auto fit parameters for compiling and choosing candidate transfer sets.
Choose Best Consolidation Solution
[00366] The criteria used to stop compiling additional solutions can be based on detecting that the solution is converging, or on reaching a pre-defined maximum number of iterations.
[00367] Finally, the best candidate consolidation solution can be selected based on some criteria such as the largest reduction of systems with the highest average transfer set scores. The general algorithm is shown in the flow diagram depicted in Figure 30.
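Under the assumption of a greedy strategy, the common phases above can be sketched in Python as the following loop; compile_candidates() is a placeholder for the candidate-compilation and multi-dimensional scoring steps, and the selection criterion shown is only one of the possibilities listed above.

    # Repeatedly compile candidate transfer sets from the remaining sources and
    # targets, keep the best one, and remove its entities from the pool.
    def auto_fit(sources, targets, compile_candidates):
        solution = []
        sources, targets = set(sources), set(targets)
        while sources and targets:
            candidates = compile_candidates(sources, targets)
            if not candidates:
                break
            # One possible criterion: most transfers, then highest score.
            best = max(candidates, key=lambda t: (len(t['sources']), t['score']))
            solution.append(best)
            targets.discard(best['target'])
            sources -= set(best['sources'])
        return solution

    # Toy usage with a stubbed candidate compiler (one candidate per target):
    def stub_compiler(sources, targets):
        srcs = sorted(sources)
        return [{'target': t, 'sources': srcs[:2], 'score': 85} for t in sorted(targets)]

    print(auto_fit({'S1', 'S2', 'S3'}, {'S16', 'S17'}, stub_compiler))
    # Two transfer sets: S1 and S2 onto S16, then S3 onto S17.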
Auto Fit Example
[00368] The following example demonstrates how a variant of the auto fit algorithm searches for a consolidation solution for a compatibility analysis comprised of 20 systems (S1, S2, S3, ... S20) where 15 systems (S1-15) have been designated to be source-only and 5 (S16-20) to be target-only. For this example, the auto fit input parameters are: 1) Transfer type: stacking; 2) Minimum allowable compatibility score: 80; 3) Minimum sources per target: 1; 4) Maximum sources per target: 5; and 5) Search type: quick.
[00369] The auto fit would be performed as follows:
1 - Compile Valid Source and Target Candidates
[00370] For each target (S16-S20), compile the list of possible sources. Since some of the 1-to-1 source-target compatibility scores are less than 80 (the specified minimum allowable score), the following source-target candidates are found, as shown in Table 16.

Target  Sources (1-to-1 scores)
S16     S1 (100), S2 (95), S3 (90), S4 (88), S5 (88), S6 (85), S7 (82), S8 (81)
S17     S1, S2, S3, S5, S6, S7, S8, S9, S10
S18     S1, S2, S3, S6, S7, S8, S9, S10, S11
S19     S9, S10, S11, S12, S13, S14, S15
S20     S9, S10, S11, S12, S13, S14, S15

Table 16: Target-Source Candidates
2 - Setup Auto Fit Parameters
[00371] The auto fit search parameters initialized for this iteration assume the following: 1) When compiling candidate transfer sets 23, sort the source systems in descending order when stacking onto the target; and 2) When choosing the best transfer set 23, choose the set with the most transfers and, if there is a tie, choose the set 23 with the higher score.
3 - Compile Candidate Transfer Sets
[00372] The candidate transfer sets 23 are then compiled from the source-target candidates. For each target, the candidate sources are sorted in descending order and are incrementally stacked onto the target. The transfer set score is computed as each source is stacked and, if the score is above the minimum allowable score, the source is added to the transfer set 23. If the score is below the minimum allowable, the source is not included in the set. This process is repeated until all the sources have been attempted or the maximum number of sources per target has been reached.
[00373] For this quick search algorithm, only a single pass is performed for each target so that only one transfer set candidate is created per target. Other search algorithms can perform multiple passes per target to assess more transfer permutations. The candidate transfer sets 23 are shown below in Table 17.
Transfer Set  Target  Sources                      # Sources  Score
T1            S16     S1, S2, S3, S4, S5           5          82
T2            S17     S1, S2, S3, S5               4          81
T3            S18     S1, S2, S3, S6, S7, S10      6          85
T4            S19     S9, S10, S11, S12, S13, S14  6          83
T5            S20     S9, S10, S11, S12, S13, S14  6          84

Table 17: Candidate Transfer Sets
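A minimal Python sketch of the single-pass candidate compilation described in this step is given below; score_transfer_set() stands in for the multi-dimensional compatibility analysis, and the toy scoring model is purely illustrative.

    # Try candidate sources in order, keeping each one only if the transfer-set
    # score (recomputed after each stack) stays at or above the minimum score.
    def compile_transfer_set(target, candidate_sources, score_transfer_set,
                             min_score, max_sources):
        kept = []
        for source in candidate_sources:   # e.g. sorted descending by 1-to-1 score
            if len(kept) >= max_sources:
                break
            if score_transfer_set(target, kept + [source]) >= min_score:
                kept.append(source)
        return kept

    # Toy scoring model: each stacked source costs 7 points off a perfect 100.
    score = lambda target, sources: 100 - 7 * len(sources)
    print(compile_transfer_set('S16', ['S1', 'S2', 'S3', 'S4'], score, 80, 5))
    # ['S1', 'S2'] - a third source would drop the score to 79, below the minimum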
4 - Choose Best Candidate Transfer Set
[00374] The transfer set T3 is chosen as the best transfer set from the candidates. For this example, the criteria are the greatest number of sources and, if tied, the highest score.
5 - Add Transfer Set to Consolidation Solution
[00375] The transfer set T3 is added to the consolidation solution, and the entities referenced by this transfer set are removed from the target-source candidates list. The updated target-source candidates are shown below in Table 18.

Target  Sources (1-to-1 scores)
S16     S4, S5, S8
S17     S5, S8, S9
S19     S9, S11, S12, S13, S14, S15
S20     S9, S11, S12, S13, S14, S15

Table 18: Updated Target-Source Candidates
[00376] Since there are available target-source candidates, another iteration to compile candidate transfer sets and choose the best set is performed. These iterations are repeated until there are no more target-source candidates, at which time a consolidation solution is considered complete. The consolidation solution candidates are shown below in Table 19.

Target  Sources                  # Sources  Score
S16     S4, S5, S8               3          86
S18     S1, S2, S3, S6, S7, S10  6          85
S19     S9, S11, S12, S15        6          83
S20     S13, S14                 2          87

Table 19: Consolidation Solution Candidates
6 - Compile Consolidation Solution Candidates
[00377] The consolidation solution is then saved and, if warranted by the selected algorithm, another auto fit can be performed with different search parameters (e.g. sort the source systems in ascending order before stacking onto the target) to generate another consolidation solution.
7 - Choose Best Consolidation Solution
[00378] The consolidation solution candidates are compared and the best one is chosen based on some pre-defined criteria. A sample consolidation solution summary 116 is shown in Figure 31.
Example Compatibility Analysis
[00379] Compatibility and consolidation analyses are described by way of example only below to illustrate an end-to-end data flow in conducting complete analyses for an arbitrary environment 12. These analyses are performed through the web client user interface 74. These examples assume that the requisite system data 18 for the analyses have already been collected and loaded into the data repository 54 and caches 56 and 58.
1-to-1 Compatibility Analysis Example
[00380] This type of analysis typically involves the following steps: 1) Create a new analysis in the desired analysis folder; 2) Specify the mandatory analysis input; 3) Optionally, adjust analysis input parameters whose default values are not desired; 4) Run the analysis; and 5) View the analysis results.
[00381] The analysis can be created in an analysis folder on a computer 14, to help organize the analyses. Figure 32 shows an example analysis folder hierarchy containing existing analyses.
[00382] An analysis is created through a web page dialog as shown in the screen shot in Figure 33. The analysis name is preferably provided along with an optional description. Other analysis inputs comprise a list of systems 16 to analyze and one or more rule sets 28 and/or workload types 30 to apply to evaluate the 1-to-1 compatibility of the systems 16.
[00383] In this example, two rule sets 28 and one workload type 30 are selected. The following additional input may also be specified if the default values are not appropriate: 1) Adjustment of one or more rule weights, disabling of rules, or modification of remediation costs in one or more of the selected rule sets; 2) Adjustment of the workload data date range; 3) Adjustment of one or more workload limits of the selected workload types; 4) Selection of different workload stacking and scoring parameters; and 5) Changing the importance factors for computing the overall scores. Figures 34, 35 and 36 show the pages used to customize rule sets 28, workloads 30 and importance factors 88, respectively. Once the analysis input is specified, the analysis can be executed. The analysis results can be viewed through the web client user interface 44/74. The results include the overall 1-to-1 compatibility map and one map for each rule set 28 and workload type 30. Such compatibility maps and corresponding analysis map details are shown in Figures 37 to 45.
Multi-dimensional Compatibility Analysis Example
[00384] Continuing with the above example, one or more transfer sets 23 can be defined and the transfer sets 23 evaluated through a multi-dimensional compatibility analysis. The transfer sets 23 can be defined through the user interface, and are signified by one or more arrows that connect the source to the target. The color of the arrow can be used to indicate the transfer type (see Figure 46). Once the transfer sets 23 have been defined, the net effect mode may be selected to run the multi-dimensional analysis. In the resulting compatibility maps, the score in the cells that comprise the transfer set 23 reflects the multi-dimensional compatibility score (see Figure 47).
[00385] The corresponding overall compatibility details report is shown in Figure 48. Note that there are two sources transferred to the single target.
Consolidation Analysis Example
[00386] Again continuing from the above example, a consolidation analysis may be performed to search for a consolidation solution by automating the analysis of the multi-dimensional scenarios. The input screen for the consolidation analysis is shown in Figure 49. Users can specify several parameters including the minimum allowable overall compatibility score and the transfer type. As well, users can choose to keep or remove existing transfer sets before performing the consolidation analysis.
[00387] Once specified, the consolidation analysis can be executed and the chosen transfer sets 23 are presented in the map as shown in Figure 50. The multi-dimensional compatibility score calculations are also used. Finally, a consolidation summary is provided as shown in Figures 51 and 52. Figure 51 shows that, should the proposed transfers be applied, 28 systems would be consolidated to 17 systems, resulting in a 39% reduction of systems. Figure 52 lists the actual transfers proposed by the consolidation analysis.
Commentary
[00388] Accordingly, the compatibility and consolidation analyses can be performed on a collection of systems to 1) evaluate the 1-to-1 compatibility of every source-target pair, 2) evaluate the multi-dimensional compatibility of specific transfer sets, and 3) determine the best consolidation solution based on various constraints including the compatibility scores of the transfer sets. Though these analyses share many common elements, they can be performed independently.
[00389] These analyses are based on collected system data related to their technical configuration, business factors and workloads. Differential rule sets and workload compatibility algorithms are used to evaluate the compatibility of systems. The technical configuration, business and workload related compatibility results are combined to create an overall compatibility assessment. These results are visually represented using color coded scorecard maps.
[00390] It will be appreciated that although the system and workload analyses are performed in this example to contribute to the overall compatibility analyses, each analysis is suitable to be performed on its own and can be conducted separately for finer analyses. The finer analysis may be performed to focus on the remediation of only configuration settings at one time and on spreading workload at another time. As such, each analysis and associated map may be generated on an individual basis without the need to perform the other analyses.
[00391] It will be appreciated that each analysis and associated map discussed above may instead be used for purposes other than consolidation, such as capacity planning, regulatory compliance, change, inventory, optimization, administration etc., and any other purpose where compatibility of systems is useful for analyzing systems 16. It will also be appreciated that the program 10 may also be configured to allow user-entered attributes (e.g. location) that are not available via the auditing process and can factor such attributes into the rules and subsequent analysis.
[00392] It will further be appreciated that although the examples provided above are in the context of a distributed system of computer servers, the principles and algorithms discussed are applicable to any system having a plurality of sub-systems where the sub-systems perform similar tasks and thus are theoretically capable of being consolidated. For example, a local network having a number of personal computers (PCs) could also benefit from a consolidation analysis.
[00393] Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto.
Representative Drawing
A single figure which represents the drawing illustrating the invention.
Administrative Status

2024-08-01: As part of the Next Generation Patents (NGP) transition, the Canadian Patents Database (CPD) now contains a more detailed Event History, which replicates the Event Log of our new in-house solution.

Please note that events beginning with "Inactive:" refer to events that are no longer used in our new in-house solution.

For a clearer understanding of the status of the application/patent presented on this page, the site Disclaimer, and the descriptions for Patent, Event History, Maintenance Fee and Payment History should be consulted.

Event History

Description Date
Inactive: IPC assigned 2022-07-11
Inactive: IPC removed 2022-07-11
Inactive: First IPC assigned 2022-07-11
Inactive: IPC expired 2022-01-01
Inactive: IPC removed 2021-12-31
Grant by issuance 2021-09-07
Inactive: Grant downloaded 2021-09-07
Inactive: Grant downloaded 2021-09-07
Letter sent 2021-09-07
Inactive: Cover page published 2021-09-06
Inactive: Final fee received 2021-07-15
Pre-grant 2021-07-15
Inactive: Associate agent removed 2021-07-07
Letter sent 2021-03-25
Notice of allowance is sent 2021-03-25
Inactive: Q2 passed 2021-02-22
Inactive: Approved for allowance (AFA) 2021-02-22
Inactive: Application returned to examiner - correspondence sent 2020-12-15
Withdraw from allowance 2020-12-15
Amendment received - voluntary amendment 2020-12-01
Inactive: Request received: Withdrawal from allowance 2020-12-01
Common representative appointed 2020-11-07
Change of address or method of correspondence request received 2020-10-23
Notice of allowance is sent 2020-08-03
Letter sent 2020-08-03
Notice of allowance is sent 2020-08-03
Letter sent 2020-07-16
Letter sent 2020-07-16
Revocation of agent requirements determined compliant 2020-07-15
Appointment of agent requirements determined compliant 2020-07-15
Inactive: Q2 passed 2020-06-23
Inactive: Approved for allowance (AFA) 2020-06-23
Revocation of agent request 2020-06-10
Appointment of agent request 2020-06-10
Inactive: Associate agent added 2020-04-29
Revocation of agent requirements determined compliant 2020-03-17
Appointment of agent request 2020-03-17
Revocation of agent request 2020-03-17
Appointment of agent requirements determined compliant 2020-03-17
Common representative appointed 2019-10-30
Common representative appointed 2019-10-30
Amendment received - voluntary amendment 2019-10-02
Inactive: S.30(2) Rules - Examiner requisition 2019-04-02
Inactive: Q2 failed 2019-04-01
Examiner's interview 2019-03-06
Amendment received - voluntary amendment 2019-03-06
Inactive: Adhoc request documented 2018-10-18
Amendment received - voluntary amendment 2018-10-18
Inactive: S.30(2) Rules - Examiner requisition 2018-04-18
Inactive: Report - No QC 2018-04-18
Amendment received - voluntary amendment 2017-10-19
Inactive: Adhoc request documented 2017-10-19
Amendment received - voluntary amendment 2017-10-19
Inactive: S.29 Rules - Examiner requisition 2017-04-26
Inactive: S.30(2) Rules - Examiner requisition 2017-04-26
Inactive: Report - No QC 2017-04-24
Inactive: Cover page published 2016-08-03
Letter sent 2016-08-02
Inactive: IPC assigned 2016-07-11
Divisional requirements determined compliant 2016-07-08
Letter sent 2016-07-07
Letter sent 2016-07-07
Letter sent 2016-07-07
Inactive: IPC assigned 2016-07-05
Inactive: First IPC assigned 2016-07-05
Inactive: IPC assigned 2016-07-05
Application received - regular national 2016-06-30
Application received - divisional 2016-06-29
Request for examination requirements determined compliant 2016-06-29
All requirements for examination determined compliant 2016-06-29
Application published (open to public inspection) 2007-11-01

Abandonment History

There is no abandonment history.

Maintenance Fees

The last payment was received on 2021-03-23

Note: If the full payment has not been received on or before the date indicated, a further fee may be required which may be one of the following:

  • the reinstatement fee;
  • the late payment fee; or
  • the additional fee to reverse a deemed expiry.

Please refer to the CIPO Patent Fees web page to see all current fee amounts.

Payment History

Fee Type Anniversary Due Date Date Paid
MF (application, 3rd anniv.) - standard 03 2010-04-23 2016-06-29
MF (application, 2nd anniv.) - standard 02 2009-04-23 2016-06-29
MF (application, 8th anniv.) - standard 08 2015-04-23 2016-06-29
MF (application, 6th anniv.) - standard 06 2013-04-23 2016-06-29
MF (application, 9th anniv.) - standard 09 2016-04-25 2016-06-29
MF (application, 5th anniv.) - standard 05 2012-04-23 2016-06-29
MF (application, 7th anniv.) - standard 07 2014-04-23 2016-06-29
Registration of a document 2016-06-29
MF (application, 4th anniv.) - standard 04 2011-04-26 2016-06-29
Request for examination - standard 2016-06-29
Filing fee - standard 2016-06-29
MF (application, 10th anniv.) - standard 10 2017-04-24 2017-03-22
MF (application, 11th anniv.) - standard 11 2018-04-23 2018-03-14
MF (application, 12th anniv.) - standard 12 2019-04-23 2019-04-09
MF (application, 13th anniv.) - standard 13 2020-04-23 2020-03-13
Registration of a document 2020-06-30
2020-12-01 2020-12-01
MF (application, 14th anniv.) - standard 14 2021-04-23 2021-03-23
Excess pages (final fee) 2021-07-26 2021-07-15
Final fee - standard 2021-07-26 2021-07-15
MF (patent, 15th anniv.) - standard 2022-04-25 2022-03-23
MF (patent, 16th anniv.) - standard 2023-04-24 2023-03-30
MF (patent, 17th anniv.) - standard 2024-04-23 2024-03-20
Owners on Record

The current and past owners on record are shown in alphabetical order.

Current Owners on Record
CIRBA IP INC.
Past Owners on Record
ANDREW D. HILLIER
TOM YUYITUNG
Past owners that do not appear in the "Owners on Record" listing will appear in other documentation within the application.
Documents



Description du
Document 
Date
(aaaa-mm-jj) 
Nombre de pages   Taille de l'image (Ko) 
Dessins 2016-06-29 54 4 377
Description 2016-06-29 66 2 787
Abrégé 2016-06-29 1 20
Revendications 2016-06-29 4 127
Dessin représentatif 2016-08-03 1 34
Page couverture 2016-08-03 1 63
Revendications 2017-10-19 4 128
Revendications 2018-10-18 4 155
Revendications 2019-03-06 4 157
Revendications 2019-10-02 4 131
Revendications 2020-12-01 15 565
Dessin représentatif 2021-08-06 1 25
Page couverture 2021-08-06 1 59
Paiement de taxe périodique 2024-03-20 32 1 329
Courtoisie - Certificat d'enregistrement (document(s) connexe(s)) 2016-07-07 1 102
Accusé de réception de la requête d'examen 2016-07-07 1 176
Courtoisie - Certificat d'enregistrement (document(s) connexe(s)) 2016-07-07 1 102
Avis du commissaire - Demande jugée acceptable 2020-08-03 1 551
Courtoisie - Avis d'acceptation considéré non envoyé 2020-12-15 1 412
Avis du commissaire - Demande jugée acceptable 2021-03-25 1 546
Modification / réponse à un rapport 2018-10-18 8 251
Nouvelle demande 2016-06-29 8 211
Courtoisie - Certificat de dépôt pour une demande de brevet divisionnaire 2016-08-02 1 147
Demande de l'examinateur 2017-04-26 4 186
Modification / réponse à un rapport 2017-10-19 10 359
Modification / réponse à un rapport 2017-10-19 10 368
Demande de l'examinateur 2018-04-18 3 149
Note relative à une entrevue 2019-03-06 1 17
Modification / réponse à un rapport 2019-03-06 7 229
Demande de l'examinateur 2019-04-02 4 241
Modification / réponse à un rapport 2019-10-02 9 291
Retrait d'acceptation / Modification / réponse à un rapport 2020-12-01 22 846
Taxe finale 2021-07-15 4 152
Certificat électronique d'octroi 2021-09-07 1 2 527