Computational complexity
Authors: Sarang Harshey (sh2669) (SysEn 5800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
== Introduction ==
Computational complexity is the study of the resources, especially time and space, required to solve a computational problem. It provides an understanding of how the resource requirements scale up as the problem size grows. Another objective is to identify which problems are solvable by algorithms and which are inherently difficult and practically impossible to solve.
'''Complexity Classes'''
Problems can be grouped into sets that share a common property and require a similar amount of computational resources. These sets or groups are known as ''Complexity Classes''. Some basic complexity classes are described below.
'''P (Polynomial Time) Class''': If a problem can be solved in polynomial time, it belongs to class P. Problems in this class are considered efficiently solvable: the time required to solve them grows polynomially with the size of the input. Linear programming problems fall in class P. [1]

'''NP (Non-deterministic Polynomial Time) Class''': If a proposed solution to a problem can be verified in polynomial time, the problem belongs to class NP. For these problems a given solution can be checked efficiently; finding a solution in the first place, however, may take more than polynomial time. [1]

'''NP-Hard Class''': NP-Hard problems may or may not belong to class NP. A problem H is NP-Hard if for every problem L in NP there exists a polynomial-time reduction from L to H. These are usually optimization problems. [1]

'''NP-Complete Class''': NP-Complete problems are a subset of class NP. A problem H is NP-Complete if H is in NP and every problem in NP can be reduced to H in polynomial time. NP-Complete problems are the hardest problems in NP. These are usually decision problems. [1]
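As an illustrative sketch (not part of the original article), consider subset-sum, a classic NP-Complete problem: given a set of numbers and a target, does some subset sum to the target? A proposed solution (a "certificate") can be verified in polynomial time, even though no polynomial-time algorithm is known for finding one. The function name and sample data below are our own.

```python
def verify_subset_sum(numbers, target, certificate):
    # Verification is polynomial (here, linear in the certificate size):
    # check membership of each chosen element and compare the sum.
    return all(x in numbers for x in certificate) and sum(certificate) == target

numbers = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(numbers, 9, [4, 5]))   # True  -- valid certificate
print(verify_subset_sum(numbers, 9, [3, 5]))   # False -- sums to 8, not 9
```

The contrast is the essence of NP: checking `[4, 5]` takes a handful of steps, while a naive search for a certificate would have to examine up to 2<sup>n</sup> subsets.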
'''History'''
The history of computational complexity can be traced back to the paper ''“On Computable Numbers, with an Application to the Entscheidungsproblem”'', published in 1936 by A. M. Turing [2]. In this paper, Turing introduced his famous Turing machine and provided a convincing formalization of the notion of a computable function. In the following years, once the theory for determining which problems can and cannot be solved was established, questions regarding relative computational difficulty came up [3]. This can be considered the birth of computational complexity. M. O. Rabin, in his 1960 paper ''“Degree of Difficulty of Computing a Function and a Partial Ordering of Recursive Sets”'', attempted to measure the amount of work inherent in computing a function and to define the degree of difficulty of computing a function [4]. Another influential paper, published in 1965 by Hartmanis and Stearns, ''“On the Computational Complexity of Algorithms”'', discussed a method of categorizing problems according to how hard they are to compute; it established a variety of theorems, provided a way to classify functions or recognition problems according to their computational complexity, and laid out definitions of time complexity and space complexity [5]. Alan Cobham's 1965 paper, ''“The Intrinsic Computational Difficulty of Functions”'', discussed the relative difficulty of addition and multiplication and characterised an important class of functions computable in time bounded by a polynomial in the length of the input [6]. These papers by Cobham, Hartmanis and Rabin can be considered the founding papers of the field of computational complexity. In 1967, Manuel Blum formulated axioms, known as the Blum axioms, that specify desirable properties of complexity measures on the set of computable functions.

The field of computational complexity started to grow from 1971 onwards, with studies by Stephen Cook, Leonid Levin and Richard Karp that expanded the study of problems belonging to the class NP-Complete.
'''Need for Computational Complexity'''
Computational complexity provides a method to analyse an algorithm and indicates the performance that can be expected from it. Through complexity analysis, the costliest steps of a complex algorithm (in terms of space and time) can be identified, and efforts can be made to improve efficiency by tuning these steps before implementation. It also helps in selecting the best algorithm and eliminating inefficient ones.
== Algorithm Discussion ==
Computational complexity can be determined through algorithm analysis, by establishing a relationship between the length of the algorithm's input and the number of steps it takes (time complexity) or the amount of space it takes (space complexity). This relationship is usually represented as a function. The smaller the value of this function, the higher the efficiency of the algorithm; the function also indicates the growth rate of resource use with respect to the size of the input.
Mathematical notations, usually known as ''Asymptotic Notations'', are used to describe the complexity of an algorithm. Three notations are broadly used, namely Big-O (''O''-notation), Big-Theta (Θ-notation) and Big-Omega (Ω-notation).
'''''O-notation''''' represents the upper bound, or the worst case, i.e. the maximum time (or space) required by an algorithm. For functions ''f'' and ''t'',

''f(n) = O(t(n))''

if ''f(n) ≤ c·t(n)'' for all ''n ≥ d'',

where ''c'' and ''d'' are positive constants [7] [8]
'''''Ω-notation''''' represents the lower bound, or the best case, i.e. the minimum time (or space) required by an algorithm. For functions ''f'' and ''t'',

''f(n) = Ω(t(n))''

if ''c·t(n) ≤ f(n)'' for all ''n ≥ d'',

where ''c'' and ''d'' are positive constants [7] [8]
'''''Θ-notation''''' represents a tight bound, as it bounds the algorithm from both above and below. For functions ''f'' and ''t'',

''f(n) = Θ(t(n))'',

if ''c<sub>1</sub>·t(n) ≤ f(n) ≤ c<sub>2</sub>·t(n)'' for all ''n ≥ d'',

where ''c<sub>1</sub>'', ''c<sub>2</sub>'' and ''d'' are positive constants [7] [8]
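The Big-O definition above can be checked numerically for a concrete pair of functions. The sketch below (our own illustration; the choice of constants is ours) shows that ''f(n) = 3n + 5'' is ''O(n)'', witnessed by ''c'' = 4 and ''d'' = 5, since 3n + 5 ≤ 4n whenever n ≥ 5.

```python
def f(n):
    # Example cost function: 3n + 5 elementary steps for input size n.
    return 3 * n + 5

c, d = 4, 5  # witness constants for the claim f(n) = O(n)

# Verify f(n) <= c * n for every n >= d over a sample range.
print(all(f(n) <= c * n for n in range(d, 1000)))  # True
```

Any larger `c` would also work; the definition only requires that some pair of constants exists.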
'''Big-O (O-notation)''' is most commonly used to analyse the complexity of an algorithm, as it provides the worst-case estimate. It considers only the number of steps involved in solving the problem, not the hardware configuration being used. Some of the common complexities are listed below.
* '''''Constant Time Complexity O(1)''''': computation time for the problem is independent of the size of the input data.
* '''''Linear Time Complexity O(n)''''': computation time for the problem is proportional to the size of the input ''n'' and increases linearly.
* '''''Polynomial Time Complexity O(n<sup>c</sup>)''''': computation time for the problem grows as a polynomial of the input size ''n'', where ''c'' is a constant depending on the problem.
* '''''Exponential Time Complexity O(2<sup>n</sup>)''''': computation time for the problem doubles with each unit increase in the input size ''n''.
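As a brief illustration of these growth rates, the Python sketch below (function names are ours, not from the article) gives one minimal example per class:

```python
def constant(items):
    # O(1): returns the first element, regardless of input size.
    return items[0]

def linear(items):
    # O(n): one pass over the input.
    total = 0
    for x in items:
        total += x
    return total

def polynomial(items):
    # O(n^2): examines every ordered pair of elements.
    pairs = 0
    for x in items:
        for y in items:
            pairs += 1
    return pairs

def exponential(n):
    # O(2^n): naive recursive Fibonacci, two recursive calls per level.
    if n < 2:
        return n
    return exponential(n - 1) + exponential(n - 2)

data = [1, 2, 3, 4]
print(constant(data), linear(data), polynomial(data), exponential(10))
# 1 10 16 55
```

Growing `data` leaves `constant` unchanged, scales `linear` proportionally, and quadruples `polynomial` when the input doubles, while `exponential` becomes impractical even for modest ''n''.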
== Numerical Example ==
Complexity is determined from the number of steps an algorithm performs. Below are a few examples demonstrating complexity calculation by analysing the algorithm.
'''Example of Linear Time Complexity''' or '''''O(n)'':'''
* Let’s consider the algorithm below, with two sequential “for” loops.
 x = 0
 y = 0
 a = 3
 b = 3
 for i in range(a):
     x = x + 5
 for i in range(b):
     y = y + 10
 print(x, y)
* Algorithm output: 15 30
* The first loop performs a total of 3 operations, one for each of its 3 iterations
* Similarly, the second loop performs 3 more operations, one for each of its 3 iterations
* Therefore, a total of 6 operations are performed
* Hence, the time complexity of this algorithm is ''O(a+b)'' ⇒ ''O(n)'', where ''n'' is a generic representation of the total input size
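The operation count in the analysis above can be confirmed by instrumenting the loops with a step counter, as in this sketch (the counter variable is our addition):

```python
a, b = 3, 3
x = y = 0
steps = 0  # counts executions of the loop bodies

for i in range(a):
    x = x + 5
    steps += 1  # one operation per iteration of the first loop

for i in range(b):
    y = y + 10
    steps += 1  # one operation per iteration of the second loop

print(x, y, steps)  # 15 30 6
```

The counter reaches exactly a + b = 6, matching the ''O(a+b)'' analysis.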
'''Example of Polynomial Time Complexity''' or '''''O(n<sup>k</sup>)'':'''
* Let’s consider an example of an algorithm with a nested “''for''” loop. (The inner loop runs over ''j'' = 1, …, ''b'', so that the printed values match the output below.)
 x = 0
 a = 3
 b = 3
 for i in range(a):
     for j in range(1, b + 1):
         x = x + j
         print(x, end=" ")
     print()
* Output of this algorithm:
 1 3 6
 7 9 12
 13 15 18
* In this algorithm, for each of the outer loop's 3 iterations, the inner loop runs ''b'' = 3 times
* Therefore, a total of 9 operations are performed
* Hence, the complexity of this algorithm is ''O(a*b)'' ⇒ ''O(n<sup>2</sup>)'', where ''n'' is a generic representation of the input size
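The quadratic growth can likewise be confirmed with a counter: doubling the input size quadruples the work. The helper name `count_steps` is our own:

```python
def count_steps(n):
    # Count inner-loop executions for an n-by-n nested loop:
    # exactly n * n steps, so the work grows quadratically in n.
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

print(count_steps(3))  # 9
print(count_steps(6))  # 36 -- doubling n quadruples the step count
```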
== Applications ==
Computational complexity helps developers determine the most efficient algorithm, i.e. the one that consumes the fewest resources (time and space). It is widely used across many fields. Some of the major application areas are discussed below.

'''Computer science and algorithm efficiency analysis''': It helps in designing efficient algorithms and aids in software/hardware selection. It enables comparison of multiple algorithms that can be used to solve a specific problem. Computational complexity also helps in optimizing database queries and information retrieval processes.

'''Quantum Computing''': Quantum computing is mainly explored for cybersecurity, artificial intelligence, data management, data analytics and optimization. Quantum complexity theory studies which problems admit quantum algorithms that are exponentially faster than the best known classical algorithms, enabling businesses to solve problems that cannot be solved by conventional computing devices.

'''Game Theory''': Game theory deals with optimal decision making in any strategic setting. It is extensively used for analysing customer behaviour and market trends, developing effective business strategies, pricing products competitively, etc. Computational complexity helps in defining the most efficient algorithm for approaching a game and arriving at the best outcome while considering all the possible variations.

'''Healthcare systems''': Various aspects of healthcare systems, such as patient management, continuity of patient care and nursing, require efficient decision-making processes. These complex tasks are broken down into smaller, manageable tasks using computational models, while giving due consideration to the interrelatedness of the systems involved. [9]
== Conclusion ==
Computational complexity provides a framework for analysing, comparing and selecting the most efficient algorithm, and also supports the selection of the best-suited hardware and software combination. Algorithms and problems are categorised into the classes P, NP, NP-Complete and NP-Hard, and worst-case complexity is represented using Big-O notation. Computational complexity has been evolving continuously for the past few decades and remains a major area of study. It is extensively used in fields such as game theory, quantum computing and healthcare systems, where the computations involved are enormous and cannot be handled by conventional methods.
== Sources ==

<references />
Latest revision as of 13:32, 10 December 2024