Computational complexity

Authors: Sarang Harshey (sh2669) (SysEn 5800 Fall 2024)
 
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu


== Introduction ==
Computational complexity is the study of the resources, especially time and space, required to solve a computational problem. It provides an understanding of how the resource requirements scale as the problem size grows. A further objective is to identify which problems are solvable by algorithms and which problems are truly difficult and practically impossible to solve.
 
=== Complexity Classes ===
Problems can be grouped into sets that share a common property and require similar amounts of computational resources. These sets or groups are known as ''Complexity Classes''. Some basic complexity classes are described below.
 
===== P (Polynomial Time) Class =====
If a problem can be solved in polynomial time, it is considered a Class P problem. Problems in this class are regarded as efficiently solvable; the time required to solve them grows at most polynomially with the size of the input.<ref name=":0">R. Vanderbei, Linear Programming Foundations and Extensions, Springer, 2014. https://doi.org/10.1057/palgrave.jors.2600987</ref><ref name=":3">Sanjeev Arora, Boaz Barak, Computational Complexity: A Modern Approach, New York: Cambridge University Press, 2009. http://dx.doi.org/10.1017/CBO9780511804090</ref> Linear programming problems belong to class P.
 
===== NP (Non-deterministic Polynomial Time) Class =====
If a proposed solution to a problem can be verified in polynomial time, the problem is considered a Class NP problem. Such a solution can be checked efficiently once it is given, but finding a solution in the first place may take more than polynomial time.<ref name=":0" /><ref name=":3" />
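
To make polynomial-time verification concrete, the minimal sketch below (an illustration added here, not part of the original article; the function name and the sample instance are hypothetical) checks a proposed certificate for the Subset Sum problem, a classic NP problem, in time roughly linear in the size of the instance, even though no polynomial-time algorithm is known for finding such a certificate in general.

<syntaxhighlight lang="python">
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier for a Subset Sum instance.

    numbers     : list of integers defining the instance
    target      : the desired sum
    certificate : proposed solution, given as a list of indices into numbers
    """
    if len(set(certificate)) != len(certificate):
        return False                      # an index may not be used twice
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                      # every index must be valid
    return sum(numbers[i] for i in certificate) == target


# Verification takes time roughly linear in the instance size,
# even though no polynomial-time algorithm is known for *finding* a certificate.
print(verify_subset_sum([3, 9, 8, 4, 5, 7], 15, [0, 2, 3]))  # True:  3 + 8 + 4 = 15
print(verify_subset_sum([3, 9, 8, 4, 5, 7], 15, [1, 2]))     # False: 9 + 8 = 17
</syntaxhighlight>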


===== NP-Hard Class =====
NP-Hard problems may or may not fall under the NP class. A problem H is NP-Hard if, for every problem L in NP, there exists a polynomial-time reduction from L to H.<ref name=":0" /><ref name=":3" /> These are usually optimization problems.


===== NP-Complete Class =====
NP-Complete problems are a subset of class NP. A problem H in NP is NP-Complete if every problem in NP can be reduced to H in polynomial time.<ref name=":0" /><ref name=":3" /> NP-Complete problems are the hardest problems in NP and are usually decision problems; examples include Independent Set, the [https://optimization.cbe.cornell.edu/index.php?title=Traveling_salesman_problem Traveling Salesperson Problem], Subset Sum and Integer Programming.


=== History ===
The history of computational complexity can be traced back to the paper ''On Computable Numbers, with an Application to the Entscheidungsproblem''<ref>A. M. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” ''Proceedings of the London Mathematical Society'', 1936. https://doi.org/10.1112/plms/s2-42.1.230</ref>, published in 1936 by A. M. Turing. In this paper, Turing introduced his famous Turing machine and provided a convincing formalization of the notion of a computable function. In the following years, once the theory for determining which problems can and cannot be solved was established, questions about the relative computational difficulty of solvable problems arose.<ref>S. A. Cook, “An overview of computational complexity,” ''Communications of the ACM,'' vol. 26, no. 6, pp. 400-408, 1983. https://doi.org/10.1145/358141.358144</ref> This can be considered the birth of computational complexity. In 1960, M. O. Rabin, in his paper ''Degree of Difficulty of Computing a Function and a Partial Ordering of Recursive Sets''<ref>M. O. Rabin, “Degree of difficulty of computing a function and a partial ordering of recursive sets,” Hebrew University, Jerusalem, Israel, 1960.</ref>, attempted to measure the amount of work inherent in computing a function and to define the degree of difficulty of computing a function. Another influential paper, published in 1965 by Hartmanis and Stearns, ''On the Computational Complexity of Algorithms''<ref>J. Hartmanis, R. Stearns, “On the Computational Complexity of Algorithms,” ''Transactions of the American Mathematical Society,'' 1965. http://dx.doi.org/10.2307/1994208</ref>, discussed a method of categorizing problems according to how hard they are to compute. The paper established a variety of theorems, provided a way to classify functions and recognition problems according to their computational complexity, and laid out definitions of time complexity and space complexity. Alan Cobham's 1965 paper, ''The Intrinsic Computational Difficulty of Functions''<ref>A. Cobham, “The intrinsic computational difficulty of functions,” ''Logic, Methodology and Philosophy of Science,'' pp. 24-30, 1965. https://doi.org/10.2307/2270886</ref>, discussed the relative difficulty of addition and multiplication and characterised an important class of functions computable in time bounded by a polynomial in the length of the input. These papers by Cobham, Hartmanis and Stearns, and Rabin can be considered the foundation of the field of computational complexity. In 1967, Manuel Blum formulated axioms, now known as the Blum axioms, that specify desirable properties of complexity measures on the set of computable functions. The field of computational complexity started to grow from 1971 onwards, with studies by Stephen Cook, Leonid Levin and Richard Karp that expanded the study of problems belonging to the class NP-Complete.


=== Need for Computational Complexity ===
Computational complexity provides a way to analyse an algorithm and indicates the performance that can be expected from it. For a complex algorithm, the costliest steps (in terms of space and time) can be identified through complexity analysis, and efforts can then be focused on tuning those steps before implementation. It also helps in selecting the best algorithm for a problem and in eliminating inefficient algorithms.


== Algorithm Discussion ==
The complexity of an algorithm can be calculated by establishing a relationship between the length of the input and either the number of steps the algorithm takes (time complexity) or the amount of memory it uses (space complexity). This relationship is usually represented as a function of the input size. The slower this function grows, the more efficient the algorithm; its growth rate describes how the resource requirement scales with the size of the input.


==== Asymptotic Notations ====
Mathematical notations, usually known as ''Asymptotic Notations'', are used to describe the complexity of an algorithm. Three notations are broadly used, namely Big-O (''O''-notation), Big-Theta (Θ-notation) and Big-Omega (Ω-notation).


===== Big-O Notation =====
'''''O-notation''''' represents an upper bound, corresponding to the worst case, i.e. the maximum time (or space) required by an algorithm. For functions ''f'' and ''t'',


<math>f(n) = O(t(n))</math>,


if <math>f(n) \leq c \cdot t(n)</math> for all <math>n\geq d</math>, where ''c'' and ''d'' are positive integers<ref name=":1">Reza Zanjirani Farahani, Masoud Hekmatfar, Facility Location - Concepts, Models, Algorithms and Case Studies, Physica-Verlag, 2009. https://doi.org/10.2307/2270886</ref><ref name=":2">M. Sipser, Introduction To The Theory Of Computation, Thomson Course Technology, 2006.</ref>


===== Big-Omega Notation =====
'''''Ω-notation''''' represents a lower bound, corresponding to the best case, i.e. the minimum time (or space) required by an algorithm. For functions ''f'' and ''t'',


<math>f(n) = \Omega (t(n))</math>,


if <math>f(n)\geq c \cdot t(n)</math> for all <math>n\geq d</math>, where ''c'' and ''d'' are positive integers<ref name=":1" /><ref name=":2" />


===== Big-Theta Notation =====
'''''Θ-notation''''' represents a tight bound, as it gives both an upper and a lower bound on the algorithm's growth (it should not be confused with average-case analysis). For functions ''f'' and ''t'',


<math>f(n)=\Theta(t(n))</math>,


if <math>c_1 \cdot t(n)\leq f(n) \leq c_2 \cdot t(n)</math> for all <math>n\geq d</math>, where ''c<sub>1</sub>'', ''c<sub>2</sub>'' and ''d'' are positive integers<ref name=":1" /><ref name=":2" />
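
As an illustration, the three notations can be applied to the function <math>f(n)=3n^2+2n+6</math>, which is analysed in more detail in the Numerical Example section below. Since

<math>3n^2 \leq 3n^2+2n+6 \leq 11 \cdot n^2 \quad \text{for all } n\geq 1,</math>

choosing <math>t(n)=n^2</math>, <math>c_1=3</math>, <math>c_2=11</math> and <math>d=1</math> gives <math>f(n)=\Omega(n^2)</math> and <math>f(n)=O(n^2)</math>, and therefore <math>f(n)=\Theta(n^2)</math>.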


=== Common Big-O Notations ===
Big-O (O-notation) is most commonly used to describe the complexity of algorithms, as it provides a worst-case estimate. It considers only the number of steps involved in solving the problem, not the hardware configuration being used. Some common complexities are listed below.


'''Constant Time Complexity <math>O(1)</math>''': computation time for the problem is independent of the size of the input data.


'''Linear Time Complexity''' <math>O(n)</math>: computation time for the problem is proportional to the size of the input ''n'' and increases linearly.
 
'''Polynomial Time Complexity''' <math>O(n^c)</math>: computation time for the problem grows as a polynomial in the size of the input ''n'', where ''c'' is a constant depending on the problem.
 
'''Exponential Time Complexity''' <math>O(2^n)</math>: computation time for the problem doubles with every additional unit of input ''n''.
 
'''Logarithmic Time Complexity''' <math>O(\log n)</math>: computation time for the problem is proportional to the logarithm of the input size ''n''.
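
The minimal Python sketch below (an illustration added for this article; the function names are chosen only for this example) pairs each of the growth rates above with a simple routine whose running time scales accordingly.

<syntaxhighlight lang="python">
def constant(items):
    # O(1): a single operation, regardless of the input size
    return items[0]


def linear_minimum(items):
    # O(n): examines every element exactly once
    smallest = items[0]
    for x in items:
        if x < smallest:
            smallest = x
    return smallest


def quadratic_pairs(items):
    # O(n^2): a polynomial example with c = 2, forming every ordered pair
    return [(a, b) for a in items for b in items]


def exponential_subsets(items):
    # O(2^n): enumerates every subset of the input
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets


def logarithmic_search(sorted_items, key):
    # O(log n): binary search halves the remaining range at every step
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == key:
            return mid
        if sorted_items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
</syntaxhighlight>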


== Numerical Example ==
Complexity is determined from the number of steps an algorithm performs. Below are a few examples demonstrating how complexity is calculated by analysing an algorithm.
 
=== Example 1: Complexity calculation of a time function ===
Let <math>f(n)=3n^2+2n+6</math>, where <math>f(n)</math> represents the time it takes an algorithm to solve a problem of size <math>n</math>.
 
Function <math>f(n)</math> is a sum of three terms: <math>3n^2</math>, <math>2n</math> and <math>6</math>. The term with the highest growth rate is <math>3n^2</math> (the product of the coefficient <math>3</math> and <math>n^2</math>), and the growth rate does not depend on the coefficient <math>3</math>. Omitting the coefficient gives the simplified term <math>n^2</math>.
 
Thus we can say that <math>f(n)=O(n^2)</math>
 
The above result can be verified using the formal definition: <math>f(n)=O(t(n))</math> if <math>f(n)\leq c \cdot t(n)</math> for all <math>n\geq d</math>, where ''c'' and ''d'' are positive integers.
 
Let's consider the function <math>t(n)=n^2</math>, with <math>c=11</math> and <math>d=1</math>.
 
Then <math>3n^2+2n+6\leq 11 \cdot n^2</math> for all <math>n\geq 1</math>.
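
As a quick numerical check, the short sketch below (illustrative only; the function names are chosen for this example) confirms that <math>3n^2+2n+6\leq 11 \cdot n^2</math> holds over a range of values <math>n\geq 1</math>.

<syntaxhighlight lang="python">
def f(n):
    # the time function from the example above
    return 3 * n**2 + 2 * n + 6


def bound(n, c=11):
    # the candidate upper bound c * t(n) with t(n) = n^2
    return c * n**2


# verify f(n) <= 11 * n^2 for n = 1, 2, ..., 1000
assert all(f(n) <= bound(n) for n in range(1, 1001))
print("f(n) <= 11 * n^2 holds for every tested n >= 1")
</syntaxhighlight>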
 
=== Example 2: Complexity calculation for sorting algorithm ===
Let’s consider an array in descending order: (6,4,3,1). We will sort it with two different algorithms and compute the complexity of each. Since the array is in descending order, it represents the worst case for these algorithms in terms of the number of operations (or time). Based on the number of operations performed by each algorithm, we can calculate the worst-case complexity and determine the algorithm best suited for this sorting task.
 
===== Bubble Sort Complexity Analysis =====
Bubble sort starts at the beginning of the array, compares each adjacent pair, and swaps the two elements if they are out of order. The algorithm stops when no swap is required during a complete pass, indicating that the array is sorted. For the array (6,4,3,1), the following operations take place.
 
First Pass,
 
('''6 4''' 3 1) → ('''4 6''' 3 1), here 6 and 4 were swapped
 
(4 '''6 3''' 1) → (4 '''3 6''' 1), here 6 and 3 were swapped
 
(4 3 '''6 1''') → (4 3 '''1 6'''), here 6 and 1 were swapped
 
Second Pass,
 
('''4 3''' 1 6) → ('''3 4''' 1 6), here 4 and 3 swapped
 
(3 '''4 1''' 6) → (3 '''1 4''' 6), here 4 and 1 swapped
 
(3 1 '''4 6''') → (3 1 '''4 6'''), no swapping as 4 and 6 are already in the correct order
 
Third Pass,
 
('''3 1''' 4 6) → ('''1 3''' 4 6), here 1 and 3 swapped
 
(1 '''3 4''' 6) → (1 '''3 4''' 6), no swapping as 3 and 4 are in correct order
 
(1 3 '''4 6''') → (1 3 '''4 6'''), no swapping as 4 and 6 are in correct order
 
Fourth Pass,
 
('''1 3''' '''4 6''') → ('''1 3''' 4 6), no swapping as 1 and 3 are in correct order
 
(1 '''3 4''' 6) → (1 '''3 4''' 6), no swapping as 3 and 4 are in correct order
 
(1 3 '''4 6''') → (1 3 '''4 6'''), no swapping as 4 and 6 are in correct order
 
Even though the array was already sorted after the third pass, the stopping condition of a pass with no swaps had not yet been met. Hence the algorithm made a fourth pass through the array, comparing adjacent pairs. Since no swaps were made during the fourth pass, the algorithm stopped.
 
Summarizing the operations performed by the algorithm on the array of size 4 during each pass:
 
* First Pass: 3 comparisons; 3 swaps
* Second Pass: 3 comparisons; 2 swaps
* Third Pass: 3 comparisons; 1 swap
* Fourth Pass: 3 comparisons; 0 swaps
 
By generalising the above scenario and considering ''n'' as the input size, we can compute the complexity of Bubble Sort (a code sketch follows this list):
 
* Number of passes required to sort the array: <math>n-1</math>
* Number of swaps in each pass: <math>(n-1),(n-2),...,2,1</math>
* Using the formula for the sum of the first <math>(n-1)</math> natural numbers, we get total swaps = <math>(n-1) \cdot ((n-1)+1)/2 = n \cdot (n-1)/2 = n^2/2 - n/2</math>
* Omitting the lower-order term and the coefficient of the highest growth term <math>n^2/2</math>, we get a worst-case time complexity of <math>O(n^2)</math>
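
The walkthrough above can be reproduced with a short instrumented implementation. The sketch below (added as an illustration; the function name is chosen for this example) counts comparisons and swaps while sorting (6, 4, 3, 1), and its totals match the four passes described above.

<syntaxhighlight lang="python">
def bubble_sort_with_counts(values):
    """Bubble sort that stops after a pass with no swaps.

    Returns the sorted list together with the number of
    comparisons and swaps performed.
    """
    data = list(values)
    comparisons = swaps = 0
    swapped = True
    while swapped:
        swapped = False
        # one pass: compare each adjacent pair and swap if out of order
        for i in range(len(data) - 1):
            comparisons += 1
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
                swaps += 1
                swapped = True
    return data, comparisons, swaps


print(bubble_sort_with_counts([6, 4, 3, 1]))
# ([1, 3, 4, 6], 12, 6): four passes of 3 comparisons each, and 3 + 2 + 1 swaps
</syntaxhighlight>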
 
===== Merge Sort Complexity Analysis =====
Merge sort divides the unsorted list into ''n'' sub-lists, each containing one element and therefore sorted. It then repeatedly merges pairs of sub-lists to produce new sorted sub-lists until only one sub-list remains; this final sub-list is the sorted array. Sorting of the array (6,4,3,1) proceeds as per the diagram below.
[[File:Merge Sort of Array.png|thumb|Merge Sort]]
Summarizing the operations performed by the algorithm for an array of size 4


* Number of splits to reach level 3, where each sub-list contains a single element: 3
* Number of merges to reach the final sorted list: 3
* Number of comparisons made: 2 comparisons at the first merging level, at most 3 comparisons at the second merging level


By generalising the above scenario and considering ''n'' as the input size, we can compute the complexity of merge sort (a code sketch follows this list):


* Number of splits to reach level ''k'', where each sub-list contains a single element, is <math>(n-1)</math>
* Number of merges required from level ''k'' to reach the final sorted list is <math>(n-1)</math>
* Assuming <math>(n-1)\approx n</math> when <math>n</math> is very large
* Number of comparisons made at each merging level is at most <math>n</math>
* Number of levels: <math>k=\log_2 n</math> (since <math>n=2^k</math>)
* Therefore, the total computation is the sum of the total number of splits and the total number of merges with their comparisons
* Total computations: <math>(n-1)+n\log_2 n</math>
* Omitting the lower-order growth terms, we get a worst-case complexity of <math>O(n\log_2 n)</math>
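
An instrumented merge sort in the same spirit is sketched below (added as an illustration; the function name is chosen for this example). It counts the element comparisons made during merging, which grow in line with the <math>n\log_2 n</math> estimate above.

<syntaxhighlight lang="python">
def merge_sort_with_counts(values):
    """Recursive merge sort returning (sorted_list, comparison_count)."""
    if len(values) <= 1:
        return list(values), 0

    mid = len(values) // 2
    left, left_count = merge_sort_with_counts(values[:mid])
    right, right_count = merge_sort_with_counts(values[mid:])

    merged = []
    comparisons = left_count + right_count
    i = j = 0
    # merge step: each loop iteration performs one comparison
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains of either half
    merged.extend(right[j:])
    return merged, comparisons


print(merge_sort_with_counts([6, 4, 3, 1]))
# ([1, 3, 4, 6], 4): at most n - 1 = 3 comparisons are needed at each merging level
</syntaxhighlight>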
 
===== Complexity Comparison =====
Complexity of Bubble sort is <math>O(n^2)</math>.
 
Complexity of Merge sort is <math>O(n\log_2 n)</math>.
 
Since <math>O(n\log_2 n)</math> grows more slowly than <math>O(n^2)</math>, Merge Sort is the more efficient algorithm for sorting this array.
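
To see how quickly the gap between the two growth rates widens, the short sketch below (illustrative only) tabulates <math>n^2</math> against <math>n\log_2 n</math> for increasing input sizes.

<syntaxhighlight lang="python">
import math

for n in (10, 100, 1_000, 10_000):
    quadratic = n ** 2                  # bubble sort's growth rate
    linearithmic = n * math.log2(n)     # merge sort's growth rate
    print(f"n={n:>6}  n^2={quadratic:>12,}  n*log2(n)={linearithmic:>12,.0f}")
</syntaxhighlight>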


== Applications ==
Computational complexity helps developers determine the most efficient algorithm, i.e. the one that consumes the fewest resources (time and space). It is widely used in various fields; some of the major application areas are discussed below.
 
==== Computer science and algorithm efficiency analysis ====
Complexity analysis helps in designing efficient algorithms and aids in software and hardware selection. It enables comparison of multiple algorithms that could be used to solve a specific problem, and it helps in optimizing database queries and information-retrieval processes.
 
==== Quantum Computing ====
Quantum computing is applied to cybersecurity, artificial intelligence, data management, data analytics and optimization. Quantum complexity theory studies which problems quantum algorithms can solve more efficiently than classical algorithms; for certain problems, quantum algorithms are believed to offer exponential speed-ups, enabling businesses to tackle problems that cannot be solved by conventional computing devices.
 
==== Game Theory<ref>T. Roughgarden, “Complexity Theory, Game Theory, and Economics: The Barbados Lectures,” ''Foundations and Trends® in Theoretical Computer Science,'' vol. 14, pp. 222-407, 2020. http://dx.doi.org/10.1561/0400000085</ref> ====
Game theory deals with optimal decision making in strategic settings. It is extensively used for analysing customer behaviour and market trends, developing effective business strategies, and setting competitive product prices. Computational complexity helps in identifying the most efficient algorithm for playing a game and in finding the best outcome while considering all possible variations.
 
==== Healthcare system<ref>Thomas G. Kannampallil, Guido F. Schauer, Trevor Cohen, Vimla L. Patel, “Considering complexity in healthcare systems,” ''Journal of Biomedical Informatics,'' vol. 44, no. 6, pp. 943-947, 2011. https://doi.org/10.2307/2270886</ref> ====
Various aspects of healthcare systems, such as patient management, continuity of patient care and nursing, require efficient decision-making processes. These complex tasks are broken down into smaller, manageable tasks using computational models, while giving due consideration to the interrelatedness of the systems involved.


== Conclusion ==
Computational complexity provides a framework for analysing, comparing and selecting the most efficient algorithm, and it also informs the selection of the best-suited hardware and software combination. Problems are categorised into classes such as P, NP, NP-Complete and NP-Hard, and worst-case complexity is expressed using Big-O notation. Computational complexity has been evolving continuously for the past few decades and remains a major area of study. It is used extensively in fields such as game theory, quantum computing and healthcare systems, where the number of computations is enormous and problems cannot be handled using conventional methods.


== Sources ==
<references responsive="0" />
