Computational complexity

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Authors: Steve Bentioulis, Augie Bravo, Will Murphy (SysEn 6800 Fall 2020)


== Introduction ==
<blockquote>''“The subject of my talk is perhaps most directly indicated by simply asking two questions: first, is it harder to multiply than to add? and second, why?...there is no algorithm for multiplication computationally as simple as that for addition, and this proves something of a stumbling block” - Alan Cobham, 1965'' <ref>[https://www.cs.toronto.edu/~sacook/homepage/cobham_intrinsic.pdf A. Cobham, The intrinsic computational difficulty of functions], in Y. Bar-Hillel, ed., Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress, North-Holland Publishing Company, Amsterdam, 1965, pp. 24-30</ref></blockquote>
Computational complexity refers to the amount of resources needed to solve a problem; complexity increases as the amount of resources required increases. While this notion may seem straightforward enough, computational complexity has profound impacts on which problems can feasibly be solved.


The quote above from Alan Cobham captures some of the earliest thinking on defining computational complexity, and it set the stage for classifying problems into complexity classes that indicate their computational feasibility.
 
Additionally, the theory of computational complexity is in its infancy; it has only been studied in earnest since the 20<sup>th</sup> century.


== Theory, Methodology ==
The concept of computation has evolved since the advent of the standard universal electronic computer and its widespread societal adoption. While the electronic computer is often treated as synonymous with computation, it is important to remember that computation is a major scientific concept in its own right, irrespective of whether it is conducted by machine, human, or otherwise.
 
When studying computation, a key area of interest is understanding which problems are, in fact, computable. Researchers have shown that some tasks are inherently incomputable, in that no computer can solve them without going into infinite loops on certain inputs. This phenomenon raises two questions: how do you determine whether a problem can be computed, and, for those problems that can be, how do you calculate the complexity associated with computing the answer?
 
The focus of computational complexity is the measure of computational efficiency: quantifying the amount of computational resources required to solve a problem.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref>


Within the study of computational complexity there exists the notion of a complexity class defined as a set of functions that can be computed within given resource bounds.<ref>Du, D., & Ko, K.-I. (2014). Theory of computational complexity. (Second edition.). Hoboken, New Jersey: Wiley. Retrieved from http://onlinelibrary.wiley.com/book/10.1002/9781118595091</ref>


=== Class P ===
Class P computational complexity problems are those that can be solved in polynomial running time, where “P” stands for polynomial and running time is a function of the number of bits in the input.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref>


A complexity class refers to a specific decision problem rather than a generic type of problem. For example, it is not acceptable to state that integer multiplication is in class P; rather, you must state the specific decision problem, e.g. computing the product of 3 and 5 is a class P problem.


Furthermore, running time is not defined in minutes or nanoseconds; it refers to the number of operations performed to resolve or verify an answer to a problem. Running time is a function of the number of bits being input into the decision problem. This allows us to ignore the efficiency of the machine running the computation and judge the complexity of the decision problem solely on the merits of the problem itself.
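To make the operation-counting view concrete, the following sketch (an illustration of ours, not taken from the cited texts) counts comparisons for a simple polynomial-time decision problem: does a list contain a duplicate? The nested loops perform at most n(n-1)/2 comparisons, a polynomial in the input size n, so the problem sits in class P regardless of which machine executes it.

<syntaxhighlight lang="python">
# Illustrative sketch (not from the cited sources): a class P decision
# problem -- "does this list contain a duplicate?" -- with an explicit
# operation counter. The count is at most n*(n-1)/2, polynomial in n.
def has_duplicate(values):
    comparisons = 0
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1           # one comparison = one "operation"
            if values[i] == values[j]:
                return True, comparisons
    return False, comparisons

print(has_duplicate([5, 9, 3, 2, 7, 1, 4]))  # (False, 21): 7*6/2 comparisons
</syntaxhighlight>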


=== Class NP ===
NP stands for nondeterministic polynomial time, originally referring to nondeterministic Turing machines (NDTM) in which the Turing machine has two transition functions and the computer arbitrarily determines which transition function to apply.


Complexity class NP consists of problems that can be efficiently verified, i.e. within a running time upper bounded by a polynomial function. Verifiability is the concept that, if given a potential solution to the problem, it can be confirmed or rejected.
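As a concrete illustration (a sketch of ours, using the Subset Sum problem discussed below), verifying a proposed certificate takes only a linear pass over the input, even though finding such a certificate may be far harder:

<syntaxhighlight lang="python">
# Illustrative sketch: polynomial-time verification for Subset Sum.
# Given numbers, a target, and a certificate (a list of indices into
# the numbers), accepting or rejecting takes one linear pass.
def verify_subset_sum(numbers, target, certificate):
    if len(set(certificate)) != len(certificate):       # no index reused
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                                    # indices must be valid
    return sum(numbers[i] for i in certificate) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # False: 3 + 34 = 37
</syntaxhighlight>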


==== Class NP-hard and NP-complete ====
The NP-hard complexity class comprises those problems that are at least as hard as any problem in NP; an NP-hard problem need not itself belong to NP. If P ≠ NP, then NP-hard problems cannot be decided in polynomial time. See P vs NP on this page.


NP-complete refers to those problems within the NP complexity class that are the hardest problems to solve within the class. Examples of NP-complete problems include Independent Set, [https://optimization.cbe.cornell.edu/index.php?title=Traveling_salesman_problem Traveling Salesperson], Subset Sum, and Integer Programming problems. The implication is that these problems are not in P unless P = NP.
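To illustrate why these problems are believed hard, the following sketch (ours, for illustration only) solves Subset Sum by exhaustive search. The number of candidate subsets grows as 2<sup>n</sup>, while each individual candidate can still be verified in linear time as shown above.

<syntaxhighlight lang="python">
from itertools import combinations

# Illustrative sketch: brute-force Subset Sum. With n numbers there are
# 2^n subsets to try in the worst case -- exponential in the input size.
def solve_subset_sum(numbers, target):
    tried = 0
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            tried += 1
            if sum(numbers[i] for i in subset) == target:
                return subset, tried       # certificate found
    return None, tried                     # all 2^n subsets exhausted

print(solve_subset_sum([3, 34, 4, 12, 5, 2], 9))  # ((2, 4), 18): found after 18 tries
</syntaxhighlight>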


=== P vs NP ===
The difference between class P and class NP computational complexity is illustrated simply by considering a Sudoku puzzle. Ask yourself: is it easier to solve a Sudoku puzzle or to verify whether an answer to a Sudoku puzzle is correct? Class P refers to computational complexity problems that can be efficiently solved. Class NP refers to those problems whose answers can be efficiently verified. The answer to a Sudoku puzzle can be efficiently verified, and for that reason Sudoku is considered a class NP complexity problem.
 
This then begs the question: if every class NP problem can be efficiently verified, does that mean it can also be efficiently solved? If so, then P = NP. However, no one has yet been able to prove that P = NP, and thus the implications of P ≠ NP must also be considered.
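The Sudoku illustration can be made concrete with a short verifier (a sketch of ours): checking a completed 9×9 grid means scanning each row, column, and 3×3 box once, which is efficient, whereas solving an empty grid in the obvious way requires searching an enormous space of candidate fillings.

<syntaxhighlight lang="python">
# Illustrative sketch: verifying a completed 9x9 Sudoku grid. Each row,
# column, and 3x3 box is checked once, so verification is efficient.
def verify_sudoku(grid):
    digits = set(range(1, 10))
    rows = [list(row) for row in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c]
              for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(set(group) == digits for group in rows + cols + boxes)

# A mechanically generated valid grid (a standard shifted pattern):
grid = [[(3 * r + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(verify_sudoku(grid))   # True
grid[0][0] = grid[0][1]      # introduce a clash in row 0
print(verify_sudoku(grid))   # False
</syntaxhighlight>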
 
The importance of understanding P vs NP is the subject of much discussion and has even sparked competition in the scientific community. The problem of P vs NP was selected by the Clay Mathematics Institute (CMI) of Cambridge, Massachusetts as one of the seven most difficult and important problems to be solved at the dawn of the 21<sup>st</sup> century. A prize of $1 million has been allocated for anyone who can bring forward a solution.<ref>Clay Mathematics Institute, The Millennium Prize Problems. Retrieved from https://www.claymath.org/millennium-problems/millennium-prize-problems</ref>


=== Methodology ===
The methodology for determining computational complexity is built upon the notion of a Turing machine and quantifying the number of computational operations the machine must perform to resolve or verify a problem. A straightforward approach is to quantify the number of operations required considering every possible input to the Turing machine’s algorithm. This approach is referred to as worst-case complexity, as it counts the maximum possible number of operations that could be performed in order to solve the problem.
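As a small illustration of worst-case counting (a sketch of ours, not from the cited texts), consider linear search: the bound of n comparisons is met whenever the target is absent, so the worst case is n operations no matter how favorable typical inputs may be.

<syntaxhighlight lang="python">
# Illustrative sketch: worst-case analysis of linear search. The
# comparison count is at most n, and that bound is achieved whenever
# the target is not present (or sits in the last position).
def linear_search(values, target):
    comparisons = 0
    for v in values:
        comparisons += 1
        if v == target:
            return True, comparisons
    return False, comparisons

print(linear_search([5, 9, 3, 2, 7, 1, 4], 7))  # (True, 5)
print(linear_search([5, 9, 3, 2, 7, 1, 4], 8))  # (False, 7): the worst case
</syntaxhighlight>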


However, critics of worst-case complexity highlight that in practice the worst-case behaviors of algorithms may never actually be encountered, and the worst-case approach can be unnecessarily cumbersome. As an alternative, average-case analysis seeks to design efficient algorithms that perform well on most real-life instances. An important component of average-case analysis is the concept of a distribution over the inputs. There are several approaches to determining this distribution, including randomization. An average-case problem consists of both a decision problem and a distribution over its inputs, implying that the complexity of a decision problem can vary with the inputs.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref>
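A toy illustration of the average-case idea (our sketch, under an assumed uniform distribution of inputs): if a linear search's target is equally likely to sit at any of the n positions, the expected number of comparisons is (n + 1)/2, roughly half the worst case of n.

<syntaxhighlight lang="python">
import random

# Illustrative sketch: average-case operation count of linear search
# under an *assumed* uniform distribution of the target's position.
def average_comparisons(n, trials=100_000):
    total = 0
    for _ in range(trials):
        position = random.randrange(n)  # target equally likely anywhere
        total += position + 1           # comparisons until it is found
    return total / trials

print(average_comparisons(7))  # close to (7 + 1) / 2 = 4.0
</syntaxhighlight>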


== Numerical Example ==
The efficiency of a computational problem is measured by the number of operations executed to solve it, not the seconds (or years) required to solve it. The number of operations executed is a function of input size and arrangement. Big O notation is used to determine an algorithm’s complexity class according to the number of operations it performs as a function of the input.<ref>Mohr, A. Quantum Computing in Complexity Theory and Theory of Computation (PDF). Retrieved from http://www.austinmohr.com/Work_files/complexity.pdf</ref>
The notation O(n) is used where ‘O’ refers to the order of a function and ‘n’ represents the size of the input.<ref>A. Mejia, How you can change the world by learning Data Structures and Algorithms. Retrieved from: https://towardsdatascience.com/how-you-can-change-the-world-by-learning-data-structures-and-algorithms-84566c1829e3</ref>
An example of an O(1) problem is determining whether a number is odd or even. The algorithm reads a single bit of the input and performs one operation to determine whether the number is odd or even. No matter how large or small the input, the number of operations holds constant at 1; for that reason this is an O(1) problem.
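A minimal sketch of this constant-time parity check (ours, for illustration): parity depends only on the lowest-order bit, so one operation suffices regardless of input size.

<syntaxhighlight lang="python">
# Illustrative sketch: an O(1) decision. Parity depends only on the
# lowest-order bit, so a single bit test decides odd vs. even.
def is_odd(n):
    return n & 1 == 1  # one operation, independent of the size of n

print(is_odd(7))            # True
print(is_odd(12345678902))  # False -- still a single operation
</syntaxhighlight>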


An example of an O(n) problem is identifying the minimum element within an unsorted array. To compute this, the computer must read every element of the input and determine whether it is less than the smallest element seen so far. For this reason, the number of operations is linearly correlated with the number of inputs. For example, the decision problem of finding the minimum of {5,9,3,2,7,1,4} requires the computer to check every element in the array. This array has n = 7 inputs, so it will require 7 operations to read each element and compare it against the running minimum. This scales linearly as the size of the input increases.
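The minimum-finding example can be written out directly (a sketch of ours) with an explicit operation counter, reproducing the 7 operations for the array {5,9,3,2,7,1,4}:

<syntaxhighlight lang="python">
# Illustrative sketch: an O(n) problem. Finding the minimum of an
# unsorted array requires one operation per element, so n in total.
def find_min(values):
    operations = 0
    minimum = None
    for v in values:
        operations += 1                        # one read/compare per element
        if minimum is None or v < minimum:
            minimum = v
    return minimum, operations

print(find_min([5, 9, 3, 2, 7, 1, 4]))  # (1, 7): n = 7 inputs, 7 operations
</syntaxhighlight>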
== Applications ==
Computational Complexity is influential to numerous scientific fields including [https://optimization.cbe.cornell.edu/index.php?title=Quantum_computing_for_optimization quantum computing], game theory, data mining, and cellular automata.<ref>Robert A. Meyers. (2012). Computational Complexity: Theory, Techniques, and Applications. Springer: Springer. Retrieved from https://search.ebscohost.com/login.aspx?custid=s9001366&groupid=main&profid=pfi&authtype=ip,guest&direct=true&db=edspub&AN=edp1880523&site=eds-live&scope=site</ref> Focusing on quantum computing, there are important applications to the study of computational complexity, as the theory of complexity is largely based upon the Turing machine and the Church-Turing thesis that any physically realizable computation device can be simulated by a Turing machine. If quantum computers are physically realizable, they could alter our understanding of how complex a decision problem is by providing enhanced methods by which algorithms may be computed and potentially lowering the number of operations to be performed.<ref>Arora, S., & Barak, B. (2009). Computational complexity: a modern approach. Cambridge: Cambridge University Press. Retrieved from https://cornell-library.skillport.com/skillportfe/main.action?path=summary/BOOKS/31235</ref>


== Conclusion ==
Computational complexity has important implications in the field of computer science and far-reaching applications that span numerous fields and industries. As computable problems become more complex, the ability to increase the efficiency with which they are solved becomes more important. Advancements toward solving P vs NP will have far-reaching impacts on how we approach the computability of problems and the ability to efficiently allocate resources.


== Sources ==
 
<references />
