## Algorithm Performance
To understand the performance of algorithms, we first need to understand big-O notation.
### The Big-O Notation
In mathematics, big-O notation is a symbolism used to describe and compare the _limiting behavior_ of a function. A function's limiting behavior is how the function acts as it trends towards a particular value; in big-O notation, this is usually as it trends towards infinity.

In short, big-O notation is used to describe the growth or decline of a function, usually with respect to another function. In programming, big-O notation compares the growth or decline of the runtime with respect to the size of the input.

In algorithm design, we usually use big-O notation because we can see how good or bad an algorithm's performance will be. Strictly speaking, big-O denotes the runtime in the worst possible case; however, in many instances, people use it as a synonym for the average expected runtime.
NOTE: x^2 is equivalent to x * x, or 'x squared'.

For example, we say that x = O(x^2) for all x > 1; in other words, x^2 is an upper bound on x and therefore it grows faster. A claim like x = O(x^2) for all x > _n_ can be rewritten as x <= x^2 for all x > _n_, where _n_ is the minimum number that satisfies the claim, in this case 1.
Effectively, we say that a function f(x) that is O(g(x)) grows no faster than g(x) does.
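To make the bound concrete, here is a minimal Python sketch (our own illustration; none of this code is from the original article) that tabulates x against x^2 and confirms x <= x^2 at a few sample points greater than 1:

```python
# Compare the growth of f(x) = x against g(x) = x^2.
# The claim "x = O(x^2) for all x > 1" means x <= x^2 on that range.
for x in [2, 5, 10, 100, 1000]:
    print(f"x = {x:>5}   x^2 = {x * x:>8}   bound holds: {x <= x * x}")
```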
Comparatively, in computer science and software development, we can use big-O notation to describe the efficiency of algorithms via their time and space complexity.
**Space Complexity** of an algorithm refers to its memory footprint with respect to the input size.
Specifically, when using big-O notation we are describing the efficiency of the algorithm with respect to an input _n_, usually as _n_ approaches infinity.

When examining algorithms, we generally want lower time and space complexity. A time complexity of O(1) indicates constant time.
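As a minimal sketch of what these complexities look like in code (the function names below are invented for illustration), a constant-time O(1) operation does the same amount of work regardless of input size, while a linear-time O(n) operation may inspect every element:

```python
def first_element(items):
    # O(1): indexing a Python list takes the same time
    # regardless of how long the list is.
    return items[0]

def contains(items, target):
    # O(n): in the worst case every element is inspected,
    # so the runtime grows linearly with the input size.
    for item in items:
        if item == target:
            return True
    return False
```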
Through the comparison and analysis of algorithms, we are able to create more efficient applications.
For algorithm performance, we consider two main factors:
- **Time**: We need to know how much time it takes to run an algorithm on our data and how that time grows with the size of the data (or, in some cases, with other factors such as the number of digits); see the timing sketch after this list.
- **Space**: Our memory is finite, so we have to know how much free space the algorithm needs and, like time, we need to be able to trace its growth.
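One way to observe the time factor in practice is to measure the same algorithm on growing inputs. The following sketch (our own illustration, using Python's standard `timeit` module) times a simple linear scan at several input sizes; under the O(n) claim, the measured time should grow roughly tenfold as n does:

```python
import timeit

def linear_scan(items):
    # O(n) work: touch every element once.
    total = 0
    for item in items:
        total += item
    return total

# Time the scan at increasing input sizes; roughly linear growth expected.
for n in [1_000, 10_000, 100_000]:
    data = list(range(n))
    seconds = timeit.timeit(lambda: linear_scan(data), number=100)
    print(f"n = {n:>7}   time for 100 runs: {seconds:.4f}s")
```

Absolute times depend on the machine; what matters is the growth trend as n increases.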
The following three notations are most commonly used to represent the time complexity of algorithms:
1. **Θ Notation**: The theta notation bounds a function from above and below, so it defines exact behavior. We can say that we have theta notation when the worst case and the best case are the same.
>Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
<img src = "https://cdncontribute.geeksforgeeks.org/wp-content/uploads/AlgoAnalysis-1.png" />
2. **Big O Notation**: The Big O notation defines an upper bound of an algorithm. For example, insertion sort takes linear time in the best case and quadratic time in the worst case, so we can safely say that the time complexity of insertion sort is *O*(*n^2*); a runnable insertion sort sketch follows this list.
>O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}
<img src = "https://cdncontribute.geeksforgeeks.org/wp-content/uploads/AlgoAnalysis-2.png"/>
3. **Ω Notation**: Ω notation provides a lower bound to an algorithm; it describes the fastest possible running time for that algorithm.
>Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}
<img src = "https://cdncontribute.geeksforgeeks.org/wp-content/uploads/AlgoAnalysis-3.png"/>