Understanding the time complexity of an algorithm is crucial for several reasons:
- Performance Evaluation: Time complexity provides an estimate of how the running time of an algorithm increases with the size of the input. This information helps in evaluating the efficiency of an algorithm and predicting its performance on larger datasets. It allows you to choose the most appropriate algorithm for a specific problem.
- Algorithm Selection: Different algorithms may solve the same problem but have different time complexities. Knowing the time complexity helps in selecting the most efficient algorithm for a particular scenario. In practice, the choice of algorithm can have a significant impact on the efficiency of a program.
- Resource Utilization: Time complexity is closely related to the amount of computational resources (such as CPU time) that an algorithm consumes. Efficient algorithms lead to better resource utilization, making programs run faster and use less memory.
- Scaling Behavior: Time complexity provides insights into how an algorithm will behave as the input size grows. Algorithms with lower time complexity are more scalable, meaning they can handle larger datasets without a substantial increase in running time.
- Comparing Algorithms: Time complexity allows for a systematic way to compare the efficiency of different algorithms for the same problem. This is especially important when choosing between algorithms with different approaches or strategies.
- Optimization: Understanding the time complexity of an algorithm can guide optimization efforts. It helps identify critical parts of the code that may need improvement and provides a roadmap for making the algorithm more efficient.
- Algorithm Design: When designing algorithms, considering time complexity helps in making informed decisions about the trade-offs between time and space efficiency. It encourages the development of algorithms that strike a balance between these factors.
In summary, time complexity analysis is a fundamental aspect
of algorithm design and analysis. It allows developers and researchers to make
informed decisions about algorithm selection, optimization, and resource
allocation, ultimately leading to more efficient and scalable software systems.
Understanding Time Complexity:
Time complexity is a fundamental concept in computer science that enables the evaluation of an algorithm's efficiency by examining the time required for execution in relation to the input size. This assessment allows us to predict how long an algorithm will take to complete for varying input sizes.

Time complexity is typically expressed using Big O notation, which indicates the maximum growth rate of an algorithm. This notation succinctly conveys how an algorithm's performance changes as the input size increases. For example, O(1) indicates constant time, O(log n) represents logarithmic time, O(n) signifies linear time, O(n log n) suggests linearithmic time, and O(n^2) denotes quadratic time, among others.

For example, if an algorithm has a time complexity of O(n), the time taken for execution increases linearly with the input size: if the input size is doubled, the execution time will also roughly double. Conversely, an algorithm with a time complexity of O(log n) demonstrates a more efficient growth pattern, where the execution time rises only logarithmically as the input size increases.

Evaluating the time complexity of an algorithm is essential for comparing and choosing algorithms based on their efficiency in addressing particular problems. It also offers important insights into how an algorithm's performance evolves as the input size expands.
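The growth rates above can be made concrete by counting operations directly. The sketch below is illustrative code, not from the source (the class and method names are invented): it compares the probes a linear scan makes against those of a binary search on the same sorted array.

```java
public class GrowthDemo {
    // Count the probes a linear scan makes before finding the target: O(n).
    static int linearProbes(int[] a, int target) {
        int probes = 0;
        for (int v : a) {
            probes++;
            if (v == target) break;
        }
        return probes;
    }

    // Count the probes a binary search makes on a sorted array: O(log n).
    static int binaryProbes(int[] a, int target) {
        int lo = 0, hi = a.length - 1, probes = 0;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            probes++;
            if (a[mid] == target) break;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return probes;
    }

    public static void main(String[] args) {
        int[] a = new int[1024];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(linearProbes(a, 1023)); // 1024 probes: linear in n
        System.out.println(binaryProbes(a, 1023)); // 11 probes: logarithmic in n
    }
}
```

Doubling the array size doubles the linear count but adds only one probe to the binary count, which is exactly the O(n) versus O(log n) distinction described above.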
Time complexity in daily life:
While you may not explicitly think about
time complexity in your daily activities, the concept influences the
performance of the software and applications you use regularly. Here are a few
examples of how time complexity impacts your daily life:
- Web Browsing: When you load a webpage, the efficiency of the algorithms used to render and display content affects how quickly the page appears. Efficient algorithms with good time complexity contribute to a faster browsing experience.
- Search Engines: When you perform a search, the search engine's algorithms need to sift through vast amounts of data to provide relevant results quickly. Algorithms with lower time complexity allow search engines to return results faster.
- Smartphone Apps: Mobile applications that load quickly and respond promptly to user inputs often rely on algorithms with favorable time complexity. This ensures a smooth and responsive user experience.
- GPS Navigation: Navigation applications use algorithms to calculate the fastest route based on real-time traffic data. Efficient algorithms contribute to quicker route calculations, helping you reach your destination faster.
- Social Media Feeds: Algorithms used by social media platforms to curate and display content on your feed impact how quickly you see updates and new posts. Algorithms with good time complexity ensure a timely and dynamic feed.
- Online Shopping: When you search for products or browse through items on an e-commerce platform, the efficiency of algorithms affects how quickly the platform can present relevant results and recommendations.
- Data Processing in Productivity Tools: Tools like spreadsheets, word processors, and other productivity software utilize algorithms for various tasks. The efficiency of these algorithms influences how quickly the software performs operations like sorting, searching, and data manipulation.
While users may not interact directly with
the underlying algorithms, the time complexity of these algorithms plays a
crucial role in determining the speed and responsiveness of the software and
applications that are an integral part of daily life. Efficient algorithms
contribute to a more seamless and enjoyable user experience.
Time complexity of some Java programs:
Program No: 1
    public static void method(int n) {
        int i = 1, s = 1;
        while (s < n) {
            s = s + i;
            i++;
        }
    }
The time complexity of the given method is O(√n).
Explanation:
- The method uses a while loop that iterates as long as s < n.
- In each iteration, s is incremented by i, which starts from 1 and increases by 1 in every loop iteration.
- The sum s increases according to the series 1 + 2 + 3 + ... + i, which forms a triangular number sequence.
The total sum s after i iterations is approximately: \( s = \frac{i(i+1)}{2} \)
The loop stops when s ≥ n, which gives us: \( \frac{i(i+1)}{2} ≥ n \)
For large i, this simplifies to: \( \frac{i^{2}}{2} ≈ n \implies i^{2} ≈ 2n \implies i ≈ \sqrt{2n} \)
Thus, the number of iterations (which depends on i) grows as O(√n). Therefore, the time complexity of the method is O(√n).
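The √(2n) estimate can be checked empirically by instrumenting the same loop with a counter (an illustrative sketch; the class and method names are invented):

```java
public class SqrtLoopCount {
    // The loop from Program 1, instrumented to count its iterations.
    static int iterations(int n) {
        int i = 1, s = 1, count = 0;
        while (s < n) {
            s = s + i;
            i++;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // sqrt(2 * 10000) ≈ 141.4, and the loop indeed runs 141 times.
        System.out.println(iterations(10_000)); // 141
    }
}
```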
Program No: 2
    public static void method(int n) {
        if (n < 5)
            System.out.print("KamicalMandal");
        else {
            for (int i = 0; i < n; i++) {
                System.out.print(i + " ");
            }
        }
    }
The time complexity of the given method is O(n).
Explanation:
The method has two parts:
1. Base case: If n < 5, it prints "KamicalMandal". This operation is constant and does not depend on n. Hence, its time complexity is O(1).
2. Loop case: If n ≥ 5, it runs a for loop that iterates n times. In each iteration, it prints the current value of i. Printing a value is an O(1) operation, and since the loop runs n times, the time complexity of this part is O(n).
Thus, the overall time complexity depends on the larger part, which is
the loop, and hence the time complexity of the method is O(n).
Program No: 3
    public static void method(int a, int b) {
        while (a != b) {
            if (a > b)
                a = a - b;
            else
                b = b - a;
        }
    }
The given method implements the subtraction-based Euclidean algorithm
to compute the greatest common divisor (GCD) of two numbers a and b. The
time complexity of this algorithm is O(max(a, b)) in the worst case.
However, for the more efficient division-based Euclidean algorithm, the
complexity is O(log(min(a, b))).
Detailed Explanation:
The method reduces the value of a and b through subtraction until both
numbers become equal, and the result is their GCD.
Example:
Let's consider the inputs a = 15 and b = 10:
1. Initial values: a = 15, b = 10
2. a > b → a = a - b = 15 - 10 = 5
3. New values: a = 5, b = 10
4. b > a → b = b - a = 10 - 5 = 5
5. New values: a = 5, b = 5
At this point, a == b, and the loop terminates. The GCD is 5.
Worst-Case Example:
The worst case occurs when the numbers are consecutive Fibonacci numbers.
For example, take a = 21 and b = 13:
1. Initial values: a = 21, b = 13
2. a > b → a = a - b = 21 - 13 = 8
3. New values: a = 8, b = 13
4. b > a → b = b - a = 13 - 8 = 5
5. New values: a = 8, b = 5
6. a > b → a = a - b = 8 - 5 = 3
7. New values: a = 3, b = 5
8. b > a → b = b - a = 5 - 3 = 2
9. New values: a = 3, b = 2
10. a > b → a = a - b = 3 - 2 = 1
11. New values: a = 1, b = 2
12. b > a → b = b - a = 2 - 1 = 1
13. New values: a = 1, b = 1
The loop terminates when a == b, and the GCD is 1.
Time Complexity Analysis:
In the worst case, where the numbers are consecutive Fibonacci numbers,
the number of iterations is proportional to the size of the numbers,
specifically, O(max(a, b)). This is because each subtraction can reduce
one number by a relatively small amount, which results in a large number of
iterations.
For general cases, a more efficient version of the Euclidean algorithm
(using modulo instead of subtraction) has a time complexity of O(log(min(a,
b))). However, for this subtraction-based method, the complexity is O(max(a,
b)).
Conclusion:
- Time complexity (worst case): O(max(a, b))
- Example: For a = 21 and b = 13, the loop runs 6 times before termination.
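The gap between the subtraction-based and the division-based variants can be measured by counting loop iterations in both (an illustrative sketch; the class and method names are invented):

```java
public class GcdSteps {
    // The subtraction-based Euclid from Program 3, instrumented.
    static int subtractionSteps(int a, int b) {
        int steps = 0;
        while (a != b) {
            if (a > b) a = a - b;
            else b = b - a;
            steps++;
        }
        return steps;
    }

    // The modulo-based Euclid for comparison: O(log(min(a, b))) steps.
    static int moduloSteps(int a, int b) {
        int steps = 0;
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        // Fibonacci pair from the worked example: 6 subtraction steps.
        System.out.println(subtractionSteps(21, 13)); // 6
        // Extreme case: gcd(1000, 1) needs 999 subtractions but a single modulo step.
        System.out.println(subtractionSteps(1000, 1)); // 999
        System.out.println(moduloSteps(1000, 1));      // 1
    }
}
```

The gcd(1000, 1) case shows why the subtraction variant is O(max(a, b)): each step may shrink the larger number by only 1.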
Program No: 4
    public static void method(int n) {
        for (int i = 0; i * i < n; i++)
            System.out.print("KamicalMandal");
    }
The time complexity of the given method is O(√n).
Explanation:
The method involves a for loop that iterates as long as \( i^{2} < n \). The loop variable i starts at 0 and increments by 1 in each iteration. The condition \( i^{2} < n \) implies that the loop will continue until \( i ≈ \sqrt{n} \).
- In each iteration, the statement System.out.print("KamicalMandal"); is executed, which takes O(1) time.
- The loop runs from i = 0 to \( i ≈ \sqrt{n} \), so the number of iterations is approximately \( \sqrt{n} \).
Time Complexity:
- The loop condition is \( i^{2} < n \), and i grows by 1 each step, so the loop runs O(√n) times.
- Since each iteration involves a constant-time operation (printing), the overall time complexity is O(√n).
Example:
Let's consider an example where n = 16:
1. The loop starts with i = 0: \( i^{2} = 0 \), so the condition \( 0 < 16 \) holds. It prints "KamicalMandal".
2. Next, i = 1: \( i^{2} = 1 \), so the condition \( 1 < 16 \) holds. It prints "KamicalMandal".
3. Then, i = 2: \( i^{2} = 4 \), so the condition \( 4 < 16 \) holds. It prints "KamicalMandal".
4. Then, i = 3: \( i^{2} = 9 \), so the condition \( 9 < 16 \) holds. It prints "KamicalMandal".
5. Finally, i = 4: \( i^{2} = 16 \), so the condition \( 16 < 16 \) no longer holds, and the loop terminates.
The loop runs for i = 0, 1, 2, 3, which is 4 iterations. This aligns with the expected behavior, as \( \sqrt{16} = 4 \).
Conclusion:
- Time complexity: O(√n)
- Example: For n = 16, the loop runs 4 times (i = 0, 1, 2, 3).
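Counting iterations directly confirms the ⌈√n⌉ behavior (an illustrative sketch; the class and method names are invented):

```java
public class SquareRootLoop {
    // Program 4's loop, instrumented to count how many times it prints.
    static int iterations(int n) {
        int count = 0;
        for (int i = 0; i * i < n; i++)
            count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(16));  // 4  (i = 0, 1, 2, 3)
        System.out.println(iterations(100)); // 10 (i = 0 .. 9)
    }
}
```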
Program No: 5
    public static void method(int n) {
        for (int i = 0; i < n / 2; i++)
            for (int j = 1; j + n / 2 <= n; j++)
                for (int k = 1; k <= n; k = k * 2)
                    System.out.print("KamicalMandal");
    }
Let's break down the three nested loops to determine the overall time
complexity.
1. Outer loop (variable i): runs from i = 0 to i < n / 2, which means it iterates n / 2 times. This loop's time complexity is O(n / 2), which simplifies to O(n).
2. Middle loop (variable j): runs from j = 1 while j + n / 2 <= n. This condition simplifies to \( j ≤ \frac{n}{2} \), so j iterates n / 2 times. This loop's time complexity is O(n / 2), which simplifies to O(n).
3. Inner loop (variable k): runs from k = 1 and doubles k (k = k * 2) in each iteration, so k takes values 1, 2, 4, 8, ..., up to n. The number of iterations in this loop is logarithmic, specifically O(log n), because the value of k doubles each time.
Total Time Complexity:
The three loops are nested, so the total time complexity is the product
of their individual complexities:
- Outer loop: O(n)
- Middle loop: O(n)
- Inner loop: O(log n)
Thus, the overall time complexity is: \( O(n) × O(n) × O(\log n) = O(n^{2}\log n) \)
Example:
Let's consider the case where n = 8:
1. Outer loop: i runs from 0 to 3, so it runs 4 times (since n / 2 = 4).
2. Middle loop: for each value of i, j runs from 1 to 4, so it runs 4 times.
3. Inner loop: for each combination of i and j, k takes the values 1, 2, 4, 8, so it runs 4 times (\( \lfloor \log_{2} 8 \rfloor + 1 = 4 \)).
Thus, the total number of times "KamicalMandal" is printed is: \( 4 × 4 × 4 = 64 \)
Conclusion:
- Time complexity: O(n² log n)
- Example: For n = 8, the method prints "KamicalMandal" 64 times.
Program No: 6
    Case 1:
    for (int i = 1; i < n; i = i * x)
        System.out.print("KamicalMandal");

    Case 2:
    for (int i = n; i >= 1; i = i / x)
        System.out.print("KamicalMandal");
The loop in the method increments (or decrements) the variable i by
multiplying (or dividing) it by x in each iteration. This results in a logarithmic
progression, making the time complexity dependent on how fast i reaches the
boundary conditions (either exceeding n or falling below 1).
Case 1: for (int i = 1; i < n; i = i * x)
- Here, i starts at 1 and is multiplied by x in each iteration.
- The loop runs as long as i < n.
- After each iteration, i becomes i * x, so the values of i follow a geometric progression: \( 1, x, x^{2}, x^{3}, … \)
The number of iterations is determined by how many times you need to multiply 1 by x to reach n. That happens when \( x^{k} ≥ n \), where k is the number of iterations.
Thus, solving for k: \( x^{k} ≥ n \implies k ≥ \log_{x}(n) \)
Therefore, the time complexity for this case is O(logₓ n).
Case 2: for (int i = n; i >= 1; i = i / x)
- Here, i starts at n and is divided by x in each iteration.
- The loop runs as long as i ≥ 1. The values of i again follow a geometric progression: \( n, \frac{n}{x}, \frac{n}{x^{2}}, \frac{n}{x^{3}}, … \)
The number of iterations is determined by how many times you can divide n by x until the value drops below 1. This happens when \( \frac{n}{x^{k}} ≤ 1 \), where k is the number of iterations.
Thus, solving for k: \( \frac{n}{x^{k}} ≤ 1 \implies n ≤ x^{k} \implies k ≥ \log_{x}(n) \)
Therefore, the time complexity for this case is also O(logₓ n).
Example:
Let's take an example with n = 16 and x = 2 for both cases.
Case 1: i = 1; i < n; i = i * 2
- Iteration 1: i = 1
- Iteration 2: i = 2
- Iteration 3: i = 4
- Iteration 4: i = 8
- The next value, i = 16, fails the condition i < n, so the loop stops.
The loop runs 4 times, printing "KamicalMandal" 4 times.
Case 2: i = n; i >= 1; i = i / 2
- Iteration 1: i = 16
- Iteration 2: i = 8
- Iteration 3: i = 4
- Iteration 4: i = 2
- Iteration 5: i = 1
- The next value, i = 0, fails the condition i >= 1, so the loop stops.
The loop runs 5 times, printing "KamicalMandal" 5 times.
Conclusion:
- Time complexity for both cases: O(logₓ n).
- Example: For n = 16 and x = 2, the method prints "KamicalMandal" 4 times in Case 1 and 5 times in Case 2 (the counts differ by one because Case 1 uses the strict bound i < n).
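Both cases can be checked by counting iterations (an illustrative sketch; the class and method names are invented):

```java
public class GeometricLoops {
    // Case 1: i is multiplied by x until it reaches n.
    static int growSteps(int n, int x) {
        int count = 0;
        for (int i = 1; i < n; i = i * x)
            count++;
        return count;
    }

    // Case 2: i is divided by x until it drops below 1.
    static int shrinkSteps(int n, int x) {
        int count = 0;
        for (int i = n; i >= 1; i = i / x)
            count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(growSteps(16, 2));   // 4: i = 1, 2, 4, 8
        System.out.println(shrinkSteps(16, 2)); // 5: i = 16, 8, 4, 2, 1
    }
}
```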
Program No: 7
The method body is elided in the source; based on the analysis below (an inner loop over j whose step size is i), it is along the lines of:
    public static void method(int n) {
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j = j + i)
                System.out.print("KamicalMandal");
    }
The method contains two nested loops:
1. Outer loop (variable i): runs from i = 1 to i <= n, so it iterates n times. Time complexity of the outer loop is O(n).
2. Inner loop (variable j): runs from j = 1 to j <= n, but the increment is i in each iteration. This means the number of iterations of the inner loop depends on i: for a given i, it is approximately \( \frac{n}{i} \), since j is incremented by i each time.
Total Time Complexity:
The total number of iterations across both loops is the sum of the inner-loop iterations over each value of i:
\( \sum_{i=1}^{n}\frac{n}{i} = n\sum_{i=1}^{n}\frac{1}{i} = n H_{n} \)
where \( H_{n} \) is the n-th harmonic number. The harmonic number grows logarithmically, \( H_{n} ≈ \ln n = O(\log n) \).
Thus, the total time complexity is \( O(n\log n) \).
Example:
Let's take an example with n = 5.
1. Outer loop (i = 1): inner loop runs for j = 1, 2, 3, 4, 5 — 5 times.
2. Outer loop (i = 2): inner loop runs for j = 1, 3, 5 — 3 times.
3. Outer loop (i = 3): inner loop runs for j = 1, 4 — 2 times.
4. Outer loop (i = 4): inner loop runs for j = 1, 5 — 2 times.
5. Outer loop (i = 5): inner loop runs for j = 1 — 1 time.
Total iterations of the inner loop: 5+3+2+2+1=13.
Conclusion:
- Time complexity: \( O(n\log n) \).
- Example: For n = 5, the inner loop body executes 13 times. For larger n, the total iterations grow as \( O(n\log n) \).
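The harmonic-sum count can be reproduced directly, assuming the elided loop body matches the analysis above (an illustrative sketch; the class and method names are invented):

```java
public class HarmonicLoop {
    // Nested loops matching the analysis of Program 7: inner step size is i.
    static int prints(int n) {
        int count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j = j + i)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(prints(5)); // 5 + 3 + 2 + 2 + 1 = 13
    }
}
```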
Program No: 8
    public static void method(int n) {
        for (int i = 0; i <= n / 3; i++)
            for (int j = 1; j <= n; j = j + 4)
                System.out.print("KamicalMandal");
    }
The method consists of two nested loops:
1. Outer loop (variable i): runs from i = 0 to i <= n / 3, which means it iterates \( \frac{n}{3} + 1 \) times. The time complexity of the outer loop is O(n / 3), which simplifies to O(n).
2. Inner loop (variable j): runs from j = 1 to j <= n, with an increment of 4 in each iteration (j = j + 4). The number of iterations is approximately \( \frac{n}{4} \), since j is incremented by 4 in each step. The time complexity of the inner loop is O(n / 4), which simplifies to O(n).
Total Time Complexity:
The total time complexity is the product of the complexities of the outer
and inner loops:
- Outer loop: O(n)
- Inner loop: O(n / 4)
Thus, the overall time complexity is: \( O(n) × O(\frac{n}{4}) = O(\frac{n^{2}}{4}) \)
Since constants are ignored in Big-O notation, this simplifies to O(n²).
Example:
Let's take an example with n = 12.
1. Outer loop (i = 0 to i <= 4): the outer loop runs 5 times (since n / 3 = 4).
2. Inner loop (j = 1 to j <= 12): the inner loop increments j by 4, so it runs 3 times (for j = 1, 5, 9; the next value, 13, exceeds 12).
Thus, for each of the 5 iterations of the outer loop, the inner loop runs 3 times, leading to a total of: \( 5 × 3 = 15 \) prints.
Conclusion:
- Time complexity: O(n²).
- Example: For n = 12, the method prints "KamicalMandal" 15 times. For larger values of n, the total iterations grow quadratically as \( n^{2} \).
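Instrumenting the loops confirms the count for n = 12 (an illustrative sketch; the class and method names are invented):

```java
public class QuarterStepLoops {
    // Program 8's loops, instrumented to count prints.
    static int prints(int n) {
        int count = 0;
        for (int i = 0; i <= n / 3; i++)
            for (int j = 1; j <= n; j = j + 4)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(prints(12)); // 5 outer * 3 inner = 15
    }
}
```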
Program No: 9
    public static void method(int n) {
        int i = 1;
        while (i < n) {
            int j = n;
            while (j > 0) {
                j = j / 2;
            }
            i = i * 2;
        }
    }
The method contains two nested while loops:
1. Outer loop (variable i): starts with i = 1 and doubles i each iteration (i = i * 2), continuing while i < n. The number of iterations is determined by how many times you can double i before it reaches n, which is approximately \( O(\log_{2} n) \).
2. Inner loop (variable j): for each iteration of the outer loop, the inner loop starts with j = n and halves j each iteration (j = j / 2), continuing while j > 0. The number of iterations is determined by how many times you can divide j by 2 before it becomes 0, which is approximately \( O(\log_{2} n) \).
Total Time Complexity:
The total time complexity is the product of the complexities of the outer
and inner loops:
- Outer loop: O(log n)
- Inner loop: O(log n)
Thus, the overall time complexity is: \( O(\log_{2} n) × O(\log_{2} n) = O((\log_{2} n)^{2}) \)
Example:
Let's consider n = 16:
1. Outer loop: i takes the values 1, 2, 4, 8; when i reaches 16, the condition i < n fails, so the outer loop runs 4 times.
2. Inner loop: for each value of i, j starts at 16 and is halved each iteration: 16, 8, 4, 2, 1 — 5 iterations before j reaches 0.
So, for each of the 4 iterations of the outer loop, the inner loop runs 5 times.
Total operations: 4 × 5 = 20
Conclusion:
- Time complexity: O((log n)²)
- Example: For n = 16, the method performs 20 inner-loop operations. For larger values of n, the number of operations grows as \( O((\log_{2} n)^{2}) \).
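Counting the inner-loop halvings verifies the total for n = 16 (an illustrative sketch; the class and method names are invented):

```java
public class LogSquaredLoops {
    // Program 9's loops, instrumented to count inner-loop operations.
    static int operations(int n) {
        int count = 0;
        int i = 1;
        while (i < n) {
            int j = n;
            while (j > 0) {
                j = j / 2;
                count++;
            }
            i = i * 2;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(operations(16)); // 4 outer * 5 inner = 20
    }
}
```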
Program No: 10
    public static void method(int n) {
        int j = 1;
        while (j <= n) {
            j = j * 2;
        }
    }
The method contains a single while loop:
1. Loop (variable j): starts with j = 1 and doubles j in each iteration (j = j * 2), continuing while j <= n. The number of iterations is determined by how many times you can double j before it exceeds n, which is approximately \( O(\log_{2} n) \).
Total Time Complexity:
The time complexity of the loop is O(log n), where the base of the
logarithm is 2.
Example:
Let's consider n = 16. The loop body executes while j = 1, 2, 4, 8, 16; after the fifth iteration j becomes 32, which exceeds n, and the loop stops.
The loop runs 5 times in total before j exceeds n.
Conclusion:
- Time complexity: O(log n)
- Example: For n = 16, the method performs 5 iterations before terminating. For larger values of n, the number of iterations grows logarithmically as \( O(\log_{2} n) \).
Program No: 11
    public static void method() {
        int i, n = 8;
        for (i = 2; i <= n; i = (int) Math.pow(i, 2)) {
            System.out.println("KamicalMandal !!!");
        }
    }
In this method, the for loop's variable i is updated using i =
(int)Math.pow(i, 2) in each iteration. This means i is squared every time,
leading to an exponential growth of i.
Detailed Analysis:
1. Initialization: i starts at 2.
2. Loop Condition: the loop continues as long as i <= n, where n = 8.
3. Update Step: in each iteration, i is updated to i² (i.e., i is squared).
The value of i grows doubly exponentially:
- Iteration 1: i = 2
- Iteration 2: i = 2² = 4
- After that, i = 4² = 16, which is greater than n, so the loop stops.
The loop therefore runs only 2 times for n = 8.
Time Complexity:
Since i is squared in each iteration, after k iterations \( i = 2^{2^{k}} \). The loop stops when \( 2^{2^{k}} > n \), which happens after about \( \log_{2}(\log_{2} n) \) iterations.
Thus, the time complexity is O(log log n).
Example:
For n = 8:
1. Iteration 1: i = 2, prints "KamicalMandal !!!".
2. Iteration 2: i = 4, prints "KamicalMandal !!!".
3. The next value, i = 16, exceeds 8, so the loop stops.
The loop runs 2 times in total, printing "KamicalMandal !!!" twice.
Conclusion:
- Time complexity: O(log log n)
- Example: For n = 8, the method prints "KamicalMandal !!!" 2 times. For larger values of n, the number of iterations grows as \( O(\log_{2}(\log_{2} n)) \).
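The doubly-exponential growth of i is easy to observe with a counter; the sketch below (illustrative; the class and method names are invented, and i * i replaces (int) Math.pow(i, 2), which is equivalent for small integers):

```java
public class LogLogLoop {
    // Program 11's loop generalized to an argument n, counting prints.
    static int prints(int n) {
        int count = 0;
        for (int i = 2; i <= n; i = i * i)
            count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(prints(8));   // 2: i = 2, 4 (16 > 8)
        System.out.println(prints(256)); // 4: i = 2, 4, 16, 256
    }
}
```

Going from n = 8 to n = 256 multiplies the bound by 32 yet adds only two iterations, which is the O(log log n) signature.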
Program No: 12
    public static int method(int[] A, int n) {
        int sum = 0;                  // Cost = 1, executed 1 time
        for (int i = 0; i < n; i++) { // Cost = 2, executed n+1 times (+1 for the final false condition)
            sum = sum + A[i];         // Cost = 2, executed n times
        }
        return sum;                   // Cost = 1, executed 1 time
    }
Let's break down the method to determine its time complexity.
Method Breakdown:
1. Initialization: int sum = 0; is constant time, executed once. Cost: O(1).
2. Loop: for (int i = 0; i < n; i++) runs from i = 0 to i < n, iterating n times. Inside the loop, sum = sum + A[i]; is performed n times, each time taking constant time. Cost per iteration: O(1); total cost for the loop: O(n).
3. Return Statement: return sum; is constant time, executed once. Cost: O(1).
Total Time Complexity:
Combining all parts:
- Initialization: O(1)
- Loop: O(n)
- Return Statement: O(1)
Thus, the overall time complexity of the method is:
O(1)+O(n)+O(1)=O(n)
Example:
Let's use an example where the array A has n = 5 elements: [1, 2, 3, 4, 5].
1. Initialization: sum is set to 0.
2. Loop Iterations:
- Iteration 1: i = 0, sum = 0 + A[0] = 1
- Iteration 2: i = 1, sum = 1 + A[1] = 3
- Iteration 3: i = 2, sum = 3 + A[2] = 6
- Iteration 4: i = 3, sum = 6 + A[3] = 10
- Iteration 5: i = 4, sum = 10 + A[4] = 15
The loop executes 5 times.
3. Return Statement: returns sum = 15.
Conclusion:
- Time complexity: O(n)
- Example: For an array of length 5, the method computes the sum of its elements in 5 iterations; the running time is linear in the size of the array.
Program No: 13
    public static void method() {
        int n = 3;
        int m = 3;
        int arr[][] = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
        int sum = 0;
        // Iterating over all 1-D arrays in the 2-D array
        for (int i = 0; i < n; i++) {
            // Summing all elements of the ith row
            for (int j = 0; j < m; j++) {
                sum += arr[i][j];
            }
        }
        System.out.println(sum);
    }
The method performs operations on a 2-dimensional array. Let's break down
its complexity:
1. Initialization: int n = 3;, int m = 3;, and the array declaration are constant-time operations, executed once. Cost: O(1).
2. Nested Loops: the outer loop runs from i = 0 to i < n, iterating n times. For each iteration of the outer loop, the inner loop runs from j = 0 to j < m, iterating m times. Each element of the 2D array is accessed once and added to sum.
The total number of operations in the nested loops is proportional to the number of elements in the 2D array, which is n × m.
- Time complexity of the outer loop: O(n)
- Time complexity of the inner loop: O(m)
Thus, the total time complexity for the nested loops is: \( O(n) × O(m) = O(n × m) \)
3. Print Statement: System.out.println(sum); is constant time, executed once. Cost: O(1).
Total Time Complexity:
Combining all parts:
- Initialization: O(1)
- Nested Loops: O(n × m)
- Print Statement: O(1)
The overall time complexity is: O(1) + O(n × m) + O(1) = O(n × m)
Example:
Given the 2D array:
    int arr[][] = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
- n = 3 (number of rows)
- m = 3 (number of columns)
1. Outer Loop Iterations:
- i = 0: inner loop runs for j = 0 to 2, adding 3, 2, 7 to sum.
- i = 1: inner loop runs for j = 0 to 2, adding 2, 6, 8 to sum.
- i = 2: inner loop runs for j = 0 to 2, adding 5, 1, 9 to sum.
Total sum = 3 + 2 + 7 + 2 + 6 + 8 + 5 + 1 + 9 = 43.
2. Output: prints 43.
Conclusion:
- Time complexity: O(n × m)
- Example: For a 3×3 array, the method calculates the sum of all elements, which is 43. The time complexity grows linearly with the number of elements in the 2D array.
Time complexity of some critical programs:
The recurrence relation \( T(n) = 3T(n/2) + \log_{2} n \) suggests a divide-and-conquer algorithm in which the problem is divided into three subproblems, each of size n/2, with an additional cost of \( \log_{2} n \) per step.
A well-known algorithm with a similar structure is the Karatsuba algorithm for fast multiplication of large integers. The standard recurrence for Karatsuba is \( T(n) = 3T(n/2) + O(n) \), where O(n) represents the linear cost of splitting and combining the results; its solution is \( O(n^{\log_{2} 3}) ≈ O(n^{1.585}) \).
Below is a simplified Java implementation of the Karatsuba algorithm:
import java.math.BigInteger;

public class KaratsubaMultiplication {

    public static BigInteger karatsubaMultiply(BigInteger x, BigInteger y) {
        // Base case: small operands are multiplied directly
        if (x.bitLength() <= 2 || y.bitLength() <= 2) {
            return x.multiply(y);
        }

        int n = Math.max(x.bitLength(), y.bitLength());
        int m = (n + 1) / 2;

        // Divide x and y into two halves
        BigInteger xHigh = x.shiftRight(m);
        BigInteger xLow = x.subtract(xHigh.shiftLeft(m));
        BigInteger yHigh = y.shiftRight(m);
        BigInteger yLow = y.subtract(yHigh.shiftLeft(m));

        // Three recursive multiplications: the recurrence T(n) = 3T(n/2) + O(n)
        BigInteger term1 = karatsubaMultiply(xHigh, yHigh);
        BigInteger term2 = karatsubaMultiply(xLow.add(xHigh), yLow.add(yHigh));
        BigInteger term3 = karatsubaMultiply(xLow, yLow);

        // Combine the results (linear-time shifts and additions)
        return term1.shiftLeft(2 * m)
                .add(term2.subtract(term1).subtract(term3).shiftLeft(m))
                .add(term3);
    }

    public static void main(String[] args) {
        // Example usage
        BigInteger num1 = new BigInteger("12345678901234567890");
        BigInteger num2 = new BigInteger("98765432109876543210");
        BigInteger result = karatsubaMultiply(num1, num2);
        System.out.println("Result: " + result);
    }
}
Fibonacci series:
fib(n) = fib(n-1) + fib(n-2) → for n > 1
fib(n) = 1 → for n = 0, 1
Fibonacci can be solved iteratively as well as recursively.
Fibonacci can be solved iteratively
as well as recursively
Recursive approach:
The recursive approach seems to be
much simpler and smaller, but there is a caveat, as it is calculating the
Fibonacci of a number multiple times.
Time Complexity:
The time complexity of the
iterative code is linear, as the loop runs from 2 to n, i.e. it runs in O(n)
time
Calculating the time complexity of the recursive approach is not so straightforward, so let's dive in:
fib(n):
    if n <= 1
        return 1
    return fib(n - 1) + fib(n - 2)
for n > 1:
T(n) = T(n-1) + T(n-2) + 4   (1 comparison, 2 subtractions, 1 addition)
Let's say c = 4 and first establish a lower bound by approximating T(n-1) ~ T(n-2); since T(n-1) ≥ T(n-2), this gives a lower bound.
T(n) = T(n-1) + T(n-2) + c
     = 2T(n-2) + c            // from the approximation T(n-1) ~ T(n-2)
     = 2*(2T(n-4) + c) + c
     = 4T(n-4) + 3c
     = 8T(n-6) + 7c
     = 2^k * T(n - 2k) + (2^k - 1)*c
Let's find the value of k for which n - 2k = 0, i.e. k = n/2:
T(n) = 2^(n/2) * T(0) + (2^(n/2) - 1)*c
     = 2^(n/2) * (1 + c) - c
i.e. T(n) ~ 2^(n/2)
Now for the upper bound, we can approximate T(n-2) ~ T(n-1), as T(n-2) ≤ T(n-1):
T(n) = T(n-1) + T(n-2) + c
     = 2T(n-1) + c            // from the approximation T(n-2) ~ T(n-1)
     = 2*(2T(n-2) + c) + c
     = 4T(n-2) + 3c
     = 8T(n-3) + 7c
     = 2^k * T(n - k) + (2^k - 1)*c
Let's find the value of k for which n - k = 0, i.e. k = n:
T(n) = 2^n * T(0) + (2^n - 1)*c
     = 2^n * (1 + c) - c
i.e. T(n) ~ 2^n
Hence the time taken by recursive Fibonacci is O(2^n), i.e. exponential.
Space Complexity:
For the iterative approach, the amount of space required is the same for fib(6) and fib(100): as n changes, the space/memory used remains the same. Hence its space complexity is O(1), or constant.
For Fibonacci recursive
implementation or any recursive algorithm, the space required is proportional
to the maximum depth of the recursion tree, because, that is the maximum number
of elements that can be present in the implicit function call stack.
Below is a diagrammatic representation of the Fibonacci recursion tree for fib(6):
Fig 1: Fibonacci recursion tree for fib(6)
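The contrast between the two approaches can be demonstrated by counting the calls the naive recursion makes (an illustrative sketch; the class and method names are invented; fib(0) = fib(1) = 1 as defined above):

```java
public class FibDemo {
    // Iterative Fibonacci: O(n) time, O(1) space.
    static long fibIterative(int n) {
        long a = 1, b = 1; // fib(0) = fib(1) = 1
        for (int i = 2; i <= n; i++) {
            long t = a + b;
            a = b;
            b = t;
        }
        return b;
    }

    // Count the calls the naive recursion makes, to expose the blow-up.
    static long calls(int n) {
        if (n <= 1) return 1;
        return 1 + calls(n - 1) + calls(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fibIterative(6)); // 13
        System.out.println(calls(6));        // 25 calls for fib(6)
        System.out.println(calls(30));       // over 2.6 million calls
    }
}
```

The call count roughly doubles every two steps of n, matching the exponential bound derived above.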
Time complexity of recurrence programs using the master method:
\( T(n) = aT(\frac{n}{b}) + \Theta(n^{k}\log^{p} n) \)
where
n = size of the problem
a = number of subproblems in the recursion, a >= 1
n/b = size of each subproblem
b > 1, k >= 0 and p is a real number.
Then,
1. if \( a > b^{k} \), then \( T(n) = \Theta(n^{\log_{b} a}) \)
2. if \( a = b^{k} \), then
(a) if p > -1, then \( T(n) = \Theta(n^{\log_{b} a}\log^{p+1} n) \)
(b) if p = -1, then \( T(n) = \Theta(n^{\log_{b} a}\log\log n) \)
(c) if p < -1, then \( T(n) = \Theta(n^{\log_{b} a}) \)
3. if \( a < b^{k} \), then
(a) if p >= 0, then \( T(n) = \Theta(n^{k}\log^{p} n) \)
(b) if p < 0, then \( T(n) = \Theta(n^{k}) \)
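As a worked check of the cases above (an illustrative example, not from the source), consider two standard recurrences:

```latex
% Merge sort: T(n) = 2T(n/2) + \Theta(n)
% a = 2,\; b = 2,\; k = 1,\; p = 0 \;\Rightarrow\; a = b^{k} \text{ and } p > -1 \quad \text{(case 2a)}
% T(n) = \Theta\!\left(n^{\log_{2} 2}\log^{0+1} n\right) = \Theta(n \log n)
%
% Karatsuba: T(n) = 3T(n/2) + \Theta(n)
% a = 3,\; b = 2,\; k = 1,\; p = 0 \;\Rightarrow\; a > b^{k} \quad \text{(case 1)}
% T(n) = \Theta\!\left(n^{\log_{2} 3}\right) \approx \Theta\!\left(n^{1.585}\right)
```

The second line recovers the bound quoted for the Karatsuba implementation earlier in this section.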
Flow Chart: (master method flow chart figure)