Description:

Abstract:

  • The purpose of this chapter is to:
    • Decide which algorithm is the correct one to implement for a given problem.
    • Know the time/space requirements of each algorithm given the use case.
  • Main Findings:
    1. How to: estimate the time required for a program
    2. How to: reduce the running time of a program from days or years to seconds
    3. Consequences of implementing careless recursion
    4. Use Cases: efficient algorithms to raise a number to a power and to compute the greatest common divisor of two numbers
  • Conclusion:

    • How do the main points and findings relate to the thesis?

Introduction

Background Info:

Research Questions:

  • What is the difference between the different growth rates?
  • What is the difference between O, o, Omega, omega, etc?
  • What are legal permutations? What is the difference from normal permutations?
  • How do you calculate Worst-case, average case, and best-case running time?
  • What is "interpolating the running times"?
    • What do you assume when done?
  • Is the Horner's rule Important to this chapter?
  • What are subsequence sum algorithms?
  • What is the Sieve of Eratosthenes? [1]

Existing Related Media:

Body:

Sections/Headings:

Mathematical Background:

  • Asymptotic Notations:
    - Asymptotic notations create bounds on functions; they are very useful for calculating the running time of a program.
    - The Theta notation is the most useful
    Pasted image 20241009191232.png (notation summary table)

    | Symbol | Name      | Bound                 | Information                                                              |
    |--------|-----------|-----------------------|--------------------------------------------------------------------------|
    | O      | Big-oh    | Upper bound           | If the Theta notation is not given, give the upper bound instead         |
    | Ω      | Big-omega | Lower bound           |                                                                          |
    | Θ      | Theta     | Average (tight) bound | The most useful of all the notations; closest to the actual running time |
  • Big-oh Notation Definition:
    • The function $f(n)=O(g(n))$ iff there exist positive constants $c$ and $n_{0}$ such that $f(n)\leq c\,g(n)$ for all $n\geq n_{0}$
    • What does it mean?
      • ex:
        • if we have a function f(n) = 2n+3
        • then f(n) = O(g(n))
          • c·g(n) can be ANYTHING greater than or equal to 2n+3 (for all large enough n)
            • Note: ONLY WRITE A FUNCTION WITH A SINGLE TERM, AND COEFFICIENT
          • Ex: for 2n+3, you can write
            • 10n
              • why? bc 2n+3 <= 10n
                • when n>=1
              • c = coefficient = 10
              • g(n) = n
            • 7n
            • 1000n
            • this works as long as c·g(n) is greater than or equal to 2n+3 (any coefficient c > 2 works with g(n) = n)
            • Tip: If you're struggling to get a value, leave the first term as is and multiply the rest by n.
              • 2n+3 -> 2n+3n -> 5n
              • 2n+3 <= 5n when n>=1
            • Can you write: 2n+3 <= 2n^2+3n^2?
              • YES!
              • 2n+3 <= 5n^2 when n>=1, so 2n+3 = O(n^2) as well
      • Pasted image 20241009191442.png (growth-rate hierarchy)
      • ALL GROWTH RATES ABOVE AND INCLUDING n ARE UPPER BOUNDS (O(n), O(n log n), O(n^2), ...)
      • JUST n gives the average (tight) bound
    • If all these are true:
      • Pasted image 20241009191558.png
      • why would we need to write all of them as the upper bound?
        • Although technically all of them are Big-oh, it's only the one closest to the average (tight) bound that is really useful, which would be O(n) in this case.
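The Big-oh inequalities above can be sanity-checked numerically. This is my own sketch (not from the chapter), using the constants c = 5 and n0 = 1 from the 2n+3 example:

```python
# Check the Big-oh claim: 2n + 3 <= 5n for all n >= 1,
# which witnesses f(n) = O(n) with constants c = 5 and n0 = 1.
def f(n):
    return 2 * n + 3

c, n0 = 5, 1
assert all(f(n) <= c * n for n in range(n0, 10_000))

# n^2 is also a valid but looser upper bound: 2n + 3 <= 5n^2 for n >= 1.
assert all(f(n) <= 5 * n * n for n in range(1, 10_000))
```

A finite loop cannot prove "for all n", but it quickly catches a wrong choice of c or n0.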
  • Omega Notation Definition
    • The function $f(n)=\Omega(g(n))$ iff there exist positive constants $c$ and $n_{0}$ such that $f(n)\geq c\,g(n)$ for all $n\geq n_{0}$
    • What does this mean:
      • ALL GROWTH RATES BELOW AND INCLUDING n ARE LOWER BOUNDS
      • ex:
        • f(n) = 2n+3
          • $2n+3\geq 1\cdot n$ for all $n\geq 1$
          • all these answers are true:
            • f(n) = $\Omega(n)$
            • f(n) = $\Omega(\log n)$
            • f(n) = $\Omega(1)$
          • HOWEVER
            • The closest to the average bound is the most useful, aka f(n) = $\Omega(n)$
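The same kind of numeric sanity check works for the lower bound. This is my own sketch (not from the chapter), using c = 1 and n0 = 1 for the 2n+3 example:

```python
import math

# Omega check: f(n) = 2n + 3 >= 1 * n for all n >= 1, witnessing f(n) = Omega(n).
def f(n):
    return 2 * n + 3

assert all(f(n) >= 1 * n for n in range(1, 10_000))        # Omega(n): the tightest
assert all(f(n) >= math.log(n) for n in range(1, 10_000))  # Omega(log n): looser
assert all(f(n) >= 1 for n in range(1, 10_000))            # Omega(1): loosest
```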
  • Theta Notation Definition:
    • The function $f(n)=\Theta(g(n))$ iff there exist positive constants $c_{1}$, $c_{2}$, and $n_{0}$ such that $c_{1}g(n)\leq f(n)\leq c_{2}g(n)$ for all $n\geq n_{0}$
      • ex:
        • Let's say we have f(n) = 2n+3
        • $c_{1}g(n)$ will be $1\cdot n$
        • $c_{2}g(n)$ will be $5n$
        • so $1\cdot n\leq 2n+3\leq 5n$ for all $n\geq 1$
            • on both sides we have g(n) = n, so theta will be equal to n
            • f(n) = $\Theta(n)$
              • the average (tight) bound of the function
              • You CANNOT say anything else: only Theta of n is correct here
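The two-sided sandwich above can also be checked in one pass. A sketch of my own, with the constants c1 = 1, c2 = 5, n0 = 1 from the example:

```python
# Theta check: with c1 = 1, c2 = 5, n0 = 1 the function 2n + 3 is
# sandwiched between c1*n and c2*n, witnessing f(n) = Theta(n).
def f(n):
    return 2 * n + 3

c1, c2, n0 = 1, 5, 1
assert all(c1 * n <= f(n) <= c2 * n for n in range(n0, 10_000))
```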
  • small-oh Notation Definition
    • T(N)= o(p(N)) if for all positive constants c there exists an n_0 such that T(N) < cp(N) when N > n_0.
      • Basically it's a strict upper bound: take the set of Big-oh answers and remove the tight (average) one, so 2n+3 = o(n^2) but 2n+3 is NOT o(n).
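One way to see the strictness of little-oh is through the ratio T(N)/g(N), which must shrink toward 0 as N grows. My own numeric sketch for T(N) = 2N + 3:

```python
# Little-oh intuition: T(N) = o(g(N)) means T(N)/g(N) -> 0 as N grows.
def T(N):
    return 2 * N + 3

# For g(N) = N^2 the ratio keeps shrinking toward 0, so 2N + 3 = o(N^2).
ratios_sq = [T(N) / N**2 for N in (10, 100, 1000, 10_000)]
assert ratios_sq == sorted(ratios_sq, reverse=True)  # strictly decreasing
assert ratios_sq[-1] < 0.001

# For g(N) = N the ratio approaches 2, not 0, so 2N + 3 is NOT o(N).
ratios_lin = [T(N) / N for N in (10, 100, 1000, 10_000)]
assert min(ratios_lin) > 2
```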
  • Note!
    • DO NOT MIX THIS UP WITH BEST CASE AND WORST CASE ALGORITHM!
      • Not related! You can use any notation when doing the best and worst case analysis

Model

What to Analyze

Running Time Calculations

Evidence/Arguments:

Discussion:

Interpretation:

  • Why is this significant, how does it relate to SE-3345.503's objectives:
    • This chapter allows me to do simple Algorithm Analysis for simple functions.

Implications:

  • Broader impact and relevance on the field
  • Cryptography:

    • In cryptography both the gcd Algorithm and Exponential Algorithm are used.
    • Ex:
      • A 600 digit number is raised to the power of another 600 digit number. After each multiplication, only the lower 600-digits are retained.
        • Brute force Algorithm:
          • Would require on the order of $10^{600}$ multiplications (one per unit of the 600-digit exponent)
        • Chapter's Algorithms:
          • Worst Case: 4000 multiplications
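The two algorithms the example relies on can be sketched as follows (the function names are my own, not the chapter's). Keeping only the lower 600 digits corresponds to working mod 10^600, and repeated squaring needs at most about 2·log2(exponent) ≈ 4000 multiplications for a 600-digit exponent, matching the worst-case figure above:

```python
def power_mod(a, b, m):
    """Raise a to the b-th power, keeping only the result mod m.
    Repeated squaring: O(log b) multiplications instead of b - 1."""
    result = 1
    a %= m
    while b > 0:
        if b & 1:               # exponent is odd: fold in the current base
            result = (result * a) % m
        a = (a * a) % m         # square the base
        b >>= 1                 # halve the exponent
    return result

def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b)."""
    while b:
        a, b = b, a % b
    return a
```

For the cryptographic setting in the example, m would be 10**600 so that only the lower 600 digits are ever retained.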

Summary:

Conclusion:

  • How do the main points and findings relate to the thesis?

Future Research:

  • In Weiss-Ch7, we will cover lower-bound analysis, showing that any comparison-based sorting algorithm requires $\Omega(N\log N)$ comparisons in the worst case.
    • Generally the most difficult proofs.
      • bc They apply not to an algorithm but a class of algorithms that solve a problem
  • Examples of future non-simple algorithms we will cover later:
    • In Weiss-Ch7 we'll analyze the Shellsort algorithm.
    • In Weiss-Ch8 we'll analyze an algorithm for maintaining disjoint sets.

Footnote:


  1. The Sieve of Eratosthenes is a method used to compute all primes less than N.↩︎
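A minimal sketch of the sieve described in the footnote (my own implementation, not from the chapter):

```python
def sieve_of_eratosthenes(n):
    """Return all primes less than n by crossing off multiples."""
    if n < 2:
        return []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n, p):  # every multiple of p is composite
                is_prime[multiple] = False
    return [i for i in range(n) if is_prime[i]]
```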