
Note From Aaron: I am locking the guide to view only until after the second class is completed with the exam. Thanks for being honest guys!

2720 Study Guide - Exam 1
Exam Date: 02/12/15 [ TODAY ]

LINK TO PROFESSOR'S "STUDY GUIDE"
LINK TO "INTRO TO ALGORITHMS" TEXTBOOK

Welcome! Please contribute where you can and feel free to delete anything and rewrite it for ease of reading. I will begin adding topics and explanations as I go, but please feel free to add if you're ahead of me! -Aaron

Apparently the test will be "80% writing code". No, we have no idea what code she might want us to write. Yes, that's ridiculous. But just be forewarned.
    if I have to write out merge sort I might get up and leave

If anybody understands what she means by "Recursion Formula", an explanation would be greatly appreciated :D
    Didn't we learn a method to solve the complexity of a recursive formula before we learned the master theorem? I think that might be what she is talking about.
    The two things we have to use are the substitution method and the recursion-tree method. The substitution method involves 1) making a good guess about what the bound could be and then 2) using mathematical induction to verify it. The recursion-tree method is used when we can't make a good guess for the first step; it gives us a more educated guess that we then confirm with the substitution method. (There's a worked sketch of both just below, after the master theorem questions.)

Key Concepts/Ideas
● Insertion Sort
● Binary Search
● Recursion Formula
● Time Complexity
● Master Theorem
● Merge sort
● Bubble sort
● Linked List
● Stack/Queue

How does it become 2 - 2(1/(2^k)) in HW1 prob 4 below?

Can anyone explain what she means by "generalization of case 2 of the master theorem" on her study guide?
    If you look at Wikipedia's case two, that's the extended case 2. It allows you to solve anything like T(n) = aT(n/b) + nlogn.
    Is it in the textbook anywhere? I need a bit more explanation than Wikipedia offers...
    Do we need to know the extended? Or just the generalized?
    Might be better to know the extended. It's the same as the generalized because when k == 0, that log raised to the k becomes 1, which makes it the same. But... on second thought, I don't know how to apply it to the different master's method (the one from the YouTube video), so I might not worry about it.
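A worked sketch of both ideas (not from the professor's notes, just an illustration using recurrences that already show up in this guide):

Substitution method on T(n) = 2T(n/2) + n (merge sort's recurrence):
    1) Guess: T(n) = O(n log n), i.e. T(n) <= c*n*log n for some constant c.
    2) Induction: assume the bound holds for n/2, so T(n/2) <= c*(n/2)*log(n/2). Then
       T(n) = 2T(n/2) + n <= 2*c*(n/2)*(log n - 1) + n = c*n*log n - c*n + n <= c*n*log n for any c >= 1.
       The guess survives the induction step (plus a small base case check), so T(n) = O(n log n).

Extended case 2 (the Wikipedia version the comment above points to) says roughly: if f(n) = Θ(n^(log_b a) * (log n)^k) for some k >= 0, then T(n) = Θ(n^(log_b a) * (log n)^(k+1)). So for T(n) = 2T(n/2) + n*log n: a = 2, b = 2, n^(log_2 2) = n, and f(n) = n*log n matches with k = 1, giving T(n) = Θ(n*(log n)^2). Plugging in k = 0 collapses it back to the plain case 2, which is why the two versions agree.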
INSERTION SORT

General Info:
● Best case: O(n) (linear), because it still has to walk through each element once even when the list is already sorted.
● Worst case: O(n^2) (quadratic)
    ○ Every iteration of the loop scans and shifts the entire sorted portion over before insertion.
● Efficient for sorting a small number of elements.
● Example run (at each step the next unsorted element is picked up and inserted into the sorted prefix on its left):
    3 7 4 9 5 2 6 1
    3 7 4 9 5 2 6 1
    3 7 4 9 5 2 6 1
    3 4 7 9 5 2 6 1
    3 4 7 9 5 2 6 1
    3 4 5 7 9 2 6 1
    2 3 4 5 7 9 6 1
    2 3 4 5 6 7 9 1
    1 2 3 4 5 6 7 9

Need to know:

Algorithm:
void insertionSort(int* array, int length) {
    int j, temp;
    for (int i = 1; i < length; i++) {
        j = i;
        // shift array[i] left until it sits in its sorted position
        while (j > 0 && array[j] < array[j-1]) {
            // swap
            temp = array[j];
            array[j] = array[j-1];
            array[j-1] = temp;
            j--;
        }
    }
}

BINARY SEARCH

General Info:
● Best case: O(1)
    ○ When the key happens to sit at the very first midpoint (or the range is empty and the loop never runs).
● Worst case: O(log n)

Need to know:

Algorithm:

Iterative:
int binary_search(int A[], int key, int imin, int imax) {
    // continue searching while [imin, imax] is not empty
    while (imax >= imin) {
        // calculate the midpoint for roughly equal partition
        int imid = (imax + imin) / 2;
        if (A[imid] == key)
            // key found at index imid
            return imid;
        // determine which subarray to search
        else if (A[imid] < key)
            // change min index to search upper subarray
            imin = imid + 1;
        else
            // change max index to search lower subarray
            imax = imid - 1;
    }
    // key was not found
    return KEY_NOT_FOUND;
}

Recursive:
int binary_search(int A[], int key, int imin, int imax) {
    // test if array is empty
    if (imax < imin)
        // set is empty, so return value showing not found
        return KEY_NOT_FOUND;
    else {
        // calculate midpoint to cut set in half
        int imid = (imax + imin) / 2;
        // three-way comparison
        if (A[imid] > key)
            // key is in lower subset
            return binary_search(A, key, imin, imid - 1);
        else if (A[imid] < key)
            // key is in upper subset
            return binary_search(A, key, imid + 1, imax);
        else
            // key has been found
            return imid;
    }
}

MERGE SORT

General Info:
○ Best case: O(nlogn) — even an already-sorted list still gets split and merged all the way down.
○ Worst case/Average case: O(nlogn)

Need to know:

Algorithm (THIS IS WRONG... Care to explain why? ...sorry, it's not wrong, it's just not how we learned it in class... cool, cause I already "memorized" this one and was panicking.. haha):

void merge(int*, int*, int, int, int);

void mergesort(int *a, int *b, int low, int high) {
    int pivot;
    if (low < high) {
        pivot = (low + high) / 2;
        mergesort(a, b, low, pivot);
        mergesort(a, b, pivot + 1, high);
        merge(a, b, low, pivot, high);
    }
}

void merge(int *a, int *b, int low, int pivot, int high) {
    int h, i, j, k;
    h = low;
    i = low;
    j = pivot + 1;
    // merge the two sorted halves into b
    while ((h <= pivot) && (j <= high)) {
        if (a[h] <= a[j]) {
            b[i] = a[h];
            h++;
        } else {
            b[i] = a[j];
            j++;
        }
        i++;
    }
    // copy the leftovers of whichever half didn't run out
    if (h > pivot) {
        for (k = j; k <= high; k++) {
            b[i] = a[k];
            i++;
        }
    } else {
        for (k = h; k <= pivot; k++) {
            b[i] = a[k];
            i++;
        }
    }
    // copy the merged result back into a
    for (k = low; k <= high; k++)
        a[k] = b[k];
}

BUBBLE SORT

General Info:
Best case (when the list is already sorted): O(n) — note this needs the version that quits early when a full pass makes no swaps; the code below always runs every pass, so it does O(n^2) work regardless.
Worst/Average case: O(n^2)

Need to know:

Algorithm:
#include <iostream>
using namespace std;

int main() {
    // declaring array
    int array[5];
    cout << "Enter 5 numbers randomly : " << endl;
    for (int i = 0; i < 5; i++) {
        // taking input into the array
        cin >> array[i];
    }
    cout << endl;
    cout << "Input array is: " << endl;
    for (int j = 0; j < 5; j++) {
        // displaying the array
        cout << "\t\t\tValue at " << j << " Index: " << array[j] << endl;
    }
    cout << endl;
    // bubble sort starts here
    int temp;
    for (int i2 = 0; i2 <= 4; i2++) {
        for (int j = 0; j < 4; j++) {
            // swap adjacent elements that are out of order
            if (array[j] > array[j+1]) {
                temp = array[j];
                array[j] = array[j+1];
                array[j+1] = temp;
            }
        }
    }
    // displaying sorted array
    cout << " Sorted Array is: " << endl;
    for (int i3 = 0; i3 < 5; i3++) {
        cout << "\t\t\tValue at " << i3 << " Index: " << array[i3] << endl;
    }
    return 0;
}

LINKED LIST

General Info: Here's an awesome resource from Stanford on linked lists.

Need to know:
Basically a linked list is a collection of nodes, with each node containing only the data it stores as well as the address of the next node. To iterate through a linked list, you must start at the head and go to the next node until you reach the end (NULL) — a small traversal sketch follows the insertion code below.

Algorithm:
struct Node {
    // create node structure
    int data;
    Node* next;
};

void insertFront(struct Node **head, int n) {
    // add a new node to the front of the linked list
    Node *newNode = new Node;
    newNode->data = n;
    newNode->next = *head;
    *head = newNode;
}
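Since the "start at the head and follow next until NULL" part keeps coming up, here is a minimal traversal sketch (not class-provided code; it assumes the Node struct above plus the usual #include <iostream> / using namespace std setup):

void printList(struct Node* head) {
    // start at the head and follow each node's next pointer
    Node* current = head;
    while (current != NULL) {
        cout << current->data << " ";
        current = current->next;   // step to the next node
    }
    cout << endl;                  // hitting NULL means we are past the last node
}

So after a few insertFront(&head, x) calls, printList(head) walks the whole list from front to back.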
RECURSION FORMULA

General Info:

Need to know:

Algorithm:

TIME COMPLEXITY

General Info:
Think of Big O as (controlled by): x is big O of x^2, f(x) = O(x^2), x is controlled by x^2.
Think of Big Ω as (controls): x^2 is big Ω of x, f(x^2) = Ω(x), x^2 controls x.
Think of Big Θ as (the same as): x^2 is big Θ of x^2, f(x^2) = Θ(x^2), x^2 is the same as x^2.

In Layman's Terms...
O: The function f(n) takes less than or as much time as ..., i.e. the theoretical upper bounds we talk about with sorting methods.
Omega: The function takes as much time as ... or more, i.e. the theoretical "best case" scenarios we talk about with sorting methods.
Theta: The function takes exactly ... time, variable only by a constant multiplier, i.e. "tight asymptotic bounds", as found in problems the master method can be applied to.

Think of the capital letters as >= or <=.
Think of the lowercase letters as > or <.

Need to know:
    I see 6 lines and 7 functions on the right side of the graph... da fuq?
    O(1) is right under O(logn); if you zoom into the very end of the graph you can see a hint of blue.
    thx
If you are having issues seeing the colors/lines on the graph, just know that:   <- you rock
O(n!) > O(2^n) > O(n^2) > O(nlogn) > O(n) > O(logn) > O(1)

Algorithm:

MASTER THEOREM

General Info:

Need to know:
Could someone explain what T(1) = c means? And why c has to be > 0?
    Sure! T is a recurrence relation representing how long it takes to complete a given method, right? So T(x) is a formula for how long that method takes on an input of size x. Below, c is a numeric value. It has to be > 0 because time can't be negative. And T(1) = c is a requirement because the base case needs to take a constant amount of time to run, otherwise this whole thing would be pointless.
    Thanks!

Let T(n) be a monotonically increasing function that satisfies:
    T(n) = aT(n/b) + f(n)
    T(1) = c
where a >= 1, b >= 2, and c > 0.
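To make that concrete, here's a sketch of actually applying it (my own worked example, not from the professor's study guide) to two recurrences this guide already cares about:

Merge sort: T(n) = 2T(n/2) + n. Here a = 2, b = 2, f(n) = n. Compare f(n) to n^(log_b a) = n^(log_2 2) = n: they match, so case 2 applies and T(n) = Θ(n log n), which lines up with the O(nlogn) listed for merge sort above.

Binary search: T(n) = T(n/2) + 1. Here a = 1, b = 2, f(n) = 1, and n^(log_2 1) = n^0 = 1: they match again, so case 2 gives T(n) = Θ(log n), matching the worst case listed for binary search.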
