CS GRE Notes


Eta Kappa Nu – Mu Chapter
Department of Electrical Engineering and Computer Sciences
College of Engineering, UC Berkeley

Revision History

Editor             Email          Semester   Section(s)                     Comments
Karl Chen          quarl@hkn      Fall 2004  162                            Unified document, integrated some notes.
Brandon Ooi        brandono@hkn   Fall 2004  61C, 150, 152, 170, 172, 174   Many new notes.
Alex Fabrikant     alexf@hkn      Fall 2003  170, 172                       Updated notes by Michael Martin.
Michael Martin     mcmartin@hkn   Fall 2002  170, 172
James Donald                      Fall 2001  150
Steve VanDeBogart  vandebo@hkn    Fall 2001  61C, 152
Brenda Liu                        Fall 2001  61C, 152

TABLE OF CONTENTS

I. SOFTWARE SYSTEMS AND METHODOLOGY
  A. DATA ORGANIZATION
    1. Data types
    2. Data structures and implementation techniques
  B. PROGRAM CONTROL AND STRUCTURE
    1. Iteration and recursion
    2. Procedures, functions, methods, and exception handlers
    3. Concurrency, communication, and synchronization
  C. PROGRAMMING LANGUAGES AND NOTATION
    1. Constructs for data organization and program control
    2. Scope, binding, and parameter passing
    3. Expression evaluation
  D. SOFTWARE ENGINEERING
    1. Formal specifications and assertions
    2. Verification techniques
    3. Software development models, patterns, and tools
  E. SYSTEMS
    1. Compilers, interpreters, and run-time systems
    2. Operating systems, including resource management and protection/security
    3. Networking, Internet, and distributed systems
    4. Databases
    5. System analysis and development tools

II. COMPUTER ORGANIZATION AND ARCHITECTURE
  A. DIGITAL LOGIC DESIGN
    1. Implementation of combinational and sequential circuits
    2. Optimization and analysis
  B. PROCESSORS AND CONTROL UNITS
    1. Instruction sets
    2. Computer arithmetic and number representation
    3. Register and ALU organization
    4. Data paths and control sequencing
  C. MEMORIES AND THEIR HIERARCHIES
    1. Performance, implementation, and management
    2. Cache, main, and secondary storage
    3. Virtual memory, paging, and segmentation
  D. NETWORKING AND COMMUNICATIONS
    1. Interconnect structures (e.g., buses, switches, routers)
    2. I/O systems and protocols
    3. Synchronization
  E. HIGH-PERFORMANCE ARCHITECTURES
    1. Pipelining, superscalar, and out-of-order execution processors
    2. Parallel and distributed architectures
III. THEORY AND MATHEMATICAL BACKGROUND
  A. ALGORITHMS AND COMPLEXITY
    1. Exact and asymptotic analysis of specific algorithms
    2. Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)
    3. Upper and lower bounds on the complexity of specific problems
    4. Computational complexity, including NP-completeness
  B. AUTOMATA AND LANGUAGE THEORY
    1. Models of computation (finite automata, Turing machines)
    2. Formal languages and grammars (regular and context free)
    3. Decidability
  C. DISCRETE STRUCTURES
    1. Mathematical logic
    2. Elementary combinatorics and graph theory
    3. Discrete probability, recurrence relations, and number theory

IV. OTHER TOPICS

V. REFERENCES

I. Software Systems and Methodology

A. Data organization
1. Data types
2. Data structures and implementation techniques

B. Program control and structure
1. Iteration and recursion
2. Procedures, functions, methods, and exception handlers
3. Concurrency, communication, and synchronization

C. Programming languages and notation
1. Constructs for data organization and program control
2. Scope, binding, and parameter passing
3. Expression evaluation

D. Software engineering
1. Formal specifications and assertions
2. Verification techniques
3. Software development models, patterns, and tools

E. Systems
1. Compilers, interpreters, and run-time systems
2. Operating systems, including resource management and protection/security
3. Networking, Internet, and distributed systems
4. Databases
5. System analysis and development tools

II. Computer Organization and Architecture
This part of the test focuses on the following courses:
• Machine Structures (CS61C)
• Components and Design Techniques for Digital Systems (CS150)
• Computer Architecture (CS152)
• Introduction to Communication Networks (EE122)
• Graduate Computer Architecture (CS252)

A. Digital logic design

To summarize: James Donald estimates that generally no more than 12% of the test (at most about 9 questions) is on CS 150 material. For the hardware material, CS 61C (or CS 152) can cover significantly more.

1. Implementation of combinational and sequential circuits

i. Combinational Logic

Hopefully you've seen a little digital design by now. If not, you're in luck: the CS GRE does not dig very deep here, and is limited to mostly combinational logic. You will not have to know details below gate-level analysis. Finite state machines are very important too, but on the CS GRE they are tested more from the standpoint of CS 164 or CS 172. [Figure: where this material sits in the design hierarchy]

Switch Logic and Basic Gates

Here is a breakdown of the basic gates:
• AND – high when both of its inputs are high.
• OR – high when either of its inputs is high.
• NOT – high when its input is low.
• NAND – the NOT of the AND gate.
• NOR – the NOT of the OR gate.
• XOR – high when the first or the second input is high, but not both.
• XNOR – the NOT of the XOR gate.

[Figure: the gate symbols and their relationships]

Many alternative gate realizations may exist for any given truth table, and gate minimization is possible, as we will see later. Although all of these gates can be manufactured separately, you can actually create any gate from the NAND gate alone. To prove that two realizations are equivalent, we can draw their truth tables and compare: the truth table is a unique signature of a Boolean function. Let's look a little into how that might be done.
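To make the NAND-universality claim concrete, here is a small Python sketch (our own illustration, not part of the original notes) that builds each of the other gates out of a two-input NAND and checks every construction exhaustively against its truth table:

```python
# Illustrative only: every basic gate built from a two-input NAND,
# verified by brute-force truth-table comparison.
from itertools import product

def nand(a, b):
    return 1 - (a & b)

def not_(a):      return nand(a, a)                   # x NAND x = NOT x
def and_(a, b):   return not_(nand(a, b))             # AND is NOT of NAND
def or_(a, b):    return nand(not_(a), not_(b))       # De Morgan: a+b = (a'b')'
def nor_(a, b):   return not_(or_(a, b))
def xor_(a, b):   return and_(or_(a, b), nand(a, b))  # high iff exactly one input high
def xnor_(a, b):  return not_(xor_(a, b))

for a, b in product((0, 1), repeat=2):                # all four input rows
    assert and_(a, b)  == (a & b)
    assert or_(a, b)   == (a | b)
    assert nor_(a, b)  == 1 - (a | b)
    assert xor_(a, b)  == (a ^ b)
    assert xnor_(a, b) == 1 - (a ^ b)
print("every NAND construction matches its truth table")
```

Since the two truth tables agree on every input row, each construction is equivalent to the gate it claims to implement, which is exactly the proof technique described above.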
Boolean Algebra

It turns out that Boolean expressions, truth tables, and digital designs are all related. Truth tables can be made from any digital circuit and, as we'll see later, are complete descriptions of a circuit's functionality. A truth table is a pure definition of a Boolean function, and digital gates can be thought of as feasible ways of implementing these Boolean functions. Any such function can be written as an expression in Boolean algebra using AND, OR, and NOT, the three most fundamental gates; a Boolean expression is helpful for manipulation using Boolean algebra. [Figure: the truth table for a simple circuit]

Boolean Algebraic Structure
• A set of elements B
• Binary operations {OR, AND} => {+, *}
• A unary operator {NOT} => {'} (also written as ~ or as a bar on top of the element)
such that the following axioms hold:
1. Set B contains at least 2 elements a, b such that a != b.
2. Closure: a + b is in B; a * b is in B.
3. Commutativity: a + b = b + a; a * b = b * a.
4. Associativity: a + (b + c) = (a + b) + c; a * (b * c) = (a * b) * c.
5. Identity: a + 0 = a; a * 1 = a.
6. Distributivity: a + (b * c) = (a + b) * (a + c); a * (b + c) = (a * b) + (a * c).
7. Complementarity: a + a' = 1; a * a' = 0.

As you can see, Boolean algebra is defined and manipulated much like regular algebra.

Axioms and Theorems of Boolean Algebra
Identity:        1. X + 0 = X                        1D. X • 1 = X
Null:            2. X + 1 = 1                        2D. X • 0 = 0
Idempotency:     3. X + X = X                        3D. X • X = X
Involution:      4. (X')' = X
Complementarity: 5. X + X' = 1                       5D. X • X' = 0
Commutativity:   6. X + Y = Y + X                    6D. X • Y = Y • X
Associativity:   7. (X + Y) + Z = X + (Y + Z)        7D. (X • Y) • Z = X • (Y • Z)
Distributivity:  8. X • (Y + Z) = (X • Y) + (X • Z)  8D. X + (Y • Z) = (X + Y) • (X + Z)
Uniting:         9. X • Y + X • Y' = X               9D. (X + Y) • (X + Y') = X
Absorption:     10. X + X • Y = X                   10D. X • (X + Y) = X
                11. (X + Y') • Y = X • Y            11D. (X • Y') + Y = X + Y
Factoring:      12. (X + Y) • (X' + Z) = X • Z + X' • Y
                12D. X • Y + X' • Z = (X + Z) • (X' + Y)
Consensus:      13. (X • Y) + (Y • Z) + (X' • Z) = X • Y + X' • Z
                13D. (X + Y) • (Y + Z) • (X' + Z) = (X + Y) • (X' + Z)

De Morgan's Law
(a + b)' = a' b'        a + b = (a' b')'
(a b)' = a' + b'        a b = (a' + b')'

Basic steps for complementing an expression:
1. Take the complement of the entire expression.
2. Change all OR operations to ANDs.
3. Change all AND operations to ORs.
4. Change all variables to their complements.

For example, for F(w,x,y,z) = wx(y'z + yz'):
F'(w,x,y,z) = w' + x' + (y'z + yz')'    // negate operators and variables
            = w' + x' + (y'z)'(yz')'
            = w' + x' + (y + z')(y' + z)
            = w' + x' + yy' + yz + z'y' + z'z
            = w' + x' + 0 + yz + z'y' + 0
            = w' + x' + yz + y'z'

We can also see De Morgan's law in action at the gate level. [Figure: De Morgan's law applied at the gate level]

Sum Of Products (SOP) – This is useful (read: easiest) for deriving a (usually not optimal) Boolean expression given a truth table. The general idea is to sum (OR) all the cases where the function is true; for each such case, AND together all the variables, taking the NOT of each variable that is false in that case.

Sample truth table problem [problem figure not reproduced]. Solution: C.
The easiest way to do this problem is to just write the truth tables of each of the choices until you get one that is identical. The better way is to use Sum of Products. Here's how to do it: first write the truth table for the function, with x2 = C, x1 = B, x0 = A.

C B A | F
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 0
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 0

F = AB'C' + A'B'C + AB'C + A'BC    // sum of products
  = AB'(C' + C) + A'C(B' + B)      // reverse distribution
  = AB'(1) + A'C(1)                // complementary axiom
  = AB' + A'C
  = x1'x0 + x2x0'

Two-Level Logic

An extension of what you already know. [Figure: a Boolean function of three variables visualized as a cube, each corner labeled with a 3-digit binary number; the companion figure shows the sum of products used above] If the shaded area represents the function, we are able to simplify its expression (rather than settling for the shortsighted sum of products). This will prove crucial in simplifying Boolean functions.

ii. Sequential Logic

The most basic component of sequential logic is a flip-flop, which stores 1 bit of data and is clocked. There are many kinds of flip-flops, but the most common kind is the D flip-flop. It takes one input, D; on the clock edge, Q takes the value of D at that time. [Figure: timing diagram for the D flip-flop] In general, devices used to store and/or process data, such as registers, are implemented with some kind of flip-flop that is used to store the data. Flip-flops are the basic part of registers; one example is the processor register in a CPU.
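The clock-edge behavior is easy to mis-remember, so a tiny simulation may help. This sketch is our own invention (the class names are hypothetical, not from the notes); it models a D flip-flop and an n-bit register built from several of them:

```python
# Illustrative model of a D flip-flop: Q takes the value of D on the clock
# edge and holds it until the next edge.
class DFlipFlop:
    def __init__(self):
        self.q = 0                      # the single stored bit, output Q

    def clock_edge(self, d):
        self.q = d                      # sample D exactly at the edge

# A register is just several flip-flops sharing one clock.
class Register:
    def __init__(self, width):
        self.ffs = [DFlipFlop() for _ in range(width)]

    def clock_edge(self, bits):
        for ff, d in zip(self.ffs, bits):
            ff.clock_edge(d)

    def value(self):
        return [ff.q for ff in self.ffs]

r = Register(4)
r.clock_edge([1, 0, 1, 1])
print(r.value())                        # [1, 0, 1, 1], held until the next edge
```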
2. Optimization and analysis

Karnaugh Maps

These are also useful for deriving a Boolean expression from a truth table, but better than sum of products in that they can give a minimal expression.

Gray code – another scheme for counting (other than sequential binary) in which only one digit changes at each step; provably infinitely extendable. K-map rows and columns are ordered in Gray code:
2-bit: 00, 01, 11, 10
3-bit: 000, 001, 011, 010, 110, 111, 101, 100

How to use K-maps:
1. Set up the K-map table.
2. Fill in data from the truth table.
3. Circle groups (rectangles with power-of-2 side lengths only!).
4. Derive the Boolean expression.

For example:
F(A,B,C,D) = Σm(0,2,3,5,6,7,8,10,11,14,15)
F = C + A'BD + B'D'
[Figure: the K-map for this function]

K-maps can also take advantage of don't-cares (inputs whose value doesn't change the resultant expression): you can draw your circles around don't-cares.

Finite State Machines (FSMs) / Finite State Automata

Here's a general overview of what they are, their parts, and how those parts work together. FSMs are also a very important topic in computability/complexity and the study of programming languages; this section will only talk about implementing FSMs with digital logic.

Overview – Finite state machines can be represented by a finite state diagram and its accompanying transitions. Here is an example of a deterministic finite automaton (DFA); note the start state, the accept state, and the transitions. [Figure: example DFA] There are other types of automata, such as non-deterministic finite automata (NFAs), but in practice they are not easily translatable into a circuit.

There are 3 major types of FSMs:
• Acceptors – either accept the input or not. (Most common.)
• Recognizers – a variant of an acceptor (think of acceptance as recognition). This is useful for regular expression matching.
• Transducers – generate output from the given input. We can see this in acceptors and recognizers, but in general the output can be anything.

There are 3 major components in finite state machine design:
1. State logic
2. Next-state logic – based on the current state and the input
3. Output logic – based on the current state

Much like Boolean functions, FSMs are completely defined by their state transition table. An example state transition diagram is given below. [Figure: state transition diagram]

In digital logic, these FSMs are represented in two distinct groups.

Moore Machine – the output is determined by the current state alone. As a diagram, a Moore machine looks like the following. [Figure: Moore machine state diagram]

Formal definition: a Moore machine can be defined as a 6-tuple { Q, Σ, ∆, δ, λ, q0 } consisting of
• a finite set of states ( Q )
• a finite set called the input alphabet ( Σ )
• a finite set called the output alphabet ( ∆ )
• a transition function ( δ : Q × Σ → Q ) mapping a state and an input to the next state
• an output function ( λ : Q → ∆ ) mapping each state to the output alphabet
• a start state ( q0 )

The number of states in a Moore machine will be greater than or equal to the number of states in the corresponding Mealy machine.
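To make the formal definition concrete, here is a small hypothetical Moore machine in Python (the machine and all names are our own illustration, not from the notes). Its output depends only on the current state, and it emits an output for the start state before reading any input:

```python
# Hypothetical Moore machine: outputs 1 exactly when the number of 1s read
# so far is even. The output function lam depends on the state alone.
start = "even"
delta = {("even", "0"): "even", ("even", "1"): "odd",   # transition function δ
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
lam   = {"even": 1, "odd": 0}                           # output function λ: Q -> Δ

def run_moore(inp):
    state = start
    outputs = [lam[state]]      # Moore machines produce output in the start state too
    for sym in inp:
        state = delta[(state, sym)]
        outputs.append(lam[state])
    return outputs

print(run_moore("1101"))        # [1, 0, 1, 1, 0]
```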
Mealy Machine – the output is determined by the current state and its input. Below is an example of a Mealy machine in which the first number on each transition is the input and the second number is the output produced by that transition. [Figure: Mealy machine state diagram] In digital design it looks like this: the combinational logic takes in the current state and the input, and gives the next state and the output. [Figure: Mealy machine in digital logic] Moore machines are, in general, simpler and easier to implement in digital design.

One-hot encoding – a general technique for implementing an FSM in digital logic. Each state has its own flip-flop, which is high if and only if the machine is in that state. Therefore, only one flip-flop is high at any given time (because you can only be in one state per cycle in a DFA). Other ways of storing state include a binary encoding of the states.

B. Processors and control units

This section of the test stresses mostly CS61C and CS152 material. But don't worry: the questions are very basic, though broad. They won't ask questions about specific hardware, but you should be aware of one concrete machine and of how assembly-level instructions behave in the pipeline. The following sections will focus mainly on the MIPS instruction set architecture, although all the theory is applicable to any ISA. [Figure: where this material sits in the design hierarchy]

Moore's Law – the speed of a processor doubles every 18 months. Not a scientific law, more of an observation, but this trend helps companies plan their processing power.

1. Instruction sets

At the very bottom of computer architecture and machine structures is the instruction set designed for the machine. The ISA defines the machine language for that CPU (read: its assembly language). The design of the CPU, as well as of the corresponding system, is heavily dependent on the ISA (MIPS branch delay slots, fixed-length instructions). There may not be any one instruction set architecture (ISA) that is the best.
• Most desktop computers today use the x86 instruction set.
• Most embedded computing devices use the MIPS architecture.
• There are many other major players in the market, such as the SPARC architecture (Sun), PowerPC (IBM), ARM, Motorola 6800, IA-64, etc.

[Table: some instructions of the MIPS architecture – not reproduced here]

2. Computer arithmetic and number representation

To do these problems you need to understand how computers internally store numbers. Make sure you know how to convert between decimal, octal (base 8), hex (base 16), and of course binary (base 2). Why do we use these bases (as opposed to others)? Because they are powers of two: each octal digit groups exactly three bits, and each hex digit groups exactly four.
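Python's built-in conversion functions make these drills easy to self-check; the snippet below is our own illustration:

```python
# Base-conversion drill: the same value, four bases.
n = 0b11010110                    # binary literal; 214 in decimal
print(n)                          # 214
print(bin(n), oct(n), hex(n))     # 0b11010110 0o326 0xd6
# ...and back again, parsing strings in an explicit base:
print(int("11010110", 2), int("326", 8), int("d6", 16))   # 214 214 214
```

Notice how each octal digit covers exactly three bits (11 010 110 -> 3 2 6) and each hex digit exactly four (1101 0110 -> d 6).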
There exist many different ways of representing signed numbers. One naïve way is to use one bit as the sign bit: if the sign bit is 0, the remaining bits are interpreted according to the customary rules of the binary numeral system. The drawback is that there exist 2 representations of zero (namely the all-0 pattern with either sign), and it makes addition and subtraction more difficult.

2's complement – the most convenient way to store signed numbers, and the standard method of signifying negative numbers in binary. A signed binary numeral uses the leftmost bit as the sign bit and the remaining bits to indicate the value of the numeral. If the sign bit is 0, positive numbers are represented exactly as in standard binary form; consequently, a signed 8-bit binary numeral can represent any integer in the range -128 to +127, since the largest value that can be stored in the remaining seven bits is 2^7 - 1, or 127. If the sign bit is 1, the remaining bits contain the numeral in two's complement form. Negative numbers are obtained differently.

How to represent a number in 2's complement:
1. Convert the number to standard binary form (with leading 0's).
2. Invert all the digits.
3. Add 1.

For example, let's convert the 8-bit number 5 (decimal) to -5 (decimal):
0000 0101    // binary form
1111 1010    // invert digits
1111 1011    // add one; this is -5

Addition is done in the standard way (the final carry overflows into nothingness). To do a subtraction A - B, we just compute A + ~B + 1, i.e., we add the 2's complement of B. This effectively allows subtraction of one binary number from another using only the addition operation, which is useful in microprocessors for which subtraction is unavailable or too complex to perform efficiently.

 11111 111    (carry)
 0000 0101    (5)
+1111 1011    (-5)
===========
 0000 0000    (0)    // 5 - 5 = 0

Using n bits, how many numbers can be represented in 2's complement? A total of 2^n numbers: half are negative, half are non-negative. But remember that 0 takes the first "positive" pattern, so the range is -2^(n-1) to 2^(n-1) - 1.

1's complement – an older method of signifying negative numbers in binary: invert all the digits, without adding 1. Subtraction is done in much the same way as in 2's complement, except any carry out is added back into the answer. Like the naïve sign-bit scheme, it has two representations of zero.

Floating Point

Fixed point: the decimal point is fixed. Floating point (the IEEE standard) is an improvement over fixed point that allows the "point" to "float" over a large range of digits.

Single precision (32-bit) [sign bit, exponent, significand] (float):
• sign: 1 bit; 1 for negative, 0 for positive numbers
• exponent: 8 bits, bias of 127 [-126…127]
• significand: 23 bits, with an implied leading 1
• value: (-1)^sign * (1.significand) * 2^(exp - 127)

Special values:
• Zero: all bits 0
• Denorms: exp 0; the significand is less than 1 (no implied 1)
• Infinity: exp 255 (all 1's), significand 0 (division by 0)
• NaN: exp 255, significand not 0 (0/0, inf/inf)

Double precision (64-bit) [sign bit, exponent, significand] (double):
• sign: 1 bit
• exponent: 11 bits, bias of 1023
• significand: 52 bits
• value: (-1)^sign * (1.significand) * 2^(exp - 1023)

Rounding – 4 IEEE options:
• truncate: drop the last bits, rounding toward 0
• round up: toward +infinity, always round up
• round down: toward -infinity, always round down
• round to even: normal rounding, but if the value is exactly halfway in between, round to the nearest even number

Overflow and underflow:
• overflow: both operands are positive but the result is negative
• underflow: both operands are negative but the result is 0 or positive

Saturating arithmetic: instead of rolling over on overflow or underflow, the result is set to the maximum or minimum possible value (e.g., Java's Integer.MAX_VALUE).
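Going back to the single-precision format: the fields can be pulled apart mechanically, which is a good way to check hand conversions. This sketch is our own illustration; it relies on the standard library's struct module to reinterpret a Python float's IEEE 754 bits:

```python
# Decode an IEEE 754 single-precision value into sign, exponent, significand.
import struct

def decode_float32(x):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # 4 bytes as a uint32
    sign = bits >> 31
    exp  = (bits >> 23) & 0xFF        # 8-bit biased exponent (bias 127)
    frac = bits & 0x7FFFFF            # 23-bit significand field (implied leading 1)
    return sign, exp, frac

sign, exp, frac = decode_float32(-6.25)   # -6.25 = -1.5625 * 2^2
print(sign, exp, frac)                    # 1 129 4718592  (129 = 2 + 127)
print((-1) ** sign * (1 + frac / 2**23) * 2.0 ** (exp - 127))   # -6.25
```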
3. Register and ALU organization

Processors are made of a few main components:
• PC – Program Counter; holds the address of the current instruction.
• Instruction Memory – usually a cache.
• Register File
• ALU – Arithmetic Logic Unit
• Data Memory – also a cache, with a write buffer.

Below is a rough diagram of an unpipelined processor. [Figure: unpipelined datapath] The program counter feeds the address of the next instruction to the instruction memory. The instruction memory looks up the next instruction, which goes into the instruction register. The instruction is then decoded and transferred to the register file for lookups of the affected registers; you can look up a maximum of 2 processor registers, which go into registers A and B. These go to the ALU, which uses the values of A and B to calculate the result stored in the ALUOut register. This is then used to write back to the register file.

Pipelined Processors

These processors use pipelining to achieve more performance. The idea is to split the datapath up into parts which can be overlapped in time.
• Pipelined processors have multiple stages but the same overall functionality as a single-cycle processor.
• Throughput – the number of instructions you can execute per unit of time (i.e., inst/sec).
• Latency – how long an instruction takes to complete from start to finish.
• Pipelining gives increased throughput at the cost of increased latency. For our purposes, this is an okay tradeoff.
• The most common organization is the 5-stage pipeline.

5-Stage Pipeline

[Figure: the 5-stage pipeline]

Stages:
• IF: Instruction Fetch – use the PC to fetch an instruction from the instruction memory.
• ID: Instruction Decode – decode the instruction and read values from the register file.
• EX: Execute – execute the instruction using the ALU.
• MEM: Memory – read from or write to data memory (optional, e.g., lw).
• WB: Write Back – write the result back to the register file.

The whole point of pipelining is to split the datapath into components which we can use simultaneously; this allows for less logic per stage and thus a higher clock frequency. Looking at the pipeline from a higher perspective [Figure: overlapping instructions in the pipeline], you can see from the shaded region that once the pipeline is filled, all 5 stages are being used at the same time, but on different instructions, and the pipeline outputs one instruction per cycle.

Caveats
• Hazards:
  o Structural – when a component in the pipeline needs to be used by more than one thing per cycle. One example: instruction and data memory both need access to shared storage (SDRAM). This is resolved by stalling the pipeline and arbitrating requests between the two.
  o Data:
    RAW – Read after Write – a true dependency; need to forward data ahead in the pipeline.
    WAR – Write after Read – not a problem if instructions execute sequentially.
    WAW – Write after Write – not a problem if instructions execute sequentially.
    RAR – Read after Read – not a problem.
• Forwarding – sometimes we need to know the result of an instruction before it has been written back to the register file. Forward the data still in the pipeline!
• Stalling – sometimes we cannot resolve a data dependency by forwarding, because the value is needed before we can compute it, for example:
  o a load followed by a dependent branch
  o a logical/arithmetic instruction followed by a dependent branch
  These are resolved by stalling. The basic point is that data can only travel forwards in time.

Performance

There are many measures of performance, but the one that really matters is execution time.

Factors of system performance:
1. instruction count
2. cycles per instruction (CPI) – usually only comparable between processors of the same architecture
3. clock rate

• Performance = 1 / execution time
• Execution Time = inst_count * CPI * (1 / clock_rate)

[Sample problem figure not reproduced] Solution: C.

Amdahl's Law – when improving one part of a system, you can only improve the total time by the fraction of time that part requires. Useful in choosing which part of a processor to improve.
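Both formulas are easy to play with in code. The helper names below are our own (an illustrative sketch, not from the notes):

```python
# The iron law of performance plus Amdahl's law, as plain functions.
def execution_time(inst_count, cpi, clock_hz):
    """time = instructions * (cycles / instruction) * (seconds / cycle)"""
    return inst_count * cpi / clock_hz

def amdahl_speedup(fraction, speedup):
    """Overall speedup when only `fraction` of the time is sped up by `speedup`."""
    return 1.0 / ((1 - fraction) + fraction / speedup)

print(execution_time(1e9, 1.5, 2e9))       # 0.75 seconds for 10^9 instructions
print(round(amdahl_speedup(0.4, 10), 2))   # 1.56: speeding up 40% of the time
                                           # by 10x gains only ~1.56x overall
```

The second result is the whole point of Amdahl's law: a huge improvement to a small fraction of the execution time buys very little overall.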
4. Data paths and control sequencing

[Sample problem figure not reproduced] Solution: B.

C. Memories and their hierarchies

In general, this is what the computer memory hierarchy looks like: [Figure: the memory hierarchy]

1. Performance, implementation, and management
2. Cache, main, and secondary storage
3. Virtual memory, paging, and segmentation

D. Networking and communications

1. Interconnect structures (e.g., buses, switches, routers)
2. I/O systems and protocols
3. Synchronization

E. High-performance architectures

1. Pipelining, superscalar, and out-of-order execution processors

Material in this section refers mainly to CS152/CS252. Almost all processors in desktop computers use these high-performance architectures. In order to see why we need more advanced processors, we must identify the weaknesses of the basic pipelined processor:
• It executes only 1 instruction per cycle.
• Stalls due to data hazards.
• Structural hazards.

Super-Scalar Processors

The idea is to have more than one pipeline, making the processor (theoretically) capable of executing more than 1 instruction per cycle. Here is an example of a super-scalar processor with 3 pipelines. [Figure: 3-wide superscalar pipeline] This processor has a maximum of 3 instructions per cycle; it is able to execute any three of the execution stages as necessary.

Drawbacks:
• Heavily increased forwarding and hazard-detection logic.
• Still vulnerable to dependencies between instructions, which cannot be avoided without a time machine.

Out-of-Order Processors (OOO)

These are some of the most advanced processor designs, allowing instructions to be executed out of order. Because things can complete out of order, the bookkeeping gets a little messy.

Basic parts:
• Instruction Fetch – usually implemented as a queue of instructions to be executed.
• Reservation Stations – the functional units, such as adders, multipliers, dividers, floating point units, etc.
• Reorder Buffer – used to resolve dependencies. Because things can complete out of order, the reorder buffer makes sure an instruction isn't committed until it is completed. Works well with branch prediction.

Life cycle: instructions are tossed from the instruction queue into the reorder buffer and a reservation station (or reservation station queue). When an instruction completes executing in a reservation station, the value of that computation is broadcast onto the CDB, or Common Data Bus, which is read by the other reservation stations as well as the reorder buffer; this is how dependencies are resolved. The reorder buffer (ROB) holds the true ordering of instructions: when an instruction is complete at the bottom of the ROB, it is "committed." Committing includes flushing to memory.

Speculative Processors / Dynamic Scheduling

In terms of computer architecture, speculation generally means branch prediction. Because branches can cause unnecessary holes in a pipeline (especially if it is long), most processors will guess the branch's result, and the processor will proceed with this assumption at full speed until proven wrong. Any instruction after the branch must NOT be committed back to the register file, and definitely not to memory, until the branch resolves. When right, speculation keeps the pipeline full and throughput high; when wrong, this results in a flush, and the processor does about as badly as with no branch prediction at all. Flushing the pipeline effectively nullifies the speculative instructions, usually by replacing all the pipeline stage signals with zeroes. There are many kinds of branch predictors (global, local) that in real-world situations predict with about 80-90% accuracy.

[Sample problem figure not reproduced] Solution: C.
Efficiency = speedup / #processors = (T_single / T_parallel) / #processors
T_single = 8, T_parallel = 4
Efficiency = (8/4) / 4 = 50%

2. Parallel and distributed architectures

III. THEORY AND MATHEMATICAL BACKGROUND

This makes up the largest section of the test. It covers:
• CS170 – Introduction to Algorithms
• CS172 – Computability and Complexity
• CS174 – Combinatorics and Discrete Probability (Randomized Algorithms)

These notes briefly address major subjects in theoretical computer science which are often tested on the CS GRE Subject Test. They do not address mathematical background, including some of the material presented in CS170 on graph theory and number theory (which are both likely to appear on the GRE), nor do they address material that is rarely or never covered by the test (which includes a substantial fraction of Berkeley's CS170 and CS172, as well as effectively all of CS174).
Acronyms:
• BST – Binary Search Tree
• DAG – Directed Acyclic Graph
• DFA – Deterministic Finite Automaton
• DP – Dynamic Programming
• LP – Linear Programming
• NFA – Non-deterministic Finite Automaton
• NP – Non-deterministic Polynomial Time
• PDA – Push-down Automaton
• (ND)TM – (Non-deterministic) Turing Machine

A. Algorithms and complexity

Asymptotic Growth Notation
• Upper bounds:
  o O(f(n)) – grows at most as fast as f(n); g(n) = O(f(n)) means lim_{n→∞} g(n)/f(n) is finite.
  o o(f(n)) – grows strictly slower than f(n).
• Lower bounds:
  o Ω(f(n)) – grows at least as fast as f(n); g(n) = Ω(f(n)) means lim_{n→∞} f(n)/g(n) is finite.
  o ω(f(n)) – grows strictly faster than f(n).
• Combined:
  o Θ(f(n)) – grows asymptotically the same as f(n); g(n) = Θ(f(n)) means lim_{n→∞} g(n)/f(n) is finite and non-zero.

Θ(1) < Θ(log n) < Θ(log^1000 n) < Θ(n^0.001) < Θ(n) < Θ(n log n) < Θ(n^1000) < Θ(1.001^n) < Θ(2^(2^n))

Typical worst-case running times:
O(1)          Hashes; single operations; arithmetic on fixed-length integers.
O(log n)      Binary search in a sorted list or a BST.
O(n)          Search in an unsorted list; bucket sort; DFS/BFS.
O(n log n)    Quick/Shell/Merge sort; Dijkstra's.
O(n^2)        Bubble and Insertion sort; some DP problems (edit distance).
O(n^2.3-2.8)  Clever matrix multiplication.
O(n^3)        Naïve matrix multiplication.
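To tie one row of the table to actual code, here is binary search on a sorted list (a minimal sketch of our own): it performs O(log n) comparisons because each step halves the remaining range.

```python
# Binary search in a sorted list: O(log n) comparisons.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1                     # not present

print(binary_search([2, 3, 5, 7, 11, 13], 11))   # 4
print(binary_search([2, 3, 5, 7, 11, 13], 6))    # -1
```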
Data Structures

Important data structures you should know:
• Graphs (cyclic, acyclic, directed, undirected) – G = (V, E); V: the set of vertices, E: the set of edges
• Priority Queues
• Disjoint sets (Make-Set, Find-Set, Union)
• Binary Trees
• Linked Lists
• Hash Tables

[Sample problem figures not reproduced] Solution: C. Solution: D.

1. Exact and asymptotic analysis of specific algorithms

Sorting
• Bubble Sort – O(n^2). Stable; sorts in place. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which means the list is sorted. The algorithm puts at least one element in its final place on each pass over the list, and gets its name from the way smaller elements "bubble" to the top of the list via the swaps.
• Insertion Sort – O(n^2) worst case, O(n) best case. Stable. Similar to bubble sort, but more efficient, as it reduces element comparisons somewhat with each pass: each element is compared to the prior elements until a lesser element is found.
• Merge Sort – O(n log n) worst and average case. Stable. Requires O(n) additional space.
• Heap Sort – O(n log n) worst and average case. In-place; unstable. The heap places an ordering upon elements added to it, so that the largest elements are placed at the top of the heap (we term this heap a max-heap), or the smallest elements are placed at the top (a min-heap). The heap data structure also allows us to extract the minimum or maximum element of the heap, reordering what elements are left in it.
• Quick Sort – O(n log n) average, O(n^2) in the worst case. Unstable. The Quicksort algorithm uses a recursive divide and conquer strategy to sort a list. The steps are:
  1. Pick a pivot element from the list.
  2. Reorder the list so that all elements less than the pivot precede all elements greater than the pivot. This means that the pivot is in its final place. This step is commonly referred to as "partitioning."
  3. Recursively sort the sub-list of elements less than the pivot and the sub-list of elements greater than the pivot. If one of the sub-lists is empty or contains one element, it can be ignored.
• Radix Sort – O(n * k), where k is the average key length. Stable.
  o Take the least significant digit (or group of bits) of each key.
  o Sort the list of elements based on that digit, but keep the order of elements with the same digit (this is the definition of a stable sort).
  o Repeat the sort with each more significant digit.
  Example: 170, 45, 75, 90, 802, 24, 2, 66
  sorting by least significant digit (1s place) gives: 170, 90, 802, 2, 24, 45, 75, 66
  sorting by next digit (10s place) gives: 802, 2, 24, 45, 66, 170, 75, 90
  sorting by most significant digit (100s place) gives: 2, 24, 45, 66, 75, 90, 170, 802
• Bucket Sort – distribute the elements into buckets by value range, sort each bucket, then concatenate; O(n) expected time when the inputs are roughly uniformly distributed.

[Sample problem figure not reproduced] Solution: D.

String Algorithms

Bloom Filters

Graph Theory and Algorithms

Shortest Path

Dijkstra's Algorithm – O(V^2) with a simple array, or O((E + V) log V) with a binary heap. The algorithm works by keeping, for each vertex v, the cost d[v] of the shortest path found so far. Initially, this value is 0 for the source vertex s and infinity for all other vertices, representing the fact that we do not know any path leading to those vertices. When the algorithm finishes, d[v] will be the cost of the shortest path from s to v, or infinity if no such path exists.

Min-cut / Max-flow Theorem – the maximal amount of flow is equal to the capacity of a minimal cut.
• Randomized Min-cut
• Ford-Fulkerson Algorithm
• Edmonds-Karp Algorithm

Minimum Spanning Tree
• Kruskal's Algorithm – O(E lg V). Greedy. Starts with a forest which consists of n trees, each initially consisting of one node and nothing else. In every step of the algorithm, two different trees of this forest are connected into a bigger tree via the edge with the least cost (which means that we are still under a greedy policy). If the chosen edge connects nodes which belong to the same tree, it is rejected and not examined again, because it would produce a cycle, which would destroy our tree.
• Prim's Algorithm – only works on a connected graph with positive edge weights. Greedy. It builds upon a single partial minimum spanning tree, at each step adding an edge connecting the vertex nearest to, but not already in, the current partial minimum spanning tree.

2. Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)

Greedy Algorithms

Algorithms that, at each step, do whatever looks best at the time. They work on a restricted class of problems, but tend to be very efficient when they do work. GRE questions are unlikely to ask much more than definitions. E.g.: given a subspace of a vector space, to find a basis for it, just keep picking vectors out of the subspace that haven't been spanned by the vectors already chosen.
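Dijkstra's algorithm above is itself a classic greedy algorithm: at each step it permanently settles the cheapest reachable vertex. Here is a minimal heap-based sketch (our own code, not from the notes); with a binary heap it runs in O((E + V) log V):

```python
# Minimal Dijkstra sketch. graph maps each vertex to a list of (neighbor, weight).
import heapq

def dijkstra(graph, s):
    d = {v: float("inf") for v in graph}    # d[v]: cheapest cost found so far
    d[s] = 0
    pq = [(0, s)]
    while pq:
        dist, u = heapq.heappop(pq)
        if dist > d[u]:                      # stale queue entry; skip it
            continue
        for v, w in graph[u]:
            if dist + w < d[v]:              # found a cheaper path to v
                d[v] = dist + w
                heapq.heappush(pq, (d[v], v))
    return d

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(g, "s"))                      # {'s': 0, 'a': 1, 'b': 3}
```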
Divide and Conquer

A divide & conquer algorithm is one that, given a problem instance of size n, splits it into a cases of size n/b, recursively solves each of them, and then combines the results in f(n) time. The running time T(n) of the algorithm is then described by the recurrence relation

T(n) = a T(n/b) + f(n)

The Master Theorem fixes the asymptotic behavior of T as follows:
• If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
• If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
• If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a f(n/b) ≤ c f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

[Sample problem figure not reproduced] Solution: C.

Dynamic Programming

A vaguely-defined paradigm of approaching a problem instance of size n by solving all possible smaller subproblems, building up from the smallest cases to cases of size n. Common in string comparison algorithms, such as edit distance (UNIX diff), longest common subsequence, and string alignment (e.g., in biology). Consider DP whenever the problem looks otherwise difficult and the answer for any instance of size n depends on a small (e.g., linear) number of instances of size < n. A typical question may sketch a DP algorithm and ask for its running time.

Linear Programming

A Linear Programming problem is a set of linear inequalities and a real-valued "objective function" over a set of variables; a solution is a setting of the variables which maximizes (or minimizes) the objective function while still satisfying all of the inequalities. The set of feasible solutions is a polyhedron in n-dimensional space (if there are n variables), and it can be shown that an optimal solution can always be found at a vertex of the polyhedron (note that a 2-D polyhedron is just a polygon). Many problems have natural representations as linear or integer programs. A typical question may ask you to solve a simple LP problem.

Linear programs are solvable in worst-case polynomial time by a technique known as the ellipsoid method, but are in practice solved by the "simplex" method, which is fast on average but exponential in the worst case. If an extra constraint is added that all variables may only take on integer values, the problem becomes "Integer Programming" (and feasible solutions become restricted to grid points); solving integer programs, on the other hand, is NP-complete.

Simplex Algorithm – exponential in the worst case, but fast in practice.
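For intuition about the vertex property, a small LP can also be solved mechanically. The sketch below assumes SciPy is available and uses its scipy.optimize.linprog solver; the example numbers are our own, and on the exam you would instead reason your way to a vertex of the polygon by hand:

```python
# Maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x >= 0, y >= 0.
# linprog minimizes, so we negate the objective.
from scipy.optimize import linprog

res = linprog(c=[-1, -1],
              A_ub=[[1, 2], [3, 1]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)    # [1.6 1.2] 2.8 -- the optimum sits at a polygon
                          # vertex, where the two constraint lines intersect
```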
Probabilistic / Stochastic Algorithms
• Randomized Min-cut

Heuristic Algorithms

3. Upper and lower bounds on the complexity of specific problems

4. Computational complexity, including NP-completeness

Complexity Classes

For any given function f(n), you can define the classes TIME(f(n)), NTIME(f(n)), SPACE(f(n)), and NSPACE(f(n)) of languages which can be decided by a deterministic and non-deterministic Turing machine, respectively, using O(f(n)) time or space ("space" refers to memory, the number of cells used on the tape). Since non-determinism yields extra power, TIME(f(n)) ⊆ NTIME(f(n)) and SPACE(f(n)) ⊆ NSPACE(f(n)) are trivial, but not the other way around. It can be shown that NTIME(f(n)) ⊆ TIME(2^f(n)) and NSPACE(f(n)) ⊆ SPACE(f(n)^2) (Savitch's Theorem).

More general classes are defined for large groups of functions. P is defined as ∪_k TIME(n^k) – the set of all languages decidable in polynomial time by a TM – and NP is defined similarly for NTIME – the set of all languages decidable in polynomial time by an NDTM. coNP is the set of complements of languages in NP. PSPACE is defined similarly for polynomial space, and there are other classes, such as EXPTIME, NEXPTIME, and EXPSPACE, for larger functions.

NP-Completeness

A language which is in NP, and to which any language in NP can be reduced by a poly-time reduction, is called NP-Complete, since deciding this language in polynomial time would allow one to decide any language in NP in polynomial time. Most NP-completeness questions on the GRE concern solely the direction of this reduction: if you have a language A which is known to be NP-complete, and a language B in NP which may or may not be NP-complete, you show that B is NP-complete by reducing from A to B.

Many well-known combinatorial problems are known to be NP-complete. Here are some examples of NP-Complete problems you might encounter:
• Traveling Salesman
• 3-SAT, N-SAT (satisfiability)
• K-clique / Independent Set
• Vertex Cover
• Hamiltonian Path/Cycle
• Integer Linear Programming
• Subgraph Isomorphism

[Sample problem figures not reproduced] Solution: E. Solution: D.

B. Automata and language theory

1. Models of computation (finite automata, Turing machines)

(Non-)Deterministic Finite Automata ([N]DFA) / Regular Expressions

A DFA (Deterministic Finite Automaton) is a set of states with transitions defined between them, driven by input which comes in the form of symbols from an alphabet set. One state is distinguished as the starting state, and a subset of the states are considered accepting states. Whenever you're at some state, the next input symbol determines the exact next state you move to. If the machine is in an accepting state after processing some finite sequence of symbols (a string), it is said to accept this sequence. The set of strings accepted by an automaton is the language of the automaton; the automaton is also said to recognize this language.

Formally, a DFA is a 5-tuple (Q, Σ, δ, s, A) consisting of
• a finite set of states ( Q )
• a finite set called the alphabet ( Σ )
• a transition function ( δ : Q × Σ → Q )
• a start state ( s ∈ Q )
• a set of accept states ( A ⊆ Q )

[Figure: example DFA] The regular expression for the DFA pictured in the original notes is 1*(01*01*)*.
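The figure is not reproduced here, but the regular expression 1*(01*01*)* describes binary strings containing an even number of 0s, so the machine needs only two states, tracking the parity of the 0s seen so far. A simulator for that two-state DFA (our own encoding):

```python
# Two-state DFA for "even number of 0s", the language of 1*(01*01*)*.
delta = {("q0", "0"): "q1", ("q0", "1"): "q0",   # q0: even 0s so far (accepting)
         ("q1", "0"): "q0", ("q1", "1"): "q1"}   # q1: odd 0s so far
start, accept = "q0", {"q0"}

def accepts(s):
    state = start
    for ch in s:
        state = delta[(state, ch)]   # the next symbol fully determines the next state
    return state in accept

print(accepts("1010"))   # True: two 0s
print(accepts("10"))     # False: one 0
print(accepts(""))       # True: zero 0s (the empty string is accepted)
```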
An NFA (Nondeterministic Finite Automaton) is an automaton whose configuration at any point is a set of states rather than a single state: transitions from any given state on any given symbol can go to a set of states as well. NFAs also extend DFAs with epsilon transitions; on an epsilon transition, the machine can be in either of the connected states (as seen above, the state of an NFA is defined by the set of states it could possibly be in). The NFA accepts if there exists a path to an accepting state. Any NFA can be converted into an equivalent DFA, but sometimes at exponential cost. Although they seem more powerful, the class of languages described by NFAs is exactly that of DFAs and regular expressions / regular languages.

Languages described by regular expressions (or recognized by DFAs) are called regular languages. The most important feature of DFAs and NFAs is that they have a finite number of states and a finite alphabet. Consequently, you cannot use DFAs to do any kind of unbounded counting; e.g., you could not use them to detect when a string has the same number of 0s and 1s. Virtually always, a language is rendered non-regular due to its reliance on "counting" some aspect of the string; the standard example is the language of strings containing n a's followed by n b's. A condition known as the pumping lemma is often used to show that a language is not regular.

Typical question categories are:
o Given this automaton, find the value of a property of the language it recognizes (or generates).
o Given this language, which of these finite automata or regular expressions does it correspond to?
o Given an automaton (or a regular expression), which of these regular expressions (or automata, vice versa) is it equivalent to?
o Given this English-language description of a language, which is regular (or non-regular; remember the counting argument!)?

Regular Expressions

Regular expressions consist of constants and operators that denote sets of strings and operations over these sets. Given a finite alphabet Σ, the following constants are defined:
1. (empty set) ∅, denoting the set ∅
2. (empty string) ε, denoting the set {ε}
3. (literal character) a in Σ, denoting the set {"a"}

and the following operations:
1. (concatenation) RS, denoting the set { αβ | α in R and β in S }. For example, {"ab", "c"}{"d", "ef"} = {"abd", "abef", "cd", "cef"}.
2. (set union) R U S, denoting the set union of R and S.
3. (star) R*, denoting the smallest superset of R that contains ε and is closed under string concatenation; this is the set of all strings that can be made by concatenating zero or more strings in R. For example, {"ab", "c"}* = {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", …}.
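Python's re module implements exactly these three operations ('|' is union, juxtaposition is concatenation, '*' is star). The snippet below, our own illustration, checks the two example sets above:

```python
# The regular operations in Python's re syntax.
import re

concat = re.compile(r"^(ab|c)(d|ef)$")       # {"ab","c"}{"d","ef"}
for s in ["abd", "abef", "cd", "cef", "abcd"]:
    print(s, bool(concat.match(s)))          # the first four match; "abcd" does not

star = re.compile(r"^(ab|c)*$")              # {"ab","c"}*
print(bool(star.match("")))                  # True: star always contains the empty string
print(bool(star.match("abcab")))             # True: "ab" + "c" + "ab"
```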
Push-Down Automata / Context-Free Languages

If an NFA is also allowed to maintain a stack, with which it can perform either of the standard push or pop operations at each processing step, the resulting machine is known as a PDA (Pushdown Automaton). PDAs are basically DFAs fitted with an infinite stack, which can be used as another variable (along with the state and input) to determine the next move. PDAs can accept any language accepted by an NFA (can you see why?), but also some others. A language is accepted by a PDA if and only if it can be generated by a context-free grammar (CFG), defined as a set of non-terminals (or variables) and productions (or substitution rules) mapping single non-terminals to sequences of non-terminals and terminals (i.e., symbols in the alphabet). One non-terminal is designated as the start variable, and a string of terminals is said to be generated by the grammar if it can be derived from the string containing just the start variable by repeatedly applying substitution rules to replace non-terminals.

CFGs are typically heavily covered on the GRE (unlike PDAs). A version of the pumping lemma may also be used to show that a language is not recognizable by a CFG; often, a characteristic of such languages is their reliance on counting two independent quantities, and a classical example is a^n b^n c^n.

There are two kinds of push-down automata, PDAs and NPDAs. Usually when we speak of PDAs, we're talking about the NPDA, or non-deterministic variety, because standard PDAs cannot recognize all context-free languages. There are two possible acceptance criteria, acceptance with an empty stack and acceptance in a final state; they are equivalent in all respects.

An NPDA W can be defined as a 7-tuple W = (Q, Σ, Φ, σ, s, Ω, F), where
• Q is a finite set of states
• Σ is a finite set, the input alphabet
• Φ is a finite set, the stack alphabet
• σ is a finite transition relation, σ ⊆ (Q × (Σ ∪ {ε}) × Φ) × (Q × Φ*)
• s is an element of Q, the start state
• Ω is the initial stack symbol
• F is a subset of Q, consisting of the final states

[Figure: an example NPDA which accepts the language of binary strings with an equal number of 0's and 1's – impossible to do with a DFA]

Typical question categories are:
• Given this grammar, which of these strings does it generate? Or, given a string as well, how is it derived from the start symbol (what is its parse tree)?
• Given this string, which of these grammars generates it?

If substitution rules are allowed to map sequences of terminals and non-terminals to longer sequences of terminals and non-terminals, a more powerful class of grammars results, known as CSGs (Context-Sensitive Grammars). Any language generated by a CSG is recognized by an LBA (Linear Bounded Automaton). These entities are not well-studied, and are generally only addressed on the GRE by definition questions.

Turing Machines

If a DFA is allowed to write on a (1-way) unbounded tape and move back and forth on it, the resulting machine is called a Turing Machine. A Turing Machine is equivalent in computational power to, among other things, a Turing Machine with multiple (infinite) tapes, a RAM machine, most cellular automata, and PDAs with 2 or more stacks, and it is equally powerful to any other model of computation known to be physically implementable.
• There are three possible outputs for a TM: accept, reject, or infinitely loop. Any language recognized by a TM is referred to as recursively enumerable or Turing-recognizable.
• If the machine is guaranteed to terminate and reject any string it doesn't accept, it is said to decide its language, and its language is referred to as recursive or Turing-decidable.
• Any language recognized by a TM may be described by an unrestricted ("universal") grammar, which is identical to a CSG except that its rules are not constrained to map strings of non-terminals and terminals to longer ones.

Replacing the DFA with an NFA yields a Non-Deterministic Turing Machine, which can only recognize Turing-recognizable languages, but may potentially do so faster than a TM in some cases. It is believed that it is not possible to physically implement an NDTM.

2. Formal languages and grammars (regular and context free)

[Sample problem figures not reproduced] Solution: A. Solution: E.

3. Decidability

See the Turing Machines section.

C. Discrete structures

1. Mathematical logic

2. Elementary combinatorics and graph theory

3. Discrete probability, recurrence relations, and number theory

Countability

A set A is countable if it is either finite or there exists a one-to-one mapping between it and N (the natural numbers). Here are some examples:
• Countable:
  o the even natural numbers
  o the prime numbers
• Uncountable:
  o all real numbers in [0, 1)
  o the power set of N (or of anything infinite)
  o the Cantor set

[Sample problem figure not reproduced] Solution: D.

IV. OTHER TOPICS

This section tends to be random questions from a range of fields, including:
• Encryption
• Graphics

In the future we may include further sections on this.

V. REFERENCES

