CMSC 22240-1: Computer Architecture for Scientists (and other non-engineers)
Winter 2021, TuTh 1120-1240, Location: TBD
Book: Computer Architecture for Scientists (Cambridge University Press, in print Fall 2021)

The course provides an understanding of the key scientific ideas that underpin the extraordinary capabilities of today’s computers, including speed (gigahertz), the illusion of sequential order (relativity), dynamic locality (warping space), parallelism, keeping computers cheap and low-energy (e-field scaling), and of course their ability as universal information processing engines. These scientific principles give software architects, engineers, data scientists, and other computation users a model for reasoning about the performance of their programs on laptops, accelerators (GPUs), servers, and the cloud. Building on these principles, we create principled performance models for dynamic locality (caches), scaleup (parallelism), and scaleout (cloud parallelism), arming computer users with a high-level performance model for computing. Finally, the course gives the many computer scientists practicing data science, software development, or machine learning a longer-term understanding of computer capabilities, performance, and limits.
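As a concrete illustration of the kind of principled performance models the course builds, the short Python sketch below evaluates two standard ones: average memory access time (a dynamic-locality/cache model) and an Amdahl's-law speedup bound (a scaleup/scaleout model). The parameter values used (1 ns hit time, 5% miss rate, 100 ns miss penalty, 90% parallel fraction) are illustrative assumptions, not course data.

    # Minimal sketch of two simple performance models of the kind built in the
    # course; all parameter values below are illustrative assumptions.

    def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
        """Average memory access time for a single cache level (dynamic locality)."""
        return hit_time_ns + miss_rate * miss_penalty_ns

    def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
        """Amdahl's-law speedup bound for scaleup/scaleout on n processors."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_processors)

    if __name__ == "__main__":
        # A 1 ns cache hit, 5% miss rate, 100 ns DRAM penalty -> 6 ns average access.
        print(f"AMAT: {amat(1.0, 0.05, 100.0):.1f} ns")
        # A program that is 90% parallelizable tops out near 10x, even on 1000 cores.
        for cores in (4, 64, 1000):
            print(f"speedup on {cores} cores: {amdahl_speedup(0.9, cores):.2f}x")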

Students may not receive credit for both CMSC 22200 and CMSC 22240. This course serves as an optional prerequisite for CMSC 23000, in parallel with other courses.

Students will use a draft of the upcoming book for readings and lecture discussion. They will learn the basics of computer architecture, carry out case studies in performance and scalability, and explore connections between computer performance scaling principles and other scientific principles. Exercises and labs apply the performance models derived from these scientific principles, teaching students to reason about performance and scalability in computer architecture and systems. Students will also learn the implications for computing systems (software and applications), enabling them to reason about energy, power, and performance in systems ranging from smartphones to cloud datacenters.

    Canvas for this course is HERE

    Syllabus

    • Introduction: Computers are ubiquitous, and amazingly powerful and cheap
    • Basics of Computer Instruction Sets, Assembly and High-level Languages
    • Small is Fast! (and Scaling; see the latency sketch after this list)
    • The Sequential Computing Abstraction, Parallel Implementation
    • Exploiting Dynamic Locality (Caches and more)
    • Beyond Sequential: Parallelism and Scaleout (Multicore and the Cloud)
    • Accelerators: GPUs, Machine Learning Accelerators, and more!
    • Computing Futures
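    To make "Small is Fast!" concrete, the back-of-the-envelope Python sketch below computes how far a signal can travel in one clock cycle at the speed of light (an upper bound; real on-chip wires are slower). The clock rates chosen are illustrative assumptions.

        # Minimal back-of-the-envelope sketch of "Small is Fast!": how far a
        # signal can travel in one clock cycle. Clock rates are illustrative.

        SPEED_OF_LIGHT_M_PER_S = 3.0e8   # upper bound; on-chip wires are slower

        def distance_per_cycle_cm(clock_ghz: float) -> float:
            """Distance light travels in one clock period, in centimeters."""
            period_s = 1.0 / (clock_ghz * 1e9)
            return SPEED_OF_LIGHT_M_PER_S * period_s * 100.0

        if __name__ == "__main__":
            for ghz in (1.0, 3.0, 5.0):
                print(f"{ghz} GHz: at most {distance_per_cycle_cm(ghz):.1f} cm per cycle")
            # At 3 GHz, even light covers only ~10 cm per cycle, so fast structures
            # (registers, L1 caches) must be physically small and close to the core.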
    Coursework: Assignments
    • Assignment #1: Computing and Society, Scaling impact. Simple assembly
    • Assignment #2: Instruction Execution, Size Scaling for Computers
    • Assignment #3: Sequential Abstraction, Renaming, Cheating Sequence
    • Assignment #4: Dynamic Locality (for large Memories)
    • Assignment #5: Reuse Distance (see the sketch after this list), Parallelism and Scaleout (Multicore and Cloud)
    • Assignment #6: Accelerators
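    As a taste of the reuse-distance idea named in Assignment #5, the Python sketch below computes, for each memory reference in a trace, how many distinct addresses were touched since the previous reference to the same address; a fully associative LRU cache hits a reference exactly when its capacity exceeds this distance. The example trace is an illustrative assumption, not assignment data.

        # Minimal sketch of reuse distance: for each memory reference, count how
        # many *distinct* addresses were touched since the previous reference to
        # the same address. The trace below is illustrative only.

        from typing import Hashable, List, Optional

        def reuse_distances(trace: List[Hashable]) -> List[Optional[int]]:
            """Return the reuse distance of each reference (None for a first use)."""
            last_seen = {}              # address -> index of its most recent reference
            distances = []
            for i, addr in enumerate(trace):
                if addr in last_seen:
                    # Distinct addresses referenced strictly between the two uses.
                    between = set(trace[last_seen[addr] + 1 : i])
                    distances.append(len(between))
                else:
                    distances.append(None)   # cold (first) reference
                last_seen[addr] = i
            return distances

        if __name__ == "__main__":
            trace = ["a", "b", "c", "a", "b", "b", "a"]
            # -> [None, None, None, 2, 2, 0, 1]; a fully associative LRU cache with
            # capacity > d hits every reference whose reuse distance is d.
            print(reuse_distances(trace))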
    Coursework: In-class Discussion and Quizzes
    Students will be expected to keep up with readings and contribute to in-class discussions. There will be 2-3 in-class quizzes.
Instructor: Andrew A. Chien, Large-Scale Systems Group