ECS 250C: Parallel Architecture

Time: MW 4:10-5:30
Units: 4
Room: Bainer 1062
Prerequisites: 250A or permission of the instructor
Instructor: Prof. Fred Chong
Office Hours: MW 3:10-4

Parallel architectures have evolved from special-purpose machines to commodity servers. This course will emphasize recent machines and the applications that drive them. Readings will consist of current research papers. Grading will be based primarily upon a final paper and presentation, which must critique three related readings and extend their area with a small project.

Tentative lecture topics include:

  • Programming models
  • Active messages
  • Synchronization
  • Commodity components
  • Shared-memory multiprocessors
  • Scientific applications
  • Network and database applications
  • Shared memory versus message passing
  • User-level shared memory protocols
  • Message co-processors
  • User-extensible operating systems
  • Networks of workstations
  • Clusters of symmetric multiprocessors
  • Processors/Logic in Memory

    Grading

  • 20% Discussion Topic
  • 20% Draft Project Paper
  • 20% Project Presentation
  • 40% Final Project Paper

    Project Information

    Deadlines:
  • 4/12 Project interests by e-mail
  • 4/26 2-page proposals
  • 5/31 Draft of project paper (20% of course grade)
  • 6/3,5,6 20-minute project presentations (20% of course grade)
  • 6/7 Final Paper Due (40% of course grade)

    Here is an example project paper. The project has two goals:

  • A critique of three related research papers. This is not a book report. Do not just summarize what is in the papers. Point out shortcomings and possible areas for extension.
  • Extension of the area. Address shortcomings or extend the work in the papers. Come up with some ideas and test them with a short project. This can be in the form of some simple analysis, study of application attributes, small machine simulations, or implementation on parallel machines. Remember to pick something that will fit in a quarter.

    Ideally, both goals would be well-addressed in a project. Since we only have a quarter, however, you may emphasize one or the other.


    Lectures

  • Lecture 1 (4/1/02): Introduction and Organizational Meeting


  • Lecture 2 (4/3/02): Overview of Parallel Architectures

    Reading for next time (presentation by Timur Ismagilov): How to Get Good Performance from the CM-5 Data Network [Brewer and Kuszmaul 94].

    Additional References (optional): The CM-5 Data Network [Leiserson et al 95].


  • Lecture 3 (4/8/02): The CM-5 and Programming for Network Performance

    Reading for next time: Active Messages [von Eicken et al 92].
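    The core idea of the Active Messages reading can be sketched in a few lines: each message carries the address of a handler that runs immediately on arrival to integrate the data into the computation, rather than being buffered for a later receive. The following Python sketch is illustrative only (the names `am_send` and `poll` and the deque-as-network are my assumptions, not the paper's interface); it shows the handler-carrying-message structure and the polling discipline discussed in lecture.

```python
# Hedged sketch of the active-message idea: a message names its own
# handler, which runs on arrival instead of waiting for a receive().
from collections import deque

network = deque()  # stands in for the NIC's incoming message queue

def am_send(handler, *args):
    """Inject a message carrying its handler and arguments."""
    network.append((handler, args))

def poll():
    """Drain pending messages, invoking each handler inline --
    the polling alternative to interrupt-driven delivery."""
    while network:
        handler, args = network.popleft()
        handler(*args)

# Example: a remote accumulate expressed as an active-message handler.
counter = {"value": 0}

def add_handler(amount):
    counter["value"] += amount

am_send(add_handler, 5)
am_send(add_handler, 7)
poll()
print(counter["value"])  # 12
```

    In a real implementation the handler runs at the receiving processor with interrupts or polling deciding when; here a single address space makes the control flow visible.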


  • Lecture 4 (4/10/02): Active Messages, Polling, and Interrupts

    Reading for next time (presented by Takashi Ishihara): Reactive Synchronization Algorithms [Lim and Agarwal 94].
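    The reactive-synchronization reading compares spin locks against queue locks under varying contention. As background, here is a minimal sketch of a test-and-test-and-set spin lock, the classic low-contention protocol in that design space. All names are mine, and Python's global interpreter lock means this only illustrates the algorithm's structure, not real spinning behavior.

```python
# Hedged sketch of a test-and-test-and-set spin lock: spin on a plain
# read first, and attempt the (simulated) atomic test&set only when
# the lock appears free.
import threading

class TTASLock:
    def __init__(self):
        self._held = False
        self._guard = threading.Lock()  # stands in for hardware test&set

    def _test_and_set(self):
        with self._guard:
            old = self._held
            self._held = True
            return old

    def acquire(self):
        while True:
            while self._held:   # the "test-and-" read-only spin
                pass
            if not self._test_and_set():
                return

    def release(self):
        self._held = False

lock = TTASLock()
total = 0

def worker():
    global total
    for _ in range(500):
        lock.acquire()
        total += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 1000
```

    The read-only spin keeps waiting processors in their caches; a reactive algorithm would switch to a queue lock when contention makes even this spinning wasteful.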


  • Lecture 5 (4/15/02): Synchronization

    Reading for next time (presented by Debbie Walker): LogP: Towards a Realistic Model of Parallel Computation [Culler et al 93].
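    The LogP reading models a message-passing machine with four parameters: L (network latency), o (per-message processor overhead), g (minimum gap between successive message injections), and P (processor count). A minimal sketch of how the first three compose, assuming small fixed-size messages (the function names and numeric values are mine, chosen for illustration, not taken from the paper):

```python
# Hedged sketch of LogP cost accounting for small messages.
# L = latency, o = send/receive overhead, g = injection gap.

def ptp_time(L, o):
    """End-to-end time for one message: send overhead,
    network latency, then receive overhead."""
    return o + L + o

def k_message_time(k, L, o, g):
    """Time until a receiver has handled k back-to-back messages.
    The sender can inject one message every max(g, o) cycles; the
    last message then takes o + L + o to arrive and be processed."""
    return (k - 1) * max(g, o) + ptp_time(L, o)

# Illustrative parameters (not measured on any real machine):
print(ptp_time(L=6, o=2))                 # 2 + 6 + 2 = 10
print(k_message_time(4, L=6, o=2, g=4))   # 3*4 + 10 = 22
```

    The point of the model is that whichever of g or o is larger bounds sustainable message rate, independent of L, which guides algorithm design more than a latency-only model would.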


  • Lecture 6 (4/17/02): Predicting Performance

    Reading for next time (presented by Jing Tong): SafetyNet: Improving the Availability of Shared Memory Multiprocessors with Global Checkpoint/Recovery [Sorin et al 02].


  • Lecture 7 (4/22/02): Shared Memory Protocols and System Reliability

    Reading for next time: The Case for Intelligent RAM: IRAM [Patterson et al 97] (presented by Felix DeGrood).
    The Energy Efficiency of IRAM Architectures [Fromm et al 97] (presented by Bart Zeydel).


  • Lecture 8 (4/24/02): Intelligent Memory Systems

    Reading for next time (presented by Xiao Yan Yu): Piranha: A Scalable Architecture Based on Single-Chip Multiprocessing [Barroso et al 00].


  • Lecture 9 (4/29/02): Chip Multiprocessors

    Reading for next time:
    Simultaneous Multithreading: Maximizing On-Chip Parallelism [Tullsen et al 95] (presented by Runzhen Huang).
    Tuning Compiler Optimizations for Simultaneous Multithreading [Lo et al 97] (presented by Chris Lupo).


  • Lecture 10 (5/1/02): Simultaneous Multithreading Processors

    Reading for next time:
    Compiler Technology for Machine-Independent Parallel Programming [Kennedy 94] (presented by Ivan Balepin).


  • Lecture 11 (5/6/02): Parallelizing Compilers

    Reading for next time (presented by Greg Streletz): The Anatomy of the Grid: Enabling Scalable Virtual Organizations [Foster et al 01].


  • Lecture 12 (5/8/02): Grid Computation

    Reading for next time:
    PixelFlow: The Realization [Eyles et al 97] (presented by Fan-Yin Tzeng).
    Imagine: Media Processing with Streams [Khailany et al 01] (presented by Karim Mahrous).


  • Lecture 13 (5/13/02): Graphics Architectures

    Reading for next time (presented by Brian Carmichael): Configurable Computing Solutions for Automatic Target Recognition [Villasenor et al 96].


  • Lecture 14 (5/15/02): Pattern Recognition

    Reading for next time (presented by Jeremy Brown): MGS: A Multigrain Shared Memory System [Yeung, Kubiatowicz, Agarwal 96].


  • Lecture 15 (5/20/02): Clusters of SMPs

    Reading for next time (presented by Serban Porumbescu): The Anatomy of a Large-Scale Hypertextual Web Search Engine [Brin and Page 98].


  • Lecture 16 (5/22/02): Scalable Cluster Computing

    Reading for next time (presented by Keith Mehl): Chapter 1 of Preskill's Lecture Notes on Quantum Computation.

    Additional material available at Preskill's Physics 229 website.


  • Lecture 18 (5/29/02): Quantum Computing (Guest Lecture)


  • Project Presentations (6/3/02):

    CMP technology trends - Bart Zeydel and Brian Carmichael
    Comparative Study of SMT and CMP Architectures - Xiao Yan Yu and Chris Lupo
    Modified SafetyNet - Jing Tong and Ivan Balepin


  • Project Presentations (6/5/02):

    Parallel Volume Rendering - Runzhen Huang and Fan-Yin Tzeng
    Survey of Parallel Graphics Architectures - Serban Porumbescu and Karim Mahrous
    ??? - Jeremy Brown


  • Project Presentations (6/6/02):

    Architectures for Graph Operations: A Performance Model - Keith Mehl, Debbie May, and Greg Streletz
    Bandwidth Expansion of Data Buses Through Data Compression - Felix DeGrood and Takashi Ishihara
    Sensor Networks - Timur Ismagilov



    Last updated May 22, 2002
    chong@cs.ucdavis.edu