John Reppy's Publications

Chronological List


3CPS: The Design of an Environment-focussed Intermediate Representation.
Benjamin Quiring, John Reppy, and Olin Shivers. In 33rd International Symposium on Implementation and Application of Functional Languages (IFL 2021), September 2021. Paper accepted for the conference post-proceedings.

We describe the design of 3CPS, a compiler intermediate representation (IR) we have developed for use in compiling call-by-value functional languages such as SML, OCaml, Scheme, and Lisp. The language is a low-level form designed in tandem with a matching suite of static analyses. It reflects our belief that the core task of an optimising compiler for a functional language is to reason about the environment structure of the program. Our IR is distinguished by the presence of extent annotations, added to all variables (and verified by static analysis). These annotations are defined in terms of the semantics of the IR, but they directly tell the compiler what machine resources are needed to implement the environment structure of each annotated variable.
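
As a rough illustration of the idea (our own hypothetical miniature, not the actual 3CPS definition), an IR in this style attaches an extent to every variable binding:

    (* Hypothetical miniature of a CPS-style IR with extent-annotated
     * variables; the real 3CPS IR is considerably more refined. *)
    datatype extent
      = HEAP    (* may escape; needs a heap slot *)
      | STACK   (* obeys a LIFO lifetime; a stack slot suffices *)
      | REG     (* dead by the next call; a register suffices *)

    type var = string * extent    (* every variable carries its extent *)

    datatype exp
      = LetPrim of var * string * var list * exp   (* let x = prim(args) in e *)
      | LetFun of var * var list * exp * exp       (* let f(params) = body in e *)
      | App of var * var list                      (* CPS application; no return *)
      | If of var * exp * exp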

A New Backend for Standard ML of New Jersey.
Kavon Farvardin and John Reppy. In 32nd International Symposium on Implementation and Application of Functional Languages (IFL 2020), September 2020. Awarded the Peter Landin Prize for best paper.

This paper describes the design and implementation of a new backend for the Standard ML of New Jersey (SML/NJ) system that is based on the LLVM compiler infrastructure. We first describe the history and design of the current backend, which is based on the MLRisc framework. While MLRisc has many similarities to LLVM, it provides a lower-level, policy-agnostic approach to code generation that enables customization of the code generator for non-standard runtime models (e.g., register pinning, calling conventions, etc.). In particular, SML/NJ uses a stackless runtime model based on continuation-passing style with heap-allocated continuation closures. This feature, among others, poses challenges to building a backend using LLVM. We describe these challenges and how we address them in our backend.

From Folklore to Fact: Comparing Implementations of Stacks and Continuations.
Kavon Farvardin and John Reppy. In Proceedings of the SIGPLAN 2020 Conference on Programming Language Design and Implementation, pages 75--90, New York, NY, June 2020. ACM. Awarded Distinguished Paper.

The efficient implementation of function calls and non-local control transfers is a critical part of modern language implementations and is important in the implementation of everything from recursion, higher-order functions, concurrency and coroutines, to task-based parallelism. In a compiler, these features can be supported by a variety of mechanisms, including call stacks, segmented stacks, and heap-allocated continuation closures.

An implementor of a high-level language with advanced control features might ask the question “what is the best choice for my implementation?” Unfortunately, the current literature does not provide much guidance, since previous studies suffer from various flaws in methodology and are outdated for modern hardware. In the absence of recent, well-normalized measurements and a holistic overview of their implementation specifics, the path of least resistance when choosing a strategy is to trust folklore, but the folklore is also suspect.

This paper attempts to remedy this situation by providing an “apples-to-apples” comparison of six different approaches to implementing call stacks and continuations. This comparison uses the same source language, compiler pipeline, LLVM-backend, and runtime system, with the only differences being those required by the differences in implementation strategy. We compare the implementation challenges of the different approaches, their sequential performance, and their suitability to support advanced control mechanisms, including supporting heavily threaded code. In addition to the comparison of implementation strategies, the paper's contributions also include a number of useful implementation techniques that we discovered along the way.
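
For concreteness, the kind of first-class control transfer that all six strategies must support can be written against SML/NJ's first-class continuation interface (a minimal example of ours, not code from the paper):

    (* early exit from a traversal via a captured continuation *)
    open SMLofNJ.Cont

    fun product (xs : int list) = callcc (fn k => let
          fun loop [] = 1
            | loop (0 :: _) = throw k 0    (* non-local exit on zero *)
            | loop (x :: r) = x * loop r
          in
            loop xs
          end)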

The History of Standard ML.
David MacQueen, Robert Harper, and John Reppy. Proceedings of the ACM on Programming Languages, 4(HOPL), June 2020.

The ML family of strict functional languages, which includes F#, OCaml, and Standard ML, evolved from the Meta Language of the LCF theorem proving system developed by Robin Milner and his research group at the University of Edinburgh in the 1970s. This paper focuses on the history of Standard ML, which plays a central rôle in this family of languages, as it was the first to include the complete set of features that we now associate with the name “ML” (i.e., polymorphic type inference, datatypes with pattern matching, modules, exceptions, and mutable state).

Standard ML, and the ML family of languages, have had enormous influence on the world of programming language design and theory. ML is the foremost exemplar of a functional programming language with strict evaluation (call-by-value) and static typing. The use of parametric polymorphism in its type system, together with the automatic inference of such types, has influenced a wide variety of modern languages (where polymorphism is often referred to as generics). It has popularized the idea of datatypes with associated case analysis by pattern matching. The module system of Standard ML extends the notion of type-level parameterization to large-scale programming with the notion of parametric modules, or functors.
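
For readers unfamiliar with the language, a few lines of Standard ML illustrate the features just named (a generic example, not taken from the paper): a polymorphic datatype with pattern-matching case analysis, and a parametric module.

    (* a polymorphic datatype; size has the inferred type 'a tree -> int *)
    datatype 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

    fun size Leaf = 0
      | size (Node (l, _, r)) = size l + 1 + size r

    (* a functor: a module parameterized over an element type and its ordering *)
    functor SetFn (type elem
                   val compare : elem * elem -> order) =
      struct
        type set = elem list
        val empty : set = []
        fun member (x, s) = List.exists (fn y => compare (x, y) = EQUAL) s
        fun add (x, s) = if member (x, s) then s else x :: s
      end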

Standard ML also set a precedent by being a language whose design included a formal definition with an associated metatheory of mathematical proofs (such as soundness of the type system). A formal definition was one of the explicit goals from the beginning of the project. While some previous languages had rigorous definitions, these definitions were not integral to the design process, and the formal part was limited to the language syntax and possibly dynamic semantics or static semantics, but not both.

The paper covers the early history of ML, the subsequent efforts to define a standard ML language, and the development of its major features and its formal definition. We also review the impact that the language had on programming-language research.

Point Movement in a DSL for Higher-Order FEM Visualization.
Teodoro Collin, Charisee Chiw, L. Ridgway Scott, John Reppy, and Gordon Kindlmann. In Proceedings of the 2019 IEEE Visualization Conference (VIS '19), pages 281--285, October 2019.

Scientific visualization tools tend to be flexible in some ways (e.g., for exploring isovalues) while restricted in other ways, such as working only on regular grids, or only on unstructured meshes (as used in the finite element method, FEM). Our work seeks to expose the common structure of visualization methods, apart from the specifics of how the fields being visualized are formed. Recognizing that previous approaches to FEM visualization depend on efficiently updating computed positions within a mesh, we took an existing visualization domain-specific language, and added a mesh position type and associated arithmetic operators. These are orthogonal to the visualization method itself, so existing programs for visualizing regular grid data work, with minimal changes, on higher-order FEM data. We reproduce the efficiency gains of an earlier guided search method of mesh position update for computing streamlines, and we demonstrate a novel ability to uniformly sample ridge surfaces of higher-order FEM solutions defined on curved meshes.

Shapes and Flattening.
John Reppy and Joe Wingerter. In 31st International Symposium on Implementation and Application of Functional Languages (IFL 2019), New York, NY, September 2019. ACM.

Nesl is a first-order functional language with an apply-to-each construct and other parallel primitives that enable the expression of irregular nested data-parallel (NDP) algorithms. To compile Nesl, Blelloch and others developed a global flattening transformation that maps irregular NDP code into regular flat data parallel (FDP) code suitable for executing on SIMD or SIMT architectures, such as GPUs.

While flattening solves the problem of mapping irregular parallelism into a regular model, it requires significant additional optimizations to produce performant code. Nessie is a compiler for Nesl that generates CUDA code for running on Nvidia GPUs. The Nessie compiler relies on a fairly complicated shape analysis that is performed on the FDP code produced by the flattening transformation. Shape analysis plays a key rôle in the compiler as it is the enabler of fusion optimizations, smart kernel scheduling, and other optimizations.

In this paper, we present a new approach to the shape analysis problem for Nesl that is both simpler to implement and provides better quality shape information. The key idea is to analyze the NDP representation of the program and then preserve shape information through the flattening transformation.
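
To illustrate what flattening does (our own sketch, written in SML rather than Nesl), an irregular nested sequence is represented by a flat data vector plus a segment descriptor, so a nested apply-to-each becomes a single flat map:

    (* [[1, 2], [3], []] is represented by lengths [2, 1, 0] over data [1, 2, 3] *)
    type 'a nested = {segdes : int list, data : 'a list}

    (* the Nesl expression {{x * x : x in xs} : xs in xss} flattens to one
     * map over the flat data; the segment descriptor is unchanged *)
    fun squareAll ({segdes, data} : int nested) : int nested =
          {segdes = segdes, data = List.map (fn x => x * x) data}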

Compiling Successor ML Pattern Guards.
John Reppy and Mona Zahir. In Proceedings of the 2019 ACM SIGPLAN ML Family Workshop, August 2019.

Successor ML is a collection of proposed language extensions to Standard ML. A number of these extensions address pattern matching, including richer record patterns, or-patterns, and pattern guards. Pattern guards in Successor ML are more general than those found in other languages, which raises some interesting implementation issues.

This paper describes the approach to pattern guards that we are developing as part of an effort to add Successor ML features to the Standard ML of New Jersey system. We present our approach in a way that is applicable to either backtracking or decision-tree implementations of pattern matching.
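
As a sketch of the feature (using the `pat if exp` guarded-pattern syntax of the Successor ML proposal; the details here are illustrative), the extra generality comes from a guard being itself a pattern, so it may occur nested inside a larger pattern rather than only at the top of a match rule:

    (* a top-level guard, as found in other languages *)
    fun sign (x if x > 0) = 1
      | sign 0 = 0
      | sign _ = ~1

    (* a nested guard, which Successor ML permits: matching can fail
     * (and fall through to the next rule) part-way into a pattern *)
    fun firstNeg ((x if x < 0) :: _) = SOME x
      | firstNeg (_ :: r) = firstNeg r
      | firstNeg [] = NONE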

Rendering and Extracting Extremal Features in 3D Fields.
Gordon L. Kindlmann, Charisee Chiw, Tri Huynh, Attila Gyulassy, John Reppy, and PT Bremer. Computer Graphics Forum (Proceedings of the Eurographics/IEEE-VGTC Conference on Visualization “EuroVis”), 37(3):525--536, June 2018. Awarded Best Paper.

Visualizing and extracting three-dimensional features is important for many computational science applications, each with their own feature definitions and data types. While some are simple to state and implement (e.g., isosurfaces), others require more complicated mathematics (e.g., multiple derivatives, curvature, eigenvectors, etc.). Correctly implementing mathematical definitions is difficult, so experimenting with new features requires substantial investments. Furthermore, traditional interpolants rarely support the necessary derivatives, and approximations can reduce numerical stability. Our new approach directly translates mathematical notation into practical visualization and feature extraction, with minimal mental and implementation overhead. Using a mathematically expressive domain-specific language, Diderot, we compute direct volume renderings and particle-based feature samplings for a range of mathematical features. Non-expert users can experiment with feature definitions without any exposure to meshes, interpolants, derivative computation, etc. We demonstrate high-quality results on notoriously difficult features, such as ridges and vortex cores, using working code simple enough to be presented in its entirety.

Compiling with Continuations and LLVM.
Kavon Farvardin and John Reppy. In Kenichi Asai and Mark Shinwell, editors, Proceedings ML Family Workshop / OCaml Users and Developers workshops (September 22-23, 2016), Volume 285 of Electronic Proceedings in Theoretical Computer Science, pages 131--142. Open Publishing Association, 2018.

LLVM is an infrastructure for code generation and low-level optimizations, which has been gaining popularity as a backend for both research and industrial compilers, including many compilers for functional languages. While LLVM provides a relatively easy path to high-quality native code, its design is based on a traditional runtime model which is not well suited to alternative compilation strategies used in high-level language compilers, such as the use of heap-allocated continuation closures.

This paper describes a new LLVM-based backend that supports heap-allocated continuation closures, which enables constant-time callcc and very-lightweight multithreading. The backend has been implemented in the Parallel ML compiler, which is part of the Manticore system, but the results should be useful for other compilers, such as Standard ML of New Jersey, that use heap-allocated continuation closures.
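
To suggest why heap-allocated continuations make threads so cheap, here is a toy round-robin scheduler in SML/NJ, where a suspended thread is nothing more than a captured continuation (our sketch, not Manticore's runtime):

    open SMLofNJ.Cont

    val readyQ : unit cont list ref = ref []

    fun enqueue k = readyQ := !readyQ @ [k]

    (* resume the next ready thread, if any *)
    fun dispatch () = (case !readyQ
           of [] => ()
            | k :: r => (readyQ := r; throw k ()))

    (* run f as a new thread; the caller is suspended onto the ready queue *)
    fun spawn (f : unit -> unit) = callcc (fn ret => (
          enqueue ret;
          f () handle _ => ();
          dispatch ()))

    (* voluntarily hand the processor to the next ready thread *)
    fun yield () = callcc (fn k => (enqueue k; dispatch ()))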

DATm: Diderot's Automated Testing Model.
Charisee Chiw, Gordon Kindlmann, and John Reppy. In The 12th IEEE/ACM International Workshop on Automation of Software Test (AST 2017), pages 45--51, May 2017.

Diderot is a parallel domain-specific language for the analysis and visualization of multidimensional scientific images, such as those produced by CT and MRI scanners. Diderot is designed to support algorithms that are based on differential tensor calculus, and it provides a higher-order mathematical model that allows direct manipulation of tensor fields. One of the main challenges of the Diderot implementation is bridging this semantic gap by effectively translating the high-level mathematical notation of tensor calculus into efficient low-level code in the target language.

A key question for a high-level language, such as Diderot, is how we know that the implementation is correct. We have previously presented and defended a core set of rewriting rules, but the full translation from source to executable requires much more work. In this paper, we present DATm, Diderot's automated testing model, which checks the correctness of the core operations in the programming language. DATm can automatically create test programs and predict what the outcome should be. We measure the accuracy of the computations written in the Diderot language, based on how accurately the output of the program represents the mathematical equivalent of the computations.

This paper describes a model for testing a high-level language for correctness. It introduces the pipeline for DATm, a tool that can automatically create and test tens of thousands of Diderot test programs and that has found numerous bugs. We make a case for the necessity of extensive testing by describing bugs that are deep in the compiler and could only be found with a unique application of operations. Lastly, we demonstrate that the model can be used to create other types of tests by visual verification.

Compiling with continuations and LLVM.
Kavon Farvardin and John Reppy. In Proceedings of the 2016 ACM SIGPLAN Workshop on ML, September 2016.

This paper describes a new LLVM-based backend for the Parallel ML compiler (part of the Manticore system). This backend is novel in that it supports heap-allocated first-class continuations (a first for LLVM), which, in turn, enables language features such as callcc, lightweight concurrency mechanisms, and PML's parallelism features.

λcu --- An Intermediate Representation for Compiling Nested Data Parallelism.
John Reppy and Joe Wingerter. Presented at the Compilers for Parallel Computing Workshop (CPC '16), July 2016. Valladolid, Spain.

Modern GPUs provide supercomputer-level performance at commodity prices, but they are notoriously hard to program. GPUs enable a vast degree of parallelism, but only a small set of control-flow and data access patterns are efficient to run on GPU architectures. In order to provide programmers with a familiar, high-level programming paradigm that nonetheless maps efficiently to GPU hardware, we have been exploring the use of Nested Data Parallelism (NDP), specifically the first-order functional language NESL.

NESL, originally designed for SIMD architectures, is a functional language with an apply-to-each construct and other parallel primitives that enable the expression of irregular parallel algorithms; Blelloch and others developed a global flattening transformation that maps irregular NDP code into regular flat data parallel (FDP) code suitable for SIMD execution. Our prior work on the Nessie compiler targeted SIMT GPUs via CUDA, establishing the feasibility of such a translation, but with poor performance compared to tuned CUDA implementations by human experts, primarily due to allocation of and memory traffic to temporary arrays.

In this work, we focus on a compiler IR, called λcu that we have designed to support effective optimization of the FDP code produced by the flattening transformation. λcu is a three-level language consisting of a top-level representation for the CPU-level control flow, a mid-level language for representing the iteration structure of GPU kernels, and a low-level language for representing the computations performed on the GPU.

λcu facilitates fusion of parallel operations by expressing iteration structures with a language of combinators, which obey a set of fusion rules described in this paper. Some fusion optimizations are mutually exclusive, so, following Robinson et al., we use an ILP solver to determine the optimal fusion of kernels and then perform the recommended fusions. Final generation of CUDA code is performed on each fused group of combinators, linking CUDA kernels using a backbone of generated C++ that directs program progress on the CPU.

Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.
Gordon Kindlmann, Charisee Chiw, Lamont Samuels, Nick Seltzer, and John Reppy. IEEE Transactions on Visualization and Computer Graphics, pages 867--876, October 2015.

Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

Nessie: A NESL to CUDA Compiler.
John Reppy and Nora Sandler. Presented at the Compilers for Parallel Computing Workshop (CPC '15), January 2015. Imperial College, London, UK.

Modern GPUs provide supercomputer-level performance at commodity prices, but they are notoriously hard to program. To address this problem, we have been exploring the use of Nested Data Parallelism (NDP), and specifically the first-order functional language Nesl, as a way to raise the level of abstraction for programming GPUs. This paper describes a new compiler for the Nesl language that generates CUDA code. Specifically, we describe three aspects of the compiler that address some of the challenges of generating efficient NDP code for GPUs.

Bulk-Synchronous Communication Mechanisms in Diderot.
John Reppy and Lamont Samuels. Presented at the Compilers for Parallel Computing Workshop (CPC '15), January 2015. Imperial College, London, UK.

Diderot is a parallel domain-specific language designed to provide biomedical researchers with a high-level mathematical programming model, where they can use familiar tensor calculus notations directly in code without dealing with underlying low-level implementation details. These mathematical operations are executed as parallel independent computations. We use a bulk synchronous parallel (BSP) model to execute these independent computations as autonomous lightweight threads called strands. The current BSP model of Diderot limits strand creation to initialization time and does not provide any mechanisms for communicating between strands. For algorithms, such as particle systems, where strands are used to explore the image space, it is useful to be able to create new strands dynamically and to share data between strands.

In this paper, we present an updated BSP model with three new features: a spatial mechanism that retrieves nearby strands based on their geometric position in space, a global mechanism for global computations (i.e., parallel reductions) over sets of strands, and a mechanism for dynamically allocating new strands. We also illustrate through examples how to express these features in the Diderot language. More generally, by providing a communication system with these new mechanisms, we can effectively increase the class of applications that Diderot can support.

Practical and Effective Higher-Order Optimizations.
Lars Bergstrom, Matthew Fluet, Mike Rainey, Matthew Le, John Reppy, and Nora Sandler. In Proceedings of the 19th ACM SIGPLAN International Conference on Functional Programming (ICFP 2014), New York, NY, September 2014. ACM.

Inlining is an optimization that replaces a call to a function with that function's body. This optimization not only reduces the overhead of a function call, but can expose additional optimization opportunities to the compiler, such as removing redundant operations or unused conditional branches. Another optimization, copy propagation, replaces a redundant copy of a still-live variable with the original. Copy propagation can reduce the total number of live variables, reducing register pressure and memory usage, and possibly eliminating redundant memory-to-memory copies. In practice, both of these optimizations are implemented in nearly every modern compiler.

These two optimizations are practical to implement and effective in first-order languages, but in languages with lexically-scoped first-class functions (a.k.a. closures), these optimizations are not available to code programmed in a higher-order style. With higher-order functions, the analysis challenge has been that the environment at the call site must be the same as at the closure-capture location, up to the free variables, or the meaning of the program may change. Olin Shivers' 1991 dissertation called this family of optimizations Super-β and he proposed one analysis technique, called reflow, to support these optimizations. Unfortunately, reflow has proven too expensive to implement in practice. Because these higher-order optimizations are not available in functional-language compilers, programmers studiously avoid uses of higher-order values that cannot be optimized (particularly in compiler benchmarks).

This paper provides the first practical and effective technique for Super-β (higher-order) inlining and copy propagation, which we call unchanged variable analysis. We show that this technique is practical by implementing it in the context of a real compiler for an ML-family language and showing that the required analyses have costs below 3% of the total compilation time. This technique's effectiveness is shown through a set of benchmarks and example programs, where this analysis exposes additional potential optimization sites.
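
A tiny example of the opportunity (ours, not from the paper): the closure passed to iter has the free variable n, and inlining its body at the call site inside iter is safe only when the binding of n is unchanged between the closure's capture point and the call site, which is exactly the property that unchanged variable analysis establishes.

    fun iter (f, lo, hi) =
          if lo >= hi then () else (f lo; iter (f, lo + 1, hi))

    (* Super-beta inlining rewrites the call `f lo` inside iter to
     * `print (Int.toString (lo * n) ^ "\n")` for this call site *)
    fun printMultiples n =
          iter (fn i => print (Int.toString (i * n) ^ "\n"), 0, 10)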

SML3d: 3D Graphics for Standard ML.
John Reppy. In Proceedings of the 2014 ACM SIGPLAN Workshop on ML, September 2014.

The SML3d system is a collection of libraries designed to support real-time 3D graphics programming in Standard ML (SML). This paper gives an overview of the system and briefly highlights some of the more interesting aspects of its design and implementation.

Data-Only Flattening for Nested Data Parallelism.
Lars Bergstrom, Matthew Fluet, Mike Rainey, John Reppy, Stephen Rosen, and Adam Shaw. In Proceedings of the 2013 ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming (PPoPP 2013), pages 81--92, New York, NY, February 2013. ACM.

Data parallelism has proven to be an effective technique for high-level programming of a certain class of parallel applications, but it is not well suited to irregular parallel computations. Blelloch and others proposed nested data parallelism (NDP) as a language mechanism for programming irregular parallel applications in a declarative data-parallel style. The key to this approach is a compiler transformation that flattens the NDP computation and data structures into a form that can be executed efficiently on a wide-vector SIMD architecture. Unfortunately, this technique is ill suited to execution on today's multicore machines. We present a new technique, called data-only flattening, for the compilation of NDP, which is suitable for multicore architectures. Data-only flattening transforms nested data structures in order to expose programs to various optimizations while leaving control structures intact. We present a formal semantics of data-only flattening in a core language with a rewriting system. We demonstrate the effectiveness of this technique in the Parallel ML implementation and we report encouraging experimental results across various benchmark applications.

Lazy Tree Splitting.
Lars Bergstrom, Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw. Journal of Functional Programming, 22(4-5):382--438, September 2012.

Nested data-parallelism (NDP) is a language mechanism that supports programming irregular parallel applications in a declarative style. In this paper, we describe the implementation of NDP in Parallel ML (PML), which is part of the Manticore system. One of the main challenges of implementing NDP is managing the parallel decomposition of work. If we have too many small chunks of work, the overhead will be too high, but if we do not have enough chunks of work, processors will be idle. Recently the technique of Lazy Binary Splitting was proposed to address this problem for nested parallel loops over flat arrays. We have adapted this technique to our implementation of NDP, which uses binary trees to represent parallel arrays. This new technique, which we call Lazy Tree Splitting (LTS), has the key advantage of performance robustness; i.e., that it does not require tuning to get the best performance for each program. We describe the implementation of the standard NDP operations using LTS and we present experimental data that demonstrates the scalability of LTS across a range of benchmarks.
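
In outline (a sketch of the representation with hypothetical constructor names), a parallel array is a balanced binary tree of contiguous chunks; LTS processes one chunk at a time and splits the remaining tree in half only when idle processors are detected:

    (* ropes: the binary trees used to represent parallel arrays *)
    datatype 'a rope
      = Leaf of 'a vector                (* a contiguous chunk of elements *)
      | Cat of int * 'a rope * 'a rope   (* cached size, left and right subtrees *)

    fun size (Leaf v) = Vector.length v
      | size (Cat (n, _, _)) = n

    (* LTS in one sentence: map over a Leaf sequentially; at a Cat, make one
     * subtree available for stealing and continue with the other, splitting
     * lazily -- only when some other processor is hungry for work *)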

Nested Data-Parallelism on the GPU.
Lars Bergstrom and John Reppy. In Proceedings of the 17th ACM SIGPLAN International Conference on Functional Programming (ICFP 2012), pages 247--258, New York, NY, September 2012. ACM.

Graphics processing units (GPUs) provide both memory bandwidth and arithmetic performance far greater than that available on CPUs, but, because of their Single-Instruction-Multiple-Data (SIMD) architecture, they are hard to program. Most of the programs ported to GPUs thus far use traditional data-level parallelism, performing only operations that apply uniformly over vectors.

NESL is a first-order functional language that was designed to allow programmers to write irregular-parallel programs --- such as parallel divide-and-conquer algorithms --- for wide-vector parallel computers. This paper presents our port of the NESL implementation to work on GPUs and provides empirical evidence that nested data-parallelism (NDP) on GPUs significantly outperforms CPU-based implementations and matches or beats newer GPU languages that support only flat parallelism. While our performance does not match that of hand-tuned CUDA programs, we argue that the notational conciseness of NESL is worth the loss in performance. This work provides the first language implementation that directly supports NDP on a GPU.

Diderot: A Parallel DSL for Image Analysis and Visualization.
Charisee Chiw, Gordon Kindlmann, John Reppy, Lamont Samuels, and Nick Seltzer. In Proceedings of the SIGPLAN 2012 Conference on Programming Language Design and Implementation, pages 111--120, New York, NY, June 2012. ACM.

Research scientists and medical professionals use imaging technology, such as computed tomography (CT) and magnetic resonance imaging (MRI) to measure a wide variety of biological and physical objects. The increasing sophistication of imaging technology creates demand for equally sophisticated computational techniques to analyze and visualize the image data. Analysis and visualization codes are often crafted for a specific experiment or set of images, thus imaging scientists need support for quickly developing codes that are reliable, robust, and efficient.

In this paper, we present the design and implementation of Diderot, which is a parallel domain-specific language for biomedical image analysis and visualization. Diderot supports a high-level model of computation that is based on continuous tensor fields. These tensor fields are reconstructed from discrete image data using separable convolution kernels, but may also be defined by applying higher-order operations, such as differentiation. Early experiments demonstrate that Diderot provides both a high-level concise notation for image analysis and visualization algorithms, as well as high sequential and parallel performance.
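
The reconstruction mentioned above has a standard form (our notation, not necessarily the paper's): a continuous field F is obtained from the discrete samples V by separable convolution with a kernel h, and derivatives of F are obtained by differentiating h:

    F(\mathbf{x}) = (V \circledast h)(\mathbf{x})
                  = \sum_{i,j,k} V[i,j,k]\, h(x_1 - i)\, h(x_2 - j)\, h(x_3 - k)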

Garbage Collection for Multicore NUMA Machines.
Sven Auhagen, Lars Bergstrom, Matthew Fluet, and John Reppy. In Proceedings of the ACM SIGPLAN Workshop on Memory Systems Performance and Correctness (MSPC 2011), pages 51--57, New York, NY, June 2011. ACM.

Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. This bandwidth limitation has traditionally restricted the scalability of modern functional language implementations, which seldom scale well past 8 cores, even on small benchmarks.

This work presents a garbage collector integrated with our strict, parallel functional language implementation, Manticore, and shows that it scales effectively on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.

Implicitly threaded parallelism in Manticore.
Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw. Journal of Functional Programming, 20(5-6):537--576, 2011.

The increasing availability of commodity multicore processors is making parallel computing ever more widespread. In order to exploit its potential, programmers need languages that make the benefits of parallelism accessible and understandable. Previous parallel languages have traditionally been intended for large-scale scientific computing, and they tend not to be well suited to programming the applications one typically finds on a desktop system. Thus, we need new parallel-language designs that address a broader spectrum of applications. The Manticore project is our effort to address this need. At its core is Parallel ML, a high-level functional language for programming parallel applications on commodity multicore hardware. Parallel ML provides a diverse collection of parallel constructs for different granularities of work. In this paper, we focus on the implicitly threaded parallel constructs of the language, which support fine-grained parallelism. We concentrate on those elements that distinguish our design from related ones, namely, a novel parallel binding form, a nondeterministic parallel case form, and the treatment of exceptions in the presence of data parallelism. These features differentiate the present work from related work on functional data-parallel language designs, which have focused largely on parallel problems with regular structure and the compiler transformations --- most notably, flattening --- that make such designs feasible. We present detailed examples utilizing various mechanisms of the language and give a formal description of our implementation.

A Declarative API for Particle Systems.
Pavel Krajcevski and John Reppy. In Proceedings of the Thirteenth International Symposium on Practical Aspects of Declarative Languages (PADL 2011), Volume 6539 of Lecture Notes in Computer Science, pages 130--144, New York, NY, January 2011. Springer-Verlag.

Recent trends in computer-graphics APIs and hardware have made it practical to use high-level functional languages for real-time graphics applications. Thus we have the opportunity to develop new approaches to computer graphics that take advantage of the high-level features of functional languages. This paper describes one such project that uses the techniques of functional programming to define and implement a combinator library for particle systems. Particle systems are a popular technique for rendering fuzzy phenomena, such as fire, smoke, and explosions. Using our combinators, a programmer can provide a declarative specification of how a particle system behaves. This specification includes rules for how particles are created, how they evolve, and how they are rendered. Our library translates these declarative specifications into a low-level intermediate language that can be compiled to run on the GPU or interpreted by the CPU.
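
The flavor of such a specification, in a hypothetical SML combinator vocabulary of our own (not the paper's actual API): an action is a per-particle rule, and rules compose into a pipeline that is applied on each simulation step.

    type vec3 = real * real * real

    datatype action
      = Gravity of vec3         (* accelerate every particle *)
      | Drag of real            (* damp velocity *)
      | Age of real             (* kill particles older than the bound *)
      | Seq of action * action  (* run one rule, then another *)

    infix >>
    fun a >> b = Seq (a, b)

    (* a declarative smoke-like system: rise, diffuse, expire *)
    val smoke = Gravity (0.0, 0.5, 0.0) >> Drag 0.05 >> Age 3.0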

Lazy Tree Splitting.
Lars Bergstrom, Mike Rainey, John Reppy, Adam Shaw, and Matthew Fluet. In Proceedings of the 15th ACM SIGPLAN International Conference on Functional Programming (ICFP 2010), pages 93--104, New York, NY, September 2010. ACM.

Nested data-parallelism (NDP) is a declarative style for programming irregular parallel applications. NDP languages provide language features favoring the NDP style, efficient compilation of NDP programs, and various common NDP operations like parallel maps, filters, and sum-like reductions. In this paper, we describe the implementation of NDP in Parallel ML (PML), part of the Manticore project. Managing the parallel decomposition of work is one of the main challenges of implementing NDP. If the decomposition creates too many small chunks of work, performance will be eroded by too much parallel overhead. If, on the other hand, there are too few large chunks of work, there will be too much sequential processing and processors will sit idle.

Recently the technique of Lazy Binary Splitting was proposed for dynamic parallel decomposition of work on flat arrays, with promising results. We adapt Lazy Binary Splitting to parallel processing of binary trees, which we use to represent parallel arrays in PML. We call our technique Lazy Tree Splitting (LTS). One of its main advantages is its performance robustness: per-program tuning is not required to achieve good performance across varying platforms. We describe LTS-based implementations of standard NDP operations, and we present experimental data demonstrating the scalability of LTS across a range of benchmarks.

Programming in Manticore, a Heterogeneous Parallel Functional Language.
Matthew Fluet, Lars Bergstrom, Nic Ford, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. In Zoltan Horváth, editor, Central European Functional Programming School, Volume 6299 of Lecture Notes in Computer Science, pages 94--145, New York, NY, 2010. Springer-Verlag.

Arity Raising in Manticore.
Lars Bergstrom and John Reppy. In 21st International Symposium on Implementation and Application of Functional Languages (IFL 2009), Volume 6041 of Lecture Notes in Computer Science, pages 90--106, New York, NY, September 2009. Springer-Verlag.

Compilers for polymorphic languages are required to treat values in programs in an abstract and generic way at the source level. The challenges of optimizing the boxing of raw values, flattening of argument tuples, and raising the arity of functions that handle complex structures to reduce memory usage are old ones, but take on newfound import with processors that have twice as many registers. We present a novel strategy that uses both control-flow and type information to provide an arity raising implementation addressing these problems. This strategy is conservative --- no matter the execution path, the transformed program will not perform extra operations.
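
In miniature (our example, not the paper's), arity raising rewrites a function that takes nested tuples so that the components are passed directly, ideally in registers:

    (* before: each caller heap-allocates two pairs just to pass four reals *)
    fun dot ((x1, y1), (x2, y2)) = x1 * x2 + y1 * y2 : real

    (* after arity raising (all call sites are rewritten to match) *)
    fun dot' (x1 : real, y1, x2, y2) = x1 * x2 + y1 * y2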

Parallel Concurrent ML.
John Reppy, Claudio Russo, and Yingqi Xiao. In Proceedings of the 14th ACM SIGPLAN International Conference on Functional Programming (ICFP 2009), pages 257--268, New York, NY, September 2009. ACM.

Concurrent ML (CML) is a high-level message-passing language that supports the construction of first-class synchronous abstractions called events. This mechanism has proven quite effective over the years and has been incorporated in a number of other languages. While CML provides a concurrent programming model, its implementation has always been limited to uniprocessors. This limitation is exploited in the implementation of the synchronization protocol that underlies the event mechanism, but with the advent of cheap parallel processing on the desktop (and laptop), it is time for Parallel CML.

Parallel implementations of CML-like primitives for Java and Haskell exist, but build on high-level synchronization constructs that are unlikely to perform well. This paper presents a novel, parallel implementation of CML that exploits a purpose-built optimistic concurrency protocol designed for both correctness and performance on shared-memory multiprocessors. This work extends and completes an earlier protocol that supported just a strict subset of CML with synchronization on input, but not output, events. Our main contributions are a model-checked reference implementation of the protocol and two concrete implementations. This paper focuses on Manticore's functional, continuation-based implementation but briefly discusses an independent, thread-based implementation written in C# and running on Microsoft's stock parallel runtime. Although very different in detail, both derive from the same design. Experimental evaluation of the Manticore implementation reveals good performance, despite the extra overhead of multiprocessor synchronization.
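
The flavor of these first-class abstractions, and of the symmetric synchronization that the protocol must support, is captured by the classic swap channel, written here against the CML interface: each side offers a send and a receive in the same choice, which is precisely the output-event case that the earlier protocol could not handle.

    structure SwapChan : sig
        type 'a swap_chan
        val new  : unit -> 'a swap_chan
        val swap : 'a swap_chan * 'a -> 'a CML.event
      end = struct
        datatype 'a swap_chan = SC of ('a * 'a CML.chan) CML.chan
        fun new () = SC (CML.channel ())
        fun swap (SC ch, msgOut) = CML.guard (fn () => let
              (* a fresh one-shot channel carries the complementary message *)
              val inCh = CML.channel ()
              in
                CML.choose [
                    CML.wrap (CML.recvEvt ch,
                      fn (msgIn, replyCh) => (CML.send (replyCh, msgOut); msgIn)),
                    CML.wrap (CML.sendEvt (ch, (msgOut, inCh)),
                      fn () => CML.recv inCh)
                  ]
              end)
      end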

Regular-expression derivatives reexamined.
Scott Owens, John Reppy, and Aaron Turon. Journal of Functional Programming, 19(2):173--190, 2009.

Regular-expression derivatives are an old, but elegant, technique for compiling regular expressions to deterministic finite-state machines. It easily supports extending the regular-expression operators with boolean operations, such as intersection and complement. Unfortunately, this technique has been lost in the sands of time and few computer scientists are aware of it. In this paper, we reexamine regular-expression derivatives and report on our experiences in the context of two different functional-language implementations. The basic implementation is simple and we show how to extend it to handle large character sets (e.g., Unicode). We also show that the derivatives approach leads to smaller state machines than the traditional algorithm given by McNaughton and Yamada.
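
The core of the technique fits in a few lines of SML (a textbook rendering of Brzozowski's derivatives, including the boolean operators mentioned above; the paper's implementation adds large character sets and RE canonicalization):

    datatype re
      = Empty               (* matches nothing *)
      | Eps                 (* matches only the empty string *)
      | Chr of char
      | Cat of re * re | Alt of re * re | Star of re
      | And of re * re | Not of re      (* the boolean extensions *)

    fun nullable Empty = false
      | nullable Eps = true
      | nullable (Chr _) = false
      | nullable (Cat (r, s)) = nullable r andalso nullable s
      | nullable (Alt (r, s)) = nullable r orelse nullable s
      | nullable (Star _) = true
      | nullable (And (r, s)) = nullable r andalso nullable s
      | nullable (Not r) = not (nullable r)

    (* deriv (a, r) matches exactly those w such that r matches a::w *)
    fun deriv (_, Empty) = Empty
      | deriv (_, Eps) = Empty
      | deriv (a, Chr b) = if a = b then Eps else Empty
      | deriv (a, Cat (r, s)) = let
          val r' = Cat (deriv (a, r), s)
          in
            if nullable r then Alt (r', deriv (a, s)) else r'
          end
      | deriv (a, Alt (r, s)) = Alt (deriv (a, r), deriv (a, s))
      | deriv (a, Star r) = Cat (deriv (a, r), Star r)
      | deriv (a, And (r, s)) = And (deriv (a, r), deriv (a, s))
      | deriv (a, Not r) = Not (deriv (a, r))

    (* a string matches r iff the iterated derivative is nullable; a DFA
     * arises from taking derivatives with respect to character classes *)
    fun matches (r, s) = nullable (List.foldl deriv r (String.explode s))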

Calling Variadic Functions from a Strongly-typed Language.
Matthias Blume, Mike Rainey, and John Reppy. In Proceedings of the 2008 ACM SIGPLAN Workshop on ML, pages 47--58, September 2008.

The importance of providing a mechanism to call C functions from high-level languages has been understood for many years and, these days, almost all statically-typed high-level-language implementations provide such a mechanism. One glaring omission, however, has been support for calling variadic C functions, such as printf. Variadic functions have been ignored because it is not obvious how to give static types to them and because it is not clear how to generate the calling sequence when the arguments to the function may not be known until runtime. In this paper, we address this longstanding omission with an extension to the NLFFI foreign-interface framework used by Standard ML of New Jersey (SML/NJ) and the MLton SML compiler. We describe two different ways of typing variadic functions in NLFFI and an implementation technique based on the idea of using state machines to describe calling conventions. Our implementation is easily retargeted to new architectures and ABIs, and can also be easily added to any HOT (higher-order, typed) language implementation that supports calling C functions.

Implicitly-threaded parallelism in Manticore.
Matthew Fluet, Mike Rainey, John Reppy, and Adam Shaw. In Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming (ICFP 2008), pages 119--130, September 2008.

The increasing availability of commodity multicore processors is making parallel computing available to the masses. Traditional parallel languages are largely intended for large-scale scientific computing and tend not to be well-suited to programming the applications one typically finds on a desktop system. Thus we need new parallel-language designs that address a broader spectrum of applications. In this paper, we present Manticore, a language for building parallel applications on commodity multicore hardware that includes a diverse collection of parallel constructs for different granularities of work. We focus on the implicitly-threaded parallel constructs in our high-level functional language. We concentrate on those elements that distinguish our design from related ones, namely, a novel parallel binding form, a nondeterministic parallel case form, and exceptions in the presence of data parallelism. These features differentiate the present work from related work on functional data-parallel language designs, which has focused largely on parallel problems with regular structure and the compiler transformations --- most notably, flattening --- that make such designs feasible. We describe our implementation strategies and present some detailed examples utilizing various mechanisms of our language.
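
To make the first of these concrete (PML-style syntax as it appears in the Manticore papers; treat the details as illustrative), the parallel binding form pval introduces a computation that may run in parallel with the rest of the let body, synchronizing when its value is demanded and subject to cancellation if the value is never needed:

    datatype tree = Leaf | Node of tree * int * tree

    fun treeSum Leaf = 0
      | treeSum (Node (l, x, r)) = let
          pval a = treeSum l    (* may be evaluated in parallel ... *)
          val b = treeSum r     (* ... with the evaluation of b *)
          in
            a + x + b           (* demanding a synchronizes; if a were never
                                 * demanded, its computation could be cancelled *)
          end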

A scheduling framework for general-purpose parallel languages.
Matthew Fluet, Mike Rainey, and John Reppy. In Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming (ICFP 2008), pages 241--252, September 2008.

The trend in microprocessor design toward multicore and manycore processors means that future performance gains in software will largely come from harnessing parallelism. To realize such gains, we need languages and implementations that can enable parallelism at many different levels. For example, an application might use both explicit threads to implement coarse-grain parallelism for independent tasks and implicit threads for fine-grain data-parallel computation over a large array. An important aspect of this requirement is supporting a wide range of different scheduling mechanisms for parallel computation.

In this paper, we describe the scheduling framework that we have designed and implemented for Manticore, a strict parallel functional language. We take a micro-kernel approach in our design: the compiler and runtime support a small collection of scheduling primitives upon which complex scheduling policies can be implemented. This framework is extremely flexible and can support a wide range of different scheduling policies. It also supports the nesting of schedulers, which is key both to supporting multiple scheduling policies in the same application and to supporting hierarchies of speculative parallel computations.

In addition to describing our framework, we illustrate its expressiveness with several popular scheduling techniques. We present a (mostly) modular approach to extending our schedulers to support cancellation. This mechanism is essential for implementing eager and speculative parallelism. Finally, we evaluate our framework with a series of benchmarks and an analysis.

Toward a parallel implementation of Concurrent ML.
John Reppy and Yingqi Xiao. In Proceedings of the Workshop on Declarative Aspects of Multicore Programming (DAMP 2008), January 2008.

Concurrent ML (CML) is a high-level message-passing language that supports the construction of first-class synchronous abstractions called events. This mechanism has proven quite effective over the years and has been incorporated in a number of other languages. While CML provides a concurrent programming model, its implementation has always been limited to uniprocessors. This limitation is exploited in the implementation of the synchronization protocol that underlies the event mechanism, but with the advent of cheap parallel processing on the desktop (and laptop), it is time for Parallel CML.

We are pursuing such an implementation as part of the Manticore project. In this paper, we describe a parallel implementation of Asymmetric CML (ACML), which is a subset of CML that does not support output guards. We describe an optimistic concurrency protocol for implementing CML synchronization. This protocol has been implemented as part of the Manticore system.

Status Report: The Manticore Project.
Matthew Fluet, Nic Ford, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. In Proceedings of the 2007 ACM SIGPLAN Workshop on ML, pages 15--24, October 2007.

The Manticore project is an effort to design and implement a new functional language for parallel programming. Unlike many earlier parallel languages, Manticore is a heterogeneous language that supports parallelism at multiple levels. Specifically, we combine CML-style explicit concurrency with fine-grain, implicitly threaded, parallel constructs. We have been working on an implementation of Manticore for the past six months; this paper gives an overview of our design and a report on the status of the implementation effort.

Metaprogramming with Traits.
John Reppy and Aaron Turon. In Proceedings of the European Conference on Object Oriented Programming (ECOOP 2007), pages 373--398, July-August 2007.

In many domains, classes have highly regular internal structure. For example, so-called business objects often contain boilerplate code for mapping database fields to class members. The boilerplate code must be repeated per-field for every class, because existing mechanisms for constructing classes do not provide a way to capture and reuse such member-level structure. As a result, programmers often resort to ad hoc code generation. This paper presents a lightweight mechanism for specifying and reusing member-level structure in Java programs. The proposal is based on a modest extension to traits that we have termed trait-based metaprogramming. Although the semantics of the mechanism are straightforward, its type theory is difficult to reconcile with nominal subtyping. We achieve reconciliation by introducing a hybrid structural/nominal type system that extends Java's type system. The paper includes a formal calculus defined by translation to Featherweight Generic Java.

Specialization of CML message-passing primitives.
John Reppy and Yingqi Xiao. In Proceedings of the 34th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2007), pages 315--326, January 2007.

Concurrent ML (CML) is a statically-typed higher-order concurrent language that is embedded in Standard ML. Its most notable feature is its support for first-class synchronous operations. This mechanism allows programmers to encapsulate complicated communication and synchronization protocols as first-class abstractions, which encourages a modular style of programming where the underlying channels used to communicate with a given thread are hidden behind data and type abstraction.

While CML has been in active use for well over a decade, little attention has been paid to optimizing CML programs. In this paper, we present a new program analysis for statically-typed higher-order concurrent languages that enables the compile-time specialization of communication operations. This specialization is particularly important in a multiprocessor or multicore setting, where the synchronization overhead for general-purpose operations is high. Preliminary results from a prototype that we have built demonstrate that specialized channel operations are much faster than the general-purpose operations.

Our analysis technique is modular (i.e., it analyzes and optimizes a single unit of abstraction at a time), which plays to the modular style of many CML programs. The analysis consists of three steps: the first is a type-sensitive control-flow analysis that uses the program's type-abstractions to compute more precise results. The second is the construction of an extended control-flow graph using the results of the CFA. The last step is an iterative analysis over the graph that approximates the usage patterns of known channels. Our analysis is designed to detect special patterns of use, such as one-shot channels, fan-in channels, and fan-out channels. We have proven the safety of our analysis and state those results.
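
A typical pattern that the analysis is designed to detect (a generic CML idiom, not code from the paper): a reply channel created per request is a one-shot channel, and the request channel that many clients send on is a fan-in channel; both admit cheaper specialized implementations than the general protocol.

    (* reqCh is fan-in (many clients, one server); replyCh is one-shot *)
    fun rpc (reqCh : (int * int CML.chan) CML.chan, arg : int) = let
          val replyCh = CML.channel ()   (* used for exactly one message *)
          in
            CML.send (reqCh, (arg, replyCh));
            CML.recv replyCh
          end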

Manticore: A heterogeneous parallel language.
Matthew Fluet, Mike Rainey, John Reppy, Adam Shaw, and Yingqi Xiao. In Proceedings of the Workshop on Declarative Aspects of Multicore Programming (DAMP 2007), pages 37--44, January 2007.

The Manticore project is an effort to design and implement a new functional language for parallel programming. Unlike many earlier parallel languages, Manticore is a heterogeneous language that supports parallelism at multiple levels. Specifically, we combine CML-style explicit concurrency with NESL/Nepal-style data-parallelism. In this paper, we describe and motivate the design of the Manticore language. We also describe a flexible runtime model that supports multiple scheduling disciplines (e.g., for both fine-grain and coarse-grain parallelism) in a uniform framework. Work on a prototype implementation is ongoing and we give a status report.

Application-specific foreign-interface generation.
John Reppy and Chunyan Song. In Proceedings of the Fifth International Conference on Generative Programming and Component Engineering, pages 49--58, October 2006.

A foreign interface (FI) mechanism to support interoperability with libraries written in other languages (especially C) is an important feature in most high-level language implementations. Such FI mechanisms provide a Foreign Function Interface (FFI) for the high-level language to call C functions and marshaling and unmarshaling mechanisms to support conversion between the high-level and C data representations. Often, systems provide tools to automate the generation of FIs, but these tools typically lock the user into a specific model of interoperability. It is our belief that the policy used to craft the mapping between the high-level language and C should be distinct from the underlying mechanism used to implement the mapping.

In this paper, we describe a FI generation tool, called FIG (for Foreign Interface Generator) that embodies a new approach to the problem of generating foreign interfaces for high-level languages. FIG takes as input raw C header files plus a declarative script that specifies the generation of the foreign interface from the header file. The script sets the policy for the translation, which allows the user to tailor the resulting FI to his or her application. We call this approach application-specific foreign-interface generation. The scripting language uses rewriting strategies as its execution model. The other major feature of the scripting language is a novel notion of composable typemaps that describe the mapping between high-level and low-level types.

Type-sensitive control-flow analysis.
John Reppy. In Proceedings of the 2006 ACM SIGPLAN Workshop on ML, pages 74--83, September 2006.

Higher-order typed languages, such as ML, provide strong support for data and type abstraction. While such abstraction is often viewed as costing performance, there are situations where it may provide opportunities for more aggressive program optimization. Specifically, we can exploit the fact that type abstraction guarantees representation independence, which allows the compiler to specialize data representations. This paper describes a first step in supporting such optimizations; namely, a control-flow analysis that uses the program's type information to compute more precise results. We present our algorithm as an extension of Serrano's version of 0-CFA and we show that it respects types. We also discuss applications of the analysis, with examples of optimizations it enables that would not be possible with normal CFA.
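
A small example of the situation that the analysis exploits (ours, not from the paper): outside this module, t is abstract, so every value of type t that reaches bump must have been created by make; the CFA can use that guarantee to bound the flow of values and let the compiler choose a specialized representation for t.

    structure Counter :> sig
        type t
        val make : unit -> t
        val bump : t -> int
      end = struct
        type t = int ref
        fun make () = ref 0
        fun bump r = (r := !r + 1; !r)
      end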

A Foundation for Trait-based Metaprogramming.
John Reppy and Aaron Turon. In 2006 International Workshop on Foundations and Developments of Object-Oriented Languages, January 2006.

Schärli et al. introduced traits as reusable units of behavior independent of the inheritance hierarchy. Despite their relative simplicity, traits offer a surprisingly rich calculus. Trait calculi typically include operations for resolving conflicts when composing two traits. In the existing work on traits, these operations (method exclusion and aliasing) are shallow, i.e., they have no effect on the body of the other methods in the trait. In this paper, we present a new trait system, based on the Fisher-Reppy trait calculus, that adds deep operations (method hiding and renaming) to support conflict resolution. The proposed operations are deep in the sense that they preserve any existing connections between the affected method and the other methods of the trait. Our system uses Riecke-Stone dictionaries to support these features. In addition, we define a more fine-grained mechanism for tracking trait types than in previous systems. The resulting calculus is more flexible and expressive, and can serve as the foundation for trait-based metaprogramming, an idiom we introduce. A companion technical report proves type soundness for our system; we state the key results in this paper.

The Standard ML Basis Library.
Emden R. Gansner and John H. Reppy, editors. Cambridge University Press, 2004.

A Typed Calculus of Traits.
Kathleen Fisher and John Reppy. In Proceedings of the 11th Workshop on Foundations of Object-oriented Programming, January 2004.

Object-oriented aspects of Moby.
Kathleen Fisher and John Reppy. Technical Report TR-2003-10, Department of Computer Science, University of Chicago, Chicago, IL, September 2003.

Statically Typed Traits.
Kathleen Fisher and John Reppy. Technical Report TR-2003-13, Department of Computer Science, University of Chicago, Chicago, IL, December 2003.

Optimizing Nested Loops Using Local CPS Conversion.
John Reppy. Higher-order and Symbolic Computation, 15(2/3):161--180, September 2002.

Inheritance-based subtyping.
Kathleen Fisher and John Reppy. Information and Computation, 177(1):28--55, August 2002.

Classes play a dual role in mainstream statically-typed object-oriented languages, serving as both object generators and object types. In such languages, inheritance implies subtyping. In contrast, the theoretical language community has viewed this linkage as a mistake and has focused on subtyping relationships determined by the structure of object types, without regard to their underlying implementations. In this paper, we explore why inheritance-based subtyping relations are useful, and we describe two different approaches to extending the Moby programming language with inheritance-based subtyping relations. In addition, we present a typed object calculus that supports both structural and inheritance-based subtyping, and which provides a formal accounting of our extensions to Moby.

Compiler support for lightweight concurrency.
Kathleen Fisher and John Reppy. Technical memorandum, Bell Labs, March 2002.

A framework for interoperability.
Kathleen Fisher, Riccardo Pucella, and John Reppy. In Nick Benton and Andrew Kennedy, editors, Proceedings of the First International Workshop on Multi-Language Infrastructure and Interoperability (BABEL'01), Volume 59 of Electronic Notes in Theoretical Computer Science, New York, NY, September 2001. Elsevier Science Publishers.

Practical implementations of high-level languages must provide access to libraries and system services that have APIs specified in a low-level language (usually C). An important characteristic of such mechanisms is the foreign-interface policy that defines how to bridge the semantic gap between the high-level language and C. For example, IDL-based tools generate code to marshal data into and out of the high-level representation according to user annotations. The design space of foreign-interface policies is large and there are pros and cons to each approach. Rather than commit to a particular policy, we choose to focus on the problem of supporting a gamut of interoperability policies. In this paper, we describe a framework for language interoperability that is expressive enough to support very efficient implementations of a wide range of different foreign-interface policies. We describe two tools that implement substantially different policies on top of our framework and present benchmarks that demonstrate their efficiency.

Asynchronous exceptions in Haskell.
Simon Marlow, Simon Peyton Jones, Andrew Moran, and John Reppy. In Proceedings of the SIGPLAN 2001 Conference on Programming Language Design and Implementation, June 2001.
bib ]

Local CPS conversion in a direct-style compiler.
John Reppy. In Proceedings of the Third ACM SIGPLAN Workshop on Continuations (CW'01), pages 13--22, January 2001.
bib | .pdf ]

Protium: An infrastructure for partitioned applications.
Cliff Young, Y.N. Lakshman, Tom Szymanski, John Reppy, David Presotto, Rob Pike, Girija Narlikar, Sape Mullender, and Eric Grosse. In Proceedings of the Eighth Workshop on Hot Topics in Operating Systems (HotOS-VIII), January 2001.
bib | .pdf ]

Extending Moby with inheritance-based subtyping.
Kathleen Fisher and John Reppy. In Proceedings of the European Conference on Object-Oriented Programming, Volume 1850 of Lecture Notes in Computer Science, pages 83--107, New York, NY, June 2000. Springer-Verlag.
bib | .pdf ]

A Calculus for Compiling and Linking Classes.
Kathleen Fisher, John Reppy, and Jon Riecke. In Proceedings of the European Symposium on Programming, Volume 1782 of Lecture Notes in Computer Science, pages 134--149, New York, NY, March/April 2000. Springer-Verlag.
bib | .pdf ]

Inheritance-based subtyping.
Kathleen Fisher and John Reppy. In Proceedings of the 7th Workshop on Foundations of Object-Oriented Languages, January 2000.
bib | .pdf ]

Data-level interoperability.
Kathleen Fisher, Riccardo Pucella, and John Reppy. Technical Memorandum, Bell Labs, Lucent Technologies, April 2000.
bib ]

Concurrent Programming in ML.
John H. Reppy. Cambridge University Press, Cambridge, England, 1999.
bib ]

The design of a class mechanism for Moby.
Kathleen Fisher and John Reppy. In Proceedings of the SIGPLAN 1999 Conference on Programming Language Design and Implementation, pages 37--49, New York, NY, May 1999. ACM.
bib ]

Foundations for Moby classes.
Kathleen Fisher and John Reppy. Technical Memorandum, Bell Labs, Lucent Technologies, Murray Hill, NJ, February 1999.
bib ]

The Essence of Concurrent ML.
Prakash Panangaden and John Reppy. In Flemming Nielson, editor, ML with Concurrency, Chapter 1. Springer-Verlag, 1997.
bib ]

AML: Attribute Grammars in ML.
S.G. Efremidis, K.A. Mughal, L. Søraas, and John Reppy. Nordic Journal of Computing, 4(1), 1997.
bib ]

Classes in Object ML via modules.
John H. Reppy and Jon G. Riecke. In Proceedings of the Third Workshop on Foundations of Object-Oriented Languages, July 1996.
bib ]

Simple objects for SML.
John H. Reppy and Jon G. Riecke. In Proceedings of the SIGPLAN 1996 Conference on Programming Language Design and Implementation, pages 171--180, New York, NY, May 1996. ACM.
bib ]

Supporting SPMD Execution for Dynamic Data Structures.
Martin Carlisle, Laurie J. Hendren, Anne Rogers, and John Reppy. ACM Transactions on Programming Languages and Systems, 17(2):233--263, March 1995.
bib ]

Unrolling lists.
Zhong Shao, John Reppy, and Andrew Appel. In ACM Conference on Lisp and Functional Programming, pages 185--195, June 1994.
bib ]

A portable and optimizing back end for the SML/NJ compiler.
Lal George, Florent Guillaume, and John Reppy. In Fifth International Conference on Compiler Construction, pages 83--97, April 1994.
bib ]

Early experiences with Olden.
Martin Carlisle, Anne Rogers, John Reppy, and Laurie Hendren. In 6th International Workshop on Languages and Compilers for Parallel Computing, number 768 in Lecture Notes in Computer Science, August 1993.
bib ]

A Multi-threaded Higher-order User Interface Toolkit.
Emden R. Gansner and John H. Reppy. In User Interface Software, Bass and Dewan (Eds.), Volume 1 of Software Trends, pages 61--80. John Wiley & Sons, 1993.
bib | .pdf ]

Concurrent ML: Design, application and semantics.
John H. Reppy. In Peter Lauer, editor, Functional Programming, Concurrency, Simulation and Automated Reasoning, number 693 in Lecture Notes in Computer Science. Springer-Verlag, New York, NY, 1993.
bib ]

A High-performance Garbage Collector for Standard ML.
John H. Reppy. Technical memo, AT&T Bell Laboratories, December 1993.
bib | .pdf ]

Supporting SPMD Execution for Dynamic Data Structures.
Anne Rogers, John Reppy, and Laurie Hendren. In 5th International Workshop on Languages and Compilers for Parallel Computing, number 757 in Lecture Notes in Computer Science, August 1992.
bib ]

Abstract Value Constructors: Symbolic Constants for Standard ML.
William E. Aitken and John H. Reppy. Technical Report TR 92-1290, Department of Computer Science, Cornell University, June 1992. A shorter version appears in the proceedings of the “ACM SIGPLAN Workshop on ML and its Applications,” 1992.
bib | .pdf ]

Standard ML (SML) has been used to implement a wide variety of large systems, such as compilers, theorem provers, graphics libraries, and even operating systems. While SML provides a convenient, high-level notation for programming large applications, it does have certain deficiencies. One such deficiency is the lack of a general mechanism for assigning symbolic names to constant values. In this paper, we present a simple extension of SML that corrects this deficiency in a way that fits naturally with the semantics of SML. Our proposal is a generalization of SML's datatype constructors: we introduce constants that generalize nullary datatype constructors (like nil), and templates that generalize non-nullary datatype constructors (like ::). Constants are identifiers bound to fixed values, and templates are identifiers bound to structured values with labeled holes. Templates are useful because they allow users to treat the representation of structured data abstractly without having to give up pattern matching.
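
The deficiency is easy to demonstrate; the same problem appears in OCaml, used for the sketch below (the names are ours). An identifier in a pattern always binds a fresh variable, so it cannot denote a previously defined constant:

    let magic = 42

    (* The first arm binds a *fresh* variable magic, shadowing the
       constant above, so it matches every input; the compiler warns
       that the second arm is unused.  Under the paper's proposal,
       magic could instead be declared as a constant and matched by
       value. *)
    let is_magic x =
      match x with
      | magic -> true
      | _ -> false

    (* The workaround without such a mechanism: *)
    let is_magic' x = (x = magic)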

Attribute grammars in ML.
S.G. Efremidis, K.A. Mughal, and John Reppy. In ACM SIGPLAN Workshop on ML and its Applications, June 1992.
bib ]

Higher-order Concurrency.
John H. Reppy. PhD thesis, Cornell University, 1992. Available as Computer Science Technical Report 92-1285.
bib ]

A Foundation for User Interface Construction.
Emden R. Gansner and John H. Reppy. In Brad Myers, editor, Languages for Developing User Interfaces, Chapter 14. Jones and Bartlett, 1992.
bib ]

eXene.
Emden R. Gansner and John H. Reppy. In Third International Workshop on ML, September 1991.
bib | .pdf ]

CML: A higher-order concurrent language.
John H. Reppy. In Proceedings of the SIGPLAN 1991 Conference on Programming Language Design and Implementation, pages 293--305, New York, NY, June 1991. ACM.
bib | .pdf ]

Asynchronous signals in Standard ML.
John H. Reppy. Technical Report TR 90-1144, Department of Computer Science, Cornell University, Ithaca, NY, August 1990.
bib | .pdf ]

Synchronous Operations as First-class Values.
John H. Reppy. In Proceedings of the SIGPLAN 1988 Conference on Programming Language Design and Implementation, June 1988.
bib ]

Concurrent Garbage Collection on Stock Hardware.
Steven C. North and John H. Reppy. In Third International Conference on Functional Programming Languages and Computer Architecture, Volume 274 of Lecture Notes in Computer Science, pages 113--133, New York, NY, September 1987. Springer-Verlag.
bib ]

A foundation for programming environments.
John H. Reppy and Emden R. Gansner. In Proceedings of the ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, pages 218--227, December 1986.
bib ]


This file was generated by bibtex2html 1.98.


Last updated on February 22, 2022