Mitchell Wand
Semantics at NU | College of Computer Science | Northeastern University
Wed 5/28/97
*** PRACTICE TALK FESTIVAL ***
Will Clinger and Lars Hansen will present their PLDI paper "Generational Garbage Collection and the Radioactive Decay Model"
Olin Shivers will present his ICFP paper "Automatic Management of Operating Systems Resources"
Mitch Wand will present his TIC talk: "Types in Compilation: Scenes from an Invited Lecture."
Wed 4/23/97
Allyn Dimock will present "Infinitary Control Flow Analysis: a Collecting Semantics for Closure Analysis" by Nielson & Nielson (POPL 97).
Wed 4/16/97
Bengt Jonsson of Uppsala University (Sweden) will present
Decidable Model Checking Algorithms Using Constraints
Abstract:
Over the last few years there has been an increasing research effort directed towards the automatic verification of infinite-state systems. For different classes of such systems (e.g., hybrid automata, data-independent systems, relational automata, Petri nets, parametrized systems, and lossy channel systems) this research has resulted in numerous highly nontrivial algorithms. As interest in this area increases, it will be important to extract the common principles that underlie these and related results. In this work, we present a framework for deriving existing and new models of infinite-state systems for which safety properties can be decided. The framework starts from a constraint representation of properties of states, for which we formulate simple conditions that guarantee termination of the verification algorithm. We give evidence of the generality of our approach by providing a uniform explanation of existing decidability results, and by defining a model of infinite-state systems which combines parameterization over process structure with an unbounded data structure.
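The framework described here can be pictured as a symbolic backward-reachability loop over constraints. The following is a minimal OCaml sketch of that loop; the constraint type and the operations pre, entails, meet, and sat are assumed placeholders rather than anything from the talk, and the paper's conditions on the constraint system (in this line of work, typically a well-quasi-ordering) are exactly what make such a loop terminate.

    (* Minimal sketch of symbolic backward reachability over constraints.
       The signature below is a placeholder; the talk's contribution is the
       conditions on such a system that guarantee the loop terminates. *)
    module type CONSTRAINT_SYSTEM = sig
      type c                        (* a constraint denotes a set of states *)
      val pre : c -> c list         (* constraints covering the predecessor states *)
      val entails : c -> c -> bool  (* entails c1 c2: the states of c1 are included in c2 *)
      val meet : c -> c -> c        (* conjunction of two constraints *)
      val sat : c -> bool           (* is the constraint satisfiable? *)
    end

    module BackwardReach (C : CONSTRAINT_SYSTEM) = struct
      (* Can a state satisfying [init] reach a state satisfying [bad]?
         Explore the backward cone of [bad], pruning any constraint already
         entailed by one seen before. *)
      let unsafe ~init ~bad =
        let covered seen c = List.exists (C.entails c) seen in
        let rec loop seen = function
          | [] -> false                            (* fixpoint: [bad] is unreachable *)
          | c :: work when covered seen c -> loop seen work
          | c :: work ->
              if C.sat (C.meet init c) then true   (* an initial state lies in the cone *)
              else loop (c :: seen) (C.pre c @ work)
        in
        loop [] [bad]
    end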
Wed 4/9/97
John Kalamatianos will present
"Evaluating the performance overhead of Virtual Function Tables in C++"
Abstract:
Polymorphism is a feature that allows different functions with the same signature to be called from the same call site. The C++ language construct that supports polymorphism is the virtual function. Although this feature helps programmers write extensible software, it comes at a performance price.
In order to better estimate the generated overhead on modern machines we must consider the underlying virtual dispatch mechanism implementing polymorphism via virtual functions. One such mechanism used in C++ is the Virtual Function Table (VFT).
In this presentation we will describe four VFT layouts proposed in the literature that provide virtual dispatch given a fixed object layout. We will explain how compilers select the target virtual function at run time, and we will accurately evaluate the performance overhead by looking at the code sequences and the data structures used to support the different VFT layouts. In addition, we will present the additional overhead incurred when using virtual classes and multiple inheritance. Finally, we will illustrate how to improve VFT performance by using alternative object layouts, instruction scheduling, and compiler optimizations such as Class Hierarchy Analysis.
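As background on the mechanism being measured: a virtual call fetches the object's hidden table pointer, loads a slot from the table, and performs an indirect call, which is where the overhead over a direct call comes from. The sketch below models that double indirection in OCaml for concreteness (records of closures standing in for per-class tables); it is an illustration of the idea only, not the C++ layouts discussed in the talk.

    (* Toy model of virtual-function-table dispatch: each "object" pairs a
       pointer to its class's table of functions with its instance data, and
       a "virtual call" is an indirect call through that table.  Real C++
       compilers store a hidden vptr in the object and index a fixed slot. *)
    type 'a vtable = { area : 'a -> float; describe : 'a -> string }

    type obj = Obj : 'a vtable * 'a -> obj   (* vtable pointer + instance data *)

    let square_vt = { area = (fun s -> s *. s);            describe = (fun _ -> "square") }
    let circle_vt = { area = (fun r -> 3.14159 *. r *. r); describe = (fun _ -> "circle") }

    (* The "virtual call": load the table, load the slot, call indirectly. *)
    let area o = match o with Obj (vt, data) -> vt.area data

    let () =
      let shapes = [ Obj (square_vt, 2.0); Obj (circle_vt, 1.0) ] in
      List.iter (fun o -> Printf.printf "%f\n" (area o)) shapes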
Wed 4/2/97
Lars Hansen will present two papers by George Necula:
George C. Necula and Peter Lee, CMU: "Safe Kernel Extensions Without Run-Time Checking", OSDI '96
George C. Necula, CMU: "Proof-Carrying Code", POPL '97
Abstract of the POPL Paper:
This paper describes proof-carrying code (PCC), a mechanism by which a host system can determine with certainty that it is safe to execute a program supplied (possibly in binary form) by an untrusted source. For this to be possible, the untrusted code producer must supply with the code a safety proof that attests to the code's adherence to a previously defined safety policy. The host can then easily and quickly validate the proof without using cryptography and without consulting any external agents.
In order to gain preliminary experience with PCC, we have performed several case studies. We show in this paper how proof-carrying code might be used to develop safe assembly-language extensions of ML programs. In the context of this case study, we present and prove the adequacy of concrete representations for the safety policy, the safety proofs, and the proof validation. Finally, we briefly discuss how we use proof-carrying code to develop network packet filters that are faster than similar filters developed using other techniques and are formally guaranteed to be safe with respect to a given operating system safety policy.
A web page at CMU is dedicated to the project and has the Postscript for the two papers, a technical report, and two position papers.
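The protocol itself is small: the untrusted producer ships code together with a proof, and the only trusted step on the host is checking that proof against the safety policy before running the code. The OCaml fragment below is just the shape of that interaction under assumed placeholder types; it is not Necula's representation (which derives a safety predicate from the binary and checks an LF-encoded proof against it).

    (* Shape of the proof-carrying-code interaction (placeholder types and
       names; the real system is far more concrete about all of them). *)
    type code = bytes              (* untrusted native code received from outside *)
    type proof = string            (* the accompanying safety proof *)

    let install_extension ~check ~(code : code) ~(proof : proof) ~run =
      (* [check] is the host's trusted proof validator for its safety policy. *)
      if check code proof
      then Ok (run code)           (* proof validates: execute with no run-time checks *)
      else Error "rejected: safety proof does not validate"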
W 3/26/97
Arthur Nunes will present "Higher Order Abstract Assembly Language" by M. Wand (MFPS '91).
Abstract: Higher-order assembly language (HOAL) generalizes combinator-based target languages by allowing free variables in terms to play the role of registers. We introduce a machine model for which HOAL is the assembly language, and prove the correctness of a compiler from a tiny language into HOAL. We introduce the notion of a lambda-representation, which is an abstract binding operation, and show how some common representations of procedures and continuations can be expressed as lambda-representations. Last, we prove the correctness of a typical procedure-calling convention in this framework.
W 3/19/97
Ali Ozmez will complete his presentation on "Effective Flow Analysis for Avoiding Run-Time Checks", by Jagannathan and Wright (SAS '95).
W 3/5/97
Toby Weinberg (Harlequin, Inc.) will present "Dylan/C Interoperability -- The Harlequin Dylan C Foreign Function Interface and Interface Generator".
W 2/26/97
Ali Ozmez will present "Effective Flow Analysis for Avoiding Run-Time Checks", by Jagannathan and Wright (SAS '95).
Abstract: This paper describes a general purpose program analysis that computes global control-flow and data-flow information for higher-order, call-by-value programs.
[This paper introduces the technique called "polymorphic splitting"]
W 2/19/97
Mitch Wand will finish (finally!) "Destructive Update Insertion and Analysis-Based Program Transformation: The Small-Step Story" (joint work with Will Clinger).
Abstract:
We present a formal model of the relationship between program analysis and program transformation, and we apply it to a highly non-trivial analysis and transformation. We believe the formal model is attractive because it closely resembles what we believe is the ordinary programmer's understanding of the transformation process. Our example is destructive update optimization \cite{SastryClinger94}, a critical transformation for writing scientific codes in functional languages.
[** Clarification** : Several people have written me to ask if this paper is available. The answer, I'm sorry to say, is no. There is an almost-complete draft paper, but it is not yet ready for distribution. Sorry for any confusion I may have caused. --Mitch]
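As a concrete instance of the transformation the abstract refers to: when the analysis shows that the old version of an aggregate is dead after a functional update, the copying update can be replaced by an in-place one. The OCaml fragment below shows the before/after shapes on arrays; it is a hand-made illustration in the spirit of the optimization, not an example from the paper.

    (* Functional update: copies the array, preserving the old version. *)
    let update a i v =
      let a' = Array.copy a in
      a'.(i) <- v;
      a'

    (* Before the transformation: every step pays for a copy. *)
    let fill_pure a =
      let rec go a i =
        if i = Array.length a then a
        else go (update a i (float_of_int i)) (i + 1)
      in
      go a 0

    (* After destructive update insertion: the analysis shows the old version
       of [a] is never used again, so the copy can be replaced by an in-place
       assignment. *)
    let fill_destructive a =
      for i = 0 to Array.length a - 1 do
        a.(i) <- float_of_int i
      done;
      a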
W 2/5/97
Mitch Wand will present "Destructive Update Insertion and Analysis-Based Program Transformation: The Small-Step Story" (joint work with Will Clinger).
Abstract:
We present a formal model of the relationship between program analysis and program transformation, and we apply it to a highly non-trivial analysis and transformation. We believe the formal model is attractive because it closely resembles what we believe is the ordinary programmer's understanding of the transformation process. Our example is destructive update optimization \cite{SastryClinger94}, a critical transformation for writing scientific codes in functional languages.
[** Clarification** : Several people have written me to ask if this paper is available. The answer, I'm sorry to say, is no. There is an almost-complete draft paper, but it is not yet ready for distribution. Sorry for any confusion I may have caused. --Mitch]
W 1/29/97 Igor Siveroni will finish his presentation on:
Is Continuation-Passing Useful for Data Flow Analysis? [PLDI 94] by: Amr Sabry and Matthias Felleisen
The technical report corresponding to this paper is now available locally as /home/wand/people/sabry/cps-flow-tr.ps . It is on the Web at http://www.cs.uoregon.edu/~sabry/papers/index.html.
We will probably have a discussion of least vs. greatest fixpoints for defining abstract interpretations.
W 1/22 Igor Siveroni will present:
Is Continuation-Passing Useful for Data Flow Analysis? [PLDI 94] by: Amr Sabry and Matthias Felleisen
The abstract goes like this:
The widespread use of the continuation-passing style (CPS) transformation in compilers, optimizers, abstract interpreters, and partial evaluators reflects a common belief that the transformation has a positive effect on the analysis of programs. Investigations by Nielson and by Burn/Filho support, to some degree, this belief with theoretical results. However, they do not pinpoint the source of increased abstract information and do not explain the observation of many people that continuation-passing confuses some conventional data flow analyses.
To study the impact of the CPS transformation on program analyses, we derive
three canonical data flow analyzers for the core of an applicative
higher-order programming language. The first analyzer is based on a direct
semantics of the language, the second on a continuation-semantics of the
language, and the last on the direct semantics of CPS terms. All analyzers
compute the control flow graph of the source program and hence our results
apply to a large class of data flow analyses.
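To fix intuitions about what the transformation does to the programs being analyzed: the two functions below compute the same thing, but in the CPS version every intermediate value is handed to an explicit continuation, which is what changes the flow information an analysis sees. This is the standard textbook example in OCaml, not an example from the paper.

    (* Direct style. *)
    let rec fact n =
      if n = 0 then 1 else n * fact (n - 1)

    (* Continuation-passing style: every intermediate result is passed to an
       explicit continuation [k], so control flow is reified as data flow. *)
    let rec fact_cps n k =
      if n = 0 then k 1
      else fact_cps (n - 1) (fun r -> k (n * r))

    let _ = fact_cps 5 (fun r -> r)   (* = 120 *)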
*Mon 12/9* 1000am 107 CN: (Note non-standard day and time)
Carolyn Talcott, of Stanford University, will speak on "Reasoning about
Functions with Effects".
Abstract:
This paper presents a unified framework for reasoning about program
equivalence in imperative functional languages such as Scheme, Lisp, and ML. A
notion of uniform semantics for lambda-languages is defined, and general
principles are developed for reasoning about program equivalence in any lambda
language with uniform semantics. For such languages we have the combined
benefits of reduction calculi (modular axiomatization), and operational
equivalence (more equations). A lambda-language is a programming language
obtained by augmenting the lambda calculus with primitive operations that
include not only basic constants, branching, and algebraic operations such as
arithmetic and pairing, but also operations that manipulate the computation
state (store, continuation), and the environment (sending messages, creating
processes). The operational semantics of a lambda-language is given by
providing a reduction rule for each primitive operation. The semantics is
said to be uniform if it satisfies two uniformity conditions spelled out in
the paper. The key requirement is uniform computation which allows
computation steps to be carried out on states with missing information. It has
the property that computing commutes with filling in of missing information.
A key result for lambda languages with uniform semantics is the CIU theorem
(first presented by Mason and Talcott at ICALP89) which is a form of context
lemma. Using CIU and other consequences of uniform semantics two general
principles for proving program equivalence are established that capture
computational intuitions: programs with equivalence reducts are equivalent;
expressions placed in equivalent contexts yield equivalent programs.
Simulation principles are also established for proving equivalence of
imperative functions (aka lambdas) and objects (which have private store).
The semantics of a Scheme-like language with algebraic, control, and memory
primitives is developed as an illustration of the ideas.
The full paper (43 pp) is available at
http://www-formal.stanford.edu/MT/96hoots.ps.Z or at
/home/wand/people/talcott/96hoots.ps
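For reference, the CIU ("Closed Instantiations of Uses") characterization mentioned above is, roughly, the following (paraphrased from the Mason-Talcott line of work; the notation is ours, not the paper's):

    $$ e \cong e' \quad\text{iff}\quad
       \forall \sigma, R.\;
       \bigl(R[\sigma e],\, R[\sigma e'] \text{ closed}\bigr) \Rightarrow
       \bigl(R[\sigma e]\Downarrow \;\Longleftrightarrow\; R[\sigma e']\Downarrow\bigr) $$

where \sigma ranges over closing substitutions, R over reduction contexts, and \Downarrow means evaluation converges. Its value is that operational equivalence need only be tested against uses of this restricted form rather than against arbitrary program contexts.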
Wed 11/20: Lars Hansen: Language Interoperability and Foreign Language
Interfaces
Abstract:
A large amount of functioning, stable, and well-performing library code
written in portable, mid-level languages like Fortran, C, and C++ has
been developed over the last several decades and is being used daily by
applications written in these same languages. However, more modern
languages that emphasize portability and reliability over raw
performance are becoming accepted and will be used for developing an
increasingly large share of new applications in most application areas;
chief among these will be object-oriented garbage-collected languages
like Java, Modula-3, and Eiffel, and mostly-functional languages like
Standard ML. It is important for the acceptance and eventual success of
these languages that they be able to reuse the existing mass of library
code. However, semantic and pragmatic differences between the old and
new languages and their implementations make the _language
interoperability problem_ a hard one: data representations and procedure
calling conventions differ, exception handling methods are incompatible,
and garbage collection requires control over memory and operations on it
that the existing libraries cannot allow.
In this context, a _foreign language interface_ (FLI) is a sublanguage
of a programming language that allows programs written in that
language to coexist with and use libraries written in another language. This
talk will look at what a programmer might want to express in an FLI, the
resulting technical problems and tradeoffs, and their possible
solutions.
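As a small, concrete taste of what an FLI declaration looks like, here is the OCaml form rather than Harlequin's Dylan interface (so this is purely illustrative, and the C function names are hypothetical stubs that would have to be written and linked separately): the programmer states the foreign function's name and the type at which the host language may call it, and the compiler generates the representation conversions and calling-convention glue.

    (* Illustrative OCaml foreign declarations; "my_cos" and "my_fill_buffer"
       are hypothetical C stubs that must be provided and linked in. *)
    external my_cos : float -> float = "my_cos"
    external my_fill_buffer : bytes -> int -> int = "my_fill_buffer"

    (* The declarations pin down exactly the issues the abstract lists:
       data representation (floats, byte buffers), the calling convention,
       and what the garbage collector may assume about the arguments. *)
    let roundtrip () = my_cos 0.0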
Wed 11/6/96: Arthur Nunes will present Jefferson and Friedman's paper "A
Simple Reflective Interpreter" from _Lisp and Symbolic Computation_, vol 9, pp
181 - 202, 1996, and in the process mention some related material.
Abstract:
Jefferson and Friedman note that "reflective programming languages enable user
programs to semantically extend the language itself..." The use of reflection
in object-oriented languages will be mentioned. Examples of their method of
extension will be given, and there will be a comparison with fexprs. Their
finite tower implementation will be explained. Finally, there will be a
critique and some discussion of related work.
Wed 10/30/96: Dave Kaeli will present "Efficient Procedure Mapping For
Improved Instruction Cache Performance"
Abstract:
As the gap between memory and processor performance continues to widen, it
becomes increasingly important to exploit cache memory effectively. Both
hardware and software approaches can be explored to optimize cache
performance. Hardware designers focus on cache organization issues, including
replacement policy, associativity, block size and the resulting cache access
time. Software writers use various optimization techniques, including
software prefetching, data scheduling and code reordering. Our focus is on
improving code reordering techniques.
In this talk we present a link-time procedure mapping technique which can
significantly improve the effectiveness of an instruction cache. The
methodology constructs a program call graph and produces an improved program
layout by first reducing the call graph to several smaller call graphs. Then
each subgraph is organized in the memory space to reduce the number of
conflict misses in the instruction cache.
(joint work with Amir Hooshang Hashemi, NU ECE, and Bradley Calder, DEC WRL.)
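To convey the flavor of call-graph-driven layout (this is a simplified, Pettis-Hansen-style greedy ordering, not the cache-line-aware algorithm of the talk): weight call-graph edges by call frequency and lay out procedures so that the heaviest caller/callee pairs end up adjacent.

    (* Simplified greedy procedure layout from a weighted call graph.
       Illustrative only; the talk's technique additionally splits the graph
       and maps procedures onto cache lines to avoid conflict misses. *)
    let layout (edges : (string * string * int) list) : string list =
      let heaviest_first =
        List.sort (fun (_, _, w1) (_, _, w2) -> compare w2 w1) edges in
      (* Place each procedure the first time it appears in a heavy edge. *)
      List.fold_left
        (fun placed (caller, callee, _) ->
           let add order p = if List.mem p order then order else order @ [p] in
           add (add placed caller) callee)
        [] heaviest_first

    let _ =
      layout [ ("main", "parse", 10); ("parse", "lex", 90); ("main", "report", 2) ]
      (* = ["parse"; "lex"; "main"; "report"] *)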
Wed 10/23/96: Ali Ozmez will present: Andrew K. Wright, Robert Cartwright, "A
Practical Soft Type System for Scheme", ACM LFP 1994.
Abstract:
"Soft typing is a generalization of static type checking that
accommodates both dynamic typing and static typing in one framework.
A soft type checker infers types for identifiers and inserts explicit
run-time checks to transform untypable programs to typable form..."
We will meet 930-1130 in 107CN. **note new starting time**
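To make the quoted idea concrete: where the checker cannot prove an operation safe it leaves a narrowing check in place, and where the inferred types prove it safe the check disappears. The OCaml sketch below fakes a dynamically typed value with a variant type; it illustrates check insertion only and is not the Wright-Cartwright system (which works on Scheme with a much richer type language).

    (* A toy "dynamic" value and the run-time check a soft typer inserts when
       it cannot prove an argument is a pair. *)
    type dyn = Int of int | Pair of dyn * dyn | Nil

    exception Check_failed of string

    (* car with its inserted check (this use was not provably safe): *)
    let checked_car v = match v with
      | Pair (a, _) -> a
      | _ -> raise (Check_failed "car: expected a pair")

    (* Where the inferred type already guarantees a pair, the soft typer
       would emit the raw, unchecked selector instead: *)
    let unchecked_car (Pair (a, _)) = a   (* non-exhaustive on purpose *)

    let _ = checked_car (Pair (Int 1, Nil))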
Wed 10/16/96: Rene Vestergaard will present "Semantics-Based Compiling: A Case
Study in Type-Directed Partial Evaluation" (joint work with Olivier Danvy).
We will meet 930-1130 in 107CN. **note new starting time**
Wed 10/9/96: Mishka Bukatin will speak on "Computing Distances Between
Programs via Domains"
Abstract:
COMPUTING DISTANCES BETWEEN PROGRAMS VIA DOMAINS:
a Symmetric Continuous Generalized Metric for
Scott Topology on Continuous Scott Domains
Michael A. Bukatin - Brandeis University, Computer Science
Joshua S. Scott - Northeastern University, Mathematics
We present a description of Scott topology on continuous
Scott domains via continuous generalized distances. The continuity
of these distances makes it possible to compute them in many
cases. In particular, applications of this construction to semantic
domains allow one to compute meaningful distances between programs.
We will further elaborate on connections between continuity
and computability during this talk.
The full paper can be obtained at
http://www.cs.brandeis.edu/~bukatin/papers.html
or requested at bukatin@cs.brandeis.edu.
Wed 10/2/96: Will Clinger will present "Empirical and analytic study of stack
versus heap cost for languages with closures" by Appel and Shao, JFP, 1/96.
Extracted from the Abstract:
We present a comprehensive analysis of all the components of creation, access,
and disposal of heap-allocated activation records... Overall the execution
cost of stack-allocated and heap-allocated frames is similar, but heap frames
are simpler to implement and allow very efficient first-class continuations.
Wed 9/25: The World as We See It: Open-Problems Presentation
Abstract:
We will begin the year with presentations by Mitch Wand and Will Clinger about
their interests and current open problems and projects.
We will meet 10-12 in 107CN, as usual. We will spend some time setting up the
schedule for the next few meetings.
Wed 9/11: Jacob Kucan will present "Retraction Approach to CPS Transform"
Abstract:
We study the continuation passing style (CPS) transform and its
generalization, the {\em computational transform}, in which the notion
of computation is generalized from continuation passing to an
arbitrary one. To establish a relation between {\em direct style} and
{\em continuation passing style} interpretation of sequential
call-by-value programs, we prove the Retraction Theorem which says
that a lambda term can be recovered from its continuationized form via
a $\lambda$-definable retraction. The Retraction Theorem is proved in
the logic of computational lambda calculus for the simply typable
terms.
Tuesday, 8/20/96 :
Some of Allyn Dimock, Bob Muller, and/or Lyn Turbak will present their paper
on typed closure conversion with intersection types.
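For context on what the transformation produces: closure conversion turns a function with free variables into a pair of closed code and an explicit environment record, and a call becomes an application of the code to that environment plus the argument. The OCaml sketch below shows the shape of the conversion on one example; the paper's use of intersection types (allowing one function several closure representations) is not modeled here.

    (* Source term:   let add = fun x -> x + y   (y free)   ...   add 3 *)

    (* After closure conversion, the function is closed code paired with an
       environment holding its former free variables. *)
    type env = { y : int }
    type ('a, 'b) closure = { code : env -> 'a -> 'b; env : env }

    let convert_add ~y = { code = (fun env x -> x + env.y); env = { y } }

    (* A call site applies the closure's code to its own environment. *)
    let apply clo arg = clo.code clo.env arg

    let _ = apply (convert_add ~y:39) 3   (* = 42 *)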
Tuesday, 8/13/96 :
David Espinosa (Kestrel Institute, Palo Alto CA) will present
Horizontal and Vertical Structure in Specware, a Category-Theoretic Module
System, by Y.V. Srinivas and Richard Jullig
Specware is a system developed at Kestrel for structuring software
using algebraic specification and abstract data types. Although
Specware's specification language is based on higher-order logic, its
structuring methods are language-independent. I will describe how
Specware formalizes the classical notions of horizontal and vertical
structure in software using category theory, and how these are related
via sheaf theory. The goal throughout will be to make seemingly
complex notions appear inevitable and perhaps even obvious.
The paper on Specware may be obtained from:
http://kestrel.edu/pub/papers/specware/specware.ps
/proj/wand/people/espinosa/specware.ps
Tuesday, 8/6/96 :
I will present a short introduction to category theory & algebraic
specification of data types, by way of preparation for David Espinosa's visit
next week.
Tuesday, 7/23/96:
Lyn Turbak will present "Functional Programming with Bananas, Lenses,
Envelopes, and Barbed Wire" by Meijer, Fokkinga, and Paterson (FPCA '91).
Abstract:
We develop a calculus for lazy functional programming based on recursion
operators associated with data type definitions. For these operators we
derive various algebraic laws that are useful in deriving and manipulating
programs. We shall show that all example functions in Bird and Wadler's
"Introduction to Functional Programming" can be expressed using these
operators.
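The "banana" brackets are structural-recursion operators; the list catamorphism, i.e., fold, is the canonical instance. A minimal OCaml rendering (the paper's notation and its generality over arbitrary datatypes are not reproduced here):

    (* The list catamorphism ("banana") and two programs written with it. *)
    let rec cata nil cons = function
      | [] -> nil
      | x :: xs -> cons x (cata nil cons xs)

    let sum    xs = cata 0 ( + ) xs
    let length xs = cata 0 (fun _ n -> 1 + n) xs

    (* One of the algebraic laws such operators enjoy (fold fusion):
       if   f nil = nil'   and   f (cons x r) = cons' x (f r)
       then f (cata nil cons xs) = cata nil' cons' xs. *)
    let _ = (sum [1; 2; 3], length [1; 2; 3])   (* = (6, 3) *)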
For the summer, we will meet on *Tuesdays*. Coming attractions:
Tues 7/16: no seminar
Tues 7/23: Lyn Turbak will present "Programming with Bananas, Lenses, and
Barbed Wire" (FPCA 91).
Tues 7/30: no seminar, but...
Wed 7/31: (note special day and time): Guy McCusker will present his LICS '96
paper on a Fully-Abstract Game Semantics for FPC.
--Mitch
*Mon* 6/10, 1000-1200, 107 CN [Note special day]:
"Class-graph Inference for Adaptive Programs"
Jens Palsberg
MIT
Lieberherr introduced the idea of an _adaptive_ object-oriented program, where
the class graph is a parameter to the program. The class graph represents the
subclass relationships and the types of instance variables. When the class
graph changes, the adaptive program often stays the same. For example,
suppose we represent grammars as class graphs where each nonterminal is
represented by a class. Possible nonterminals are Exp, ProcExp, VariableExp,
etc. The following program prints the free variables of an Exp, when
represented as an object.

    operation printFreeVars(boundVars: Stack)
      traverse from Exp to Variable          -- find all variable-refs
      wrapper ProcExp                        -- when encountering a procedure,
        prefix boundVars.push(formal)        --   put the formal parameter on the stack
        suffix boundVars.pop()               --   afterwards, pop it
      wrapper VariableExp                    -- when encountering a variable-ref,
        prefix if boundVars.contains(self)   --   check whether it is bound;
               then skip                     --   if so, it is not free
               else print(self)              --   otherwise print it
                                             --   [last two lines reconstructed:
                                             --    the archived text breaks off
                                             --    after "then"]
We can add, say, a CondExp to the grammar without changing the
program.
Given a class graph, it is easy to type check an adaptive
program. But how do we type check a program _before_ a class
graph is given? In other words, given an adaptive program,
with respect to which class graphs is it type correct?
In particular, does there at all exist such a class graph?
This problem is akin to type inference for object-oriented programs,
but it seems harder because even the subclass relationships are
unknown and must be inferred.
In this talk I present a polynomial-time class-graph inference algorithm.
It first derives a set of type constraints from the program text,
and from them it builds a representation of the wanted set of class graphs.
It also decides if the set is non-empty, and if so it computes
a particularly simple graph in the solution set.
Several toy programs have been type checked by a prototype
implementation of the algorithm.
*Fri* 5/24, 1000-1200, 107 CN [Note special day]:
Anindya Banerjee, LIX Polytechnique
Polyvariant Closure Analysis with Rank 2 Intersection Types
In his SAS '95 paper, Heintze has shown that given a variety of type
systems instrumented by control-flow information, there exist
corresponding 0-CFA's augmented by type information such that each
type system is equivalent to its corresponding 0-CFA (Nevin Heintze,
SAS95, LNCS 983).
This talk addresses the issue of how to obtain an inference algorithm
to compute control-flow information directly from the structure of the
program text --- an aspect unclear in Heintze's work. The issue is
important for program analysis and compiler optimisation, since we
want to calculate both minimal and polyvariant control-flow
information for any program.
An algorithm is proposed via a simple extension of the framework of
the rank 2 intersection type discipline. As has been recently shown by
Jim, the rank 2 intersection type system has principal typings, can
type more programs than ML and has a complexity of type inference
equivalent to ML's (Trevor Jim, POPL96). This makes it an attractive
tool for type-based program analysis.
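For readers unfamiliar with the discipline: an intersection type lets a single term be used at several types at once, and "rank 2" restricts where intersections may appear (essentially, only to the left of the outermost arrows). A standard example, in our notation rather than the talk's:

    $$ \lambda x.\; x\,x \;:\; \bigl((\alpha \to \beta) \wedge \alpha\bigr) \to \beta $$

Self-application is not typable in ML, but the rank-2 intersection type above types it by letting the bound variable x be used both at \alpha \to \beta and at \alpha. Roughly, in a type-based closure analysis the separate components of such an intersection play the role of polyvariance: different uses of a function are analyzed at different types.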
Wed 5/1/96:
1. Allyn Dimock will complete his presentation of Ashley's "A
Practical and Flexible Flow Analysis for Higher-Order Languages" (POPL 96).
2. Will Clinger will present a summary of recent proposals for exception and
condition systems for Scheme.
We will meet in 107 CN, 10am-12pm, as usual.
Wednesday, 4/24/96: Allyn Dimock will present "A
Practical and Flexible Flow Analysis for Higher-Order Languages" by
J. Michael Ashley (POPL 96).