Dr. Rer. Nat. Hellmuth Broda (Monday, April 8, 09:30 (FASE))
title: Jini Software Architecture - The end of Protocols as we know them
abstract:
Jini[tm] network technology provides a simple infrastructure for
delivering services in a network and for creating spontaneous
interaction between programs that use these services regardless of their
hardware/software implementation.
Any kind of network made up of services (applications, databases,
servers, devices, information systems, mobile appliances, storage,
printers, etc.) and clients (requesters of services) of those services
can be easily assembled, disassembled, and maintained on the network
using Jini Technology. Services can be added or removed from the
network, and new clients can find existing services - all without
administration.
The capability to form spontaneous connections makes it unnecessary to
agree on protocols (other than RMI and TCP/IP). With 80,000 developers and
over 70 commercial licensees, Jini is moving into the mainstream of
computing. The lease mechanism gives Jini its unsurpassed resilience and
might lead to Jini becoming the new standard in our connected world.
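To illustrate the lease idea mentioned above, here is a minimal sketch in Python (a generic illustration of leasing, not the Jini API; the class and method names are invented): a service holds its registration only for a bounded lease period and must keep renewing it, so services that crash or leave the network simply disappear from the lookup registry without any administration.

import time

class LeaseRegistry:
    """Toy lookup registry: registrations expire unless renewed (invented, not the Jini API)."""

    def __init__(self, lease_seconds=5.0):
        self.lease_seconds = lease_seconds
        self._services = {}                 # service name -> lease expiration time

    def register(self, name):
        """Grant a lease; the service must renew it before it expires."""
        self._services[name] = time.time() + self.lease_seconds
        return self.lease_seconds

    def renew(self, name):
        if name in self._services:
            self._services[name] = time.time() + self.lease_seconds

    def lookup(self):
        """Return only services whose lease is still valid; expired ones vanish."""
        now = time.time()
        self._services = {n: t for n, t in self._services.items() if t > now}
        return sorted(self._services)

registry = LeaseRegistry(lease_seconds=0.1)
registry.register("printer")
registry.register("storage")
time.sleep(0.06)
registry.renew("printer")                   # a live service keeps renewing its lease
time.sleep(0.06)
print(registry.lookup())                    # ['printer'] -- the unrenewed lease has lapsed

A well-behaved service renews well before its lease expires; it is this same expiration-by-default mechanism that underlies the resilience claimed in the abstract.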
Greg Morrisett (Tuesday, April 9, 09:00 (ESOP))
title: Type Checking Systems Code
abstract:
Our society is increasingly dependent upon its computing and communications
infrastructure. That infrastructure is built using unsafe, error-prone
C or C++ code where buffer overruns, format string attacks, and space
leaks are not only possible, but frighteningly common. Safe languages,
such as Java, Scheme, or ML do not admit these attacks but relatively
little infrastructure (i.e., operating systems, servers, databases,
protocols, routers) is built using a safe language.
The problem is that it simply isn't cost effective to rewrite this
infrastructure in a modern safe language. Today's operating systems
have tens of millions of lines of code. What we need are tools and
systems that can ensure safety for legacy C/C++ code.
For the past two years, we have been exploring and building type systems
for C code in the context of a project called Cyclone. The challenge
is to find type systems and type inference techniques that are
(a) sound, (b) scalable, and (c) permissive enough to admit common idioms
(e.g., pointer arithmetic) without losing the benefits that C provides for
writing systems code (e.g., control over data representations).
Dr. Michael Lowry (Tuesday, April 9, 14:15 (TACAS))
title: Software Construction and Analysis Tools for Future Space Missions
abstract:
NASA and its international partners will increasingly depend
on software-based systems to implement advanced functions for
future space missions, including:
Daniel Jackson (Wednesday, April 10, 09:00 (ETAPS))
title: Alloy: A New Technology for Software Modelling
abstract:
Alloy is a lightweight language for software modelling. It's designed to
be flexible and expressive, and yet amenable to fully automatic
simulation and checking. At its core, Alloy is a simple first order
logic extended with relational operators. A simple structuring mechanism
allows Alloy to be used in a variety of idioms, and supports incremental
construction of models. Alloy is analyzed by translation to SAT. The
current version of the tool uses the Chaff and Berkmin solvers; these
are powerful enough to handle a search space of 2^100 or more.
Alloy has been applied to problems from very different domains, from checking the conventions of Microsoft COM to debugging the design of a name server. Most recently, we have used it to check distributed algorithms that are designed for arbitrary topologies. We are also investigating the use of Alloy to analyze object-oriented code.
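As a rough sketch of the translation to SAT mentioned above (our own illustration with our own notation T(.), not the Analyzer's exact encoding): a scope bounds each signature by a finite universe A = {a_1, ..., a_k}; an n-ary relation r is then represented by k^n propositional variables, one per candidate tuple, and quantifiers unfold into finite conjunctions and disjunctions,

\[
  T(\forall x.\,\varphi) \;=\; \bigwedge_{i=1}^{k} T(\varphi[a_i/x]),
  \qquad
  T(\exists x.\,\varphi) \;=\; \bigvee_{i=1}^{k} T(\varphi[a_i/x]),
\]

so a satisfying assignment found by the SAT solver corresponds to a model (or a counterexample to an assertion) within the chosen scope; even modest scopes give the 2^100-sized search spaces quoted above.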
Mary Shaw (Wednesday, April 10, 14:15 (ETAPS))
title: What makes Good Research in Software Engineering?
abstract:
Physics, biology, and medicine have well-refined public explanations
of their research processes. Even in simplified form, these provide
guidance about what counts as "good research" both inside and
outside the field. Software engineering has not yet explicitly
identified and explained either our research processes or the ways we
recognize excellent work.
Science and engineering research fields can be characterized in terms of the kinds of questions they find worth investigating, the research methods they adopt, and the criteria by which they evaluate their results. I will present such a characterization for software engineering, showing the diversity of research strategies and the way they shift as ideas mature. Understanding these strategies should help software engineers design research plans and report the results clearly; it should also help explain the character of software engineering research to computer science at large and to other scientists.
For those of you who could not attend the talk, here is the paper and here is a pointer to the slides.
Patrick Cousot (Friday, April 12, 09:00 (CC/SPIN))
title: Abstract Interpretation: Theory and Practice
abstract:
Our objective in this talk is to give an intuitive account of
abstract interpretation theory and to present and discuss its main
applications.
Abstract interpretation theory formalizes the conservative approximation of the semantics of hardware or software computer systems. The semantics provides a formal model describing all possible behaviors of a computer system in interaction with any possible environment. By approximation we mean the observation of the semantics at some level of abstraction, ignoring irrelevant details. Conservative means that the approximation can never lead to an erroneous conclusion.
Abstract interpretation theory provides thinking tools since the idea of abstraction by conservative approximation is central to reasoning (in particular on computer systems) and mechanical tools since the idea of an effectively computable approximation leads to a systematic and constructive formal design methodology of automatic semantics-based program manipulation algorithms and tools.
We will present various applications of abstract interpretation theory to the design of hierarchies of semantics, program transformations, typing, model checking and, in more detail, static program analysis. Abstract interpretation provides a general theory behind all program analyzers, which differ only in their choice of considered programming languages, program properties and abstractions. Finally, we will discuss the various possible designs of program analyzers, from general-purpose to application-specific ones.
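As a tiny concrete illustration of conservative approximation (our own example in Python, not taken from the talk): an interval abstraction keeps only a lower and an upper bound per variable, so every concrete value is covered, possibly at the price of precision.

# Interval abstract interpretation of the loop:  x = 0; while x < 100: x = x + 1
# (a toy sketch of the idea, not a general analyzer).
NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):
    """Least upper bound of two intervals: covers every value of either one."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Widening: push unstable bounds to infinity so the iteration terminates."""
    lo = a[0] if a[0] <= b[0] else NEG_INF
    hi = a[1] if a[1] >= b[1] else POS_INF
    return (lo, hi)

def restrict_lt(x, c):
    """Abstract effect of the test 'x < c' on integers: clip the upper bound."""
    return (x[0], min(x[1], c - 1))

x = (0, 0)                                  # abstract value after 'x = 0'
while True:
    body = restrict_lt(x, 100)              # values that enter the loop body
    body = (body[0] + 1, body[1] + 1)       # abstract effect of 'x = x + 1'
    new = widen(x, join(x, body))
    if new == x:
        break                               # fixpoint reached: a sound loop invariant
    x = new

exit_x = (max(x[0], 100), x[1])             # negation of the loop test: x >= 100
print("loop invariant:", x)                 # (0, inf) after widening
print("at loop exit  :", exit_x)            # covers every concrete run, imprecisely

A decreasing (narrowing) iteration could recover the tighter invariant 0 <= x <= 100; even without it the answer is conservative, so no concrete behaviour is missed.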
Ed Clarke (Thursday, April 11, 14:15 (SPIN/ETAPS))
title: SAT-based Counterexample Guided Abstraction Refinement
abstract:
We describe new techniques for model checking in the counterexample
guided abstraction/refinement framework. The abstraction phase
hides the logic of selected variables, treating them as unconstrained
inputs. This type of abstraction may lead to 'spurious'
counterexamples, i.e., traces that cannot be simulated on the original
(concrete) machine. We check whether a counterexample is real or
spurious with a SAT Checker. We then use a combination of Integer
Linear Programming (ILP) and machine learning techniques for refining
the abstraction based on the counterexample. The process is repeated
until either a real counterexample is found or the property is
verified. We have implemented these techniques on top of the model
checker NuSMV and the SAT solver Chaff. Experimental results prove
the viability of these new techniques.
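To make the loop concrete, here is a toy sketch in Python on an invented three-variable system (our own illustration; the actual implementation sits on top of NuSMV and Chaff and refines the abstraction with a SAT checker, ILP and machine learning, none of which is reproduced here):

from itertools import product

# Toy counterexample-guided abstraction refinement loop on an invented
# three-variable system; not the NuSMV/Chaff implementation described above.
VARS = ("x", "y", "z")
STATES = list(product((0, 1), repeat=len(VARS)))

def init(s):                          # single initial state
    return s == (0, 0, 0)

def trans(s, t):                      # transition relation: x' = z, y' = 1 - z, z' = z
    x, y, z = s
    return t == (z, 1 - z, z)

def bad(s):                           # safety property: x = 1 is never reached
    return s[0] == 1

def abstract(visible):
    """Existential projection onto the visible variables; hidden ones act as free inputs."""
    proj = lambda s: tuple(s[i] for i in visible)
    a_init = {proj(s) for s in STATES if init(s)}
    a_trans = {(proj(s), proj(t)) for s in STATES for t in STATES if trans(s, t)}
    a_bad = {proj(s) for s in STATES if bad(s)}
    return a_init, a_trans, a_bad

def find_abstract_cex(a_init, a_trans, a_bad):
    """Breadth-first search for an abstract path from an initial to a bad state."""
    frontier = [[a] for a in a_init]
    seen = set(a_init)
    while frontier:
        path = frontier.pop(0)
        if path[-1] in a_bad:
            return path
        for u, v in a_trans:
            if u == path[-1] and v not in seen:
                seen.add(v)
                frontier.append(path + [v])
    return None

def concretizable(path, visible):
    """Does some run of the concrete machine match the abstract trace?"""
    proj = lambda s: tuple(s[i] for i in visible)
    layer = {s for s in STATES if init(s) and proj(s) == path[0]}
    for a in path[1:]:
        layer = {t for s in layer for t in STATES if trans(s, t) and proj(t) == a}
        if not layer:
            return False              # spurious: the trace breaks on the concrete machine
    return True

visible = [0]                         # start by keeping only x, hiding y and z
while True:
    cex = find_abstract_cex(*abstract(visible))
    if cex is None:
        print("property verified, visible variables:", [VARS[i] for i in visible])
        break
    if concretizable(cex, visible):
        print("real counterexample:", cex)
        break
    # Refinement step.  (The talk refines with a SAT checker plus ILP and
    # machine learning; here we simply unhide the next hidden variable.)
    visible.append(len(visible))
    print("spurious counterexample", cex, "-> refining to", [VARS[i] for i in visible])

On this example the loop reports two spurious counterexamples, refines twice, and then verifies the property on an abstraction that happens to coincide with the concrete system.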
Bruno Courcelle (Thursday, April 11, 09:00 (FOSSACS))
title: Semantical Evaluations as Monadic Second-order Compatible Structure Transformations
abstract:
A transformation of structures tau is monadic
second-order compatible (MS-compatible) if every monadic
second-order property P can be effectively rewritten into a
monadic second-order property Q such that, for every structure
S, if T is the transformed structure tau(S), then
P(T) holds iff Q(S) holds.
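In symbols, this is simply a restatement of the definition above:

\[
  \forall P \in \mathrm{MS}\;\; \exists Q \in \mathrm{MS}\;\; \forall S:\qquad
  \tau(S) \models P \;\Longleftrightarrow\; S \models Q,
\]

where Q is effectively computable from P (and from tau).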
We will review monadic second-order definable transductions
(MS-transductions): these are MS-compatible transformations of a
particular form, namely those defined by monadic second-order formulas.
The unfolding of a directed graph into a tree is an MS-compatible
transformation that is not an MS-transduction.
The MS-compatibility of various transformations of semantical interest
follows. We will present three main cases and discuss applications and
open problems.
Nicolas Halbwachs (Saturday, April 6, 16:00 (DCC))
title: Abstract interpretation: an introduction
abstract:
This tutorial talk is an intuitive introduction to abstract interpretation
techniques, and to their use
in infinite state system analysis and verification.
Jose Meseguer (Saturday, April 6, 14:00 (CMCS))
title: Towards BMaude
abstract:
to be announced
Luigi Santocanale (Sunday, April 7, 09:00 (CMCS))
title: to be announced
abstract:
to be announced
Manfred Broy (Saturday, April 6, 09:30 (VISS))
title: MSCs in the development process - role and limitations
abstract:
Message Sequence Charts (MSCs) are a technique for describing patterns of interaction
between the components of interactive distributed systems by means of specific diagrams.
MSCs evolved in telecommunication applications and have become very popular in the design
of software architectures and, more generally, of distributed software systems. They are
frequently used to describe scenarios of interaction illustrating instances of use cases.
Nevertheless, neither the semantics of MSCs as a specification technique nor their
methodological and technical role in the development process has so far been sufficiently
clarified.
In this talk, we discuss the systematic application of MSCs in the software development
process.
Doron Peled (Saturday, April 6, 14:00 (VISS))
title: Specification and Verification using MSCs
abstract:
to be announced
Giorgio Buttazzo (Saturday, April 6, 14:00 (TPTS))
title: Towards Adaptive Real-Time Systems
abstract:
Modern real-time applications, including multimedia systems, mobile robotics, and distributed monitoring architectures, often operate in highly dynamic
environments where workload conditions are difficult to predict in advance. In addition, real-time activities may have variable computational
requirements and are characterized by more flexible timing constraints than classical real-time theory usually permits. Handling such systems
according to a hard real-time paradigm (based on worst-case assumptions) is inappropriate, because it would waste resources and dramatically
increase costs. Recently, significant work has been devoted to increasing the flexibility and efficiency of real-time systems while still
providing a form of performance guarantee.
The goal of this talk is to introduce a set of new methodologies that can be adopted to develop adaptive real-time systems, that is, systems which can
modify resource management policies based on the current workload conditions. In this framework, tasks are allowed to have less stringent timing
constraints to achieve higher resource utilization. Moreover, the concept of yes-or-no guarantee is replaced with the notion of quality of service,
which can be specified within a much larger grey-level scale.
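One concrete way to picture such an adaptation (our own sketch in Python, loosely in the spirit of elastic task models; the task set and all numbers are invented): when total processor utilization exceeds 1, task periods are stretched, within application-specified bounds, until the task set becomes schedulable again under EDF.

# Toy period adaptation for an overloaded task set (all numbers invented).
# Each task has a computation time C, a current period T, and a maximum
# acceptable period T_max; utilization U = sum(C/T) must not exceed 1 (EDF bound).

def utilization(tasks):
    return sum(c / t for c, t, _ in tasks)

def adapt(tasks, target_u=1.0):
    """Stretch periods proportionally, up to each T_max, until U <= target_u."""
    u = utilization(tasks)
    if u <= target_u:
        return tasks                        # already schedulable: nothing to do
    scale = u / target_u                    # factor by which periods must grow
    return [(c, min(t * scale, t_max), t_max) for c, t, t_max in tasks]

# (C, current period, maximum acceptable period)
tasks = [(2.0, 5.0, 12.0), (3.0, 8.0, 20.0), (1.0, 10.0, 25.0)]
print("initial utilization :", round(utilization(tasks), 3))     # 0.875: fine as is

tasks.append((4.0, 6.0, 15.0))              # a new task arrives: transient overload
print("overload utilization:", round(utilization(tasks), 3))     # ~1.542: not schedulable
tasks = adapt(tasks)
print("after adaptation    :", round(utilization(tasks), 3))     # 1.0: at the EDF bound

If some task has already reached its maximum period, the remaining tasks must absorb more of the stretching, or load must be shed; that iteration is omitted here.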
Avi Efrati (Sunday, April 7, 09:00 (TPTS))
title: Real Life Timing Analysis at Intel
abstract:
This talk will give an overview of timing analysis at Intel. Hierarchical timing models that enable full-chip timing analysis will be
described, as well as interaction with academia on timing-related topics.
John Hooker (Saturday, April 6, 17:00 (TPTS))
title: Scheduling by a Combination of Mathematical and Constraint Programming
abstract:
A survey of decomposition methods that combine mathematical and constraint programming for solving scheduling problems.
The idea is to generalize Benders decomposition so that the subproblem is a constraint programming problem in which
Benders cuts are obtained by logical inference, and the master problem is a mixed integer programming problem. I report
results by Jain and Grossmann for a machine scheduling problem, and by Thorsteinsson for a branch-and-check
approach to the same problem. I suggest how to generalize the approach using a continuous relaxation of the cumulative
constraint.
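Here is a minimal sketch of the master/subproblem interplay described above (our own toy instance in Python with invented data; the master is solved by brute-force enumeration instead of a MIP solver, the one-machine subproblem by an earliest-deadline-first check instead of constraint programming, and the Benders cuts are plain no-goods):

from itertools import product

# Toy logic-based Benders decomposition for assigning jobs to machines
# (invented data; a real implementation would use a MIP solver for the master
# and a constraint programming solver for the subproblems).
JOBS = ("A", "B", "C", "D")
MACHINES = (0, 1)
PROC = {"A": 3, "B": 4, "C": 2, "D": 5}                     # processing times
DEADLINE = {"A": 4, "B": 8, "C": 9, "D": 6}                 # absolute deadlines
COST = {("A", 0): 1, ("A", 1): 3, ("B", 0): 2, ("B", 1): 2,
        ("C", 0): 4, ("C", 1): 1, ("D", 0): 1, ("D", 1): 4}

def master(cuts):
    """Master problem: cheapest assignment violating no cut (brute force here)."""
    best = None
    for choice in product(MACHINES, repeat=len(JOBS)):
        assign = dict(zip(JOBS, choice))
        if any(all(assign[j] == m for j in jobs) for jobs, m in cuts):
            continue                                        # excluded by a Benders cut
        cost = sum(COST[j, assign[j]] for j in JOBS)
        if best is None or cost < best[0]:
            best = (cost, assign)
    return best

def one_machine_feasible(jobs):
    """Subproblem: can these jobs all meet their deadlines on a single machine?
       Earliest-deadline-first suffices for this simple one-machine check."""
    t = 0
    for j in sorted(jobs, key=lambda j: DEADLINE[j]):
        t += PROC[j]
        if t > DEADLINE[j]:
            return False
    return True

cuts = []
while True:
    solution = master(cuts)
    if solution is None:
        print("no feasible assignment")
        break
    cost, assign = solution
    bad_machines = [m for m in MACHINES
                    if not one_machine_feasible([j for j in JOBS if assign[j] == m])]
    if not bad_machines:
        print("optimal assignment:", assign, "with cost", cost)
        break
    for m in bad_machines:                  # no-good cut: this job set cannot share machine m
        jobs = tuple(j for j in JOBS if assign[j] == m)
        cuts.append((jobs, m))
        print("cut: jobs", jobs, "cannot all run on machine", m)

On this data the loop generates three cuts before it finds an assignment whose one-machine schedules all meet their deadlines.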
Claude Le Pape (Sunday, April 7, 14:00 (TPTS))
title: Temporal and Resource Constraints in Constraint-Based Scheduling
abstract:
Scheduling consists in assigning execution times and resources to activities so as to satisfy a variety of constraints (time
bounds, precedence relations, resource capacity constraints, etc.) and optimize one or several conflicting performance
criteria (total schedule duration, cost, schedule robustness, etc.). Two main issues have to be considered to evaluate the
applicability of a model of time and resources to industrial scheduling applications: "flexibility" and "efficiency". Flexibility
means that the specific constraints of a given application shall be easy to represent in the given model. Efficiency means
that the algorithms applicable to the model provide good solutions in limited CPU time. As scheduling
applications tend to differ from one another, this has led to the development of a variety of models of time and
resources, with different characteristics in terms of flexibility and efficiency. The presentation will emphasize the most
widely used models and compare them along these two dimensions.
Joseph Sifakis (Sunday, April 7, 17:00 (TPTS))
title: Restricting the Behavior of Timed Systems
abstract:
Restriction is a central notion in system development. It appears to
be a key concept for definition of parallel composition and refinement
relations. It can be defined as a unary operation on systems whose effect
is the restriction of the enabling conditions of actions. For untimed
systems, restrictions S' of a system S both simulate S and preserve
its invariants. For timed systems different notions of restriction can be
defined depending on the effect of restriction operations on time progress.
In this talk we,
Gerhard Goos (Saturday, April 13, 09:10 (COCV))
title: Compiler Verification and Compiler Architecture
abstract:
We study issues in verifying compilers for modern imperative and
object-oriented languages. We take the view that it is not the
compiler but the code generated by it which must be correct. It is
this subtle difference that allows for reusing standard compiler
architecture, construction methods and tools also in a verifying
compiler.
Program checking is the main technique for avoiding the cumbersome
task of verifying most parts of a compiler and the tools by which they
are generated. Program checking remaps the result of a compiler phase
to its origin, the input of this phase, in a provably correct
manner. We then only have to compare the actual input to its
regenerated form, a basically syntactic process. The correctness proof
of the generation of the result is replaced by the correctness proof
of the remapping process. The latter turns out to be far easier than
proving the generating process correct.
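A minimal sketch of the checking idea in Python (our own toy phase, not the architecture of the verifying compiler itself): the phase translates arithmetic expressions into stack-machine code, and the checker remaps the generated code back into an expression and compares it, purely syntactically, with the input, so only the small remapping function has to be proven correct.

# Toy illustration of program checking for a single compiler phase: arithmetic
# expressions are translated into stack-machine code, and the checker remaps
# the generated code back into an expression and compares it with the input.
# Expressions are numbers or ("+" | "*", left, right) tuples (invented example).

def compile_expr(e):
    """The (untrusted) phase: postorder translation to stack code."""
    if isinstance(e, (int, float)):
        return [("push", e)]
    op, left, right = e
    return compile_expr(left) + compile_expr(right) + [("apply", op)]

def decompile(code):
    """The checker's remapping: rebuild an expression from the stack code."""
    stack = []
    for instr, arg in code:
        if instr == "push":
            stack.append(arg)
        elif instr == "apply":
            right, left = stack.pop(), stack.pop()
            stack.append((arg, left, right))
        else:
            raise ValueError("unknown instruction: " + instr)
    (result,) = stack                       # exactly one value must remain
    return result

def checked_compile(e):
    """Accept the generated code only if it remaps back to the original input."""
    code = compile_expr(e)
    if decompile(code) != e:
        raise AssertionError("phase output rejected by the checker")
    return code

expr = ("+", 1, ("*", 2, 3))
print(checked_compile(expr))
# [('push', 1), ('push', 2), ('push', 3), ('apply', '*'), ('apply', '+')]

The phase itself may be arbitrarily clever or buggy; a wrong translation is caught by the comparison, which is precisely the division of labour described above.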
The only part of a compiler where program checking does not seem to
work is the transformation step which replaces source language
constructs and their semantics, given, e.g., by an attributed syntax
tree, by an intermediate representation, e.g., in SSA-form, which is
expressing the same program but in terms of the target machine. This
transformation phase must be directly proven using Hoare logic and/or
theorem-provers. However, we can show that given the features of
today's programming languages and hardware architectures this
transformation is to a large extent universal: it can be reused for
any pair of source and target language. To achieve this goal we
investigate annotating the syntax tree as well as the intermediate
representation with constraints for exhibiting specific properties of
the source language. Such annotations are necessary during code
optimization anyway.
Charles Consel (Saturday, April 13, 09:15 (LDTA))
title: Domain-Specific Languages: What, Why, How
abstract:
to be announced
Eerke Boiten (Saturday, April 6, 9:30 (INT))
title: Integrating Specifications: Development Relations and Correspondences
abstract:
Realistic system specifications consist of many sub-specifications in a
heterogeneous collection of notations. We have investigated what it
means for such 'partial specifications' to collectively describe
possible implementations of the system, and what it means for them to
be consistent. The crucial notion in this is the 'development relation'
which relates a specification to acceptable developments or
implementations. Our current focus is on correspondences: how do we
cross-reference between two partial specifications, especially when they
are written in different notations? What does it mean for specifications
to be integrated with respect to such a correspondence relation?
Jordi Cortadella (Saturday, April 6, 14:00 (INT))
title: Synthesis of Embedded Software for Reactive Systems
abstract:
Embedded systems are usually deployed in environments with tight timing
constraints. Increasing run-time performance is often a critical issue.
This talk addresses the problem of synthesizing efficient software for
embedded systems specified as a set of communicating sequential
processes. The Quasi-Static Scheduling algorithm is presented, along
with results from an industrial example. Open problems in this field,
such as incorporating data-flow analysis into the scheduling algorithm,
are discussed.
Arno Ritter (Sunday, April 7, 14:00 (INT))
title: Agentification for Production Systems
abstract:
A new method for the agentification of production systems
within the development of multi-agent systems (MAS) is presented.