Programme
The schedule consists of regular talks, invited talks (by either CALCO or MFPS), and joint invited talks (by CALCO and MFPS together). For regular papers with multiple authors, the presenting author is indicated by italic text (as far as is known).
Days: | Monday (Jun 19) | Tuesday (Jun 20) | Wednesday (Jun 21) | Thursday (Jun 22) | Friday (Jun 23) |
Monday, June 19, 2023
9:30 - 10:20 |
Rawles 107
Registration
|
10:20 - 10:30 |
Lindley 102
Opening
|
Probability (CALCO) Lindley 102 | |
10:30 - 10:55 |
Online
Composition and Recursion for Causal Structures
Abstract Causality appears in various contexts as a property where present
behaviour can only depend on past events, but not on future events. In
this paper, we compare three different notions of causality that capture
the idea of causality in the form of restrictions on morphisms between
coinductively defined structures, such as final coalgebras and chains,
in fairly general categories. We then focus on one presentation and show
that it gives rise to a traced symmetric monoidal category of causal
morphisms. This shows that causal morphisms are closed under sequential
and parallel composition and, crucially, under recursion. |
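The restriction described in the abstract can be made concrete for stream functions: a function is causal when its n-th output depends only on inputs up to position n. The sketch below is our own illustration of that idea, not code from the paper, showing two causal stream functions and their sequential composition.

```python
# Illustrative sketch (ours, not the paper's): causality for functions on
# finite prefixes of streams -- the n-th output may depend only on inputs
# at positions 0..n. Causal functions are closed under composition.

def causal_running_sum(xs):
    """Causal: output n depends only on inputs 0..n."""
    total = 0
    out = []
    for x in xs:
        total += x
        out.append(total)
    return out

def causal_delay(xs, init=0):
    """Unit delay: output n depends only on inputs 0..n-1, hence causal."""
    return [init] + list(xs[:-1])

# Sequential composition of causal functions is causal again:
composed = causal_delay(causal_running_sum([1, 2, 3, 4]))
print(composed)  # [0, 1, 3, 6]
```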
10:55 - 11:20 |
Online
Weakly Markov categories and weakly affine monads
Slides
Abstract Introduced in the 1990s in studies of the algebraic approach to graph rewriting, gs-monoidal categories (GS categories for short) are symmetric monoidal categories where each object has the structure of a commutative comonoid. They arise as the underlying structure of Kleisli categories for commutative monads on cartesian categories, and as such provide a handy tool for approaching effectful computations. Recently proposed in the context of categorical probability, Markov categories are GS categories where the monoidal unit object is also terminal, and they characterise those Kleisli categories where the monad is required to preserve such an object.
The aim of this paper is to study the different strengthenings of the monoidal structure leading from GS categories up to Markov and cartesian ones. More precisely, we focus on the introduction of weakly Markov categories, where morphisms to the monoidal unit are not necessarily unique, but form a group. As we show, these categories exhibit a rich theory of conditional independence, generalising the case of Markov categories. Just as importantly, we introduce the corresponding notion for commutative monads, which we call weakly affine, and for which we give two equivalent characterisations.
The paper argues that such monads are relevant to the study of categorical probability. A case in point is the monad of non-negative, non-zero measures, which is not affine, yet weakly so. With these structures, one can investigate probability ``up to normalisation'' in a precise categorical way. |
11:20 - 11:45 |
A Category for unifying Gaussian Probability and Nondeterminism
Slides
Abstract We introduce categories of extended Gaussian maps and Gaussian relations which unify Gaussian probability distributions with relational nondeterminism in the form of linear relations. Both have crucial and well-understood applications in statistics, engineering, and control theory, but combining them in a single formalism is challenging. Doing so enables us to rigorously describe a variety of phenomena like noisy physical laws, Willems' theory of open systems and uninformative priors in Bayesian statistics. The core idea is to formally admit vector subspaces D as generalized uniform probability distributions. Our formalism represents a first bridge between the literature on categorical systems theory (signal-flow diagrams, linear relations, hypergraph categories) and notions of probability theory. |
Operational Semantics (CALCO) Lindley 102 | |
11:45 - 12:10 |
Higher-Order Mathematical Operational Semantics
Abstract Higher-order abstract GSOS is a recent extension of Turi and Plotkin's bialgebraic framework of Mathematical Operational Semantics to higher-order languages, such as combinatory logic or the lambda-calculus. The fundamental well-behavedness property of specifications within the framework
is that, under natural conditions, coalgebraic strong and weak (bi)similarity on their operational model is a congruence. In the present paper we outline recent and ongoing developments, including the challenges posed by call-by-value semantics, abstract accounts of operational techniques such as Howe's method or logical relations, and possible extensions to quantitative settings. |
12:10 - 12:35 |
Structural Operational Semantics for Heterogeneously Typed Coalgebras
Slides
Abstract Concurrently interacting components of a modular software architecture are heterogeneously structured behavioural models. We consider them as coalgebras based on different endofunctors. We formalize the composition of these coalgebras as specially tailored segments of distributive laws of the bialgebraic approach of Turi and Plotkin. The resulting categorical rules for structural operational semantics involve many-sorted algebraic specifications, which leads to a description of the components together with the composed system as a single holistic behavioural system. We
evaluate our approach by showing that observational equivalence is a congruence with respect to the algebraic composition operation. |
12:35 - 14:00 |
Lunch
|
14:00 - 15:00 |
Lindley 102
Invited Talk: Local completeness for program correctness and incorrectness
Slides
Abstract Floyd-Hoare logic for program correctness was an eye-opening
contribution to the use of over-approximation in program verification
aimed to prove the absence of errors.
From the programmer's point of view, the benefit of program correctness
feedback within the software development ecosystem is appreciated only if
warnings are reported promptly and truthfully.
The use of over-approximation is necessary to make the correctness problem
tractable and to develop automatic tools, but inevitably it introduces some
imprecision and thus program verification tools can produce false alarms, i.e.,
errors that are reported by the analysis but that cannot happen in practice.
Peter O'Hearn's recent studies on the use of under-approximation led
to the definition of a logic for program incorrectness, which, by duality,
can be used to prove the presence of errors, and thus the absence of
false alarms.
In this talk we will overview our ongoing research on the use of the
abstract interpretation framework to combine under- and over-approximation
in the same analysis, distilling a logic for program correctness and incorrectness.
Any triple provable in the logic can be used either to guarantee the correctness
of the program or to expose some (true) errors.
The key role is played by the notion of locally complete abstraction that
provides the necessary proof obligations of logic derivations.
Notably, different abstract domains can be combined in the same derivation,
and the logic can be instantiated in different settings, such as imperative
programming languages and strategy languages for rewrite systems.
References:
Peter W. O'Hearn:
Incorrectness logic.
Proc. ACM Program. Lang. 4(POPL): 10:1-10:32 (2020)
https://doi.org/10.1145/3371078
Roberto Bruni, Roberto Giacobazzi, Roberta Gori, Francesco Ranzato:
A Logic for Locally Complete Abstract Interpretations.
LICS 2021: 1-13
https://doi.org/10.1109/LICS52264.2021.9470608
Roberto Bruni, Roberto Giacobazzi, Roberta Gori, Francesco Ranzato:
Abstract interpretation repair.
PLDI 2022: 426-441
https://doi.org/10.1145/3519939.3523453
Flavio Ascari, Roberto Bruni, Roberta Gori:
Logics for Extensional, Locally Complete Analysis via Domain Refinements.
ESOP 2023: 1-27
https://doi.org/10.1007/978-3-031-30044-8_1
Roberto Bruni, Roberto Giacobazzi, Roberta Gori, Francesco Ranzato:
A Correctness and Incorrectness Program Logic.
Journal of the ACM 70(2): 15:1-15:45 (2023)
https://doi.org/10.1145/3582267
|
15:00 - 15:30 |
Rawles 107
Break
|
Data Types and Coinduction (CALCO) Lindley 102 | |
15:30 - 15:55 |
Amortized Analysis via Coinduction
Slides
Abstract Amortized analysis is a program cost analysis technique for data structures in which the cost of operations is specified in aggregate, under the assumption of continued sequential use. Typically, amortized analyses are done inductively, in terms of finite sequences of operations; we demonstrate that coinduction provides a more natural description. We describe a classic amortized data structure, the batched queue, and describe its amortized analysis in calf, a type theory for amortized analysis. |
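The data structure named in the abstract, the batched queue, can be sketched in a few lines; the code and names below are our own illustration of the classic two-list queue, not the calf formalization. Each element moves from the enqueue side to the dequeue side at most once, so n operations cost O(n) overall: amortized O(1) each.

```python
# A minimal sketch of the classic batched (two-list) queue. The occasional
# expensive reversal is "paid for" by the cheap enqueues that preceded it --
# exactly the kind of aggregate cost claim an amortized analysis justifies.

class BatchedQueue:
    def __init__(self):
        self.front = []   # dequeue side
        self.back = []    # enqueue side, newest last

    def enqueue(self, x):
        self.back.append(x)          # O(1)

    def dequeue(self):
        if not self.front:
            # Batch step: reverse the back list into the front list.
            self.front = self.back[::-1]
            self.back = []
        return self.front.pop()      # raises IndexError if queue is empty

q = BatchedQueue()
for i in range(3):
    q.enqueue(i)
print(q.dequeue(), q.dequeue())  # 0 1
```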
15:55 - 16:20 |
CRDTs, Coalgebraically
Abstract In distributed data storage systems, conflict-free replicated data types (CRDTs) are data structures designed for replication across multiple physical locations. CRDTs must satisfy strong convergence; informally, this means that any two replicas that have received the same set of updates must agree in state. We propose investigating CRDTs and strong convergence from a coalgebraic perspective. |
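The strong convergence property the abstract mentions can be illustrated with one of the simplest state-based CRDTs, a grow-only counter; the sketch below is our own toy example, not taken from the paper. Merging is a pointwise maximum, which is commutative, associative, and idempotent, so replicas that have received the same set of updates agree in state.

```python
# Hedged illustration (ours): a G-counter CRDT. State maps replica ids to
# per-replica increment counts; merge takes a pointwise maximum.

def increment(state, replica_id):
    new = dict(state)
    new[replica_id] = new.get(replica_id, 0) + 1
    return new

def merge(a, b):
    # Pointwise max over the union of keys: commutative and idempotent.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

r1 = increment({}, "A")   # replica A increments
r2 = increment({}, "B")   # replica B increments concurrently
# Merge order does not matter, so both replicas converge:
assert merge(r1, r2) == merge(r2, r1)
print(value(merge(r1, r2)))  # 2
```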
16:20 - 16:45 |
Online
Coinductive control of inductive data types
Slides
Abstract We combine the theory of inductive data types with the theory of universal measurings. By doing so, we find that many categories of algebras of endofunctors are actually enriched in the corresponding category of coalgebras of the same endofunctor. The enrichment captures all possible partial algebra homomorphisms, defined by measuring coalgebras. Thus this enriched category carries more information than the usual category of algebras which captures only total algebra homomorphisms. We specify new algebras besides the initial one using a generalization of the notion of initial algebra. |
Logics and Institutions (CALCO) Lindley 102 | |
16:45 - 17:10 |
Many-Valued Coalgebraic Logic: From Boolean Algebras to Primal Varieties
Abstract We study many-valued coalgebraic logics with primal algebras of truth-degrees. We describe a way to lift the algebraic semantics of classical coalgebraic logics, given by an endofunctor on the variety of Boolean algebras, to this many-valued setting, and we show that many important properties of the original logic are inherited by its lifting. Then, we deal with the problem of obtaining a concrete axiomatic presentation of the variety of algebras for this lifted logic, given such a presentation for the original one. We solve this problem for a class of presentations which behaves well with respect to a lattice structure on the algebra of truth-degrees. |
17:10 - 17:35 |
Interpolation is (not always) easy to spoil
Slides
Abstract We study a version of the Craig interpolation theorem as formulated in the framework of the theory of institutions. This formulation proved crucial in the development of a number of key results concerning foundations of software specification and formal development. We investigate preservation of interpolation under extensions of an institution by new models and sentences. We point out that some interpolation properties remain stable under such extensions, even if quite arbitrary new models or sentences are permitted. We give complete characterisations of such situations for institution extensions by new models, by new sentences, and by new models and sentences together. |
18:30 |
202 S. Rogers St.
Reception: FAR Center for Contemporary Arts
|
Tuesday, June 20, 2023
9:00 - 10:00 |
Online
Lindley 102
Invited Talk: A tour on ecumenical systems
Slides
Abstract Ecumenism can be seen as the search for unicity, that is, for different thoughts, ideas or points of view to coexist in harmony. In logic, ecumenical systems refer, in a broad sense, to proof systems for combining logics. In this context, the quest of smoothly combining classical and intuitionistic connectives so that they can co-exist in peace has been a fascinating topic of research for decades now.
In 2015, Dag Prawitz proposed a natural deduction system for an ecumenical first-order logic. We start this talk by explaining Prawitz' "ecumenism" from philosophical, mathematical and computational viewpoints. We will then present a pure sequent calculus version of Prawitz' system, in the sense that connectives are introduced without the use of other connectives. This discussion can be smoothly extended to alethic modalities using Simpson's meta-logical characterization. We are then able to propose proof systems for ecumenical modal logics.
We will finish the talk by presenting a term calculus proposal for the implicational propositional fragment of the ecumenical logic, and we will show some automation that we have been developing using a framework based on rewriting logic.
|
10:00 - 10:30 |
Rawles 107
Break
|
Bisimilarity (CALCO) Lindley 102 | |
10:30 - 10:55 |
Online
Bisimilar States in Uncertain Structures
Slides
Abstract We provide a categorical notion called uncertain bisimilarity, which allows one to reason about bisimilarity combined with a lack of knowledge about the involved systems. Such uncertainty notions naturally arise in learning algorithms, where one investigates whether two observed behaviours come from the same internal state of a black-box system that cannot be transparently inspected. We model this uncertainty as a set functor equipped with a partial order which describes possible future developments of the learning game. On such a functor, we provide a lifting-based definition of uncertain bisimilarity and verify basic properties. Besides its applications to Mealy-type automata, our framework also instantiates to an existing compatibility relation on suspension automata, which are used in IOCO testing theory. We show that uncertain bisimilarity is a necessary but not sufficient condition for two states to be implementable by the same state in the black-box system. To remedy the failure of the converse direction, we characterize uncertain bisimilarity in terms of coalgebraic simulations. |
10:55 - 11:20 |
Online
Aczel-Mendler Bisimulations in a Regular Category
Slides
Abstract Aczel-Mendler bisimulations are a coalgebraic extension of a variety of computational relations between systems. It is usual to assume that the underlying category satisfies some form of the axiom of choice, so that the theory enjoys desirable properties, such as closure under composition. In this paper, we accommodate the definition in a general regular category -- which does not necessarily satisfy any form of the axiom of choice. We show that this general definition 1) is closed under composition without using the axiom of choice, 2) coincides with other types of coalgebraic formulations under milder conditions, and 3) coincides with the usual definition when the category has the regular axiom of choice. We then develop the particular case of toposes, where the formulation becomes nicer thanks to the power-object monad, and extend the formalism to simulations. Finally, we describe several examples in Stone spaces, toposes for name-passing, and modules over a ring. |
11:20 - 11:45 |
Forward and Backward Steps in a Fibration
Abstract Distributive laws of various kinds occur widely in the theory of coalgebra, for instance to model automata constructions and trace semantics, and to interpret coalgebraic modal logic. We study steps, which are a general type of distributive law, that allow one to map coalgebras along an adjunction. In this setting, we may ask what such mappings do to well known notions of equivalence, e.g., bisimilarity, behavioural equivalence, and logical equivalence.
In this paper, we address this question, using the characterisation of such notions of equivalence as (co)inductive predicates in a fibration. Our main contribution is the identification of conditions on the interaction between the steps and liftings, which guarantees preservation of fixed points by the mapping of coalgebras along the adjunction. We apply these conditions in the context of lax liftings proposed by Bonchi, Silva, Sokolova (2021), and generalise their result on preservation of bisimilarity in the construction of a belief state transformer. Further, we relate our results to properties of coalgebraic modal logics including expressivity and completeness. |
Topology and Algebras (CALCO) Lindley 102 | |
11:45 - 12:10 |
Strongly Finitary Monads For Varieties of Quantitative Algebras (Extended Abstract)
Abstract Quantitative algebras are algebras enriched in the category Met of metric spaces so that all operations are nonexpanding. Mardare, Plotkin and Panangaden introduced varieties (aka 1-basic varieties) as classes of quantitative algebras presented by quantitative equations. We prove that they bijectively correspond to strongly finitary monads T on Met. This means that T is the left Kan extension of its restriction to finite discrete spaces. An analogous result holds in the categories CMet of complete metric spaces and UMet of ultrametric spaces. |
12:10 - 12:35 |
On Kripke, Vietoris and Hausdorff Polynomial Functors
Abstract The Vietoris space of compact subsets of a given Hausdorff space yields an endofunctor~$\V$ on the category of Hausdorff spaces. Vietoris polynomial endofunctors on that category are built from~$\V$, the identity and constant functors by forming products, coproducts and compositions. These functors are known to have terminal coalgebras and we deduce that they also have initial algebras. We present an analogous class of endofunctors on the category of extended metric spaces, using in lieu of $\V$ the Hausdorff functor $\H$. We prove that the ensuing Hausdorff polynomial functors have terminal coalgebras and initial algebras. Whereas the canonical construction of terminal coalgebras for Vietoris polynomial functors takes $\omega$ steps, one needs $\omega + \omega$ steps in general for Hausdorff ones. We also give a new proof that the closed set functor on metric spaces has no fixed points. |
12:35 - 14:00 |
Lunch
|
14:00 - 15:00 |
Lindley 102
Invited Talk: The Metatheory of Gradual Typing: State of the Art and Challenges
Abstract Gradually typed languages offer both static and dynamic checking of program invariants, from simple properties such as type safety, to more advanced ones such as information flow control (security), relational parametricity, and program correctness. To ensure that gradually typed languages behave as expected, researchers prove theorems about their language designs. For example, the Gradual Guarantee Theorem states that a programmer can migrate their program to become more or less statically checked and the resulting program will behave the same (modulo errors). As another example, the Noninterference Theorem (for information flow control) states that high security inputs do not affect the low security outputs of a program. These theorems are often proved using simulation arguments or via syntactic logical relations and modal logics. Sometimes the proofs are mechanized in a proof assistant, but often they are simply written in LaTeX. However, as researchers consider gradual languages of growing complexity, the time to conduct such proofs, and/or the likelihood of errors in the proofs, also grows. As a result there is a need for improved proof techniques and libraries of mechanized results that would help to streamline the development of the metatheory of gradually typed languages.
|
15:00 - 15:30 |
Rawles 107
Break
|
Special Session "Category Theory" in Machine Learning Lindley 102 | |
15:30 - 15:40 |
Welcome and Introduction
Slides
Abstract I will introduce the session and our speakers by giving a very brief overview of the state of the art in categorical approaches to machine learning.
|
15:40 - 16:10 |
Differential Categories and Machine Learning
Slides
Abstract Differential categories provide the categorical semantics of differentiation in both mathematics and computer science. Differential categories have also found many interesting applications, most recently in relation to automatic differentiation and machine learning. In this talk, I will give an overview of how differential categories (including reverse differential categories and tangent categories) are used for machine learning and other related areas. I'll talk about the past (what's been done so far), the present (what's currently being worked on), and the future (some important questions that need to be solved).
|
16:10 - 16:40 |
A dynamic monoidal category for deep learning
Slides
Abstract A gradient descender can be regarded as an "open dynamical system" in which the state of the system (parameters to a function) is updated based on external input (feedback). Open dynamical systems can be modeled using the theory of coalgebras for polynomial functors, which can be composed in network configurations to build up complex systems from simpler parts, and the structure of series and parallel composition of these systems is described by the notion of dynamic monoidal categories. I will introduce dynamic monoidal categories, describe how gradient descenders form an example, and discuss avenues for using this abstract structure to produce concrete applications.
|
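The opening picture of the abstract above can be made concrete with a toy example of our own (not code from the talk): a gradient descender whose state is a parameter updated from external feedback, here the gradient of a simple quadratic loss supplied "from the environment".

```python
# A gradient descender viewed as an open dynamical system: one transition
# maps (current state, external input) to the next state. The loss and
# names here are our own illustrative choices.

def descender_step(theta, gradient, lr=0.1):
    """One transition of the open system: next state from state and input."""
    return theta - lr * gradient

# Minimise f(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta = 0.0
for _ in range(100):
    feedback = 2 * (theta - 3)    # input arriving from outside the system
    theta = descender_step(theta, feedback)
print(round(theta, 3))  # 3.0
```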
16:40 - 17:10 |
Is there a place for semantics in machine learning?
Abstract Machine learning, especially deep learning, has had unprecedented success in the past decade and a half and has generated much excitement beyond the confines of academia. As practitioners of formal semantics, what can we contribute? In this highly personal and opinionated talk, I will recount some successes, some promising directions and some things that I am less optimistic about. Certainly, we can try to provide proofs of correctness and gain insight into why things work or don't work. I am less optimistic that we can come up with entirely new algorithms or techniques, but it is still worth trying.
|
17:10 - 17:45 |
Panel
|
Wednesday, June 21, 2023
9:00 - 10:00 |
Online
Lindley 102
Joint Invited Talk: Machine-checked computational mathematics
Abstract Geared with increasingly fast computer algebra libraries and scientific computing software, computers have become amazing instruments for mathematical guesswork. In fact, computer calculations are even sometimes used to substantiate actual reasoning steps in proofs, later published in major venues of the mathematical literature. Yet surprisingly, few of the now-standard techniques available for verifying critical software (e.g., cryptographic components, airborne commands, etc.) have been applied to the programs used to produce mathematics. In this talk, we propose to discuss this state of affairs.
|
10:00 - 10:30 |
Rawles 107
Break
|
Joint Special Session: "Machine-checked mathematics" Lindley 102 | |
10:30 - 11:10 |
Online
Formalizing sphere eversion using Lean's mathematical library
Slides
Abstract Lean's mathematical library mathlib is a rapidly growing unified library of formalized mathematics. The library contains most of the material in a typical undergraduate curriculum, and is used as a basis to formalize many advanced results in mathematics.
One such result, which I recently formalized together with Patrick Massot and Oliver Nash, is that you can turn a sphere inside out without creasing or tearing it. We formalized this as a corollary of a much deeper theorem in differential topology, the h-principle for open and ample first order differential relations, which is a general technique for finding solutions to construction problems.
|
11:10 - 11:50 |
Synthetic Computability in Constructive Type Theory
Abstract Mathematical practice in most areas of mathematics is based on the
assumption that proofs could be made fully formal in a chosen
foundation in principle. This assumption is backed by decades of
formalising various areas of mathematics in various proof assistants
and various foundations. An area that has been largely neglected for
computer-assisted and machine-checked proofs is computability theory.
This is due to the fact that making computability theory (and its
sibling complexity theory) formal is several orders of magnitude more
involved than formalising other areas of mathematics, due to the --
citing Emil Post -- ``forbidding, diverse and alien formalisms in which
this [...] work of Gödel, Church, Turing, Kleene, Rosser [...] is
embodied''. For instance, there have been various approaches to
formalising Turing machines, all to the ultimate dissatisfaction of the
respective authors, and none going further than constructing a
universal machine and proving the halting problem undecidable.
Professional computability theorists and teachers of computability
theory thus rely on the informal Church-Turing thesis to carry out
their work and only argue the computability of described algorithms
informally.
A way out was proposed in the 1980s by Fred Richman and developed
during the last decade by Andrej Bauer: Synthetic computability theory,
where one assumes axioms in a constructive foundation which essentially
identify all (constructively definable) functions with computable
functions. A drawback of the approach is that assuming such an axiom on
top of the axiom of countable choice - which is routinely assumed in
this branch of constructive mathematics and computable analysis -
renders the law of excluded middle, i.e. classical logic, invalid.
Computability theory is however dedicatedly classical: Almost all basic
results are presented by appeal to classical axioms and even the full
axiom of choice.
We observe that a slight foundational shift rectifies the situation: By
basing synthetic computability theory in the Calculus of Inductive
Constructions, the type theory underlying amongst others the Coq proof
assistant, where countable choice is independent and thus not provable,
axioms for synthetic computability are compatible with the law of
excluded middle.
I will give an overview of a line of research investigating a
synthetic approach to computability theory in constructive type theory,
discussing suitable axioms, a Coq library of undecidability proofs,
results in the theory of reducibility degrees, a synthetic definition
of Kolmogorov complexity, constructive reverse analysis of theorems,
and synthetic oracle computability.
Parts of results are in collaboration with Dominik Kirst, Gert Smolka,
Felix Jahn, Fabian Kunze, Nils Lauermann, Niklas Mück, and the
contributors of the Coq Library of Undecidability Proofs.
|
11:50 - 12:30 |
On the exquisite pleasure of doing coinduction and corecursion in Isabelle
Slides
Abstract Coinduction is a powerful technique for defining and reasoning about infinite objects and infinite behaviors. I will show how the Isabelle/HOL proof assistant is a natural home for coinduction. I will illustrate Isabelle's compositional infrastructure for codatatypes, coinductive predicates, and corecursive function definitions. As a working example, I will give a formal proof of the equivalence between Knaster-Tarski greatest fixpoints and provability by infinite proof trees -- which is a foundational result for coinduction in proof assistants. I will also show how Isabelle facilitates the sound mixture of recursion and corecursion.
|
12:30 - 14:00 |
Lunch
|
14:00 - 15:00 |
Luddy 1106
Joint Invited Talk: Integrating Cost and Behavior in Type Theory
Slides
Abstract Joint work with Harrison Grodin (Carnegie Mellon), Yue Niu (Carnegie Mellon), and Jon Sterling (Cambridge)
The computational view of intuitionistic dependent type theory is as an intrinsic logic of (functional) programs in which types are viewed as specifications of their behavior. Equational reasoning is particularly relevant in the functional case, where correctness can be formulated as equality between two implementations of the same behavior. Besides behavior, it is also important to specify and verify the cost of programs, measured in terms of their resource usage, with respect to both sequential and parallel evaluation. Although program cost can be---and has been---verified in type theory using an extrinsic formulation of programs as data objects, what we seek here is, instead, an intrinsic account within type theory itself.
In this talk we discuss Calf, the Cost-Aware Logical Framework, which is an extension of dependent call-by-push-value type theory that provides an intrinsic account of both parallel and sequential resource usage for a variety of problem-specific measures of cost. Thus, for example, it is possible to prove that insertion sort and merge sort are equal as regards behavior, but differ in terms of the number of comparisons required to achieve the same results. But how can equal functions have different cost? To provide an intrinsic account of both intensional and extensional properties of programs, we make use of Sterling's notion of Synthetic Tait Computability, a generalization of Tait's method originally developed for the study of higher type theory.
In STC the concept of a ``phase'' plays a central role: originally as the distinction between the syntactic and semantic aspects of a computability structure, but more recently applied to the formulation of type theories for program modules and for information flow properties of programs. In Calf we distinguish two phases, the intensional and extensional, which differ as regards the significance of cost accounting---extensionally it is neglected, intensionally it is of paramount importance. Thus, in the extensional phase insertion sort and merge sort are equal, but in the intensional phase they are distinct, and indeed one is proved to have optimal behavior as regards comparisons, and the other not. Importantly, both phases are needed in a cost verification---the proof of the complexity of an algorithm usually relies on aspects of its correctness.
We will provide an overview of Calf itself, and of its application in the verification of the cost and behavior of a variety of programs. So far we have been able to verify cost bounds on Euclid's Algorithm, amortized bounds on batched queues, parallel cost bounds on a joinable form of red-black trees, and the equivalence and cost of the aforementioned sorting methods. In a companion paper at this meeting Grodin and I develop an account of amortization that relates the standard inductive view of instruction sequences with the coinductive view of data structures characterized by the same operations. In ongoing work we are extending the base of verified deterministic algorithms to those taught in the undergraduate parallel algorithms course at Carnegie Mellon, and are extending Calf itself to account for probabilistic methods, which are also used in that course.
Please note that Luddy Hall is not the same as Lindley Hall.
Luddy Hall is located at 700 N. Woodlawn Ave.,
about 0.6 mile (=1 km) north of Lindley Hall.
|
15:00 - 15:30 |
Luddy Hall Lobby
Break
|
Rewriting and Automata (CALCO) Luddy 1104 | |
15:30 - 15:55 |
String Diagram Rewriting Modulo Commutative (Co)monoid Structure
Abstract String diagrams constitute an intuitive and expressive graphical syntax that has found application in a very diverse range of fields including concurrency theory, quantum computing, control theory, machine learning, linguistics, and digital circuits. Rewriting theory for string diagrams relies on a combinatorial interpretation as double-pushout rewriting of certain hypergraphs. As previously studied, there is a `tension' in this interpretation: in order to make it sound and complete, we either need to add structure on string diagrams (in particular, Frobenius algebra structure) or pose restrictions on double-pushout rewriting (resulting in `convex' rewriting). From the string diagram viewpoint, imposing a full Frobenius structure may not always be natural or desirable in applications, which motivates our study of a weaker requirement: commutative monoid structure. In this work we characterise string diagram rewriting modulo commutative monoid equations, via a sound and complete interpretation in a suitable notion of double-pushout rewriting of hypergraphs. |
15:55 - 16:20 |
Online
Completeness for categories of generalized automata
Slides
Abstract We present a slick proof of completeness and cocompleteness for categories of $F$-automata, where the span of maps $E\leftarrow E\otimes I \to O$ that usually defines a deterministic automaton of input $I$ and output $O$ in a monoidal category $(\mathcal K,\otimes)$ is replaced by a span $E\leftarrow F E \to O$ for a generic endofunctor $F : \mathcal K\to \mathcal K$ of a generic category $\mathcal K$: these automata exist in their `Mealy' and `Moore' version and form categories $F\text{-}\mathsf{Mly}$ and $F\text{-}\mathsf{Mre}$; such categories can be presented as strict 2-pullbacks in $\mathsf{Cat}$ and whenever $F$ is a left adjoint, both $F\text{-}\mathsf{Mly}$ and $F\text{-}\mathsf{Mre}$ admit all limits and colimits that $\mathcal K$ admits.
We mechanize some of our main results using the proof assistant Agda and the library `agda-categories`.
|
16:20 - 16:45 |
Online
Generators and Bases for Monadic Closures
Slides
Abstract It is well-known that every regular language admits a unique minimal deterministic acceptor. Establishing an analogous result for non-deterministic acceptors is significantly more difficult, but nonetheless of great practical importance. To tackle this issue, a number of sub-classes of non-deterministic automata have been identified, all admitting canonical minimal representatives. In previous work, we have shown that such representatives can be recovered categorically in two steps. First, one constructs the minimal bialgebra accepting a given regular language, by closing the minimal coalgebra with additional algebraic structure over a monad. Second, one identifies canonical generators for the algebraic part of the bialgebra, to derive an equivalent coalgebra with side effects in a monad. In this paper, we further develop the general theory underlying these two steps. On the one hand, we show that deriving a minimal bialgebra from a minimal coalgebra can be realized by applying a monad on an appropriate category of subobjects. On the other hand, we explore the abstract theory of generators and bases for algebras over a monad. |
16:45 - 17:10 |
Fractals from Regular Behaviours
Slides
Abstract We are interested in connections between the theory of fractal sets obtained as attractors of iterated function systems and process calculi. To this end, we reinterpret Milner's expressions for processes as contraction operators on a compact metric space. When the space is, for example, the plane, the denotations of fixed point terms correspond to familiar fractal sets. We give a sound and complete axiomatization of fractal equivalence, the congruence on terms consisting of pairs that construct identical self-similar sets in all interpretations. We further make connections to labeled Markov chains and to invariant measures. In all of this work, important results from process calculi are used. For example, we use Rabinovich's completeness theorem for trace equivalence in our own completeness theorem. In addition to our results, we also raise many questions related to both fractals and process calculi.
|
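The "attractor of an iterated function system" idea from the abstract above can be sketched in a few lines of Python via the chaos game. This is a generic illustration only; the Sierpinski-triangle contractions are our example, not taken from the paper:

```python
# Chaos-game sketch: iterate randomly chosen contractions of the plane.
# The orbit accumulates on the attractor of the IFS (here, a Sierpinski
# triangle inside the unit square). Illustrative only.
import random

# Three affine contractions with ratio 1/2.
maps = [
    lambda p: (0.5 * p[0],        0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.5,  0.5 * p[1]),
    lambda p: (0.5 * p[0] + 0.25, 0.5 * p[1] + 0.5),
]

def attractor_points(n, seed=0):
    random.seed(seed)
    p = (0.0, 0.0)
    pts = []
    for _ in range(n):
        p = random.choice(maps)(p)  # apply one randomly chosen contraction
        pts.append(p)
    return pts

pts = attractor_points(10000)
# every iterate stays in the unit square, the invariant compact set
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in pts)
```

Plotting `pts` reveals the familiar self-similar set; the paper's contribution is an axiomatization of when two such systems of contractions denote the same set.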
MFPS Luddy 1106 | |
15:30 - 15:55 |
Online
Graded Differential Categories and Graded Differential Linear Logic
Abstract In Linear Logic ($\mathsf{LL}$), the exponential modality $\oc$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $\oc$ which encode differentiation and the ability of linearizing proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in such a way that $\oc$ is now indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ together and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$, as well as introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$. |
15:55 - 16:20 |
Algebra of Self-Expression
Abstract Typical arguments for results like Kleene's Second Recursion Theorem and the existence of self-writing computer programs bear the fingerprints of equational reasoning and combinatory logic. In fact, the connection of combinatory logic and computability theory is very old, and this paper extends this connection in new ways. In one direction, we counter the main trend in both computability theory and combinatory logic of heading straight to undecidability. Instead, this paper proposes using several very small equational logics to examine results in computability theory itself. These logics are decidable via term rewriting. We argue that they have something interesting to say about computability theory. The paper has a few possibly-surprising results such as a classification of quine programs (programs which output themselves) in two decidable fragments. The classification goes via examination of normal forms in term rewriting systems, hence the title of the paper. The classification is an explanation of why all quine programs (in any language) are "pretty much the same, except for inessential details." |
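As a concrete illustration of the quine programs classified in the abstract above, here is the classic self-reproducing program in Python, together with a small harness verifying that its output equals its source. The harness is ours and purely illustrative; the paper's classification works in decidable term rewriting systems, not in Python:

```python
# A minimal Python quine: a program whose output is its own source text.
import io
import contextlib

quine = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)\n"

# Run the quine and capture what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)

# The program prints exactly its own source.
assert buf.getvalue() == quine
```

The trick, self-application of a string through `%r`, is the same fixed-point pattern that Kleene's Second Recursion Theorem makes precise, which is why, as the abstract puts it, all quines are "pretty much the same, except for inessential details."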
16:20 - 16:45 |
Saturating Automata for Game Semantics
Slides
Abstract Saturation is a fundamental game-semantic property satisfied by strategies interpreting higher-order concurrent programs. It states that strategies must be closed under certain rearrangements of moves, used to express the intuition that program moves (P-moves) depend on moves made by the environment (O-moves).
We propose an automata model over an infinite alphabet, called saturating automata, for which all accepted languages are guaranteed to satisfy a closure property corresponding to saturation.
We show how to translate the finitary fragment of Concurrent Idealized Algol (FICA) into saturating automata, confirming their suitability for modelling higher-order concurrency. Moreover, we show that, for terms in normal form, the resultant automaton has linearly many transitions and states with respect to term size, and can be constructed in polynomial time. This is in contrast to earlier attempts at finding automata-theoretic models of FICA, which did not guarantee saturation and involved an exponential blow-up during translation, even for normal forms. |
16:45 - 17:10 |
Fixpoint constructions in focused orthogonality models of linear logic
Abstract Orthogonality is a notion based on the duality between programs and their environments used to determine when they can be safely combined. For instance, it is a powerful tool to establish termination properties in classical formal systems. It was given a general treatment with the concept of orthogonality category, of which numerous models of linear logic are instances, by Hyland and Schalk. This paper considers the subclass of focused orthogonalities.
We develop the theory of fixpoint constructions in focused orthogonality categories. Central results are lifting theorems for initial algebras and final coalgebras. These crucially hinge on the insight that focused orthogonality categories are relational fibrations. The theory provides an axiomatic categorical framework for models of linear logic with least and greatest fixpoints of types.
We further investigate domain-theoretic settings, showing how to lift bifree algebras, used to solve mixed- variance recursive type equations, to focused orthogonality categories. |
17:10 - 17:35 |
Cartesian Differential Kleisli Categories
Slides
Abstract Cartesian differential categories come equipped with a differential combinator which axiomatizes the fundamental properties of the total derivative from differential calculus. The objective of this paper is to understand when the Kleisli category of a monad is a Cartesian differential category. We introduce Cartesian differential monads, which are monads whose Kleisli category is a Cartesian differential category by way of lifting the differential combinator from the base category. Examples of Cartesian differential monads include tangent bundle monads and reader monads. We give a precise characterization of Cartesian differential categories which are Kleisli categories of Cartesian differential monads using abstract Kleisli categories. We also show that the Eilenberg-Moore category of a Cartesian differential monad is a tangent category. |
18:00 |
Upland Brewery, 350 W 11th St.
Conference Dinner
|
Thursday, June 22, 2023
9:00 - 10:00 | |
10:00 - 10:30 |
Rawles 107
Break
|
MFPS Lindley 102 | |
10:30 - 10:55 |
Online
A complete V-equational system for graded lambda-calculus
Abstract Modern programming frequently requires generalised notions of program equivalence based on a metric or a similar structure. Previous work addressed this challenge by introducing the notion of a V-equation, i.e. an equation labelled by an element of a quantale V, which covers inter alia (ultra-)metric, classical, and fuzzy (in)equations. It also introduced a V-equational system for the linear variant of lambda-calculus where any given resource must be used exactly once.
In this paper we drop the (often too strict) linearity constraint by adding graded modal types which allow multiple uses of a resource in a controlled manner. We show that such a control, whilst providing more expressivity to the programmer, also interacts more richly with V-equations than the linear or Cartesian cases. Our main result is the introduction of a sound and complete V-equational system for a lambda-calculus with graded modal types interpreted by what we call a Lipschitz exponential comonad. We also show how to build such comonads canonically via a universal construction, and use our results to derive graded metric equational systems (and corresponding models) for programs with timed and probabilistic behaviour. |
10:55 - 11:20 |
A Categorical Framework for Program Semantics and Semantic Abstraction
Abstract Categorical semantics of type theories are often characterized as
structure-preserving functors. This is because in category
theory both the syntax and the domain of interpretation are
uniformly treated as structured categories, so that we can
express interpretations as structure-preserving functors between
them. This mathematical characterization of semantics makes it
convenient to manipulate and to reason about relationships
between interpretations. Motivated by this success of functorial
semantics, we address the question of finding a functorial
analogue in abstract interpretation, a general framework for
comparing semantics, so that we can bring similar benefits of
functorial semantics to semantic abstractions used in abstract
interpretation. Major differences concern the notion of
interpretation that is being considered. Indeed, conventional
semantics are value-based whereas abstract interpretation
typically deals with more complex properties. In this paper, we
propose a functorial approach to abstract interpretation and
study associated fundamental concepts therein. In our approach,
interpretations are expressed as oplax functors in the category
of posets, and abstraction relations between interpretations are
expressed as lax natural transformations representing
concretizations. We present examples of these formal concepts
from monadic semantics of programming languages and discuss
soundness.
|
11:20 - 11:45 |
Dependent Type Refinements for Futures
Abstract Type refinements combine the compositionality of typechecking with the expressivity of program logics, offering a synergistic approach to program verification. In this paper we apply dependent type refinements to SAX, a futures-based process calculus that arises from the Curry-Howard interpretation of the intuitionistic semi-axiomatic sequent calculus and includes unrestricted recursion both at the level of types and processes. With our type refinement system, we can reason about the partial correctness of SAX programs, complementing prior work on sized type refinements that supports reasoning about termination. Our design regime synthesizes the infinitary proof theory of SAX with that of bidirectional typing and Hoare logic, deriving some standard reasoning principles for data and (co)recursion while enabling information hiding for codata. We prove syntactic type soundness, which entails a notion of partial correctness that respects codata encapsulation. We illustrate our language through a few simple examples. |
11:45 - 12:10 |
Profinite lambda-terms and parametricity
Slides
Abstract Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman’s finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type. |
12:10 - 12:35 |
Online
Jumps: the exponential logic of sequentialization
Slides
Abstract Linear logic has provided new perspectives on proof-theory, denotational semantics and the study of programming languages. One of its main successes are proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs.
For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of any sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation — sequentializing the graph — is thus a requirement for a graph to be logical. Presentations and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic.
Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization. |
12:35 - 14:00 |
Lunch
|
Special Session on Semantics and Compilers Lindley 102 | |
14:00 - 14:40 |
Semantic Intermediate Representations for Safe Language Interoperability
Slides
Abstract Designers of typed programming languages commonly prove meta-theoretic properties such as type soundness for at least a core of their language. But any practical language implementation must provide some way of interoperating with code written in other languages -- usually via a foreign-function interface (FFI) -- which opens the door to new, potentially unsafe behaviors that aren't accounted for in the original type soundness proofs. Despite the prevalence of interoperability in practical software, principled foundations for the end-to-end design, implementation, and verification of interoperability mechanisms have been largely neglected.
In this talk, I'll advocate a proof technique for ensuring soundness of practical languages, which implement interoperability using glue code that mediates interaction between languages after compilation to a common lower-level intermediate representation (IR). The technique involves building a _semantic intermediate representation_: a semantic model of source-language types as relations on terms of the lower-level IR. Semantic IRs can be used to guide the design and implementation of sound FFIs and to verify that the IR glue code used to implement conversions ensures type soundness. I'll conclude with some avenues of future work using semantic IRs as a basis for principled design of language interoperability.
|
14:40 - 15:20 |
Compiling with Call-by-push-value
Slides
Abstract Paul Levy demonstrated that call-by-push-value (CBPV) provides a natural way to decompose both the operational and denotational semantics of call-by-value and call-by-name into more fundamental primitive notions of value and computation type. In this talk we discuss how this same call-by-push-value structure naturally arises in stack-based implementation techniques for compilation of functional programming languages. We show how several standard compilation passes (continuation-passing style, closure conversion) amount to the correctness of Church-encoding in impredicative polymorphic CBPV. Further we relate CBPV to existing calculi for compilation: ANF, Monadic form, SSA, CPS and speculate on potential advantages and disadvantages of incorporating CBPV structure explicitly into compiler intermediate languages.
|
15:20 - 16:00 |
Three dimensions of compositionality in CompCert semantics
Slides
Abstract Compositional models have been designed for a broad range of language features, but using compositional semantics to formulate and establish compiler correctness theorems remains challenging. In this context, horizontal composition of program components interacts with vertical composition of compiler phases, as both must be compatible with the refinement relation used to formulate compositional semantics preservation.
In this talk I will outline how this challenge plays out in the context of the certified compiler CompCert. By borrowing concepts and tools from higher category theory, we can give a systematic account of the way composition principles interact with each other, and give string diagram algebras for various models. I will then explain how the CompCertO model can accommodate spatial as well as horizontal and vertical compositionality, extending its capabilities as a verification framework.
This is work in progress with Yu Zhang, Zhong Shao and Yuting Wang.
|
16:00 - 16:30 |
Rawles 107
Break
|
MFPS Lindley 102 | |
16:30 - 16:55 |
Dynamic Separation Logic
Slides
Abstract This paper introduces a dynamic logic extension of separation logic.
The assertion language of separation logic is extended with modalities for the
five basic instructions of separation logic: simple assignment,
look-up, mutation, allocation, and de-allocation.
The main novelty of the resulting dynamic logic is that it
allows one to combine different approaches to resolving these modalities.
One such approach is based on the standard weakest precondition calculus of separation logic.
The other approach introduced in this paper provides a novel alternative
formalization in the proposed dynamic logic extension of separation logic.
The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover. |
16:55 - 17:20 |
A denotationally-based program logic for higher-order store
Abstract Separation logic is used to reason locally about stateful programs. State of the art program logics for higher-order store are usually built on top of untyped operational semantics, in part because traditional denotational methods have struggled to simultaneously account for general references and parametric polymorphism. The recent discovery of simple denotational semantics for general references and polymorphism in synthetic guarded domain theory has enabled us to develop Tulip, a higher-order separation logic over the typed equational theory of higher-order store for a monadic version of System F{mu,ref}. The Tulip logic differs from operationally-based program logics in two ways: predicates range over the meanings of typed terms rather than over the raw code of untyped terms, and they are automatically invariant under the equational congruence of higher-order store, which applies even underneath a binder. As a result, "pure" proof steps that conventionally require focusing the Hoare triple on an operational redex are replaced by a simple equational rewrite in Tulip. We have evaluated Tulip against standard examples involving linked lists in the heap, comparing our abstract equational reasoning with more familiar operational-style reasoning. Our main result is the soundness of Tulip, which we establish by constructing a BI-hyperdoctrine over the denotational semantics of F{mu,ref} in an impredicative version of synthetic guarded domain theory. |
17:20 - 17:45 |
Semantics of multimodal adjoint type theory
Slides
Abstract We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors. |
Friday, June 23, 2023
9:00 - 10:00 |
Online
Lindley 102
Invited Talk: Incorrectness Logic for Scalable Bug Detection
Abstract Incorrectness Logic (IL) has recently been advanced as a logical under-approximate theory for proving the presence of bugs—dual to Hoare Logic, which is an over-approximate theory for proving the absence of bugs. To facilitate scalable bug detection, later we developed incorrectness separation logic (ISL) by marrying the under-approximate reasoning of IL with the local reasoning of separation logic and its frame rule. This locality leads to techniques that are compositional both in code (concentrating on a program component) and in the resources accessed (spatial locality), without tracking the entire global state or the global program within which a component sits. This enables reasoning to scale to large teams and codebases: reasoning can be done even when a global program is not present. We then developed Pulse-X, an automatic program analysis for catching memory safety errors, underpinned by ISL. Using Pulse-X, deployed at Meta, we found a number of real bugs in codebases such as OpenSSL, which were subsequently confirmed and fixed. We have compared the performance of Pulse-X against the state-of-the-art tool Infer on a number of large programs; our comparison shows that Pulse-X is comparable with Infer in terms of performance, and in certain cases its fix-rate surpasses that of Infer.
|
10:00 - 10:30 |
Rawles 107
Break
|
MFPS Lindley 102 | |
10:30 - 10:55 |
Online
A Language for Evaluating Derivatives of Functionals Using Automatic Differentiation
Slides
Abstract We present a simple functional programming language, called Dual PCF, that implements forward mode automatic differentiation using dual numbers in the framework of exact real number computation. The main new feature of this language is the ability to evaluate correctly up to the precision specified by the user -- in a simple and direct way -- the directional derivative of functionals as well as first order functions. In contrast to other comparable languages, Dual PCF also includes the recursive operator for defining functions and functionals. We provide a wide range of examples of Lipschitz functions and functionals that can be defined in Dual PCF. We use domain theory both to give a denotational semantics to the language and to prove the correctness of the new derivative operator using logical relations. To be able to differentiate functionals---including on function spaces equipped with their compact-open topology that do not admit a norm---we develop a domain-theoretic directional derivative that is Scott continuous and extends Clarke's subgradient of real-valued locally Lipschitz maps on Banach spaces to real-valued continuous maps on Hausdorff topological vector spaces. Finally, we show that we can express arbitrary computable linear functionals in Dual PCF. |
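The dual-number mechanism underlying forward-mode automatic differentiation, as used by Dual PCF, can be sketched in a few lines. This is a generic float-based illustration of the general idea only; Dual PCF itself works in exact real arithmetic and handles functionals, which this sketch does not:

```python
# Forward-mode AD with dual numbers: each value carries its derivative.
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float  # f(x)
    der: float  # f'(x)

    def __add__(self, o):
        return Dual(self.val + o.val, self.der + o.der)

    def __mul__(self, o):
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)

def sin(d: Dual) -> Dual:
    # chain rule: (sin f)' = cos(f) * f'
    return Dual(math.sin(d.val), math.cos(d.val) * d.der)

def f(x: Dual) -> Dual:
    return sin(x * x)  # f(x) = sin(x^2), so f'(x) = 2x cos(x^2)

x = Dual(1.5, 1.0)  # seed derivative 1.0 to differentiate w.r.t. x
y = f(x)
assert abs(y.der - 2 * 1.5 * math.cos(1.5 ** 2)) < 1e-12
```

Each arithmetic operation propagates the derivative component alongside the value, so evaluating `f` at a seeded input yields the directional derivative in one pass.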
10:55 - 11:20 |
Pearl's and Jeffrey's Update as Modes of Learning in Probabilistic Programming
Slides
Abstract The concept of updating a probability distribution in the light of new
evidence lies at the heart of statistics and machine learning. Pearl's
and Jeffrey's rules are two natural update mechanisms which lead to
different outcomes, yet their precise similarities and differences
remain mysterious. This paper clarifies their relationship in several
ways: via separate descriptions of the two update mechanisms in terms
of probabilistic programs and sampling semantics, and via different
notions of likelihood (for Pearl and for Jeffrey). Moreover, it is
shown that Jeffrey's update rule arises as a variational inference technique under natural independence assumptions. In terms of categorical probability theory, this amounts to an analysis of the situation in
terms of the behaviour of the multiset functor, lifted to the Kleisli
category of the distribution monad. |
11:20 - 11:45 |
A model of stochastic memoization and name generation in probabilistic programming: categorical semantics via monads on presheaf categories
Abstract Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property -- i.e. whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved.
In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions.
Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it. |
11:45 - 12:10 |
Joint Distributions in Probabilistic Semantics
Slides
Abstract Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose. |
12:10 - 12:35 |
Propositional Logics for the Lawvere Quantale
Abstract Lawvere showed that generalised metric spaces are categories enriched over [0, \infty], the quantale of the positive extended reals. The statement of enrichment is a quantitative analogue of being a preorder. Towards seeking a logic for quantitative metric reasoning, we investigate three (closely related) many-valued propositional logics over the Lawvere quantale. The basic logical connectives shared by all three logics are those that can be interpreted in any quantale, viz finite conjunctions and disjunctions, tensor (addition for the Lawvere quantale), and linear implication (here a truncated subtraction); to these we add, in turn, the constant 1 to express integer values, and scalar multiplication by a non-negative real to express general affine combinations. Propositional Boolean logic can already be interpreted in the first of these logics; Łukasiewicz logic can be interpreted in the second; Ben Yaacov's continuous propositional logic can be interpreted in the third; and quantitative equational logic can be interpreted in the third if we allow inference systems instead of axiomatic systems.
For each of these logics we develop a natural deduction system which we prove to be decidably complete w.r.t. the quantale-valued semantics. The heart of the completeness proof makes use of the Motzkin transposition theorem. Consistency is also decidable; the proof makes use of Fourier-Motzkin elimination of linear inequalities.
Strong completeness does not hold in general, even for theories over finitely many propositional variables; indeed, even an approximate form of strong completeness in the sense of Ben Yaacov (provability up to arbitrary precision) does not hold. However, we can show it for such theories having only models never mapping variables to \infty; the proof uses Hurwicz's general form of the Farkas lemma. |
12:35 - 14:00 |
Lunch
|
Special session on Categories of bidirectional processes Lindley 102 | |
14:00 - 14:40 |
Online
Introduction to categorical cybernetics
Abstract The term “categorical cybernetics” refers to two things: a general theory of categories of optics and related constructions, and a lot of specific examples. These tend to arise in topics that historically were called "cybernetics" (before that term drifted beyond recognition) - AI, control theory, game theory, systems theory. Specific examples of "things that compose optically" are derivatives (well known as backprop), exact and approximate Bayesian inverses, payoffs in game theory, values in control theory and reinforcement learning, updates of data (the original setting for lenses), and updates of state machines. I'll do a gentle tour through these, emphasising their shared structure and the field we're developing to study it.
|
14:40 - 15:20 |
Lenses and Dialectica Constructions
Slides
Abstract Dialectica categories were first introduced in my doctoral work to provide an internal characterization of Gödel's Dialectica Interpretation. Many years later it was realized that Dialectica category morphisms have the same types as lenses, for a specific definition of lenses. In this short talk, we explain how the motivation for lenses (a special case of bidirectional transformations) in terms of connecting computing systems might suggest uses for Dialectica versions of lenses.
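The type coincidence mentioned above can be made explicit; the following is a sketch in notation of my choosing, and it omits the extra condition on relations that Dialectica morphisms must additionally satisfy:

```latex
% A morphism (U, X) \to (V, Y) in a Dialectica category is a pair
% of maps with a forwards and a backwards component, and this is
% exactly the shape of a lens:
\begin{align*}
  \mathrm{Dial}\big((U, X), (V, Y)\big) &\ni
    \big(\, f : U \to V,\;\; F : U \times Y \to X \,\big) \\
  \mathrm{Lens}\big((S, S'), (A, A')\big) &\ni
    \big(\, \mathrm{get} : S \to A,\;\; \mathrm{put} : S \times A' \to S' \,\big)
\end{align*}
```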
|
15:20 - 16:00 |
Optics: the Algebra of Monoidal Decomposition
Slides
Abstract Optics have gained popularity in recent years. They appear in differential categories, compositional game theory, bidirectional accessors, process dependency and causality. At the same time, they remain a bit mysterious: why are they so effective? We claim that, apart from their usual monoidal tensor (⊗), optics have a second, hidden, monoidal tensor (⊲). This tensor structure is the crucial ingredient for some applications of optics, but it hides in plain sight because it is non-representable: the tensor of two objects is not again an object, but only a presheaf. Once we can clearly see this second tensor, optics become a non-representable duoidal algebra, and in a precise sense the universal such algebra over a monoidal category.
This talk is based on "The Produoidal Algebra of Process Decomposition", a recent joint work with Matt Earnshaw and James Hefford.
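For reference, the usual (representable) tensor of optics arises from the standard coend formula; the following is a sketch of that folklore definition, not taken from the talk:

```latex
% Optics over a monoidal category (C, \otimes, I): a coend over the
% residual M that the forwards pass leaves behind for the backwards pass.
\mathbf{Optic}\big((A, A'), (B, B')\big)
  \;=\; \int^{M \in C} C\big(A,\; M \otimes B\big) \times C\big(M \otimes B',\; A'\big)
```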
|
16:00 - 16:30 |
Rawles 107
Break
|
MFPS Lindley 102 | |
16:30 - 16:55 |
Online
Implicative models of set theory
Abstract We will show that using Miquel's implicative algebras one can produce models of intuitionistic set theory IZF generalizing both realizability models à la Friedman/Rosolini/McCarty and Heyting-valued models. Since Miquel showed that every tripos over Set is equivalent to one arising from an implicative algebra, this has the consequence that, if one assumes the inaccessible cardinal axiom, then every topos which is obtained from a Set-based tripos using the tripos-to-topos construction hosts a model of intuitionistic set theory. |
16:55 - 17:20 |
Online
A topological counterpart of well-founded trees in dependent type theory
Abstract Within dependent type theory, we provide a topological counterpart of well-founded trees (W-types for short) by using a proof-relevant version of the notion of inductively generated suplattices, introduced in the context of formal topology under the name of “inductively generated basic covers”. In more detail, we show, firstly, that in Homotopy Type Theory, W-types and proof-relevant inductively generated basic covers are propositionally mutually encodable. Secondly, we prove they are definitionally mutually encodable in the Agda implementation of intensional Martin-Löf type theory. Finally, we reframe the equivalence in the Minimalist Foundation framework. All the results have been checked in the Agda proof assistant. |
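In a non-dependent setting, W-types can be approximated as fixpoints of polynomial (container) functors. A minimal Haskell sketch (only an analogue, since Haskell is not dependently typed; names hypothetical):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Non-dependent analogue of a W-type: well-founded trees as the
-- fixpoint of a polynomial functor f.
newtype W f = Sup (f (W f))

-- Example shape: the polynomial functor whose W-type is the naturals.
data NatF x = Zero | Succ x deriving Functor

zero :: W NatF
zero = Sup Zero

suc :: W NatF -> W NatF
suc = Sup . Succ

-- The recursion principle for W (a catamorphism): well-foundedness
-- of the trees is what makes this fold reach every node.
foldW :: Functor f => (f r -> r) -> W f -> r
foldW alg (Sup t) = alg (fmap (foldW alg) t)

toInt :: W NatF -> Int
toInt = foldW $ \t -> case t of
  Zero   -> 0
  Succ n -> n + 1
```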