**1.1.**
Mis-quoting Gauss and Eric Temple Bell,
Logic is the Queen and Servant of Mathematics.

Ever since Logic became a mathematical discipline in the nineteenth century, foundational discussions (whether using sets, types or categories) have been based on four premises:

- logicians know best, and mathematicians should be grateful for what they are given;
- it is up to mathematicians to glue the continuum back together from the dust that logicians have provided; whilst
- they have convinced each other that foundations should permit *arbitrary* abstraction of mathematical processes; and
- anyone who tinkers with the foundations risks bringing the whole edifice of science crashing down.

**1.2.** The first two of these attitudes flew in the face of the tradition of
the preceding two millennia, in which, for example, Euclid proved lots of
*rigorous theorems* about lines, circles and other figures
*as things in themselves*.
Frege and Hilbert believed the third, and were admonished for it by
Russell and Gödel.
We now have a compromise situation in which the admissible forms of
abstraction are in equilibrium with the extent to which the typical
mathematician understands them. The result of this is that we debate
the “truth” of the axiom of choice, *etc.*, without any point of reference
in the real world against which to assess it.

So it is impossible to axiomatise *arbitrary* abstraction,
nor is the *status quo* clearly defined.

There is, however, a natural *baseline* for foundations,
namely *computation* (also called *recursion*),
because the Church–Turing thesis says that all forms of computability
are essentially equivalent.
Mathematics in the time of Gauss and before was computational.
As we shall see, this also agrees with many of the intuitions of
modern general topology.
However, we also need to consider logically stronger systems,
in particular in order to study the *behaviour* of computation
and topology, for example whether a program terminates or an open
subset covers a compact one.

**1.3.**
What do we mean by “foundations” in mathematics anyway?

The basics are *counting* and *measurement*.
Over the millennia, mathematicians have found clever ways of deriving
results about these things by means of inventions
whose originally fictitious nature is still remembered in their names,
such as *irrational*, *imaginary* and *ideal* numbers.

For example, the front-line issue in mathematics during the sixteenth century
was how to solve cubic equations.
Scipione del Ferro and Niccolò Tartaglia had found
a method for the case where there is only one (real) root.
Rafael Bombelli showed how to extend this to the case
where there are three real roots,
but his *intermediate* calculations involved square roots of −1,
which had not been needed in the original case [FG87, §8.A].

The foundational question here is not so much how to “define” √−1
as to show that, if we introduce it in the middle of a calculation
but then eliminate it, we will not obtain a contradiction.
More precisely, whatever we do get by this method could have been
found using real numbers alone.
In logical jargon, complex numbers are a *conservative extension*.
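Bombelli's own example makes this concrete: the cubic x³ = 15x + 4 has the real root x = 4, yet Cardano's formula expresses it as ∛(2 + 11i) + ∛(2 − 11i), and the imaginary parts cancel. The following numerical sketch (an illustration, not part of the historical text) traces the calculation:

```python
import cmath

# Bombelli's cubic: x^3 = 15x + 4, with the obvious real root x = 4.
# Cardano's formula for x^3 = 3px + 2q gives
#   x = (q + sqrt(q^2 - p^3))^(1/3) + (q - sqrt(q^2 - p^3))^(1/3).
p, q = 5.0, 2.0                       # here 3p = 15 and 2q = 4
disc = cmath.sqrt(q**2 - p**3)        # sqrt(4 - 125) = sqrt(-121) = 11i
u = (q + disc) ** (1/3)               # principal cube root of 2 + 11i, i.e. 2 + i
v = (q - disc) ** (1/3)               # principal cube root of 2 - 11i, i.e. 2 - i
x = u + v                             # the imaginary parts cancel: x = 4

assert abs(x - 4) < 1e-9              # the final answer is real
assert abs(x**3 - (15*x + 4)) < 1e-6  # and satisfies the original cubic
```

The square roots of −1 appear only in the intermediate values `disc`, `u` and `v`; the final answer is the real number that could, in principle, have been found by real methods alone.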

So what is their value? Just try doing Fourier analysis without them, and see what a mess you get into!

An *axiom* was originally a statement that is obvious and
needs no justification, but the meaning of this word has changed.
It is still a *starting point*, but one that is carefully chosen
to facilitate the development of a particular body of abstract theory.
The purpose of *foundations* should be reinterpreted in the same way,
on a larger scale.

**1.4.**
As the quotation from Laplace says, language and notation can be
powerful driving forces behind a mathematical theory,
whilst a bad notation (such as using the letters i, v, x, l, c, d and m
to write numbers) obstructs even the simplest task.
As the subject has got more powerful and sophisticated, so the design
of language and notation has become a professional discipline in itself.
This discipline is, or at least ought to be, called Logic.
It has now enjoyed well over a century of its own mathematical development
and applications.

However, this does not justify the prescriptive role
that Logic*ists* and their successors have claimed.
The “orthodox” foundations of mathematics were based on the ideas
from quite a short part of the history of Logic,
and even in this period were not the most important things
that were going on in Mathematics.
Moreover, the *conceptual* structure of the modern subject,
in particular Emmy Noether’s Modern Algebra,
developed after the foundations that were allegedly designed for it.
At the very least, these foundations should be reviewed
in the light of other developments over the last century
and the technological needs of the present one.

**1.5.** The philosophical thesis of this paper is that we can
make Logic the *servant* of (a particular discipline in) Mathematics.
Starting from what we see as the principal theorems of the Mathematics
that we want to do,
we employ certain results that link categorical and symbolic Logic
as *tools* to create a new language for our chosen
Mathematical discipline.
The formal method that we propose is set out in
the next two sections.

In motivating this, we shall use the metaphor that a logician is like
an *engineer* or an *architect* who has been commissioned to
design a new gadget or building for a *client*, who in turn
has *customers*.
However, we stress that our proposal is not just philosophy,
but exploits extensive
technical development in a number of mathematical disciplines.

Being a mathematician, I discuss this thesis only briefly, and make my points perhaps more tersely than philosophers and historians would do; I leave it to them to explore these points more fully. I feel compelled to say some of these things because I have found it impossible to do a substantial piece of research on the reformulation of general topology within the “orthodox” framework that my colleagues, referees, conferences, journals, universities and funding agencies say that I should use.

**1.6.**
The instruments that we shall apply to designing a foundation *for*
(some part of) mathematics are category theory and modern symbolic logic
(*i.e.* proof theory and type theory).
Very many excellent Mathematicians
“work in an elementary topos” or “in Martin-Löf type theory”,
but if we were to do this we would once again be bowing
to (other) Logicians as our masters.
So, in order to employ these disciplines as servants of mathematics,
we make our own *selection* from the menu of techniques that they offer.

Category theory grew out of (algebraic) topology and algebraic geometry,
without any foundational pretensions in the beginning
[ML88, McL90].
It is now used throughout mathematics,
primarily for the notion of *universal property* or *adjunction*.
Mathematicians in other fields who have some feeling for the unity
of the subject express the headline theorems of their speciality in this form.
The categorical approach seems to be very effective
in taking decades’ worth of experience in one discipline,
distilling it into adjunctions and a very small number
of other abstract but widely applicable concepts,
and transferring it to other subjects.
These now include the theoretical parts of computer science and physics.

Thus the client’s brief to the architect of the new foundations will be provided as universal properties.

On the other hand, symbols are the currency of the everyday business
of mathematics, so
they must be the way in which the new foundations will be *used*.
However, any *pre-existing* symbolic language acquiesces to
a lot of foundational prejudgements without giving them proper scrutiny.
We aim to replace these assumptions with our own principles.
These are, in the first instance, formulated using category theory,
because it is *agnostic*.

We shall demonstrate the *inter-relationship* between category theory
and symbolic logic in the next section.
It is this that makes them together a powerful method for devising
new language and notation for mathematics.

**1.7.** The greater part of this paper is a survey of a research programme (ASD)
that applies the proposed methodology in the case where the client is
general topology, and its customers include geometric topologists,
analysts and theoretical computer scientists.

The new theory axiomatises continuity *directly*,
without any recourse whatever to set theory or its usual alternatives.
It begins from ideas concerning the duality of algebra and topology
that were introduced by Marshall Stone
(§§4, 5 & 7).
These are put in an abstract form (hence the name ASD)
using a monadic adjunction (§6),
along with an algebraic equation that characterises the way
in which the Sierpiński space uniquely classifies open subspaces
(§8).
A surprising amount of the basic theory of open, closed, compact and
Hausdorff spaces and subspaces can be recovered in this setting,
and the resulting theory is computable, at least in principle.
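For orientation, the classifying property invoked in §8 can be stated in standard topological notation (the symbols 𝒪(X) for the lattice of open subspaces and C(X, Σ) for continuous maps are our glosses here, not the paper's own formalism):

```latex
% Continuous maps into the Sierpinski space \Sigma correspond to
% open subspaces, by taking the inverse image of its open point \top:
\[
  \mathrm{C}(X, \Sigma) \;\cong\; \mathcal{O}(X),
  \qquad f \;\longmapsto\; U = f^{-1}(\top).
\]
```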

However, when we try to apply this theory to discrete mathematics
and computation,
we find that we need *something* to play the role of “sets”.
For this, we use those spaces that come with ∃ and =,
which we call *overt discrete*
(§9).
They can be made to behave like traditional set theory if we add
the non-computable hypothesis that all spaces have “underlying sets”
(§10).

The infinitary joins in general topology make a surprisingly late explicit appearance here, completing the re-axiomatisation of locally compact spaces (§11) and providing the basis for recursive elementary real analysis. To go beyond that, however, we need to re-think the underlying categorical framework (§12), which is work in progress [M].

The purpose of this paper is to tell the whole story of ASD, from beginning to end, and it summarises work that would fill a book. This means that it has to take a “broad brush” approach, and it is not encyclopedic. If you are looking for the details of a particular topic, you should therefore consult the articles that are cited here.

Since many of the details of the theory are still missing,
the time is not ripe to write a textbook about ASD.
A few arguments are given here in full if they are foundationally
important but presented inadequately in the other papers.
In particular, we give the whole of the proof that these methods
characterise an elementary topos *without* mentioning
subobjects (§9.3).

**1.8.**
The most prominent mathematical achievement of the ASD programme so far
is the account it gives of recursive elementary real analysis,
*including the Heine–Borel theorem*.
This states that [0,1] ⊂ ℝ is compact,
in the sense that any cover of it by a family of open subspaces
contains a finite sub-family that also covers it [I].

This is the practical reason why no computable foundations of analysis
that are appropriate for mathematics have previously been developed.
Suppose we take ordinary set theory and analysis on the one hand
and computability theory on the other,
putting them together in the obvious way,
by defining ℝ as
the *set of computably representable real numbers*.
Then we find that this important result *fails*.
Roughly speaking, this is because we may *list* the computable
real numbers *a*_{0}, *a*_{1}, … according to the codes that represent them,
and cover each of them by intervals of length ¼, ⅛, …,
no finite subset of which would suffice.
See [BR87, §3.4] for a very clear account of this phenomenon.
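The same phenomenon can be imitated with any enumerable dense set. In the sketch below (an illustration only, with the rationals standing in for the listed computable reals, and the function name our own), the intervals have total length below ½, so no family of them, finite or otherwise, can cover the whole of [0,1], even though every listed point is covered:

```python
from fractions import Fraction

def rationals_in_unit_interval():
    """Enumerate the rationals in [0,1] without repetition
    (a stand-in for listing computable reals by their codes)."""
    seen = set()
    q = 1
    while True:
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r
        q += 1

# Cover the n-th listed point a_n by an open interval of length
# 1/2^(n+2), i.e. 1/4, 1/8, ..., exactly as in the text.
N = 50
cover = []
total_length = Fraction(0)
for n, a in zip(range(N), rationals_in_unit_interval()):
    radius = Fraction(1, 2 ** (n + 3))
    cover.append((a - radius, a + radius))
    total_length += 2 * radius       # this interval has length 1/2^(n+2)

# The lengths form a geometric series summing to below 1/2, so however
# many intervals we take, they cannot cover [0,1], which has length 1.
assert total_length < Fraction(1, 2)
```

Exact rational arithmetic makes the measure bound an identity rather than a floating-point estimate: after N steps the total length is exactly ½ − 1/2^(N+1).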

The Heine–Borel theorem *is* valid in ASD, on the other hand, so
it is more in line with both the traditions and applications of topology.

**1.9.** The applications of Abstract Stone Duality to elementary real analysis
in [J] also demonstrate the utility of the new
language. That paper provides an entirely different introduction to
ASD that is aimed at a general mathematical audience — it assumes only
first year analysis and a facility with formal systems, completely
avoiding the underlying category theory.
It is complementary to the account in this paper,
neither of them assuming familiarity with the other.

Although we advocate computable foundations, we shall say very little about practical implementation here. This is a very interesting topic in itself, particularly in relation to elementary analysis, so it will be studied more extensively in future work [Bau08][K].