Tuesday, July 17, 2018

FLOC and Urban

I went to FLOC, Federated LOgic Conferences, in Oxford, just for one day, Friday 13th July.  It turned out to be a lucky day.  I did most of a "Summit on Formal Methods meets Machine Learning", without winning any jackpots.  Maybe I missed the best bit, I ducked out towards the end to see "Three Research Vignettes in honour of Mike Gordon".

The only thing which stood out for me, to take away from the "summit", was a little snippet about progress on SAT solvers in the first decade of this century: that they improved by two orders of magnitude from algorithmic improvements alone (not from hardware acceleration).  From this I wove, in my head, a little story about why we should not use neural nets for everything, even if, perhaps, in principle we could.  Why did I need that little story?  Well, because I'm talking about "Cognitive Architecture", and what I have to say probably needs to sound a lot like GOFAI (good old-fashioned AI), even though I do recognise many places in the architecture where neural nets are desirable.  So I want ways of defending a GOFAI/ML/DL hybrid monster.  It shouldn't need a defence, but in some quarters anything with a whiff of GOFAI is non-PC.

After the tribute to Mike Gordon there was a reception.  At the reception I met Josef Urban, whose work I had been aware of and found much to my liking, particularly in its direction and breadth of vision.  That was probably the most important thing which happened to me at FLOC 2018.  I can tell, because I have been reviewing his ideas, and I am now intent on orienting the account I am trying to give of my own ideas so that its relation to Urban's ideas is clear.

In this post I will sketch explicitly some of the connections.

My starting point is dreams and vision.  Urban cites "three semantic dreams".  Semantics is a particular emphasis in his writing, and though I too attach great strategic importance to (formal) semantics, I have not yet fully understood the role which semantics plays in Urban's ideas.

The "semantic dreams" are:
 (i) The dream of general Artificial Intelligence, and in particular AI that helps with general scientific reasoning and research.
(ii) The dream of Automated Reasoning, and in particular Automated Theorem Proving that could attack nontrivial mathematical theorems.
(iii) The QED dream, of all of mathematics (and perhaps also programming and all exact sciences backed by mathematics) being done in a form that is fully understandable to computers, verified for correctness by computers, and assisted in various semantic ways by computers.
which might compress down to AI/AR/QED, noting however that no mention is made of "common sense"; the whole endeavour is scientifically oriented.  So, as far as AI is concerned, it would for most observers fall short of the (present) holy grail of "general Artificial Intelligence", GAI.

Urban's vision is not a mere amalgamation of these.  His describing these as "semantic dreams" adds a perspective or emphasis which I have not previously seen, and he quickly expands on the dreams in the following way:

 My basic motivation is the dream of scientific AI. I cannot imagine any other future than the one in which computers will to a very large extent semantically understand, assist, and develop not just mathematics and computer science, but also physics, chemistry, biology, medicine, engineering, economics, law, politics, etc.
In his "dreams" Urban is clearly talking about important ideals rather than visionary people, though it is clear that he does have some distinguished visionaries in mind (notably Leibniz, perhaps John McCarthy); they don't appear because that is not what his list is about.

I think of myself as engaged in a contemporary sequel to the ambitious projects of three prior philosophers, Aristotle, Leibniz and Carnap, all of whom I see as visionary advocates for deductive reason in its broadest application, and for the automation of reason.

These two triples of inspiration are complementary.  So it is of interest, looking in greater detail, to see how Urban's ideas and my own fit together, and to consider what kind of synthesis from them might be possible.

Urban uses the words "semantic", "understanding" and "reason".  I like to talk about the "deductive paradigm", and the "deductive/computational paradigm shift" which have a lot to do with semantics and reason.  In my next blog post I will talk about these ideas and see whether they fit together.


Tuesday, July 10, 2018

"Synthetic Gognitive Architecture"

This is about the title, how the words in it are intended, and how the whole should be taken.

The word "synthetic" here is to be contrasted with "natural".  I am concerned, not with the architecture of some naturally occurring cognitive system (such as the human brain), but rather with articulating and evaluating architectures for artificial cognitive systems.

The word "cognitive" means here "concerned with knowledge", subject to a couple of qualifications.  The cognitive systems of interest are built around a hierarchically organised body of propositions (the knowledge base), together with propositions about those propositions (meta-knowledge) concerning the strength of the evidential support for the propositions.  But which of these propositions would pass the bar to qualify as knowledge in natural English is not our concern.  More generally here, the synthetic philosophy we are engaged in is similar to (but a little less compromising than) some aspects of Rudolf Carnap's philosophy.  The concepts we work with are to be construed according to the definitions and descriptions we give for them; they will typically not be the same as ordinary-language concepts going by the same name, and are not intended to explain, nor even to "explicate", those concepts.
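To fix ideas, and only for that, here is a minimal sketch in Haskell of a structure of roughly the shape just described: a hierarchically organised body of propositions, plus meta-knowledge recording evidential support.  The names, and the reduction of support to a single number, are illustrative assumptions of mine, not part of the architecture being described.

    -- Hypothetical names throughout; nothing here is drawn from an actual design.
    data Proposition = Proposition
      { propName   :: String          -- identifier for the proposition
      , statement  :: String          -- the proposition itself, here just its text
      , subsidiary :: [Proposition]   -- hierarchical organisation: subordinate propositions
      } deriving Show

    -- Meta-knowledge: a proposition about a proposition, reduced here to a
    -- numeric degree of evidential support (an assumption made for illustration).
    data Support = Support
      { supports :: String            -- name of the proposition supported
      , strength :: Double            -- assumed degree of support, e.g. in [0,1]
      } deriving Show

    data KnowledgeBase = KnowledgeBase
      { propositions  :: [Proposition]
      , metaKnowledge :: [Support]
      } deriving Show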

My second caveat distances me from the discipline of Cognitive Science, which seems primarily concerned with human cognition and the human brain, or with artefacts intended to simulate, realise or surpass human intelligence.  In particular, I am not concerned here with cognitive systems which perfectly mimic human cognition.  The systems of interest here might betray their non-humanity in a "Turing test" by their inhumanly high intelligence, and probably won't do or understand a lot of what an intelligent Homo sapiens does.

So to "architecture": what is this?  Well, no, I'm not thinking of a detailed account of the physical structure of some building.  Software systems have architectures too, which might be the highest-level description of the structure of the system.  I use the term more broadly here, for those considerations which should come first in the process of designing some artefact.  There is a caveat here as well.  What comes first in design is requirements analysis, getting clear on what the system is intended to achieve, which I think precedes architecture rather than being a part of it.  I have to do some of that, so that we can all make our own assessments of whether the architectural proposals are likely to deliver, but strictly speaking, it's not part of the architecture.

What are those considerations which should come first in the design of cognitive systems?  I believe there are a number of philosophical issues which need to be resolved, and which bear both on what the system is intended to achieve and on how it achieves it.  The first are probably epistemological and foundational, but the full breadth I will leave open.  Of course, it's weird to consider resolving philosophical questions as part of a design process.  These things have been debated for millennia and never resolved.  But in practice, this is what cognitive scientists do when they build software for artificial intelligence.  They resolve the issue by making a choice rather than by discovering the truth, though you may need to read between the lines to discover the driving philosophy.  It is this method which I am calling synthetic philosophy, provided that it is explicit and systematic.  The earliest architectural work for cognitive systems should, I suggest, be primarily that kind of philosophy: choosing or constructing the philosophical scaffolding needed to build cognition, a conceptual framework and a set of ground rules for the enterprise.

As to the whole, I don't think I need to say any more today.

Monday, July 9, 2018

Synthetic Cognitive Architecture

This is not my only moribund blog; I have two others.  My unfulfilled aspirations are like London buses.

Well here goes again, I may revive all three for a while, let's see.

My silence of late does not betoken a terminal change of direction on my part, but rather a further deterioration in the tenuous connection between what I am thinking and what I am writing (mostly nothing), mitigated only by some wiki writing for GitHub "projects" conceived as an alternative approach to my one principal objective (discussed here).
The direction of my intellectual pretensions remains: to add a chapter of my own to the story which began with Aristotle's conception of demonstrative science, progressed first to Leibniz's idea of a calculus ratiocinator, providing a decision procedure for the whole of science, and then to Carnap's ambition, armed with the more adequate modern logics available to him, to facilitate rationality in science through its formalisation.

Carnap pre-dated, and probably did not anticipate, the digital electronic computer, so he conceived of himself as, in some respects, following in the steps of Frege and Russell, rather than Leibniz.  His project for science failed for many reasons, which we need not go into right now.  It would have been sufficient that the complexity of formal proofs makes reasoning so arduous as to prevent its adoption by scientists, even had they felt the need for greater deductive rigour.  The advent of the digital computer changed all that.  Leibniz's project (within the limits which modern theory imposes upon it - at best a partial decision procedure) became more plausible, with computational machinery advancing by leaps and bounds.  That machinery might make the burden of formality less onerous and its prospective benefits more alluring.

With the advent of the computer, a third centre of academic competence in logic, more vigorous and better funded than philosophy or mathematical logic, took over in its own way the thrust towards formality and its automation.  This new discipline had of necessity to embrace formal notations, and to mechanise their use.  Admittedly, most of these new formal notations were programming languages, suitable for describing algorithms but not for the formalisation of science.  The complexity and unreliability of the resulting software begat the ambition to formally verify that software, and this led some computer scientists (and some industrial corporations) into stronger formal systems of more general applicability.  The use of computers for formal deduction has continued to advance throughout almost the entire history of electronic computation, and the capabilities of the resulting systems have advanced considerably.  It nevertheless remains the case that formal reasoning about non-trivial problems, even with the best machine assistance, takes considerably more time than traditional journal-style proofs.  The mass of extra detail which characterises formal proofs, in contrast with the extant tradition in mathematics, is easily coped with by computers.  But finding a viable proof strategy requires mathematical intelligence of a kind so far beyond the achievements of AI.

Hopefully, this defect of intelligence in mathematical software will be remedied, and my own project anticipates that remedy, as well as aspiring to contribute to it.  The stage is set, by these approaching advances, for the realisation, not only of Carnap's logicist project, but of something which eclipses in scope the ambitions of Aristotle, Leibniz and Carnap.

A particular feature of the three precedents I have cited is their breadth of scope, encompassing the whole of science.  Though there is much research in Computer Science which might contribute to such a project, there is nothing which encompasses the whole.  Thinking about that whole is my preoccupation; writing about it, my aspiration.  The advance of computer science, and of cognitive science, now makes even those broader conceptions too narrow, and my own project embraces and advocates a wholesale paradigm shift from computation of data to deduction over propositions.  This I now characterise as Synthetic Cognitive Architecture, an enterprise which belongs to a new kind of constructive philosophy which may be seen as a successor to Carnap's Logical Positivism.  This is philosophy as engineering.

At the core of this architecture is a logical system which facilitates inference across huge and diverse propositionally interpretable data.  A first essay at such a system was the set of ideas I christened X-Logic (the X, as in XML, connoting linguistic pluralism), ostensibly the subject matter of my blog of the same name.  In that same blog I now propose to discuss not only the logical core of the cognitive architecture, but more broadly the epistemology underpinning the system.

Carnap's corner may be in the game too, where I hope to say something about the relationship between this embryonic architecture and its distinguished (if ill-fated) predecessors.


Tuesday, September 5, 2017

Ontological Absoluteness

Going back to this topic from long ago: not being myself a fan of Wittgenstein, I would refer everyone to Carnap, whose position on ontology I think clear and sensible.

It is articulated most concisely perhaps in his paper "Empiricism, Semantics, and Ontology", though the internal/external distinction for ontological questions which he introduced there was not well received.

Anyway, its relevance here is that it provides a basis for affirming the objectivity of set-theoretic truths (internal) without accepting that questions about what sets exist have any absolute truth value (these would be external questions).

More on this belongs perhaps at Carnap Corner.

Giving up. Or not.

I pretty much gave up.

Not without good cause.  Too many years of wanting but failing to articulate.

A lot of that is down to a taciturn nature.
I don't talk, so naturally, I don't write.

The other factor is lack of audience.  There is no audience which wants to hear the message I want to deliver, and partly for that reason, none that I could hope would even understand it.

I won't delve deeper into the pathology.

So why am I here?

Well, I tried to give up, but failed.
Failed to find an alternative (one must surely do something?).


Sunday, November 13, 2011

Ontological Absoluteness

I've been following the EFI project at Harvard.
It's a series of workshops "exploring the frontiers of incompleteness", mostly on the philosophy of set theory.
The next talk is by William Tait, so I had a look at his paper (the papers are made available in advance of the talks), and that paper led to this post.

I have had this problem from time to time. I believe in the objective truth of statements of set theory (once you have specified which set theory you are talking about, by which I mean, once you have fixed the meaning of the language) but I don't consider myself a platonist.
I sometimes come across people who don't understand this.
They think that if you believe in the objective truth of set theory you must be a "Platonist".


So I look at Tait's paper, and he is talking about Wittgenstein's "Philosophical Investigations".  I didn't read it all, so I probably have the wrong end of the stick, but that doesn't matter, because what I'm writing about is some small change in my own thinking which came from it.  What I took him to be doing was arguing against the idea that for the semantics of set theory to be definite there must exist entities, the sets, which the terms in the language denote.  This is what Wittgenstein calls the "Augustinian" theory of language, and the Philosophical Investigations provides a critique of this theory.
Well, this is something which had occurred to me too.  Not that I had worked it out in any detail, but it had occurred to me that reference to Wittgenstein would help people to understand that set theory could be quite definite in meaning, and hence that statements in set theory might have definite truth values, without there needing to be absolute truths about what sets there are.

Monday, July 26, 2010

Active Zones

In my grand scheme of things there are at present three books in play, rather remote and implausibly ambitious objectives.

The things I am thinking about and "working on" right now are less grandiose, and possibly less implausible.
I hope that these pieces of the elephants might be more intelligible to others and so I thought I would give a short account here.

The thing which has consumed the greatest portion of my time of late has been my attempts to analyse formally (with ProofPower) the substance of Grice's "Vacuous Names".
This is conceived of as a part of my large projected tome of formal philosophy (which I call "analyses of analysis"), but since it is provoked more by the logic of my dialogue with Speranza than by the aims of that tome, my conception of the work has gradually been evolving as I get a better idea of how it may be fitted into the historical themes which dominate the volume.
In that context it is an episode in the development of semantics, and is helping me to get my mind around how to address that topic in the larger context.

The Gricean material is an exemplar of a method of analysis with ProofPower, making use of shallow embeddings, and part of what I am trying to do, at the same time as the Grice paper, is to get this method (and the particular languages and tools I do it with) written up in a form which might conceivably be intelligible to some philosophers.  To this end I started a chapter on HOL, in which I would like to give an account of the language and logic closely based on Church's paper on his Simple Theory of Types.  There is a place for this (to appear), but I have been struggling to find the right approach to it.
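For readers unfamiliar with the term "shallow embedding" used above, here is a minimal sketch of the idea, written in Haskell rather than ProofPower-HOL purely for illustration: the constructs of the embedded language are mapped directly onto constructs of the host, so no separate syntax datatype or interpretation function is introduced.  All the names are my own illustrative choices; nothing here is drawn from the Grice work itself.

    -- A shallow embedding: the embedded connectives are defined directly as
    -- Haskell functions, so "propositions" are host-language predicates over
    -- individuals, and their semantics is inherited from the host.
    type Individual = String
    type Prop = Individual -> Bool

    pAnd, pOr :: Prop -> Prop -> Prop
    pAnd p q x = p x && q x
    pOr  p q x = p x || q x

    pNot :: Prop -> Prop
    pNot p x = not (p x)

    -- A toy predicate, true of exactly one individual.
    flies :: Prop
    flies = (== "Pegasus")

    -- "Analysis" of a closed claim is just evaluation in the host language:
    -- this instance of not (P and not P) evaluates to True for any individual.
    example :: Bool
    example = pNot (flies `pAnd` pNot flies) "Marmaduke"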

The last active zone is nominally X-Logic, but the first bit of the new formal models for X-Logic needs a bit of lattice theory, and the lattice theory may not strictly need, but is nevertheless serving as an excuse for, an excursion into Universal Algebra (to appear), which itself has contributed a little impetus to my recent love affair with the proposition.  The interest here is in doing Universal Algebra in a type theory in a way which is actually useful in the development of more concrete algebraic theories such as lattice theory.  The danger with high levels of abstraction is that the cost of using the abstract theory is just as large as the cost of doing the work without it.  The first test will be in obtaining a sufficient theory of quotient lattices from a more general theory of algebraic quotients.  The connection with the proposition is an aspect of the reuse problem.  Universal algebra provides more abstract or more general results which subsume some of the results obtainable in specific algebras.  What does this relationship between abstract and concrete theories tell us about the notion of proposition?
This is a conceptually simple sandbox for thinking about such matters, relative to the much greater difficulties in making sense of what happens in category theory.
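As an illustration of the reuse question, and nothing more, here is a minimal sketch in Haskell type classes (not the type theory actually used for the formal work) of how something stated once at the abstract level becomes available in every concrete instance.  The names, and the choice of Booleans as the instance, are my own assumptions.

    -- An abstract structure defined once.
    class Lattice a where
      meet :: a -> a -> a
      join :: a -> a -> a

    -- A concrete instance: Booleans under conjunction and disjunction.
    instance Lattice Bool where
      meet = (&&)
      join = (||)

    -- A notion defined at the abstract level (the induced order), inherited
    -- without further work by every instance.
    leq :: (Lattice a, Eq a) => a -> a -> Bool
    leq x y = meet x y == x

    -- e.g. leq False True == True, leq True False == False

The interesting question, of course, is whether the abstraction still pays its way at the level of quotient algebras rather than Boolean toys.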

RBJ