Tuesday, July 17, 2018

FLOC and Urban

I went to FLOC, the Federated Logic Conference, in Oxford, just for one day, Friday 13th July.  It turned out to be a lucky day.  I did most of a "Summit on Formal Methods meets Machine Learning", without winning any jackpots.  Maybe I missed the best bit; I ducked out towards the end to see "Three Research Vignettes in honour of Mike Gordon".

The one thing which stood out for me, to take away from the "summit", was a little snippet about progress on SAT solvers in the first decade of this century: they improved by about two orders of magnitude through algorithmic improvement alone, not from hardware acceleration.  Around this, in my head, I wove a little story about why we should not use neural nets for everything, even if we might, perhaps, in principle.  Why did I need that little story?  Well, because I'm talking about "Cognitive Architecture", and what I have to say probably needs to sound a lot like GOFAI (good old-fashioned AI), even though I do recognise many places in the architecture where neural nets are desirable.  So I want ways of defending a GOFAI/ML/DL hybrid monster.  It shouldn't need a defence, but in some quarters anything with a whiff of GOFAI is non-PC.
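To make the SAT anecdote concrete, here is a minimal DPLL-style solver, an illustrative sketch of my own rather than anything presented at the summit.  Modern solvers layer conflict-driven clause learning, watched literals, and restart and branching heuristics on top of this basic scheme, and it is refinements of that algorithmic kind which account for much of the speed-up.

```python
# Minimal DPLL-style SAT solver (illustrative sketch only).
def dpll(clauses, assignment=None):
    """clauses: list of clauses; a clause is a list of non-zero ints,
    where a negative int denotes a negated variable."""
    if assignment is None:
        assignment = {}
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            # Drop clauses already satisfied under the current assignment.
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue
            remaining = [l for l in clause if abs(l) not in assignment]
            if not remaining:
                return None  # conflict: every literal in the clause is false
            simplified.append(remaining)
        clauses = simplified
        for clause in clauses:
            if len(clause) == 1:  # unit propagation: this literal is forced
                lit = clause[0]
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return assignment  # every clause satisfied
    var = abs(clauses[0][0])  # branch on the first unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```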

After the tribute to Mike Gordon there was a reception.  At the reception I met Josef Urban, whose work I had been aware of and found much to my liking, particularly in its direction and breadth of vision.  That was probably the most important thing which happened to me at FLOC 2018.  I can tell, because I have since been reviewing his ideas, and I am now intent on orienting the account I am trying to give of my own ideas so that its relation to Urban's is clear.

In this post I will sketch explicitly some of the connections.

My starting point is dreams and vision.   Urban cites "three semantic dreams".  Semantics is a particular emphasis in his writing, and though I too attach great strategic importance to (formal) semantics, I have not yet well understood the role which semantics plays in Urban's ideas.

The "semantic dreams" are:
 (i) The dream of general Artificial Intelligence, and in particular AI that helps with general scientific reasoning and research.
(ii) The dream of Automated Reasoning, and in particular Automated Theorem Proving that could attack nontrivial mathematical theorems.
(iii) The QED dream, of all of mathematics (and perhaps also programming and all exact sciences backed by mathematics) being done in a form that is fully understandable to computers, verified for correctness by computers, and assisted in various semantic ways by computers.
These might compress down to AI/AR/QED, noting however that no mention is made of "common sense"; the whole endeavour is scientifically oriented.  So as far as AI is concerned it would, for most observers, fall short of the (present) holy grail of "general Artificial Intelligence", GAI.

Urban's vision is not a mere amalgamation of these.  His describing them as "semantic dreams" adds a perspective or emphasis which I have not previously seen, and he quickly expands on the dreams in the following way:

 My basic motivation is the dream of scientific AI. I cannot imagine any other future than the one in which computers will to a very large extent semantically understand, assist, and develop not just mathematics and computer science, but also physics, chemistry, biology, medicine, engineering, economics, law, politics, etc.
 In his "dreams" Urban is clearly talking about important ideals rather than visionary people.  It is clear that he does have some distinguished visionaries in mind (notably Leibniz, perhaps John McCarthy), but they don't appear because his list is of ideals, not of people.

I think of myself as engaged in a contemporary sequel to the ambitious projects of three prior philosophers, Aristotle, Leibniz and Carnap, all of whom I see as visionary advocates of deductive reason in its broadest application, and of the automation of reason.

These two triples of inspiration are complementary.  So it is of interest, looking in greater detail, to see how Urban's ideas and my own fit together, and to consider what kind of synthesis from them might be possible.

Urban uses the words "semantic", "understanding" and "reason".  I like to talk about the "deductive paradigm", and the "deductive/computational paradigm shift", both of which have a lot to do with semantics and reason.  In my next blog post I will talk about these ideas and see whether they fit together.


Tuesday, July 10, 2018

"Synthetic Cognitive Architecture"

This is about the title, how the words in it are intended, and how the whole should be taken.

The word "synthetic" here is to be contrasted with "natural".  I am concerned, not with the architecture of some naturally occurring cognitive system (such as the human brain), but rather with articulating and evaluating architectures for artificial cognitive systems.

The word "cognitive" means here "concerned with knowledge", subject to a couple of qualifications.  The cognitive systems of interest are built around a hierarchically organised body of propositions (its knowledge base), together with propositions about those propositions (meta-knowledge) concerning the strength of the evidential support for the propositions.  But which of these propositions would pass the bar to qualify as knowledge in natural English is not our concern.  More generally, the synthetic philosophy we are engaged in here is similar to (but a little less compromising than) some aspects of Rudolf Carnap's philosophy.  The concepts we work with are to be construed according to the definitions and descriptions we give for them; they will typically not be the same as the ordinary-language concepts going by the same names, and are not intended to explain, nor even to "explicate", those concepts.
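As an illustrative sketch only (the names, the numeric support values, and the "weakest link" convention are mine, not part of the proposal): a hierarchical body of propositions, each carrying meta-knowledge about the strength of its evidential support, might be modelled like this.

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    """A proposition in the knowledge base, with meta-knowledge attached."""
    statement: str
    support: float  # meta-knowledge: evidential strength, here in [0, 1]
    children: list = field(default_factory=list)  # hierarchical sub-propositions

    def weakest_link(self):
        """One possible convention: a claim is no better supported
        than the weakest proposition beneath it in the hierarchy."""
        return min([self.support] + [c.weakest_link() for c in self.children])

kb = Proposition("All observed ravens are black", 0.95, [
    Proposition("Field survey A found only black ravens", 0.99),
    Proposition("Field survey B found only black ravens", 0.90),
])
```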

My second caveat distances me from the discipline of Cognitive Science, which seems primarily concerned with human cognition and the human brain, or with artefacts intended to simulate, realise or surpass human intelligence.  In particular, I am not concerned here with cognitive systems which perfectly mimic human cognition.  The systems of interest here might betray their non-humanity in a "Turing test" by their inhumanly high intelligence, and probably won't do or understand a lot of what an intelligent Homo sapiens does.

So to "architecture": what is this?  Well, no, I'm not thinking of a detailed account of the physical structure of some building.  Software systems have architectures too, an architecture being perhaps the highest-level description of the structure of the system.  I use the term more broadly here, for those considerations which should come first in the process of designing some artefact.  There is a caveat here as well.  What comes first in design is requirements analysis, getting clear on what the system is intended to achieve, which I think precedes architecture rather than being a part of it.  I have to do some of that, so that we can all make our own assessments of whether the architectural proposals are likely to deliver, but strictly speaking, it's not part of the architecture.

What are those considerations which should come first in the design of cognitive systems?  I believe there are a number of philosophical issues which need to be resolved, which impact upon what the system is intended to achieve and how it does it.  The first are probably epistemological and foundational, but the full breadth I will leave open.  Of course, it's weird to consider resolving philosophical questions as part of a design process.  These things have been debated for millennia and never resolved.  But in practice, this is what cognitive scientists do when they build software for artificial intelligence: they resolve the issues by making choices rather than by discovering the truth, though you may need to read between the lines to discover the driving philosophy.  It is this method which I am calling synthetic philosophy, provided that it is explicit and systematic, and the earliest architectural work for cognitive systems should, I suggest, be primarily that kind of philosophy: choosing or constructing the philosophical scaffolding needed to build cognition, a conceptual framework and a set of ground rules for the enterprise.

As to the whole, I don't think I need to say any more today.

Monday, July 9, 2018

Synthetic Cognitive Architecture

This is not my only moribund blog; I have two others.  My unfulfilled aspirations are like London buses.

Well, here goes again.  I may revive all three for a while; let's see.

My silence of late does not betoken a terminal change of direction on my part, but rather a further deterioration in the tenuous connection between what I am thinking and what I am writing (mostly nothing), mitigated only by some wiki writing for GitHub "projects" conceived as an alternative approach to my one principal objective (discussed here).
The direction of my intellectual pretensions remains: to add a chapter of my own to the story which began with Aristotle's conception of demonstrative science, progressed to Leibniz's idea of a calculus ratiocinator providing a decision procedure for the whole of science, and then to Carnap's ambition, armed with the more adequate modern logics available to him, to facilitate rationality in science through its formalisation.

Carnap pre-dated, and probably did not anticipate, the digital electronic computer, so he conceived of himself as, in some respects, following in the steps of Frege and Russell, rather than Leibniz.  His project for science failed for many reasons, which we need not go into right now.  It would have been sufficient that the complexity of formal proofs makes reasoning so arduous as to prevent its adoption by scientists, even had they felt the need for greater deductive rigour.  The advent of the digital computer changed all that.  Leibniz's project (within the limits which modern theory imposes upon it: at best a partial decision procedure) became more plausible, with computational machinery advancing by leaps and bounds.  That machinery might make the burden of formality less onerous and its prospective benefits more alluring.

With the advent of the computer, a third centre of academic competence in logic, more vigorous and better funded than philosophy or mathematical logic, took over in its own way the thrust towards formality and its automation.  This new discipline had of necessity to embrace formal notations, and to mechanise their use.  Admittedly, most of these new formal notations were programming languages, suitable for describing algorithms, but not for the formalisation of science.  The complexity and unreliability of the resulting software begat the ambition to formally verify that software, and this led some computer scientists (and some industrial corporations) into stronger formal systems of more general applicability.  The use of computers for formal deduction has continued to advance throughout almost the entire history of electronic computation, and the capabilities of the resulting systems have advanced considerably.  It nevertheless remains the case that formal reasoning about non-trivial problems, even with the best machine assistance, takes considerably more time than traditional journal-style proofs.  The mass of extra detail which characterises formal proofs, in contrast with the extant tradition in mathematics, is easily coped with by computers.  But finding a viable proof strategy requires mathematical intelligence of a kind so far beyond the achievements of AI.
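To give a feel for that mass of detail: a fact which a journal proof would dismiss in half a line, such as the commutativity of addition on the natural numbers, becomes an explicit induction when fully formalised.  A sketch in Lean 4 (an example of my own; the lemma names Nat.succ_add and Nat.add_succ come from Lean's core library):

```lean
-- Informally: "addition is commutative -- obvious".
-- Formally: an induction whose every step names the lemma it uses.
example (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp                                       -- 0 + b = b + 0
  | succ n ih => rw [Nat.succ_add, Nat.add_succ, ih]   -- push succ through, then use the IH
```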

Hopefully, this defect of intelligence in mathematical software will be remedied, and my own project anticipates that remedy, as well as aspiring to contribute to it.  The stage is set, by these approaching advances, for the realisation, not only of Carnap's logicist project, but of something which eclipses in scope the ambitions of Aristotle, Leibniz and Carnap.

A particular feature of the three precedents I have cited is their breadth of scope, encompassing the whole of science.  Though there is much research in Computer Science which might contribute to such a project, there is nothing which encompasses the whole.  Thinking about that whole is my preoccupation, and writing about it my aspiration.  The advances of computer science, and of cognitive science, now make even those broader conceptions too narrow, and my own project embraces and advocates a wholesale paradigm shift from computation over data to deduction over propositions.  This I now characterise as Synthetic Cognitive Architecture, an enterprise which belongs to a new kind of constructive philosophy, one which may be seen as a successor to Carnap's Logical Positivism.  This is philosophy as engineering.

At the core of this architecture is a logical system which facilitates inference across huge and diverse bodies of propositionally interpretable data.  A first essay at such a system was the set of ideas I christened X-Logic (the X, as in XML, connoting linguistic pluralism), ostensibly the subject matter of my blog of the same name.  In that same blog I now propose to discuss not only the logical core of the cognitive architecture, but, more broadly, the epistemology underpinning the system.

Carnap's corner may be in the game too, where I hope to say something about the relationship between this embryonic architecture and its distinguished (if ill-fated) predecessors.