Tuesday, July 10, 2018

"Synthetic Gognitive Architecture"

This is about the title, how the words in it are intended, and how the whole should be taken.

The word "synthetic" here is to be contrasted with "natural".  I am concerned, not with the architecture of some naturally occurring cognitive system (such as the human brain), but rather with articulating and evaluating architectures for artificial cognitive systems.

The word "cognitive" means here "concerned with knowledge", subject to a couple of qualifications.  The cognitive systems of interest are built around a hierarchically organised body of propositions (its knowledge base), and with propositions about those propositions (meta-knowledge) concerning the strength of the evidential support for the propositions.  But which of these propositions would pass the bar to qualify as knowledge in natural English is not our concern.  More generally here, the synthetic philosophy we are engaged in is similar to (but a little less compromising than) some aspects of Rudolf Carnap's philosophy. The concepts we work with are to be construed according to the definitions and descriptions we give for them, they will typically not be the same as ordinary language concepts going by the same name and are not intended to explain, nor even to "explicate" those concepts.

My second qualification distances me from the discipline of Cognitive Science, which seems primarily concerned with human cognition and the human brain, or with artefacts intended to simulate, realise or surpass human intelligence.  In particular, I am not concerned here with cognitive systems which perfectly mimic human cognition.  The systems of interest here might betray their non-humanity in a "Turing test" by their inhumanly high intelligence, and probably won't do or understand a lot of what an intelligent Homo sapiens does.

So to "architecture", what is this?  Well, no, I'm not thinking of a detailed account of the physical structure of some building.  Software systems have architectures too, which might be the highest level description of the structure of the system.  I use the term more broadly here, for those considerations which should come first in the process of designing some artefact.  There is a caveat here as well.  What comes first in design is requirements analysis, getting clear on what the system is intended to achieve, which I think precedes architecture rather than being a part of it.  I have to do some of that, so that we can all make our own assessments of whether the architectural proposals are likely to deliver, but strictly speaking, it's not part of the architecture.

What are those considerations which should come first in the design of cognitive systems?  I believe there are a number of philosophical issues which need to be resolved, and which bear upon what the system is intended to achieve and how it achieves it.  The first of these are probably epistemological and foundational, but I will leave the full breadth open.  Of course, it's weird to consider resolving philosophical questions as part of a design process; these things have been debated for millennia and never resolved.  But in practice this is what cognitive scientists do when they build software for artificial intelligence: they resolve the issues by making choices rather than by discovering the truth, though you may need to read between the lines to discover the driving philosophy.  It is this method which I am calling synthetic philosophy, provided that it is explicit and systematic.  The earliest architectural work for cognitive systems should, I suggest, be primarily that kind of philosophy: choosing or constructing the philosophical scaffolding needed to build cognition, a conceptual framework and a set of ground rules for the enterprise.

As to the whole, I don't think I need to say any more today.
