Theory and practice

In the early 90s, software engineers working on problems related to mine would often say:

In theory, there is no difference between theory and practice. But, in practice, there is.

We agreed that, ideally, theory and practice would correspond, and that it created real problems when they didn’t.

I had originally been recruited as a mathematician with an interest in logic, presumably with the expectation that I would be useful to the development of computer science and its applications, only to find a complete non-meeting of minds with many software engineers and theoreticians of the day. But I did get to use computers as a tool, and some colleagues who had worked with or otherwise engaged with Turing, but who weren’t themselves computer insiders, noted that my use of computers was more like his, more to their liking, and inconsistent with the theoretical thinking of the time. So I spent two years with the remit of trying to convert the theoreticians, and used the opportunity to read what was then available from Turing. While I developed some lasting relationships with the computer folk, I failed to convert them. The human-sciences explanation for my failure is that I failed to understand ‘where they were coming from’, which I accept. My proposed remedy was to get more of Turing’s work published, and more of its implications publicised. (While this has happened, it still hasn’t been effective in changing minds, and I still don’t know why. But to get back to the time-line … )

After my failure to change computing theory (despite a sympathetic hearing in some quarters), I carried on working with practical engineers. As a team we were trying to establish what could reasonably be done with the infrastructure (operating systems, databases etc) of the time, within ‘the state of the art’, to anticipate what was around the corner, try to get a head-start on applications, and even help the process when we could. (Engaging with vendors, standards bodies, big and ‘bleeding edge’ users etc as appropriate.) We were trying to bridge the gap between what people who thought of computers in Turing’s terms thought ought to be possible, and what was reasonable with the technology of the day.

Some of my work involved commenting on proposed specifications for firmware and software, often involving emulations (with the help of those software engineers). The intention was that the engineers could develop and demonstrate applications that would sort-of, kind-of work well enough to illustrate their potential, and then be able to port them onto off-the-shelf computers as those became available. I was officially concerned with the strategic aspect of this (identifying which sources to back in the race to develop more functional infrastructure), but inevitably got drawn into the engineering aspects in order to justify my strategy.

The primary issue that emerged was that some vendors would over-promise and under-deliver and – what was worse – their manuals would often document the intended behaviour, rarely the actual. The difference could be critical, particularly at the ‘bleeding edge’. Fortunately we were able to develop good relations with some big users who could pressure vendors to better align their documentation with their software, in some cases making use of our home-spun software to show that the specifications were achievable. But, as we all know and suffer from, there is a continual process of alignment between theory and practice, even now.

Thus the quote above is two-sided, and when meeting new people it was always useful to deploy it and see which side they thought ought to change: theory or practice. The truth, of course, is that both sides often need to find creative changes in order to align them, while not departing too much from what users expected (based on the original descriptions) and what engineers were used to doing (based on earlier practice and precedents).

At the time my main area of concern was ‘transaction control’, and in particular the development of mechanisms and a theory that would allow one to reason about processes that occur concurrently. The problem was then regarded as solved, but the solution only really addressed concurrency of the computer code, and could break down as soon as users communicated outside the system. It seemed, and still seems, to me that Turing had a perfectly adequate solution (generalized semaphores) but that the available implementations were too simplistic (in neglecting outside communications). It seemed to me (and may still be the case) that one could solve this practically, if only you could get the computing community to ‘own’ the issue. (Some seemed quite happy to think of users as appendages of the machine, with no other significance.) But a key issue was data representation.
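
As a rough illustration of the kind of mechanism involved (a minimal sketch, not a reconstruction of Turing’s own scheme), a counting or ‘generalized’ semaphore limits how many transactions may touch a shared resource at once; all the names below are illustrative.

```python
import threading
import time

MAX_CONCURRENT = 2                    # permits available at once
permits = threading.Semaphore(MAX_CONCURRENT)
events = []                           # interleaved start/end log
events_lock = threading.Lock()        # protects the log itself

def transaction(tx_id):
    with permits:                     # blocks while MAX_CONCURRENT others hold permits
        with events_lock:
            events.append(("start", tx_id))
        time.sleep(0.01)              # stand-in for real work on the resource
        with events_lock:
            events.append(("end", tx_id))

threads = [threading.Thread(target=transaction, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the sketch is the invariant: however the five transactions interleave, at no moment are more than `MAX_CONCURRENT` of them between their ‘start’ and ‘end’ events. It also shows the limitation complained of above: the semaphore knows nothing about any communication the users may have outside the system.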

Turing’s approach was aligned with that of his tutor, Russell, who advocated a relational view of the world. At the time, relational databases were becoming popular, and seemed to me a great advance on previous approaches. A weakness was that they were only designed to accommodate finished, fact-like data. But in many applications you start with initial, tentative, partial, suspect data and want to work towards something that is more finished, fact-like and ‘actionable’. The use of technology (and the associated theories) that assumes fact-like data is a problem!

My approach was, and is, to represent what is fact-like, and put that in the database. For example, from time to time different countries will take a different view as to which other regions are legitimate countries, so ‘facts’ relating to countries are not absolute facts, but relative to time and to authority. If one can adequately represent the underlying actuality in the database, the apparent ‘facts’ can be derived. A difficulty is that in practice the ‘facts’ are often – like nation states – socially constructed, and there is no prospect of any construct – social or otherwise – capturing any underlying actuality. It seems to me that the mainstream data modelling approach adopted the theory that countries were fixed things: what was needed was a theory that reflected actual practice, even if this highlighted difficulties in maintaining such data.
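
To make the country example concrete, here is a hedged sketch (the table and names are my own invention, not drawn from any system I used) of storing recognition as a time- and authority-relative relationship, from which the apparent ‘facts’ are then derived by query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE recognises (
        authority  TEXT,     -- who asserts the recognition
        territory  TEXT,     -- what is recognised as a country
        valid_from INTEGER,  -- year the recognition starts
        valid_to   INTEGER   -- year it ends (NULL = still current)
    )""")
conn.executemany(
    "INSERT INTO recognises VALUES (?, ?, ?, ?)",
    [
        ("A", "X", 1950, None),
        ("A", "Y", 1950, 1990),   # A withdrew recognition of Y in 1990
        ("B", "Y", 1950, None),
    ],
)

def countries(authority, year):
    """Derive the fact-like list of countries relative to an authority and a time."""
    rows = conn.execute(
        """SELECT territory FROM recognises
           WHERE authority = ? AND valid_from <= ?
             AND (valid_to IS NULL OR valid_to > ?)
           ORDER BY territory""",
        (authority, year, year),
    ).fetchall()
    return [r[0] for r in rows]
```

With this data, `countries("A", 1980)` yields both X and Y, while `countries("A", 2000)` yields only X: the same underlying records support different ‘facts’ relative to authority and time, rather than treating “Y is a country” as an absolute fact.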

One could follow Russell by modelling the recognition of countries as relationships. But relational databases don’t directly support this. Confusingly, object-oriented databases, which were newly fashionable, do actually allow some flexibility in modelling processes, and in this sense are more relational than relational databases (!) But what we lacked then was any overarching theory to keep engineers ‘on the rails’. Actually, this may not be quite true: what we lacked was any community that had an adequately institutionalised interpretation of any adequate theory. This still seems to be the case. But does it matter now?

So far, I have simply documented a story that I suspect many will sympathize with. But even if it matters, what can be done about it? The overwhelming advice I have had is that if you want to change something, you first need to understand it. I am struggling to do this. It seems to me that many of our problems of contemporary life can be seen – as in Russell’s time – as symptoms of the institutionalised use of inappropriate logics. So why do we cling to the old ‘classical’ logics? One reasonable theory is that people find it easier to blame bad people than bad logics. But why?

Now the new, speculative, ‘insights’:

From the point of view of Russell et al, a relational or ‘process’ view is more logical. In mathematical logic there may be many ‘object-based’ models of a given theory. There is a distinction between conclusions that follow from the process logic alone and those which are ‘forced’ in the sense that they hold in all possible models. Mathematicians are generally ambivalent about which is appropriate to empirical theory. Most applied mathematicians work in fields in which the process theory is strong enough to imply that all models are equivalent, in which case the point is moot. In Physics the ‘Copenhagen agreement’ is essentially an agreement to disagree, or at least not to talk about it.

Since Kant, psychologists have generally thought that humans are somehow hard-wired to think ‘objectively’. Certainly mainstream western languages (such as English) seem better suited to object-based reasoning. Most physicists try to explain their theories in an object-based way, but it is not always clear (at least to me) what they really think. More recently, behavioural economists have noted that people don’t think ‘rationally’. It seems to me that a possible explanation is that the psychologists’ rationality is object-based, so maybe many people have relational (or ‘process’) tendencies which are being adapted to the constraints of language and the dominant cultures. In that case, maybe we should try to better align theory and effective practice by re-conceptualising them in a process-like way.

I have some hopes for this, in that mathematical probability theory seems to me often misinterpreted, as if according to some wrong-headed belief, whereas a more relational interpretation seems much more reliable. I have similar thoughts about categorisation and discrimination, but based on less study. So maybe the elusive wrong-headedness is object-based thinking, and maybe this is correctable, or at least more work-around-able than was thought.

I intend to review some more published material of relevance to this issue and then discuss some implications, but meanwhile I have a related distraction to deal with …

Dave Marsay
