A Perspectives model can be executed by the Perspectives Distributed Runtime (or PDR; we also call it The Core). Each co-operating user deploys the PDR on a device of his own. These PDRs exchange information automatically. In this text we describe the operation of the PDR in logical terms. We also explain a rather unique interpretation of the model-view-control GUI paradigm, based on logic. We show how Perspectives uses the Closed World Assumption and negation by failure, and deploys Truth Maintenance. The system achieves high data integrity because each modification is checked against the relevant model on each instance of the PDR, thus precluding hacking to a great extent.
Logical representation
A model, whether entered visually in the Perspectives Diagram Language (PDL) or textually in the Perspectives Representation Language (PRL), is represented in terms of contexts and roles. PDL and PRL have more constructs than just contexts and roles, so they must be seen as higher languages. Higher, that is, than the basic language of the system, in which one can only write down contexts and roles and which is aptly called the Context Role Language (CRL). Elsewhere we discuss how PRL and PDL are compiled into CRL and how CRL is represented as JSON documents.
Contexts and roles have a name. A context functions as a namespace. So, two contexts may each have a role with the same name without causing confusion, because the ‘real’ names of the roles are qualified with the context name: context name + local role name = qualified role name.
A context is always also represented by a role, its external role. Because of this we can embed a context in another context, just by filling a role of the embedding context with the external role of the embedded context.
By a similar qualification mechanism, a context name is qualified with its embedding context. A model is just another context, but not embedded in anything. Its name functions as the root of a namespace hierarchy. So each context and role in CRL has a unique name and that makes it possible to put them in a flat list and never confuse them.
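To make this concrete, here is a minimal sketch of the qualification mechanism in TypeScript. The “$” separator and the model name are illustrative assumptions, not the actual CRL syntax.

```typescript
// A sketch of name qualification. The "$" separator and the model name
// are illustrative assumptions, not the actual CRL syntax.
const model = "model:DinnerParty"; // a model is the root of a namespace hierarchy

// A context name is qualified with its embedding context (here: the model).
const arrangement = `${model}$SeatingArrangement`;
const kitchen = `${model}$Kitchen`;

// A role name is qualified with its context:
// context name + local role name = qualified role name.
const arrangementSeat = `${arrangement}$Seat`;
const kitchenChair = `${kitchen}$Chair`;

// Two contexts may each have a role with the same local name without confusion:
console.log(arrangementSeat); // model:DinnerParty$SeatingArrangement$Seat
console.log(kitchenChair);    // model:DinnerParty$Kitchen$Chair
```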
It is easy to represent contexts and roles as logical facts. There are just a few things we need to write down. For a role:
- what values do its properties have?
- by what role (if any) is it filled?
- to what context does it belong?
Strangely enough, we have nothing to add for a context! Nothing, that is, if we have described its external role (and there is an internal role, too). Just think: a context is nothing more than a bunch of roles (including its internal and external roles) and we’ve already stated for each role to what context it belongs. So, logically, we just need to represent the three types of facts listed above in order to represent a Perspectives model as a logical theory.
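As an illustration, a role instance could be represented by a record holding exactly those three kinds of facts. This is a sketch; the field names are ours, not the actual JSON representation of CRL.

```typescript
// A sketch of the three kinds of facts that describe a role instance.
// Field names are illustrative, not the actual JSON representation of CRL.
interface RoleInstance {
  id: string;                           // qualified, unique name
  properties: Record<string, string[]>; // what values do its properties have?
  filledBy?: string;                    // by what role (if any) is it filled?
  context: string;                      // to what context does it belong?
}

// A seat in the dinner party, filled by a chair from the kitchen:
const seat7: RoleInstance = {
  id: "dinnerParty$Seat_7",
  properties: { seatNumber: ["7"] },
  filledBy: "kitchen$Chair_2",
  context: "dinnerParty",
};

// Note there is no separate record for a context: it is nothing more
// than the set of roles whose `context` field points to it.
```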
Types and instances. We might model a business meeting, introducing roles such as chair and participant. The meeting itself we would represent with a context. However, a particular meeting – as distinguished from the type of the meeting – would be a context, too. Usually we can ignore the difference between type and instance, as the surrounding text will make clear what we mean. For the record we note that when we talk about a context as part of a model, we mean context type. But when a user ‘enters a context’, we refer to a context instance. Context types (as written down in PRL, for example) are compiled to context instances. The Context Role Language just handles context instances. In contrast, it is not possible to write down a context instance in the Perspectives Representation Language. In the literature on knowledge representation systems, this distinction is usually found in terms of the T-box (theory) and A-box (assertions). But again, this is nothing but a convention: down below it is all just logical facts (or contexts and roles, in the case of Perspectives).
Queries
Imagine a dinner party. We’ll model it using a seating arrangement, among other things. We’ll represent this arrangement with a context that has ‘seat’ as a role. Each seat has a number so we can refer to it easily. More people attend than usual, so there are not enough chairs in the dining room. We take a chair from the kitchen and put it at the table. Let’s suppose we’ve represented the kitchen, too, with a context and that it has roles for its chairs. Now, the proper way to deal with that kitchen chair in the dining room is to fill an arrangement seat role with a kitchen chair role. We can now express what seat number the kitchen chair has. But that number is not attached to that chair forever; it has nothing to do with the kitchen, just with the temporary arrangement during the dinner party.
Now consider the kitchen knife that is used to cut the meat at the table. It belongs in the kitchen (has a role in the kitchen), and it bears properties that are relevant in that context. But in its role at the dinner party, it has no extra properties. Unlike the kitchen chair, we need not say anything about that knife that is relevant in the dinner party context and not relevant in the kitchen context. In such a case, we need not create a role for the knife in the dinner party and fill it with the role the knife plays in the kitchen. We can just refer to the kitchen knife role in the dinner party context.
Such a role is called a calculated role. Calculated, because it is represented in the dinner party context by a recipe for finding that role, starting from the dinner party context. This is what is commonly called a query. In general, queries are paths through the graph of contexts and roles. A query path may be quite long! PRL contains various operators to build a query from. Common operations are:
- move from a role to its context
- move from a context to a role
- move from a role to its filling role
- etc.
Logically, a query translates to an expression with a free variable. The query result then is just a list of substitutions for that variable such that they make the expression logically true. Notice the plural: a query may have zero, one or more results. The simplest kind of query is just a role name. Notice that a role (as a type) may have many instances in the same context, as with ‘seat’ in a seating arrangement!
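A sketch of this reading of queries: an expression with one free variable, evaluated against a flat list of role facts, yielding every substitution that makes the expression true. All names below are illustrative.

```typescript
// Sketch: a query as an expression with one free variable x; the result
// is every substitution for x that makes the expression true.
type Fact = { id: string; context: string; filledBy?: string };

const facts: Fact[] = [
  { id: "dinnerParty$Seat_1", context: "dinnerParty" },
  { id: "dinnerParty$Seat_2", context: "dinnerParty", filledBy: "kitchen$Chair_2" },
  { id: "kitchen$Chair_2", context: "kitchen" },
];

// The simplest kind of query: just a role name ("x is a Seat of dinnerParty").
const isSeat = (x: Fact) => x.context === "dinnerParty" && x.id.includes("$Seat");
console.log(facts.filter(isSeat).map((f) => f.id));
// -> ["dinnerParty$Seat_1", "dinnerParty$Seat_2"]: zero, one or more results.

// A longer path: from each seat, move to its filling role, then to the
// context of that filler ("where does the filler of this seat belong?").
const fillerContexts = facts
  .filter(isSeat)
  .flatMap((s) => facts.filter((f) => f.id === s.filledBy))
  .map((f) => f.context);
console.log(fillerContexts); // -> ["kitchen"]
```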
A user interface for the core
We’ve seen how a model and user data can be conceived of as a logical theory, that is: a bunch of simple facts for contexts and roles and expressions with a free variable for queries. Now how does this relate to a program with an interface that someone can actually use?
First of all, a user needs to be able to see something. Some of that logical information has to be displayed. To pick up the example given above, the dinner host wants to have a look at his seating arrangement. Now, notice the active sense of the previous sentence: “The dinner host wants to look at the seating arrangement”. The “dinner host” is a user role. The seating arrangement we can think of as a table of seat numbers and guest names. In Perspectives, we call this an Action. An action should have at least a subject, a verb and an object. “Looking” is one of the primary verbs used in Perspectives (actually, we use consult).
The user interface operates like this: somehow the user instantiates an action. In conventional terms, the user executes a function. This he can do by clicking on a button, on a link, by making a gesture, and what have you. His intention is to show that seating arrangement on the screen. But, because we have a system based on logic, what happens first is that an action is instantiated. This just means that more facts are added to the logical theory.
Let’s just step back for a moment. Above, we have shown how both a model, including types and queries, and the user data, translate into logical facts. Now I’ve just stated that an action can be instantiated and that this translates into more logical facts. Is this a new principle? No, it isn’t, because an Action is just another context. In fact, an Action is a context with prescribed roles: subject, verb, object. ‘Action’ is just a type of context and an instantiated Action is just a context instance (with role instances). And we already know how to translate contexts and roles into logical facts. Nothing new under the sun!
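Spelled out as facts, instantiating an Action adds nothing but ordinary context-and-role records to the theory. The names and the fact shape below are illustrative assumptions, not the actual model syntax.

```typescript
// Sketch: instantiating an Action adds ordinary context-and-role facts
// to the theory. Names and the fact shape are illustrative assumptions.
type Fact = {
  id: string;
  context: string;
  filledBy?: string;
  properties?: Record<string, string[]>;
};

const action = "dinnerParty$ConsultArrangement_1"; // a fresh context instance

const newFacts: Fact[] = [
  // subject: the user role that performs the action
  { id: `${action}$Subject`, context: action, filledBy: "dinnerParty$Host" },
  // verb: what is done (consult is the 'looking' verb)
  { id: `${action}$Verb`, context: action, properties: { name: ["Consult"] } },
  // object: the role that is looked at
  { id: `${action}$Object`, context: action, filledBy: "dinnerParty$SeatingArrangement" },
];
```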
Having settled that, we now understand that the user has enriched the logical theory with some facts by clicking a button. How does this make the seating arrangement table appear? This is where we leave the realm of logic and move over to the architecture of the core as a program. The core is complemented by an ‘observer’ that monitors the theory for instantiated actions that have not been finished. It displays such actions on the computer screen, using an ingenious system that maps the various actions onto as many types of display. So, merely by asserting that he looks at the seating arrangement, the user makes this arrangement appear on screen. One might say that the observer is rather like the djinn in Aladdin’s tale: as a thought arises, it is interpreted as a wish and instantly fulfilled.
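In program terms one could picture the observer as something like the loop below. This is purely illustrative: the real architecture is reactive rather than polling, and the mapping to displays is far richer.

```typescript
// Purely illustrative sketch of the observer: scan the theory for action
// instances that are not yet finished and map each onto a type of display.
type ActionInstance = { verb: string; object: string; finished: boolean };

function observe(actions: ActionInstance[]): void {
  for (const action of actions.filter((a) => !a.finished)) {
    switch (action.verb) {
      case "Consult":
        console.log(`display a table for ${action.object}`);
        break;
      default:
        console.log(`display a generic view of ${action.object}`);
    }
  }
}

observe([{ verb: "Consult", object: "dinnerParty$SeatingArrangement", finished: false }]);
```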
Through the user interface, the user instantiates many actions concurrently. By this we mean that an action can be instantiated (‘started’) before others have finished. All these active actions have an object role. These objects are filled with a role from the context of the action. In other words, each object is a query; some are very simple, like just the name of a role, while others are long paths. The objects are displayed on the screen by the observer. We call the corresponding queries the actual queries.
Changes to the logical theory
At any moment, the PDR will manage a collection of logical facts (internally represented as roles and contexts). But this is not a static collection. As soon as the user interacts with the program, actions are instantiated and added to this collection (the ‘logical theory’). And, through other kinds of actions, the user can add properties to roles or change them, or create roles and contexts.
What happens to the actual queries when the theory changes?
A brief digression on logical inference. Classical logic (propositional and predicate logic) is monotonic in its inferences under addition. This means that, once established, a logical conclusion stays valid, no matter how many new facts are added to the logical theory the conclusion was drawn from. Withdrawing facts is another matter entirely. If a fact is withdrawn from the theory, we no longer know whether the conclusions are valid. We have to infer them again to be sure.
This monotonicity rests on an assumption that usually is not valid in the context of databases: that if we have no facts about, say, the existence of A, we cannot draw the conclusion that A does not exist. Absence of proof is no proof of absence. This is not very practical in the case of, for example, a database holding the inventory of a shop. There we want it to be precisely the other way round: if the book is not in the database, it cannot be found in the store, either! This is what we call the closed world assumption. It sort of says that, by our logical theory, we know all there is to know about the part of the world it describes.
There is a form of logical inference that exploits the closed world assumption. It is called negation by failure and goes a little like this: if, according to the database, we don’t have The World according to Garp, we must order it to be able to sell it. So a conclusion is drawn based on the observation that a fact cannot be found (‘failure’). Now imagine a computer system in a book store that compares client orders to the inventory and lists books that should be ordered. Suppose that The World according to Garp is on that list. Then, by coincidence, a client actually returns a copy of that book. It is obvious that the order list should now be updated: we don’t want to order yet another copy of The World according to Garp now that one has been returned.
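The book store example, in code: the conclusion “order this book” rests on a failure to find a fact, so merely adding a fact can invalidate it. A minimal sketch:

```typescript
// Sketch of negation by failure: conclude "order the book" from the
// *absence* of an inventory fact (closed world assumption).
const inventory = new Set<string>(["Moby Dick"]);
const clientOrders = ["The World according to Garp", "Moby Dick"];

// Conclusion drawn because a fact cannot be found ('failure'):
let toOrder = clientOrders.filter((title) => !inventory.has(title));
console.log(toOrder); // ["The World according to Garp"]

// A client returns a copy: we merely ADD a fact to the theory...
inventory.add("The World according to Garp");

// ...yet the earlier conclusion is no longer valid and must be withdrawn.
toOrder = clientOrders.filter((title) => !inventory.has(title));
console.log(toOrder); // []
```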
This rather long digression goes to show that, when we use the closed world assumption, conclusions may no longer be valid even when we just add facts to the logical theory.
Returning to Perspectives, we add two observations:
- Perspectives uses the Closed World Assumption and negation by failure;
- The user can retract facts.
Now we see that we have a problem. We have built a user interface out of the results of queries, but these query results – logically inferred facts – may no longer be valid as soon as we add a fact to, or retract a fact from, the logical theory! And the user does this all the time! Any button click will instantiate an action; closing a screen will complete an action, add more facts, and so on.
Truth Maintenance
So we have the situation where we have many logical inferences in the context of a continuously changing logical theory. Re-evaluating each query with every change might be so time-consuming that the system does not perform adequately. This situation was first recognised as a problem in the eighties of the previous century, when robotics and other fields of Artificial Intelligence were based on logic. A solution was found in so-called Truth Maintenance Systems. The idea is rather simple: just record your inferences. If you know, for example, on what basic facts a conclusion rests, you can immediately retract that conclusion if one of them is retracted from the theory. Perspectives deploys this technique to keep the results of the ‘actual queries’ up to date all the time.
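A minimal sketch of the bookkeeping involved (ours, not the PDR’s actual data structures): record, for each conclusion, the basic facts it rests on; retracting one of those facts immediately retracts the conclusion, without re-evaluating everything.

```typescript
// Minimal truth-maintenance sketch: every conclusion records the basic
// facts it depends on, so retraction can be propagated immediately.
interface Justification {
  conclusion: string;
  supports: string[]; // the basic facts this conclusion rests on
}

const justifications: Justification[] = [
  { conclusion: "Seat_7 is occupied", supports: ["Seat_7 filledBy Chair_2"] },
];

function retract(fact: string): string[] {
  // Retract every conclusion that rests on the retracted fact,
  // instead of re-evaluating every query from scratch.
  return justifications
    .filter((j) => j.supports.includes(fact))
    .map((j) => j.conclusion);
}

console.log(retract("Seat_7 filledBy Chair_2")); // ["Seat_7 is occupied"]
```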
Forward chaining, backward chaining
In (Classical) Artificial Intelligence, systems that derive conclusions from facts can deploy two classes of techniques. Either they reason forward from facts to conclusions, e.g. “Socrates is human”, “All humans are mortal”, hence: “Socrates is mortal”. Or they can reason backwards from (possible) conclusions, trying to find support for them: “Is Socrates mortal?”. Both approaches have advantages and disadvantages. A classical example of backward chaining is the declarative language Prolog. In contrast, many expert systems were based on the forward chaining language OPS5. In Perspectives, we deploy both techniques. In contemporary IT, forward chaining is known as data driven computation. This form of computation is the basis of, for example, the popular React library for graphical user interfaces.
Calculated roles (and calculated properties) are derived only when asked for. In the example of the dinner party above, the derivation that the carving knife actually is the kitchen knife is made only when the user asks about the knife used to carve the meat at the table. So this is backward chaining.
But we use forward chaining to update the results of actual queries. This stands to reason, as we need updates to query results when new facts are added (or retracted). The implementation of the forward chaining system is not unlike the technique employed in the OPS5 language mentioned above: the RETE algorithm, one of the fastest systems for the purpose.
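A sketch of the forward chaining direction (illustrative, not the actual RETE implementation): index the actual queries by the facts they touch, so an incoming change pushes updates only to the affected query results.

```typescript
// Illustrative sketch of forward chaining over actual queries: index the
// active queries by the facts they touch, so a change pushes updates only
// to the queries it can affect (a RETE-like idea, much simplified).
const queriesByFact = new Map<string, Set<string>>([
  ["Seat_7", new Set(["seatingArrangementQuery"])],
  ["Chair_2", new Set(["kitchenInventoryQuery", "seatingArrangementQuery"])],
]);

function onFactChanged(fact: string, reevaluate: (q: string) => void): void {
  for (const query of queriesByFact.get(fact) ?? []) {
    reevaluate(query); // only the affected actual queries are recomputed
  }
}

onFactChanged("Chair_2", (q) => console.log(`re-evaluating ${q}`));
```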
This forward chaining regime is a perfect basis for a Graphical User Interface based on React (a data driven library). Perspect IT provides a library that can be used from a JavaScript program to interface with the PDR and create a custom user interface in React.
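To give a flavour of the JavaScript side, here is a hypothetical React component. The subscription function and its signature are invented here for illustration; they are not the actual interface of the library Perspect IT provides.

```tsx
// Hypothetical sketch: `subscribeToQuery` and its signature are invented
// for illustration and are NOT the actual interface of Perspect IT's library.
import React, { useEffect, useState } from "react";

type Seat = { id: string; seatNumber: string };

// Stand-in for subscribing to an 'actual query' in the PDR. The PDR's
// forward chaining would call `onUpdate` whenever the result changes.
function subscribeToQuery(
  contextId: string,
  role: string,
  onUpdate: (results: Seat[]) => void
): () => void {
  onUpdate([{ id: `${contextId}$${role}_1`, seatNumber: "1" }]); // dummy data
  return () => {}; // unsubscribe
}

function SeatingArrangement({ contextId }: { contextId: string }) {
  const [seats, setSeats] = useState<Seat[]>([]);
  // Re-render whenever the PDR pushes an updated query result.
  useEffect(() => subscribeToQuery(contextId, "Seat", setSeats), [contextId]);
  return (
    <ul>
      {seats.map((seat) => (
        <li key={seat.id}>seat {seat.seatNumber}</li>
      ))}
    </ul>
  );
}
```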
Distribution: the Bubble and the Universe
We can now picture the Perspectives Distributed Runtime as a system that manages a collection of facts and actual query results derived from those facts, and that maintains these results under a continuous stream of changes to the facts. On top of that, the Observer displays the objects of the actual queries, thus creating a user interface that adapts to the changing facts as we watch it. This somewhat explains the ‘runtime’ bit, but how about ‘distributed’?
Let’s focus on the logical theory. In the context of Perspectives, we should ask: whose theory? Perspectives was built to accommodate the fact that we all have a unique view of the world around us. This is reflected in that logical theory. Each user has his or her own version of it! But users who interact with each other of necessity share a lot of facts. Otherwise, they could not communicate, could not interact, could not co-operate.
We call the facts that one user has at his disposal his Bubble. Bubbles overlap. Together, the bubbles of all users form the Perspectives Universe. The Universe is the collection of all facts known to all users. But there is no one who can oversee the Universe. There is no data collection somewhere that holds all the facts. That is, partly, what distribution is about (as a sideline: notice how different this is from the decentralisation employed by Blockchain: there, many nodes exist that hold all transactions!).
If a user modifies, for example, a property on a role in a context where another user plays a role with access to that property, their PDRs exchange this modification. After that, both users have access to the same modified property value. This means it appears in both their Bubbles. In more technical terms: their local databases both contain this fact. The PDR of the receiving user will update all actual query results influenced by this new fact, so if the property has a place on the receiving user’s screen, it will immediately show the new value.
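A sketch of such an exchange (the message shape below is ours, for illustration; it is not the actual wire format): one PDR sends a small delta describing the modification, the other stores it locally and lets forward chaining update the screen.

```typescript
// Illustrative shape of an exchanged modification; not the actual wire format.
interface PropertyDelta {
  author: string;   // the user role that made the change
  roleId: string;   // the role instance that was modified
  property: string; // which property
  value: string[];  // the new value(s)
}

const delta: PropertyDelta = {
  author: "model:User$Alice",
  roleId: "dinnerParty$Seat_7",
  property: "seatNumber",
  value: ["8"],
};
// The receiving PDR stores this fact in its local database, after which
// forward chaining updates every actual query result that depends on it.
```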
Authorisation and integrity
Does this exchange mechanism imply that a user can change the facts in the overlap between his Bubble and that of another user? The answer is both yes and no. First, the model that governs the particular fact will give him the power to change that fact, or not. So not all users who see a fact in their Bubble can actually modify it. This is good; you don’t want someone else to change, for example, your address!
But what if someone hacks his PDR and/or modifies the model such that he gives himself the right to modify your address? That might be possible. He would then modify your address locally first (that is, on his own computer), after which his PDR sends the modification to you. However, before acting on that modification, your PDR checks with its own copy of the model whether the sender is endowed with the privilege to change your address. As this is not the case, your PDR refuses to change your local data.
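A sketch of that check on the receiving side (illustrative: the delta shape and the model lookup are stand-ins, not the PDR’s actual API):

```typescript
// Illustrative sketch of the receiving PDR's authorisation check; the
// delta shape and the model lookup are stand-ins, not the actual API.
type Delta = { author: string; roleId: string; property: string; value: string[] };

function applyDelta(
  delta: Delta,
  // Stand-in for consulting OUR OWN copy of the model, never the sender's claim:
  mayModify: (author: string, roleId: string, property: string) => boolean,
  store: (d: Delta) => void
): boolean {
  if (!mayModify(delta.author, delta.roleId, delta.property)) {
    return false; // refuse: the sender lacks the privilege for this change
  }
  store(delta); // only now does the fact enter the local database
  return true;
}
```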
This means that, even if a person were to successfully hack his system (PDR and models), he still cannot change data in such a way that others will use it. He will only have succeeded in modifying his own copy of the data. Every PDR checks authorisation with every modification. Hereby the Perspectives system achieves very high data integrity: a particular piece of data can only be changed by those who are allowed to do so according to the model. And these are, by design, the people who have a vested interest in correct information, being exactly those whom it concerns.
Wrapping up
The Perspectives Distributed Runtime stores information in a native format that represents just contexts and roles. These can easily be translated into basic logical facts and expressions with a free variable (for queries). This gives the system a solid semantic foundation that bears comparison with those of the languages that are part of the Semantic Web. The user interface is nothing but a selective display of these logical facts, kept up to date while the fact base changes. Changes occur because some user (be it the user of a particular PDR or others whose Bubbles overlap with his) adds, modifies or retracts some fact. Because every modification is checked against the model that governs it, the system achieves very high data integrity.
As queries can use negation by failure and conclusions are automatically adapted as the data changes, the system behaves in a way that suits humans better than classical logic. (Good Old Fashioned) AI researched this so-called common sense reasoning thoroughly, discovering many aspects, such as default reasoning. Many different kinds of logic were invented to cover the reasoning behaviour displayed by humans. A particular approach to this topic was introduced by David Poole in 1988 (A logical framework for Default Reasoning). It depends on the notion of local, classical logical theories, incremental change to these theories, and then updating conclusions. This must sound familiar by now. Without exaggerating, we may state that Perspectives implements aspects of this work of modelling human reasoning behaviour.