Future of Programs using Assertions, Goals, and Plans

Programs using Assertions, Goals, and Plans have been proposed as important to the future of programming languages ever since Terry Winograd demonstrated SHRDLU using Planner in 1968.

Recently, Robert Kowalski proposed considering a situation posed by the following notice in the London Underground:

                          Emergencies
Press the alarm signal to alert the driver.
The driver will stop if any part of the train is in the station.
If not, the train will continue to the next station, where help can more easily be given.
There is a 50 pound penalty for improper use.

Some of the procedural information for the above is embedded in the following, using a dialect of modern Logic Programs:

When Assertion Pressed[alarmSignalButton]→ Assert Alerted[driver]▮
When Assertion Alerted[driver]→
   When Assertion InStation[train]→ Assert Stopped[driver, train],
   When Assertion ¬InStation[train]→ Assert ContinuingToNextStation[driver, train]▮
When Assertion ImproperlyUsed[person, alarmSignalButton]→ Assert PenaltyOwed[person, 50 pounds]▮  

Of course, the above program needs to be fleshed out with considerably more code.
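For readers who want something executable, the when-assertion rules above can be approximated with a toy forward-chaining engine. This Python sketch is mine, and all names in it (`when`, `assert_fact`, the fact tuples) are hypothetical illustrations, not part of the dialect shown above:

```python
# Toy forward-chaining engine: each rule fires when its trigger
# assertion appears, possibly adding further assertions.
facts = set()
rules = []  # list of (trigger, action) pairs

def when(trigger, action):
    rules.append((trigger, action))

def assert_fact(fact):
    if fact in facts:
        return
    facts.add(fact)
    for trigger, action in rules:
        if trigger == fact:
            action()

# Encoding of the notice's rules (simplified: train position is a fact).
when(("Pressed", "alarmSignalButton"),
     lambda: assert_fact(("Alerted", "driver")))

def on_alerted():
    if ("InStation", "train") in facts:
        assert_fact(("Stopped", "driver", "train"))
    else:
        assert_fact(("ContinuingToNextStation", "driver", "train"))

when(("Alerted", "driver"), on_alerted)

# Example run: the train is in the station and the button is pressed.
assert_fact(("InStation", "train"))
assert_fact(("Pressed", "alarmSignalButton"))
print(("Stopped", "driver", "train") in facts)  # True
```

Pressing the button cascades through the rules: the driver is alerted, and because the train is in the station, it stops.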

How important are Logic Program languages using assertions, goals, and plans to the future?

Logic programming is

Logic programming is probably much less important than machine learning. I have a few reasons for this belief:

  1. symbolic reasoning simply doesn't correspond well to sensor inputs (e.g. visual recognition) or actuator outputs (e.g. controlling a robotic arm); attempting to associate signals with symbols results in an ugly impedance mismatch and is often arbitrary
  2. the behavior of a symbolic logic system is often unstable; a small local change in a fact or rule can cause big changes elsewhere. While this is an advantage for expressiveness, it is a disadvantage for effective control and maintenance. One tends to need global reasoning about all the facts and rules in the system.
  3. logical reasoning has difficult-to-control performance characteristics. By comparison, ML can easily be achieved with real-time and bounded-space characteristics (or near-bounded, e.g. logarithmic, space)
  4. constraint-logic and probabilistic-logic programs don't compose easily, or at least I still haven't figured out how to compose them at the implementation level.
  5. logic programming is difficult to securely modularize in a distributed system; if you isolate fact sets, the searches themselves can still reveal a lot of hidden details and serve as covert channels.

There are declarative programming models with a relationship to logic programming that are more suitable for low-level systems and the bulk of application logic.

That said, I do anticipate useful, specialized roles for logic programming.

One such role is for static meta programming. We could describe functions using something similar to the Coq tactics language (LTAC) and extract more efficient and context-sensitive implementations. We could link modules based on types and assertions, or even select between modules based on context (e.g. preference for Qt vs. GTK, or presence of a GPS service).

Another role (for constraint-logic) is for modeling and composing art assets (meshes, images, music, procedural generation). In this role, the logic is relatively isolated anyway (per asset) and the symbolic nature may integrate well with the human element. The logic can result in a flexible, context-sensitive, and expressive means of describing the asset. (I blogged about this a while back.)

A third role is for policy management, administrative control, deciding who gets access to a resource, etc..

Basically, I think the role of general LP is for domain specific, relatively isolated purposes.

Logic Programs are General Purpose but not Complete

Logic Programs are:
1. General purpose because they are not domain specific and can be used in almost every domain.
2. Not Complete because there are some systems that Logic Programs cannot implement, e.g., a financial account.

See Formalizing common sense for scalable inconsistency-robust information integration using Direct Logic reasoning and the Actor Model

Logic Programs are for inference; not for changing anything

Logic Programs are for inference; not for changing anything.

Well, if you want to model

Well, if you want to model or cause change (e.g. to model a video game or music using logic programming) you can use temporal logic programming. Actions can be integrated with TLP by having some propositions that are 'watched' by an external system, thus enabling a signal for when they're active or activated. It's general purpose, and has been used for low-level robotic controllers and real-time systems in addition to high-level interactive fictions and such. (But it still has the issues of instability and unpredictable performance.)
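A minimal sketch of the "watched proposition" integration described above, under my own assumptions (the `step`/`run` functions and the `beep` proposition are invented for illustration; this is not Dedalus syntax):

```python
# A temporal program as a pure step function: the propositions that
# hold at instant t+1 are derived from those holding at instant t.
# An external watcher inspects each instant and performs side effects.
def step(state):
    nxt = set()
    if "alarm_pressed" in state:
        nxt.add("driver_alerted")
    if "driver_alerted" in state:
        nxt.add("beep")  # a watched proposition
    return nxt

def run(initial, instants, watcher):
    state = initial
    for t in range(instants):
        if "beep" in state:
            watcher(t)      # the external system reacts
        state = step(state)
    return state

beeps = []
run({"alarm_pressed"}, 4, beeps.append)
print(beeps)  # → [2]
```

The program itself remains pure; only the watcher, outside the logic, causes effects in the world.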

Have you ever researched temporal logic and other temporal models? They're worth studying. I suggest starting with the Dedalus: Datalog in Time and Space paper.

"Watchers" can change things; Logic Programs per se don't

"Watchers" can change things; Logic Programs per se don't.

Logic programming always has

Logic programming always has watchers: whoever performs the logic query. If a watcher is a software agent, it can use query results to control rendering or steering, etc..

But temporal logic can represent change and state internally.

Logic Programs can reason about changes, but not make them

Logic Programs can reason about changes, but not actually make changes.

Pure logic programming is

Pure logic programming is pure. Nonetheless, it can be used effectively in practice. There are ways to model change in pure computations, and temporal models are one of the nicer approaches. And all 'cause change' means is to model the cause-effect relationships for the change.

The fact that we need a little integration is the same for EVERY paradigm. Even procedural programming or actors model can't change things outside the program without support from an OS or VM or hardware - a watcher, in a sense.

An Actor can change, e.g., the reported balance can change

An Actor can change, e.g., the reported balance can change. See Actor Model of Computation: Scalable Robust Information Systems

Temporal logic can represent

Temporal logic can represent a changing balance, too. It's Turing complete. Temporal logic can even represent Actors Model representing a balance. See Dedalus: Datalog in Time and Space. Just like your actors example, this sort of change is modeled internal to the program.

Integration is necessary for external change. For logic, some software might watch a 'print' proposition to determine what should be on the screen at a given time. Actors have analogous integration challenges, e.g. injecting a special 'print' actor to represent the screen.

What is the Temporal Logic Program for the account?

In ActorScript, the code for the account looks like this:

CreateEuroAccount.[startingBalance:Euros] ≡
 Actor  currentBalance := startingBalance◊
   GetBalance[ ]→   currentBalance¶
   Deposit[anAmount]→  Void  afterward  currentBalance := currentBalance+anAmount¶
   Withdraw[anAmount]→
    (anAmount > currentBalance) ??
       True ↝ Throw  OverdrawnException[ ]¿
       False ↝  Void  afterward  currentBalance := currentBalance-anAmount▮

What is the corresponding Temporal Logic program?

I would need to develop a

I would need to develop a small library of queues, persistent state, financial transactions, etc. before properly modeling an account. Even your code is so incomplete that imagining its use for an actual bank makes me shudder. Have you looked at the linked paper yet? It describes queues and persistent data among other things.

But to get a very rough sketch for how this might be expressed, consider something like:


account(user,balance+depositAmt)@next :- account(user,balance), depositAction(user,depositAmt).
account(user,balance)@next :- account(user,balance), noUpdateAction(user).
noUpdateAction(user) :- userQueue(user,[]).
depositAction(user,amt) :- userQueue(user,(deposit(amt):rest)).
...
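For intuition only, here is a hand simulation of rules of that shape as a pure step function in Python (the queue handling is simplified, and `step` plus the tuple encoding are my inventions, not part of the sketch above):

```python
# Hand-simulation of the temporal rules: the account balance at the
# next instant is derived from the balance and the pending action at
# this instant. Timestamps are the loop index.
def step(balance, queue):
    if queue:                       # depositAction(user, amt)
        action, amt = queue[0]
        rest = queue[1:]
        if action == "deposit":
            return balance + amt, rest
    return balance, queue           # noUpdateAction(user)

balance, queue = 0, [("deposit", 50), ("deposit", 7)]
for t in range(3):
    balance, queue = step(balance, queue)
print(balance)  # → 57
```

Each iteration corresponds to one logical instant; the balance "changes" only in the sense that successive instants relate old facts to new ones.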

Temporal Logic Programs are not a usable paradigm

From your code for the account above, I conclude that Temporal Logic Programs are not a usable paradigm. The parts of your program seem to be in all the wrong places.

Your Temporal Logic program looks very much like the failed attempts of the Japanese 5th Generation Project. I had many good friends at ICOT, but the Prolog-like clause syntax that they attempted to extend defeated them :-(

See Inconsistency Robustness in Logic Programs.

That sounds like the kind of

That sounds like the kind of argument Hitler would have made.

Transaction Logic?

What about Transaction Logic?

The authors claim that Transaction Logic allows dynamic asserts, retractions, etc., while still being logically clean.
By using tabling (sophisticated memoization) it can also be made efficient.

Read the Dedalus paper,

Read the Dedalus paper, Hewitt. It motivates TLP better than I'm willing to try. You seem far too pleased with your ignorance and your unreasonable ability to reach such conclusions based on my half-assed efforts.

Concurrency is subtle, but we now have foundations

Concurrency is subtle, but we now have the foundations on which to build.

I recommend the following video:
Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask)

Also, the following article might be helpful:
Actor Model of Computation: Scalable Robust Information Systems

How is this on topic? I've

How is this on topic?

I've used actors. I've implemented actors systems. I've even developed some interesting variations, e.g. supporting multi-actor hierarchical transactions, and cascading actor destruction patterns (so I can remove a well-defined subgraph). I've even watched that video, and read a much earlier version of your paper. I am not ignorant of actors.

Nonetheless, in the last few years I favor temporal approaches as the 'foundation' for concurrency - synchronous reactive, functional reactive, reactive demand, temporal logic, discrete event simulations, time warp, tea time, cellular automata, etc.. I don't use temporal logic, but I understand it well because I use other temporal models.

I did not find actors to be a pleasant programming experience. Pervasive non-determinism is a bad default. Messaging and state per actor are sources of much accidental complexity. There are worse models, of course - threads and shared state, for example. But I believe that we can do better, and that temporal techniques are promising.

You seem to be assuming that I'm as ignorant about actors as you are about temporal models, that I'll embrace actors as my foundation for concurrency if only I better understood them. That is not the case. If you want to argue against temporal logics, you'll need to understand them well enough to present a coherent argument. So: Read the Dedalus paper, Hewitt.

Pervasive Indeterminacy is a given in the Future of Programs

Pervasive indeterminacy is a given in the future of programs, both locally (on chip) and non-locally. Around the edges we have Logic Programs, Programs with Immutable Behavior, and Quasi-commutative Programs.

Re: Pervasive Indeterminancy

I do acknowledge that hardware has a growing degree of parallelism and resulting indeterminacy. We can expect that trend to continue.

But, at the same time, I am seeing wider use of programming models and design patterns that enable programmers to hide, control, or marginalize the indeterminacy, such that it is not implicit or pervasive in their programs. And I also expect that trend to continue.

I have no reason to believe that the former will outpace the latter.

Pervasively Indeterminate but Eventually Consistent

I'm more in the camp that believes that programming as a profession will probably settle on "eventually consistent" as a goal when dealing with concurrent systems.

Meaning Ghod knows what order things actually happen in, but all possible orders that things might have happened in eventually converge on an end-state that meets known criteria.

Continually available systems have no "end state"

Continually available systems have no "end state", so there is no "eventual consistency."

Continuous systems don't

Continuous systems don't have an "end state", but we can still achieve a property analogous to eventual consistency. For example: every input that becomes available before time T will be accounted for in the output by time T+K, where K is unbounded but provably finite.

The resulting system might never settle on a consistent "end state" because it continues to receive new updates. However, developers would be able to make useful guarantees similar to EC systems, such as how long it takes for a particular edit to be seen as part of a document.

(If we can provide an asymptotic or hard bound for K, we can additionally support real-time systems.)
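The guarantee described above can be stated formally; this rendering and its symbols are mine, not the poster's:

```latex
\forall e:\; t_{\text{arrive}}(e) \le T \;\Longrightarrow\; t_{\text{reflected}}(e) \le T + K,
\qquad K < \infty \text{ (finite, though not necessarily bounded in advance)}
```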

Ya, we should also

Ya, we should also distinguish between systems that are immediately consistent up until the current time t and systems that are just eventually consistent, where consistency can lag a bit (especially in distributed systems): we might start processing behavior at t + 1 even though consistency for t hasn't been achieved yet.

Then there are systems that will spring time t-1 on you after you already processed t...oops!

Eventual consistency alone is unpleasant

Eventual consistency by itself is awkward, inelegant, ugly, and painful. EC is the acrobat crawling across the safety net. EC is the human immune system fighting infections in open wounds. I use those metaphors because EC is nice to have: safety nets are great, immune systems are awesome. Nonetheless, the acrobat should not fall into the safety net. The human should have a healthy layer of skin to prevent infections.

I want EC, but I don't want to "settle on" EC. Robust systems should have robust layers and recovery abilities.

My RDP model is designed such that a runtime can use eventual consistency as the third safety layer, due to updates on different signals being commutative and each signal tracking its own stability. The second safety layer is pairwise snapshot consistency between logical partitions, which prevents glitches that can be determined without global analysis. The first safety layer is speculative evaluation, which allows even glitches to be close to the truth within our ability to predict or plan for the future. None of these "safety layers" are necessary if developers model latency adequately: latency gives us extra time to stabilize.

RDP is a relatively extreme case, but there are many other models that are effective for controlling indeterminism.

Eventual consistency is ok...

...if you don't need to know when the system is consistent (e.g. the UI can just be updated as more consistent info is known) or can figure out when the system is consistent (for some time t). I know these are big IFs, but they might be necessary trade-offs when considering extreme parallelism, distributed computing, or loosely/dis-connected operation.

Of course, full determinism is also useful but comes with its own IFs. For example, programming in CUDA requires planning everything out and then the computation runs like a well-oiled synchronous machine! MapReduce systems schedule IO and pipeline computations for a similar effect (though not nearly as synchronous, of course).

EC is a tool

You're right, there are places where we can apply eventual consistency without any real issues. It is convenient to have the tools for EC when these conditions occur. With RDP, I might explicitly leverage EC for disruption-tolerance via a state resource that is 'eventually' replicated between client and cloud.

But the utility of EC depends deeply on the problem domain. We can't just say EC is good for UI; we need to know what that particular UI is used for - e.g. real-time control of a weaponized UAV vs. VLC competitor vs. MMORPG client. And you might need to get more precise still, down to the purpose of specific widgets. This makes it difficult to introduce EC in a generic way even at the framework level.

TLP seems to have the parts of a program in all the wrong places

Temporal Logic Programs seem to put the parts of a program in all the wrong places :-(

For example, consider the following ActorScript program:

CreateEuroAccount.[startingBalance:Euros] ≡
 Actor  currentBalance := startingBalance◊
   GetBalance[ ]→   currentBalance¶
   Deposit[anAmount]→  Void  afterward  currentBalance := currentBalance+anAmount¶
   Withdraw[anAmount]→
    (anAmount > currentBalance) ??
       True ↝ Throw  OverdrawnException[ ]¿
       False ↝  Void  afterward  currentBalance := currentBalance-anAmount▮

To illustrate the above program, consider the following example of the use of an account, noting that preparations are carried out concurrently:

Let anAccount ← CreateEuroAccount.[€5]◊
 Prep anAccount.Deposit[€11], anAccount.Withdraw[€5]◊            // the balance might temporarily be €0
   Let  aReportedBalance ← anAccount.GetBalance[ ]◊              // aReportedBalance is €11
     Prep anAccount.Deposit[€7],
          anotherReportedBalance ← anAccount.GetBalance[ ]◊      // anotherReportedBalance might be €11 or €18
      [aReportedBalance, anotherReportedBalance]▮

The above expression returns [€11, €11] or [€11, €18].

Compare the above with the following Temporal Logic Program that Dmbarbour has provided above in this forum topic:

account(user,balance+depositAmt)@next :- account(user,balance), depositAction(user,depositAmt).
account(user,balance)@next :- account(user,balance), noUpdateAction(user).
noUpdateAction(user) :- userQueue(user,[]).
depositAction(user,amt) :- userQueue(user,(deposit(amt):rest)).

Note the following:
* The "@" in the above program is not part of mathematical logic.
* What is "next" in the above program?
* How are exceptions thrown (and caught) in Temporal Logic Programs?

The `@next` is part of the

The `@next` is part of the logic. Read the paper to learn how, and what it means.

Temporal logic doesn't need, or use, exceptions (because it doesn't have control-flow). But you could report a fact that basically says "withdrawal rejected because of insufficient funds".

"@next" is not part of Mathematical Logic

Of course, "@next" is not part of mathematical logic.

Exceptions are standard in modern software engineering. It's unfortunate that they are not supported in Temporal Logic Programs.

There is more than one

There is more than one mathematical logic (constructive, linear, classical, modal, etc.). If you read the Dedalus paper, you'll learn exactly how `@next` is part of a mathematical logic (hint: it's shorthand). You'll also find code examples.

Exceptions only make sense for systems with a clear line of control-flow. There are other means to address errors and alternatives that are suitable for dataflow or logic.

What is "@next" shorthand for in Mathematical Logic?

What is "@next" shorthand for in Mathematical Logic?

Read the paper.

Read the paper.

Maybe "@next" is not shorthand for anything in Mathematics?

Maybe "@next" is not shorthand for anything in Mathematics?

It is clear that you don't

It is clear that you don't actually want the answers to any of your questions. But if you ever change your mind, the answers to many of your questions about temporal logic programming are in the Dedalus paper. It is among the best introductions to TLP that I've read.

Operationally "@next" is not shorthand for anything in Math

Operationally "@next" is not shorthand for anything in mathematics because mathematics is descriptive. Similarly, the constructs of ActorScript are not shorthand for anything in mathematics. However, an ActorScript program does have a mathematical denotation. But the denotation is not capable of actually performing the work of the program.

@next is descriptive, not

@next is descriptive, not operational. It just means "at the next instant", which has a formal meaning in a discrete temporal logic. It is shorthand for writing this in the proper logical form. This is in the Dedalus paper, and not in a subtle way.
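For readers who skip the paper, the desugaring convention can be sketched as follows (my rendering of the usual Dedalus-style translation, in which every predicate carries an explicit time argument, and an `@next` rule places its head at the successor instant). The deposit rule from earlier in the thread would expand roughly to:

```latex
\mathit{account}(U,\, B + D,\, S) \;\leftarrow\;
  \mathit{account}(U,\, B,\, T) \;\wedge\;
  \mathit{depositAction}(U,\, D,\, T) \;\wedge\;
  \mathit{successor}(T,\, S)
```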

Please stop asserting 'facts' about temporal logic until you've studied it. And don't expect people to spoon-feed you answers, even if you troll for them.

In a concurrent system, there is no "next instant"

In concurrent systems, there is no "next instant."

However, for the ActorScript implementation of the account (above in this forum topic), we can speak of "the next message received" and specify what the balance will be for that message.

Re: "concurrent system"

I am not aware of any definition for concurrent system that forbids the notions of logical instants. Nor does concurrency imply use of "messages". A typical description of concurrency:

concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other -- Wikipedia

Temporal logic programs can easily represent several computations executing simultaneously and potentially interacting with each other. Ergo, temporal logic programming is concurrent. QED.

It's even arguable that temporal logic is more concurrent than actors model: actors model does not have any representation for "executing simultaneity" (e.g. no ability to receive simultaneous messages), whereas temporal logic has a very precise representation of simultaneity.

Concurrency should not be conflated with indeterminacy. Temporal logic can model indeterminacy using "eventually" instead of "next". (Dedalus has this feature.) But the concepts are separable.

Does "receiving multiple messages simultaneously" make sense?

Does "receiving multiple messages simultaneously" make sense?

It makes sense. As a natural

It makes sense. As a natural example, I often receive multiple messages in my post-office box. As another natural example, my brain apparently receives messages from my eyes and ears simultaneously (and with predictable latency).

It can even be useful and efficient. There is often some form of batch-level processing one might perform. I could prioritize the messages within the batch. If using a reactive observer pattern, I might eliminate redundant update messages and combine updates from different sources that arrive in the same batch before sending the result.

If you want to model something like 'synchronized swimming' where each swimmer is an independent component, you may want an even stronger condition: control over latency for when a message is received, such that you can ensure messages are received by different actors simultaneously. That condition is also natural, respecting how light and sound and information move at very predictable speeds through physical space, or how signals move at predictable rates through our nervous system to enable coordinated human behavior:

it takes about five milliseconds for the fastest nerve impulse to travel the length of the arm.[6] That means that when your arm is still rotating toward the correct position, the signal to release the ball is already at your wrist. In terms of timing, this is like a drummer dropping a drumstick from the 10th story and hitting a drum on the ground on the correct beat. -- XKCD High Throw

Actors model ignores a lot of the useful simultaneity that physics and nature supports.

In the ActorScript account, only one message received at a time

In the ActorScript account, only one message is received at a time.

CreateEuroAccount.[startingBalance:Euros] ≡
 Actor  currentBalance := startingBalance◊
   GetBalance[ ]→   currentBalance¶
   Deposit[anAmount]→  Void  afterward  currentBalance := currentBalance+anAmount¶
   Withdraw[anAmount]→
    (anAmount > currentBalance) ??
       True ↝ Throw  OverdrawnException[ ]¿
       False ↝  Void  afterward  currentBalance := currentBalance-anAmount▮

How could the Actor receive multiple messages simultaneously?

As I said above: actors

As I said above: actors model does not have any representation for "executing simultaneity" (e.g. no ability to receive simultaneous messages). If we desire to receive multiple messages simultaneously, it is necessary to tweak the model. After tweaking the model, you might decide to rename it (because it isn't quite "actors model" anymore). ActorScript would not implement the tweaked model unless you decide to make it happen.

(The above really should go without saying.)

One option for this tweak is that actors (maybe call them 'batch-actors') receive a non-empty set of messages at a time. (In practice, this set could be "all the messages in the mailbox" but that wouldn't be a formal part of the model.) Traditional actors could be trivially supported by these batch-actors.
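A minimal sketch of what such a "batch-actor" might look like, under my own assumptions (the `BatchActor` API and the message tuples are invented for illustration; this is not an established model):

```python
# A batch-actor's behavior maps (state, message) -> (state, replies);
# each turn delivers the whole current mailbox as one batch. The
# trivial policy handles messages one at a time, so an ordinary actor
# is the special case of singleton batches.
class BatchActor:
    def __init__(self, behavior, state):
        self.behavior, self.state, self.mailbox = behavior, state, []

    def send(self, msg):
        self.mailbox.append(msg)

    def turn(self):
        batch, self.mailbox = self.mailbox, []
        replies = []
        for msg in batch:            # trivial one-at-a-time policy
            self.state, out = self.behavior(self.state, msg)
            replies.extend(out)
        return replies

def account(balance, msg):
    op = msg[0]
    if op == "deposit":
        return balance + msg[1], []
    if op == "withdraw":
        if msg[1] > balance:
            return balance, [("overdrawn", msg[1])]
        return balance - msg[1], []
    if op == "get_balance":
        return balance, [("balance", balance)]
    return balance, []

a = BatchActor(account, 0)
for m in [("deposit", 50), ("withdraw", 20), ("get_balance",)]:
    a.send(m)
replies = a.turn()
print(replies)  # → [('balance', 30)]
```

Nothing here changes when batches have length one, which is why traditional actors are trivially supported.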

Another option is that the mailbox is a stateful entity from which messages can be deleted when they are no longer useful. IIRC, Ambient Oriented Programming has this kind of input model, and even tracks whole conversational contexts sort of like threaded discussions.

A batch implementation of the account is a terrible idea

Do you really think that a batch implementation of the account is a good idea? A batch implementation can be done in ActorScript, but it is a terrible idea.

Why is it terrible?

Why is it terrible?

Note: I never said whether batching was good or bad in this particular use-case. But good use-cases do exist for batching and simultaneity.

Your 'account' is rather awful to start with. E.g. I can deposit $50 then withdraw $20 and overdraw due to non-deterministic message arrival order in Actors model. And I won't even get started on auditing...

Exactly what is the batch implementation of the account?

Exactly what is the batch implementation of the account?

Also the following code will never overdraw:

Prep anAccount.Deposit[$50]◊ anAccount.Withdraw[$20]▮

You should know how to

You should know how to process a list of withdraw, get balance, and deposit messages in your own language. I don't know enough ActorScript to provide an exact implementation, and even if I did I'm not here to spoon feed you. The trivial implementation would be to handle the batch one message at a time, if you're not doing anything special at the batch level. (That pattern should be easy to generalize.)

Alternatively, you could sum the deposits before handling the withdraws, to avoid unnecessary overdraws. Or vice versa, to get as many overdraw penalties as possible.
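The two batch-level policies mentioned above can be contrasted in a toy model (a sketch; `run` and the message tuples are my inventions):

```python
# Compare two batch-level policies for the same batch of account
# messages, starting from a zero balance.
def run(batch, deposits_first=False):
    balance, rejected = 0, []
    if deposits_first:
        # Stable sort: deposits move ahead of everything else.
        batch = sorted(batch, key=lambda m: m[0] != "deposit")
    for op, amt in batch:
        if op == "deposit":
            balance += amt
        elif op == "withdraw":
            if amt > balance:
                rejected.append(amt)   # overdraw: reject the withdrawal
            else:
                balance -= amt
    return balance, rejected

batch = [("withdraw", 20), ("deposit", 50)]
in_order = run(batch)                       # arrival order: withdrawal rejected
dep_first = run(batch, deposits_first=True) # deposits first: no rejection
print(in_order, dep_first)  # → (50, [20]) (30, [])
```

The same batch yields different outcomes depending on the batch-level policy, which is exactly the kind of choice the batch layer makes available.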

I take it the `◊` operator in ActorScript somehow enforces order upon the messages?

Electronic deposits and withdrawals come in one at a time

Electronic deposits and withdrawals come in one at a time. So where is the "batch"?

Batching would exist at the

Batching would exist at the batch-actors layer. If the actual domain sends only one message at a time, ensuring each message is processed before sending another, then the batch size would simply be one message. I.e. you'd receive `{ Deposit[50] }` (allowing for a list or bag of actions) instead of `Deposit[50]`. (As I've said, batching is useful for some problems, not for every problem.)

Though, if this was the actual domain, your implementation would be illegal. Where is your auditable double-entry bookkeeping? Shouldn't a proper database be involved somewhere?

Anyhow, we've strayed way off topic. I have no idea how to relate this to "Future of Programming using Assertions, Goals, and Plans". The only relevant point has already been made: logic can model concurrent, interacting computations and 'change' just fine by formalizing the concept of time.

Implementations versus Denotations

It is important not to confuse the following:
* implementations (such as the ActorScript program that appears in this forum topic)
* Actor denotations (see Actor Model of Computation: Scalable Robust Information Systems).

Note that temporal state models are inadequate for denotational semantics because of the problem with unbounded nondeterminism (see What is Computation?).

The account Actor can be sent multiple messages concurrently, but it processes them one at a time. Of course, in an actual application, the implementation of the account Actor would be more complicated.

Note that temporal state

Note that temporal state models are inadequate for denotational semantics because of the problem with unbounded nondeterminism (see What is Computation?).

You can develop temporal state models with or without unbounded non-determinism. Dedalus happens to include it, using an @async shorthand, but one could avoid the problem by avoiding the non-determinism, or bounding it.

You should attempt to cure your ignorance of temporal logic before you assert any more of your 'truths' about it. I have already pointed you to a good primer.

The account Actor can be sent multiple messages concurrently, but it processes them one at a time.

I agree. That's a limitation of the actors model. Which is why, earlier, I proposed a batch-actor model - a simple variation on the actor model without that particular limitation. In the batch-actor model, an account actor could receive multiple messages simultaneously.

Note: new models are not difficult to specify, and are not implementation details

Global state models support only bounded nondeterminism

Global state models support only bounded nondeterminism. See the proof by Plotkin in What is Computation?

Not a problem.

First, Plotkin's informal proof doesn't apply to the temporal logic scenario (we aren't making a "choice" at each time step).

Second, praising unbounded non-determinism is ludicrous. It has no practical applications.

Unbounded nondeterminism required so that a server works

Unbounded nondeterminism is required to prove that a server responds to requests that it receives. That's why Hoare switched the semantics of CSP from bounded nondeterminism to unbounded nondeterminism.

No, if you want to prove a

No, if you want to prove a server responds, you've got to prove your non-deterministic wait is finite. It might be an unbounded finite, or it might be a bounded finite, but the 'unbounded' part is missing the point.

Also, the need to wait on a long series of individual messages is only relevant if you're limited to answering one request at a time.

There is no corresponding

There is no corresponding "execution sequence" or "choice points" or "tree" in the temporal logic. There is just a logical expression that some statement will become true eventually.

Easiest way to understand unbounded nondeterminism is Actors

The easiest way to understand unbounded nondeterminism is the configurations of the Actor Model (with messages in transit). See What is Computation? Actor Model versus Turing's Model.

Dmbarbour's proposed "batch" Actors have terrible modularity

Dmbarbour's proposed "batch" Actors have terrible modularity. (Try writing a "batch" implementation of the ActorScript account earlier in this forum topic.)

Modularity wasn't relevant

Modularity wasn't relevant to any of my points.

"Batch" Actors also have performance problems

I take it that you concede that "batch" Actors have terrible modularity.

"Batch" Actors also have performance problems by comparison with the ActorScript account implementation earlier in this forum topic.

Nah, they don't have

Nah, they don't have performance problems. You can get all sorts of collection-oriented processing benefits from working with batches. But again, not relevant.

Performance problems with "batch" Actors

Following are some of the performance problems with "batch" Actors:
* Slower dispatch on a message, because a batch must first be collected
* Slowness due to having to create the batch
* Slowness in dispatching the messages within a batch

None of those problems exist

None of those problems exist unless implemented by an incompetent.

Is @async a kludge?

Is @async a kludge? How would it be used to implement the ActorScript account earlier in this forum topic?

Read the paper, Mr. Hewitt.

What `@async` is shorthand for is actually a standard feature of most temporal logic models.

From above response, I take it that Dmbarbour concedes the point

From above response, I take it that Dmbarbour concedes the point that @async is a kludge.

You're an idiot.

From response above, I take it...

I love clear questions like that.

This is a good question when a complete answer is explored in detail. It's the same answer whether we talk about 1) a receiver in an actors model, or 2) a callee in a (sync or async) C or assembler subroutine. I'll write as if only the receiver in the actors model is involved, i.e. (1), but the same answer applies to (2).

Does "receiving multiple messages simultaneously" make sense?

Yes, if the receiver is not single threaded with messages strictly serialized. It's still yes if we talk about green threads with cooperative scheduling; it's logical concurrency in a section of code that's the issue, not just physical.

If a receiver is immutable, serving multiple concurrent reads at once is a good optimization: concurrent replies lower system latency.

If a receiver is mutable, there's probably a critical section that must serialize execution in multiple native or green threads, unless nondeterministic end state is okay. (Sometimes it's okay: a cache can still work when garbled, just with fewer hits.) Any of the standard mechanisms work fine: locks, semaphores, condition variables, etc.

But if the runtime enforces single threaded message dispatching, then no, simultaneous message reception doesn't make sense.
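As a minimal sketch of the mutable case above (all names are hypothetical, not from any particular actor runtime), a standard lock serializes the critical section so that concurrent message handling still yields a deterministic end state:

```python
import threading

class Counter:
    """Hypothetical mutable receiver: handlers mutate shared state,
    so a lock serializes the critical section."""
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def handle_increment(self):
        with self._lock:           # one handler in the critical section at a time
            current = self.value
            self.value = current + 1

receiver = Counter()
threads = [threading.Thread(target=receiver.handle_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(receiver.value)  # 100: the lock prevents lost updates
```

Without the lock, the read-then-write in `handle_increment` could interleave and drop increments, which is exactly the "nondeterministic end state" case mentioned above.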

Re: clear questions

A sensible interpretation of "receiving multiple messages" is simply "receiving a collection of messages" - i.e. after there is at least one message in the actor's mailbox, process everything in the mailbox at once, as a set or vector of messages.

Based on your argument, I'm guessing you understood this in terms of "replicating the actor behavior in multiple threads", similar to how thread-based concurrency can become entangled with OOP. This interpretation is very implementation dependent (and therefore bad). Actors model doesn't have a concept called 'threads'. It only has actors and messages, and a few rules relating them.
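The "collection of messages" reading might be sketched like this (hypothetical names; not tied to any real actor library): the actor drains its whole mailbox and its behavior receives the batch as one value:

```python
from collections import deque

class BatchActor:
    """Hypothetical sketch: drain the mailbox and process it as one batch."""
    def __init__(self):
        self.mailbox = deque()
        self.log = []

    def send(self, msg):
        self.mailbox.append(msg)

    def step(self):
        # Take everything currently queued as a single collection.
        batch = list(self.mailbox)
        self.mailbox.clear()
        self.behavior(batch)

    def behavior(self, batch):
        # Collection-oriented processing: sum all 'deposit' messages at once.
        self.log.append(sum(amount for (tag, amount) in batch if tag == "deposit"))

a = BatchActor()
a.send(("deposit", 5))
a.send(("deposit", 7))
a.step()
print(a.log)  # [12]
```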

hmm, batches of messages.

I'd expect threads to be an implementation detail. Does actors model define concurrent message receipt as non-observable?

I have in mind something that can do an actors model. Each actor is a green process, with one or more green threads inside, not observable outside unless this detail is exposed as purposeful dependency. (Total actor state can be much less than 1K unless there's a lot of green threads or deep stack frame chains.)

Under a cooperative, async, green thread scheduler, we see long-running message handlers get interrupted. So concurrent message handling is supported if such an actor desires. If state is immutable, this isn't observable. If mutable, an actor can serialize when deterministic, or handle concurrent messages interleaved if this seems useful. But this mode would not be called an actor if actors model forbids this behavior.

(Mostly I think about this as applied to signal processing like network apps. But I've also been thinking about an app that looks like a daemon-based Hypercard with a browser interface, and it would be possible to populate with async event handling actors. Exploring the range of behavior this can have is one way I fail to fall asleep at night.)

Actors are described as

Actors are described as processing one message at a time. An actor should not be dealing with 'interrupts' or handling new messages while still working on the old one. If you want to model that sort of behavior, you would idiomatically use a cluster of actors.

flavors of grouping identities into services

I thought a one-message-at-a-time model might apply. I'll be sure to say that a lightweight process which handles messages concurrently is not an actor.

We need more words to describe interruptions, because we have a lot of shades of gray in scheduling to describe. In this case I meant suspended, by yielding to real-time processing code or running other green threads, as opposed to forcing a lightweight process to respond to an external signal. Informally, I think it's okay to say a suspend-and-resume sequence amounts to interruption, even if this does not invoke standard interrupt jargon associated with hardware.

Thanks for the cluster-of-actors reference. In effect, if an actor runtime has threads, one actor can act like a cluster of actors. Same thing, just folded differently, which an external observer may not be able to distinguish.

An example of concurrent execution

Never mind the issue of Temporal Logic programs not being able to throw exceptions.

Consider the following example of the use of an account noting that preparations are carried out concurrently:

Let anAccount ← CreateEuroAccount.[€5]◊
 Prep anAccount.Deposit[€11], anAccount.Withdraw[€5]◊            // the balance might temporarily be €0
   Let aReportedBalance ← anAccount.GetBalance[ ]◊               // aReportedBalance is €11
     Prep anAccount.Deposit[€7],
          anotherReportedBalance ← anAccount.GetBalance[ ]◊      // anotherReportedBalance might be €11 or €18
      [aReportedBalance, anotherReportedBalance]▮

The above expression returns [€11, €11] or [€11, €18].
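For readers unfamiliar with the ActorScript notation, here is a rough Python analogue (hypothetical names; the two possible interleavings of the second `Prep` are enumerated explicitly rather than run concurrently):

```python
def run(deposit_first):
    """Simulate one interleaving of the account example."""
    balance = 5                    # CreateEuroAccount.[€5]
    balance += 11                  # first Prep: Deposit €11 ...
    balance -= 5                   # ... and Withdraw €5 (balance might dip to €0 in between)
    a_reported = balance           # GetBalance: €11
    if deposit_first:              # second Prep: Deposit €7 races with GetBalance
        balance += 7
    another_reported = balance     # €11 or €18, depending on the interleaving
    if not deposit_first:
        balance += 7
    return [a_reported, another_reported]

print(run(False), run(True))  # [11, 11] [11, 18]
```

The two return values correspond to the two outcomes stated above.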

What is the Temporal Logic Program for the above?

LP is important, but classic implementations are problematic.

Logic Programming, or programming using assertions, goals, and constraints, is not really capable of much right now because the classical algorithms for it, based on exhaustive search, simply do not scale.

It can be used productively in ways to derive methods of finding solutions (i.e., where its output is a program that produces solutions using other, more scalable methods) but will never amount to much as a solution in itself to ordinary problems.

That said, problems that seem to be expressible only in terms of logic programming may be seen to yield to a heavily optimized form of it in which the next transformation or the next "rule to apply" is selected via a heuristic system optimized by machine learning. Such a program when freshly compiled on an unfamiliar problem may take orders of magnitude longer to reach a solution than the same program supplemented by statistical information derived from thousands of runs.

For example, you could have a rule-based system in which the next rule to apply were selected by a neural network, where the neural network is trained in the course of previous runs. This sort of approach could leapfrog the notorious scaling problems, but wouldn't work unless the tasks it was given were otherwise repetitive with minor variations. It would be even less predictable, however, than current implementations.

Careful!

Two points:
1) Is your optimised rule selection going to be complete in the seldom seen cases?

2) You can't just reorder clause (or rule, as you put it) selection willy-nilly. You have to understand how your system searches. Take Prolog for example. Here's a simple Prolog interpreter which allows you to specify one of two clause selection rules. One gives success, the other loops forever.

%%%
%%% The global store for code/rules.
%%%
claws_store([]).

%%%
%%% assert_claws(+C) 
%%%
%%% The clause C is asserted into the claws store.
%%% New clauses are added to the end.
%%%
assert_claws(C) :- 
    retract(claws_store(S)),
    append(S, [C], NewS),
    assertz(claws_store(NewS)).

%%%
%%% claws/3 implements the clause selection rule
%%%
claws(prolog, Head, Body) :-              % Prolog clause selection rule
	claws_store(S),
	member((Head :- Body), S).

claws(backwards, Head, Body) :-           % An ordering that is the reverse of Prolog's
	claws_store(S),
	reverse(S, Backwards),
	member((Head :- Body), Backwards).


%%%
%%% demo(+Rule, +Clause)
%%%
%%% Interprets Clause in accordance with the clause choice rule represented
%%% by Rule.
%%%
demo(_, true).
demo(ClauseChoiceRule, (C1 ; C2)) :- demo(ClauseChoiceRule, C1) ; demo(ClauseChoiceRule, C2).
demo(ClauseChoiceRule, (C1 , C2)) :- demo(ClauseChoiceRule, C1), demo(ClauseChoiceRule, C2).
demo(ClauseChoiceRule, C) :- claws(ClauseChoiceRule, C, Body), demo(ClauseChoiceRule, Body).

%%%
%%% Test code.
%%%
:- assert_claws((integer(0) :- true)).
:- assert_claws((integer(s(X)) :- integer(X))).

Running some queries shows the difference. First Prolog's clause selection rule:

| ?- demo(prolog, integer(X)).
X = 0 ? ;

X = s(0) ? ;

X = s(s(0)) ? 

It seems to give us what we want. Now with the backwards selection rule:

| ?- demo(backwards, integer(X)).
^C

The interpreter got stuck in an endless loop!

See, that's the sort of thing...

A predictable search sequence is never a good implementation choice because it makes the system prone to infinite loops. It is far better to choose randomly from among the clauses whose conditions are met.

Also, an infinite loop, when there is a way to escape it, is the sort of thing that a system can easily train itself to avoid. In optimizing for rapid progress, it will start by making random selections among the clauses whose conditions are satisfied, but reinforcing choices that cause "progress." It can see progress because when progress is being made, it is presented with choices it hasn't been presented with before, or better still, reaches a resolution state -- a state of program completion or a state of waiting for needed user input.

Pretty much any loop involves the presentation of repetitive choices and a failure to reach resolution states. Thus each selection made leading along the loop would be negatively reinforced and become less likely to be made in the future, especially when considered against the context of the particular set of clauses of which that choice is composed and the particular recent history of choices seen.

Before very long you'd see the system learning, when given that choice and that recent history of choices, to make the selections that have, in the past, led more quickly than other selections from among that group of clauses to progress (novel choices) or, more importantly than mere progress, to resolution states.

It should probably never completely exclude any possible selection, lest it become prone to looping. Therefore you'd want it to leave some small probability of choosing a clause even when it has been extensively negatively reinforced. But if a clause is in fact completely useless (infinite looping, for example) in the context of a particular choice, then the probability of selecting that clause in the context of that choice could eventually become arbitrarily small.

Bear
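A minimal sketch of this reinforcement scheme (hypothetical names; a weight table stands in for any learned model): applicable clauses are chosen by weight, unproductive choices are negatively reinforced, but no clause's probability is ever driven all the way to zero:

```python
import random

class LearnedSelector:
    """Hypothetical sketch: weighted random clause selection,
    with multiplicative reinforcement and a floor on weights."""
    def __init__(self, rules):
        self.weights = {r: 1.0 for r in rules}

    def choose(self, applicable, rng):
        total = sum(self.weights[r] for r in applicable)
        pick = rng.uniform(0, total)
        for r in applicable:
            pick -= self.weights[r]
            if pick <= 0:
                return r
        return applicable[-1]

    def reinforce(self, rule, reward):
        # Multiplicative update; never drive a weight to exactly zero,
        # so even heavily penalized clauses retain a small probability.
        self.weights[rule] = max(1e-6, self.weights[rule] * (1.0 + reward))

sel = LearnedSelector(["expand", "loop_back"])
rng = random.Random(0)
for _ in range(200):              # simulate runs where "expand" led to progress
    sel.reinforce("expand", 0.05)
    sel.reinforce("loop_back", -0.05)
print(sel.weights["expand"] > sel.weights["loop_back"])  # True
```

After enough runs, `choose` almost always prefers the productive clause, while the looping clause keeps the small residual probability the post argues for.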

I disagree

A predictable search sequence is never a good implementation choice because it makes the system prone to infinite loops.

What is it about predictability that can lead to infinite loops, that cannot be exhibited by unpredictability?

It is far better to choose randomly from among the clauses whose conditions are met.

In the example given, a random shuffling of the clauses then a selection of the first match would not help you. The case where an infinite loop occurs would be less likely but not impossible. So this would not solve the problem. If however, I order the clauses knowing the clause selection rule's deterministic behaviour, I can avoid the problem.

Also, an infinite loop, when there is a way to escape it, is the sort of thing that a system can easily train itself to avoid.

Is this not the halting problem? At what point does the system say "this is an infinite loop!". Why would a query asking if a really large integer in Peano representation was an integer --- in my example integer(s(s(s(s...(s(s(0)))...)))) --- not falsely trigger this infinite loop mechanism?

A predictable search

A predictable search sequence is never a good implementation choice because it makes the system prone to infinite loops.

What is it about predictability that can lead to infinite loops, that cannot be exhibited by unpredictability?

In an unpredictable choice, all of the clauses whose conditions are met have some probability of being the next clause to be selected.

When a system fails to repeat its choices time and again, it escapes the loop. Randomness is a reasonably reliable strategy for avoiding repetition.

There is exactly one completely reliable strategy for avoiding loops: breadth-first search. But breadth-first search can take longer than the heat death of the universe and more memory than the number of atoms in it, if you're talking about nontrivial cases. On non-toy examples, it's as bad as being stuck in an infinite loop.

Is this not the halting problem? At what point does the system say "this is an infinite loop!". Why would a query asking if a really large integer in Peano representation was an integer --- in my example integer(s(s(s(s...(s(s(0)))...)))) --- not falsely trigger this infinite loop mechanism?

The system never says, "this is an infinite loop!" It says instead, "hmmm, making this selection when faced with this choice never seems to get me anywhere new; I think I'll start making that selection less often when I see a choice like that." It's not so much making a deterministic choice that this is definitely a loop, and it must break out of it; it's more like it is just getting bored because it never sees novel choices of which clauses it has available to select, and starts trying other possibilities if there are any, to see if they lead to any different choices.

It's not the halting problem; we still can't say for certain that it will halt. But we can assert that if it doesn't halt, the failure to halt won't be because it made the same stupid selection every time it was presented with some particular choice of possible clauses to try next.

Picking apart a really large integer in Peano representation would be boring, but in the absence of any other clauses whose criteria were met, there'd be no choice of ways to proceed. So the system would continue until it got an answer. If there were other clauses whose criteria were met occasionally, you could probably expect some approximate ratio of choices between the available clauses to develop. But slowly, over many completions of similar queries, the ratio would be skewing toward Peano expansions because regardless of what other choices are explored, the probability of reaching a resolution state would vary strictly with the number of Peano expansions performed.
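The fairness claim can be illustrated with a minimal breadth-first searcher (hypothetical, not tied to any particular LP system): even when one branch loops forever, the goal branch is still found, because no branch can starve the others:

```python
from collections import deque

def bfs_solve(start, expand, is_goal, limit=1000):
    """Fair (breadth-first) search: every reachable state is eventually
    visited, so a single infinite branch cannot monopolize the search."""
    frontier = deque([start])
    visited = 0
    while frontier and visited < limit:
        state = frontier.popleft()
        visited += 1
        if is_goal(state):
            return state
        frontier.extend(expand(state))
    return None

# Two branches from each state: one loops (s -> s), one makes progress (s -> s+1).
expand = lambda s: [s, s + 1]
print(bfs_solve(0, expand, lambda s: s == 3))  # 3
```

A depth-first search with an unlucky clause ordering would follow the `s -> s` branch forever, which is exactly the `demo(backwards, ...)` failure shown earlier in the thread.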

Can we live with incompleteness.

What is it about predictability that can lead to infinite loops, that cannot be exhibited by unpredictability?

In an unpredictable choice, all of the clauses whose conditions are met have some probability of being the next clause to be selected.
When a system fails to repeat its choices time and again, it escapes the loop. Randomness is a reasonably reliable strategy for avoiding repetition.

It doesn't really answer the question. Any undesirable search path produced by a predictable searcher can be chosen by an unpredictable searcher (otherwise it would have a degree of predictability). You offer no guarantee. Predictable search can give guarantees, as I pointed out with the following:

If however, I order the clauses knowing the clause selection rule's deterministic behaviour, I can avoid the problem.

You say

There is exactly one completely reliable strategy for avoiding loops: breadth-first search.

Ok, but now we have other problems --- remember, we did start out with logic programming. Imagine

pred(X) :- deep_search_large_tree(X), signal_success.
pred(X) :- signal_failure.

This would attempt signal_failure every time with a breadth first search (wrt. clause selection). The proof tree for deep_search_large_tree couldn't be expanded until signal_failure had been dealt with. Let's assume the attempt at proving signal_failure is a bad thing if deep_search_large_tree succeeds.

This could of course be rewritten

pred(X) :- deep_search_large_tree(X), signal_success ; signal_failure.

But at some point we'll have to choose a disjunct, and breadth-first search of the disjuncts will give the same problem.
To solve this we now have to introduce some mechanism to prune branches of the search tree, or delay branch searches, or we could just duplicate search as in the following:

pred(X) :- deep_search_large_tree(X), signal_success.
pred(X) :- not deep_search_large_tree(X), signal_failure.

If this is a good thing or not depends on the application, whether or not incompleteness can be tolerated.

Going back to your infinite loop avoidance mechanism:

it's more like it is just getting bored because it never sees novel choices of which clauses it has available to select, and starts trying other possibilities if there are any, to see if they lead to any different choices.

Now we either need to duplicate work again, or we need some mechanism to find out which other clauses have been chosen at the current level of the proof tree:

pred(X) :- deep_search_large_tree(X), signal_success.
pred(X) :- other_clause_has_been_attempted, signal_failure.

Again, is this a good thing or not? Can we live with incompleteness?

There are many search

There are many search strategies that are hard to predict before running but that can still offer useful guarantees. If I were implementing Ray's idea, I'd certainly ensure an equivalence to full search[*] (as a degenerate case).

Signals aren't part of (purist) LP. Those are one of Prolog's additions (like cut). But if you wanted to integrate signals with search, you'd just have it stop one search branch.

[*addendum]: An easy technique to achieve this is, instead of probabilistically choosing a rule, you create a queue of rules with duplicates corresponding to how often a rule is to be attempted. ML can then optimize both the frequency and ordering in which rules are attempted, creating simplistic strategies. This approach would be amenable to ML based on genetic programming.
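The queue-of-duplicates idea might be sketched as follows (hypothetical names; the frequency table is what an ML layer would tune):

```python
def build_rule_queue(frequencies):
    """Expand {rule: count} into an ordered queue where each rule appears
    once per unit of frequency; both counts and ordering can be optimized."""
    queue = []
    for rule, count in frequencies.items():
        queue.extend([rule] * count)
    return queue

# A rule to be attempted three times as often simply appears three times.
q = build_rule_queue({"base_case": 3, "recursive_case": 1})
print(q)  # ['base_case', 'base_case', 'base_case', 'recursive_case']
```

The search then walks the queue in order, so the degenerate case (every rule with count 1) reduces to the original full search.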

For starters you have an invalid production.

If you have a clause that can do something you don't want done, then it is up to you to ensure that its conditions are not met.

In Logic Programming, it doesn't matter in principle which clause is next executed, because they are all valid. I.e., every clause will produce something that is consistent with what's known, regardless of whether that thing later becomes useful.

The above example is wrong (inconsistent) because signal_failure can be produced when the search for a solution has not in fact yet been proven to fail. Producing signal_failure at this point is inconsistent, and the rule should be rewritten so that signal_failure is produced only when there is real evidence that the goal cannot be reached (or, put another way, when you have derived the negation of the goal or can make no further progress toward the goal).

If you have a clause that

If you have a clause that can do something you don't want done, then it is up to you to ensure that its conditions are not met.
Yes, and this is exactly what was done in the example. It was achieved through pragmatic control. Let me define that last term. According to Boizumault* we can identify four different types of control:

  • Pragmatic control: the clause selection and atom selection rules are known and exploited by the programmer, who lays out the code to take advantage of them.
  • Explicit control within the program: metalogical predicates such as cut, or '->'/2 at the top level on the left hand side of a ';'/2.
  • Explicit control separate from the program: for example, metarules for controlling search, possibly in a different language.
  • Automatic control: the system implements control strategies itself; intelligent backtracking and the like.
Now, the example I gave, that you quite rightly identified as being logically wonky, used pragmatic control. It was an example that is not too dissimilar to real code. Systems with I/O or destructive knowledge base updates are much easier when the programmer can express control.

Your solutions to the infinite loop problem — Monte Carlo and BFS — reduce the possibility of expressing pragmatic control. The atom selection rule still applies though so this avenue of control is not entirely eliminated. Should we adopt such a solution, I suggested that either we

  • duplicate effort which keeps it nice and logical, or
  • introduce some mechanism of explicit control (I only really offered explicit control within the program, but separate from the program would do as well).
I cannot imagine how automatic control could alleviate the pain as it would need to read the programmer's mind.

Without explicit control of some form, I can't see how Monte Carlo or BFS could be used to build software. This is why I asked if we could live with potential incompleteness.

*Patrice Boizumault, The Implementation of Prolog, Princeton University Press, 1993
If you think I've moved the goal posts by bringing up the building of software then I can almost sympathise with you, but in the last post I did try to reintroduce the programming into the discussion.

Impure LP

Where does plain-old logical control fall into those four classes? E.g. using intermediate productions where necessary, and using more detailed conditions? There are also hybrid techniques: automatic control with programmer hints, automatic selection/switching among strategies defined separately from the program.

Anyhow, I'm seeing a pattern here: it seems the vast majority of your complaints about automatic control are based around impure logic programming, where some sort of side-effect (cut, signal, etc.) is executed while a clause is matched. Prolog is not a pure LP language.

I favor pure logic programming and a strong focus on logical control. Integration with the outside world is achieved by modeling inputs as live program manipulation (injecting or manipulating facts, perhaps based on external sensors or databases) and modeling output as software agents observing/querying certain facts (e.g. to decide what is rendered, or to recognize requests for inputs). It shouldn't be a surprise, but this is very similar to pure functional programming.

Impure functional programming is better described as 'procedural' programming (the good kind, with first-class procedures). I don't have a word for impure logic programming, except to call it a hybrid imperative/logic style.

Anyhow, one advantage of pure LP styles is that rule selection is primarily a performance concern. There are no spurious signals from evaluating clauses in a bad order. Of course, it's an important performance concern, and one worth addressing - hints, strategies, ML, etc. But there is no need to get it absolutely perfect the way we must with impure LP. There won't be any surprising effects. With pure LP, there is no need for automatic control to "read the programmer's mind". And we don't suffer any unpredictable incompleteness.

(Rule selection is also potentially a termination concern, depending on the logic. Temporal logics are often total-up-to-time, which has the interesting consequence that developers logically model more intermediate structures and computations.)

Prolog represents just one way of doing LP, not the only way, and not my preferred way.

nowt

Where does plain-old logical control fall into those four classes? E.g. using intermediate productions where necessary, and using more detailed conditions?

What do you mean by intermediate productions and detailed conditions? Can you give an example, say a proof derivation with these things in it?

There are also hybrid techniques: automatic control with programmer hints, automatic selection/switching among strategies defined separately from the program.

I cannot speak for the original author of these four classes but I'll tell you what I think.

I would say anything with programmer hints depends on how the hints are expressed, either within or separate from the program. I wouldn't class it as automatic.
The automatic changing of strategies which were defined separately would be explicit control again. In this case, I'm picturing a metalevel interpreter for control knowledge executing a program which interprets domain level knowledge.
The switching of strategies would be the switching of the metalevel program.

To me automatic control is completely automatic.
However it is just names. At the end of the day it's what works that is important.

Anyhow, I'm seeing a pattern here: it seems the vast majority of your complaints about automatic control are based around impure logic programming, where some sort of side-effect (cut, signal, etc.) is executed while a clause is matched. Prolog is not a pure LP language.

I have no problem with automatic control, it is nondeterministic control that is problematic for me.

I favor pure logic programming and a strong focus on logical control. Integration with the outside world is achieved by modeling inputs as live program manipulation (injecting or manipulating facts, perhaps based on external sensors or databases) and modeling output as software agents observing/querying certain facts (e.g. to decide what is rendered, or to recognize requests for inputs). It shouldn't be a surprise, but this is very similar to pure functional programming.

This is a serious question I have for you: how would you handle a device interrupt? If I understand your live program manipulation method, you'll get something asserted into the knowledge base (I'll assume you'll have some kind of storage area for facts - correct me if I've assumed too much). When are you going to know the knowledge base has been updated? Who decides when your handler is going to get started? How are you going to manipulate the registers and in the right order (deal with device registers, then enable the interrupt again)?

Impure functional programming is better described as 'procedural' programming (the good kind, with first-class procedures). I don't have a word for impure logic programming, except to call it a hybrid imperative/logic style.

Would you call SML with its references procedural?

Prolog represents just one way of doing LP, not the only way, and not my preferred way.

I think everyone knows how you feel about Prolog by now :) Honestly, you're the one that keeps bringing Prolog up.

What do you mean by

What do you mean by intermediate productions and detailed conditions? Can you give an example, say a proof derivation with these things in it?

Think of how precedence is modeled using a context-free grammar: instead of using one big rule, you use a bunch of smaller rules. The intermediate structures can imply a precedence.

With temporal logic, this is often taken to a much greater extreme. You can model such things as procedural computations as occurring over time, where the whole computation is really an intermediate structure for the information you derive. E.g. you move from a "fact" that a particular procedure is in a particular state in one instant, to a "fact" that said procedure is in the next state in the next instant.
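A toy illustration of that temporal idiom (entirely hypothetical): each instant derives a new fact from the previous one, so the whole "procedural" computation is a sequence of timestamped facts rather than a mutation:

```python
def step(state):
    """Hypothetical transition rule: the 'procedure' counts down to zero."""
    return state - 1 if state > 0 else 0

# Derive fact(t) from fact(t-1): the intermediate states are themselves facts.
facts = {0: 3}                    # at time 0, the procedure is in state 3
for t in range(1, 5):
    facts[t] = step(facts[t - 1])
print(facts)  # {0: 3, 1: 2, 2: 1, 3: 0, 4: 0}
```

Nothing is ever overwritten: "the procedure was in state 2 at time 1" remains true forever, which is what makes the model logical rather than imperative.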

I would say anything with programmer hints depends on how the hints are expressed, either within or separate from the program. I wouldn't class it as automatic.

The essential idea with "programmer hints" is that the implementation must be free to ignore them. This also means it is free to use them in flexible ways, e.g. with machine learning one might weight the initial decisions based on the hints.

I don't understand why you wouldn't class it as automatic. Developers don't have full control over the search, and your complaints about 'automatic' control would still apply in an impure model, so it's at least partially automatic.

I have no problem with automatic control, it is nondeterministic control that is problematic for me.

Hmm. In practice, once you consider portability to other implementations, 'automatic' tends to imply 'nondeterministic'. That is, if it isn't determined by the programmer, then it's an implementation detail that the programmer cannot depend upon staying stable in every circumstance.

This is a serious question I have for you: how would you handle a device interrupt? If I understand your live program manipulation method, you'll get something asserted into the knowledge base

I'll answer this in context of a temporal logic system. In a non-temporal system, you probably won't be working with real-time devices (beyond coarse-grained pause/resume) so it should be a non-issue.

A device interrupt must occur at some 'time' value. So you add a 'fact' to the system (knowledge base or code) that basically says such-and-such an interrupt occurred at a particular time. You can then derive information from the presence (or absence) of the interrupt.

The choice of 'time' values can depend on the implementation. For a discrete temporal logic, computations run at integral time-steps that have no strong relationship to real-time (though, some implementations might be 'millisecond' based or similar). In this case, you'd typically inject the interrupt as occurring at `now+1`.

For a more continuous or real-time temporal logic, you could actually read the time off a wall clock. In this case, you might need to rewind and recompute any facts that were computed for the same time period that might be affected by the new fact. The resulting system has a sort of "rolling window" where the past is stable and the future has yet to be computed, but computations within the window are still in flux (subject to rollback and replay). You could in most cases avoid rewind by adding just a small bit of latency to the interrupt - e.g. wall-clock + 20ms. If you haven't read about 'lightweight time warp' systems, they operate in a very similar manner, albeit using discrete lamport clocks for a distributed system.
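A minimal sketch of the discrete case (hypothetical names; a real system would also handle rollback for the continuous case): the knowledge base is just a collection of timestamped facts, and injecting an interrupt means adding a fact at `now+1`:

```python
class TemporalKB:
    """Hypothetical sketch of a knowledge base of timestamped facts.
    Injecting an interrupt is simply adding a fact at time now+1."""
    def __init__(self):
        self.facts = []           # list of (time, fact) pairs; never mutated in place
        self.now = 0

    def inject(self, fact, at=None):
        self.facts.append((self.now + 1 if at is None else at, fact))

    def facts_at(self, t):
        return [f for (time, f) in self.facts if time == t]

kb = TemporalKB()
kb.now = 41
kb.inject("interrupt(device7)")          # tagged as occurring at now+1
print(kb.facts_at(42))  # ['interrupt(device7)']
```

Derivation rules would then query `facts_at` for the current instant instead of being "handed control" the way an interrupt handler is.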

When are you going to know the knowledge base has been updated?

Immediately. That's what live programming means. Though, depending on what time-value the interrupt was tagged with, it might not be immediately relevant (temporal logic tends to run with causality).

Who decides when your handler is going to get started? How are you going to manipulate the registers and in the right order (deal with device registers, then enable the interrupt again)?

Who needs a handler? Handlers are for control-flow systems (you pass control to a handler). As far as the logic is concerned, the interrupt is a plain-old-fact (as in: "it's a fact that we were interrupted at time T, what do we logically conclude from this?")

Dealing with CPU registers, injecting the "we were interrupted" fact into the knowledge base, etc. would be part of the language implementation or runtime.

Would you call SML with its references procedural?

Yes. Same with OCaml or Lisp. Relevantly, they are commonly used that way. Functional programming in the true (pure) sense has an almost entirely distinct set of idioms and software design patterns, and a different set of gotchas.

I think everyone knows how you feel about Prolog by now :) Honestly, you're the one that keeps bringing Prolog up.

I bring it up because your arguments assume Prolog-like features are part of LP, if only for the programming aspect. I'm hoping to break you from those assumptions, or at least make them a bit more explicit.

I don't understand why you

I don't understand why you wouldn't class it as automatic. Developers don't have full control over the search,

Automatic means that it works on its own. If developers can control it in some way then it's not automatic.

and your complaints about 'automatic' control would still apply in an impure model, so it's at least partially automatic.

It's explicit then. Being partially automatic is like being partially pregnant. At the end of the day it makes no difference.

Who needs a handler? Handlers are for control-flow systems (you pass control to a handler). As far as the logic is concerned, the interrupt is a plain-old-fact (as in: "it's a fact that we were interrupted at time T, what do we logically conclude from this?")

Dealing with CPU registers, injecting the "we were interrupted" fact into the knowledge base, etc. would be part of the language implementation or runtime.

I'm talking about writing the handler in the logic programming language. Injecting the fact and peeking and poking memory mapped registers would be primitives implemented in the runtime library, but the actual checking of status, the getting/putting of the data, the reconfiguration of the device, and the enabling of interrupts; how are you doing that sequence without deterministic control?

I bring it up because your arguments assume Prolog-like features are part of LP, if only for the programming aspect. I'm hoping to break you from those assumptions, or at least make them a bit more explicit.

As I pointed out earlier, it is a question of resolution and deterministic clause choice (deterministic literal/atom selection has been taken for granted). Prolog is just one example of a resolution based system with a deterministic computation rule. Any sequential system inspired by the Kowalski and van Emden paper would probably do. These features are most definitely part of logic programming. I have never claimed they are the only part as you seem to assume.

So semi-automatic guns are

So semi-automatic guns are partially pregnant? Maybe your classifications and rubrics need reconsideration?

Logic programming can potentially go all the way down to specifying CPU handlers, but that isn't how it's done in practice (unless you're extracting programs as proofs, cf. Coq and Bedrock). Instead, the logic programmer focuses on the logic, and you have a separate implementation layer to provide the interrupts into the knowledge base. This is simply the LP version of the OS API.

When we do procedural programming, we want our 'machine' (VM or OS APIs) to be observed as and influenced by procedure calls. When we do LP, we want our 'machine' to be observed as and influenced by logical facts or predicates. That's fundamental. It should go without saying. However, like many people, perhaps you've never even imagined a virtual-machine or OS API other than what is commonly used today. I have the advantage of having seen pure LP used for controlling robots and helicopters, executed upon a relatively specialized real-time OS. We don't write handlers. We simply describe how things should be spinning or moving based on the current state and sensors, which are represented as facts in the knowledge base (changing at well-defined, atomic instants).

Basically, your question about handlers isn't relevant in pure LP practice because it's not how IO is done.
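A rough sketch of that IO model, assuming the runtime snapshots sensors at atomic instants: the "program" is a pure mapping from sensor facts to actuator facts, with no handler anywhere. The predicates and thresholds (`tilt`, `rotor_speed`, 5.0) are invented for illustration.

```python
# Illustrative only: in the pure-LP IO model, the program never writes
# a handler. It maps the current sensor facts to desired actuator
# facts, and the runtime reads those off atomically.

def control(sensors):
    """Pure 'logic': derive actuator facts from sensor facts."""
    out = set()
    for (pred, val) in sensors:
        if pred == "tilt" and val > 5.0:
            out.add(("rotor_speed", "left", "increase"))
        if pred == "tilt" and val < -5.0:
            out.add(("rotor_speed", "right", "increase"))
    return out

# At each well-defined instant the runtime snapshots the sensors...
snapshot = {("tilt", 7.2), ("altitude", 120.0)}
# ...and the derived facts say what the actuators should be doing.
actions = control(snapshot)
```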

Re: "it is a question of resolution and deterministic clause choice" - with pure LP, this question is irrelevant. Prolog is a bad example system for generalizing LP because you're focusing on issues that arise due to the 'impure' features rather than the 'logic programming' features.

Re: "These features are most definitely part of logic programming." - That's analogous to saying pervasive ad-hoc side-effects and exceptions are definitely part of functional programming (and therefore lazy or parallel evaluation is an awful idea). It is simply wrong. But it is perhaps not obviously wrong, except with hindsight of having seen pure variations.

We could debate whether or not supporting effects is worth the overhead of requiring deterministic resolution, but I don't really feel like it. I hope you now understand that some versions of logic programming, due to having different IO models, aren't under the constraints you're assuming based on your experience with Prolog.

Take this in the spirit it is intended.

All you had to say was that your idea of logic programming couldn't do interrupt handlers. That would be enough, and any of us that design things would understand, as what is fit for one purpose isn't necessarily fit for another. It's OK.

I'm writing an operating system in a logic programming language. Now don't take this the wrong way but your ideas of what should and shouldn't be done in logic programming are of no interest to me. I cannot try to solve my problems in accordance with your limited experience. By limited I mean you've always been removed from the bare metal in your logic programming. That's OK too, not many people outside Japan have written an OS in a logic programming language. To say that my question about interrupt handlers isn't "relevant" actually made me laugh.

We're not all working under the same set of limitations. At some point you'll have to understand that.

Your question about handlers

Your question about handlers isn't relevant to pure LP. That doesn't mean it isn't relevant in any context. But it does mean that judging LP by its representation of handlers is confused, like judging chess players by how well they play Tetris. (I don't have any objection to your use of impure LP. I only object to your conflation of the impurity with the LP in your arguments. It leads to premature generalization.)

Handlers are unnecessary and awkward for pure LP, but you can do them via meta-programming. I.e. the machine can be observing for a 'this procedure P is the handler for signal S' fact, and the procedural code for P can be computed or represented within the logic. I already mentioned this above (re: Coq and Bedrock).
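The meta-programming route might be sketched as follows, with the handler represented as data looked up via a `Handler(signal, procedure)` fact. The dispatch function stands in for the machine layer; all names here are hypothetical.

```python
# Sketch of the meta-programming escape hatch: the handler itself is
# data in the knowledge base, as a fact Handler(signal, procedure),
# and the machine layer looks it up when the signal fires.

handlers = {}  # the "knowledge base" restricted to Handler facts

def assert_handler(signal, proc):
    handlers[signal] = proc          # Handler(signal, proc)

def machine_dispatch(signal):
    """What the implementation layer does when the signal fires."""
    proc = handlers.get(signal)
    if proc is not None:
        proc()

fired = []
assert_handler("SIGUSR1", lambda: fired.append("serviced"))
machine_dispatch("SIGUSR1")
```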

In any case, my "limited experience" seems strictly broader than yours, though it is probably shorter. (Judging by such phrases, I'm guessing you intended your message in a very irritated spirit. So taken. :) I have used impure LP, but you haven't used pure LP. One university course had me implement and use many logics (which is not as hard as it sounds, using Maude or OBJ).

nowt

In any case, my "limited experience" seems strictly broader than yours, though is probably shorter. (Judging by such phrases, I'm guessing you intended your message in a very irritated spirit. So taken. :)
No I'm not irritated — I'm too old to get worked up about it. Look, if you got upset about me describing your experience as limited then I am genuinely sorry about that. I did express what I meant by "limited" though so as not to cause offence. Maybe I should have used another word. The point being made was that your experience was confined to a particular area.
I have used impure LP, but you haven't used pure LP.

You're making assumptions again. How do you know I haven't used pure logic programming? On this website you'll find two implementations of pure logic programming languages written by me.

First, in this very thread an interpreter for pure Prolog http://lambda-the-ultimate.org/node/4789#comment-76225 It may not be very interesting but it's pure logic programming none the less.

As we know how much you dislike Prolog, let me give you another example. This one is a fully normalising lambda calculus reduction machine along with a compiler which translates lambda calculus expressions in Lisp s-expression syntax to the machine's instructions http://lambda-the-ultimate.org/node/4597#comment-72406 According to Volume 5 of the Handbook of Logic in Artificial Intelligence, which deals with logic programming, I can call this an example of an equational logic. No I/O there, so I'm pretty sure that's pure.

Neither of these examples is particularly impressive but they are the only demonstrations of pure logic programming I could find that I had posted on this very site. I, of course, could claim much greater deeds done in private but that wouldn't be provable. Anyway, ask yourself, would someone who doesn't use pure logic programming spend time implementing pure logic programming systems?

Can you see that you make wrong assumptions? As I wrote earlier, you must understand we're all working with different limitations.

arguments are evidence of experience

Your arguments are evidence of your experiences, as are the questions you ask. I'm not 'assuming'. I'm inferring from the evidence available to me.

Pointing out that you've written some logic programs or subprograms that happen to be pure is analogous to a Lisp developer pointing at his pure Fibonacci function and saying "Look, I know pure FP!". That Lisp developer is unlikely to recognize or acknowledge the utility of pure approaches to integrating IO (e.g. monads). A pure FP programmer must address the IO problem in order to develop useful applications; ignoring it is not an option. Doing so has a pervasive impact on software and language design. The same can be said of pure LP relative to impure LP.

(Also, your second example - the lambda calculus reduction - makes extensive use of impure effects via rplaca and rplacd to communicate between different steps in the program.)

would someone who doesn't use pure logic programming spend time implementing pure logic programming systems?

Of course not! But before generalizing about LP and what is possible or essential, that someone should be aware of other implementations (both existing and potential). E.g. before insisting that deterministic clause selection is critical for LP, they should stop and think "why is this so important? is it reasonable to develop usable LP systems without this limitation?"

you must understand we're all working with different limitations

IIRC, this argument started because you're insisting that we all should work with the same limitation: deterministically ordered clause selection.

It's ok to be wrong

All you had to do was say that you were wrong. You claimed that I "haven't used pure LP" and I gave you concrete evidence to the contrary. But no, you decided to twist and turn, duck and avoid. It seems I hadn't used the kind of pure logic programming that you have sanctioned.

It's ok to be wrong. It's not a sign of weakness to admit failure. Your current way of coping with failure isn't a viable long term strategy IMHO, but, that has nothing to do with me, it is you that pays the price.

By the way:

That Lisp developer is unlikely to recognize or acknowledge the utility of pure approaches to integrating IO (e.g. monads).

There are more than a few Lisp programmers out there who have forgotten more about pure functional programming than you or I will ever know about that subject. You're making those big assumptions again.

Programming with Purity

If a Lisp programmer points at a pure function (e.g. to recognize Peano numbers, to use one of your own examples) and insists he therefore knows or has performed 'pure functional programming', this is actually strong evidence to the contrary. He hasn't demonstrated knowledge of the important parts, like how IO is handled to develop complete applications in the pure style. He hasn't grasped the distinction between 'pure function' and 'pure functional programming'. What he has demonstrated is that he doesn't even know enough to recognize a meaningful demonstration - i.e. he's at the level of unconscious incompetence with respect to pure functional programming. Apparently, all he knows is that pure functions are involved.

Right now, you're that Lisp user. Except, in this case it's 'pure logic' vs. 'pure logic programming', and you're that Prolog user. You've pointed at impure implementations of pure logics, not the use of pure logics to develop practical applications.

I do not believe this was a "twist and turn, duck and avoid" maneuver on my part. Rather, it's a failure to distinguish between "pure L" and "pure LP" on your part. One of your earlier comments ("I did try reintroduce the programming into the discussion") - referring to your use of signals to represent effects in typical impure LP code - seems to indicate you do understand the distinction and use it in a different context, even if you failed to make the distinction in this case.

There are more than a few Lisp programmers out there who have forgotten more about pure functional programming than you or I will ever know about that subject.

That is possibly true, especially if you count people who also program in Haskell or languages favoring a pure functional style. But even if such Lisp programmers exist, I imagine they're a small subset of all Lisp programmers. The Lisp programmer I mentioned (based on a few actual Lisp programmers that haunt locations like Hacker News) isn't part of that subset.

Logic Programming doesn't

Logic Programming doesn't mean Prolog. You're right that Prolog has the problem you describe, which makes it less like logic. (It also has `cut` and other non-logical concepts.)

But there are other LP languages without this problem, e.g. where termination can be guaranteed no matter in which order the rules are applied. (Though the 'guarantee' might still involve such a high exponential that termination happens some time after the heat death of the universe.) Or where search order is left in the hands of the compiler/interpreter.

You've misunderstood/missed the point.

Logic Programming doesn't mean Prolog. You're right that Prolog has the problem you describe, which make it less like logic. (It also has `cut` and other non-logical concepts.)

The problem isn't Prolog, the problem is the inference rule. The choice of clause is critical with resolution. In this case, the fact that Prolog has metalogical built in predicates is neither here nor there. This is all about resolution.

There is no requirement to choose a clause

There is no requirement to "choose" a clause. You can apply ALL of them - i.e. breadth first. Any other choice is for efficiency reasons.

Prolog addresses efficiency issues by putting the decision in the hands of the developers, at risk of developers choosing poorly. Ray describes another approach: put the strategy in the hands of machine-learned heuristics. I expect that design would make LP much more viable for inexpert developers, similar to how automatic memory management simplifies things in most use cases. There is probably some good intermediate ground, too. We could use an initial order-of-clauses in the code to derive initial weights in the ML model. Or we could modularize search strategies so they're separate from the model definition (similar to how rewrite strategies are handled in Maude).

It is somewhat annoying that, because Prolog is the most prominent LP language, people tend to conflate the separate concerns of efficient search strategy with the logic program. Logic programming, in its purest form, is just about developing a useful set of inference rules. Managing search strategy is not an essential part of the logic programming experience.
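The breadth-first alternative ("apply ALL of them") can be sketched for propositional Horn clauses. This toy explores every matching clause at each step, so it is complete regardless of clause order; a deterministic selection rule would only be an efficiency refinement. The clause base is invented and flattened for brevity.

```python
# Minimal sketch of clause-order-independent resolution: breadth-first
# search applies every matching clause at each step, so no
# deterministic "choose a clause" rule is needed for completeness.

from collections import deque

# Propositional Horn clauses: head <- body (body is a tuple of atoms).
clauses = {
    "grandparent": [("parent", "parent_of_parent")],
    "parent": [("mother",), ("father",)],  # two alternative clauses
    "mother": [()],                        # fact
    "parent_of_parent": [()],              # fact (flattened for brevity)
}

def provable(goal):
    # Each queue entry is a tuple of outstanding goals.
    frontier = deque([(goal,)])
    seen = set()
    while frontier:
        goals = frontier.popleft()
        if not goals:
            return True                     # all goals discharged
        first, rest = goals[0], goals[1:]
        for body in clauses.get(first, []): # try ALL clauses, not one
            state = tuple(body) + rest
            if state not in seen:
                seen.add(state)
                frontier.append(state)
    return False
```

Since clauses with no matching alternatives simply contribute nothing to the frontier, the search fails finitely here; a real system would add unification and fairness guarantees on top of this skeleton.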

nowt

For BFS, see my reply to Ray's post.

Managing search strategy is not an essential part of the logic programming experience.

Maybe it's not an essential part of your logic programming experience.

You can argue control over

You can argue control over search to be useful, valuable, etc.. But it isn't part of any logic I know, and several LP languages do without. Therefore, it isn't essential. QED.

General LP system that scales without it?

It may be possible to do generalized LP without search strategies, but are you aware of systems that supported actual large programs in that style? It's certainly an aspiration. The new wave of Datalog systems seems to have given up on general LP in favor of making automation tractable.

I'm not aware of LP being

I'm not aware of LP being used in any applications that I consider large, at least not in a pervasive manner (as opposed to little pockets of LP). It hasn't been a popular programming model, and it has a (mostly deserved) reputation for poor scaling properties.

But I do know of many systems where search is left entirely to the implementation, which are very successful even at larger scales. These include constraint solvers, parsers, planners, schedulers, term rewrite models, and rules-based systems. I feel confident that it can work for LP, too.

Dedalus and Bloom (descendants of Datalog) are both designed for general purpose, scalable LP systems, and do not have user-controlled search strategy. In a sense, developers can partially control search by modeling a computation along the temporal dimension. But I'm not aware of them being in use for real projects.

OPS5, OPS83, ART, a few others...

There have been several systems that scale reasonably well when the clause selection rule is to make a random selection from among those whose conditions are met.

In most of those systems, larger-scale programs more or less require you to write rules that keep track of subgoals. So you get rules like the following (in OPS83 syntax), where many productions exist just to keep track of which subgoals will be relevant to proving something, and most productions have conditions which depend on knowing that the new thing produced will match a relevant subgoal.

(seeking 'grandfather ?x) :: (seeking 'parent ?x)

(seeking 'grandfather ?x) & ('parent ?x ?y) & ('father ?y ?z) :: ('grandfather ?x ?z)

(seeking 'grandparent ?x) & ('parent ?x ?y)::(seeking 'parent ?y)

(seeking 'parent ?x) & ('mother ?x ?y) :: ('parent ?x ?y)

(seeking 'parent ?x) & ('father ?x ?y) :: ('parent ?x ?y)

(seeking 'father ?x) & ('parent ?x ?y) & ('male ?y) :: ('father ?x ?y)

(seeking 'mother ?x) & ('parent ?x ?y) & ('female ?y) :: ('mother ?x ?y)

(seeking ?parent ?x) & ('mother ?x ?y) :: ('parent ?x ?y)

(seeking 'grandfather ?x) & ('parent ?x ?y) & ('parent ?y ?z) & ('male ?z) :: ('grandfather ?x ?z)

There are many ways such a system can come to its goal, and you may or may not need all of these rules -- but as long as all of the rules are at least consistent, they make no false productions and nothing is produced that doesn't contribute to the goal.
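A rough forward-chaining reading of that scheme, assuming propositional (ground) rules for brevity: among the productions whose conditions are met and whose conclusion is new, fire one at random. Because every production is consistent, any firing order reaches the same fixed point. The rule set mirrors a fragment of the OPS83 example above but is otherwise invented.

```python
# Forward-chaining sketch of the OPS-style idea: among the rules whose
# conditions are met, pick one at random; because the rules only add
# true, goal-relevant facts, any firing order reaches the goal.

import random

# (conditions, production) pairs, propositional for brevity.
rules = [
    ({("seeking", "grandfather", "x")},
     ("seeking", "parent", "x")),
    ({("seeking", "parent", "x"), ("father", "x", "y")},
     ("parent", "x", "y")),
    ({("seeking", "grandfather", "x"), ("parent", "x", "y"),
      ("father", "y", "z")},
     ("grandfather", "x", "z")),
]

def run(facts):
    facts = set(facts)
    while True:
        enabled = [p for (conds, p) in rules
                   if conds <= facts and p not in facts]
        if not enabled:
            return facts                  # fixed point reached
        facts.add(random.choice(enabled)) # any choice is sound

result = run({("seeking", "grandfather", "x"),
              ("father", "x", "y"), ("father", "y", "z")})
```

Termination follows because each firing adds a new fact from a finite set of possible productions.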

Examples?

Can you describe the domain/application? (Genuine interest here -- I wrote the core part of an app in Prolog recently.)
