## Type system based on epistemic modal logic?

This is something I’ve been mulling over lately, and I’m curious to hear opinions on it and whether there’s any existing research (I couldn’t find any).

With type systems based on linear logic receiving more attention lately, in Rust and in the forthcoming linear types extension for Haskell, I’ve been looking around at different logics to see whether they would make a useful basis for type system features in a programming language. I was aware of modal logics, but hadn’t really looked into them since I read “From Löb’s Theorem to Spreadsheets” a few years ago.

I came across epistemic modal logic, a logic for reasoning about knowledge. That struck me as potentially very useful for distributed data stores, in which different agents (servers) know different things (have different data), and also know things about the things that they and other agents know.

For instance, if a client asks a server for some data, the server may be aware that it doesn’t have the data in question, by the “negative introspection axiom” ¬Kiφ ⇒ Ki¬Kiφ: if an agent (i) does not know (¬Ki) some fact (φ), then that agent knows (Ki) that it does not know the fact (¬Kiφ). However, the server may know that some other agent does know the fact, and can fetch it accordingly, or arrange for the other agent to send it directly to the client.
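To make the axiom concrete, here is a small Kripke-model sketch (all names are hypothetical, chosen for this illustration) that checks negative introspection over a toy S5 model in which one agent cannot distinguish two worlds:

```python
# Toy Kripke model for checking the negative introspection axiom.
# All names here are hypothetical, for illustration only.

# Worlds each agent considers possible from each world. For S5 (where
# negative introspection is valid) this must be an equivalence relation.
ACCESS = {
    1: {0: {0, 1}, 1: {0, 1}, 2: {2}},  # agent 1 can't tell worlds 0 and 1 apart
    2: {0: {0}, 1: {1}, 2: {2}},        # agent 2 has perfect information
}

def knows(agent, phi, world):
    """K_i(phi) at w: phi holds in every world the agent considers possible."""
    return all(phi(v) for v in ACCESS[agent][world])

# The fact phi: "the datum is stored" -- true only in world 0.
phi = lambda w: w == 0

def negative_introspection(agent, world):
    """not K_i(phi)  implies  K_i(not K_i(phi))."""
    if not knows(agent, phi, world):
        return knows(agent, lambda v: not knows(agent, phi, v), world)
    return True

# The axiom holds at every world, for every agent, in this S5 model.
print(all(negative_introspection(i, w) for i in (1, 2) for w in (0, 1, 2)))
# prints True
```

Note that agent 1 fails to know φ even in world 0 (where φ is true), because it cannot rule out world 1; the axiom says it at least knows about that ignorance.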

This could incorporate other modalities such as possibility (◊), necessity (□), “everyone knows” (E), “common knowledge” (C), and “distributed knowledge” (D). Just brainstorming, I feel like a type system based on these ideas might be able to enforce things like:

• You can only make a synchronous request for some data from a server if that server necessarily has the data and can therefore respond immediately. Pseudocode:

```
syncRequest (fact : FactID, server : ServerID) : Fact
  requires necessarily (server knows fact)
```
• After you send some data to a server, you know that it possibly knows the data and that you can ask for it asynchronously.

```
send (fact : FactID, server : ServerID) : ()
  ensures possibly (server knows fact)
```
• Any part of a message (metadata or data) can be lost, and the server can recover or negotiate repair to achieve eventual consistency. (I’m envisioning a sort of Erlang-style “fire and forget” message passing system, for example over UDP.)

• Constraints on server & client knowledge to express tradeoffs between consistency and availability.
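For illustration, the two contracts above could be approximated with runtime checks. This is only a sketch under invented names (the `Client` class, its belief sets, and the method names are all made up); the real aim would be to discharge these obligations statically in the type system:

```python
# Runtime-checked sketch of the two contracts above. The Client class,
# its belief sets, and the method names are all invented for
# illustration; a real design would enforce these statically.

class Client:
    def __init__(self, store):
        self.store = store     # server_id -> {fact_id: value}, stand-in for the network
        self.known = set()     # (server, fact): we know the server necessarily has it
        self.possible = set()  # (server, fact): the server possibly has it

    def sync_request(self, fact, server):
        # requires necessarily (server knows fact)
        assert (server, fact) in self.known, "no proof the server knows this fact"
        return self.store[server][fact]

    def send(self, fact, value, server):
        self.store[server][fact] = value  # sketch: delivery modeled as immediate
        # ensures possibly (server knows fact) -- fire-and-forget delivery
        # is not guaranteed, so we only learn "possibly", not "necessarily".
        self.possible.add((server, fact))

client = Client({"s1": {"f1": 42}})
client.known.add(("s1", "f1"))  # e.g. established by an earlier acknowledgement
print(client.sync_request("f1", "s1"))  # prints 42
```

The asymmetry between the two belief sets is the point: a send only ever upgrades your knowledge to “possibly”, so a synchronous request after a bare send would be rejected.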

I’m just riffing informally here. Let me know your thoughts, or if you can point me to any resources. :)

### Solid idea

I think this is a good idea. It looks like the more-or-less predictable next step in type theory: lots of it was about ‘propositional’ interpretations, then came the quantifiers, so it seems reasonable to expect that the next wave of ideas will come from modal logic.

### Here is the basic challenge

Epistemic modal logic is most useful in military applications, where Java rules. Think drones coordinating without communicating (e.g., when signal jamming prevents direct communication, so they rely instead on pre-defined inference rules about what the others must know). Good luck.

### Temporal modalities too.

My feeling is that for any useful distributed systems work, you’d need both knowledge and temporal logic operators. Since achieving simultaneous coordination or true common knowledge is impossible in an asynchronous system, the type system should allow for a notion of eventual common knowledge.
That is, you should be able to say something like E(voted_yes) ⇒ ◊Commit, where ◊ here is the temporal “eventually”.

### Time modalities are useful but not needed

From what I’ve been able to find in the literature, you can arrive at eventual common knowledge without an explicit notion of time if you have a protocol for public announcement. Coordination within a fixed time bound is impossible, but in practice it looks like you can safely assume common knowledge under some reasonable restrictions that I’d been using anyway.

For instance, in my current implementation of these ideas, I have reliable asynchronous channels, so if a message is sent then it will eventually be received; authenticated agents, so you can trust their assertions; and monotonically growing sets of facts, so the assumption of common knowledge will never be contradicted.
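As a toy illustration of how those three restrictions interact, here is a small simulation (all class and function names are hypothetical, not from my actual implementation): a public announcement over reliable channels, landing in grow-only fact sets, produces a state that no later message can contradict:

```python
# Toy simulation of the three restrictions: reliable asynchronous
# channels, trusted assertions, and grow-only fact sets. All class and
# function names are hypothetical.
import collections

class Agent:
    def __init__(self, name):
        self.name = name
        self.facts = set()  # grow-only: facts are added, never retracted

class Network:
    def __init__(self, agents):
        self.agents = agents
        self.in_flight = collections.deque()

    def announce(self, fact):
        """Public announcement: one asynchronous message per agent."""
        for agent in self.agents:
            self.in_flight.append((agent, fact))

    def deliver_all(self):
        """Reliability assumption: every sent message is eventually received."""
        while self.in_flight:
            agent, fact = self.in_flight.popleft()
            agent.facts.add(fact)

agents = [Agent("a"), Agent("b"), Agent("c")]
net = Network(agents)
net.announce("voted_yes")
net.deliver_all()
# Once delivery completes, every agent holds the fact, and since fact
# sets only grow, assuming (eventual) common knowledge of it is safe.
print(all("voted_yes" in agent.facts for agent in agents))  # prints True
```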

The authentication bit has been interesting to work on. I’m making this system with the notion of trust and compromise built in, so you can express things like “I know that the private key for agent A was compromised a week ago, so invalidate all knowledge that was derived from the assumption that A was trustworthy since then, roll back all the reversible actions that were taken based on that knowledge, give me a list of all the irreversible actions, and tell me exactly which facts were potentially leaked”.
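One way to make that query expressible is to record, on every derived fact, which agent’s trustworthiness its derivation assumed and a logical timestamp; a compromise report then partitions facts and actions mechanically. A minimal sketch, with all names invented for illustration:

```python
# Hypothetical sketch: every derived fact records which agent's
# trustworthiness it assumed and a logical timestamp, so a compromise
# report can be computed mechanically. All names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    name: str
    trusted_agent: str  # whose honesty deriving this fact assumed
    derived_at: int     # logical timestamp of the derivation

@dataclass(frozen=True)
class Action:
    fact: Fact          # the fact this action was based on
    reversible: bool

def compromise(facts, actions, agent, since):
    """Agent's key was compromised at `since`: partition the fallout."""
    bad = {f for f in facts if f.trusted_agent == agent and f.derived_at >= since}
    roll_back = [a for a in actions if a.fact in bad and a.reversible]
    irreversible = [a for a in actions if a.fact in bad and not a.reversible]
    leaked = sorted(f.name for f in bad)
    return facts - bad, roll_back, irreversible, leaked

x = Fact("x", "A", 5)  # derived after the compromise: invalidated
y = Fact("y", "A", 1)  # derived before the compromise: still good
z = Fact("z", "B", 6)  # never depended on A at all
actions = [Action(x, True), Action(x, False)]
kept, roll_back, irreversible, leaked = compromise({x, y, z}, actions, "A", since=3)
print(kept == {y, z}, len(roll_back), len(irreversible), leaked)
# prints: True 1 1 ['x']
```

A real system would presumably track a whole derivation DAG rather than a single trusted agent per fact, but the partitioning step would look the same.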