Programming Languages for a Programmable World (what's on those tablets in Westworld, anyway?)

It occurs to me that, as our world becomes more programmable, we need better languages for programming the world around us. Perhaps we can start discussing what those languages, and the run-time environments behind them, might look like.

We're in the Anthropocene: human activity has become the dominant influence on climate and the environment. We shape the face of the planet, and with almost 7 billion of our 8 billion carrying smartphones, we have reached a kind of "singularity" - we have become globally connected rendering engines of an increasingly plastic reality. In a world of complex, self-organizing & adaptive systems, our dreams emerge into our collective subconscious (Facebook, Twitter, the evening news), and we proceed to make those dreams real through our speech & actions. So far, we've been letting our Ids run the show - perhaps because we simply don't have the language, or the processes, to be more deliberate about negotiating the world we want to live in.

The notion of a Holodeck has been expanded greatly in recent years - "Westworld," "Ready Player One," "Free Guy" - each inches us toward the idea of designing & scripting the world around us. Theme parks, "reality TV," LARPs, cons built around fictional universes, large-scale LVC (Live, Virtual, Constructive) military exercises... all bring us closer to the point where we can deliberately design & script the world around us, at the stroke of a key.

But we're still living in a world where the pen rules. Westworld shows us engineers reprogramming the world from their tablets. Parzival & Blue Shirt Guy pull up virtual consoles with super-user privileges. But, so far, the designs are conceptual - the GUIs are fictional, as is the code behind them. It's time we started developing those interfaces & the run-time environments behind them. The Internet is increasingly the SCADA system for our planetary systems; it needs a new generation of protocols & languages to match.

Shakespeare wrote, "All the world's a stage, and all the men and women merely players." John wrote, "In the beginning was the Word, and the Word was with God, and the Word was God." In the introduction to the "Whole Earth Catalog," Stewart Brand wrote, "We are as gods and might as well get good at it." Learning to be better gods starts with improving our vocabulary, grammar, and diction. We need better design & scripting languages for shaping the world around us. And then we can talk about Design & Engineering Processes.

We have MATLAB for systems modeling. We have business plans, program plans, contracts, budgets & schedules for building systems. We have "mission orders" for military campaigns. But the closer we get to the daily experience of life, the more nebulous our language becomes - we're back to natural languages that are not particularly useful for defining scenes, scenarios, characters, behaviors - for describing or scripting the stuff of reality. We use natural languages to write course catalogs & syllabi for universities, and conference programs for events. We write shooting scripts for movies. And Unity is simply not up to real-world tasks, like setting up an improv scene to be played out by avatars.

Playing Game Master is still an art, practiced by individuals. If we are to truly master our reality, to write our own scripts, and to live together as cast & crew in each others' games, we need better language for discussing & negotiating our roles & lines - rules of engagement & order for "reality jamming," if you will - ideally ones that let us all "Live Long & Prosper" in a sustainable world of "Infinite Diversity in Infinite Combinations." We seem to have gotten pretty good at setting up for a "Forever War" (or perhaps one ending in Mutually Assured Destruction). Now we need to get good at setting up for a "Million Year Picnic."

The question is... what do those languages look like? What's the computational model behind them? What's the run-time bridge between thought, rendering the scenery, setting up the action?

This is a community of language designers - perhaps we can start the discussion here. Your thoughts, please!

natural language or fail

Nothing will change all that meaningfully unless you can have AI-style natural language interaction, I feel.

natural language is not enough

I think natural language is not enough.
The good parts of natural language are:
1) you have an intelligent interlocutor who understands the context you're working in;
2) that interlocutor recognizes when you have misspoken, misunderstood, or are just wrong (I think current languages could be improved with a model of programmer error and intention);
3) you can add context as needed (this has been difficult to make secure in programming languages).

Natural language would need to be augmented with a system for showing the consequences of the choices you're making and for drawing attention to things that need more specification (or less).

I would like it if programs were represented as a series of independent choices you've made. I'm not sure of the best way to represent choices that aren't independent, or what makes something a choice rather than a background assumption.
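A minimal sketch of that idea, with all names hypothetical: a "program" as a list of `Choice` records, each recording the alternatives considered, the option selected (if any), and the earlier choices it assumes. A tool could then draw attention to the choices that are ready to be decided but still unspecified.

```python
from dataclasses import dataclass, field

@dataclass
class Choice:
    """One decision the programmer has made, or still owes."""
    name: str
    options: list                # the alternatives that were on the table
    selected: object = None      # None means not yet decided
    depends_on: list = field(default_factory=list)  # names of choices this one assumes

def ready_but_unspecified(program):
    """Choices with no selection whose dependencies are all decided -
    the natural places for a tool to ask for more specification."""
    decided = {c.name for c in program if c.selected is not None}
    return [c for c in program
            if c.selected is None and all(d in decided for d in c.depends_on)]

# Hypothetical example: only "cache" is ready to decide, since
# "replication" depends on a choice that hasn't been made yet.
program = [
    Choice("storage", ["sqlite", "postgres"], selected="sqlite"),
    Choice("cache", ["none", "lru"], depends_on=["storage"]),
    Choice("replication", ["single", "multi"], depends_on=["cache"]),
]

print([c.name for c in ready_but_unspecified(program)])  # → ['cache']
```

The `depends_on` links are one crude answer to the non-independence problem: they at least record which choices constrain which, even if they don't say how.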

accidental noise removed

accidental noise removed


Natural language? Okay, why not.

For critical apps? Hell, no - not yet. We've got plenty of other fruit to pick first, and some of it isn't hanging very low. Some of it is still pretty hard to even grasp (robustness, availability, correctness, proofs, etc.).

We still can barely understand each other in our own mother tongue.

I get shivers at the idea of having a machine, or a series of machines, do anything really critical on our behalf after just speaking to it.

At least with humans in the loop, you may find someone with enough IQ *and* empathy (or wise enough to avoid getting sued), instead of being sent to hell in a handbasket with a nonchalant punch line: "Task completed, thank you for your business."

Just imo.

When machines can think twice after remembering the feeling of pain that they have defined by and for themselves, maybe I'll change my mind about non-leisurely AI - that's all I'm saying. Till then, show me the code.



Dead programs

One aspect that jumps to mind is the gap between tweaking / tuning local aspects of the code and the expected global behaviour of the system. This is often surprising during the development cycle on small systems and will, presumably, be more surprising for large systems. The hardest part of debugging is the jump from "what I thought I expressed" to "what I actually defined," and it reminds me of Jack Rusher's Strange Loop talk, "Stop Writing Dead Programs."

Sounds good

Hey Simon! I've written to Tim Sweeney on Twitter about my Metaverse programming language and platform a few times, but don't get a reply. Probably because I'm from Ukraine. How did you manage that?

Your language looks interesting. The documentation is also a good example of how to do it right. How has LtU helped you?