I was having a discussion with shelby3 about typing in our scripting language, and shelby3 made the following statement:
I am reasonably sure, as I can have an eager language, which is call-by-value, and have strict typing, no subtyping and no variance, and it will be sound.
Why do we both see this so differently?
Are there any cases where subtyped records are truly indispensable? Standard examples are cases like Point3D as a subtype of Point2D, but this isn't convincing.
Certainly Point2D corresponds to the (x,y) plane of Point3D, but that's only one of the three axis-aligned planes in 3D space. So while subtyping lets you program naturally in one 2D subset, chances are you'll need operations on arbitrary 2D planes, and so you probably won't ever use Point2D's functions in programs that use Point3D.
Other cases might be something like a CEO and an Engineer both subtyping a Person record type, but this seems like a domain modelling failure. CEOs and engineers aren't different kinds of people; they're both just people, but with different roles. So I'd argue a more natural expression would be a single Person record with a sum field: type Role = CEO | Engineer.
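For concreteness, here is a minimal sketch of that sum-field alternative (in Rust, with illustrative names): a single Person record whose role field is a sum type, so functions dispatch on the role rather than on a subtype hierarchy.

```rust
// A minimal sketch (illustrative names) of the sum-field alternative:
// one Person record with a Role field, mirroring `type Role = CEO | Engineer`.
enum Role {
    Ceo,
    Engineer,
}

struct Person {
    name: String,
    role: Role,
}

// Functions over Person match on the role field instead of relying on
// CEO and Engineer being distinct subtypes of Person.
fn title(p: &Person) -> &'static str {
    match p.role {
        Role::Ceo => "CEO",
        Role::Engineer => "Engineer",
    }
}

fn main() {
    let p = Person { name: "Ada".into(), role: Role::Engineer };
    println!("{}: {}", p.name, title(&p));
}
```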
So are there any truly indispensable use cases for record subtyping, where it's genuinely the most natural expression of the domain?
The idea is that it will be a small single-paradigm language, a hybrid of functional and imperative.
The key features for the language:
Additional features that would be good but not decided:
Here is a link to the discussions about the language: https://github.com/keean/zenscript/issues
All are welcome to come and comment and help shape the direction of the language. You don't have to contribute code; just helping with the design discussions would be appreciated (but please take note of the goals above).
Type checking after inference is pretty straightforward and typically not too expensive, but suppose we have a low-powered device that receives some mobile code. Is there some even cheaper proof that type checking will pass, so the device can avoid verifying the code's type safety itself?
I don't mean a cryptographic certificate that merely verifies the sender, since that requires trusting the sender, but a shortcut to verifying some meaningful safety property independent of any other parties. Proof-carrying code still requires verifying the code against the proof certificate, so I'm looking for something cheaper.
Edit: the following seems like a good overview of various bytecode formats and their safety properties as of 2007, also covering the tamper-proof bytecodes mentioned above: Intermediate Representations of Mobile Code.
Is anyone aware of concrete term rewriting systems in the form of programming languages? It seems to me a very promising programming paradigm, yet I have found no sign of it in my searches of the web; it isn't even mentioned on the Wikipedia page on programming paradigms. Are we looking at a yet-unexplored area of programming? In my opinion, term rewriting might be the way humans think at some higher abstraction level, above the neural networks native to the human brain. The essence of term rewriting seems so natural to me as a thinking framework for solving problems in math, physics, chemistry, or logic that I'm surprised there are no implementations of it in the form of programming languages.
Does anyone know of any implementations? And should term rewriting be recognised as a distinct programming paradigm?
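To make the paradigm concrete, here is a minimal, self-contained sketch of a term-rewriting evaluator (in Rust; the rules and names are illustrative): arithmetic terms are rewritten by rules such as x + 0 -> x until no rule applies, i.e. until a normal form is reached.

```rust
// A toy term-rewriting system (illustrative): terms are arithmetic
// expressions, and rules are applied repeatedly until normal form.
#[derive(Clone, Debug)]
enum Term {
    Num(i64),
    Add(Box<Term>, Box<Term>),
    Mul(Box<Term>, Box<Term>),
}

use Term::*;

// Try one rewrite step at the root; None means no rule matches here.
fn step(t: &Term) -> Option<Term> {
    match t {
        Add(a, b) => match (a.as_ref(), b.as_ref()) {
            (Num(0), x) | (x, Num(0)) => Some(x.clone()), // x + 0 -> x
            (Num(m), Num(n)) => Some(Num(m + n)),         // constant folding
            _ => None,
        },
        Mul(a, b) => match (a.as_ref(), b.as_ref()) {
            (Num(0), _) | (_, Num(0)) => Some(Num(0)),    // x * 0 -> 0
            (Num(1), x) | (x, Num(1)) => Some(x.clone()), // x * 1 -> x
            (Num(m), Num(n)) => Some(Num(m * n)),
            _ => None,
        },
        _ => None,
    }
}

// Normalise: rewrite subterms first, then the root, until nothing fires.
fn normalise(t: Term) -> Term {
    let t = match t {
        Add(a, b) => Add(Box::new(normalise(*a)), Box::new(normalise(*b))),
        Mul(a, b) => Mul(Box::new(normalise(*a)), Box::new(normalise(*b))),
        n => n,
    };
    match step(&t) {
        Some(t2) => normalise(t2),
        None => t,
    }
}

fn main() {
    // (1 * (2 + 0)) + 3 normalises to Num(5).
    let t = Add(
        Box::new(Mul(
            Box::new(Num(1)),
            Box::new(Add(Box::new(Num(2)), Box::new(Num(0)))),
        )),
        Box::new(Num(3)),
    );
    println!("{:?}", normalise(t));
}
```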
At this year's SPLASH there will be a follow-up to last year's NOOL 2015 workshop (discussed here on LtU). Objects may not be the trendiest thing in PL circles, but I think they have a bright future. (Example: session types are essentially concurrent objects in disguise, and they are quite trendy.)
Please consider submitting! (Disclaimer: I'm one of the organisers :)
@nntaleb tweeted a table from his book Antifragile listing several instances where academics claimed an academic origin for ideas that actually originated in industry.
I'm curious about the Wiener/Cybernetics example, since the dates don't seem to work. I'm also curious about examples and non-examples of "lecturing birds how to fly", i.e. cases where this misattribution has happened, as well as cases where ideas are believed to have originated in industry but in fact came from academia.
To those of you with math backgrounds...
A random conversation about math and machine learning led me to wonder: LISP and lambda calculus aside, is it reasonable to view programming languages as mathematical representations?
After all, a program represents a set of logical/mathematical relationships, be it a description, a model, or a series of operations, not unlike the way a set of differential equations can model a physical system (and be used to predict its behavior).
Are there fundamental differences, beyond notation and grammar, that distinguish a set of equations from a program?
I ask this partly out of intellectual curiosity, and partly because, when it comes to analyzing and modeling systems, my mind tends to think in terms of code rather than formulas. I wonder whether there's something fundamentally different about that thought process, or whether it's more akin to the difference between, say, differential and integral forms.
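As a toy illustration of the correspondence in question, here is a sketch (in Rust; the constants are arbitrary) of the differential equation dx/dt = -kx transcribed almost directly into a program that predicts behavior, alongside its closed-form solution.

```rust
// Toy illustration of the equations-vs-programs correspondence:
// dx/dt = -k x, integrated by forward Euler. Constants are arbitrary.
fn main() {
    let (k, dt) = (0.5_f64, 0.01_f64);
    let mut x = 1.0_f64; // initial condition x(0) = 1
    let mut t = 0.0_f64;
    while t < 2.0 {
        x += dt * (-k * x); // x_{n+1} = x_n + dt * f(x_n)
        t += dt;
    }
    // Compare against the closed-form solution x(t) = e^{-k t}.
    println!("euler: {x:.5}, exact: {:.5}", (-k * t).exp());
}
```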
Hi, I was re-reading an LtU blast from the past about FRP, and the discussions there prompted me to ask this community to post some concise updates on how FRP research has been going of late, in case any of you in that field have the free time.
My idea here is to introduce a place to discuss ideas about handling memory exhaustion, or related resource-limit management. The goal is to have something interesting and useful to talk about, informing future designs of programming language semantics or implementation. Thoughtful new solutions are more on topic than anecdotes about old problems. (Feel free to tell an anecdote if it is followed by analysis of a workable solution.) Funny failure stories are not very useful.
Worst-case scenarios are also of interest: situations that would be very hard to handle nicely, as test cases for evaluating planned solutions. For example, you might be able to think of an app that would behave badly under a given resource-management regime. This resembles thinking of counterexamples, but with emphasis on pathology instead of contradiction.
In another discussion topic, Keean Schupke argued that failure due to out-of-memory is an effect, while others suggested it is semantically more out-of-scope than an effect in some languages. I am less interested in what counts as an effect, and more interested in how to handle such problems. (The concept is on topic when the focus is what to do about it. Definitions without use cases seem adrift of practical concerns.)
Relevant questions include: After partial failure, what does code still running do about it? How is it presented semantically? Can failed things be cleaned up without poisoning the survivors afterward? How are monitoring, cleanup, and recovery done efficiently with predictable quality? How do PL semantics help a dev plan and organize system behavior after resource failures, especially memory?
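As one concrete illustration of the kind of answer being sought, here is a hedged sketch (in Rust, with illustrative names) of a design where exhaustion surfaces as an ordinary error value via fallible allocation (Vec::try_reserve), so the still-running caller decides how to degrade rather than the process aborting.

```rust
// Sketch: fallible allocation turns memory exhaustion into an ordinary
// error value that callers can handle, instead of an abort.
use std::collections::TryReserveError;

fn read_big_buffer(n: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    // try_reserve reports failure instead of aborting on OOM, so the
    // caller can degrade gracefully (shed load, free caches, retry).
    buf.try_reserve(n)?;
    buf.resize(n, 0);
    Ok(buf)
}

fn main() {
    match read_big_buffer(1 << 20) {
        Ok(buf) => println!("allocated {} bytes", buf.len()),
        Err(e) => eprintln!("allocation failed, degrading: {e}"),
    }
}
```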