Why is there no widely accepted progress for 50 years?

From machine code to assembly, and from assembly to APL, LISP, Algol, Prolog, SQL, etc., there was a pretty big jump in productivity, and almost no one uses machine code or assembly anymore. However, there appears to have been almost no progress in languages for 50 years. Why is that? Are these the best possible languages? What is stopping the creation of languages that are regarded by everyone as a definite improvement over the current ones?

Functional Constructors in Theme-D
I redesigned the constructors in Theme-D version 2.0.0; see [1]. The design is inspired by the OCaml object system. Constructors are special procedures used to create instances (objects) of classes. They are not defined like other procedures; instead, Theme-D generates them from the construct statement and the field initializers of a class. The translator-generated default constructor is sufficient in many cases. For example, consider the class <complex> defined in the standard library:
(define-class <complex>
  (attributes immutable equal-by-value)
  (inheritance-access hidden)
  (fields
    (re <real> public hidden)
    (im <real> public hidden))
  (zero (make <complex> 0.0 0.0)))
The translator-generated default constructor takes two real arguments and assigns the first to the field re and the second to the field im.
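For readers unfamiliar with Theme-D, the translator-generated default constructor behaves roughly like an auto-generated field-wise initializer in other languages. A minimal Python sketch of the analogy (the class and field names mirror the Theme-D example, but this is not Theme-D syntax):

```python
from dataclasses import dataclass

# Rough analogue of the translator-generated default constructor:
# one positional argument per field, assigned in declaration order.
# frozen=True and eq=True loosely mirror the immutable and
# equal-by-value attributes of <complex>.
@dataclass(frozen=True, eq=True)
class Complex:
    re: float
    im: float

z = Complex(0.0, 0.0)  # analogous to (make <complex> 0.0 0.0)
```

The generated initializer simply copies each argument into the corresponding field, which is exactly what the Theme-D default constructor does for re and im.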
The following program defines the classes <widget> and <window>, the latter with an explicit constructor:
(define-class <widget>
  (fields
    (str-id <string> public module)))

(define-class <window>
  (superclass <widget>)
  (construct ((str-id1 <string>) (i-x11 <integer>) (i-y11 <integer>)
              (i-x21 <integer>) (i-y21 <integer>))
    (str-id1))
  (fields
    (i-x1 <integer> public module i-x11)
    (i-y1 <integer> public module i-y11)
    (i-x2 <integer> public module i-x21)
    (i-y2 <integer> public module i-y21)
    (i-width <integer> public module (+ (- i-x21 i-x11) 1))
    (i-height <integer> public module (+ (- i-y21 i-y11) 1))))
The constructor of class <window> passes its first argument str-id1 to the constructor of its superclass <widget>. The constructors also initialize the fields from their arguments. Note that a field initializer may be a more complex expression than just a copy of an argument variable.
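The same pattern of superclass delegation and computed field initializers can be sketched in Python for comparison (names mirror the Theme-D example; this is an analogy, not a translation):

```python
class Widget:
    def __init__(self, str_id):
        self.str_id = str_id

class Window(Widget):
    def __init__(self, str_id, x1, y1, x2, y2):
        # pass the first argument to the superclass constructor,
        # like the (str-id1) clause in the Theme-D construct statement
        super().__init__(str_id)
        self.x1, self.y1 = x1, y1
        self.x2, self.y2 = x2, y2
        # field initializers may be arbitrary expressions over the
        # constructor arguments, like i-width and i-height above
        self.width = (x2 - x1) + 1
        self.height = (y2 - y1) + 1

w = Window("main", 0, 0, 9, 4)  # width 10, height 5
```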
Here is the second example:
(define-class <widget>
  (construct ((str-id1 <string>)) () (nonpure))
  (fields
    (str-id <string> public module
      (begin
        (console-display "new widget: ")
        (console-display-line str-id1)
        str-id1))))

(define-class <window>
  (superclass <widget>)
  (construct ((str-id1 <string>) (i-x11 <integer>) (i-y11 <integer>)
              (i-x21 <integer>) (i-y21 <integer>))
    (str-id1) (nonpure))
  (fields
    (i-x1 <integer> public module i-x11)
    (i-y1 <integer> public module i-y11)
    (i-x2 <integer> public module i-x21)
    (i-y2 <integer> public module i-y21)
    (i-width <integer> public module (+ (- i-x21 i-x11) 1))
    (i-height <integer> public module (+ (- i-y21 i-y11) 1))))
Here we log calls to the constructor of <widget> to the console. Note that we have to declare the constructors as nonpure if they have side effects.
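The side-effecting constructor can again be sketched in Python; note that Python has no purity annotations, so there is no direct analogue of the nonpure declaration (names are illustrative):

```python
import io
import contextlib

class Widget:
    def __init__(self, str_id):
        # a side effect inside the constructor; in Theme-D this
        # forces the constructor to be declared (nonpure)
        print("new widget:", str_id)
        self.str_id = str_id

# capture the constructor's console output to demonstrate the effect
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    Widget("main-window")
```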
[1] Theme-D Homepage

By Tommi Höynälänmaa at 2020-03-08 16:58 | LtU Forum
Deterministic Concurrency

Toward a deterministic treatment of concurrency for the general case.

Various desired forms of reasonableness

A small study in the vagaries of Regular types in C++. How can we get better statically typed assurances around the nuances of such concepts? Which languages help the most? Things like Rust or ATS come to mind.

Bjarne Stroustrup interview on Youtube

Lex Fridman just interviewed Bjarne Stroustrup on YouTube. It's long, a bit over an hour and a half, but they talk extensively about C++ and its origins, evolution, and guiding principles.

By Ray Dillinger at 2019-11-08 23:31 | LtU Forum
Type Mapping in Source-To-Source Translation

I'm in the middle of writing an Object-Pascal-to-Java translator. One interesting aspect is the mapping between source and target types. Is anybody aware of literature about this topic: books, papers, etc.? I would be interested in things like rules for "widening" a target type over the source type; constraints that must hold on operations applied in the target language so that the result complies with the result in the source language; and maybe a formalism in which you could even show (prove) such properties, given some sets of types and operations. I do not aim to set things up this way, but would enjoy reading some theoretical material about what I'm doing. Is there serious research in the area of source translation, anyway?

Histogram: You have to know the past to understand the present

By Tomas Petricek, University of Kent
Rope

This is my very first post on this site. It is also my first post regarding my new PL idea: Rope.

First, an introduction to me (in a hopefully non-narcissistic way): I have no experience or education in designing programming languages. I get it, and I agree with it for the most part; why waste your time doing what has already been done? What progress would we make if everyone started from how to make fire and tried to make it to the moon? But there is something I don't agree with. (Not with Newton, but with a secondary application.) Should we all only blaze trails where a previous trailblazer left off? My personal experience has taught me that if you disagree, you'll learn more and, rarely, discover something amazing. It is with that mentality that I want to introduce "Rope."

Second, an introduction to Rope:
That's all I've got for now, but I'm sure I'll come up with more. The lofty goal behind the "language to create languages" philosophy is that, at the start, everyone tries to make their PL perfectly suited to its purpose. But as we've seen, languages keep expanding, and descendants get spawned as a result of either success or failure. So if we had to do it all over again, wouldn't we want a language that was intended to have descendants? Wouldn't it have been nice to have a language that lived for its children? That's the idea I want to explore. Oh, and as a final thought, the origin of the name is this:

NDArray/multi-columnar with efficient CRUD operations?

I'm on the hunt for material or implementations of NDArray and/or columnar structures with decent support for CRUD operations (insert, update, delete) that could work in-memory. I'm aware of kdb+, but my understanding is that it is only for big append-only loads followed by calculations. Currently, my own little relational language (http://tablam.org) has several structures backing its relations (BTreeMaps, Vectors, Table, Scalars). The main one is a table: https://bitbucket.org/tablam/tablam/src/41b3b0676d6062031998af48b683f8f0062bf279/core/src/types.rs#lines-221 and this is OK. But I wonder what other options I could explore. (The allure, for me, of using just an NDArray-like container is that I could reduce my implementations to just two: BTreeMap and NDArray.)

By mamcx at 2019-07-31 15:58 | LtU Forum
Session Types for Purely Functional Process Networks

Session types greatly augment purely functional programming. Session types enable pure functions to directly model process networks and effects. We can adapt session types to pure functions by first reorganizing function calls of form `…`.

Sequential session types conveniently represent that intermediate outputs are available before all inputs are provided. A simple reordering to `…`.

Sequential sessions already demonstrate a trivial model of interaction: the caller can observe the intermediate output `X` before computing inputs `B` and `C`. We can also model 'plain old structured data' types as unidirectional sessions, e.g. a type `…`.

Session type systems usually also support choice and recursion. We can adapt 'choice' to pure functions by assigning a choice-label to a choice parameter; this label determines which subset of choice-specific parameters we use. A simple example:

  type IF = &{ add:    ?x:int ?y:int !r:int
             | negate: ?x:int !r:int
             }

With this definition in scope, the session type `?method:IF` could represent an external choice of 'method'. The choice-label `add` or `negate` might be assigned to the implicit parameter `method.case`, and the label chosen determines whether we further use parameters `in method.add.x : int`, `in method.add.y : int`, and `out method.add.r : int`, or `in method.negate.x : int` and `out method.negate.r : int`. This is an exclusive choice, so a compiler could safely 'overlap' memory for these five parameters, similar to a C union. But unlike a conventional union or variant, the choice determines both inputs and outputs. Choice session types can conveniently model object-oriented interfaces or singular request-response interactions.

Aside: Session type systems distinguish external choice (&) from internal choice (⊕). In the adaptation to functional programming, whether a choice is external or internal is based on whether the choice parameter, like `method` above, is input or output. However, it is convenient to represent some choices from the 'external choice' perspective. Thus, the use of `&` above allows the type to be syntactic sugar for `…`.

Recursive session types can further augment our functions with unbounded trees or streams of interactions. Conceptually, they allow functions to have an unbounded set of parameters, each with a unique 'path' name. A demand-driven stream type might have the form `…`.

Use of recursive session types is similar to conventional functional programming with tree-structured data. A compiler or garbage collector can recycle memory for parameters that become irrelevant to further computation. Session types can represent many useful evaluation strategies such as call-by-need or bounded-buffer pushback. Intriguingly, session types can model 'algebraic effects' via recursive streams of request-response choice sessions.
Beyond sequencing, choice, and recursion, we can also extend functional programming with 'concurrent' sessions to represent partitioned data dependencies, for example with a function type `…`.

Session types give us a rich model for interaction with pure function calls. Implicitly, these interactions are between the 'call' and the 'caller'. Fortunately, it is not difficult for a session-typed functional programming language to support 'delegation' such that we tunnel handling of interactions to another function call. When we begin to delegate long-lived sessions (e.g. recursive streams) between functions, the program begins to take the form of a 'process network' where pure functions are the processes and delegation models the wiring between them. Session types and delegation for purely functional process networks subsume Kahn Process Networks (KPNs), which are limited to simple streams as the only interaction between processes. With session types, we can effectively model processes that rendezvous, coroutines, processes with clear bounds on input and output, and clear termination behavior. As a summary, session types for purely functional programming support:

Importantly, this does not compromise functional abstraction or functional purity, except insofar as unbounded interactions with functions are not what we usually imagine from the mathematical connotations of 'function'. I have not searched hard for prior art on the subject of session types exposing partial evaluation of pure functions as a basis for interaction and deterministic concurrency. I would not be surprised to discover that all this is known in obscure corners of academia. But to me, having recently 'discovered' this combination, it seems like one of those 'obvious in hindsight' features with an enormous return on investment, one which all new functional programming languages should be seriously pursuing.
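The process-network reading, with demand-driven streams wired between pure processes, can be loosely imitated with Python generators. This is only an analogy for the KPN-like special case, not the richer session-typed model (names are illustrative):

```python
def producer():
    # an unbounded output stream; values are produced only
    # when the downstream consumer demands them (call-by-need)
    n = 0
    while True:
        yield n
        n += 1

def doubler(upstream):
    # a 'process' wired to an upstream stream; delegation of the
    # stream session corresponds to passing the generator along
    for x in upstream:
        yield 2 * x

net = doubler(producer())                    # wire the network
first_five = [next(net) for _ in range(5)]   # demand five values
```

Generators capture the demand-driven evaluation and wiring, but not the static typing of interaction protocols, which is the part session types add.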