LtU Forum

Is the "Getting started"-guide up-to-date?

The Getting Started thread is very educational, but as time passes more literature becomes available, and there may now be a more efficient way to digest programming language theory, category theory, programming language design, functional programming, and so on. I would love to see a curated list that describes a nice progression path from apprentice to journeyman and beyond, and that could still be community maintained, much like the Getting Started guide.

There could even be different suggested paths. I, for one, have written a lot of software, and my knowledge mostly concerns maintainability and software engineering approaches to distributing knowledge and building robust systems. I am very interested in the concepts this site covers, but I have great difficulty approaching them solely from a theoretical standpoint. I need application, and I find that only once one's knowledge moves beyond a certain point does it become possible to dream up one's own applications and sandbox experiments.

I am sure there are other individuals who approach these concepts from a more mathematical standpoint, but I cannot speak for them as a group, as I am not among them.

Kind regards,
Filip Allberg

Affine Types - Introductory reading

I have been exploring Rust as of late, which (I hear) has an affine type system. Are there any good introductory papers for familiarizing oneself with the subject on a theoretical basis? I would have an easier time starting from a practical mindset that interleaves the "necessary" theory, edging increasingly toward a purely theoretical view, but I realize that such material may not exist.
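For concreteness, here is the kind of behaviour I mean, as far as I understand it (a minimal sketch of my own, not taken from any paper): in Rust, a value of a non-Copy type may be used at most once, and the compiler rejects a second use; unlike in a linear type system, nothing forces the value to be used at all.

struct Ticket {
    seat: u32,
}

// Taking `Ticket` by value consumes (moves) it: an affine value may be
// used at most once.
fn redeem(t: Ticket) -> u32 {
    t.seat
}

fn main() {
    let ticket = Ticket { seat: 42 };
    let seat = redeem(ticket);        // first (and only) use of `ticket`
    println!("redeemed seat {}", seat);
    // redeem(ticket);                // rejected: `ticket` was already moved
}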

Kind regards,
Filip Allberg

LIVE 2016 @ ECOOP (CFP)

LIVE 2016 aims to bring together people who are interested in live programming. Live programming systems abandon the traditional edit-compile-run cycle in favor of fluid user experiences that encourage powerful new ways of “thinking to code” and enable programmers to see and understand their program executions. Programming today requires much mental effort with broken, stuttering feedback loops: programmers carefully plan their abstractions, simulating program execution in their heads; the computer is merely a receptacle for the resulting code and a means of executing that code. Live programming aims to create a tighter, more fluid feedback loop between the programmer and computer, allowing the computer to augment more of the programming process by, for example, allowing programmers to progressively mine abstractions from concrete examples and providing continuous feedback about how their code will execute. Meanwhile, under the radar of the PL community at large, a nascent community has formed around the related idea of “live coding”—live audiovisual performances which use computers and algorithms as instruments and include live audiences in their programming experiences.

We encourage short research papers, position papers, web essays, tool demonstrations (as videos), and performance proposals in areas such as:

  • Recent work in REPLs, language environments, code playgrounds, and interactive notebooks.
  • Live visual programming.
  • Programming by example.
  • Programming tools for creative experiences and interactive audiovisual performances.
  • Live programming as a learning aid.
  • Fluid debugging experiences.
  • Language design in support of the above.

Submissions will go through EasyChair. Papers and essays must be written in English and provided as PDF documents. As a recommendation, papers should be around 5 pages and videos should be 5-10 minutes in length; other non-paper submissions should consume no more than 30 minutes of a casual reader’s time. However, papers up to 10 pages and videos up to 20 minutes are also welcome. Video and non-paper submissions can be listed as URLs (e.g. to a web page, file locker, or streaming site) in the submission’s abstract. At the author’s discretion, workshop articles can be published using an institutional ISBN with full support for open access.

If you have any questions or trouble with submitting, please contact smcdirm@microsoft.com.

Why not C++?

There is a subset of C++ that is supposedly the bee's knees, according to Stroustrup, Sutter, and Dos Reis. Without wanting to start an opinionated flame war, I want to know whether you believe their argument holds enough water. I mean this both technically (if you stick to what they say, does it work?) and in terms of practicality (are humans able to do that?). To whatever extent you disagree, what are the viable alternatives?

I mean this both about what they say works (their approach using 'good' C++) and about what they say doesn't (cf. GC finalizers).

You can write C++ programs that are statically type safe and have no resource leaks. You can do that without loss of performance and without limiting C++’s expressive power. This model for type- and resource-safe C++ has been implemented using a combination of ISO standard C++ language facilities, static analysis, and a tiny support library (written in ISO standard C++). This supports the general thesis that garbage collection is neither necessary nor sufficient for quality software. This paper describes the techniques used to eliminate dangling pointers and to ensure resource safety. Other aspects – also necessary for safe and effective use of C++ – have conventional solutions so they are mentioned only briefly here.

The techniques and facilities presented are supported by the C++ Core Guidelines [Stroustrup, 2015] and enforced by a static analysis tool for those guidelines [Sutter, 2015].

Onward 2016 call

The Onward call for papers is on, with a submission deadline of April 1st this year. Please consider submitting your edgy PL ideas.

Onward! is looking for grand visions and new paradigms that could make a big difference in how we will one day build software. But Onward! is not looking for research-as-usual papers—conferences like OOPSLA are the place for that. Those conferences require rigorous validation such as theorems or empirical experiments, which are necessary for scientific progress, but which typically preclude discussion of early-stage ideas. Onward! papers must also supply some degree of validation because mere speculation is not a good basis for progress. However, Onward! accepts less rigorous methods of validation such as compelling arguments, exploratory implementations, and substantial examples. The use of worked-out examples to support new ideas is strongly encouraged.

Onward! is reaching out for constructive criticism of current software development technology and practices, and for ideas that could change the realm of software development. Experienced researchers, graduate students, practitioners, and anyone else dissatisfied with the state of our art are encouraged to share insights about how to reform software development.

Whither actual generality/customizability/flexibility?

I have yet to come across The Answer for really, truly doing the Open-Closed principle. "I'm sorry, Zaphod, I just don't believe a word of it." Be it OO or FP or whatever, in my experience, when new functionality, a new requirement, or a new realization comes along, there's nothing that can be done to prevent the inevitable need to crack open the bloody code and have at it.

Random example: Inheritance.

We have all experienced trying to plan ahead for change. Inevitably, when change comes, it is not the change we anticipated. We find that we have paid the price of generality and forward planning, but still have to modify the software to adapt to the change in requirements. Agile methodologies use this: they tell us to eschew generality, and instead make the software ready to embrace change by being changeable. Inheritance aids in this process by allowing us to write software in terms of the concrete, specific actions that the application needs now, and to abstract over them only when we know - because the change request is in our hands - that the abstraction is needed.
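For concreteness, a minimal sketch of that workflow (my own illustration, not from the quoted passage, and using a Rust trait where the quote talks about inheritance): write the concrete action the application needs now, and extract the abstraction only once the change request is in hand.

// Step 1: the concrete, specific action the application needs today.
fn notify_by_email(address: &str, message: &str) {
    println!("email to {}: {}", address, message);
}

// Step 2: a change request arrives (say, SMS notifications), so we abstract
// over the action we already have rather than guessing the abstraction up front.
trait Notifier {
    fn notify(&self, message: &str);
}

struct Email(String);
struct Sms(String);

impl Notifier for Email {
    fn notify(&self, message: &str) {
        notify_by_email(&self.0, message);
    }
}

impl Notifier for Sms {
    fn notify(&self, message: &str) {
        println!("sms to {}: {}", self.0, message);
    }
}

fn main() {
    let channels: Vec<Box<dyn Notifier>> = vec![
        Box::new(Email("user@example.com".into())),
        Box::new(Sms("+460000000000".into())),
    ];
    for channel in &channels {
        channel.notify("build finished");
    }
}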

Except that I haven't ever seen inheritance done right, such that there really isn't the long-term peril and Sturm und Drang that will come along with it, ball-and-chain style.

If you had to force rank the various ways people have tried to support something like Open-Closed, what would your list look like?

Learning two dialects simultaneously (CL/Scheme)

Hi, I am just getting started with Lisp, but I am already an experienced programmer, with some cursory experience of functional programming, and adept at picking up programming languages.

I was wondering if it is ill-advised to learn two dialects simultaneously, specifically Common Lisp and Scheme. I have some books on AI and interpreters, of which some use Scheme and others CL, and I would like to be able to work with books from both domains at the same time.

So, bad idea or not?

New Regexp

While working on a new programming metalanguage, I encountered a parsing problem. I wanted to use JavaScript RegExp to parse arbitrary tokens while parsing with recursive descent code. But there is a big "but" with JavaScript and RegExp: it is not possible to efficiently check whether a RegExp matches at a specific string offset. Slicing the string on every check (so that the match could be anchored at the beginning of the string) was not an option for me, so I decided to implement my own RegExp library. I followed some documentation and arrived at this implementation in my metalanguage:

RegExp <= (
    Union <= (@SimpleRE, '|', @RegExp) |
    SimpleRE <= (
        Concatenation <= (@BasicRE, @SimpleRE) |
        BasicRE <= (
            OneOrMore <= (@ElementaryRE, ('+?' | '+')) |
            ZeroOrMore <= (@ElementaryRE, ('*?' | '*')) |
            ZeroOrOne <= (@ElementaryRE, '?') |
            NumberedTimes <= (
                @ElementaryRE,
                '{',
                In <= (
                    Exactly <= @Integer |
                    AtLeast <= (@Integer, ',') |
                    AtLeastNotMore <= (@Integer, ',', @Integer)
                ),
                ('}?' | '}')
            ) |
            ElementaryRE <= (
                Group <= ('(', @RegExp, ')') |
                Any <= '.' |
                Eos <= '$' |
                Bos <= '^' |
                Char <= (
                    @NonMetaCharacter |
                    '\\', (
                        @MetaCharacter | 
                        't' | 'n' | 'r' | 'f' | 'd' | 'D' | 's' | 'S' | 'w' | 'W' | 
                        @OctDigit, @OctDigit, @OctDigit
                    )
                ) |
                Set <= (
                    PositiveSet <= ('[', @SetItems, ']') |
                    NegativeSet <= ('[^', @SetItems, ']')
                ) <~ (
                    SetItems <= (
                        SetItem <= (
                            Range <= (@Char, '-', @Char) |
                            @Char
                        ) |
                        @SetItem, @SetItems
                    )
                )
            )
        )
    )
)

This would work with some JavaScript back-end, but when I compared "union" to "set" in the original RegExp definition, I concluded that they are about the same thing: a choice of values decided at parse time. I didn't like this redundancy, so I decided to slightly change the definition of RegExp and develop my own version, which looks like this:

ChExp <= (
    Choice <= (@ConExp, '|', @ChExp) |
    ConExp <= (
        Concatenation <= (@WExp, @ConExp) |
        WExp <= (
            Without <= (@QExp, '!', @WExp) |
            QExp <= (
                OneOrMore <= (@GExp, ('+?' | '+')) |
                ZeroOrMore <= (@GExp, ('*?' | '*')) |
                ZeroOrOne <= (@GExp, '?') |
                NumberedTimes <= (@GExp, '{', @Integer, '}') |
                GExp <= (
                    Group <= ('(', @ChExp, ')') |
                    Exp <= (
                        Any <= '.' |
                        Range <= (@Char, '-', @Char) |
                        Char <= (
                            @NonMetaCharacter |
                            '\\', (
                                @MetaCharacter |
                                't' | 'n' | 'r' | 'f' |
                                '0x', @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit, @HEXDigit
                            )
                        )
                    )
                )
            )
        )
    )
)

So, what is the difference? I completely removed the Set notation (written [xyz]) because it can be replaced by the "Union" notation (x|y|z). Initially, one thing was missing from my new RegExp: the negative set ([^xyz]). To support this I decided to introduce a new operator, "Without" (!), which parses the expression to the left of '!' while excluding the expression to the right of '!'. So '[^xyz]' now becomes '.!(x|y|z)'. What is interesting is that the '!' operator makes it possible to exclude a whole character sequence from parsing, as in the example '(.*)!(keyword1|keyword2)', which parses any stream that is different from "keyword1" or "keyword2". Traditional RegExp supports only single-character exclusion through the negative set, so I observe a greater expressiveness in the new RegExp I designed. The new RegExp does not have the redundant set/union collision, and its definition looks a bit cleaner than traditional RegExp.
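To make the intended semantics concrete, here is a rough sketch of my own (not part of the metalanguage implementation; it assumes the third-party Rust regex crate) of what 'A!B' computes when applied to a whole token: the token must match A and must not match B.

use regex::Regex;

// without(token, a, b) corresponds to `A!B` applied to a whole token:
// the token is accepted by A and rejected by B.
fn without(token: &str, a: &Regex, b: &Regex) -> bool {
    a.is_match(token) && !b.is_match(token)
}

fn main() {
    // Emulates '(.*)!(keyword1|keyword2)': any token except the two keywords.
    let a = Regex::new(r"^.*$").unwrap();
    let b = Regex::new(r"^(?:keyword1|keyword2)$").unwrap();
    assert!(without("identifier", &a, &b));
    assert!(!without("keyword1", &a, &b));

    // Emulates '.!(x|y|z)', i.e. the old negative set [^xyz].
    let any_char = Regex::new(r"^.$").unwrap();
    let xyz = Regex::new(r"^(?:x|y|z)$").unwrap();
    assert!(without("a", &any_char, &xyz));
    assert!(!without("x", &any_char, &xyz));
}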

I have to say that the structured way of defining the grammar in my metalanguage probably saved me from the pitfalls the original authors of RegExp faced in the seventies, when they presumably used unstructured plain BNF to define RegExp.

So, what do you say? Is this new RegExp formulation worth abandoning a traditional standard that is widespread across the globe? Do you have any suggestions on my modified solution? Traditional RegExp has been around for decades; do you think it is time to modernize it?

Logic Programming with Failure as an Exception

I have been struggling with negation in logic programming for a while. There is a fundamental problem with negation because failure to find a proof is not the same thing as a disproof. This makes the common 'negation as failure' approach unsound.

I have looked at many alternatives, like negation as inconsistency and three- or four-valued logics, and whilst some of these have merit, they make both the language and the implementation much more complicated. This makes logic programming lose some of its elegance.

I considered for a while that negation was unnecessary, but that results in redundant code for simple examples like 'member' and 'not member'.

I think the fundamental problem is the confusion between the 'proof' and a boolean.

x.
:- x.
true.

This is misleading because we have proved 'x', but not really that 'x' is 'true'. Let's instead write:

x.
:- x.
proved.

So if we want to encode that x is true, we should rather write:

(x, true).
:- (x, X).
X = true, proved.

In this way we can write member, and negation of a boolean, in simple Horn clause logic, with only the addition of a disequality constraint, so that the order of clause execution is not significant:

not(true, false).
not(false, true).

member(X, cons(X, Tail), true).
member(X, nil, false).
member(X, cons(Y, Tail), Result) :-
    dif(X, Y),
    member(X, Tail, Result).

not_member(X, List, Result) :-
    member(X, List, IsMember),
    not(IsMember, Result).

This style is of course not as neat or easy to read as the default way that 'not' is used. The problem appears to be the confusion between the program as a proof and the return value. That something is proved or not proved is not the same thing as a function return value at all. So what if we instead treat 'failure' as an exception? We would then return a value from a Prolog clause, like a function. We could instead write:

not(true) -> false.
not(false) -> true.

member(X, cons(X, Tail)) -> true.
member(X, nil) -> false.
member(X, cons(Y, Tail)) -> dif(X, Y), member(X, Tail).

not_member(X, List) -> not(member(X, List)).

Hopefully it is obvious what is going on here. I have adopted a different symbol (-> instead of :-) to reinforce that we are returning a value, not the success/failure of the program. This keeps the language implementation simple and avoids the whole negation-of-proofs problem. I would argue that disproofs are impossible: you can only prove something false, hence the idea of a negation of a proof is meaningless. With failure as an exception we keep the nice syntax of negation in logic programs, but we also gain the ability to return more than true/false, moving the logic language a little bit towards functional programming.

Questions:

What do you think of this approach?

I haven't been able to find anything like this approach when searching for 'failure as an exception'. Has it been studied under a different name, or is there existing literature about a similar concept?

Edit: To clarify my use of the term "failure as an exception": I mean that, unlike in Prolog, where a clause can evaluate to true or false, failure is not returned at all; however, it does still trigger backtracking. The change is mostly syntactic sugar for Horn clause logic with an 'out'-mode final argument.

Edit2: changed "/=" to "dif" to match the usage in Prolog.

Andl is a New Database Language

Andl does what SQL does, but it is not SQL. Andl has been developed as a fully featured database programming language following the principles set out by Date and Darwen in The Third Manifesto. It includes a full implementation of the Relational Model published by E.F. Codd in 1970, an advanced extensible type system, database updates and other SQL-like capabilities in a novel and highly expressive syntax.

The intended role of Andl is to be the implementation language for the data model of an application. It is already possible to code the business model of an application in an SQL dialect, but few people do this because of limitations in SQL. Andl aims to provide a language free of these problems that works on all these platforms.

Andl can use SQLite as its backend. It can import data from any SQL source. It provides Thrift, REST, Web, and REPL APIs. It has a Workbench for convenient experimentation.

The web site is here. Recent posts are here.

Questions and feedback much appreciated.
