LtU Forum

Higher Order Functions Considered Unnecessary for Higher Order Programming

Joseph A. Goguen, Higher Order Functions Considered Unnecessary for Higher Order Programming (1987).

It is often claimed that the essence of functional programming is the use of functions as values, i.e., of higher order functions, and many interesting examples have been given showing the power of this approach. Unfortunately, the logic of higher order functions is difficult, and in particular, higher order unification is undecidable. Moreover (and closely related), higher order expressions are notoriously difficult for humans to read and write correctly. However, this paper shows that typical higher order programming examples can be captured with just first order functions, by the systematic use of parameterized modules, in a style that we call parameterized programming. This has the advantages that correctness proofs can be done entirely within first order logic, and that interpreters and compilers can be simpler and more efficient. Moreover, it is natural to impose semantic requirements on modules, and hence on functions. A more subtle point is that higher order logic does not always mix well with subsorts, which can nonetheless be very useful in functional programming by supporting the clean and rigorous treatment of partially defined functions, exceptions, overloading, multiple representation, and coercion. Although higher order logic cannot always be avoided in specification and verification, it should be avoided wherever possible, for the same reasons as in programming. This paper contains several examples, including one in hardware verification. An appendix shows how to extend standard equational logic with quantification over functions, and justifies a perhaps surprising technique for proving such equations using only ground term reduction.

This old paper proposes an interesting approach to formulating functional programs. But can it truly subsume all uses of higher order functions? I don't see where the paper addresses how uses of higher order functions can be replaced in general (not that I have a counterexample in mind).
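
For concreteness, the idea is to replace a function argument with a module parameter. Here is a loose sketch of the flavour, using a Haskell type class to stand in for an OBJ theory and class-constrained code to stand in for a parameterized module (only an approximation: OBJ theories also carry semantic requirements that a class declaration does not enforce):

-- A "theory": the interface (and, informally, the laws) the parameter must satisfy.
class Step a where
  step :: a -> a                  -- a first order function, never passed as a value

-- A parameterized "module": generic code written against the theory.
mapStep :: Step a => [a] -> [a]
mapStep []       = []
mapStep (x : xs) = step x : mapStep xs

-- An instantiation (a "view", in OBJ terms): supply an actual first order step.
instance Step Int where
  step = (+ 1)

example :: [Int]
example = mapStep [1, 2, 3]       -- [2,3,4]; no function is ever passed as an argument

The usual `map f` call becomes a choice of instantiation rather than a function argument, which is roughly the trade the paper proposes.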

Anyone familiar with the OBJ language? Do other languages share this notion of 'modules' and 'theories'?

Prior art for reifying lifecycle

Are there any prior examples of programming languages that expose the program processing lifecycle as a value or syntax element?

By lifecycle, I mean steps like those below, which many languages follow (though not necessarily in this order):

  1. lex: turn source files into tokens
  2. parse: parse tokens into trees
  3. gather: find more sources with external inputs
  4. link: resolve internal & external references
  5. macros: execute meta-programs and macros
  6. verify: check types, contracts, etc.
  7. compile: produce a form ready for loading
  8. run: load into a process that may be exposed to untrusted inputs

Does anyone have pointers to designs of languages that allow parts of the program to run at many of these stages *and* explicitly represent the lifecycle stage as a value or syntax element?
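
To make the question concrete, here is a rough hypothetical sketch (not any existing language; all names are made up) of what "the lifecycle stage as a value" might look like from a host language's point of view:

-- Lifecycle stages reified as ordinary values.
data Stage = Lex | Parse | Gather | Link | Macro | Verify | Compile | Run
  deriving (Eq, Ord, Enum, Bounded, Show)

-- A piece of program tagged with the latest stage at which it may execute.
data Staged a = Staged { noLaterThan :: Stage, payload :: a }

-- A loader could refuse to run anything tagged with an earlier stage than the
-- current one, e.g. reject eval-style code once untrusted input can reach the
-- process (the run stage).
runnableAt :: Stage -> Staged a -> Maybe a
runnableAt now (Staged latest x)
  | now <= latest = Just x
  | otherwise     = Nothing

The interesting part, of course, is a language in which such tags are enforced rather than maintained by convention.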

I'm aware of reified time in hardware description languages like Verilog and in event loop concurrent languages like JavaScript and E, but that's not what I'm after.

Background

I work in computer security engineering and run into arguments like "we can either ship code in dynamic languages whose security properties are hard to reason about, or not ship on time."

I'm experimenting with ways to enable features like those below without the exposure to security vulnerabilities, or the difficulty in bringing sound static analysis to bear, that often follows:

  • dynamic loading,
  • embedded DSLs,
  • dynamic code generation & eval,
  • dynamic linking,
  • dynamic type declaration and subtype relationships & partial type declarations,
  • powerful reflective APIs

I was hoping that, by allowing a high level of dynamism before untrusted inputs reach the system, I could satisfy most of the use cases that motivate "greater dynamism -> greater developer productivity" while still producing static systems that are less prone to unintended changes in behavior when exposed to crafted inputs.

I was also hoping that, by not having a single macros-run-now stage before runtime, I could support use cases that are difficult with hygienic macros, while still letting a module limit how many assumptions about the language another module might break, by reasoning about how early in the lifecycle it imports external modules.

The end goal would be to inform language design committees that maintain widely used languages.

cheers,
mike

Upward confluence in the interaction calculus

The lambda calculus is not upward confluent; counterexamples have been known for a long time. Now, what about the interaction calculus? Specifically, I am looking for configurations c1 and c2 that have the same normal form but for which there is no c such that c →* c1 and c →* c2.
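
For concreteness, upward confluence of a reduction relation is confluence of its inverse:

\[
\forall c_1,\, c_2,\, d.\ (c_1 \to^* d \wedge c_2 \to^* d) \Rightarrow \exists c.\ (c \to^* c_1 \wedge c \to^* c_2),
\]

so a counterexample is a pair of configurations with a common reduct (for instance a common normal form) but no common ancestor.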

Update: a necessary and sufficient condition for strong upward confluence is discussed in arXiv:1806.07275v3, which also shows that the condition is not necessary for upward confluence, by proving upward confluence for the interaction system of the linear lambda calculus.

New DSL for security

Hello, I thought I'd share a new DSL by Endgame for querying security logs: https://www.endgame.com/blog/technical-blog/introducing-event-query-language

It is meant to help reason about security events, and is best illustrated by this example:

What files were created by non-system users, first ran as a non-system process, and later ran as a system-level process within an hour?

sequence with maxspan=1h
  [file where event_subtype_full=="file_create_event" and user_name!="SYSTEM"] by file_path
  [process where user_name!="SYSTEM"] by process_path
  [process where user_name=="SYSTEM"] by process_path

While I can easily see how this could be expressed as SQL instead, and perhaps the backends do exactly that, I think it helps analysts think about logic rather than data.

I think there is a lot of improvement to be had in languages that help reason about (time) series, and this is a welcome addition to the DSL family.

I have a problem with arguments passed as non-evaluated expressions

Ever since I learned about Kernel I have been very excited: the idea of explicit evaluation seemed very cool, giving much more power to the programmer than the standard "pass evaluated arguments" strategy (1: this statement can be argued with; 2: there have been numerous posts here at LtU and on other blogs about potential drawbacks).
Then I learned about the Io language, which also seems to embrace the idea: when a caller sends a message to a target, the arguments are passed as expressions, not values, enabling the full range of custom "control" messages, macros, etc.
This is when it hit me: although the idea sounds very cool, there is something wrong with it. Most likely my mind is stuck in an endless loop of dubious reasoning that I can't get out of, so hopefully someone can clarify my concerns.
Let us break down an example, where in some context we have:

a := Number(1) ;ignore how these two lines are actually executed
b := Number(3) ;what's important is that the context has two number objects bound to the symbols "a" and "b"
someAdderPrimitiveObject pleaseDoAddtheseNumbers(a, b) toString print

So, the caller context asks someAdderPrimitiveObject to add the numbers a and b, and the arguments are passed simply as the symbols "a" and "b". No problem here, as far as we are concerned, because that same "someAdderPrimitiveObject" can ask the caller to send the actual values back.
Let's say we had defined "pleaseDoAddtheseNumbers" as something like

someAdderPrimitiveObject pleaseDoAddtheseNumbers := method(x, y, [body, whatever that is])

So, when the "pleaseDoAddtheseNumbers" method is invoked, the symbols "a" and "b" are bound to the symbols "x" and "y" in the environment of the method's activation record.
The method body would then try to do something like this:

valx := caller pleaseEvaluateForMe(x)
valy := caller pleaseEvaluateForMe(y)
[do something with these values, whatever]

This is where it gets problematic for me. The callee (the activation record of the "pleaseDoAddtheseNumbers" method) asks the caller (the original message sender) for the value of its argument, which is bound to the locally known symbol "x". In order to avoid an infinite ping-pong of "evaluate this for me" messages, the callee *has* to pass the *value* of its own symbol "x" (namely the symbol "a") back to the caller in order to ask for its value (in this case, some boxed number object 1).

As far as I've seen, the problem is solved at the interpreter level, where this kind of thing is handled "behind the scenes".
Does that mean that a system which never evaluates passed arguments cannot implement itself, because at some point you *have* to pass values in order for them to be operated upon?
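
To make that concrete, here is a toy sketch in Haskell (nothing to do with how Kernel or Io are actually implemented) of what "behind the scenes" looks like to me: operands travel as unevaluated expressions, but the primitive that finally performs the addition is handed plain values by the interpreter itself.

import qualified Data.Map as M

data Expr = Num Int | Sym String | App Expr [Expr]

type Env = M.Map String Value

data Value
  = VNum Int
  | VOp ([Expr] -> Env -> Value)   -- receives operands unevaluated, plus the caller's environment

eval :: Env -> Expr -> Value
eval _   (Num n)      = VNum n
eval env (Sym s)      = env M.! s
eval env (App f args) =
  case eval env f of
    VOp op -> op args env          -- operands are passed on as expressions
    _      -> error "not applicable"

-- The adder asks the interpreter to evaluate its operands in the caller's
-- environment; only at this point are real values, not expressions, produced.
adder :: Value
adder = VOp (\args callerEnv ->
  case map (eval callerEnv) args of
    [VNum a, VNum b] -> VNum (a + b)
    _                -> error "adder expects two numbers")

demo :: Value                      -- evaluates to VNum 4
demo = eval env (App (Sym "add") [Sym "a", Sym "b"])
  where env = M.fromList [("a", VNum 1), ("b", VNum 3), ("add", adder)]

The regress stops here only because eval and the adder live in the implementation language rather than in the object language, which is exactly what bothers me.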

Sorry if this is a mess, I hope someone understands it :)

C++ fun


error: value of type 'std::__1::__map_const_iterator<std::__1::__tree_const_iterator<std::__1::__value_type<float, TStrongObjectPtr<UObject> >, std::__1::__tree_node<std::__1::__value_type<float, TStrongObjectPtr<UObject> >, void *> *, long> >' is not contextually convertible to 'bool'

Proof system for learning basic algebra

I've written a system for learning basic algebra that has you solve problems by clicking on the rules of algebra (commutativity, associativity, etc.). It is impossible to make mistakes, because the machine checks things like correct handling of division by zero. Each step is justified, so each solution serves as a formal proof. Right now it handles algebra up to quadratic equations and simultaneous equations.
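
For example, a short solution might be recorded like this, with every step carrying its justification (an illustrative derivation, not a verbatim transcript from the site):

\begin{align*}
2x + 3 &= 7 && \text{given}\\
2x &= 4 && \text{subtract 3 from both sides}\\
x &= 2 && \text{divide both sides by 2, allowed because } 2 \neq 0
\end{align*}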

It is intended to be a computer proof system anyone can learn. The style of mathematics presented mimics equation solving on a blackboard, so it should be familiar to everyone.

https://algebra.sympathyforthemachine.com/

Type Bombs

People like blowing stuff up; explosions are great! I started wondering about type bombs: small expressions which have large type terms. Is there anything known about that, in particular for Haskell/ML, since these are popular?

The small bomb I know of employs `dup` and is relatively well known. I.e., take `dup`:

dup x = (x,x)

Repeated composition gives back a type term which grows exponentially in the size of the expression.

(dup . dup . dup . dup . dup) 0

(It depends on the type checker, of course, whether the type actually is represented exponentially.)
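
For reference, the inferred types are (up to renaming of type variables, and ignoring the Num constraint introduced by 0):

dup             :: a -> (a, a)
dup . dup       :: a -> ((a, a), (a, a))
dup . dup . dup :: a -> (((a, a), (a, a)), ((a, a), (a, a)))
-- n compositions: 2^n occurrences of 'a' in the result type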

You can lift `dup` further, but can you grow faster? As a first approximation I tried `let t = (\f -> f . f) in let d = (\x -> (x,x)) in (t.t.t.t) d 0`, which doesn't type check in ghci. But I thought I would ask you, people.
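
For what it's worth, one construction that does grow faster is the let-polymorphism bomb familiar from the ML type-inference complexity results (Mairson 1990): because each let-bound function is generalized, every binding squares the size of the inferred type, so n short lines yield a type with 2^(2^n) leaves. A sketch that does type check:

tybomb =
  let d  x = (x, x)      -- d  :: a -> (a, a)                2 leaves
      f1 x = d  (d  x)   -- f1 :: a -> ((a, a), (a, a))      4 leaves
      f2 x = f1 (f1 x)   --                                 16 leaves
      f3 x = f2 (f2 x)   --                                256 leaves
      f4 x = f3 (f3 x)   --                              65536 leaves
  in f4 ()

The `(\f -> f . f)` attempt fails because a lambda-bound `f` is monomorphic: `f . f` forces `f`'s argument and result types to be equal, and instantiating `f` to `dup` then needs the infinite type a = (a, a). The let bindings above are generalized at every step, which is what lets the type square each time.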

What is the fastest growing type term you can come up with? Or what term will keep a type checker really, really busy?

The Heron Programming Language

I wanted to share some progress on a new statically typed functional language, called Heron, that I have been working on. There is a demo of some 3D graphics at https://cdiggins.github.io/heron-language/.

Some of you might remember earlier projects, also called Heron, that I posted many years ago, but this language is quite a departure from that earlier work. This new language is designed for numerical and array processing (e.g. 3D and 2D graphics). I'm very interested in any and all feedback. I'm also looking for opportunities to collaborate.

Thanks in advance!

terminology for scope of discourse, i.e. CS-domain

I noticed a lack of good terminology to discuss mixed content intended for different uses -- or merely for alternate focus -- depending on 1) what sort of tool might work with it, or 2) what kind of activity or analysis is in scope. The idea I want means something like "asset class" but is also covered by "domain" in its general sense, except many people accept only one sense of a word upon first hearing. (Insert a comedy dialog here: a physicist mentions a magnetic domain in an iron object, only to be blind-sided by a troll who explains internet domains do not apply in that context.)

Typically people put different asset classes in different files, and say "this file's content is for that tool", but never talk about what they have in common besides that they are "files". Some tools might want to perform cross-domain analysis, or code generation. But it's hard to describe without enumerating "now we do X" and "now we do Y"; and the relation of X and Y is idiosyncratic and specific, rather than being generally alternate asset class domains.

The term "aspect" in aspect-oriented-computing is related, except no one will know what you mean if you use aspect as a synonym for domain. If you take content from different asset class domains, and mix them together in one file (with related things near each other for easier reference), with each one scoped suitably via syntax of some kind, there isn't a good word that denotes what is inside each scope. In one you have "stuff about X", and in another you have "stuff about Y", but stuff is pretty vague. You can put them in different logical files in an archive, but a problem of crummy terminology remains.

For example, suppose in one scope you have a bunch of code that "does task X", and in another scope you put a description of sequence diagrams that ought to result from that, so a tool can generate test code or perform analysis for verification. (A typical reason to do this might be that one asset is an emergent result of another asset class, and very hard to see unless explicitly described.) In a more general case, you might want to characterize what several sorts of program are doing in a larger system.

What I'm looking for is terminology about the semantics of expressing different things, perhaps in different languages, and making statements or queries about their relationships in space, time, or causal interactions. (The obvious response is "don't do that"; in a comedy dialog, one party can ask "why do you want to do that?" while pushing either focus on X or focus on Y, with no concern about how they relate -- it's someone else's problem.) Probably each domain has its own type system, but that doesn't seem very helpful here.
