Functions vs procedures

I've been reading up a bit on functional languages lately out of interest (I'm 99% a C/C++ programmer myself).

I was curious whether any of the multi-paradigm languages distinguish 'functions' (without side effects) from 'procedures' (with side effects) as a means of cleanly separating pure functional code from the rest... is this a common/useful concept or not?

Obviously in C++ I'm used to the idea of const pointers to get some sense of what gets modified, but there are always globals and overrides there...


monads, type and effect systems, linear types

Monads are often used to this effect. Loosely speaking, a Haskell function of type "a -> b" is pure, while one of type "a -> IO b" can perform I/O side effects. Type and effect systems let you annotate a term's type with the effects it may perform, in a little more detail. Linear types let you explicitly account for creating, copying, and destroying resources.
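
To make this concrete, here is a minimal Haskell sketch (the function names are my own, chosen for illustration); the types alone say which definitions may perform I/O when run:

double :: Int -> Int
double x = 2 * x        -- pure: same input, same output, no effects

greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)  -- may perform I/O when run

main :: IO ()
main = do
  print (double 21)  -- pure code, usable anywhere
  greet "world"      -- effectful, so it lives in IO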

Contradiction

Loosely speaking, a Haskell function of type "a -> b" is pure, while one of type "a -> IO b" can perform I/O side effects.

I hope I'm not the only person who sees a contradiction between these two statements.

Re: Contradiction

Well, if the first type is "forall a b. a -> b" then the function either doesn't terminate or violates purity. Assuming that "a" and "b" stand for some type, instead of a polymorphic type variable, then you have to state that "b" is not a monadic type. :-)

But Sameer did qualify that he was speaking loosely...

Nit picking

Assuming that "a" and "b" stand for some type, instead of a polymorphic type variable, then you have to state that "b" is not a monadic type. :-)

I think the proper caveat is that a function of -> type is always pure, but it can produce a side-effecting procedure descriptor of type 'IO b' as a value.
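
A small sketch of my own to illustrate: merely calling the function below prints nothing; it returns a descriptor that can be wired into main and executed any number of times.

-- mkGreeting is itself pure: equal arguments yield equal IO actions,
-- and calling it performs no I/O by itself.
mkGreeting :: String -> IO ()
mkGreeting name = putStrLn ("hi, " ++ name)

main :: IO ()
main = do
  let action = mkGreeting "Bob"  -- just a value; no output yet
  action                         -- executed once
  action                         -- executed again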

even for monadic types

you have to state that "b" is not a monadic type.

There are lots of monadic types. For instance take b to be Maybe Integer or [Bool] (or String) -- or even Identity Double. Could there really be something about the Monad interface that allows a type to break purity?

Yes, I'm nitpicking here -- and I believe it's an important nitpick, given all the confusion I see about monads. The Monad interface is just a type class among many type classes. It has nothing to do with purity.
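
For instance, here is a sketch of mine (using readMaybe from Text.Read) showing do-notation in the Maybe monad that is every bit as pure as ordinary arithmetic:

import Text.Read (readMaybe)

-- Monadic code, yet pure: no effects, and equal inputs always
-- produce equal results.
addParsed :: String -> String -> Maybe Integer
addParsed s t = do
  x <- readMaybe s
  y <- readMaybe t
  return (x + y)

main :: IO ()
main = print (addParsed "2" "40")  -- Just 42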

Yes, but that wasn't the point:

I wasn't speaking formally; I just wanted to give a quick, rough idea of how monads can be used, so the original poster could decide whether that's something they'd like to look up on their own.

I prefer

I prefer 'functions are pure transforms of domain to range'. Being 'pure', they may neither cause nor observe side-effects outside of computational costs (memory, delay, energy costs). And 'procedures are imperative descriptions of behavior to execute'. In this view, procedures don't have arguments, but might have replies, explicit delays, or exceptions. But you could have a function of type 'Domain->!Range' in my usual notation, with !Range being 'procedure replying with range'.

Since you mention C++, Boost follows this notion a bit in that its functor library allows you to bind arguments to a functor independently of executing it, i.e., pass all the arguments and get back a zero-argument thunk which may then be executed any number of times.

Not that my preferences mean much. See a recent discussion on the futility of trying to distinguish (or impose a distinction between) 'functional' and 'procedural' by any particular metric or quality, at least in any global sense. I do allow my preferences to influence the vocabulary of 'function' vs. 'procedure' as I use the terms in a programming language I'm developing, but I'm under no illusion: I'm doing naught more than adding to the overall confusion.

Definitions

To emphasize dmbarbour's point about definitions, another way that "function" and "procedure" have been used is that a function is a platonic ideal. It's purely a relationship on sets. A procedure, on the other hand, is a means to compute the right side of a tuple in a relationship given the left side. By those criteria, these two Haskell definitions define two different procedures for computing the same function.

even1 x = x `mod` 2 == 0

even2 = not . odd 
        where odd x = x `mod` 2 == 1

The Scheme community in particular seems to use "procedure" this way.

Pascal, on the other hand, says that if something is only used for side effects and does not compute a value (i.e. the equivalent of a "function" that returns void in C) then it is called a "procedure" where a "function" must return a value but may have side effects along the way. You might find a few old C hackers who borrow that terminology.

Programmers in impure functional languages and most imperative languages don't tend to make either distinction, and just call everything a function (or perhaps method :-) ).

To avoid that mess, most programmers use phrases like "pure function" vs. "impure function" to make the distinction.

Interference per Reynolds

John Reynolds' work on the design of programming languages has emphasised issues around side effects; several of the papers recommended in the Reynoldsfest story are concerned with them.

One such issue is that certain liberalisations of programming language semantics (in Gedanken, the idea that the references involved in assignment statements can arise from very general computations) can put certain constructions at risk of ghastly misinterpretations; this is the phenomenon Reynolds calls interference, and it is closely related to aliasing. There's a nice summary of this in the opening sections of Syntactic control of interference revisited, which provides an alternative, linear-type-based approach to avoiding difficult-to-understand constructions to the one Reynolds provided in his Syntactic control of interference.

Reynolds' Idealised Algol eliminates interference by separating side-effect-free functions from side-effectful procedures; Reynolds proposed a non-toy language, Forsythe, based on similar principles and intersection types, but I don't think it came to anything.

Interference and argument order

The syntactic criteria for the possibility of interference that I mentioned can also be used to provide an alternative solution to the language design issue that seems to agitate a few LtU readers.

Specifically, there are good algorithms to detect the possibility of interference that can be repurposed to detect function definitions whose semantics could change if arguments were reordered. For LtU readers, it is the work of a moment to contrive stipulations on implementations, suitable for a language reference, that would effectively deter or prevent programmers from writing such ill-considered code.

Interesting replies. If I've

Interesting replies.
If I've understood correctly, Erlang (with its pure functions + messages) would be a language which mixes FP with side effects but with a rigorous separation between them?

I do like the idea of monads (I must read more...)

It's work in games that has had us thinking about ideas from the FP world, and of course there's the likes of Tim Sweeney stating that future machines will require a parallel language... all fascinating stuff. Larrabee is hugely exciting (GPU power fully generalized...)

After going through all the mess of threading & optimizing, it does sound like an appealing prospect to have a pure *uncluttered* description of what a program is supposed to *do*, and then put in memory/thread allocation 'hints' to map efficiently to hardware (threads, unrolled-branchless-inorder-ILP wide-SIMD)... WITHOUT obfuscating meaning :)

Purely academic speculation... it would be interesting to know if this ideal is theoretically possible, and what such a language would resemble given the options & concepts already out there.

Any discussions here related to this subject? (I'm new to this forum.)

Ada

Ada has a built-in distinction between functions and procedures, where functions return values but cannot accept reference (out or in out) parameters, whereas procedures return no value but can accept them.

However, it is always possible to change state from a function, thus the compiler does not make use of this fact.

Unwise

It's silly for a language to make a distinction that is ultimately meaningless to its interpretation or execution. Why introduce complexity for both the programmer and the parser without a significant return on investment?

If functions were restricted from modifying state, then at least one could perform a great deal more partial-evaluation and link-time optimization. Further, it'd be easy for programmers to know the effects of composing black-box functions (i.e. from a library).
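
A toy instance of the point, in a Haskell sketch of my own: because f is pure, repeated calls can be shared (by the compiler or the programmer) without changing the program's meaning.

f :: Int -> Int
f x = x * x + 1   -- pure, so calls with equal arguments are interchangeable

main :: IO ()
main = do
  print (f 10 + f 10)            -- two calls...
  print (let y = f 10 in y + y)  -- ...or one; the answer is the same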

Perhaps

At least with Ada you can know exactly what the language designers had in mind:

8.4 Function Subprograms

The purpose of a function is to calculate a value. This is the
conventional mathematical meaning of a function. Small functions to
access complex data structures are an essential feature of structured
programming: Not only do they hide irrelevant parts of the data
structure but they provide a cleaner interface to the outside world.

Although the mathematical origin of the function concept is clear, its
incorporation into a programming language can lead to several
different formulations depending on what operations are allowed on
variables. Different levels of restriction can be considered, leading
to different concepts of function:

(1) Reading global variables is not allowed.

(2) Reading global variables is allowed but updating them
is not.

(3) Reading and updating global variables is allowed.

The first level corresponds to the mathematical notion of function;
there are no implicit parameters in the guise of global variables, and
two function calls with the same arguments always deliver the same
result. However the class of cases in which such functions can be used
is rather limited.

The second level has interesting mathematical properties that can be
used for code optimization. For example, if F is such a function then
for evaluation of an expression such as

F + F

the function need only be called once. However this kind of function
would not be allowed to perform input-output (since this is a side-
effect), and instrumentation (by update of a global counter upon each
call) would not be possible.

The third level allows functions such as random number generators or
memo functions, which modify the global environment. Such functions do
not have the aforementioned properties. If for example RANDOM is such
a function, then 2*RANDOM is not necessarily equal to RANDOM + RANDOM.

In an earlier version of Ada - the Green language - we attempted to
provide a formulation of functions that corresponds to the second
level, but experimenting with this concept has shown that this would
exclude many benevolent side-effects. For example, it led to the
imposition of limitations on access variables (since the invocation of
an allocator is a kind of side-effect). Furthermore, checking for
functionality could require reconsideration of the text of separately
compiled compilation units.

These conceptual and implementation difficulties led to the present
more pragmatic definition, which corresponds to the third level.

The only limitation imposed in Ada on functions is that the mode of
all parameters must be in: it would not be logical to allow in out or
out parameters for functions in a language that excludes nested
assignments within an expression.

This means that optimization of expressions such as F + F will be
achieved only when the compiler can conclude that there are no side-
effects that matter.

For multiple calls of functions within an expression, Ada follows an
approach of collaterality as described in section 3.8. This means that
the language does not define in which order to call F, G, and H in an
expression such as

F + G + H

The language rules state that this evaluation must be done in some
order - that is, not in parallel - but this order is not defined by
the language, so that the meaning of a program for which this order
matters is not defined.

This semantics reflects a pragmatic view of side-effects, once
expressed by Brian Higman [Hi 63]:

The plain fact of the matter is (1) that side-effects are sometimes
necessary, and (2) programmers who are irresponsible enough to
introduce side-effects unnecessarily will soon lose the confidence of
their colleagues, and rightly so.

The Ada '83 Rationale, section 8.4

Thanks for the reference.

Thanks for the reference. (I believe the justification a flawed one, but it's nice to see it spelled out.)

Bah! The worst of all worlds!

This means that
the language does not define in which order to call F, G, and H in an
expression such as

F + G + H

The language rules state that this evaluation must be done in some
order - that is, not in parallel - but this order is not defined by
the language, so that the meaning of a program for which this order
matters is not defined.

Translation: it was too much work to get the compiler to enforce safety, and since we are a committee, we couldn't make up our minds.

Thus, thanks to our indecision coupled with the incompetence of the average programmer... if you run a large program on a different compiler, or a different version of the same compiler, or with different optimization settings... expect different results.

Bah!

Scheme

Note that the exact same thing is true of the Scheme expression

(+ 1 2 3)

since order of evaluation is undefined.

Nope

The goal behind having an unspecified order is to allow the compiler writer to figure out an order of evaluation that is more efficient. Whether that's a good trade-off or not is certainly debatable, but its cause is not indecision on the part of a committee. Steele, Sussman, Ritchie, and Kernighan did not have to answer to committees when they designed their respective languages.

Wow! Optimization doesn't get any more premature than that!

Premature optimization all the way back at language design stage....

...and its fruits sure are evil.

There is another reason for

There is another reason for this. It is, of course, easy for the language to specify order of evaluation. But expressions that rely on order of evaluation are often confusing to readers, and are often the cause of bugs. By saying that the order of evaluation is unspecified, the language designers outlaw relying on order of evaluation.

That's like claiming that since division by zero

or null pointer dereference produce unspecified (or undefined) behavior--the language designers outlaw these things.

True in some sense, but they weaken the language.

One useful metric of a programming language (or its implementations) is the ability to reject ill-formed programs prior to execution. Languages which make it easy to write programs which are syntactically valid but semantically ill-formed tend to suffer for it.

Dynamic, not semantic

One useful metric of a programming language (or its implementations) is the ability to reject ill-formed programs prior to execution. Languages which make it easy to write programs which are syntactically valid but semantically ill-formed tend to suffer for it.

Surely you mean statically vs. dynamically?

In a large software project, having the compiler (or associated tools) validate as much code and properties as possible is the advantage you were talking about.

On the other hand, if the program is not syntactically or semantically valid the compiler has to reject it. Since Ada (or C, or Scheme) didn't want to require implementations to have an effect type system, the behaviour is unspecified, not forbidden.

That's exactly what the Ada '83 committee meant by:

... programmers who are irresponsible enough to
introduce side-effects unnecessarily will soon lose the confidence of
their colleagues, and rightly so.

The language doesn't stop you from doing it, in the same way that it doesn't stop you from dividing by zero, but your programs will become unpredictable.

Ohad is right, of course.

Ohad is right, of course. An important point to keep in mind is that language designers may want to discourage some types of correct and legal programs. Suppose we have strict left-to-right evaluation. Code that depends on it for correctness is well defined and legal. It is still bad style and a source of confusion, the language designers think. They discourage such style (in correct programs!), without designing an effect system etc., by making the order of evaluation undefined.

Kicking the dog...

Philosophically this is a questionable motivation, since it sort of punishes the user for the programmer's poor style...

Not really. It leads to the

Not really. It leads to the establishment of code review practices and static analysis tools. We shouldn't be myopic and think the language definition is all there is to the software development process.

The real test has to be empirical: is real-world Ada software (say) more likely to have this type of bug or not?

While code analyses (human and machine)

are a good thing--again, design decisions which increase the need for correctness analysis outside the implementation proper probably need to be justified. Performance might be a reasonable justification in many contexts, and much quality code is written in Ada--but the language definition is important.

OTOH, the extrapolation ad absurdum of your argument is that we ought to program in assembly language; for that gives ample justification and opportunity for productive code review. :) I know you aren't suggesting such a thing, of course.

Of course, many automatic code analysis tools can be integrated more tightly into language implementations (human code reviews are another matter). OTOH, many of the analyses performed by such beasts are time-consuming, and not something you want to happen every time you do a build.

Analysis inside the implementation proper

You can stipulate in the language standard that fully conformant compilers emit warnings. Or that code failing static tests must be syntactically tagged (e.g., "use typeUnsafeArgumentsWithEffects" declarations) to be valid.

Timeline

Of course, we must mention that effect type systems trace their origins to the late '80s and early '90s.

Unless I missed some references.

Once more

John C. Reynolds (1978). Syntactic Control of Interference. In Proc. 5th ACM POPL, pp. 39–48.

Google Scholar has 58 hits for ["syntactic control" affine]; no doubt each relates how Reynolds's system is essentially an affinely typed lambda calculus, how affine logic is a resource logic, and how this means affine type systems are effect-typing systems. I recommend Pottier (2007)'s Wandering through linear types, capabilities, and regions for directness.

Postscript — cf. The linear bestiary of François Pottier

I recommend Pottier (2007)'s

I recommend Pottier (2007)'s Wandering through linear types, capabilities, and regions for directness.

Worthy of a homepage story, methinks.

I agree, generally. But...

I agree, generally. But I still don't like that approach to specification, and I'm not sure it's what the C designers had in mind.

One rationale for undefined evaluation order is optimization opportunity. We've already discussed that. Another is ideological: code written that way is hard to understand.

Either way, you don't want people writing code that depends on evaluation order. But suppose you have no way to prevent it (without imposing overly onerous restrictions, e.g., all function arguments must be simple variable references). If your main concern is performance, you'll probably say, "Well, we need this optimization opportunity, so programmers will just need to be careful."

But if your main concern is ideology, then in effect you're saying, "Well, error-prone is the next best thing to illegal." And I think that's a very dangerous decision. As a matter of principle, if you think programmers shouldn't do something, but you can't prevent them from doing it, I think the only responsible thing is to make it work properly.

So I'll give the designers the benefit of the doubt and assume that their main rationale was performance rather than style.

Actually this is all a red herring.

The language designers can achieve the same effect by...

  1. Declaring that the result of evaluating an expression will be deterministic (i.e., the same value will result for the same expression, irrespective of compiler, compiler version, or optimization settings).
  2. The exact order should be specified in a technical appendix, with the explicit advice (a) not to rely on it and (b) to exclude the actual specified order from all user guides, references, and teaching material. (I.e., it is for compiler writers' eyes only.)
  3. Optimizers are free to reorder the evaluation of the expression, provided they can be sure the result will be the same.

Different issue

As we all know, optimising early can have nasty consequences. But optimising later can be good. And choosing early to lock yourself out of optimisations later can be a very bad design decision. And that's the reason why these various language designers designed things the way they did. There's a price to pay for this decision, but what comes without a price?

Couldn't the compiler

Couldn't the compiler produce an error when it detects an expression that implicitly depends on a particular evaluation order?

Effects?

Are you proposing to extend Scheme, C, and Ada with effects systems so compilers can tell the difference?

Would it really require

Would it really require something as elaborate as an effects system? Wouldn't it simply require a specification of evaluation order for certain constructs, i.e., let bindings are evaluated eagerly, and ruling out the composition of expressions whose evaluation order is unspecified, i.e., nested function application (although this sounds like a crude effects system, come to think of it)?

Crude

"Crude effects system" in the sense that you're basically assuming every function application might have effects so the programmer needs to explicitly order them :-)

I think that if you want to insist on let binding application results before calling another function then you might as well just specify that the language does strict left-to-right evaluation. Then the programmer can avoid the syntactic burden of binding a bunch of variables in the fairly common case that left-to-right order works.
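
Taken to its logical end, that "crude effects system" is roughly the Haskell arrangement. A sketch of my own: the monadic binds fix the order of the effectful steps, so there is no unspecified evaluation order left to rely on.

import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  let tick = do
        modifyIORef counter (+ 1)  -- an effectful "call"
        readIORef counter
  a <- tick   -- the binds fix the order: this runs first,
  b <- tick   -- then this; a is always 1 and b is always 2
  print (a + b)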

This distinction, of course,

This distinction, of course, was already part of Pascal.

Pascal and Call by Reference

I'll admit that many details of (Turbo) Pascal have faded from my mind, but the only difference I recall between procedures and functions is that functions returned a value, whereas procedures did not.

I thought functions accepted var arguments as well...

As was explained above, Ada

As was explained above, Ada functions can have side effects.

Oz

Oz has a somewhat interesting take on the issue--it doesn't distinguish between input and output parameters. A "return value" is simply a parameter to a procedure which the procedure expects to be unbound, so that it may bind the value. (Conversely, other parameters which the procedure expects to be bound are input parameters--a procedure which attempts to use an unbound parameter will block until someone else comes along and binds it.)

Oz does support traditional function call syntax, but this is syntactic sugar for the above.

My memory of these things is a bit hazy; as always, I recommend PVR's excellent book for more on the subject.
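
A rough Haskell analogue of my own, with MVars standing in (imperfectly) for Oz dataflow variables: the "result" is just an extra parameter that the procedure binds.

import Control.Concurrent.MVar

-- An Oz-style "procedure": the last parameter is an output the
-- procedure binds, rather than a conventional return value.
add :: Int -> Int -> MVar Int -> IO ()
add x y out = putMVar out (x + y)

main :: IO ()
main = do
  out <- newEmptyMVar   -- the initially unbound "result" variable
  add 2 3 out
  result <- takeMVar out
  print result          -- 5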

We discussed this issue

We discussed this issue several times in the past. Here is one discussion I managed to dig up.

Relevant paper in CCS last

Relevant paper in CCS last year: Verifiable Functional Purity in Java
(http://www.cs.berkeley.edu/~finifter/pure-ccs08.pdf)

The concept is useful. As mentioned, in a sense, monads are a heavy-handed way of achieving this. We have an intuition for what we want out of the backend in terms of purity for proving properties, but what I think is important about the above paper is that it gets at a fresh approach to a more convenient frontend. For example, we might imagine a gradual/soft typing style approach to this. Finally, it's useful to motivate the form of purity (which isn't necessarily obvious once you move beyond the canon): in this case, I think they were trying to verify some voting machine application written in Joe-E and found that it'd be a big help.

Pure functions in dodo

The language I am developing, dodo, makes the distinction. There is a but...

I would describe a "pure" function as a function which always returns the same output for the same inputs, without having any observable effects (other than the time spent, the heat generated, etc., to calculate a result). The compiler can optimize calls to a pure function by caching its results.

Dodo also allows functions that have observable effects, as long as they are granted the capability to do so. A capability is generally passed as a parameter, and the compiler knows about their types.

Finally, a method is what you call a procedure; it has no restrictions.
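
Dodo's actual syntax aside, the capability idea can be sketched in Haskell (all the names here are mine, purely for illustration): effectful operations are reachable only through a capability value that callers must pass in explicitly.

-- A capability for console output; code that is not handed a
-- ConsoleCap has no (disciplined) way to print.
newtype ConsoleCap = ConsoleCap ()

consoleWrite :: ConsoleCap -> String -> IO ()
consoleWrite _ = putStrLn

-- report can print only because the capability is a parameter.
report :: ConsoleCap -> Int -> IO ()
report cap n = consoleWrite cap ("result: " ++ show n)

main :: IO ()
main = report (ConsoleCap ()) 42  -- main mints and grants the capability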

Databases, SQL

This isn't really a general-purpose example, but in SQL Server (though, as I understand it, not in Postgres; I don't know about Oracle) there's a separation between pure and impure functions. If a function 'f' is pure, the database will allow you to persist the result 'g = f(a)' as a column, put it in an indexed view, etc. Impure functions are not accepted since they would corrupt the index.

Later on, the query optimizer may (or may not) automatically match an application of 'f' to 'a' with the precomputed 'g', and look up the cached value in an indexed view. So this behavior kind of looks like a 'compiler' automatically caching a pure expression. It's a pain to work like this, but this is what I'm currently doing at work to amortize the computation done in a 'select' over 'inserts', and so on.

Also, the implementation of all of this is a bit of a cop-out: user-defined functions explicitly state whether they're pure or not, and the database simply trusts this declaration with no further inference.

gcc function attributes.

gcc has a similar facility with function attributes.

Alas, as far as I know (correct me if I'm wrong), they are hints to the optimizer at sites that reference the function, rather than declarations that the compiler will enforce on the function's implementation.

Which is a pity, since they could provide some amazingly useful guarantees about function behaviour.

Here are the appropriate snippets of the gcc docs...

`const'
Many functions do not examine any values except their arguments,
and have no effects except the return value. Basically this is
just slightly more strict class than the `pure' attribute below,
since function is not allowed to read global memory.

Note that a function that has pointer arguments and examines the
data pointed to must _not_ be declared `const'. Likewise, a
function that calls a non-`const' function usually must not be
`const'. It does not make sense for a `const' function to return
`void'.

The attribute `const' is not implemented in GCC versions earlier
than 2.5. An alternative way to declare that a function has no
side effects, which works in the current version and in some older
versions, is as follows:

typedef int intfn ();

extern const intfn square;

`pure'
Many functions have no effects except the return value and their
return value depends only on the parameters and/or global
variables. Such a function can be subject to common subexpression
elimination and loop optimization just as an arithmetic operator
would be. These functions should be declared with the attribute
`pure'. For example,

int square (int) __attribute__ ((pure));

says that the hypothetical function `square' is safe to call fewer
times than the program says.

Some of common examples of pure functions are `strlen' or `memcmp'.
Interesting non-pure functions are functions with infinite loops
or those depending on volatile memory or other system resource,
that may change between two consecutive calls (such as `feof' in a
multithreading environment).

`noreturn'
A few standard library functions, such as `abort' and `exit',
cannot return. GCC knows this automatically. Some programs define
their own functions that never return. You can declare them
`noreturn' to tell the compiler this fact. For example,

void fatal () __attribute__ ((noreturn));

void
fatal (/* ... */)
{
/* ... */ /* Print error message. */ /* ... */
exit (1);
}

The `noreturn' keyword tells the compiler to assume that `fatal'
cannot return. It can then optimize without regard to what would
happen if `fatal' ever did return. This makes slightly better
code. More importantly, it helps avoid spurious warnings of
uninitialized variables.

The `noreturn' keyword does not affect the exceptional path when
that applies: a `noreturn'-marked function may still return to the
caller by throwing an exception or calling `longjmp'.

Do not assume that registers saved by the calling function are
restored before calling the `noreturn' function.

It does not make sense for a `noreturn' function to have a return
type other than `void'.

`malloc'
The `malloc' attribute is used to tell the compiler that a function
may be treated as if any non-`NULL' pointer it returns cannot
alias any other pointer valid when the function returns. This
will often improve optimization. Standard functions with this
property include `malloc' and `calloc'. `realloc'-like functions
have this property as long as the old pointer is never referred to
(including comparing it to the new pointer) after the function
returns a non-`NULL' value.

Related to Eiffel's non-void work discussed recently...
`nonnull (ARG-INDEX, ...)'
The `nonnull' attribute specifies that some function parameters
should be non-null pointers. For instance, the declaration:

extern void *
my_memcpy (void *dest, const void *src, size_t len)
__attribute__((nonnull (1, 2)));

causes the compiler to check that, in calls to `my_memcpy',
arguments DEST and SRC are non-null. If the compiler determines
that a null pointer is passed in an argument slot marked as
non-null, and the `-Wnonnull' option is enabled, a warning is
issued. The compiler may also choose to make optimizations based
on the knowledge that certain function arguments will not be null.

If no argument index list is given to the `nonnull' attribute, all
pointer arguments are marked as non-null. To illustrate, the
following declaration is equivalent to the previous example:

extern void *
my_memcpy (void *dest, const void *src, size_t len)
__attribute__((nonnull));