Usefulness of constants

Hello, there.

This actually is my first post here, but I've been reading LtU for about a year.

Background: I am implementing a small, general-purpose procedural language, mainly to gain some knowledge and insight into the subject. I am using ANTLR for parser/lexer generation (Java-ish), and - don't laugh - PHP as the output, mainly because I feel rather familiar with it, and also because I intend to use this language for the web, served by a webserver. So far I have done the basics: overall structure, function declaration and compile-time argument checking, and implementation of general language statements (if, for, etc.).

Question: What are the reasons for constants (like define("SOME_CONST", "My Fancy Value"); in PHP) in languages that do not compile directly to machine code? For my language, it would seem logical to use parameterless functions returning constant values instead of adding another "subsystem" for constants. Not that I am too lazy to do it - I am just wondering if there are any other reasons, aside from a potential speed-up. Am I missing some point?
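To make the idea concrete, here is a rough sketch (in Scala-ish notation, purely as a stand-in for my own syntax; the names are made up) of what I mean by using a parameterless function where other languages would use a constant:

object Config {
  // a parameterless function standing in for a named constant
  def SOME_CONST: String = "My Fancy Value"
}

object Demo {
  def main(args: Array[String]): Unit =
    println(Config.SOME_CONST)  // reads like a constant at the use site
}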

Thank You,
Krists

I think you're right that

I think you're right that you won't lose any power using functions instead of constants.

Note: Some languages have bindings and values as fundamental concepts. So defining a constant 'x = 10' and defining a function 'f = x -> x*x' are both using the same binding mechanism, but the latter bound value happens to be a function.
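A minimal Scala sketch of that (using the same names as above): both definitions use the same binding form; the second just happens to bind a function value.

object Bindings {
  val x = 10                 // a name bound to a plain value
  val f = (n: Int) => n * n  // the same binding form; the bound value is a function
  val y = f(x)               // y == 100
}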

Constants are pretty useful

Constants like 1 and 2 and... "hello world". The question, from a consistency standpoint, is whether these too are functions. And if they are not functions, should we be able to name them?

literals vs. named

literals vs. named constants...

You are right that constants

You are right that constants can be modeled as functions. But I think you are probably wrong about why constants are used in other languages, and what their advantages are. What exactly do you have in mind when you write "What are the reasons for constants... in languages that do not compile directly to machine code?"?

By my understanding,

By my understanding, languages that are generally compiled directly to machine code (actual CPU instructions) tend to be used for tasks where maximum speed is more (or most) important, so constants are useful for them: in source they allow extra abstraction, and they are implemented as the defined values themselves, or as pointers to them. Accessing those could take quite a bit (an order of magnitude, in the worst case) less time than executing some function that would return the same value or pointer. On the other hand, in languages that are (semi-?)interpreted (albeit nowadays I doubt a strict line can be drawn between "interpreted" and "compiled"), actual access times for a constant or a call would be more nearly equal, hence the question about usefulness.

A bit off topic: Isn't my English too cluttered? It is not my first language and I am not very well versed in writing in it.

close to perfect

I'd have guessed English was your first language. You write it better than the average American speaker on tech subjects. Your verb forms and phrasing are very good and natural (better than most English speakers manage in tech subjects with long sentences). Other than a couple of typos, the last post looks about perfect.

By the way, I'm interested in this discussion of constants, but don't have much to add right now.

I thought this is what you

I thought this is what you had in mind. I think your analysis is based on some problematic assumptions. But I'll just concentrate on the most general issues for now.

1. You can expect any decent compiler to be able to optimize and inline calls to simple functions such as these, so x=4 and x=f() with f={return 4} should lead to identical machine code.

2. Interpreters are more likely to be limited in their whole-program analysis and optimization (though this is an over-generalization), so in fact the cost of function calls in this scenario will be greater than in the compiled-code case. Since interpreted implementations are usually slower, and since speed is always an issue (even when performance isn't the main concern), the cost of your approach here is probably going to be more noticeable and the pressure to avoid it greater.

The more general issues are as follows: (1) Compiled languages are not used only when "maximum speed" is important. Moreover, when speed is important, the cost of function calls is usually marginal, especially if we restrict our attention to the few places in which named constants are used. Not all systems have real-time constraints. (2) The crucial issue is whether you (and the compiler) can rely on the fact that the function will always return the same value. This is crucial for optimization, as well as for reasoning about the code. Whether you know this ultimately depends on the semantics of the language (pure/impure), type system issues, etc.
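To illustrate point (2), here is a hedged Scala sketch (the names are invented): the first nullary function can be treated as a constant by both the reader and the compiler; the second cannot, because it depends on mutable state.

object Reliance {
  // always returns the same value: safe to reason about (and inline) as a constant
  def maxRetries: Int = 4

  private var counter = 0
  // returns a different value on each call: nullary, but not a constant
  def nextId(): Int = { counter += 1; counter }
}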

Impure languages

In an impure language there may be a difference if constants are promised to only be evaluated once and functions (procedures) are promised to be evaluated with each use. Side effects make the difference visible.

Quite right, of course. But

Quite right, of course. But if you model constants as functions, you are guaranteed that these functions are free of side-effects.

Two quick answers

1) Reader comprehension: Defining MaximumLoginAttempts (or MAXIMUM_LOGIN_ATTEMPTS if you're accustomed to curly-braces conventions) as 2 and then using that name in the remainder of the code (instead of the literal 2) makes the code more self-explanatory. Defining it as a constant (instead of a variable) adds documentary value at the point of definition and also protects the value from thoughtless modification during use. (*)

2) Configuration management: Grouping a related set of such definitions gives you a way to accomplish single-point-of-control for your system. Modifying one line gives your users three login attempts, and avoids the risk that a global replacement of 2 with 3 will change some twos being used for another purpose, with disastrous results (e.g. breaking your quick-and-dirty binary search ;-). A small sketch of both points follows below.

(*) Extra credit for anyone who remembers which language was notorious in its early days for allowing a program to redefine the meaning of a literal 2.
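A minimal Scala sketch of both points (the object and method names are invented for illustration):

object Policy {
  // single point of control: change 2 to 3 here and every use follows
  val MaximumLoginAttempts = 2
}

object Login {
  // self-explanatory at the use site, unlike a bare literal 2
  def lockedOut(attemptsSoFar: Int): Boolean =
    attemptsSoFar >= Policy.MaximumLoginAttempts
}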

um, question is not about

um, question is not about whether constant-ish things (some "name" mapped to a "constant value") are useful, but whether (and why) they are needed as a separate component of the language.

anyways - about "Reader comprehension" - in my language (same goes for most other languages I am familiar with), function names can be "all caps and underscores" - MAXIMUM_LOGIN_ATTEMPTS().
and on "Configuration management" - again - there is no reason why I could not define all configuration-related functions in one place and use them throughout my program.

Oops!

My apologies for being too hasty and missing the point. The same logic I (mis)applied before is still relevant.

Unifying "const" values with zero-argument functions lets you change the complexity of a policy (e.g. MaximumLoginAttempts) without consequence to the usage sites, which seems a good thing. (I think this is independent of whether the language implementation is performance-obssessed or not. ;-)

FWIW, Scala also follows this Uniform Access Principle.

Scala doesn't unify (quite)

Scala is a worthy model with regard to making access to value members and function members look the same, even at the generated-code level, but I wouldn't say it unifies values with nullary functions.

object Constants {
  // the right-hand side runs once, when the object is initialized
  val valConstant = {println("evaluating a value"); 42}
  // the body runs on every call
  def funcConstant = {println("evaluating a function"); 42}
}

The expression for valConstant will only be evaluated once - your console will only see "evaluating a value" once. The expression for funcConstant will be evaluated every time funcConstant is called in your program execution - you'll see "evaluating a function" in the console arbitrarily many times.

Which was the point I was trying to make above (perhaps unsuccessfully).

Scala unifies field access and getter/setter functions.

I'm not sure if this is what Joel was referring to, but Scala does unify direct field access and getter/setter functions. Field access is compiled down to a function call to a trivial "getter" method. If you want to provide a fancier method later, you can do so without breaking ABI compatibility.
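A hedged sketch of that in Scala (the class is hypothetical): the "before" version stores celsius as a plain field, the "after" version computes it, and call sites like t.celsius are unchanged either way because both compile to a call to a method named celsius.

// Before: celsius was a plain stored field; reads compile to the generated getter.
// class Temperature(val celsius: Double)

// After: celsius becomes a computed method; existing uses of t.celsius still work.
class Temperature(val fahrenheit: Double) {
  def celsius: Double = (fahrenheit - 32) / 1.8
}

object Use {
  val t = new Temperature(212.0)
  val c = t.celsius  // same syntax whether celsius is a val or a def
}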

Sure, but...

You're quite right that access to "fields" is via automatically generated methods.

But if I understand the OP's question, it was "why would a language need a distinct concept (syntactically or semantically) for 'constants'". Scala certainly does, and hopefully my code showed why. Also worth noting is that Scala has "lazy vals", which are basically memoized nullary functions. These are still distinct from ordinary nullary functions, which are evaluated on each use, and from "vals", which are evaluated at object construction.
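A small sketch of that three-way distinction, in the same style as the Constants object above:

object Evaluation {
  val eager     = { println("val: evaluated once, at object construction"); 1 }
  lazy val memo = { println("lazy val: evaluated once, on first use"); 2 }
  def each      = { println("def: evaluated on every use"); 3 }
}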

Logic vs Lambda

Most presentations of logic that I have seen tend to just define a set of function symbols, each of which has an associated arity (number of arguments it accepts). In this view, a constant is then simply a function of zero arity (distinct from a function that takes a single unit argument). This is different from the view in lambda calculus based languages, in which all functions take a single argument, and multiple arguments are handled by either tuples or currying. Here, functions are just one type of value (assuming a typed LC), and binding values to names is handled separately. "Constant" is therefore a property of the mapping from names/symbols to values in these languages. You could implement constant-like functions that take a unit argument (e.g. PI()), but in a language with side-effects that will be semantically different to the usual notion of "constant".
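A rough Scala illustration of the three readings (the names are invented): a name bound to a value, an arity-zero function symbol, and a function of a single unit argument.

object Pies {
  val Pi = 3.14159                        // "constant" as a property of the name-to-value binding
  def pi0: Double = 3.14159               // plays the role of an arity-zero function symbol
  def piUnit(u: Unit): Double = 3.14159   // a one-argument (unit) function, lambda-calculus style
}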

Usefulness of variables

If you are considering unifying constants with functions, I suppose that you regard them as different from regular variables. Is that because your language is pure (no variables)?

No, my language is not pure.

No, my language is not pure. Out of curiosity I just looked up "pure procedural language", and it appears that "pure" for procedural languages (and mine is procedural) means there are no functions per se (and no arguments, which implies only one variable scope, the global one) - it's just GOSUBs and RETURNs all the way to the bottom. I intend to have my functions.
The main reason I am considering "unifying constants with functions" (i.e., not implementing constants at all) is that I am not sure it's worth doing, for simplicity's sake.

Re: No, my language is not pure.

I did not know about the notion of a purely procedural language. I used that term in the sense it would be used when speaking of a pure functional language.

My question was really, why don't you unify constants with variables instead of functions?

Perl's constants are functions

As you suggest, Perl's constants are implemented as nullary functions. This works syntactically because Perl doesn't require parentheses to invoke a function, so

   $a = myFunction();

can be written as

   $a = myFunction;

Both call myFunction (possibly inlined) and return the result. Thus, you can write something like:

   $a = MY_CONSTANT;

which actually calls a function, but lets you easily pretend it doesn't.

How much linguistic inspiration you want to take from Perl is up to you. ;)