High-Level Nondeterministic Abstractions in C++, L. Michel, A. See, and P. Van Hentenryck.
This paper presents high-level abstractions for nondeterministic search in C++ which provide the counterpart to advanced features found in recent constraint languages. The abstractions have several benefits: they explicitly highlight the nondeterministic nature of the code, provide a natural iterative style, simplify debugging, and are efficiently implementable using macros and continuations. Their efficiency is demonstrated by comparing their performance with the C++ library Gecode, both for programming search procedures and search engines.
Although implementing continuations in C(++) with setjmp/longjmp is not new, this paper shows a nice application of the technique.
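The underlying trick is easy to sketch in plain C++ (a minimal illustration of the setjmp/longjmp idea, not the paper's actual abstractions; the names try_values and fail are mine): setjmp records a choice point, and a later failure longjmps back to it to resume with the next alternative.

    #include <csetjmp>
    #include <cstdio>
    #include <cstdlib>

    static std::jmp_buf choice_point;   // the single pending choice point
    static int next_alternative = 0;    // which value to try next

    // returns successive values in [lo, hi]; each failure resumes here with the next one
    static int try_values(int lo, int hi) {
        int i = lo + next_alternative;
        if (i > hi) { std::puts("no solution"); std::exit(0); }
        return i;
    }

    static void fail() {
        ++next_alternative;            // move on to the next alternative...
        std::longjmp(choice_point, 1); // ...and jump back to the choice point
    }

    int main() {
        setjmp(choice_point);          // record the choice point
        int x = try_values(1, 5);      // nondeterministically pick x in 1..5
        if (x * x != 16) fail();       // constraint: x*x == 16
        std::printf("solution: x = %d\n", x);
    }

A real implementation also has to undo side effects and manage multiple nested choice points on backtracking, which is where the macro and continuation machinery described in the paper comes in.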
hi all,
I'm a newbie here but I assure you I'm not a troll. I was reading about Pluvo and then it struck me...
so you are bored and want to write a new language. what do you do first? plan a look and feel. well, different people come from different backgrounds and they don't like the same look and feel. but then, it basically means you write a lexer, a parser, and finally the interpreter. too much work. and it's not reusable by any other bored programmer. it's fun doing it again, of course. :)
I was thinking, what about having a rough lexical standard for interpreted languages? the point is that with a clear specification everyone can write a lexer/parser and then extend it. hopefully, the base will already be there.
of course, different languages are different...
but I think some guidelines are absolutely necessary. not for expert programmers, who can cope with anything; the objective is to reduce the learning time for students.
so here are some imaginary rules. each idea is independent of the others. if you like them, that would probably convince you that it is indeed useful to do something like this.
- source code is stored in XML files. there are four different types of text: documentation, annotations, code and comments. an IDE is a must to take full advantage of this formulation. the basic principle is that source code itself is valuable data. something like:
<source language = "imaginary" target = "2.1">
<documentation>
The classical "Hello World" program
</documentation>
<code>
<annotation visibility = "public" />
<comment>the exact syntax is not relevant here</comment>
say "Hello World"
</code>
</source>
</documenation>
too verbose for text editors, but not so for an IDE. parsing XML is easy. displaying different things in different colors is... well, up to you. but text editors like gedit, emacs, jEdit, and Notepad++ know what to do.
- the encoding is always Unicode. this basically means you can use mathematical symbols as part of your language. advantage? well, it's been like 40 years or so of ASCII-based programming, right?
- there should be an all-pervasive naming convention. either camelCase or, if the language permits, hyphen-separated-words. but not both.
- long names are preferred. so no more MVar or mapM_ or __init__; scoping rules apply instead. Monad.map looks prettier than mapM. scaring off uni students is not going to do any good.
- the languages should have a clear-cut way of communicating with the outside world. the names exported should not contain any eccentricities (should be pure ([alpha][alpha|num]*)).
- for variable names (or function names), the same name with different annotations should not have different meanings (I don't hate Perl, but I think @list and $list having completely different meanings is not healthy).
- there will be no way to write two different statements on the same line (like ;-separated ones in C). block structure is dictated by indentation. this suggestion brings up the controversial idea of 'good taste' again. I think once someone implements a very good, very well documented library for dealing with one particular standard like this, people will accept the standard just the way it is. maybe they will demand minor tweakings here and there, but not more than that. psychology is a really really mysterious thing.
things like these... the problem with interpreted languages is that they are very concise; in my humble opinion, too concise. of course, diving any deeper will cause so many differences of opinion that I should probably stop here.
Hello all!
Is there any free literature on CPL available online? More specifically, I am interested in
`D.W. Barron, J.N. Buxton, D.F. Hartley, E. Nixon, C. Strachey. The main features of CPL. The Computer Journal, Vol.6, #2, pp.134-143 (1963)'
and
`D.W. Barron, C. Strachey. Programming. In L. Fox, ed. Advances in Programming and Non-numerical Computation, pp.49-82. Pergamon Press, 1966'.
So far, the only source I was able to find is this volume in commemoration of Christopher Strachey, but even there they seem to have pulled the full texts already.
Thank you in advance,
Boyko
I've been doing a lot of OpenGL coding in C++ for my work, and it occurred to me that C++ is a much better language for making DSLs than I first thought. C macros (yes, I know, *shudder*), templates and inline functions, and operator overloading offer a pretty good degree of flexibility. But not all is well. From the C++ FAQ, http://www.parashift.com/c++-faq-lite/operator-overloading.html:
[13.7] Can I create a operator** for "to-the-power-of" operations?
Nope.
The names of, precedence of, associativity of, and arity of operators is fixed by the language. There is no operator** in C++, so you cannot create one for a class type.
If you're in doubt, consider that x ** y is the same as x * (*y) (in other words, the compiler assumes y is a pointer). Besides, operator overloading is just syntactic sugar for function calls. Although this particular syntactic sugar can be very sweet, it doesn't add anything fundamental. I suggest you overload pow(base,exponent) (a double precision version is in <cmath>).
By the way, operator^ can work for to-the-power-of, except it has the wrong precedence and associativity.
So maybe C++ is not that great for DSLs :) My immediate thought was, how does Common Lisp handle this, seeing as its macro facility is often cited as an excellent tool for DSLs? But then I realized that its prefix notation and s-expressions prevent associativity and precedence from ever being an issue. The order of evaluation is always explicit.
How do languages that don't use lisp's syntax handle this? Are there any that let you hand define the associativity and precedence? The latter seems like a particularly hard problem because different areas of the code could define their own unique operators, etc.
Another limitation not mentioned in the FAQ entry is that you can only overload operators when at least one of the operands is a user-defined type. So you can't change how primitive and standard library types interact at all.
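To make the precedence problem concrete, here is a minimal sketch (my own illustration, not from the FAQ) of a user-defined type that overloads operator^ as to-the-power-of. Because ^ keeps its built-in precedence and binds more loosely than +, the natural-looking expression parses the wrong way unless you parenthesize everything.

    #include <cmath>
    #include <iostream>

    struct Num {
        double v;
    };

    // operator^ as exponentiation; legal because one operand is user-defined
    Num operator^(Num base, Num exp) { return Num{std::pow(base.v, exp.v)}; }
    Num operator+(Num a, Num b)      { return Num{a.v + b.v}; }

    int main() {
        Num a{3}, b{4};
        // intended: a^2 + b^2 = 9 + 16 = 25
        // actual parse: (a ^ (2 + b)) ^ 2, because + binds tighter than ^
        // and ^ associates left to right
        Num wrong = a ^ Num{2} + b ^ Num{2};
        Num right = (a ^ Num{2}) + (b ^ Num{2});
        std::cout << wrong.v << " vs " << right.v << '\n';  // 531441 vs 25
    }

The code compiles and runs without complaint; the only hint that something went wrong is the answer, which is exactly the kind of silent trap you don't want in a DSL.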