Lambda the Ultimate - LtU Forum, Site Discussion
http://lambda-the-ultimate.org/taxonomy/term/1,2/0
Main Discussion Forum (en)
C runtime type info gimmick which supports scripting
http://lambda-the-ultimate.org/node/5442
<p >I am experimenting with runtime type information in C, so that values in memory have types (as Lisp and Smalltalk values do), and not just variables. This post is intended to be fun. (The idea is for you to enjoy this, and I don't want to argue. But I might respond to remarks with short dialogs in lieu of argument, since that would be fun.) The rest of this is explanation and some detail.</p>
<p >In an earlier post I described a kind of heap I call a vat. It has the effect of associating metainfo with every block of memory allocated, in a different position in memory. (There is a constant time arithmetic calculation that turns a block pointer into a metainfo pointer, and vice versa; the main access penalty is an extra cache line miss, which happens more often because the metainfo is not adjacent to its block.) A contiguous memory block is a <i >rod</i>, while the fixed size metainfo describing it is a <i >knob</i>. A counted reference to a rod (or any point within it) is a <i >hand</i>. On a 64-bit platform, a knob is 128 bits, a hand is also 128 bits, and rods vary in size.</p>
<p >Some bits of a hand are a copy of bits in the knob, so a reference is only valid when it agrees with the current self-description of an object, particularly the current 16-bit generation number, so dangling refs are detected. Of the bits copied, another 20 bits are the id of the object's <i >plan</i>, which describes the type, layout, and any dynamic behavior if you also supply methods for that type in hashmap entries. You can think of <i >plan</i> as morally identical to <i >class</i> in Smalltalk, but initially I was aiming for perfect format description on a field-by-field basis, with recursion grounded in primitive native types. I wanted to write centralized code to debug-print anything based on the plan. But then I sort of noticed I would be able to write Smalltalk-style code that works on the C objects, if you were willing to call a dynamic API to send messages (method selectors with parameters).</p>
<p >If you point at a subfield somewhere inside the middle of a rod, you can eventually figure out exactly what it is by drilling down in the rod's plan, after fetching the plan for the id in the rod's metainfo knob. But since this lookup is offset-based, it aliases anything with the same offset. That is, a struct named foobar that contains a foo and then a bar will have the same offset for foobar and foo. If you send a message, did you mean it for foobar or for foo? I would start a search in method tables for the largest containing object, then work through the nesting until I found a match.</p>
<p >At the moment I am writing metainfo by hand, but generating it from C declarations later is the idea. On 64-bit platforms a plan is 256 bits, and each field requires 128 bits, plus however long the string names get, and method tables if there are any. When a field is composed of bit fields, I use a plan that says all its fields are bitfields, and then each field describes bit sizes and shifts instead of byte sizes and offsets.</p>
<p >Even if most of an app is composed of static code, I will want to write dynamic and interactive scripts to explore runtime state in tests, and to inspect things at runtime in ways I did not anticipate earlier. I would normally think of glomming on a scripting language to do this, on the side. But I can use C as the scripting language instead, if memory state is universally annotated this way with type metainfo.</p>LtU Forum | Thu, 22 Jun 2017 20:51:12 +0000
What would be involved in moving logic beyond FOL?
http://lambda-the-ultimate.org/node/5440
<p >Carl Hewitt has opined that first-order logic (FOL) should not be regarded as the logic of choice for various people such as computer scientists (e.g., he says "First-order theories are entirely inadequate for Computer Science" in <a href="http://lambda-the-ultimate.org/node/4784?comments_per_page=10000">a reply to me in Mathematics self-proves its own Consistency (contra Gödel et al.)</a>). He recommends instead his Inconsistency-Robust Logic for mathematicians, computer scientists and others.</p>
<p >What would it take to really change our perspective to be grounded in a logic other than FOL? I think that looking at John Corcoran's <a href="http://www.textproof.com/ar/corcoran-first-days.pdf">First days of a logic course</a> provides the beginnings of an answer (the article treats Aristotelian logic rather than FOL): an alternative logic should provide a natural basis grounding all of the concepts that we expect a new student of logic to grasp, and a rival to the kind of traditional logic course based on first-order logic should be comparably good or better for building on.</p>
<p >What would an introductory logic course look like for, e.g., Inconsistency-Robust Logic? Would it start with Aristotelian logic or would it start with something else? What would having taken such a course be good for, as a prerequisite?</p>LtU Forum | Thu, 15 Jun 2017 12:37:28 +0000
Restructor: Full Program Automatic Refactoring
http://lambda-the-ultimate.org/node/5439
<p >Wrote up a description of some personal research I did 2005-2010: <a href="http://strlen.com/restructor/">http://strlen.com/restructor/</a></p>
<p >This algorithm will take an arbitrary program (including side effects) and refactor (create and inline functions) until the code contains no more under-abstraction (copy-paste code) or over-abstraction. It is an implementation of code compression: it finds a set of functions representing your code such that the total number of AST nodes is minimized.</p>
<p >I had originally intended this as a new way of programming, where you could modify your code using copy paste to your heart's content, and leave the annoying work of abstraction to the computer. I now realize that is a bit naive, and note some reasons why this may not be ideal on the page above. Still, it's an interesting idea that I've not seen before; I would love to hear y'all's expert opinions :)</p>
<p >For reference, I posted about this idea on LtU in 2004 under the topic <a href="http://lambda-the-ultimate.org/node/17">"abstractionless programming"</a>.</p>LtU Forum | Sat, 10 Jun 2017 16:21:46 +0000
Free links to all (or practically all) recent SIGPLAN papers
http://lambda-the-ultimate.org/node/5437
<p >From <a href="http://www.sigplan.org/OpenTOC/">http://www.sigplan.org/OpenTOC/</a></p>
<blockquote ><p >ACM OpenTOC is a unique service that enables Special Interest Groups to generate and post Tables of Contents for proceedings of their conferences, enabling visitors to download the definitive version of the contents from the ACM Digital Library at no charge.</p>
<p >Downloads of these articles are captured in official ACM statistics, improving the accuracy of usage and impact measurements. Consistently linking to definitive versions of ACM articles should reduce user confusion over article versioning.</p></blockquote>LtU Forum | Sat, 03 Jun 2017 22:19:33 +0000
SW verification continues
http://lambda-the-ultimate.org/node/5433
<p >There is a new release of the Albatross compiler available.</p>
<p >The Albatross programming language aims at making verified software construction available to everybody.</p>
<p >The <a href="https://www.gitbook.com/read/book/hbr/alba-lang-description">language description</a> has been completely updated. The document describes how to get the compiler.</p>
<p >The previous releases already contained induction and recursion and even inductive sets.</p>
<p >The most important new feature is abstract data types. Abstract data types are realized by abstract classes, abstract functions and abstract properties. By using the abstraction, it is possible to verify a lot of properties, which can then be inherited by any type that satisfies the concept of the abstract data type.</p>
<p >The design of the language is still an ongoing activity. Any comments, hints, issue reports, etc. are welcome.</p>
<p >Regards<br >
Helmut</p>LtU Forum | Tue, 16 May 2017 20:25:40 +0000
Finding Solutions vs. Verifying Solutions
http://lambda-the-ultimate.org/node/5432
<p >[Edit] Due to a conversation below, it turned out that the sets P and NP are commonly defined differently than I considered them in my initial report. An updated report with the same content, but without this inconsistency, can be found here: <a href="https://docs.google.com/document/d/1pTESAkcVjv-08BBrebhbvl1Thf291TSfgZ_EeUlSn8A/edit?usp=sharing">Finding Solutions vs. Verifying Solutions</a>. I also changed the blog title from "P is not equal to NP" to the above.</p>
<p >Below is the initial, but obsolete post kept for conversational reasons:</p>
<blockquote ><p >In <a href="https://docs.google.com/document/d/1-BdzqO7bBxdyIXJF-t-UyJmowcm4dEwkZVoPbTcP4zs/edit?usp=sharing">this short report</a>, I argue that P is not equal to NP.</p>
<blockquote ><p >Summary: an approach is made by analyzing functions and their inverses as mappings between domain and codomain elements. The conclusions drawn state that verifying data is an inverse function of finding solutions to the same problem. Further observation implies that P is not equal to NP.</p></blockquote>
<p >Any thoughts?</p></blockquote>LtU Forum | Tue, 16 May 2017 13:48:56 +0000
the type of eval in Shen
http://lambda-the-ultimate.org/node/5431
<p >Thought some people might find this interesting. Years ago in the LFCS I was talking to Mike Fourman about Lisp and ML and the eval function and how I liked eval; he said 'What is the type of eval?'. I had no answer. Years later I can answer :).</p>
<p >In Shen, eval exists as a function that takes lists and evaluates them. So (* 3 4) gives 12, and [* 3 4] is a heterogeneous list without a type in virgin Shen. If you apply eval to a list structure you get the normal form of the expression you would get by replacing the [...]s in the argument by (...)s.</p>
<p >e.g. (eval [* 3 4]) = (* 3 4) = 12</p>
<p >Let's call these arguments to eval terms. Now some terms raise dynamic type errors to eval and some do not. So what we'd like is a class of terms that are well typed. We'll define term as a parametric type so [* 3 4] : (term number). </p>
<p >Using sequent calculus notation in Shen we enter the type theory. See the introductory video here (http://shenlanguage.org/) if you don't understand this notation.</p>
<pre >
(datatype term
T1 : (term (A --> B));
T2 : (term A);
__________________
[T1 T2] : (term B);
\\ some clerical stuff skipped here
X : (term A) >> Y : (term B);
______________________________
[lambda X Y] : (term (A --> B));
if (not (cons? T))
T : A;
______________________
T : (mode (term A) -);)
</pre><p >
So what is the type of eval? </p>
<p >eval : (term A) --> A. Surely? </p>
<p >Let's add this to the end of our data type definition</p>
<pre >
_______________________
eval : ((term A) --> A);
</pre><p >
and run it.</p>
<pre >
Shen, copyright (C) 2010-2017 Mark Tarver
www.shenlanguage.org, Shen Professional Edition 17
running under Common Lisp, implementation: SBCL
port 2.1 ported by Mark Tarver
home licensed to Mark Tarver
(0-) (datatype term
T1 : (term (A --> B));
T2 : (term A);
__________________
[T1 T2] : (term B);
\\ some clerical stuff skipped here
X : (term A) >> Y : (term B);
______________________________
[lambda X Y] : (term (A --> B));
if (not (cons? T))
T : A;
______________________
T : (mode (term A) -);
_________________________
eval : ((term A) --> A);)
type#term
(1-) (tc +) \\ enable type checking
true
(2+) (* 3 4)
12 : number
(3+) ((* 3) 4)
12 : number
(4+) (eval [[* 3] 4])
12 : number
(5+) (eval [[* 3] "a"])
type error
(6+) [* 3]
[* 3] : (term (number --> number))
(7+) (eval [* 3])
# CLOSURE (LAMBDA (V1852)) {1006925DFB} : (number --> number)
(8+) [lambda X [lambda Y X]]
[lambda X [lambda Y X]] : (term (B --> (A --> B)))
(9+) (eval [lambda X [lambda Y X]])
# FUNCTION (LAMBDA (X)) {1006B5799B} : (B --> (A --> B))
</pre><p >
You only need a few more rules to complete the term data type and add currying on the fly, but I'll leave it there. This was a byproduct of a much more extensive project I'm working on wrt a typed second-order logic; but I thought it was fun to share.</p>
<p >bw</p>
<p >Mark</p>LtU Forum | Mon, 15 May 2017 16:21:13 +0000
Any thoughts on WanaDecrypt0r?
http://lambda-the-ultimate.org/node/5430
<p >This has been all over the news in my country and I doubt people on LtU will have missed it. There is a large-scale SMBv1/SMBv2 worm active in the world called WanaDecrypt0r.</p>
<p >Any thoughts on what this means for language security features, as hackers are becoming more and more creative in exploiting holes?</p>LtU Forum | Sun, 14 May 2017 23:04:24 +0000
Prove: 'Cont r a = (a -> r) -> r' forms a monad
http://lambda-the-ultimate.org/node/5429
<p >I don't follow Haskell too much, and more often than not I disagree with Erik.</p>
<p >However, this came up during an <a href="http://queue.acm.org/detail.cfm?id=3092954">interview</a> so it's supposedly something I should know.</p>
<blockquote ><p >
Prove that 'Cont r a = (a -> r) -> r' forms a monad.
</p></blockquote>
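<p >For reference, the standard answer sketches as follows, using the usual continuation-passing definitions of return and bind; each law is routine equational reasoning finished by an η-step:</p>

```latex
\begin{aligned}
\mathrm{return}\ x &= \lambda k.\ k\,x
\qquad\qquad
m \mathbin{>\!\!>\!\!=} f = \lambda k.\ m\,(\lambda x.\ f\,x\,k) \\[6pt]
\text{left identity:}\quad (\mathrm{return}\ x) \mathbin{>\!\!>\!\!=} f
  &= \lambda k.\ (\lambda k'.\ k'\,x)\,(\lambda x'.\ f\,x'\,k)
   = \lambda k.\ f\,x\,k
   =_{\eta} f\,x \\
\text{right identity:}\quad m \mathbin{>\!\!>\!\!=} \mathrm{return}
  &= \lambda k.\ m\,(\lambda x.\ k\,x)
   =_{\eta} \lambda k.\ m\,k
   =_{\eta} m \\
\text{associativity:}\quad (m \mathbin{>\!\!>\!\!=} f) \mathbin{>\!\!>\!\!=} g
  &= \lambda k.\ m\,(\lambda x.\ f\,x\,(\lambda y.\ g\,y\,k))
   = m \mathbin{>\!\!>\!\!=} (\lambda x.\ f\,x \mathbin{>\!\!>\!\!=} g)
\end{aligned}
```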
<p >Any takers?</p>LtU Forum | Thu, 11 May 2017 20:37:14 +0000
Implementing typing rules -- how do I implement non-syntactic rules?
http://lambda-the-ultimate.org/node/5428
<p >Hi all,</p>
<p >Typing rules in papers are usually not directly implementable as a checker. For example, they usually include a rule that requires coming up with a type for a binder, e.g. a lambda argument (when the lambda doesn't have the type of its argument in the syntax). I'm wondering if there's a standard way of implementing this kind of rule, because as far as I can see none of the papers I'm looking at explicitly say how to do this.</p>
<p >To be more concrete, I'm looking at the Frank paper (http://lambda-the-ultimate.org/node/5401). The paper says it has a bidirectional typing discipline, but other than that it doesn't hint at how to implement the typing rules. So when I see the rule for lambda (Fun) I have no idea how to implement it.</p>
<p >I checked the literature on bidirectional typing (Local Type Inference etc.) but couldn't see anything relevant. So I'm guessing that in the literature, when I see rules like this, it means "use Damas-Hindley-Milner style metavariable generation + unification etc.", am I right? Are there any other ways of doing this? If so, how do I know which method I should use?</p>
<p >Thanks</p>LtU ForumSun, 07 May 2017 06:10:10 +0000Egel Language v0.1
http://lambda-the-ultimate.org/node/5427
<p >Small notification: I made another language called <a href="https://egel-lang.github.io/">Egel</a>. It's an experimental toy language based on untyped eager combinator rewriting.</p>
<p >It's a beta release, things will change in the future and I'll likely break 'stuff' but feel free to download it or read the sources.</p>
<p >I have some ideas about how to turn it into something useful in the future, but nothing really concrete yet.</p>LtU Forum | Sat, 22 Apr 2017 21:19:10 +0000
A refutation of Gödel's first incompleteness theorem
http://lambda-the-ultimate.org/node/5425
<p ><strong >Contradictions in Gödel's First Incompleteness Theorem.</strong>
<p ><strong >Notes for the second edition.</strong>
I have edited this post a bit, sorry if it is against the etiquette. The
original post can be found
<a href="https://github.com/enriquepablo/terms/blob/master/godel/godel.md">here</a>.
I have removed a couple of
peripheral mistakes. However, I want to also warn that the argument is a lot
clearer when adapted to a modern rendition of Gödel's theorem, based on Turing
machines, as suggested in
<a href="http://lambda-the-ultimate.org/node/5425#comment-94109">this comment</a>.
That adaptation is
carried out in the comments that hang from the previously referenced comment,
<a href="http://lambda-the-ultimate.org/node/5425#comment-94116">this one</a> and onwards.
<p ><strong >Intro</strong>
<p >This refers to Kurt Gödel's
<a href="http://jacqkrol.x10.mx/assets/articles/godel-1931.pdf">
"On formally undecidable propositions of
principia mathematica and related systems"</a>.
The notation used here will be the same as
that used by Gödel in that paper.
<p >In that work, Gödel starts with a description of the formal system P,
which, according to himself,
"is essentially the system obtained by superimposing
on the Peano axioms the logic of PM".
<p >Then he goes on to define a map Phi,
which is an arithmetization of system P.
Phi is a one-to-one correspondence that assigns
a natural number, not only to every basic sign in P,
but also to every finite series of such signs.
<p ><strong >One</strong>
<p >There are alternative arithmetizations of system P.
I will later delve into how many.
<p >This is obvious from simply considering
a different order in the basic signs
when they are assigned numbers.
For example, if we assign the number 13 to "("
and the number 11 to ")",
we obtain a different Phi.
<p >If we want Gödel's proof to be well founded,
it should obviously be independent of
which arithmetization is chosen to carry it out.
The procedure should be correct for any valid Phi chosen.
Otherwise it would **not** apply to system P,
but to system P **and** some particular Phi.
<p >To take care of this,
in Gödel's proof we may use a symbol for Phi
that represents abstractly any possible valid choice of Phi,
and that we can later substitute for a particular Phi
when we want to actually get to numbers.
This is so that we can show that substituting for any random Phi
will produce the same result.
<p >The common way to do this is to add an index i to Phi,
coming from some set I with the same cardinality as
the set of all possible valid Phi's,
so we can establish a bijection among them - an index.
Thus Phi becomes here Phi^i.
<p ><strong >Two</strong>
<p >Later on, Gödel proceeds to spell out Phi,
his Phi, which we might call Phi^0,
with his correspondence of signs and numbers
and his rules to combine them.
<p >And then Gödel proceeds to define
a number of metamathematical concepts
about system P, that are
arithmetizable with Phi^0,
with 45 consecutive definitions,
culminating with the definition of provable formula.
<p >Definition of provable formula means, in this context,
definition of a subset of the natural numbers,
so that each number in this set corresponds
biunivocally with a provable formula in P.
<p >Let's now stop at his definition number (10):
<pre >
E(x) === R(11) * x * R(13)
</pre>
<p >Here Gödel defines "bracketing" of an expression x,
and this is the first time Gödel makes use of Phi^0,
since:
<pre >
Phi^0( '(' ) = 11
Phi^0( ')' ) = 13
</pre>
<p >If we want to remain general, we may rather do:
<pre >
E^i(x) === R(Phi^i( '(' )) * x * R(Phi^i( ')' ))
</pre>
<p >Two little modifications are made in this definition.
First, we substitute 11 and 13 for Phi^i acting on "(" and ")".
11 and 13 would be the case if we instantiate the definition with Phi^0.
<p >And second, E inherits an index i;
obviously, different Phi^i will define different E^i.
And so do most definitions afterwards.
<p >Since, for the moment, in the RHS of definitions from (10) onwards,
we are not encoding in Phi^i the index i,
which has sprouted on top of all defined symbols,
we cease to have an actual number there (in the RHS);
we now have an expression that, given a particular Phi^i,
will produce a number.
<p >So far, none of this means that any of Gödel's 45 definitions
are in any way inconsistent;
we are just building a generalization of his argument.
<p ><strong >Three</strong>
<p >There is something to be said of
the propositions Gödel labels as (3) and (4),
immediately after his 1-45 definitions.
With them, he establishes that, in his own words,
"every recursive relation [among natural numbers]
is definable in the [just arithmetized] system P",
i.e., with Phi^0.
<p >So in the LHS of these two propositions
we have a relation among natural numbers,
and in the RHS we have a "number",
constructed from Phi^0 and his 45 definitions.
Between them, we have an arrow from LHS to RHS.
It is not clear to me from the text what
Gödel meant that arrow to be.
But it clearly contains an implicit Phi^0.
<p >If we make it explicit and generalized, we must add indexes to
all the mathematical and metamathematical symbols he uses:
All Bew, Sb, r, Z, u1... must be generalized with an index i.
<p >Then, if we instantiate with some particular Phi^i,
it must somehow be added in both sides:
in the RHS to reduce the given expression to an actual number,
and in the LHS to indicate that the arrow now goes from
the relation in the LHS **and** the particular Phi^i chosen,
to that actual number.
<p >Obviously, if we want to produce valid statements about system P,
we must use indexes, otherwise the resulting numbers are
just talking about P and some chosen Phi^i, together.
<p >Only after we have reached some statement about system P
that we want to corroborate,
should we instantiate some random Phi^i and see whether
it still holds, irrespective of any particularity of that map.
<p >These considerations still do not introduce contradiction
in Gödel's reasoning.
<p ><strong >Four</strong>
<p >So we need to keep the indexes in Gödel's proof.
And having indexes provides much trouble in (8.1).
<p >In (8.1), Gödel introduces a trick that plays a central role in his proof.
He uses the arithmetization of a formula y to substitute free variables in that
same formula, thereby creating a self reference within the resulting expression.
<p >However, given all previous considerations,
we must now have an index in y, we need y^i,
and so, it ceases to be a number.
But Z^i is some function that takes a natural number
and produces its representation in Phi^i.
It needs a number.
<p >Therefore, to be able to do the trick of expressing
y^i with itself within itself,
we need to convert y^i to a number,
and so, we must also encode the index i with our 45 definitions.
<p >The question is that if we choose some Phi^i,
and remove the indexes in the RHS to obtain a number,
we should also add Phi^i to the LHS, for it is now
the arithmetic relation **plus** some arithmetization Phi^i
which determine the number in the RHS, and this is not wanted.
<p ><strong >Five</strong>
<p >But to encode the index,
we ultimately need to encode the actual Phi^i.
In (3) and (4), If in the RHS we are to have a number,
in the LHS we need the actual Phi^i to determine that number.
If we use a reference to the arithmetization as index,
we'll also need the "reference map"
providing the concrete arithmetizations that correspond to each index.
Otherwise we won't be able to reach the number in the RHS.
<p >Thus, if we want definitions 1-45 to serve for Gödel's proof,
we need an arithmetization of Phi^i itself -with itself.
<p >This may seem simple enough, since, after all, the Phi^i are just maps.
But it leads to all sorts of problems.
<p ><strong >Five one</strong>
<p >Now, suppose that we can actually arithmetize any Phi^i with itself,
and that we pick some random Phi^i, let's call it Phi^0:
we can define Phi^0 with Phi^0,
and we can use that definition to further define 10-45.
<p >But since Phi^0 is just a random arithmetization of system P,
if it suffices to arithmetize Phi^0,
then it must also suffice to arithmetize any other Phi^i equally well.
However, with Phi^0, we can only use the arithmetization of Phi^0
as index to build defns 10-45.
<p >This means that, as arithmetizations of system P,
the different Phi^i are not identical among them,
because each one treats differently the arithmetization of itself
from the arithmetization of other Phi^i.
<p >Exactly identical arithmetical statements,
such as definition (10) instantiated with some particular Phi^i,
acquire different meaning and truth value
when expressed in one or another Phi^i.
<p >Among those statements, Gödel's theorem.
<p ><strong >Five two</strong>
[This argument does not hold if we have to consider recursively enumerable arithmetizations.]
<p >A further argument that shows inconsistency in Gödel's theorem
comes from considering that if we are going to
somehow encode the index with Phi^i,
we should first consider what entropy must that index have, since
it will correspond to the size of the numbers that we will need to encode them.
And that entropy corresponds to the logarithm of the cardinality of I,
i.e., of the number of valid Phi^i.
<p >To get an idea about the magnitude of this entropy, it may suffice
to think that variables have 2 degrees of freedom,
both with countably many choices.
Gödel very conveniently establishes a natural correspondence
between the indexes of the variables and the indexes of primes and of their consecutive
exponentiations, but in fact any correspondence between
both (indexes of vars, and indexes of primes and exponents) should do.
For example, we can clearly have a Phi^i that maps the first variable of the first order
to the 1000th prime number exponentiated to the 29th power.
<p >This gives us all permutations of pairs of independent natural numbers,
and so, uncountably many choices for Phi^i;
so I must have at least the same cardinality as the real line.
Therefore y^i doesn't correspond to a natural number,
since it needs more entropy than a natural number can contain,
and cannot be fed into Z^i, terminating the proof.LtU Forum | Fri, 14 Apr 2017 18:22:20 +0000
Making a one-pass compiler by generating fexprs that generate code
http://lambda-the-ultimate.org/node/5424
<p >I'm starting to write a simple compiler that transcompiles a simple scripting language for <a href="https://en.wikipedia.org/wiki/General_game_playing">general game playing</a> (at least for chess-like and a few other board games) into C which will be compiled in memory with the Tiny C library.</p>
<p >I noticed that there's a mismatch between the order in which parser generators trigger actions and the order in which tree nodes need to be visited in order that identifiers can be type checked and used to generate code. </p>
<p >Parser generators trigger actions from the bottom up. Ideally when you generate code, you visit nodes in the tree in whatever order gives you the type information before you see the identifiers used in an expression. Since fexprs let you control what order you visit parts of the inner expression, they're perfect for that.</p>
<p >So my parser is being written so that the parse generates an s-expression that contains fexprs that when run semantically checks the program and transcompiles in a single pass.</p>
<p >This also suggests a new version of Greenspun's 10th rule:<br >
A sufficiently complex C++ program will contain an informally specified, buggy and incomplete implementation of John Shutt's Kernel language.</p>LtU Forum | Fri, 14 Apr 2017 07:11:59 +0000
New PL forums: plforums.org
http://lambda-the-ultimate.org/node/5423
<p >Hello, LtU Community,</p>
<p >I've built a new forum for the programming languages community, <a href="https://plforums.org">plforums.org</a>!</p>
<p >The forums software is well thought out, fast, and accessible. The forums support Markdown, <b >TeX math</b>, syntax highlighting, @-mentions, email notifications, a moderation system, and more. All of the above is done without requiring any JavaScript. The engine is MIT licensed and development happens in the open.</p>
<ul >
<li ><a href="https://plforums.org/meta/formatting-demo">Formatting Demo</a>.</li>
<li ><a href="https://plforums.org/about">Mission statement and code of conduct</a>.</li>
</ul>
<p >Later this year I will be inviting more researchers and engineers to post and participate in PL forums.<br >
Today, I am asking the LtU community for feedback on the software and perhaps participation!</p>
<p >The engineering community can benefit from the research that is often light years ahead of the current practices. The research community can in turn benefit from lessons the engineering community learned the hard way. I've built PL forums because I want a well-designed space where theory and practice cross-pollinate.</p>
<p >You might know me from software such as Twitter Bootstrap, or various libraries for Ruby on Rails such as <code >i18n-tasks</code> (static analysis) and <code >order_query</code> (keyset pagination).</p>
<p >Thanks, Gleb<br >
<b ><a href="https://plforums.org">plforums.org</a></b></p>LtU Forum | Wed, 12 Apr 2017 00:39:02 +0000
Compiler IDE API
http://lambda-the-ultimate.org/node/5422
<p >As I am working (slowly) on a compiler at the moment, I have started thinking about IDE integration. It occurs to me (as I am sure it has to everyone else) that a compiler should not be a command-line program, but a library that can be integrated with applications and IDEs. The command line interface for the compiler should be just another client of the library.</p>
<p >So my question is, in an ideal environment (starting from scratch), what should an API for a compiler (targeting IDE integration as well as other runtime uses) look like? Obviously it needs some method to build the code, and some methods for auto-completion, type checking, and type inference.</p>
<p >Other thoughts are that all the above functions need to operate in a dynamic environment, where incomplete code fragments exist, without breaking the compiler, and hopefully still offering useful feedback.</p>LtU Forum | Tue, 11 Apr 2017 09:41:05 +0000