LtU Forum

Racket is ‘Scheme all the way down’ with v8 on Chez Scheme
Chez Scheme’s nanopass compiler produces native code for x86, x86-64, and ARM (from the Raspberry Pi to the Apple M1). By spdegabrielle at 2021-02-14 12:49 | LtU Forum
High level languages with optimal code generation
One of the issues that arises when implementing programs that need high performance is that a large percentage of time is spent writing and debugging optimizations. These range from reorganizing data and reorganizing control flow to, sometimes, using inline assembly. This work seems to be very error prone and quite difficult, requiring a lot of specialized knowledge to achieve good performance. A good example of the difficulties of this process can be seen in the Handmade Hero series. How can we make it easier for compilers to help generate faster code automatically? I found out about superoptimization, but it seems very limited in capability: it is very slow, can't deal with control flow, and can't do anything about data structures. Regarding this last point, most languages force us to decide on data structures up front, so a compiler can't do much about them. A different strategy is used in Compilation of Bottom-Up Evaluation for a Pure Logic Programming Language, where a Turing-complete Datalog-inspired language is proposed; with this approach, the data structures are selected by the compiler. What other research looks at this problem, and what other ideas do you have? Is relational algebra + MapReduce a good way to approach this problem? How can we have program descriptions that compile into optimal, or close to optimal, code? Or at least, how can we decouple optimizations from the problem description? P.S. This is a bit of a sequel to Why is there no widely accepted progress for 50 years?. The answers I got in that thread were very thought provoking and helped to clarify much of my thinking. Thank you very much to all who participated in the discussion!

Concurrent System Programming with Effect Handlers
From 2017; cf. the GitHub repo of materials for the CUFP 17 tutorial about it.
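The bottom-up Datalog evaluation mentioned in the optimal-code-generation question above can be sketched concretely. Below is a minimal naive fixpoint evaluator for a transitive-closure program, written in Python; the rules and names are illustrative and not taken from the cited paper.

```python
# Naive bottom-up Datalog evaluation: repeatedly apply the rules to the
# known facts until no new facts can be derived (a fixpoint).

def naive_eval(edges):
    """Compute the transitive closure of `edges` bottom-up, as if running:
       path(X, Y) :- edge(X, Y).
       path(X, Z) :- path(X, Y), edge(Y, Z)."""
    path = set(edges)  # seed facts from the first (non-recursive) rule
    while True:
        # Join path with edge on the middle variable Y (second rule).
        new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if new <= path:  # fixpoint reached: nothing new was derived
            return path
        path |= new

facts = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(naive_eval(facts)))
```

A real implementation would use semi-naive evaluation (joining only against facts derived in the previous round) and, per the paper's thesis, let the compiler pick the index structures backing the relations.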
A problem about programming with macros vs Kernel F-exprs
I love Kernel F-exprs, which look better than ordinary macros in almost every way. However, I have trouble trying to implement the following using F-exprs:

  (defmacro f () (make-array 1))  ;; Common Lisp style macro
  (define g (lambda () (list (f) (f))))
I can't think of a way to replicate this behavior using Kernel F-exprs. Any ideas?

Loop and recursion
Generic loops are problematic to reason about and understand when reading code, and are therefore prone to cause bugs. I think it would be beneficial to forbid generic loops and only have iterators. Recursion can also be problematic, since stack usage can become unpredictable. The exception is of course tail recursion, since a tail call can reuse the current stack frame. If a language prohibited generic loops and all recursion except tail recursion, would it still be useful as a general-purpose programming language, or would those restrictions make some common designs impossible?

Object graph 'integrals'
Is there a mathematical theory that considers ordered 'integrals' over object graphs? I've selectively checked category theory books and papers, and it looks like they work with graph-to-graph operations rather than with 'integral' object-graph operations. Papers in graph theory consider some path-'integral' operations in an ad-hoc manner, as in the delivery problem, but I have not found a generic treatment of graph-scoped 'integral' operations. Such a graph integral would be a higher-order function that produces a graph function from a node-specific function, combining the node-specific function with some combinator based on the links in the graph. The theory behind dependency injection needs such graph-'integral' operations, but I have not found any papers or books on this simple-looking thing, possibly because I'm using the wrong keywords. For example, creating a graph of objects and unwinding a graph of objects in dependency injection are such 'integral' operations: we need to execute some object-dependent operation on each object, in an operation-specific order governed by the graph. For a graph-unwind operation, for example, we need to destroy the objects that have no dependent objects first. For a configuration 'refresh', there is a need to update only the affected objects.

Cognition and Coding
Fascinating research from MIT on human cognition and coding: the brain uses spatial navigation as well as math/logic areas to understand programs. Our apparent need for at least some spatial processing when programming may lend additional support to an explicitly spatial syntax and semantics for programming languages. The full article is published here.

looking for dependent research proof system language implemented in C++
I am looking for a dependent research proof system language implemented in C++.

Typer: ML boosted with type theory and Scheme

Haskell-Like S-Expression-Based Language Designed for an IDE
(Has to have my current absolute super favourite first sentence ever.) Abstract: The state of the programmers’ toolbox is abysmal. Although substantial effort is put into the …
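The graph 'integral' described in the Object graph 'integrals' post above — a higher-order function that applies a node-specific operation across an object graph in an order governed by its links — can be sketched as a fold in topological order. This is a hypothetical illustration, not a known formalization; the function and node names are made up.

```python
# Sketch of a graph 'integral': fold a node-specific operation over an
# object graph in dependency order. `deps` maps each node to the set of
# nodes it depends on; reverse order visits dependents first, which is
# the order needed when unwinding (destroying) a dependency-injected graph.
from graphlib import TopologicalSorter

def graph_integral(deps, node_op, reverse=False):
    order = list(TopologicalSorter(deps).static_order())
    if reverse:
        order.reverse()
    return [node_op(n) for n in order]

# Hypothetical example: `app` depends on `db` and `cache`, which depend on `config`.
deps = {"app": {"db", "cache"}, "db": {"config"}, "cache": {"config"}, "config": set()}
creation = graph_integral(deps, lambda n: f"create {n}")            # config first
teardown = graph_integral(deps, lambda n: f"destroy {n}", reverse=True)  # app first
```

The 'refresh' operation mentioned above would be the same fold restricted to the subgraph reachable (via reversed links) from the changed nodes.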