Graydon Hoare: 21 compilers and 3 orders of magnitude in 60 minutes

In 2019, Graydon Hoare gave a talk to undergraduates (PDF of slides) trying to communicate a sense of what compilers look like from the perspective of people who do it for a living. I've been aware of this talk for over a year and meant to submit a story here, but was overcome by the sheer number of excellent observations. I'll just summarise the groups he uses:
I really recommend spending time working through these slides. While I was familiar with much of the material, enough was new, and I really appreciated the well-made points, the shout-outs to projects that deserve more visibility, such as Nanopass compilers and CakeML, and the presentation of the Futamura projections, a famously tricky concept, at the undergraduate level.

By Charles Stewart at 2022-02-27 14:47 | Implementation
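Since the Futamura projections come up in the slides, here is a toy Python sketch of the first projection (my own illustration, not taken from the talk): specializing an interpreter to a fixed program yields something that behaves like a compiled version of that program.

    # Toy sketch of the first Futamura projection (illustrative only).
    def interp(program, env):
        """Interpreter for a tiny expression language:
        ('lit', n), ('var', name), ('add', e1, e2)."""
        tag = program[0]
        if tag == 'lit':
            return program[1]
        if tag == 'var':
            return env[program[1]]
        if tag == 'add':
            return interp(program[1], env) + interp(program[2], env)
        raise ValueError(f"unknown node {tag!r}")

    def specialize(interpreter, program):
        """First projection in spirit: fix the program argument.
        A real partial evaluator would also unfold interp's recursion at
        specialization time, leaving residual straight-line code; here we
        only close over the program, which is enough to show the shape."""
        return lambda env: interpreter(program, env)

    # 'x + (y + 1)' as an AST
    prog = ('add', ('var', 'x'), ('add', ('var', 'y'), ('lit', 1)))
    compiled = specialize(interp, prog)
    print(compiled({'x': 2, 'y': 3}))  # prints 6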
Latent Effects for Reusable Language Components

Latent Effects for Reusable Language Components, by Birthe van den Berg, Tom Schrijvers, Casper Bach Poulsen, Nicolas Wu:
Looks like a nice generalization of the basic approach taken by algebraic effects to more subtle contexts. Algebraic effects have been discussed here on LtU many times. I think this description from section 2.3 is a pretty good overview of their approach:
By naasking at 2021-10-14 14:02 | Effects | Functional | Theory
Introducing PathQuery, Google's Graph Query Language
Things that are somewhat interesting to me, from an engineering standpoint:

1. PathQuery has a module/compilation system, enabling re-use of PathQuery modules across projects. (Someone had mentioned that Google already has around 40,000 PathQuery modules internally...)

Also, from a socio-linguistic perspective, graph languages are effectively the new Object-Relational Mapping layer, but they solve an interesting organizational problem: they allow multiple teams to code in different languages without needing to re-write / re-implement entities and mapping configurations in each language. It's the Old New Thing again...

Google announces Logica: organizing your data queries, making them universally reusable and fun

You can read more about it at the Google Open Source blog post, Logica: organizing your data queries, making them universally reusable and fun. They advocate for a Datalog-like language they developed internally at Google. The reason?
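To make "Datalog-like" a bit more concrete, here is a toy, plain-Python rendering of the idea behind both projects (my own illustration; this is neither Logica nor PathQuery syntax): queries are named rules over relations, and rules compose and can be reused across queries.

    # Toy Datalog-style rules in plain Python (illustrative only).
    parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

    def grandparent(parent_rel):
        """Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
        return {(x, z) for (x, y1) in parent_rel
                       for (y2, z) in parent_rel if y1 == y2}

    def great_grandparent(parent_rel):
        """Rules are reusable: build on grandparent instead of restating the join."""
        return {(x, z) for (x, y1) in grandparent(parent_rel)
                       for (y2, z) in parent_rel if y1 == y2}

    print(sorted(grandparent(parent)))        # [('alice', 'carol'), ('bob', 'dave')]
    print(sorted(great_grandparent(parent)))  # [('alice', 'dave')]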
Coq will be renamed

From the Coq-club:
Suggestions for alternative names go here.

LAMBDA: The ultimate Excel worksheet function

Post by Andy Gordon and Simon Peyton Jones on LAMBDA giving Excel users the ability to define functions.
Google Brain's Jax and Flax

Google's AI division, Google Brain, has two main products for deep learning: TensorFlow and Jax. While TensorFlow is best known, Jax can be thought of as a higher-level language for specifying deep learning algorithms while automatically eliding code that doesn't need to run as part of the model. Jax evolved from Autograd, and is a combination of Autograd and XLA. Autograd "can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization." Flax is then built on top of Jax, and allows for easier customization of existing models.

What do you see as the future of domain specific languages for AI?

By Z-Bo at 2021-01-15 13:59 | Implementation | Python | Scientific Programming | Software Engineering
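For readers who have not tried these libraries, here is a minimal Jax example of the capabilities quoted above (illustrative code, not from the post): reverse-mode gradients of a scalar-valued function of an array argument, composition with forward mode, and compilation via jit.

    import jax
    import jax.numpy as jnp

    def loss(w):
        # Scalar-valued function of an array-valued argument.
        return jnp.sum(jnp.tanh(w) ** 2)

    w = jnp.arange(3.0)

    grad_loss = jax.grad(loss)      # reverse mode (backpropagation)
    print(grad_loss(w))

    # Forward-over-reverse composition gives second derivatives (the Hessian).
    hessian = jax.jacfwd(jax.grad(loss))
    print(hessian(w))

    # jit traces the Python code and compiles it with XLA.
    fast_grad = jax.jit(grad_loss)
    print(fast_grad(w))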
Built to Last

Mar Hicks. Built to Last. Logic. Issue 11, "Care".
Recently, work on the history of technology has become increasingly sophisticated, moving beyond telling the story of impressive technology to trying to unravel the social, political, and economic forces that affected the development, deployment, and use of a wide range of technologies and technological systems. Luckily, this trend is beginning to manifest itself in studies of the history of programming languages. While not replacing the need for careful, deeply informed studies of the internal intellectual forces affecting the development of programming languages, these studies add a sorely needed aspect to the stories we tell.

Tackling the Awkward Squad for Reactive Programming

Sam Van den Vonder, Thierry Renaux, Bjarno Oeyen, Joeri De Koster, Wolfgang De Meuter

Reactive programming is a programming paradigm whereby programs are internally represented by a dependency graph, which is used to automatically (re)compute parts of a program whenever its input changes. In practice reactive programming can only be used for some parts of an application: a reactive program is usually embedded in an application that is still written in ordinary imperative languages such as JavaScript or Scala. In this paper we investigate this embedding and we distill “the awkward squad for reactive programming” as 3 concerns that are essential for real-world software development, but that do not fit within reactive programming. They are related to long-lasting computations, side-effects, and the coordination between imperative and reactive code. To solve these issues we design a new programming model called the Actor-Reactor Model in which programs are split up into a number of actors and reactors. Actors and reactors enforce a strict separation of imperative and reactive code, and they can be composed via a number of composition operators that make use of data streams. We demonstrate the model via our own implementation in a language called Stella.
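To fix intuition for the dependency-graph model the abstract describes, here is a toy Python sketch (my own illustration; the paper's Stella language is nothing like this): derived values register themselves as dependents of their inputs and are recomputed whenever an input changes.

    # Toy reactive cells: a dependency graph that recomputes on change.
    class Cell:
        def __init__(self, value=None):
            self.value = value
            self.dependents = []

        def set(self, value):
            self.value = value
            for dep in self.dependents:
                dep.recompute()

    class Derived(Cell):
        def __init__(self, fn, *inputs):
            super().__init__()
            self.fn = fn
            self.inputs = inputs
            for cell in inputs:
                cell.dependents.append(self)
            self.recompute()

        def recompute(self):
            self.value = self.fn(*(c.value for c in self.inputs))
            for dep in self.dependents:
                dep.recompute()

    celsius = Cell(0.0)
    fahrenheit = Derived(lambda c: c * 9 / 5 + 32, celsius)
    print(fahrenheit.value)   # 32.0
    celsius.set(100.0)        # fahrenheit recomputes automatically
    print(fahrenheit.value)   # 212.0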
The Simple Essence of Algebraic Subtyping: Principal Type Inference with Subtyping Made Easy

The Simple Essence of Algebraic Subtyping: Principal Type Inference with Subtyping Made Easy, Lionel Parreaux, ICFP 2020.

There's also an introductory blog post and an online demo.

Stephen Dolan's Algebraic Subtyping (discussion) unexpectedly provided a solution to the problem of combining type inference and subtyping, but used somewhat heavy and unusual machinery. Now Lionel Parreaux shows that the system can be implemented in a very straightforward and pleasing way. Here's to hoping that it makes it into real languages!