
A problem about programming with macros vs Kernel F-exprs

I love Kernel F-exprs, which look better than ordinary macros in almost every way,
especially in eliminating the need for a reflective tower, or a separation of phases, if I understand correctly.

However, I'm having trouble implementing the following using F-exprs:

(defmacro f ()                 ;; Common Lisp style macro
  (make-array 1))              ;; the array is created at expansion time and
                               ;; spliced into the expansion as a literal
(defun g () (list (f) (f)))

The two (f) calls above each expand into a different array instance.
Every call to (g) then returns a list containing those same two distinct array instances.

I can't think of a way to replicate this behavior using Kernel F-exprs.
It seems that when an F-expr f is called, there is no way to know where
in the source code it is being called from.
If f accepted some arguments, a dirty hack would be to keep a hash table mapping
f's unevaluated operands to array instances and hope the implementation
doesn't hash-cons the source code, but in this case the operand is just
the one unique #nil, so that doesn't work.

Any ideas?

Google Brain's Jax and Flax

Google's AI division, Google Brain, has two main products for deep learning: TensorFlow and Jax. While TensorFlow is the better known of the two, Jax can be thought of as a higher-level language for specifying deep learning algorithms, one that automatically elides code that doesn't need to run as part of the model.
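
For a concrete sense of what that means, here is a minimal sketch (my own illustration, assuming a standard jax install, not code taken from the Jax documentation): jax.jit traces the Python function, records only the array operations, and compiles them with XLA, so the surrounding Python-level bookkeeping never becomes part of the compiled model.

import jax
import jax.numpy as jnp

@jax.jit
def predict(w, b, x):
    # Only these array operations are traced and compiled by XLA;
    # ordinary Python code around them runs just once, at trace time.
    return jnp.tanh(jnp.dot(x, w) + b)

w = jnp.zeros((3, 2))
b = jnp.zeros((2,))
x = jnp.ones((4, 3))
print(predict(w, b, x))   # the first call traces and compiles; later calls reuse the compiled program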

Jax evolved from Autograd, and is a combination of Autograd and XLA. Autograd "can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments, as well as forward-mode differentiation, and the two can be composed arbitrarily. The main intended application of Autograd is gradient-based optimization."
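
Jax exposes the same ideas as composable function transformations. The snippet below is a hedged sketch (the function f and the sample values are examples of my own): jax.grad gives reverse-mode derivatives of scalar-valued functions, jax.jacfwd gives forward-mode derivatives, and the transformations nest freely, including derivatives of derivatives of derivatives.

import jax
import jax.numpy as jnp

def f(x):
    return jnp.sum(jnp.sin(x) ** 2)    # scalar-valued function of an array argument

df = jax.grad(f)                       # reverse-mode gradient (backpropagation)
hess = jax.jacfwd(jax.grad(f))         # forward-over-reverse composition: a Hessian
d3 = jax.grad(jax.grad(jax.grad(lambda x: x ** 4)))

x = jnp.array([0.1, 0.2, 0.3])
print(df(x))      # gradient with respect to the array-valued argument
print(hess(x))    # 3x3 Hessian
print(d3(2.0))    # third derivative of x**4 at 2.0, i.e. 24 * 2 = 48.0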

Flax is then built on top of Jax and allows for easier customization of existing models.
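
As a rough sketch of what that looks like (the module below is a toy example of my own, assuming the flax.linen API): a model is declared as a Module, its parameters live in a separate pytree produced by init, and the model runs via apply, which is what makes it easy to inspect, swap out, or extend pieces of an existing model.

import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int                        # hyperparameters are plain module fields

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=16)
x = jnp.ones((4, 8))
params = model.init(jax.random.PRNGKey(0), x)   # build the parameter pytree
y = model.apply(params, x)                       # run the model with those parameters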

What do you see as the future of domain-specific languages for AI?