Flash and cross platform mobile web technologies

As I'm sure most people here are aware, Adobe announced that they are discontinuing development of the Flash plugin (player) for mobile devices. Mike Chambers, Principal Product Manager for developer relations for the Flash Platform at Adobe, listed a variety of reasons for this in his explanatory post (link). Most are business- and consumer-oriented. One thing I thought might be of interest to LtU readers is the technical issues they faced on the mobile platform. Because performance was critical on mobile devices (minimizing data usage, handling slow CPUs, and reducing power usage), Adobe ended up having to work specifically with:

  1. Mobile Operating System Vendors (such as Google and RIM)
  2. Hardware Device Manufacturers (such as Motorola and Samsung)
  3. Component Manufacturers (such as NVIDIA)

They were building a specific version of the plugin for each OS/device/GPU combination, and that was the cost that made the player a money loser. To me, what Adobe faced seems like a classic interpreted-language problem. So I figured I'd bring this up: is it possible to design an interpreter optimized for video playback and vector graphics that makes heavy use of hardware-specific advantages (like video decoding hardware, or vector computations on the GPU rather than the CPU) and abstracts over different input methods (think hover on mouseover abstracted for touchscreen input)? That is, a flexible, configurable interpreter-building system capable of supporting low-level libraries. What does the FP community have that does something like this, or is at least a prototype?

While there exist good HTML5 solutions for video (H.264 decoded in hardware, for example), I don't know of anything that can really take the place of Flash for vector graphics. Further, under HTML5 schemes the hardware issues that Adobe faced simply pass up to each website. There are other business reasons as well, like DRM: Flash enables control of the user experience, which prevents people from "skipping the ads" in video. So I think this is now a wide-open opportunity until some other technology fills it.


Finally tagless encoding. If

Finally tagless encoding. If each layer is built as a finally-tagless interpreter, then each layer can be replaced by a more specific implementation as needed without changing any existing code.

The paper shows this by starting with a simple interpreter for an embedded language, then replacing that with a staged compiler, then replacing that with a partial evaluator. Each subsequent step is built on the previous steps. Similarly for this case, you can start with a software implementation (the interpreter), then add an implementation that takes advantage of specific hardware. You can even add a dynamic layer using both, like a partial evaluator, which selects which implementation to use depending on additional context, like the complexity/depth of the term, tracing information, etc.

You can do this via functors in ML, or more easily with type classes in Haskell. The dynamic layer is probably tougher in ML.
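To make the idea concrete, here is a minimal Haskell sketch of the type-class version (all names here are illustrative, not from any library): the "syntax" of a tiny vector-graphics language is a type class, and each interpretation is an instance, so a new backend can be added without touching existing terms.

```haskell
import Data.List (intercalate)

-- The "syntax" of a toy scene language, finally-tagless style:
-- a type class rather than an algebraic data type.
class Shape repr where
  line  :: (Double, Double) -> (Double, Double) -> repr
  group :: [repr] -> repr

-- Interpretation 1: a trivial software "renderer" that counts primitives.
newtype Count = Count { getCount :: Int }

instance Shape Count where
  line _ _ = Count 1
  group rs = Count (sum (map getCount rs))

-- Interpretation 2: pretty-print the scene, e.g. for debugging.
newtype Pretty = Pretty { render :: String }

instance Shape Pretty where
  line a b = Pretty ("line " ++ show a ++ " -> " ++ show b)
  group rs = Pretty ("group [" ++ intercalate ", " (map render rs) ++ "]")

-- One term, many interpretations: the term itself never changes.
scene :: Shape repr => repr
scene = group [line (0,0) (1,1), line (1,1) (2,0)]

main :: IO ()
main = do
  print (getCount (scene :: Count))   -- 2
  putStrLn (render (scene :: Pretty))
```

Swapping in a hardware-accelerated backend would then just be one more instance of `Shape`; the code that builds scenes is untouched.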

ML or Haskell or ???

Actually, the "finally tagless" encoding turns out to be just as easy in ML, Haskell and Scala. You just need decent support for type constructors and ways to be parametric over type constructors.

The 'dynamic' layer of the partial evaluator is implemented via algebraic data types which encode additional context information. This is because one needs to dispatch on that information.

The one thing which is still "missing" from "finally tagless" is a good integration of pattern matching, in the style of Rhiger's Type-safe pattern combinators. I tried, but I think I was over-ambitious, since I tried to also merge in Bimonadic Semantics for Basic Pattern Matching Calculi while I was at it, and I did not quite succeed. I subsequently became aware of Neel Krishnaswami's Focusing on Pattern Matching, which made me hopeful, but I never got the time to revisit that code.

By 'dynamic layer', I meant

By 'dynamic layer', I meant deciding the implementation based on a runtime value. For instance, say you have an array EDSL à la APL. You would have a straight x86 compiler implementation and a SIMD compiler implementation. Selecting the appropriate implementation based on CPUID information, or plugging in more implementations at runtime, seems beyond classic ML modules (OCaml's recent addition of first-class modules addresses this, though). It's been quite a while since I worked with functors, though, so I could be wrong.
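For what it's worth, a rough Haskell analogue of that runtime choice is possible too. This is only a toy sketch: a `Bool` stands in for CPUID/feature detection, and a string trace stands in for a real SIMD backend. All names are made up for illustration.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The "syntax" of a toy numeric EDSL, finally-tagless style.
class Num' repr where
  lit :: Double -> repr
  add :: repr -> repr -> repr

-- "Scalar" backend: direct evaluation in software.
newtype Scalar = Scalar Double
instance Num' Scalar where
  lit = Scalar
  add (Scalar a) (Scalar b) = Scalar (a + b)

-- "Vector" backend: here just a trace of the ops we would hand to SIMD code.
newtype Trace = Trace String
instance Num' Trace where
  lit x = Trace (show x)
  add (Trace a) (Trace b) = Trace ("vadd(" ++ a ++ ", " ++ b ++ ")")

-- A term kept polymorphic over the backend; RankNTypes lets us delay
-- choosing the instance until we have inspected the runtime context.
type Term = forall repr. Num' repr => repr

-- Dispatch on a runtime value (imagine 'hasSimd' came from CPUID).
run :: Bool -> Term -> String
run hasSimd t
  | hasSimd   = let Trace s  = t in s
  | otherwise = let Scalar d = t in show d

main :: IO ()
main = do
  putStrLn (run False (add (lit 1) (lit 2)))  -- "3.0"
  putStrLn (run True  (add (lit 1) (lit 2)))  -- "vadd(1.0, 2.0)"
```

Keeping the term polymorphic until the dispatch point is what lets a single runtime check pick among any number of instances, which seems to be the same move first-class modules enable in OCaml.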

I had wondered about pattern matching in the tagless encoding as well. The various approaches to first-class patterns seem the obvious choice. I imagine they would be sufficient to implement a Pattern Calculus EDSL.

Have you, Ken and Oleg discussed the more general problem here?

Understanding the benefits of tagless final interpreters took me awhile, but now that I understand it, they're quite cool.

But even this approach has a man-hours wall: it is not automatic programming, and scaling solutions up to as many combinations as are being talked about here is not tractable. Humans can't do it by themselves.

The earliest examples I've found of a computer scientist talking about these sorts of problems are in the editions of Computer Graphics by William Newman and Bob Sproull. In those books, they trace the history of computer graphics hardware and describe how the tiniest changes in hardware (from processor throughput, to RAM, to how the monitor draws to the screen, to buffering, etc.) can affect which algorithm might be the 'best'.

So I would claim that the experiences of the Adobe engineers aren't novel but are actually very common, and that industry has not adopted a formal method for tackling this problem. A related example lies within each browser: Mozilla does not benchmark Firefox 'performance improvements' against older systems, so many 'upgrades' that commingle security patches with 'hot sexy user interface changes' actually end up being a mixed bag of good and bad. It is also well known that some instruction sequences burn battery juice faster in exchange for better performance, while other optimizations might take longer but save your juice.

It could be an excellent topic for one of your Ph.D. students to tackle: man-machine interfaces for optimization, using "finally tagless" or some other encoding as the basic structure for assessing trade-offs. Of course, I don't know whether your department head cares for this sort of stuff, whether you have tenure, whether you can find somebody to co-advise on the pieces you are not an expert in, etc.


I know Oleg and Ken have done some work in this direction already, and they are writing up some of these results. I was not involved in that work.

I am also quite interested in some aspects of this, and it would be perfectly appropriate for me to hire a PhD student to work on that (assuming I had the funding, which I won't know about for a few months). But I do have funding for a post-doc position to work on some related issues. [It is related because the underlying technological hurdles are very similar, even though the eventual applications seem very different.]


Had no idea you were at McMaster! I'm barely a hop away in TO. I suppose I shouldn't be too surprised though; there are probably some Queen's, U of T and York alumni on LtU as well.