out of memory

When a resource such as memory is exhausted, does the condition need special handling in the programming language? I see a way to avoid it, so my answer is no, but I wonder what kind of argument would require yes instead. Note that I rarely find paper references useful or interesting; I prefer basic ideas summarized at a granularity of dozens to hundreds of words. (This is an offshoot of the discussion Experiment, where out-of-memory conditions came up. I thought it might be interesting to discuss memory limits as they affect programming languages, if they do.)

Resource allocation can look like a system call to a PL (programming language), so running out of memory can be handled by the environment: a runtime hosting the PL can react instead of the language. For example, in a lightweight process system, the maximum memory used by one process might be capped. Hitting the limit occurs inside a call, blocking the group, process, thread, or fiber. What happens next depends on configuration, process architecture, and registered preference. Options include killing the process group, relaxing limits in triage, and/or blocking until space is freed by another scheduled entity under the same limit-scope mechanism.

An "important" process ought to have an out-of-memory handler, so it can react more flexibly than being killed. In a lightweight process system, this can be a signal awakening a fiber that handles high-priority event notifications like "you're overdrawn". From the PL's perspective, that code is just event handling, like any other event; a sketch of this view follows below.

When a resource runs out, it is the runtime or operating system that sees it, so a scheduler responds, not the PL. Is there a reason the PL should get any say? I can imagine a PL resisting resource management by the OS and/or runtime because it has no concept of running inside bounded space: limits imposed by the environment never provide enough room, the program cannot stay inside them, and handlers fire all the time to no productive result. But that seems like a language arrogating to itself part of the OS's resource-handling role without suitable responsibility and competence. Failing to live inside limits seems more flaw than feature.

My own point of view isn't interesting to me, because I already understand it; I'd prefer to hear different views so I learn something. Is there a reason a PL provides special value by handling resource exhaustion more directly than the host environment does?
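Here is a minimal sketch, in OCaml, of "exhaustion as just another event". The Pool module and the with_oom_handler/on_overdrawn names are hypothetical illustrations, not a real library; the budget enforced at the allocation call stands in for a limit imposed by a runtime or OS. (Stock OCaml does report allocation failure with the predefined Out_of_memory exception, which ordinary code can catch like any other exception.)

    (* A process-group-style memory budget, enforced at the allocation
       "system call" boundary rather than inside the language.
       Hypothetical sketch; Pool is not a real library module. *)
    module Pool = struct
      type t = { limit : int; mutable used : int }

      let create limit = { limit; used = 0 }

      (* alloc plays the role of the runtime/OS allocation call: it
         either grants the request or reports exhaustion to the caller. *)
      let alloc pool bytes =
        if pool.used + bytes > pool.limit then raise Out_of_memory
        else pool.used <- pool.used + bytes

      let free pool bytes = pool.used <- pool.used - bytes
    end

    (* An "important" process registers a handler instead of being
       killed. From the language's point of view this is ordinary event
       handling: a plain exception handler, nothing PL-specific. *)
    let with_oom_handler pool ~on_overdrawn f =
      try f () with Out_of_memory -> on_overdrawn pool

    let () =
      let pool = Pool.create 1024 in
      with_oom_handler pool
        ~on_overdrawn:(fun p ->
          (* triage point: relax the limit, shed work, or block until a
             sibling under the same limit scope frees space; here we log. *)
          Printf.printf "overdrawn: %d of %d bytes in use\n"
            p.Pool.used p.Pool.limit)
        (fun () ->
          Pool.alloc pool 512;
          Pool.alloc pool 512;
          Pool.alloc pool 512) (* third request exceeds the budget *)

The point of the sketch is that nothing language-specific happens: the limit lives in the environment (the pool), and the "handler" is ordinary control flow that the scheduler or runtime merely triggers.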
Cost semantics for functional languages

There is an ongoing discussion on LtU (there, and there) about whether RAM and other machine models are inherently a better basis for reasoning about (time and) memory usage than the lambda-calculus and functional languages. Guy Blelloch and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far.

A portion of the functional programming community has long been of the opinion that we do not need to refer to machines of the Turing tradition to reason about the execution of functional programs. Dynamic semantics (often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of a (functional) machine model -- just as "abstract machines" can be seen as just machines. This opinion has been made scientifically precise by various lines of work, including implicit (computational) complexity, resource analysis, and cost semantics for functional languages.

Guy Blelloch developed a family of cost semantics: annotations of the operational semantics of a functional language with new information that captures the more intensional behavior of the computation -- not only its result, but also its running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think about the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead -- or, from another point of view, have provided provably cost-effective implementations of functional languages in imperative languages and conversely. (A small sketch of the idea appears after the paper list below.)

This topic has been discussed by Robert Harper in two blog posts: Languages and Machines, which develops the general argument, and a second post on recent joint work by Guy and him on integrating cache-efficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book "Practical Foundations for Programming Languages".

In chronological order, three papers representative of the evolution of this work are the following.

Parallelism In Sequential Functional Languages
Space Profiling for Functional Programs

This paper clearly defines a notion of ideal memory usage (the set of store locations referenced by a value or by an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.
Cache and I/O efficient functional algorithms

The cost semantics in this most recent work incorporates further notions from garbage collection, to reason about cache-efficient allocation of values; in particular, it relies on work on formalizing garbage collection that has been mentioned on LtU before.
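To make the idea of a cost semantics concrete, here is a minimal sketch in OCaml -- my own illustration, not Blelloch's formal system: a big-step evaluator for a tiny call-by-value lambda calculus that returns, alongside each value, a cost record counting evaluation steps and heap allocations. Blelloch's semantics additionally track parallel work and depth, and cache behavior; this sketch models only the sequential fragment.

    (* A cost dynamics sketch: the operational semantics is annotated
       with costs, so efficiency is defined at the source level, with
       no machine model in sight. *)
    type expr =
      | Int of int
      | Var of string
      | Lam of string * expr
      | App of expr * expr
      | Add of expr * expr
      | Pair of expr * expr

    type value =
      | VInt of int
      | VClo of string * expr * env
      | VPair of value * value
    and env = (string * value) list

    type cost = { steps : int; allocs : int }

    let ( ++ ) a b = { steps = a.steps + b.steps; allocs = a.allocs + b.allocs }
    let step = { steps = 1; allocs = 0 }
    let alloc = { steps = 0; allocs = 1 }

    (* eval returns the value together with the cost of producing it;
       closures and pairs count as allocations. *)
    let rec eval env e : value * cost =
      match e with
      | Int n -> (VInt n, step)
      | Var x -> (List.assoc x env, step)
      | Lam (x, body) -> (VClo (x, body, env), step ++ alloc)
      | Add (e1, e2) ->
          let v1, c1 = eval env e1 in
          let v2, c2 = eval env e2 in
          (match v1, v2 with
           | VInt a, VInt b -> (VInt (a + b), step ++ c1 ++ c2)
           | _ -> failwith "Add expects integers")
      | Pair (e1, e2) ->
          let v1, c1 = eval env e1 in
          let v2, c2 = eval env e2 in
          (VPair (v1, v2), step ++ alloc ++ c1 ++ c2)
      | App (f, arg) ->
          let vf, cf = eval env f in
          let va, ca = eval env arg in
          (match vf with
           | VClo (x, body, cenv) ->
               let vb, cb = eval ((x, va) :: cenv) body in
               (vb, step ++ cf ++ ca ++ cb)
           | _ -> failwith "application of a non-function")

    (* Example: ((fun x -> x + x) 21) evaluates to 42 in a handful of
       steps with one closure allocation. *)
    let () =
      let prog = App (Lam ("x", Add (Var "x", Var "x")), Int 21) in
      let v, c = eval [] prog in
      match v with
      | VInt n -> Printf.printf "result=%d steps=%d allocs=%d\n" n c.steps c.allocs
      | _ -> ()

The profile here is defined entirely by the source-level semantics; a correspondence theorem in the style of the papers above would then relate steps and allocs to time and space on a concrete machine, possibly up to a logarithmic factor.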
By gasche at 2014-08-14 11:53 | Implementation | Lambda Calculus