LtU Forum

Call by Meaning

An exciting new paper in my Google Scholar feed, by Hesam Samimi et al. Abstract:

Software development involves stitching existing components together. These data/service components are usually not well-understood, as they are made by others and often obtained from somewhere on the Internet. This makes software development a daunting challenge, requiring programmers to manually discover the resources they need, understand their capabilities, adapt these resources to their needs, and update the system as external components change.

Software researchers have long realized why automation seems impossible: the lack of semantic “understanding” on the part of the machine about those components. A multitude of solutions have been proposed under the umbrella term Semantic Web (SW), in which semantic markup of the components with concepts from semantic ontologies and the ability to invoke queries over those concepts enables a form of automated discovery and mediation among software services.

On the other hand, programming languages rarely provide mechanisms for anchoring objects/data to real-world concepts. Inspired by the aspirations of SW, in this paper we reformulate its visions from the perspective of a programming model, i.e., that components themselves should be able to interact using semantic ontologies, rather than having a separate markup language and composition platform. In the vision, a rich specification language and common sense knowledge base over real-world concepts serves as a lingua franca to describe software components. Components can query the system to automatically (1) discover other components that provide needed functionality/data (2) discover the appropriate API within that component in order to obtain what is intended, and even (3) implicitly interpret the provided data in the desired form independent of the form originally presented by the provider component.

By demonstrating a successful realization of this vision on a micro-example, we hope to show how a programming languages (PL) approach to SW can be superior to existing engineered solutions, since the generality and expressiveness in the language can be harnessed, and encourage PL researchers to jump on the SW bandwagon.

A theory of data parallel computing

This work is motivated by the following considerations:

  • Parallelism in scientific computing is often of the data-parallel variety: a program has de facto sequential semantics, but operating on objects that are distributed over many locales. I'm not aware of much theory about this: theoretical computer science usually takes a CSP approach to parallelism, where each process does its own thing, and other processes are a source of messages that it reacts to.
  • In most contemporary processors data movement is very costly, more so than operations. There are few models that explicitly take into account that data is somewhere, and that you cannot just access data without making sure it's actually close to you.
  • There are many `modes' of parallelism these days: multi-threading, co-processor offloading, distributed-memory message passing. Code written for one mode is impossible to translate to another, so I wanted to have a mode-independent way of describing algorithms that could be `realized' in terms of existing systems.
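The data-parallel picture in the first bullet can be sketched in a few lines. This is only a toy illustration (the names `distribute`, `dmap`, and `gather` are mine, and real locales would hold their chunks in separate memories), but it shows the key point: the program reads sequentially while the data is partitioned.

```python
# Toy sketch of data-parallel semantics: the program reads like one
# sequential map over a single array, but the array is block-distributed
# over "locales" and each locale (conceptually in parallel) works on
# its own chunk.

def distribute(data, nlocales):
    """Block-distribute data into per-locale chunks."""
    size = (len(data) + nlocales - 1) // nlocales
    return [data[i:i + size] for i in range(0, len(data), size)]

def dmap(f, chunks):
    """Apply f elementwise; each locale handles only its own chunk."""
    return [[f(x) for x in chunk] for chunk in chunks]

def gather(chunks):
    """Collect the distributed result back into one array."""
    return [x for chunk in chunks for x in chunk]
```

Nothing in the program text mentions threads or messages, only where the data lives; a `realization' would map `dmap` onto whichever mode (threads, offload, message passing) is available.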

Here is my theory story. I have prototype software, and you can take my word for it that (within its limited functionality) it is as efficient as hand-coded mode-specific software.

pdf document, slightly updated

Feedback much appreciated.

SLE 2014 - Call for Participation

=======================================
CALL FOR PARTICIPATION

7th International Conference on Software Language Engineering (SLE) 2014

Vasteras, Sweden, September 15-16, 2014

http://www.sleconf.org/2014/

Co-located with:

29th IEEE/ACM International Conference on Automated Software Engineering (ASE 2014)
13th International Conference on Generative Programming: Concepts and Experiences (GPCE 2014)

---------------------------------------------------------
IMPORTANT DATES

SLE workshops: 14 September, 2014
Conference: 15-16 September, 2014

---------------------------------------------------------
SCOPE

Software Language Engineering (SLE) is the application of systematic, disciplined, and measurable approaches to the development, use, deployment, and maintenance of software languages. The term "software language" is used broadly, and includes: general-purpose programming languages; domain-specific languages (e.g. BPMN, Simulink, Modelica); modeling and metamodeling languages (e.g. SysML and UML); data models and ontologies (e.g. XML-based and OWL-based languages and vocabularies).

---------------------------------------------------------
TOPICS OF INTEREST

The overall principle of SLE is to be broad-minded and inclusive about relevance and scope. We solicit high-quality contributions in areas ranging from theoretical and conceptual contributions to tools, techniques, and frameworks. Topics of interest include, but are not limited to, the following:

- Tools and methods for software language design and extension (incl. meta-languages, meta-tools, language workbenches)
- Generative approaches, transformation and transformation languages, code generation
- Techniques for analysing software language descriptions
- Techniques for software language reuse, evolution and managing variation (syntactic/semantic) within language families
- Integration and coordination of software languages and tools
- Engineering Domain-Specific Languages (for modeling, simulating, generation, description, checking)
- Novel applications and/or empirical studies on any aspect of SLE (development, use, deployment, and maintenance of software languages)
- Cross-fertilization of different technological spaces (e.g. modelware, grammarware, etc)

---------------------------------------------------------
INVITED SPEAKER

Title: From Language Engineering to Viewpoint Engineering

Speaker: Colin Atkinson, University of Mannheim

Abstract:

As software systems increase in size and complexity, and are expected to cope with ever more quantities of information from ever more sources, there is an urgent and growing need for a more view-oriented approach to software engineering. Views allow stakeholders to see exactly the right information, at exactly the right time, in a way that best matches their capabilities and goals. However, this is only possible if the information is represented in the optimal languages (i.e. domain- and purpose-specific), with the necessary context information and the optimal manipulation/editing features - that is, if information is viewed from the optimal viewpoints. Rather than merely engineering languages, therefore, software engineers in the future will need to engineer viewpoints, which augment language definitions (e.g. meta-models, syntax ...) with context information (e.g. elision, location, perspective ...) and user-interaction information (e.g. editing pallets, view manipulation services ...). In this talk Colin Atkinson will outline the issues faced in supporting the flexible and efficient engineering of viewpoints and will present some key foundations of a fundamentally view-oriented approach to software engineering

---------------------------------------------------------
ACCEPTED PAPERS

- A SAT-based Debugging Tool for State Machines and Sequence Diagrams, Petra Kaufmann, Martin Kronegger, Andreas Pfandler, Martina Seidl and Magdalena Widl
- The Moldable Debugger: a Framework for Developing Domain-Specific Debuggers, Andrei Chis, Tudor Girba and Oscar Nierstrasz
- Streamlining Control Flow Graph Construction with DCFlow, Mark Hills
- Bounded Islands, Jan Kurs, Mircea Lungu and Oscar Nierstrasz
- Origin Tracking in Attribute Grammars, Kevin Williams and Eric Van Wyk
- Dynamic Scope Discovery for Model Transformations, Maris Jukss, Clark Verbrugge, Daniel Varro and Hans Vangheluwe
- Model Checking of CTL-Extended OCL Specifications, Sebastian Gabmeyer, Robert Bill, Martina Seidl and Petra Kaufmann
- Simple, efficient, sound-and-complete combinator parsing for all context-free grammars, using an oracle, Tom Ridge
- Towards User-Friendly Projectional Editors, Markus Voelter, Janet Siegmund, Thorsten Berger and Bernd Kolb
- Eco: a Language Composition Editor, Lukas Diekmann and Laurence Tratt
- Unifying and Generalizing Relations in Role-Based Data Modeling and Navigation, Daco Harkes and Eelco Visser
- ProMoBox: A Framework for Generating Domain-Specific Property Languages, Bart Meyers, Romuald Deshayes, Levi Lucio, Eugene Syriani, Manuel Wimmer and Hans Vangheluwe
- A Metamodel Family for Role-based Modeling and Programming Languages, Thomas Kühn, Max Leuthäuser, Sebastian Götz, Uwe Aßmann and Christoph Seidl
- fUML as an Assembly Language for Model Transformation, Massimo Tisi, Frédéric Jouault, Jérôme Delatour, Zied Saidi and Hassene Choura
- Evaluating the usability of a visual notation when developing new feature models, Aleksandar Jaksic, Robert France, Philippe Collet and Sudipto Ghosh
- Respect Your Parents: How Attribution and Rewriting Can Get Along, Tony Sloane, Matthew Roberts and Leonard Hamey
- Monto: A Disintegrated Development Environment, Tony Sloane, Matthew Roberts, Scott Buckley and Shaun Muscat
- AIOCJ: A Choreographic Framework for Safe Adaptive Distributed Applications, Mila Dalla Preda, Maurizio Gabbrielli, Saverio Giallorenzo, Ivan Lanese and Jacopo Maurio
- Test-data generation for Xtext with Xtextgen, Johannes Härtel, Lukas Härtel and Ralf Laemmel

---------------------------------------------------------
WORKSHOPS

ITSLE - This workshop explores SLE concepts from an industrial perspective.
http://www.sleconf.org/2014/ITSLE.html

Parsing@SLE - This workshop was first organized in 2013 and brought together a wide variety of people interested in parsing.
http://www.sleconf.org/2014/Parsing-at-SLE.html

---------------------------------------------------------
AWARDS

- Best paper. Award for best overall paper, as determined by the PC chairs based on the recommendations of the programme committee.
- Best reviewer. Award for best reviewer, as determined by the PC chairs using feedback from the authors.

Award Sponsors: Google, Gemoc, itemis

---------------------------------------------------------
REGISTRATION

Early registration ends 15 August.

http://www.sleconf.org/2014/Registration.html

---------------------------------------------------------
CONTACT

For any questions or concerns, please contact the General Chair: Jurgen.Vinju@cwi.nl

---------------------------------------------------------
GENERAL COMMITTEE CHAIR

Jurgen Vinju, Centrum Wiskunde & Informatica, The Netherlands

PROGRAM COMMITTEE CO-CHAIRS

Benoit Combemale, University of Rennes, France
David J. Pearce, Victoria University of Wellington, New Zealand

PROGRAM COMMITTEE

Emilie Balland, INRIA, France
Tony Clark, Middlesex University, UK
Zinovy Diskin, McMaster University / University of Waterloo, Canada
Martin Erwig, Oregon State University, USA
Anne Etien, University of Lille, France
Joerg Evermann, Memorial University of Newfoundland, Canada
Jean-Marie Favre, University of Grenoble, France
Robert France, Colorado State University, USA
Andy Gill, University of Kansas, USA
Martin Gogolla, University of Bremen, Germany
Pieter Van Gorp, Eindhoven University of Technology, The Netherlands
Giancarlo Guizzardi, Federal University of Espirito Santo, Brazil
Görel Hedin, Lund University, Sweden
Markus Herrmannsdoerfer, Technische Universitat Munchen, Germany
Jean-Marc Jézéquel, University of Rennes, France
Thomas Kuehne, Victoria University of Wellington, New Zealand
Ralf Laemmel, Universität Koblenz-Landau, Germany
Peter Mosses, Swansea University, UK
Sean Mcdirmid, Microsoft, China
Kim Mens, Université catholique de Louvain, Belgium
Marjan Mernik, University of Maribor, Slovenia
Pierre-Alain Muller, University of Haute-Alsace, France
Nathaniel Nystrom, University of Lugano, Switzerland
Klaus Ostermann, University of Marburg, Germany
Oscar Nierstrasz, University of Bern, Switzerland
Richard Paige, University of York, UK
Fiona Polack, University of York, UK
Arnd Poetzsch-Heffter, University of Kaiserslautern, Germany
Davide Di Ruscio, Università degli Studi dell'Aquila, Italy
João Saraiva, Universidade do Minho, Portugal
Bran Selic, Malina Software Corp., Canada
Jim Steel, University of Queensland, Australia
Tijs Van Der Storm, Centrum Wiskunde and Informatica, The Netherlands
Juha-Pekka Tolvanen, MetaCase, Finland
Michael Whalen, University of Minnesota, USA
Eric Van Wyk, University of Minnesota, USA
Steffen Zschaler, King's College London, UK

WORKSHOP ORGANIZATION CHAIR

Eric Van Wyk, University of Minnesota, USA

PANEL ORGANIZATION CHAIR

Ralf Lammel, Universitat Koblenz-Landau, Germany

PUBLICATION CHAIR

Olivier Barais, University of Rennes, France

PUBLICITY CHAIRS

Craig Anslow, University of Calgary, Canada (general publicity)
Tijs van der Storm, Centrum Wiskunde and Informatica, The Netherlands (social media)
Davy Landman, Centrum Wiskunde and Informatica, The Netherlands (web)

LOCAL ORGANIZATION CHAIR

Ivica Crnkovic, Malardalen University, Sweden
=======================================


Kind regards,
Craig

Craig Anslow, PhD
Postdoctoral Research Fellow
Department of Computer Science
University of Calgary, Canada
T: 403 210 8159
E: craig.anslow@ucalgary.ca
W: http://anslow.cpsc.ucalgary.ca/

Expressing Natural Deduction in Logic Languages

I am looking at the order-theoretic axioms for intuitionistic propositional logic, which form a Heyting algebra. I am thinking about the meet rule:

C ≤ A     C ≤ B
---------------
  C ≤ (A ∧ B)

My understanding is the rule can be read both forwards and backwards. I initially encoded it like this:

% rule/2 relates the two sides of an inference; the meet rule is
% encoded in both directions so it can be read forwards and backwards.
rule(le(C, meet(A, B)), pair(le(C, A), le(C, B))).
rule(pair(le(C, A), le(C, B)), le(C, meet(A, B))).

But this needs the following additional rules to be defined:

% Paired derivations combine componentwise (in either order),
% and rule/2 is closed under transitivity.
rule(pair(A, B), pair(X, Y)) :- rule(A, X), rule(B, Y).
rule(pair(A, B), pair(Y, X)) :- rule(A, X), rule(B, Y).
rule(A, B) :- rule(A, C), rule(C, B).

The last of these is particularly problematic: it matches any 'A' and 'B', which dramatically increases the search space and the search depth needed to find a solution.

So as an alternative I want to encode the rules more natively, and I am thinking about extending the interpreter to allow conjunctions in the head, so the rules would be:

le(C, meet(A, B)) :- le(C, A), le(C, B).
le(C, A), le(C, B) :- le(C, meet(A, B)).

The implementation would match the first head struct as usual, and then would search the remaining goals for a match to the conjoined structs in the head (removing the matching goals).

Does this seem like an efficient way to deal with this? Am I overlooking some way of rewriting the rules (although there is something to be said for keeping them close to their natural form)? Any suggestions for other ways to tackle the problem? I can't find any references to extending Prolog in this way, so I am concerned either that it is problematic or that there is a much simpler way to deal with it.
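For what it's worth, the bookkeeping of the proposed matching step can be sketched outside Prolog. Below is a minimal Python sketch that ignores unification entirely (goals are ground terms, compared by equality), just to show the shape: match the first head struct against the selected goal, consume goals matching the remaining head structs, then add the body goals.

```python
# Sketch of one resolution step with conjunctive heads, on ground terms.
# A rule is (heads, body): e.g. (["le(c,a)", "le(c,b)"], ["le(c,meet(a,b))"])
# encodes  le(c,a), le(c,b) :- le(c,meet(a,b)).

def step(goals, rules):
    """Resolve the first goal against each rule in turn; return the
    first new goal list obtained, or None if no rule applies."""
    if not goals:
        return []
    first, rest = goals[0], goals[1:]
    for heads, body in rules:
        if heads[0] != first:
            continue
        remaining = list(rest)
        matched = True
        for h in heads[1:]:
            if h in remaining:
                remaining.remove(h)   # consume the conjoined goal
            else:
                matched = False
                break
        if matched:
            return list(body) + remaining
    return None
```

With real terms you would unify rather than compare for equality, and backtrack over which goals get consumed, but the step itself is the same.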

Cross-platform idiomatic code generation

Some languages cross-compile: things like Xtend and Shen generate source code, whereas things like Bigloo and Fantom target bytecode. For some reason I daydream about tools to translate a Nice Language into Regular Day Job Languages while somehow maintaining an idiomatic source-level code style in each of them.

Model driven design tools talk about it sometimes: Practical Model-to-Code Transformation in Four Object-Oriented Languages. Admittedly that's constraining things a bit since they are all OO, and not e.g. 1 OO, 1 FP, 1 Logic target.

This paper describes a practical framework that generates idiomatic code in four different object-oriented languages, Java, C++, Eiffel and C#, starting from an intermediate model expressed as an XML parse tree. Effort was directed towards ensuring that the generated systems executed with identical semantics.

A Functional Representation of Data Structures with a Hole (1998)

LtU doesn't seem to have any mentions of this paper by Minamide from 1998.

Here's a simple problem that benefits from the ideas described in the paper. The usual way of writing functions like "map" or "append" in a strict language like ML involves building the result list in reverse and then reversing it, which allocates roughly twice the memory needed. Imperative languages like Java can do without intermediate lists, but at the cost of using mutation on the list nodes after allocating them. To get the best of both worlds, we can extend the idea of tail recursion to also allow "tail allocation", aka "destination-passing style", which just means that a recursive call nested within a constructor call should be treated as tail-recursive and compiled to code that uses mutation internally. That way, the naive implementations of functions like "map" and "append" become as efficient as imperative code, allocating only as much memory as needed for the final result.

Some folks on the OCaml mailing lists have pointed out that such an approach would break some assumptions of the garbage collector, namely that the old generation never points into the young generation. I don't know if that problem outweighs the potential benefits of the technique.
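To make the idea concrete, here is a small sketch (the names and list representation are mine, not from the paper) of what destination-passing compilation of "map" amounts to: allocate each result cell as you go and fill in its tail, the "hole", by mutation, so neither an intermediate reversed list nor a deep recursion is needed.

```python
class Cons:
    """A mutable cons cell; None is the empty list."""
    def __init__(self, head, tail=None):
        self.head, self.tail = head, tail

def dps_map(f, xs):
    """Map f over a cons list, destination-passing style: the hole is
    the tail field of the most recently allocated cell, filled by
    mutation when the next cell is allocated."""
    if xs is None:
        return None
    result = Cons(f(xs.head))         # allocate the first cell with a hole
    dest, xs = result, xs.tail
    while xs is not None:
        dest.tail = Cons(f(xs.head))  # fill the hole with the next cell
        dest, xs = dest.tail, xs.tail
    return result
```

A compiler applying the technique would generate something like this from the naive recursive "map"; the paper's contribution, as I understand it, is a typed functional account of such "data structures with a hole" so the mutation never leaks into the surface semantics.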

No Instruction Set Computer NISC

How might your language design change in light of this sort of vertical integration?

Abstract: General-purpose processors are often unable to exploit the parallelism inherent to the software code. This is why additional hardware accelerators are needed to enable meeting the performance goals. NISC (No-Instruction-Set Computer) is a new approach to hardware-software co-design based on automatic generation of special-purpose processors. It was designed to be self-sufficient and it eliminates the need for other processors in the system. This work describes a method for expanding the application domain of the NISC processor to general-purpose processor systems with large amounts of processor-specific legacy code. This coprocessor-based approach allows application acceleration by utilizing both instruction-level and task-level parallelism by migrating performance-critical parts of an application to hardware without the need for changing the rest of the program code. For demonstration of this concept, a NISC coprocessor WISHBONE interface was designed. It was implemented and tested in a WISHBONE system based on Altium’s TSK3000A general-purpose RISC soft processor and an analytical model was proposed to provide the means to evaluate its efficiency in arbitrary systems.

Generational Real-Time Garbage Collection

(This researcher seems to me, admittedly an ignorant neophyte, to have quite a bit of interesting work on GC, among other interesting things.)

Generational Real-Time Garbage Collection

Abstract. While real-time garbage collection is now available in production virtual machines, the lack of generational capability means applications with high allocation rates are subject to reduced throughput and high space overheads.

Since frequent allocation is often correlated with a high-level, object-oriented style of programming, this can force builders of real-time systems to compromise on software engineering.

We have developed a fully incremental, real-time generational collector based on a tri-partite nursery, which partitions the nursery into regions that are being allocated, collected, and promoted. Nursery collections are incremental, and can occur within any phase of a mature collection.

We present the design, mathematical model, and implementation of our collector in IBM’s production Real-time Java virtual machine, and show both analytically and experimentally that the collector achieves real-time bounds comparable to a non-generational Metronome-style collector, while cutting memory consumption and total execution times by as much as 44% and 24% respectively.

P.S. I'm less interested in supporting OO everywhere than I am in having kick-ass GCs; I assume that using something more FP with a GC like this would never be less performant anyway.

Real time GC for FPGAs

A real time collector for reconfigurable hardware seems kinda like a nice little 'hardware' implementation.

It is thus quite a tour de force that the authors of the following paper have built a provably correct real-time collector for reconfigurable hardware (field programmable gate arrays). How can this be? It turns out the FPGA setting makes the problem simpler in some ways than is the case with software GC running on stock processors. They can reasonably impose simple uniformity on the memory and its layout; they can exploit single-clock read-modify-write steps; and perhaps most importantly they have, via dual ported memory, completely concurrent memory access. This all leads to one of the cleanest and perhaps most understandable implementations of a concurrent GC that has ever been presented. The other prerequisites for real-time collection also follow easily. It is difficult to find the right word to express the feeling I get seeing such a sophisticated algorithmic idea reduced to such a straightforward hardware implementation—“Cool!” will have to suffice. (emphasis mine)
