Top rated programming conferences?

I have my own programming language (these days, who hasn't?), and would like to learn the best way to promote it. LtU publishes conference materials, but usually after the fact. I understand that submitting something to the JVM Language Summit can give the project some visibility, but it is at least half a year away... What other conferences would you recommend?

P.S. I'm submitting my stuff to SIGMOD/PODS, but that is within the database community. Sadly, there is not much communication between PL and database people.

Depends

It depends what you mean by "promote." I'm assuming you mean you'd like to make people aware of your language and actually try to get people to adopt it for real tasks.

The purpose of PL conferences is not really to promote your language, but rather to disseminate your research ideas. To the degree that it's a form of promotion, it's really self-promotion, advancing your reputation in the research community and within your department, etc. But maybe that's too cynical.

If you mean industry conferences, again, I'm not sure that this is the best route. It seems there is mostly interest in tools that already have some acceptance and mainstream use. I really don't know how to go about promoting a new programming language.

(Well, you can always try to connect it somehow to Google... That seems to do the trick (Noop, Go)...)

Regarding "try to get people

Regarding "try to get people to adopt it for real tasks", I've heard the following anecdote. In order to get AT&T Labs people using C++, Bjarne removed all C compilers overnight, and the next day people learned they had to switch to C++. (Although I find it hard to believe, since his initial C++ implementation, Cfront, worked as a front end that translated C++ to C.)

Funny anecdote... but what

Funny anecdote...

but what is the goal of your programming language?

What is it about your programming language that you would like to promote?

How do you intend to promote it? Forget WHO... focus on telling us WHAT. Then conference selection is really a matter of targeting a conference that will (a) be familiar with your topic and (b) be receptive to it as something that advances research or opens up a new area of research.

The point is to get good feedback and encouragement on what direction to take the project. Unless you are closed-minded and visionary and don't care what people think. Then you simply need to be the best salesman the world has ever seen.

Relational Programming Language

I fully realize what challenges and competition a newly created language faces. Thus, I'm focusing more on ideas, and accept that a language's failure doesn't really mean anything (other than hurting the author's ambitions) if the ideas spread into more successful languages.

On the other hand, the language's focus -- relational programming -- is not an area where I feel strong competitor presence. What is the state of the art: SQL? One need go no further than Chris Date's writings to dismiss it. Then there is Datalog, Tutorial D, and several research-level relational languages one can find via Google.

I'd suggest that "relational programming" hasn't really taken off yet. Among other things, a genuinely successful general-purpose relational programming language would knock dead all these ad-hoc Object-Relational Mapping tools. It would provide a ubiquitous in-memory database, changing the way programmers think about their data structures. Functional people would have to put more effort into the algebraic underpinnings of list structures. Forget XML: who would want these awkward, verbose labeled trees when one has a standard way to specify a relation in a text file...
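To make the "ubiquitous in-memory database" point concrete, here is a toy Python sketch (all names illustrative, nothing to do with any particular language) of relations as first-class data structures with natural join and projection:

```python
# A toy relation is a set of rows; a row is a frozenset of
# (attribute, value) pairs, so plain set union and intersection
# already work on whole relations.
def row(**attrs):
    return frozenset(attrs.items())

def natural_join(r, s):
    out = set()
    for a in r:
        for b in s:
            da, db = dict(a), dict(b)
            # join when the rows agree on all shared attributes
            if all(da[k] == db[k] for k in da.keys() & db.keys()):
                out.add(frozenset({**da, **db}.items()))
    return out

def project(r, *attrs):
    return {frozenset((k, v) for k, v in t if k in attrs) for t in r}

emp = {row(name="Ada", dept="PL"), row(name="Bob", dept="DB")}
dep = {row(dept="PL", floor=3), row(dept="DB", floor=5)}
floors = project(natural_join(emp, dep), "name", "floor")
assert row(name="Ada", floor=3) in floors
```

Even this ten-line toy replaces what an ORM needs mapping files and generated classes to do.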

Among other things,

Among other things, a genuinely successful general-purpose relational programming language would knock dead all these ad-hoc Object-Relational Mapping tools. It would provide a ubiquitous in-memory database, changing the way programmers think about their data structures. Functional people would have to put more effort into the algebraic underpinnings of list structures.

You might have missed the party. .NET's LINQ is there, which was inspired by Haskell's advancements in this area. Unless you're considering something even more ambitious?

LINQ IMHO has not yet hit a

LINQ IMHO has not yet hit a sweet spot.

Entity Framework is painful to work with. It expects you to know the target of your query in advance of deployment. What I want is something like David Ryan's Argot* to automatically distribute code to the client, and allow the client to piece together sections of relational forms. This sounds very abstract, but having considered many alternatives, and being familiar with all the relational languages listed by the OP, I am pretty convinced this is what I want.

In other words, the core problem here is not an API problem or even a syntax problem. It is a deployment problem. It is only vaguely PLT related - currently. And this sort of deployment is not the same thing as mobile programming. Automatic code distribution to a client that has zero bootstrapping logic is a different problem.

*Argot was just submitted to the 6lowapp IETF working group, under the guise of "XPL/TRP - Extensible Presentation Language/Type Resolution Protocol".

Side note: I'm very familiar with various stuff going on in the .NET community, such as the project to support a code motion API for LINQ so that you can do common query transformations like scalar subquery unnesting. That's cool and all, but it is a very orthogonal concern and doesn't even require "LINQ", per se.

It expects you to know the

It expects you to know the target of your query in advance of deployment. What I want is something like David Ryan's Argot* to automatically distribute code to the client, and allow the client to piece together sections of relational forms.

I don't think you need to know the target in advance, just the schema of the objects being queried; perhaps I'm misunderstanding what you mean by "target". Also, you can piece fragments of queries together by working with IQueryable<T> instances. As for client mobility, it isn't supported by default, but expression trees can be easily serialized and reconstructed. What am I missing that couldn't easily be supported by a combination of existing .NET technologies and maybe a little glue?
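To illustrate what I mean by piecing fragments together, here is a rough Python analogue (hypothetical helpers standing in for IQueryable composition, not any real API): fragments are functions over a row stream, and nothing executes until the composed query is enumerated.

```python
# Query fragments as functions over a stream of rows; composing them
# only builds a description of the query -- nothing runs until the
# composed pipeline is actually iterated (like an unevaluated IQueryable).
def where(pred):
    return lambda rows: (r for r in rows if pred(r))

def select(proj):
    return lambda rows: (proj(r) for r in rows)

def compose(*fragments):
    def query(rows):
        for f in fragments:
            rows = f(rows)
        return rows
    return query

people = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 17}]
adult_names = compose(where(lambda r: r["age"] >= 18),
                      select(lambda r: r["name"]))
assert list(adult_names(people)) == ["Ada"]
```

A LINQ provider does the same kind of thing with expression trees instead of closures, which is what makes serialization possible.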

On being EASY

Having worked on the problem I am describing for longer than I can remember, and now having a full-time job where I've worked on the problem on and off as an R&D project for the next generation of our software... I can say that it isn't "easy" to solve a problem when you first have to well-define it.

I work pretty heavily with SQL. Although there are no job titles where I work, one valid one for me would probably be "Database Developer" (the phrase "DBA" is pretty obsolete these days).

I do think you understand the problem definition, at least somewhat, though. I base this guess on what I've seen you discuss on LtU before and what you state here:

expression trees can be easily serialized and reconstructed

Our application protocol is entirely syntax-directed messages. That means the meaning of a message is defined as a function of the structure of the sentence. Most implementations encode the structure into the meaning function. As Karl Lieberherr once stated, regarding his design of Adaptive Software, "This leads to software that is hard to maintain under changing structures, and those structures do change frequently." The key idea is that, through the Law of Demeter and "structure-shy programming", program transformations are simplified. For some transformations, updating a binary can be proven not to break modularity. That is to say, altering the grammar does NOT necessarily break message interpretation by an object. This has significant impact on application protocol versioning. (As I understand it, Argot includes all the ingredients for upgrading the protocol on the fly.)

There is also a separate design task of defining a visual front-end for all of this. Using emacs to shuffle around unstructured text is a very baroque way to manipulate a relational programming language, as Elmasri & Navathe essentially argued over 20 years ago. It is especially true for "the analyst" who is willing to be trained to learn a powerful interface (in the Doug Engelbart sense).

On a related note, much of what I do now is very related to the discussion David Barbour and Andreas Rossberg were having about using type systems as program extensions vs. type systems as IDE extensions. In the general case, the type system must be an IDE extension for its concepts to be re-usable by clients in small pieces. (However, I'm not sure yet where David wants to go with that idea, so I stayed a mile away from that discussion as it took place, and just watched it unfold.)

I hope that is clearer, while intentionally not giving away too many details about what I work on professionally.

I don't think you need to know the target in advance, just the schema of the objects being queried; perhaps I'm misunderstanding what you mean by "target"

There are various ways to blur the meaning of "target". Recent advances in data mining languages and tools try to do this by more or less "eliminating select" (I can't find references as I type this, but the basic idea is that when you are data mining you might not know exactly what you are looking for; instead you might be looking for relations between two unknown things, or maybe one known and one unknown, etc.). Another target you can perform late binding on is, as you say, the schema. To do this with Entity Framework requires emitting byte code, and also creating a byte-code emission caching layer on top of the emitter, so that you don't drown in the overhead of creating many similar types -- the .NET VM is not smart enough to do this for you, as far as I can tell.
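A Python sketch of the kind of caching layer I mean (illustrative names only; namedtuple stands in for emitted byte code): row types generated from a schema discovered at runtime are memoized by shape, so repeated queries over the same shape reuse one type instead of piling up many similar ones.

```python
from collections import namedtuple

_type_cache = {}  # keyed by (type name, tuple of field names)

def row_type(name, fields):
    """Return a record type for a schema discovered at runtime,
    memoized so repeated shapes reuse one generated type."""
    key = (name, tuple(fields))
    if key not in _type_cache:
        _type_cache[key] = namedtuple(name, fields)
    return _type_cache[key]

Person = row_type("Person", ["name", "age"])
again = row_type("Person", ["name", "age"])
assert Person is again              # same shape -> one cached type
assert Person("Ada", 36).age == 36
```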

I can say that it isn't

I can say that it isn't "easy" to solve a problem when you first have to well-define it.

I agree, but I'm afraid your elaboration didn't really clarify much for me as I'm obviously missing some context. I'll just note that the original message I responded to was specifically about relational programming and in-memory querying, which LINQ and Haskell seem to provide, and with more flexibility than SQL. They aren't quite "state of the art" in terms of the query expressiveness or execution model, but they are in terms of popularity/deployment.

While what you describe sounds interesting, I'm not sure how it relates specifically to relational programming, except perhaps as client convenience (IDE) and extensions for safe schema upgrade. Perhaps there is something specific to relational programming that you cannot divulge.

Another target you can perform late binding on is, as you say, the schema. To do this with Entity Framework requires emitting byte code, [...]

Is it really necessary to emit bytecode? Certainly, if you reuse that code frequently, custom bytecode achieves the highest performance, but it strikes me as merely an optimization, not as any more or less safe.

[...] and also creating a byte-code emission caching layer on top of the emitter, so that you don't drown in the overhead of creating many similar types -- the .NET VM is not smart enough to do this for you, as far as I can tell.

No, though .NET 4 implements code GC for all types and assemblies, not just DynamicMethod, so a caching layer will reduce overhead but not necessarily impact VM steady-state. I'm not sure that anyone would build a "multistaging cache" directly into their VM; it seems more the responsibility of a library.

Re: .NET and .NET 4

For .NET 3.5 SP1, based on CLR v2, the hooks I wanted weren't there.

There is no way to unload an assembly without unloading all the AppDomains containing it. This means you have to use AppDomains as a factory for generating types, with a separate AppDomain responsible for holding byte codes and acting as an enormous byte-code factory with flyweight intelligence. It is not just CPU performance but memory performance (which is the only bottleneck I usually care about, since I have the most control over it).

I am not sure I like the design I just described, but "it works". AppDomains, IMHO, should not be used as boundaries between every type in the system.

Bottom line: There is no good design I see here for expressing what I am saying, engineering-wise. Such is life inside a VM.

I'll do a better job investigating what .NET 4 offers. Thanks for the tip.

Bottom line: There is no

Bottom line: There is no good design I see here for expressing what I am saying, engineering-wise. Such is life inside a VM.

Interpretation would be an alternative, though perhaps it doesn't fit your performance requirements. An alternative I've played with a bit is a more low-level compilation technique using DynamicMethods whose code is GC'd in .NET 3.5.

Ultimately, all classes, records, variants, etc. can be encoded as compositions of products and sums. You can compile programs in your high-level API to runtime DynamicMethods which operate on a set of products/sums in a standard library/runtime, rather than generating new types at runtime.
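As a rough sketch of that encoding idea (Python here purely for illustration; the actual approach would use DynamicMethods over a small standard library of .NET types): sums as tagged pairs, products as plain tuples, with one generic operation replacing per-variant generated types.

```python
# Sums as tagged pairs, products as plain tuples; a generic 'match'
# replaces per-variant runtime-generated types.
def inl(x): return ("L", x)
def inr(x): return ("R", x)

def match(s, on_l, on_r):
    tag, v = s
    return on_l(v) if tag == "L" else on_r(v)

# Option[int] encoded as the sum 1 + int: inl(()) is None, inr(n) is Some n.
def safe_div(a, b):
    return inl(()) if b == 0 else inr(a // b)

assert match(safe_div(10, 2), lambda _: "none", lambda n: n) == 5
assert match(safe_div(1, 0), lambda _: "none", lambda n: n) == "none"
```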

Again, it's an approach I'm playing with, so I don't know how it will turn out in the end, and I obviously don't know if this would work in your problem domain.

...and I had a semi-working prototype of that idea. (I also had some other cool prototypes I hacked together based on examples I saw in Petzold's WPF book, and I might throw those particular prototypes up on my blog at some point to describe how to implement the basics of a "lively kernel"-style or Genera-style or Garnet-style environment where you can drill down and introspect the GUI. This builds off the composition stuff you mention. It's sort of like Snoop, but more composable and not rigid. [Edit: and Snoop is external process, whereas my tools were internal to the environment itself.])

I'm well aware of the DynamicMethod performance boost over the reflective method-invocation technique.

The next step was to add support for currying lambdas. I'm currently not working on this project, so it's been on hold the last three months.

The next step was to add

The next step was to add support for currying lambdas. I'm currently not working on this project, so it's been on hold the last three months.

Do you mean transforming multi-arg delegates to single-arg delegates? I've already created such combinators in my open source C# library. Not sure if this is the currying you meant, or if you meant a compilation transform of some sort for your relational language.
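For reference, currying in that first sense is a small combinator; a Python sketch (the C# version is the same idea with delegates):

```python
def curry2(f):
    """Turn a two-argument function into a chain of one-argument functions."""
    return lambda a: lambda b: f(a, b)

def uncurry2(g):
    """The inverse transformation."""
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
assert curry2(add)(2)(3) == 5
assert uncurry2(curry2(add))(2, 3) == 5
```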

So, does LINQ suggest a

So, does LINQ suggest a compelling query language? What is its main data structure: list (of functional heritage) or relation (from database model)?

To set the matter straight: I don't find the Tutorial D language compelling. It looks like some COBOL-style sugarcoating over relational algebra. I also don't agree with their thesis that object != relation. The algebra A described in Appendix A (http://www.dcs.warwick.ac.uk/~hugh/TTM/APPXA.pdf) has some novelty, and I do expand on those ideas.

I don't like LINQ that

I don't like LINQ that much.

I can't extend the query syntax to add extra front-end support for specific DSL back-end implementations. For example, I can't add a SKYLINE operator to LINQ.

There are other flaws I see in LINQ, but it all boils down to LINQ being very simple. This is also a strength, but I think more can be done (perhaps even into the future) to add more front-end to the language.

I am unaware of SKYLINE being implemented in Haskell.

LINQ's main data structure is a stream (IEnumerable), which is in effect a list. Its operators are in effect list comprehensions.
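For example (a Python analogue, with comprehensions playing the role of LINQ's query operators):

```python
numbers = [1, 2, 3, 4, 5]

# Where + Select: numbers.Where(n => n % 2 == 1).Select(n => n * n)
squares_of_odds = [n * n for n in numbers if n % 2 == 1]
assert squares_of_odds == [1, 9, 25]

# SelectMany as a nested comprehension -- in effect a cross join:
pairs = [(x, y) for x in [1, 2] for y in "ab"]
assert pairs == [(1, "a"), (1, "b"), (2, "a"), (2, "b")]
```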

I like Tutorial D's syntax for type representation, and built-in support for TABLE_DEE and TABLE_DUM.

Extensible operators and custom syntax

I can't extend the query syntax to add extra front-end support for specific DSL back-end implementations. For example, I can't add a SKYLINE operator to LINQ.

You can in GeniuSQL, which was doing LINQ-y things in Python long before LINQ came around.

I'm not familiar enough with GLORP to know for sure if it can do that, but I'd be willing to bet it can too; it's been around even longer.

Part of the problem is that

Part of the problem is that the LINQ back-end must also support suitable customization points; otherwise the query will be eagerly evaluated by the operator. There's no good way to snip a hole in LINQ to inject concepts across layers. I believe Frans Bouma's LLBLGen Pro LINQ-based ORM has a quasi-solution on the back-end, but I never looked into what they do exactly, because I'm not going to spend money on a license just to find that out. Technically, LINQ to SQL also does reflection on an operator or two to detect whether it can delay evaluation.

Which came first, GeniuSQL or SQLAlchemy? SQLAlchemy is not that appealing to me. I also don't understand the benefit of a Table Data Gateway pattern. Then again, I don't understand the need for any of the patterns in PoEAA, aside from legacy integration and words to describe existing, stupid, enterprise software. [Edit: Obviously, this is a hot button. I could probably say that some GoF patterns, as described by their UML diagrams, are dumb... but I rarely take the GoF's suggestions in implementing (or even UML-modeling), say, a State pattern.]

There are other things I'm neurotic about, which are probably more preference-oriented than things that really matter.

[Edit: In case you and others are interested, there is also the Dee DSL for Python, which is inspired by Tutorial D.]

SKYLINE operator is exotic

The SKYLINE operator is exotic -- not all of the big three vendors have bothered to implement it. The bigger question is what "extensible DBMS" means. I would argue that a user shouldn't be expected to program new relational operations. Doing so assumes in-depth expertise in how all the operations fit together and what kind of algebraic laws can be leveraged for query optimization -- this is the job of the language designer. In my language, in addition to standard predicate extensions (aka relations), there are means to define predicates programmatically (in procedural code). This is how a user can implement the ternary predicate "plus", for example.
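For the curious: SKYLINE computes the Pareto-optimal (undominated) rows of a relation. A naive quadratic sketch in Python (real implementations use much cleverer algorithms):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every dimension and strictly
    better in at least one (minimizing every dimension)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def skyline(rows):
    # keep every row not dominated by any other row
    return [r for r in rows if not any(dominates(s, r) for s in rows)]

# Hotels as (price, distance_to_beach): cheaper and closer are both better.
hotels = [(50, 8), (60, 2), (80, 1), (70, 5), (90, 9)]
assert skyline(hotels) == [(50, 8), (60, 2), (80, 1)]
```

The point stands that to integrate such an operator well, the optimizer needs to know its algebraic laws (e.g. how it commutes with selection), which is exactly the language designer's job.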

As for list comprehensions, this is not the kind of idea that is going to revolutionize the database field. I understand that their genesis can be traced to set comprehension, which brings us to the ZFC axiom system, the comprehension axiom, etc. The root of the problem, however, is the set membership relation. Set membership is very awkward from an algebraic perspective: most of the ZFC axioms are designed to fight its inconvenient algebraic properties: non-idempotence, non-transitivity. Compare it to the subset relation, which is a partial order, and to set intersection and set union, which form a lattice. It is the leverage of these three set operations that made the Relational Model so successful for database management.
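The contrast is easy to see concretely (Python sets used purely for illustration):

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}

# Union and intersection are idempotent, commutative, and associative...
assert A | A == A and A & A == A
assert A | B == B | A
assert (A | B) | C == A | (B | C)

# ...and subset is a partial order (reflexive, transitive):
assert A <= A and ({1} <= A <= {1, 2, 3})

# Membership has none of these properties -- it isn't even transitive:
S = {frozenset({1})}
assert frozenset({1}) in S and 1 not in S  # 1 ∈ {1} and {1} ∈ S, yet 1 ∉ S
```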

I would argue that a user

I would argue that a user shouldn't be expected to program new relational operations.

Who is your "user" in this scenario?

Edit: You are right. You're really saying there is no point in allowing users to do the sort of optimizations I am suggesting, because they are likely to get them wrong with disastrous effects. You win.

How do you plan to improve upon Tutorial D?

...nobody...

[Separate question: What does "Tegiri Nenashi" mean? Google says it is a joke name. Feel free to answer this one via email.]

Tutorial D vs. LINQ

Perhaps this should be a separate discussion. (And plugging my pet language as well :-) Yet, to test the water: how do you think LINQ and Tutorial D compare?

Captive audience

I thought about mentioning this earlier. Exploiting a captive audience is definitely a tried-and-true method of language evangelism. It helps if the audience is large and influential.

The purpose of PL

The purpose of PL conferences is not really to promote your language, but rather to disseminate your research ideas. To the degree that it's a form of promotion, it's really self-promotion, ...

Changing "disseminate" to "promote" may enable you to dial your cynicism down a notch or two. ;)

A SIGPLAN-ish conference

A SIGPLAN-ish conference (OOPSLA/ECOOP, PLDI, POPL, ICFP) is not very useful for promoting a language, but rather for promoting ideas in the language that can be used more universally. There is a classic post on how to submit your language to OOPSLA that someone could probably dig out (Bill Pugh wrote it?), but the basic gist is: focus on ideas vs. languages.

If you just want to promote your language... this is tough, as there is a lot of language noise out there. You need to find your niche and your niche audience. At the very least, create an open source project out of it with some decent documentation and examples. But even then, "if you build it, they won't come": you've got to promote your language directly to the niche you are targeting (a database conference is great for a database language).

Then be prepared for disappointment: 99% of all languages fail to obtain more than one or two users, while 99.9% of all languages fail to achieve critical mass. (All these numbers are made up, but representative of reality.)

Big and small

I agree with everything Sean says, and in particular: You need to find your niche and your niche audience.

Big conferences are probably the wrong place to do this. Big conferences give a decent number of people, whose interests intersect yours only slightly, some awareness of what you are up to. They are perfect for things that lots of people can see the point of, which won't usually include adopting a new programming language.

Smaller, more focused workshops are where you are more likely to get people whose interests are close to yours involved in what you are doing.

I've generally got more out of satellite workshops of big conferences than from the main programme.

Advice: look at where the papers that most interested you were published. Look at where the kind of users you are targeting tend to publish (if they do publish). People tend to decide which conferences and workshops are of interest to them based on who is on the program committee and who the invited speakers are. These should be good clues as to what kinds of CfPs might suit you.

Whether you want to talk to PL types or DB types depends on what you have got, what problems you need to solve, and what user problems you can solve.