Well, it's been really quiet around here, and I have been working on the design of my own programming language (like probably half the rest of the people here...), so I figured I would bounce some of my ideas off people here and find out why they're wrong :)

The type system is mostly what I'm curious to get opinions on. It starts with records or structures (however you want to look at them). The programmer can make a type like so:

type complex {
    a as real;
    b as real;
}


Now, when it comes to functions, they determine type compatibility by a pattern matching mechanism. This pretty much means functions can be overloaded, and matching is on all arguments. A pattern could be as simple as a type name, or it can merely look for a member of the record to match on, or any number of other patterns.
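To make the idea concrete, here is a minimal Python sketch of dispatch by structural pattern matching over all arguments. The names (`has_fields`, `defpattern`, `dispatch`) are mine, purely illustrative, and not the proposed language's actual semantics:

```python
# Sketch: overload resolution by matching a pattern per argument.
# Records are modeled as plain dicts.

overloads = []  # list of (tuple-of-predicates, implementation) pairs

def has_fields(*names):
    """Pattern: argument is a record containing these fields."""
    return lambda arg: isinstance(arg, dict) and all(n in arg for n in names)

def defpattern(*preds):
    """Register an implementation guarded by one predicate per argument."""
    def register(fn):
        overloads.append((preds, fn))
        return fn
    return register

def dispatch(*args):
    """Call the first registered overload whose patterns all match."""
    for preds, fn in overloads:
        if len(preds) == len(args) and all(p(a) for p, a in zip(preds, args)):
            return fn(*args)
    raise TypeError("no overload matches")

@defpattern(has_fields("a", "b"), has_fields("a", "b"))
def add_complex(x, y):
    # matches any two records that each have fields 'a' and 'b'
    return {"a": x["a"] + y["a"], "b": x["b"] + y["b"]}
```

So `dispatch({"a": 1, "b": 2}, {"a": 3, "b": 4})` picks `add_complex` because both arguments carry the required members, regardless of what the records are named.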

I'm not sure how creative this is, but here are a couple of my motivations: I like pattern matching, and I think type inference is cool too, but for me, coming up with the entities I want to manipulate is part of how I write code and think about the design. So the idea of "naming" the patterns and calling that a type has appeal to me, and overloading functions with different type signatures/patterns seems like a lot of what I want.

One other slightly oddball feature is that in an argument declaration, a function can declare "with coercion", in which case, if an argument doesn't match any pattern, a single-parameter function is sought out whose name starts with the character '!', which denotes that it can be used for conversions between types. This may be a bad feature or hard to implement, but I like the idea of being able to selectively allow something like it (at the programmer's request, of course).
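The coercion lookup could be sketched as follows in Python. Since '!' isn't a legal identifier character here, I register converters in a table keyed by target type name; everything below is an assumed illustration, not the actual mechanism:

```python
# Sketch of "with coercion": if an argument fails its pattern, look
# for a registered single-argument converter (the '!'-prefixed
# function in the proposal) and retry the match.

converters = {}  # target-type name -> conversion function

def converter(target):
    def register(fn):
        converters[target] = fn
        return fn
    return register

def coerce(value, target, check):
    """Return value if it already matches the pattern, else try the converter."""
    if check(value):
        return value
    if target in converters:
        converted = converters[target](value)
        if check(converted):
            return converted
    raise TypeError(f"cannot coerce to {target}")

@converter("complex")
def bang_complex(x):  # plays the role of a '!complex' conversion function
    return {"a": float(x), "b": 0.0}

is_complex = lambda v: isinstance(v, dict) and {"a", "b"} <= v.keys()
```

With this, `coerce(3, "complex", is_complex)` converts the bare number into a record, while an already-matching record passes through untouched.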

As to object oriented programming, I'm going to have lexical closures and hashes, which may not make the OO people happy, but it seems to cover a lot of cases.
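The closures-plus-hashes style covers many OO uses along these lines (a Python sketch, with all names invented for illustration):

```python
# An "object" as a hash of closures sharing private state.

def make_counter(start=0):
    state = {"n": start}  # private; only the closures below can reach it
    def incr(by=1):
        state["n"] += by
        return state["n"]
    def value():
        return state["n"]
    return {"incr": incr, "value": value}  # the object is just a hash

c = make_counter()
c["incr"]()   # method call = hash lookup + closure call
c["incr"](2)
```

The state is encapsulated because nothing outside `make_counter` can name it, which is the usual closure-based answer to data hiding.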

The rest of the language has a bit of an imperative feel, but even the control constructs return values, and it has anonymous functions. For algebraic expressions, the language allows functions of two parameters to be called in infix notation, and since function names are allowed to range over about anything you want, this makes operator overloading easy (without making it a special case). It also means that I don't really have precedence (although I'm still thinking about that), which adds a little uniformity with the built-in types, since several of them will have operators meant to be used in infix form.

Here are a couple more examples:

type phasor {
    r as real;
    theta as real;
}

function * phasor:(a as phasor, b as phasor)
{
    return (phasor:[a.r * b.r, a.theta + b.theta]);
}


Now, a function that wanted complex numbers (but didn't care whether they were in polar or rectangular form) might be declared like so:

function dosomething '[phasor, complex]:(a as '[phasor, complex], b as '[phasor, complex])
{
    ... do something ...
}


A function that wanted a structure with a specific member might go like this:

function getnext 'typevar:(a as 'typevar:[next as ref 'typevar])
{
    return (a.next);
}

Okay, bad example maybe (lists are a primitive type), but still. The quote denotes that the type is allowed to range over anything that matches the partial pattern that follows. In cases like this, where it shows up multiple times, the same value must be substituted throughout. In fact, using the same "variable" (type or otherwise) even in different arguments of the pattern declaration requires the values to match. I guess these are more like "logic" variables than anything.
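A toy version of matching on a partial structure can be sketched in Python. The matcher and its variable convention are my own hypothetical stand-ins; the consistency check (the same quoted variable must bind the same way everywhere it recurs) is the "logic variable" behavior described above:

```python
# Tiny structural matcher: patterns are dicts of required members,
# strings starting with a quote are pattern variables.

def match(pattern, value, bindings):
    if isinstance(pattern, str) and pattern.startswith("'"):
        if pattern in bindings:  # already bound: later uses must agree
            return bindings[pattern] == type(value).__name__
        bindings[pattern] = type(value).__name__
        return True
    if isinstance(pattern, dict):  # partial record pattern
        return isinstance(value, dict) and all(
            k in value and match(v, value[k], bindings)
            for k, v in pattern.items())
    return False

def getnext(a):
    # accepts any record that has a 'next' member
    if match({"next": "'t"}, a, {}):
        return a["next"]
    raise TypeError("no 'next' member")

node2 = {"next": None}
node1 = {"next": node2}
```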

I'm still working on syntax, especially for the pattern matching, but this is just to give you an idea of my current state of mind.

Any feedback on this is appreciated. This language is just me doing it for fun and the experience of designing a language. I'm using things on the edge of my understanding, but I'm also trying things that I think I would like to solve problems that bug me. So be harsh :).


This response is a little late because I've been working on my programming language. :-)

In general, overloading and type inference don't get along very well. The one is figuring out what function to call based on types, while the other is figuring out the types based on the functions that are called. It's possible to define these two features so that they work together, but I haven't given it much thought lately beyond understanding that it always gets messy.

The problem with overloading is that the language must resolve function applications in situations where more than one overloaded variant of a function could be called. There are reasonable ways to start out with the overloading resolution rules, but cases always come up where the results are not as the programmer expects, and the more sophisticated the overloading resolution is, the more arbitrary some of the resolutions become.

Haskell and Clean provide overloading and type inference, both of them without problems. There's also System CT.

Yes, overloading in Haskell in the form of type classes is so nice that, having been weaned on C++, I hardly recognize the correspondence.

### on inferencing.

I have thought a lot about the issue of type inference, and while I think it's interesting, it's not something that I have a huge amount of experience with, and so it's not to be a part of my design (currently...).

Another reason is that I have often found that I use type declarations to sketch out my ideas, so the type annotation is actually part of my thought process. I would say that I consider types to be the objects I'm manipulating, but unfortunately the word "object" has attached meanings that I do not always wish to imply.

### types and pattern matching

Another fundamental way of constructing types is with alternatives. (These are sometimes called sum types because the number of possible values is the sum of the number of values of each of the alternatives.) That's what unions are used for in C, though I wouldn't say that C unions give you sum types, because safety should be expected of such a feature: the language should keep track of which alternative a value actually holds.

In the direction you're headed with pattern matching, the significance of the names of record types is going to be interesting. If a function is declared to take a complex, will it accept any argument that has real fields 'a' and 'b'? It sounds as if you're not planning for abstract types (where a module could export a type and the fields of the record would be hidden) so structural compatibility would be appropriate. In other words, the name 'complex' would be used as a synonym for the field structure of its record.

### abstract typing and structural compatibility.

Right now, my thoughts are that types can be defined anywhere, and thus have a lexical scope of application. As to "data hiding", I intend for that to be accomplished by simply returning a lexical closure(hence an ADT is a type which consists only of a list of closures).
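The "ADT as a record of closures" idea looks like this in a Python sketch (names invented for illustration): the representation lives only in the enclosing scope, so clients can use the operations but never touch the representation directly.

```python
# Data hiding via closure: a stack whose backing list is unreachable
# from outside.

def make_stack():
    items = []  # hidden representation
    def push(x):
        items.append(x)
    def pop():
        return items.pop()
    def empty():
        return not items
    return {"push": push, "pop": pop, "empty": empty}

s = make_stack()
s["push"](1)
s["push"](2)
```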

As to structural compatibility, that is precisely what I'm looking at. "Types" are named patterns, but other patterns can be specified which parameterize the type name, and plain structural matching on member names within a record is also to be used.

Since the name of a type is lexical in scope, two functions can give the same pattern different names, but since matching can occur on more than just the name of a type, values can still be passed across many boundaries. So yeah, a function that takes "complex" values could be declared to match on the same field structure.

Oh yeah, in function declarations, anonymous patterns are allowed (i.e., no name specified), but if a parameterized name is specified, it does a kind of unification matching on all occurrences of the parameterized name (e.g., specifying a "b" in two separate patterns in a function's parameter list requires those two b's to have the same value).

I figure that this gives me something akin to reflection available in the system (symbols are allowed, and basically are what constitutes names), and I'm hoping that my approach to types being lexically scoped may allow for some natural ways to create a module system.

As another aside, I intend to have Erlang-style concurrency primitives as well (which the pattern matching should play nicely with). I'm actually considering treating lexical closures as separate processes, so they can be sent between processes, since only the process ID needs to be sent. That seems to accord, again, with the notion of a lexical closure being a nice way to think about OOP, in the same vein that the Erlang community treats processes as "objects". This may be a bad idea, so I'm not sure about it.
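The closure-as-process idea can be sketched with a thread and a mailbox in Python (an assumed model, not Erlang's actual semantics): the "object" is reached only through its mailbox, which is what would be sent between processes.

```python
# Sketch: a closure running as a process, addressed by its mailbox.

import queue
import threading

def spawn(handler):
    """Start a process that applies `handler` to each message; return its mailbox."""
    mailbox = queue.Queue()
    def loop():
        while True:
            msg, reply = mailbox.get()
            if msg == "stop":
                break
            reply.put(handler(msg))
    threading.Thread(target=loop, daemon=True).start()
    return mailbox  # the "process ID" clients hold

def call(mailbox, msg):
    """Synchronous send-and-receive."""
    reply = queue.Queue()
    mailbox.put((msg, reply))
    return reply.get()

doubler = spawn(lambda n: n * 2)
```

Here `call(doubler, 21)` behaves like a method invocation on an object that happens to live in its own process.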