## Is this a new programming paradigm?

I am implementing a new language whose fundamental blocks are two structures called "Connect" and "Signal".

The "Signal" carries the data from one connect to another.

A Connect takes a signal as input and produces another signal as output. The behaviour of a Connect depends on the following:
- Its type (not "type" as in programming theory; Connects come in different kinds)
- The data present on the signal
- The current state of the Connect (it acts like a Finite State Machine)

Can I call this a new way of doing things, a paradigm?

## Comment viewing options

### To me this looks a lot like the port-based paradigm

To me this looks a lot like the port-based paradigm of Pict, and to some extent of Erlang.

### Have a play with Max/MSP

It's a graphical language for processing streams of audio and MIDI data, seems to work in quite a similar way.

Maybe look into stream-based programming / languages for signal processing? I think there's quite a lot out there of this nature, if you ask engineers about it :)

### data flow

It is not new. This is data flow. There are many languages based on data flow. Some of these languages have a "clock calculus" to prove certain properties of the network, such as that it can be implemented in bounded memory.

### Functional-reactive programming

You might also want to look at functional-reactive programming languages (google it). Actually, I'm working on a connection-based language with FRP signals as a core construct. Not sure if it's just a coincidence; let me know if you want access to my dissertation or a paper I've prepared and will submit on this subject.

### Game semantics?

Another coincidence - I was trying to extend a DSL for financial contracts with concepts like advances, reserves, and cross-collateralization, and yesterday stumbled upon a need for dataflow. I started the evening on the FRP sites and finished it reading papers on the game semantics of the pi calculus...

So my question would be - are you using game semantics in your dissertation?

### no game semantics

My language is based on connections as simple linking mechanisms. Because such connections are not very expressive, object-oriented mechanisms are used to organize them. Anyway, I finally got around to putting a paper on the web:

See http://lamp.epfl.ch/~mcdirmid/mcdirmid06superglue.pdf for a 10,000-word paper on the language, and see http://lamp.epfl.ch/~mcdirmid/thesis.pdf for my dissertation.

### FrTime

An example of a functional reactive language is FrTime (Father Time). There is an implementation in PLT Scheme.

### Message passing

This looks a lot like message-passing concurrency. Each 'Connect' is defined by an FSM and sends messages to and receives messages from other Connects. This model has a long history. The Actor model of Carl Hewitt, defined in the early 1970s, uses this idea. Erlang uses this idea to make fault tolerance easy, and E uses it to make security easy. Chapter 5 of CTM explains the model and gives an example of a lift control system with the state diagrams. The special case when there is no observable nondeterminism is dataflow concurrency. It is explained in Chapter 4 of CTM. Dataflow concurrency has two flavors, eager and lazy (push and pull), which can be mixed together.
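A minimal sketch of that reading of the original question, in Python rather than the poster's own language (all names here are illustrative): a 'Connect' runs as a thread, receives signals from an inbox queue, and produces an output that depends on both the incoming data and its current FSM state.

```python
import threading
import queue

def connect(inbox, outbox):
    """A 'Connect' sketched as a finite state machine: its reaction to each
    incoming signal depends on its current state (here just 'even'/'odd')."""
    state = "even"
    while True:
        msg = inbox.get()          # blocking receive of the next signal
        if msg is None:            # sentinel: shut down and propagate it
            outbox.put(None)
            return
        # output depends on both the message and the current state
        outbox.put((state, msg))
        state = "odd" if state == "even" else "even"

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=connect, args=(inbox, outbox))
t.start()
for s in ["a", "b", "c"]:
    inbox.put(s)
inbox.put(None)

results = []
while (r := outbox.get()) is not None:
    results.append(r)
t.join()
print(results)  # [('even', 'a'), ('odd', 'b'), ('even', 'c')]
```

Wiring several of these together, outbox to inbox, gives exactly the message-passing network of Connects described above.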

### push == pull, or: I push you, you pull me.

> Dataflow concurrency has two flavors, eager and lazy (push and pull), which can be mixed together.

Is there really any difference? From inside a co-routine or process, push and pull are indistinguishable. The important distinction to me seems to be blocking versus non-blocking, because that tells you which end(s) is/are in control of the flow.

### viewing state

I think of the difference between push and pull in terms of viewing state. "Pull" allows us to get the current value of some state, while "push" allows us to be notified of changes in some state. For example, we can look at the clock to get the current time ("pull") or set an alarm to be notified when it's 6 AM ("push"). I know this gets away from the traditional notion of dataflow programming, which often does not deal with state. But I've found it to be useful in extending the dataflow paradigm with support for state.
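The clock/alarm example can be sketched in a few lines of Python (the `Clock` class and its method names are invented for illustration): "pull" is a getter for the current state, "push" is a callback registered to fire on changes.

```python
# Hypothetical state holder sketching the two views of state:
# "pull" reads the current value; "push" registers a callback fired on change.
class Clock:
    def __init__(self):
        self.hour = 0
        self.observers = []

    def now(self):                   # pull: ask for the current value
        return self.hour

    def on_change(self, callback):   # push: ask to be notified of changes
        self.observers.append(callback)

    def tick(self):
        self.hour = (self.hour + 1) % 24
        for cb in self.observers:
            cb(self.hour)

clock = Clock()
alarms = []
# "set an alarm for 6 AM": a push-style notification on a state change
clock.on_change(lambda h: alarms.append(h) if h == 6 else None)
for _ in range(8):
    clock.tick()
print(clock.now())  # pull: 8
print(alarms)       # push fired once: [6]
```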

### push == pull

> I think of the difference between push and pull in terms of viewing state. "Pull" allows us to get the current value of some state, while "push" allows us to be notified of changes in some state.

This is still looking at it in terms of who calls whom. You can turn pulls into pushes and vice versa by inverting who calls whom. Or, if you use processes, then it looks to each one like it is the one doing the calling. For example, assume you have three actors wired like this:

actorA -> actorB -> actorC

written as processes:

    void actorA(void) { while (true) { char x = getc(); send(x); } }
    void actorB(void) { while (true) { char y = receive(); send(y); } }
    void actorC(void) { while (true) { char z = receive(); putc(z); } }


Who is pushing and who is pulling?

the same thing written as push:

    void actorA(void) { while (true) { char x = getc(); push(x); } }
    void actorB(char y) { push(y); }
    void actorC(char z) { putc(z); }


the same thing written as pull:

    char actorA(void) { return getc(); }
    char actorB(void) { return pull(); }
    void actorC(void) { while (true) { char z = pull(); putc(z); } }


What matters is where the blocking is, not whether the implementation uses pushing, pulling or processes.
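The process version above can be made runnable in Python (standing in for the C-like pseudocode, with bounded queues playing the role of `send`/`receive`), which makes the point concrete: each actor just loops on a blocking operation, and whether you read that as pushing or pulling is a matter of perspective.

```python
import threading
import queue

# The actorA -> actorB -> actorC pipeline, with bounded queues standing in
# for the channels. Each actor loops on blocking put/get; the blocking of
# the bounded queue is what actually regulates the flow, not any notion of
# "push" vs "pull".
ab, bc = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
source = list("hi!")
sink = []

def actorA():
    for x in source:          # stands in for getc()
        ab.put(x)             # blocks when the downstream queue is full
    ab.put(None)              # end-of-stream sentinel

def actorB():
    while (y := ab.get()) is not None:
        bc.put(y)
    bc.put(None)

def actorC():
    while (z := bc.get()) is not None:
        sink.append(z)        # stands in for putc()

threads = [threading.Thread(target=f) for f in (actorA, actorB, actorC)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("".join(sink))  # hi!
```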

> For example, we can look at the clock to get the current time ("pull")

That's not really pulling, that's polling.

> set an alarm to be notified when it's 6 AM ("push")

You could also pull on the condition that it is 6 AM, which will block until it is 6 AM.
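That kind of blocking pull on a condition can be sketched with a condition variable in Python (the simulated clock and helper names are invented for illustration): the reader blocks until the predicate "it is 6 AM" holds, rather than polling.

```python
import threading

# "Pulling on the condition that it is 6 AM": the reader blocks on a
# condition variable until the (simulated) clock state satisfies a predicate.
hour = 0
cond = threading.Condition()

def wait_until_six():
    with cond:
        cond.wait_for(lambda: hour >= 6)  # blocks until the predicate holds
        return hour

result = []
reader = threading.Thread(target=lambda: result.append(wait_until_six()))
reader.start()

for _ in range(8):
    with cond:
        hour += 1          # the "clock" advances...
        cond.notify_all()  # ...and wakes any waiters to re-check the predicate
reader.join()
print(result[0])  # the first hour >= 6 the waiter happens to observe
```

Note the waiter may observe any hour from 6 up to 8 depending on scheduling; the guarantee is only that it unblocks once the condition holds.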

### Polling is a kind of pull

Polling == periodic pull

At least that's the usage I'm familiar with. For me, pushing is when the process that creates or changes data propagates the new data to other processes or data structures. Examples would include SMTP, spreadsheet recalc (update propagation), materialized views, callbacks, publish-subscribe, interrupt-driven I/O.

Pull would include POP3, polling, traditional database views, top-down recalculation.

The examples you gave all count as push. There are a lot of different ways to implement push, depending on the context (same process, separate processes, over the network, etc.). A blocking read, waiting for a new network connection, or providing callback functions are all valid ways, but they are pointless unless there is an active process that is going to write, signal a semaphore, or make a new network connection.

It may seem ambiguous in the case where the OS (or another intermediary) provides a general interface to wait for events, but I consider that a push (publish-subscribe). A process that creates a new file may not intend to push data to a process waiting on a directory change notification, but you're still limited to the events the OS provides; otherwise you have to resort to polling.

It is a bit ambiguous in the case of waiting for a particular time.

### Stop pushing :)

I would say that instead of discussing the push/pull distinction, it's more productive to look at control-transfer patterns in PLs (or should we call them blocking patterns?).

E.g., in the synchronous pi calculus the sender is blocked until the receiver receives the message; in the asynchronous pi calculus it is not blocked at all; and in call/return languages (functional PLs without control constructs a la call/cc) the caller is blocked until the receiver returns. In all cases we might want to call the sender a pusher, but only in the last case can we call him a puller as well (the synchronous pi case is on the border). The receiver in neither case can be usefully classified as either pusher or puller, hence my first statement about the (lack of) usefulness of push/pull terminology.

I guess what I am trying to say is that push/pull terminology is good for salespeople, but not for scientists.
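Two of those blocking patterns can be sketched in Python (all names here are illustrative; a synchronous rendezvous is simulated with an ack queue, since plain queues are asynchronous by nature): an asynchronous send returns immediately, while a synchronous send blocks the sender until the receiver has taken the message.

```python
import threading
import queue

# Sketch of two blocking patterns: an asynchronous send returns immediately,
# while a synchronous (rendezvous-style) send blocks until the receiver has
# taken the message -- simulated here with a per-send ack queue.
chan, ack = queue.Queue(), queue.Queue()
log = []

def async_send(msg):
    chan.put((msg, None))      # sender not blocked at all
    log.append("async send returned")

def sync_send(msg):
    chan.put((msg, ack))
    ack.get()                  # sender blocked until the receiver acknowledges
    log.append("sync send returned")

def receiver():
    for _ in range(2):
        msg, reply = chan.get()
        log.append(f"received {msg}")
        if reply is not None:
            reply.put("ok")    # release the blocked synchronous sender

t = threading.Thread(target=receiver)
t.start()
async_send("a")
sync_send("b")
t.join()
print(log)
```

The log always shows "received b" before "sync send returned", because the synchronous sender cannot proceed until the receiver acknowledges; no such ordering holds for the asynchronous send.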

### and in call/return languages

> and in call/return languages (functional PLs without control constructs a la call/cc) the caller is blocked until the receiver returns

Is this synchronous or asynchronous, given that both can be simulated in this case? Though the caller is blocked, which makes it synchronous, it can do something else too, in which case it is asynchronous. (At least in my language it has a chance of doing something else, so I assume every language has this feature too.)

Is there some formal work citing this?

I can say that it is because of the message-passing concurrency model, but that doesn't make it precise enough.

### this is a new programming paradigm

Paradigm is a very strong word. We will have to wait and see.
Something like a paradigm along the same lines was recently started at www.ontospace.net