Threading and multicore

Hello, I am trying to figure out a scheme for implementing multithreading on multiple cores for a language that I'd like to design. First of all I came to the conclusion that you cannot use native threads. Using native threads means that your language implementation can be interrupted at any time, making a single asm instruction the unit of atomicity. You need tons of synchronization and have to think about race conditions etc., which explodes code complexity. The benefits of native threads can better be achieved with custom-programmed threads, meaning: simple stack switching with a few assembler instructions. I found that Ian Taylor's google-go gcc branch is working on gcc support for variable-size stacks and split stacks: that means you can grow the stack dynamically while executing. The only need for native threads would be for blocking I/O, therefore I'd partition my language implementation into two native threads, one handling the I/O and the other implementing the threading in a custom fashion. Implementing threading myself means:
I can implement cooperative threading (a minimal sketch follows below).
I can define blocks of atomicity without any need for synchronization.
I can derive a garbage collection scheme where I can ignore stack references.
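
To make this concrete, here is a minimal sketch of the kind of cooperative ("green") threads I have in mind. It uses POSIX ucontext for the stack switch instead of the few hand-written assembler instructions I'd actually use, and the names (green_spawn, green_yield) are just for illustration:

    #include <stdlib.h>
    #include <ucontext.h>

    #define MAX_GREEN  16
    #define STACK_SIZE (64 * 1024)

    static ucontext_t ctx[MAX_GREEN];
    static int nthreads = 1;   /* slot 0 is the main/scheduler context  */
    static int current  = 0;   /* index of the green thread running now */

    /* Cooperative switch: control only leaves a green thread here, so
     * the code in between needs no locks against its sibling threads. */
    void green_yield(void)
    {
        int prev = current;
        current = (current + 1) % nthreads;
        swapcontext(&ctx[prev], &ctx[current]);
    }

    /* Create a green thread with its own heap-allocated stack. */
    void green_spawn(void (*entry)(void))
    {
        ucontext_t *t = &ctx[nthreads++];
        getcontext(t);
        t->uc_stack.ss_sp   = malloc(STACK_SIZE);
        t->uc_stack.ss_size = STACK_SIZE;
        t->uc_link = &ctx[0];        /* return to the main context on exit */
        makecontext(t, entry, 0);
    }

    static void worker(void)
    {
        /* ... interpreter work ... */
        green_yield();               /* hand the CPU back voluntarily */
        /* ... more work ... */
    }

    int main(void)
    {
        green_spawn(worker);
        green_yield();               /* start running the worker */
        return 0;
    }

Because control only ever leaves a green thread inside green_yield, everything between two yields is automatically a block of atomicity with respect to the other green threads.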
Now I have been thinking for quite a long time about how to handle multicore. Having handled multithreading the easy way, I don't want to start over with synchronization at the instruction level. Up till now I haven't come up with an enlightening solution.

Therefore I'd like to know whether someone can give me an idea...

The only thing I have come up with would be cross calls: an object belongs to a CPU, and every CPU has a worker thread running on every other CPU. If a CPU tries to modify an object that doesn't belong to it, it has to do a cross call: it needs to wake up the worker thread on the owner's CPU...
With only a few global objects this scheme might be workable, but it will surely kill the system if you have a lot of global objects...
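
Roughly, the cross-call scheme I'm picturing would look like this pthread sketch; all the names are made up, and the mailbox is just the simplest structure that works:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct object {
        int owner;                 /* index of the CPU this object lives on */
        int payload;
    } object_t;

    typedef struct request {
        void (*fn)(object_t *);    /* mutation to run on the owner's CPU */
        object_t *obj;
        struct request *next;
    } request_t;

    typedef struct cpu {
        int             id;
        pthread_mutex_t lock;      /* guards only the mailbox */
        pthread_cond_t  wake;
        request_t      *mailbox;
    } cpu_t;

    /* Post a cross call to the owning CPU and wake its worker thread. */
    static void cross_call(cpu_t *owner, request_t *req)
    {
        pthread_mutex_lock(&owner->lock);
        req->next = owner->mailbox;
        owner->mailbox = req;
        pthread_cond_signal(&owner->wake);
        pthread_mutex_unlock(&owner->lock);
    }

    /* Mutate an object: directly if we own it, via cross call otherwise. */
    void mutate(cpu_t *cpus, cpu_t *self, request_t *req)
    {
        if (req->obj->owner == self->id)
            req->fn(req->obj);                       /* local: no locking needed */
        else
            cross_call(&cpus[req->obj->owner], req); /* remote: message the owner */
    }

    /* Worker loop per CPU: objects it owns are only ever touched here. */
    void *cpu_worker(void *arg)
    {
        cpu_t *self = arg;
        for (;;) {
            pthread_mutex_lock(&self->lock);
            while (self->mailbox == NULL)
                pthread_cond_wait(&self->wake, &self->lock);
            request_t *req = self->mailbox;
            self->mailbox  = req->next;
            pthread_mutex_unlock(&self->lock);
            req->fn(req->obj);                       /* run the requested mutation */
        }
        return NULL;
    }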
Is there some other way to avoid the synchronization overkill in multicore mode?...
-- Greetings Konrad

Concept and implementation mismatch

There seems to be a mismatch between the concepts of your language and the available mechanisms for implementing them. Actual parallelism requires synchronisation for modifications of shared objects, and as you observe that has overheads which grow rapidly. This suggests that the language concepts are not appropriate for sharing objects. Instead you could consider whether serialisation of access to shared objects would be appropriate; that could then be implemented by message passing (for example Go channels), which is a well-understood mechanism for serialisation and maps onto the multicore threaded environment.
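
As an illustration of that mapping, a small bounded blocking queue in C can stand in for a Go channel: the thread that owns a shared object is the only receiver, so every modification arrives and is applied strictly in order, with no per-object locking anywhere else. (The names and the capacity are arbitrary.)

    #include <pthread.h>

    #define CAP 32

    /* A tiny bounded, blocking queue of pointers, playing the serialising
     * role of a Go channel. */
    typedef struct chan {
        void *buf[CAP];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty, not_full;
    } chan_t;

    void chan_init(chan_t *c)
    {
        c->head = c->tail = c->count = 0;
        pthread_mutex_init(&c->lock, NULL);
        pthread_cond_init(&c->not_empty, NULL);
        pthread_cond_init(&c->not_full, NULL);
    }

    void chan_send(chan_t *c, void *msg)       /* blocks while the queue is full */
    {
        pthread_mutex_lock(&c->lock);
        while (c->count == CAP)
            pthread_cond_wait(&c->not_full, &c->lock);
        c->buf[c->tail] = msg;
        c->tail = (c->tail + 1) % CAP;
        c->count++;
        pthread_cond_signal(&c->not_empty);
        pthread_mutex_unlock(&c->lock);
    }

    void *chan_recv(chan_t *c)                 /* blocks while the queue is empty */
    {
        pthread_mutex_lock(&c->lock);
        while (c->count == 0)
            pthread_cond_wait(&c->not_empty, &c->lock);
        void *msg = c->buf[c->head];
        c->head = (c->head + 1) % CAP;
        c->count--;
        pthread_cond_signal(&c->not_full);
        pthread_mutex_unlock(&c->lock);
        return msg;
    }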

Cheers
Lex

Thanks, I'll take a look at

Thanks, I'll take a look at that. It seems that a language gets easier to implement if I strip it of global shared objects; then I wouldn't need to take care of multicore synchronization. However, I'm so used to having them that this step seems kind of too radical...

E

Your solution of having each object "live" on a single CPU is very close to the concurrency model that E uses.

Hi, Thanks for the tip, I'll

Hi, thanks for the tip, I'll look at "E", I had never heard of it before.
Up till now I have come this far: I'll use something I call "cooperative locking"; I guess it is called something different in the academic world. Each object has a cpu-id field. If cpu-0, which wants to work with an object, sees that the cpu-id is cpu-1, it requests, with simple message passing, that cpu-1 set the cpu-id to cpu-0 (the owner CPU releases the lock). Because I'll use cooperative multithreading too, everything is fully synchronous: thread switching and locking happen at predefined source locations, maybe at each new "statement" in the interpreter...
That's my solution so far...
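
A rough sketch of what I mean, with made-up names and a single pending request per CPU instead of a proper request queue, would be:

    #include <stdatomic.h>
    #include <stddef.h>

    #define MAX_CPUS 4

    typedef struct object {
        _Atomic int cpu_id;    /* owning CPU; only the owner mutates the payload */
        int payload;
    } object_t;

    /* One pending ownership request per CPU; a real runtime would want a
     * proper per-CPU queue, a single slot just keeps the sketch short. */
    static _Atomic(object_t *) want_obj[MAX_CPUS];
    static _Atomic int         want_from[MAX_CPUS];

    /* Requester side: ask the current owner to hand the object over, and
     * keep running this CPU's own green threads until it arrives. */
    void acquire(object_t *obj, int self)
    {
        while (atomic_load(&obj->cpu_id) != self) {
            int owner = atomic_load(&obj->cpu_id);
            atomic_store(&want_from[owner], self);
            atomic_store(&want_obj[owner], obj);
            /* ... cooperative yield: run other green threads meanwhile ... */
        }
    }

    /* Owner side: called at every statement boundary. Because this is the
     * only place ownership is released, code between two statements never
     * needs per-object locks for the objects it owns. */
    void grant_pending(int self)
    {
        object_t *obj = atomic_exchange(&want_obj[self], NULL);
        if (obj != NULL)
            atomic_store(&obj->cpu_id, atomic_load(&want_from[self]));
    }

The grant_pending call would sit in the interpreter's statement loop, so ownership only ever changes hands at those predefined source locations.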