Use Cases for Shared-Memory Concurrency?

The async/await thread got me thinking about Coyotos, and the fact that outside of the kernel I couldn't think of any use case where mutable shared-memory concurrency was motivated in application code. Assumptions:
While there are clearly algorithms that benefit from mutable shared-memory concurrency, most have equivalently efficient implementations built on messaging without the use of shared memory. This becomes especially true if we add the ability to observe when data is deep-frozen (which addresses many of the "initialize before going parallel" cases) to support immutable shared-memory concurrency. For example, we built a high-performance Ethernet driver in EROS that utterly relied on mutable concurrent shared memory for performance, but it could have been implemented with immutable shared memory if we had had the ability to transfer an observably frozen reference as part of a message. Offhand, I'm not convinced that message ports need to support mutable shared-memory data transfers at all.

By "observably deep frozen", I mean a reference into a previously mutable graph that can no longer be reached (from anywhere) through a reference that permits further mutation. One way to do this is to return a single [transitively] read-only reference to some element of the graph, where we can show that the graph has not otherwise escaped from the execution context in which it was mutable, and that the mutable execution context has terminated at the point of return. A similar effect can be had from deep copy, but deep copy can have prohibitive cost for large object graphs.

I'm sure there will be some obvious answers, but I'm prompted to ask: what kinds of codes or problem types exist that truly require the shared-memory concurrent model (i.e., that cannot be efficiently encoded using a combination of async/await and non-mutably-shared concurrent threads with messaging)? Is there a common pattern to them? Is the pool of such codes large enough to warrant first-class design support in programming languages and OS runtimes? How many of those cases would be eliminated if we had reliably observable deep-frozenness in the type system or the language runtime?

To be clear, this is mostly an idle ponder on my part. Putting the async/await discussion together with observable frozenness in my head got me thinking about this, so I thought I'd put the question out there for discussion.
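For concreteness, here is a minimal sketch of this kind of transfer, written in Rust purely for illustration. Rust's move semantics give a static approximation of observable deep-frozenness: the compiler proves that no mutable reference survives the handoff, rather than observing frozenness at runtime. The `Table` type and the channel setup are invented for the example.

```rust
use std::sync::{mpsc, Arc};
use std::thread;

// A structure built up mutably during an initialization phase.
// (Illustrative type, not from the original post.)
struct Table {
    entries: Vec<(String, u64)>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Arc<Table>>();

    // Builder: mutate freely, then "freeze" by moving into an Arc.
    let builder = thread::spawn(move || {
        let mut table = Table { entries: Vec::new() };
        for i in 0..1000u64 {
            table.entries.push((format!("key{i}"), i));
        }
        // Moving `table` into the Arc consumes the only mutable
        // binding, so no `&mut` into the graph can survive the send.
        // The receiver gets a read-only view with no deep copy.
        tx.send(Arc::new(table)).unwrap();
    });

    // Consumers: share the now-immutable structure across threads.
    let frozen = rx.recv().unwrap();
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let view = Arc::clone(&frozen); // cheap pointer copy
            thread::spawn(move || {
                let sum: u64 = view.entries.iter().map(|(_, v)| v).sum();
                println!("worker {id}: sum = {sum}");
            })
        })
        .collect();

    builder.join().unwrap();
    for w in workers {
        w.join().unwrap();
    }
}
```

The handoff is O(1): moving the structure into the `Arc` transfers ownership of the entire reachable graph without a deep copy, and the type system supplies the "has not otherwise escaped" guarantee, since the only mutable binding is consumed by the move.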
Regards and thanks...

By shap at 2020-09-10 16:58 | LtU Forum