Current "hot" topics in parallel programming?


I hope this is the right place to ask, but I've read a lot of good comments on other topics here, so I'll just ask. At the moment I'm searching for a topic for my dissertation (a Ph.D. in non-German countries, I think), which must have something to do with parallelism or concurrency, etc., but otherwise I'm quite free to choose what interests me. Also, anything involving GPUs is out, because a colleague of mine is already researching that topic and we'd like me to work on something else :)

So, the magic question is: What would you say are interesting topics in this area? Personally I'm interested in parallel functional programming languages and virtual machines in general, but I'd say that a lot of work has already been done there or is being actively researched (e.g. in the Haskell community).

I'd greatly appreciate any help in pointing me to other interesting topics.

Best regards,


"...a lot of work has been already done there..."

I wouldn't let the notion that "a lot of work has been already done there" put you off doing work in that area yourself. Just because some work has been done doesn't mean that there isn't plenty more to be done. In general, you're going to be much better off if your research topic is something that you've already got an interest in. Have you used any of the current parallel functional programming languages? What bugged you about them? What could you do to make them easier to use? Or able to provide better performance? Or more likely to produce a reliable system? Are there applications for which they don't work well, or problems to which they don't scale?
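As a tiny illustration of the kind of thing one can already do today (and where the usability questions above start to bite): plain GHC Haskell, using only `forkIO` and `MVar`s from `base`, can express fork/join-style parallelism by hand. `parSum` below is a made-up helper for this sketch, not a standard library function, and real parallel Haskell code would more likely use the `parallel` or `async` packages.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Sum a list in two halves, each on its own lightweight thread,
-- sending the partial results back through MVars (fork/join by hand).
parSum :: [Int] -> IO Int
parSum xs = do
  let (front, back) = splitAt (length xs `div` 2) xs
  m1 <- newEmptyMVar
  m2 <- newEmptyMVar
  _ <- forkIO (putMVar m1 $! sum front)  -- $! forces the sum in the child thread
  _ <- forkIO (putMVar m2 $! sum back)
  a <- takeMVar m1
  b <- takeMVar m2
  return (a + b)

main :: IO ()
main = parSum [1 .. 100] >>= print  -- prints 5050
```

The boilerplate here (explicit threads, explicit result channels, explicit forcing to avoid shipping unevaluated thunks between threads) is exactly the sort of friction that research on parallel functional languages tries to design away.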

I agree that this is not a

I agree that this is not a reason not to pursue a research area, but if I am allowed to rephrase the original question, I think the intent was to ask which questions or directions are currently considered interesting and worthwhile by people working in the area. That's always a good question to ask...

Moveable code, multi-core

I don't know what is considered hot at the moment, but I personally like it when research parallels industrial advances.

So, maybe you could look at moveable (efficient, type-safe) code (not Java sandboxing but more agent-like systems) for ubiquitous computing. Another thing is making use of current multi-core machines (where the number of cores is, say, at least 64) through new languages.

[Btw: some rationale for the ubiquitous computing. Although people have been working on that for, give or take, the last twenty years, I expect sensor networks will hit the mainstream within a few years from now. Probably through deployment in cellular phones or cars, maybe both. You might want to look at Sun SPOTs as a relatively low-cost platform for implementing languages on.]

[As has been stated below, Guy Steele and others who have one foot in industry are also looking at ways of leveraging upcoming processor designs through high-level languages.]

[I was assuming the GPU stuff is not multi-core research.]

[Another rationale for the movable code. Most sensor-network research is on data aggregation. I would like to see the reverse, spatial (or area-aware) computation, which is a generalization of that, where a node joins a computation by entering a region. No idea for an application domain, but it might be interesting. (I don't assume this is a new idea.)]

Guy Steele gave an

Guy Steele gave an interesting talk about parallel programming yesterday at CMU. The slides should be up on that page sometime soon.

Open Systems Survivability

My own interest is in open distributed systems programming with runtime upgrades, along with the optimization, security, and survivability aspects (of which there are a great many).

But that isn't exactly parallel programming, which I loosely understand as breaking algorithmic problems down so they can run on many CPUs at once to produce solutions.

topic = problem, not tech

I'm really concerned when a PhD student wants advice on what technology to explore rather than what problem to solve. My advice: find an open problem, do a review of the literature, then start thinking of a solution. Otherwise, you might be building tech that has no application, and you either have to BS through your dissertation or give up.

I personally find it better to start with application-domain problems (e.g., a scientific computing application) rather than known technical problems (e.g., lock contention). This gives you more latitude in creating a solution, though problems in both categories lead to valid topics. If you are itching to do language design, a problem in an application domain is more likely to lead to that kind of solution (e.g., a DSL; but be open-minded, and play devil's advocate with yourself).