ACM A.M. Turing Centenary Celebration

33 ACM A.M. Turing Award winners came together for the first time to honor the 100th anniversary of Alan Turing's birth and to reflect on his contributions, as well as on the past and future of computing. The event has now taken place, but everyone can join the conversation, #ACMTuring100, and view the webcast.

This event totally flew under my radar! Many thanks to Scott Wallace for pointing it out.

Highlights

In the Programming Languages panel it was really interesting to hear Niklaus Wirth lament that bloated software wastes so much computer capacity. Ten years ago this really seemed true -- "Intel giveth, Microsoft taketh away" -- but what about today? I think of the humongous data centers of Google, Amazon, etc., and can't help but think we're actually in a new era of hyper-efficiency.

Alan Kay's talk was excellent.

Hyper-efficient at what?

Just wondering.

Here's what I meant: The old

Here's what I meant:

The old model is that I have a computer on my desk, and it's way more powerful than I really need, so my desktop software vendors get lazy and soak up the excess capacity.

The new model is that software I use (search, Gmail, etc) is running in a massive server farm somewhere, and the people operating it are working really hard to increase their efficiency and keep their running costs down.

So there's a lot of work going into efficiency nowadays in the commercial software world.

Not so sure that that model is really revolutionary

In days of yore, companies were running a lot of software, especially where there was 'big data', on Big Iron (mainframes). I assume this also drove efficiency in those businesses, for the same reasons you mentioned.

Of course this was not so visible at the desktop end. But now that we have reinvented networked computation and storage as 'The Cloud', it's all more apparent to everyone.

Edit: At the time, storage constraints probably pushed the drive for efficiency in that direction, as opposed to computational efficiency. Educated guess; I worked on 'green screens' and saw some of the tricks used to save a few bytes. The Y2K bug was probably the most famous upshot of that drive.

Here's why I disagree

You and I both watched Alan's talk, and the abysmal follow-up Q&A by an audience that just didn't seem to understand Alan's notion of efficiency ("Isn't today's how tomorrow's what, ad infinitum?").

Sure, those companies have figured out how to build, in Alan's words, more efficient distilleries and refineries, but they are not attacking the true problem. How many Google employees does it take to sustain a massive server farm? How many lines of code?

Looked at another way, what is so much better in Gmail than in Office? For me, search is faster. And that has nothing to do with software vendors being lazy and soaking up excess capacity. When I search in Outlook, I am using all my disk capacity, all my CPU, all my memory, etc. But that doesn't matter. Why? Because e-mail search is an "embarrassingly parallel" problem that a 4-core CPU can't beat a massive server farm at. But, digging deeper, how many lines of code does it really take to support that feature? And why?
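To make the "embarrassingly parallel" point concrete, here is a minimal sketch (my own toy Python, not anyone's real system; all the names are made up) of a query fanned out over independent shards of a mailbox, with the per-shard hits simply concatenated:

    from concurrent.futures import ProcessPoolExecutor

    def search_shard(shard, query):
        # Scan one shard of messages independently of all the others;
        # no coordination is needed, which is what makes this "embarrassing".
        return [msg for msg in shard if query in msg]

    def parallel_search(shards, query):
        # Fan the same query out to every shard, then merge the hits.
        with ProcessPoolExecutor() as pool:
            per_shard = pool.map(search_shard, shards, [query] * len(shards))
        return [hit for hits in per_shard for hit in hits]

    if __name__ == "__main__":
        shards = [["re: lunch?", "invoice 1234 paid"], ["ticket 1234ABC open"]]
        print(parallel_search(shards, "1234"))

A server farm wins simply because it can hold many more shards and scan them all at once; nothing about the algorithm itself is clever.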

To call massive server farms a new era of hyper-efficiency is an interesting value judgment, to say the least. Note, I am simply saying I disagree, not that you are right or wrong. I simply want to juxtapose your comments with Alan's, raise some questions, and seek to further understand your viewpoint.

Edit:

Another thing to consider: in VPRI's STEPS reports, they talk about separating meaning from execution. Alan touched on that a bit, discussing Bret Victor's work and how he has taken advantage of the Nile language to present immediate feedback to the user on what the code is doing. Contrast that approach with how Peter Norvig describes Google's search engine code in the book Coders at Work: a big unholy mess of C code, where nobody knows where everything is and they have thousands of knobs to tune but have to hit compile to understand how those changes play out across thousands of servers. Food for thought: if Google were truly hyper-efficient, how would they scale the problem of better understanding their most valuable asset?

Separating meaning from execution is key here, and I wonder to what extent Google has truly achieved what Alan is hoping the future will be like, or if they are "preventing the future" by taking "the easiest way" possible.

Because e-mail search is an

Because e-mail search is an "embarrassingly parallel" problem that a 4-core CPU can't beat a massive server farm at. But, digging deeper, how many lines of code does it really take to support that feature? And why?

Isn't that huge? Google wins merely by thinking beyond desktop applications and scaling everything to their massive server farms (e.g., see Jeff Dean et al.'s recent work on scaling deep neural network training). And yes, it's not even that hard; it just requires a shift in mindset.

Google sucks at email search

The reason Google searches your email so fast is that it creates a word index of your email archives. Outlook could do the same thing Gmail does, only locally, and it would probably be just as fast or faster. But then it would be broken like Google's search and would fail to find sub-string matches. If I sound bitter, it's because Gmail's failure to find 1234ABC when searching for 1234 routinely causes me problems at work. (But I'm not really bitter.)
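To illustrate (a toy Python sketch of my own, not Gmail's actual implementation): a word index stores whole tokens, so "1234ABC" is indexed as a single token and a query for "1234" never reaches it, whereas a brute-force substring scan would:

    from collections import defaultdict
    import re

    def build_word_index(messages):
        # Map each whole token to the set of message ids containing it.
        index = defaultdict(set)
        for msg_id, text in enumerate(messages):
            for word in re.findall(r"\w+", text.lower()):
                index[word].add(msg_id)
        return index

    messages = ["ticket 1234ABC is still open", "invoice 1234 paid"]
    index = build_word_index(messages)

    print(index.get("1234"))     # {1} -- only the exact token matches
    print(index.get("1234abc"))  # {0} -- the full token is required
    # A substring scan finds both, but costs a full pass per query:
    print([i for i, m in enumerate(messages) if "1234" in m])  # [0, 1]

The index makes lookups fast precisely by giving up on substrings; Outlook could build the same structure locally.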

Outlook search doesn't even

Outlook search doesn't even work at all for me unless I'm using OWA. Basically, I log into webmail when I need to search for something... sad.

Sorry to hear that from your

Sorry to hear that from your side, but I can't help but find this ironically amusing.

See the paradox: your situation concretely requires you to use the worldwide network infrastructure (or the essential part of it anyway, namely HTTP) just to find stuff belonging to the 99% that is *your* private data (or constituents thereof: messages, documents, cited hyperlinks, etc.), even though today's hard drive could likely hold 100 times (if not 1000s of times) the total useful information you have used there, ever since you started working or even studying.

It is a bit like, after tediously looking for something in your drawers or folder closet, being condemned to step out into the middle of the street and start shouting at whoever could hear: "Help me find my XYZ out here! Here are some keywords, thanks!"

Kind of technologically eerie, no?

EDIT
By that, AFAICT, I can quite easily agree with Z-Bo's points, btw; the separation between meaning and execution isn't performed well enough either locally or globally, IMO; we're still struggling with accidental complexity inherited from our legacies, whether on a desktop with no network plugged in or not.

Efficiency

Too many people are stuck with the past meaning of efficiency: make the program more efficient. But Alan Kay is working much harder on the (to me) much more interesting meaning of efficiency: make programmers more efficient.

Making programs more efficient is really well understood. Making programmers more efficient seems to be in its infancy, and is really an art form at this point. We should try to move it towards science. And of course PLs have a huge part to play in this. Too bad most programmers are stuck with essentially a single language choice (C/C++/C#/Java/Go are extremely alike). Most of the languages VPRI is currently playing with are truly different from those.

Hyper-efficient at what?

Z-Bo:

Hyper-efficient at what?

At sustainably feeding hype to them, maybe? ;)

(just teasing)