Wirth Symposium

Celebrating Niklaus Wirth's 80th Birthday, 20th Feb 2014.

Niklaus Wirth was a Professor of Computer Science at ETH Zürich, Switzerland, from 1968 to 1999. His principal areas of contribution were programming languages and methodology, software engineering, and the design of personal workstations. He designed the programming languages Algol W, Pascal, Modula-2, and Oberon, was involved in the methodologies of structured programming and stepwise refinement, and designed and built the workstations Lilith and Ceres. He published several textbooks for courses on programming, algorithms and data structures, and logical design of digital circuits. He has received various prizes and honorary doctorates, including the Turing Award, the IEEE Computer Pioneer Award, and the ACM Award for Outstanding Contributions to Computer Science Education.

We celebrated Niklaus Wirth's 80th birthday at ETH Zürich with talks by Vint Cerf, Hans Eberlé, Michael Franz, Bertrand Meyer, Carroll Morgan, Martin Odersky, Clemens Szyperski, and Kathleen Jensen. Wirth himself gave a talk about his recent port of Oberon onto a low-cost Xilinx FPGA with a CPU of his own design.

The webpage includes videos of the presentations.

The proof of OS design with GC enabled systems languages

I really appreciate Niklaus Wirth's work for showing me a world of systems programming with memory safe languages, before I got the chance to be infected with C.

Thanks to him, I learned that:

- you don't need to have bounds checking disabled by default, but can disable it explicitly if really, really, really required for those extra ms

- arrays don't need to be implicitly converted to pointers

- one can use safe out parameters

- fast compilation is no big issue when a language has proper module support

- type systems provide a nice way to validate certain types of inputs

- usable operating systems are possible with GC enabled systems programming languages

And yet kernels in

And yet kernels in mainstream use are yet to be written in very safe languages. Kernels have also become absolutely boring as research topics, so we are unlikely to see very many new ones in this direction (but maybe Midori will work out?).

Which kernels in mainstream

Which kernels in mainstream use are written in very safe languages? [no longer relevant; above comment has been corrected]

Missed a word. I meant "not

Missed a word. I meant "not yet".

Oh. That makes much more

Oh. That makes much more sense.

crisis, schmisis?

At some point during the symposium, one of the presenters made the aside that the "software crisis" never really manifested itself in the sense which they had been expecting.

As a "post-crisis" programmer, I had to refer to the wikipedia entry which states that they had been worried that software projects were late, over budget, inefficient, poorly addressed requirements, and were difficult to maintain ... if one was lucky enough to get a running system delivered at all. With the benefit of hindsight, it seems we all have to shrug and say "nu, you were expecting maybe a pony?"

But an interview with Norm Hardy shows the past is a foreign country, and they do things differently there (or rather: they did them similarly, but ingenuously believed they were going to do them differently):

... indeed, we did write those programs. We were just off by an order of magnitude how long it would take to finish them all. ... The good news is that it finally worked! The bad news is that two generations of computers had come and gone before it began to work. ... No one had heard of big software, or at least we hadn't. I think some of the SAGE people were beginning to get a clue as to what the problems were. ... I can sort of get back in the frame of mind and, back in the days when programs were only a few thousand words long, you'd have an idea. And a few days later, if you weren't disrupted, it would be running. And the idea that some collection of programs might take years to do just hadn't occurred to us.

Blame UNIX

Personally I think it was a set of factors that somehow played together to reach the current status quo.

- in the US, UNIX started to spread in some universities

- some of those students decided to create businesses based on UNIX

- some developers started to create C compilers for home computers as a means to code at home the stuff they were doing at work, sadly forgetting to port lint as well

- it is very hard to bring a new OS into the market, so vendors build on existing toolchains instead of starting from scratch

- human resistance to change is underestimated most of the time, as the WP7 and Android examples show, and they are only using safe languages at the userland level

- the UNIX OS culture, which is interwoven with C, is now almost everywhere, coupled with OS source code available free of charge, thus killing any desire to do OS research or bring other OS models into the market

Personally I follow with some hope that Singularity, MirageOS and similar endeavours will influence the industry, but it will take generations of developers.

I am a firm believer that some technology changes can only successfully happen coupled with new generations, as history has proven multiple times.

The question is do we need

The question is do we need it? Kernels have become fairly stable and static, and since we don't need a lot of them...we can invest in some careful human effort vs. better tooling. If we had to build many of them, then investing in better tooling is useful, but otherwise, we should focus safe languages on the layers that are more volatile (wrt code changes) and necessarily diverse (i.e. careful human effort is no longer scalable).

since we don't need a lot of

since we don't need a lot of them...we can invest in some careful human effort vs. better tooling. If we had to build many of them, then investing in better tooling is useful

Human effort doesn't get you the guarantees that 'tooling' provides, assuming 'tooling' here refers to any kind of formal methods, like type systems, model checking, etc.

Correct, but it is always a

Correct, but it is always a matter of cost vs. benefit right? Tools: high capital costs (you have to build them), perhaps continued compute power costs (if dynamic), but low ongoing costs. People: low capital costs, high ongoing costs, potentially lower compute power costs (since they can make optimization decisions no sane tool would dare make).

But who knows what might come out of Midori or Rust.

Real cost of tools

as illustrated by Randall Munroe.

Can we create systems that lower the total cost of tooling? I feel there are significant opportunity costs due to ad-hoc syntax, ad-hoc composition, ad-hoc effects models. The massive walls between 'applications' aren't helping - it's ridiculous that we can't easily automate or macro everything a normal user can do.

Is It Worth the Time?

You know, I had been trying to figure out what there is to be said beyond what RPG observed so long ago, and I think Munroe really said it best in "Is It Worth the Time?"

The basic issue is that, by our standards of attention span, almost all the possible schedules in this graph are rather ephemeral; there's a narrow fringe between possible times on the subdiagonals and the impossible region in the lower left. Now, if one can divide through by "how often someone does the task" and "how much time someone shaves off", the ratios become much more realistic[0]. However, the bottleneck of this process for innovation is that, for any individual adopter, the initial rewards on the learning curve had better be within the timeframes above the superdiagonal in the upper right corner.
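
For anyone who wants the arithmetic behind the chart, here is a minimal sketch in Python (the example numbers are made up; the five-year horizon matches Munroe's chart):

    # Break-even budget for automating a task, per the chart:
    # (time shaved off per run) x (runs per day) x (days in the horizon).
    def break_even_budget(seconds_saved_per_run, runs_per_day, horizon_days=5 * 365):
        """Seconds you may spend on the optimization before it stops paying off."""
        return seconds_saved_per_run * runs_per_day * horizon_days

    # e.g. shaving 5 seconds off a task done 5 times a day:
    budget = break_even_budget(seconds_saved_per_run=5, runs_per_day=5)
    print(budget, "seconds, or about", round(budget / 3600, 1), "hours")
    # ~12.7 hours over five years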

Aside: I collect war stories, perhaps the most historically fascinating of which was a guy I serendipitously met at a deli last century, who had been inspired as a boy by farm automation[2] and later made his pile by developing pinsetting[3] machines which enabled a bowling alley boom. Although he predated Brooks by several decades, he'd even experienced the classical "second system syndrome": he'd nearly lost it all when his group tried to follow up on their initial success by automating restaurants[4]. (this is common knowledge in 2014, but for anyone reading LtU in the far future and needing context: we were served by a waitress that day)
(End of Aside)


0. This also explains how APL came to be: Harvard only has so many students, so they couldn't justify spending their endowment[1] on Iverson, but IBM had enough people (external and internal) who could each benefit a little, that they could easily pay him for a lot of thought.
1. It is no coincidence that Plato taught in his olive grove; then (as now, in many parts of the world) olive groves play the same role (stable income with little labor) that bonds do in an institutional portfolio.
2. The New Idea Manure Spreader came into being because "Oppenheim, a schoolmaster in the small town, [was] concerned that his older male students often missed school loading and spreading manure" and in fact still exists. (for those of you who may have spent more time in front of keyboards than behind large animals: the manure this device spreads comes out of the end of the animal opposite to where the feed goes in)
3. Like "computer", once upon a time "pinsetter" was a job title, not a machine description.
4. At the symposium, there was also mention of high vs. low culture. Note that early attempts, such as the Jaquet-Droz automata (18th century), expressed high culture: in this case, writing, art, and music. These days we've learned that low culture (e.g. changing a tire) can be much more difficult to automate than high (e.g. playing chess).

capitalism vs. bugs

This probably deserves a longer treatment but:

Can we create systems that lower the total cost of tooling?

In almost every case I can think of today, the cost firms pay for software development tools is approximately $0.

In very many cases the amount firms pay developers in wages is approximately $0 (e.g. many mobile apps, big web services, start-up web services).

In very many cases the amount of loss a bug costs a firm is very, very low compared to revenues.

The cost of bugs to firms is low because users rarely have a consumer choice between software alternatives, nevermind a choice between more buggy and less buggy software.

There are some niches that I think behave differently. For example, I understand the software used for high speed trading to be very costly to develop and I understand the losses due to a bug can be extremely high. In niches like these there is some incentive to have better software tools.

Even in those highly bug-averse niches, the potential profit from selling a new software tool is severely limited. Roughly speaking, an upper bound on the value of a new software development tool is equal to the amount of wages it can save the buyer. An upper bound on the amount of wages it can save the buyer is the amount of wages it would take the employer to build a similar or better tool from scratch.

Better software development tools thus have a potential price pretty close to 0 but what about the potential wages for building one? Can't such tools be built in the public interest, say via research grants or donations to non-profit organizations? Can they not go from academia into industry?

Deployment of a new software development tool imposes on firms (a) one time transaction costs to initially deploy the tool; (b) recurring transaction costs from adding extra steps to the firm's development process; (c) increased labor costs for running the tool and making use of its outputs.

So, with adoption costly and the potential payoff capped, there are not a lot of opportunities even in academia or the non-profit world to invest a lot of work in improved software development tools.

As an example, there is a tiny amount of funding spent in the world to improve the bug-killing features of GCC. This tiny amount is marginal on the maintenance costs mostly paid by firms that use GCC and don't want to have to switch away. Hey, maybe Apple will (superstitiously, if nothing else) kick 5 or low-6 figures towards improved dead-code detection in GCC.

For users, bugs can impose all kinds of diffuse costs and generally make life just that much more miserable. Too bad for them, they aren't actually the customers -- they just get stuck paying a lot.

In short, economically speaking, making better programming languages and better development tools is a nice hobby and, for a few fairly randomly chosen lucky ducks, a "lifestyle job" paying enough direct or indirect wages to do OK.

Activation Energy

(This response is a bit to you and a bit to Dave.)

A large firm can often absorb the costs of developing better tools (be it for in-house software development or business). And assuming a larger population of users, and a low enough training overhead, even marginal benefits can pay for themselves. It is relatively easy to achieve small, marginal benefits. So the financial risk is low, and investment is readily justified.

However, a small company - or even a small 'branch' of a larger company (as they tend to be organized) - has a different situation. Hiring even one software developer, or investing in problem-specific tools, can account for a significant fraction of liquid assets. Tools developed for this company or branch will have a smaller population of users, and thus the benefits must be much greater to pay for themselves. Consequently, there is greater risk that an investment will not pay for itself, and a tighter grip on the purse.

In practice, software development is typically a 'small branch' of larger companies, or even many small branches (different group per project). Hence, it is generally difficult for software developers - even those in large companies - to find time and money to improve their own software development tools.

So, to lower the "total cost of tooling" isn't sufficient. We also need to lower the initial activation energy, such that developing a useful, new, problem-specific tool takes a much smaller investment of time, money, and expertise.

My impression, however, is that changing the PLs isn't sufficient for this goal. We might need to change the entire operating system, to eliminate the walls between applications and find other approaches to secure composition, i.e. such that we can readily automate behaviors that currently involve multiple apps (such as browsing the internet, grabbing some images, and shoveling them into a Photoshop pipeline).

If we could change everything...

...everything would be different.

Except not always:

My impression, however, is that changing the PLs isn't sufficient for this goal. We might need to change the entire operating system, to eliminate the walls between applications and find other approaches to secure composition, i.e. such that we can readily automate behaviors that currently involve multiple apps (such as browsing the internet, grabbing some images, and shoveling them into a Photoshop pipeline).

Please notice that the economic analysis in my comment "Capitalism vs. bugs" is independent of any particular choice of OS technology or programming language technology.

The problem, if you see it as a problem[*], is with capitalism not with software architectures.

-------------

[*] I do.

the economic analysis in my

the economic analysis in my comment "Capitalism vs. bugs" is independent of any particular choice of OS technology or programming language technology

I think your economic analysis isn't very realistic when applied to small companies, or even small branches of large companies. It's unreasonable to rate the cost of a tool or developer at $0 if a starving artist or inspired schoolteacher cannot afford it. The interests and demands of small groups and individuals are not reflected in your analysis.

Further, I believe your analysis is making technological assumptions - e.g. that users are hostage to software as a product, unable to easily perform disassembly, extension, or composition... unable to easily address bugs or enhance features or use applications in ways unanticipated by the original developers. Such assumptions may be valid today, but they're not set in stone. You say: "users rarely have a consumer choice between software alternatives, nevermind a choice between more buggy and less buggy software." - This is part economic, especially given the pressures to empower companies and create a culture of consumerism. But it is also part technological; more alternatives would exist if they were cheaper to create. With lower barriers, more users become producers, instead of settling as consumers.

I believe there is much good we can do from a technological angle. Nothing good will come from blaming capitalism.

Let's talk money

It's unreasonable to rate the cost of a tool or developer at $0 if a starving artist or inspired schoolteacher cannot afford it.

I rate the price of development tools at $0 because that's what people generally pay for them no matter how much money they have.

We can look at three examples that superficially appear to be exceptions but that really are not:

One is Microsoft's Studio line, for which deep-pocketed customers may pay large-seeming sums. What's noteworthy in this context is that what that buys the large shops is not so much extra-great development tools but, rather, access to a commodified labor market (the commodity here being the standard-unit Studio-trained line coder).

The iOS developer tools, on the other hand, are gratis for use but shops pay when releasing software. Here, people are once again not paying for better software tools but, instead, are paying transaction fees for entry into Apple's "marketplace".

I won't mention a specific product but firms sometimes seem to pay for releases of free (libre) software development tools. Here they are generally paying for things like service guarantees on support issues or "dual licensing" arrangements. They aren't generally paying for the latest enhancements to the tools.

Further, I believe your analysis is making technological assumptions - e.g. that users are hostage to software as a product,

Yes I think nearly all users are hostage to software-as-a-product. No I do not think this is the result of technological choices.

more alternatives would exist if they were cheaper to create. With lower barriers, more users become producers, instead of settling as consumers.

I think you are engaging in a bit of techno-utopianism here, believing that with some cleverness in hacking we can eliminate software-as-product by enabling the teeming masses to "roll their own" software.

Well, OK, let's say I happen to agree this far[*]: we can improve tools to enable more (maybe a few, maybe a bunch of) people to make software that isn't a product -- we can make it easier for people to develop libre software.

Yes, we can do that. I don't think it will be enough anytime soon to kill off the domination of "software-as-product" and "software-service-as-product". (E.g., a boycott of software-as-product seems easier to achieve by comparison.)

Yet, even agreeing that far my point still stands: approximately nobody will be paying for the tools that enable more people to abandon software-as-product. Your techno-utopian project is a nice hobby or, if you are very, very lucky, a nice lifestyle business.

Nothing good will come from blaming capitalism.

"Blame" has nothing to do with anything here.

Nothing good will come from ignoring what the logic of capitalism leads to in how money is spent on software development.

----------------

[*] ibid.

Libre software that isn't gratis

I've often wished for a copyleft license that, instead of guaranteeing gratis access to the source code for all, would guarantee fixed/exponentially decaying cost licensing to all, with a right to distribute/sell modifications to anyone else who already has all required licenses. I've entertained a number of variations on this idea, but can't convince myself that any of the schemes I've come up with would survive 1) piracy/plagiarism and 2) the lawyers. But I think the teeming masses would produce better work if it was a full time job for more of them.
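
To make one variation concrete, here's a toy sketch; the starting price and the decay rate are entirely hypothetical:

    # Toy model of "fixed/exponentially decaying cost" licensing: every licensee
    # pays the price for the software's current age, so late adopters pay less,
    # and the price approaches (but never quite reaches) gratis.
    def license_price(initial_price, half_life_years, age_years):
        """Price for a new licensee when the work is age_years old."""
        return initial_price * 0.5 ** (age_years / half_life_years)

    for year in range(0, 11, 2):
        print(year, round(license_price(initial_price=100.0,
                                        half_life_years=3.0,
                                        age_years=year), 2))
    # 0 -> 100.0, 2 -> 63.0, 4 -> 39.69, ... halving every three years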

For tooling, you are only

For tooling, you are only considering IDEs. There are plenty of tools in the software industry that are in the 5- or 6-figure USD range; bug checking is one of them. I'm not sure what Coverity charges commercial customers these days, but it was considerable when I was there. And let's not even get started with Boeing. The tool market is quite substantial, and companies like JetBrains can even make a business of selling the cheaper, more general tools. Then we have the game engine companies. People can make money selling tools, and specialized tools + consulting is especially lucrative.

There also is really nothing magical about Visual Studio; there is nothing in VS that makes a "studio-trained dev" significantly better than one who uses the free Eclipse or Xcode. The commodity these days is the Java, JS, PHP, or RoR dev, not someone versed in the Microsoft stack (who are becoming increasingly hard to hire). Good developers are also hard to find no matter how commoditized the target stack is.

Software R & D costs are very considerable, even in capital-intensive businesses where you might think it is in the noise. Software is expensive in terms of human capital, and open source only improves the situation with the mass-used stuff (kernels, browsers...). Software has to be a product, otherwise devs will starve to death.

re: For tooling

I'm not sure what Coverity charges commercial customers these days, but it was considerable when I was there.

They just got bought for peanuts and as far as I can tell they never turned a profit.

Software has to be a product, otherwise devs will starve to death.

This is observably false.

They just got bought for

They just got bought for peanuts and as far as I can tell they never turned a profit.

Hmm, I have some stock, I should figure out if I made anything. It's irrelevant though: the tool they sold had buyers, and they were willing to pay $50k+ for a site install.

This is observably false.

Most developers are not living on trust funds that I know of.

Cost is not Price

If you don't count a developer hour writing a make script or debug logger (i.e. a development tool), or the same hour spent learning Visual Studio, as a cost spent on development tools (perhaps $80, plus the opportunity cost of that expert's hour, which could have been spent elsewhere), then there's something blatantly wrong with your accounting. Cost is not the same as price, and especially is not the same as sticker price.

The earlier XKCD illustration should be obvious enough on that point. It spoke of time costs, not of prices. I thought it clear I was speaking of reducing costs, not prices.

I'll grant, even companies often confuse costs with prices, perhaps because they break money down into different 'pots'. Large companies (and governments) often make penny-wise but pound-foolish decisions regarding personnel costs vs. the costs of buying extra hardware or better software, because the pot for purchases is much smaller than the pot for personnel (and is guarded by different bureaucrats).

believing that with some cleverness in hacking we can eliminate software-as-product by enabling the teeming masses to "roll their own" software

GUIs as they exist are nowhere near as composable and programmable as they could be. With alternative GUI designs, programming-by-example and spreadsheet-like cross-application mashups are quite feasible. If the GUI is coupled to a good programming model, it should be feasible to share the resulting programs in a relatively robust way. I believe it is quite feasible to reduce the need for professional programmers by two orders of magnitude: 10% the frequency of need for expert programmers, and 10% the effort for expert programmers to get useful work done (focus on specific algorithms rather than integration).

This isn't to say there will be fewer programmers, just fewer for whom 'programmer' is the primary titled career. A level of programming literacy will become standard education, leveraged in all arts, maths, and sciences. (It already is, to a degree. But most of today's languages are full of discontinuities and insecurities, and are not very suitable for gradual learning, casual use, easy sharing.)

I don't think it will be enough anytime soon to kill off the domination of "software-as-product"

It doesn't need to be. We don't need to eliminate software as a product. We only need to eliminate the hostage scenario - our inability to extend, decompose, and use these software products as components or toolkits in larger software. Software as a product is fine, e.g. excellent for video games. Being hostage to software as a product is not fine. Similar is true for software-services-as-product - services are great assuming we can build upon the API in some compositional manner.

If I can accomplish something by hand (e.g. download a Youtube video, transcode it to extract mp3, break that down into microsounds, and classify those microsounds according to the newest machine-learning technology) - currently an action that would cross many 'applications' - I should be able to capture that behavior as a robust tool, then share the program as a tool or component with my community. That's the sort of "reduced cost for tooling" I'm interested in pursuing.
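
As a very rough sketch of what "capturing that behavior as a tool" could look like, assuming each application in that chain exposed its capability as a plain function (every name below is a placeholder, not a real API):

    # Hypothetical: if the downloader, transcoder, segmenter, and classifier were
    # ordinary functions, the whole manual workflow would collapse into a single
    # composition that can be named, shared, and reused. The stages are stubs.
    def compose(*steps):
        """Left-to-right function composition: compose(f, g)(x) == g(f(x))."""
        def pipeline(value):
            for step in steps:
                value = step(value)
            return value
        return pipeline

    def download_video(url):      return "video(" + url + ")"
    def extract_mp3(video):       return "mp3(" + video + ")"
    def split_microsounds(audio): return [audio + "#chunk" + str(i) for i in range(3)]
    def classify(chunks):         return {c: "class-?" for c in chunks}

    classify_url_audio = compose(download_video, extract_mp3, split_microsounds, classify)
    print(classify_url_audio("https://example.org/some-video"))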

Even for software development tools, there are promising approaches that generalize pretty well, e.g. based on extracting source into a logic program so we can reason about patterns. But how can we make this highly accessible - trivial to leverage, extend, and share?
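
A hedged sketch of the "source as a logic program" idea, with the facts and the query invented for the example (real tools extract far richer relations):

    # Dump simple facts about the code (here: a call graph) and query them for patterns.
    calls = {("main", "parse"), ("parse", "read_file"), ("parse", "eval"),
             ("report", "format")}

    def reachable(frm, facts):
        """All functions transitively callable from frm."""
        seen, frontier = set(), {frm}
        while frontier:
            f = frontier.pop()
            for caller, callee in facts:
                if caller == f and callee not in seen:
                    seen.add(callee)
                    frontier.add(callee)
        return seen

    # Pattern query: does anything reachable from main end up calling eval?
    print("eval" in reachable("main", calls))   # True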

re: cost/price

If you don't count a developer hour writing a make script or debug logger (i.e. a development tool), or the same hour spent learning visual studio, as a cost spent on development tools (perhaps $80, plus the opportunity cost to spend that expert's hour elsewhere), then there's something blatantly wrong with your accounting.

This is just talking past one another over a minor vocabulary issue.

Cost and price are the same thing from two perspectives.

If I labor for a price, the price is mine and it is my employer's cost.

A developer who speculatively develops a new program on "his own time" and never seeks a wage for those hours has provided gratis labor. Maybe later he will seek money for, say, a "license" for the software -- but that is still not money for the labor. The price the developer has put on the work is $0. That is the cost of that software.

(There are, using a different sense of the word cost, personal opportunity costs to working on one thing rather than another thing, but that's different.)

We have gotten pretty far afield of the Wirth symposium, though.

Cost and price are not dual

Cost and price are the same thing from two perspectives. [..] The price the developer has put on the work is $0. That is the cost of that software.

I would say the cost is the time, space, energy or effort, and money spent developing the software - limited resources, expended. That is the cost of the software. One might attempt to put a dollar value on these resources, to satisfy a modern accountant's narrow-minded fetish with money. But, whether or not the developer chooses to pass this cost along in the form of a price, or simply swallow the cost, is irrelevant to my original concern of recognizing and reducing costs.

Anyhow, your economic analysis doesn't seem valid even with your narrow interpretation of cost as price, given that you don't really account for labor costs/prices. Even if it were valid, it isn't clear to me that you can reasonably generalize from such a narrow comprehension of costs. The dubious (and oft biased) simplifying assumptions of many economics arguments have left me distrusting the field as a whole.

With lower barriers, more users become producers

With lower barriers, more users become producers, instead of settling as consumers.

Returning briefly to the symposium, Szyperski's presentation was exactly targeted at this point; they attempt to provide a tool for advanced[0] Excel users to compose —and share— data mashups. If I understood correctly, it's presented somewhat like a simplified version of Solidworks assemblies: there isn't any procedural abstraction, and all repetition is carefully hidden[1] in the primitives, so the user can build a query by vertically sequencing steps and debug it by scrubbing up and down (à la an interactive tee(1)) to check the intermediate values. What stuck with me from his talk is that although he has seen some (from our viewpoint) lovecraftian workflow in the field, his target users would (understandably) rather stick with their own goldbergian bespandreled manual processes than risk (the attempt at) learning someone else's alternative.

0. those who have actually used the formula bar at least once; by his figures <10%
1. Warren Teitelman used to do agility classes with his dogs. One can do impressive things with canis lupus familiaris (in particular, they naturally attempt to DWIM) if one keeps in mind that they do far better with FSM-like problems than with PDA-like problems. It appears that Szyperski's group concluded this distinction is also generally applicable in homo sapiens sapiens.

Concrete Languages

One thing I'm interested in - the ability of languages to scale while remaining relatively concrete in how people use them.

One approach is, perhaps, to focus on recognizing common actions and extracting 'macros' or 'loops' by example as the users work. Another interesting approach was linked editing based on recognizing copy-paste code. A potential third technique is to model abstractions as tools (e.g. similar to brushes in Paintshop), and implicitly construct the pipelines.
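
As a toy illustration of the first approach (the action log and the generalization rule here are invented for the example):

    # Toy 'extract a loop by example': given a log of user actions, find a repeating
    # template that differs only in some slots, and report the varying bindings.
    log = [("open", "a.txt"), ("replace", "foo", "bar"), ("save", "a.txt"),
           ("open", "b.txt"), ("replace", "foo", "bar"), ("save", "b.txt")]

    def extract_macro(log, period):
        """Return (template, bindings) if the log repeats with the given period."""
        reps = [log[i:i + period] for i in range(0, len(log), period)]
        if any(len(r) != period for r in reps):
            return None
        template, bindings = [], [[] for _ in reps]
        for col in range(period):
            actions = [r[col] for r in reps]
            if len(set(actions)) == 1:             # identical in every repetition
                template.append(actions[0])
            else:                                  # varying step: abstract the slot
                fields = list(zip(*actions))
                slots = tuple(f[0] if len(set(f)) == 1 else None for f in fields)
                template.append(slots)
                for i, a in enumerate(actions):
                    bindings[i].extend(v for v, s in zip(a, slots) if s is None)
        return template, bindings

    print(extract_macro(log, period=3))
    # template: [('open', None), ('replace', 'foo', 'bar'), ('save', None)]
    # bindings per repetition: [['a.txt', 'a.txt'], ['b.txt', 'b.txt']]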

Thinking in abstract terms is difficult: it comes late in children, it comes late to adults as they learn a new domain of knowledge, and it comes late within any given discipline. - Green and Blackwell, 1998

To develop - perhaps even to use and understand - good abstractions often takes a lot of experience with the concrete forms. I.e. we're ready to learn something when we already almost could invent it for ourselves.

Thus, while it is important for PLs to support high levels of abstraction and easy refactoring (so users have plenty of room to grow, without painful ceilings and discontinuities), it also seems important to support a lot of low-level, concrete, manual programming without diving into monads and categories and other generalized abstract nonsense (so users have plenty of time to grow, without working beyond their understanding).

Languages seem often to fail in one direction or the other.

Interesting.

Interesting.

To develop - perhaps even to use and understand - good abstractions often takes a lot of experience with the concrete forms. I.e. we're ready to learn something when we already almost could invent it for ourselves.

Perhaps this is part of why I look askance at things like monads that seem to me to be an unavoidably large step. And, conversely, why there seems to be a high correlation between people who favor monads and people who think they're not such a large step.

Thus, while it is important for PLs to support high levels of abstraction and easy refactoring [...], it also seems important to support a lot of low-level, concrete, manual programming without diving into monads and categories and other generalized abstract nonsense [...].

Languages seem often to fail in one direction or the other.

Hm. The most stable resolution of such a problem is to find a way of looking at things so that the distinction between the two disappears.

The most stable resolution

The most stable resolution of such a problem is to find a way of looking at things so that the distinction between the two disappears.

Perhaps. There is some promise here for assisted theorem proving as a basis for programming. More generally, we might treat categories, proofs, grammars, homeomorphisms, or some other abstract concept as concrete actions upon some concrete (e.g. renderable) classes of objects.

I'm curious how universally we could apply such a technique.

My concatenative language takes a relatively concrete slice of this - just plain old function composition. Long term, I hope to support visual programming, with functions as components and tools and user actions, such that abstraction naturally arises from manual composition of reusable artifacts.
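
To give a flavor of that slice, here is a generic sketch of the concatenative style in Python (an illustration of the idea, not the actual language):

    # A program is a sequence of stack->stack functions; composing two programs
    # is just concatenating their sequences.
    def word(f, arity):
        """Lift an ordinary function into a stack->stack word."""
        def run(stack):
            split = len(stack) - arity
            return stack[:split] + [f(*stack[split:])]
        return run

    def run_program(program, stack):
        for w in program:
            stack = w(stack)
        return stack

    add = word(lambda a, b: a + b, 2)
    dup = lambda stack: stack + [stack[-1]]

    double = [dup, add]                     # duplicate, then add
    quadruple = double + double             # composition is concatenation
    print(run_program(quadruple, [3]))      # [12]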

True, the problem with

True, the problem with processes is that you can avoid them, as the latest Apple flaw shows.

From attending FOSDEM in the last couple of years, I got to understand that at least in Europe, Ada usage for critical systems is increasing, partly thanks to availability of cheap compilers like GNAT and partly due to the increase in costs of maintaining secure C code bases.

The investment in secure tooling for C driven by the LLVM project, Microsoft's integration of static analysers into Visual Studio, and safety standards for human-critical applications like MISRA show the monetary cost that businesses now bear for building their infrastructure in such a fragile language.

Which, funnily enough, did not have to be, as lint is almost as old as C, but few bothered to make it part of their tooling.

I too am looking to Midori and Rust, but whatever improvements might come, it will still take a few generations of programmers with new mindsets to adopt them.