Using Commutative Assessments to Compare Conceptual Understanding in Blocks-based and Text-based Programs
Using Commutative Assessments to Compare Conceptual Understanding in Blocks-based and Text-based Programs, David Weintrop and Uri Wilensky. Proceedings of the eleventh annual International Conference on International Computing Education Research. Via Computing Education Blog.
By Manuel J. Simoni at 2015-08-17 12:25 | Paradigms | Teaching & Learning
State of the Haskell ecosystem - August 2015
Interesting survey. Based on a brief look I am not sure I agree with all of the conclusions and rankings, but most seem sensible, and the Notable Libraries and examples in each category are helpful.
By Ehud Lamm at 2015-08-17 17:54 | Functional
Big questions
So, I've been (re)reading Hamming's /The Art of Doing Science and Engineering/, which includes the famous talk "You and Your Research". That's the one where he recommends thinking about the big questions in your field. So here's one that we haven't talked about in a while.
It seems clear that more and more things are being automated, machine learning is improving, systems are becoming harder to tinker with, and so on. So for how long will we keep programming in ways similar to those we are used to, which have been with us essentially since the dawn of computing? Clearly, some people will be programming as long as there are computers. But will the number of people churning out code remain significant? In five years? Of course. Ten? Most likely. Fifteen? I am not so sure. Twenty? I have no idea.
One thing I am sure of: as long as programming remains something many people do, there will be debates about static type checking. Update: To put this in perspective, LtU turned fifteen last month. Wow. Update 2: Take the poll!
STABILIZER: Statistically Sound Performance Evaluation
My colleague Mike Rainey described this paper as one of the nicest he's read in a while.
One take-away of the paper is the following validation technique: the authors verify, empirically, that their randomization technique results in a Gaussian distribution of execution times. This does not guarantee that they found all the sources of measurement noise, but it does guarantee that the sources they handled are properly randomized, and that their effects can be reasoned about rigorously using the usual tools of statisticians. Having a Gaussian distribution gives you much more than just "hey, taking the average over these runs makes you resilient to {weird hardware effect blah}"; it lets you compute p-values and, in general, use statistics.
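To make that pay-off concrete, here is a minimal sketch of the workflow the paper enables, written in Python with simulated timing data; this is not STABILIZER's own tooling, and the two builds and their timings are hypothetical. The idea: first check empirically that run times are Gaussian, then use standard parametric statistics to decide whether an apparent speedup is real or just noise.

    # Sketch of the statistical workflow (simulated data, not STABILIZER itself):
    # once randomization makes execution times Gaussian, parametric tests apply.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical execution times (seconds) for a baseline build and an
    # "optimized" build, each measured over many independently randomized runs.
    baseline = rng.normal(loc=10.0, scale=0.3, size=50)
    optimized = rng.normal(loc=9.8, scale=0.3, size=50)

    # Step 1: check the normality assumption empirically, as the authors do.
    # A large Shapiro-Wilk p-value means we cannot reject that the sample is Gaussian.
    for name, sample in [("baseline", baseline), ("optimized", optimized)]:
        _, p = stats.shapiro(sample)
        print(f"{name}: Shapiro-Wilk p = {p:.3f}")

    # Step 2: with Gaussian samples, a two-sample t-test answers "is the
    # speedup real, or just measurement noise?" with an actual p-value.
    t, p = stats.ttest_ind(baseline, optimized)
    print(f"t = {t:.2f}, p = {p:.4f}")

The point of the normality check is that the t-test's assumptions then actually hold, rather than being waved through, which is exactly the kind of rigor the paper argues for.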