Build Your Own Blocks (BYOB)

Scratch has previously been discussed here, but I recently learned about BYOB (Build Your Own Blocks):

Welcome to the distribution center for BYOB (Build Your Own Blocks), an advanced offshoot of Scratch, a visual programming language primarily for kids from the Lifelong Kindergarten Group at the MIT Media Lab. This version, developed by Jens Mönig with design input and documentation from Brian Harvey, is an attempt to extend the brilliant accessibility of Scratch to somewhat older users—in particular, non-CS-major computer science students—without becoming inaccessible to its original audience. BYOB 3 adds first class lists and procedures to BYOB's original contribution of custom blocks and recursion.

Also check out Panther, another great advanced spinoff of Scratch with a somewhat different point of view. Panther team member sparks has created a Blocks Library here that includes a collection of downloadable BYOB blocks contributed by users. Thanks, sparks!

—Jens and Brian

Such a project poses some interesting questions: are lambdas really too hard for Scratch's audience (or has the design process been led astray by groupthink)? Are uncontrolled side effects and concurrency really a good idea for beginner programmers? How should a system like Scratch integrate older audiences (and its aging one) without sacrificing the sharing component? Talking to one of the designers, it seems like they have some fascinating ideas going forward and are looking for collaborators!


I'll be done with my onward

I'll be done with my onward submission by Friday or Saturday morning, honest!

BTW, I was told that AppInventor supports user-defined blocks.

Looking forward to it :)

Looking forward to it :) Pity the deadline isn't in a few weeks; I've finally started distilling my innovation analysis into a more cogent format.

I agree with the BYOB side: I don't think abstraction is that hard for students, and it would actually be a great step towards increasing the legibility of Scratch programs (students already have trouble reading big ones). Furthermore, it eliminates some restrictions in the language that otherwise seem arbitrary: everything is a value!

An important part of BYOB's support for abstraction that is worth stressing is their attempt to make it visually clear. They still seem to struggle with it (e.g., what to do about lazy vs. eager semantics); there's a lot to it, given their usability goals. Curious how Apps does it...

I can comment on that... I

I can comment on that...

I don't think abstraction is easy or hard, but it's something that has to be learned. The problem with most educational programming systems today is that they have found from user testing that abstraction is too hard, so they've just thrown it out. But this really is like saying that math is too hard for girls; I mean, come on! If you want them to learn programming, they need to learn abstract thinking as well.

I like the idea of getting people to program without abstraction first, gradually writing larger and larger programs until they become a mess. Then the teacher can teach them about abstraction and encapsulation, about making their programs tidier and easier to build. It's like teaching kids to clean up their room by letting it get really messy first.

My work is related to grown-up programming: can languages be graphical and capable at the same time? Abstraction obviously has a lot to do with that.

Defamiliarization

As we abstract things they become more familiar, and our perception of the object changes. For example, a child draws a face with lines, whereas an adult learns techniques to draw a face using conceptual contours like stroke angle, tone, and light source. The concept comes from the Soviet critic Viktor Shklovsky.

Recently Bob Harper slammed down my GoF Design Patterns example about The Persistence of Memory, arguing by authority that "design patterns = functional programming done badly", but I think he simply misunderstood the importance of defamiliarization. One example I gave was the collaboration between Command and Memento. It isn't modular for a Command to undo itself and reverse an action on an ephemeral data structure, so we have a design conflict: we have only an ephemeral structure, yet we need to persist some data. The design pattern solution is to add more structure, via a Memento. We can perhaps even specify a general transform from ephemeral to persistent structures in code using Mark Overmars's classic work on dynamic data structures.

But my perspective remains that patterns can defamiliarize and are not functional programming done badly, and that value judgments on abstraction aren't all-or-nothing propositions. There isn't a "best" abstraction, only multiple perspectives on how to capture what has become familiar and defamiliarize it. (In math we have the concept of a dual, which flips an abstraction while preserving its meaning.)

Build your own tiles

I've finally had some time to reflect. YinYang is based on a tile-and-board metaphor: you lay tiles down on a board to express your programs, while tiles can themselves be implemented as boards. Laying a tile down activates its behavior on the object being defined; this is fairly declarative and is very much like asserting a predicate ("Attack" the monster). The type system then ensures that tiles are laid down in correct contexts (I can only lay an "Attack" tile down if the "Tower" tile has already been laid down, and I can only use the "Tower" tile to create an object within an object that has the "Tower Game" tile laid down). The type system is also very useful for generating the nice context menus that are needed to make graphical programming effective.

Tiles execute continuously and concurrently, which is probably easier to deal with than procedures. However, discrete behavior is still important, and it becomes harder to express when our semantics are continuous, as we already know from our experience with FRP. So there is a lot of support for controlled escapes to discrete behavior when necessary. We can "fork" a tile, meaning it will continue to execute even as its guards stop executing, or we can do a discrete behavior "once", meaning that a discrete execution context occurs at the beginning of a continuous execution context.
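To make the context-checking idea concrete, here is a rough Python sketch (hypothetical names and classes, not YinYang's actual implementation): a board tracks which tiles have been laid down, each tile declares the context tiles it requires, and the same check that rejects an ill-placed tile can also drive the context menus.

    class Tile:
        def __init__(self, name, requires=()):
            self.name = name
            self.requires = set(requires)   # tiles that must already be on the board

    class Board:
        def __init__(self):
            self.tiles = []

        def lay(self, tile):
            present = {t.name for t in self.tiles}
            missing = tile.requires - present
            if missing:
                raise TypeError(f"cannot lay {tile.name!r}: missing context {missing}")
            self.tiles.append(tile)

        def legal_tiles(self, palette):
            # Context menus: offer only tiles whose required context is satisfied.
            present = {t.name for t in self.tiles}
            return [t for t in palette if t.requires <= present]

    board = Board()
    board.lay(Tile("Tower Game"))
    board.lay(Tile("Tower", requires={"Tower Game"}))
    board.lay(Tile("Attack", requires={"Tower"}))       # fine: "Tower" is present
    # Board().lay(Tile("Attack", requires={"Tower"}))   # would raise: missing {'Tower'}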

Explain

What would it mean to 'Attack' continuously and concurrently? (would monster health be adjusted by some negative integral?)

Attack is just an order, you

Attack is just an order, you do it as long as you are told to do it. If the conditions for attacking no longer hold (e.g., no longer seeing a monster to attack), then you stop doing it. There is probably some discrete behavior in how you attack; i.e., unless you are a laser cannon you are probably limited to firing at some rate (e.g., 1 bullet/500 milliseconds). The monster would only die by getting hit by a bullet, and when it is no longer a valid target (it dies or otherwise moves out of range), you move on to the next monster, or stop attacking.

Thanks

That explanation helps me understand what you were envisioning.

'Attack' is a continuous standing order to a mob/weapon/etc. (e.g. 'fire at will'), but the recipient of this order might be a stateful object that can't quite obey this order continuously, e.g. due to limited rate or munitions, overheating, lack of a valid target, insufficient personnel, etc. Presumably you could link 'Attack' to some other tiles, e.g. change 'attack monsters' to 'attack peasants'.

I misunderstood 'attack' as a continuous behavior, rather than a continuous command.

Attack is a continuous

Attack is a continuous behavior, and there is flexibility in how that behavior is implemented; you might invoke attack when it has no resources to execute, in which case it would just fail.

Attack has a parameter, what to attack, and you have to pair it with another tile to fill that parameter, usually See (but it could be Hear or some other sensor). So you would write:

See [filter: Monster], Attack [Seen]
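In conventional code, that pairing might look something like this rough sketch (hypothetical functions, not YinYang syntax): "See" is a continuous query that fills Attack's parameter, and "Attack" is a standing order over whatever is currently seen.

    def see(visible, is_match):
        # See [filter: Monster] -- everything currently visible that matches.
        return [obj for obj in visible if is_match(obj)]

    def attack(targets):
        # Attack [Seen] -- engage for as long as there is anything to engage.
        return [f"engaging {t}" for t in targets]

    # One tick of the continuous semantics:
    visible_now = ["monster-1", "peasant-7", "monster-2"]
    seen = see(visible_now, lambda o: o.startswith("monster"))
    print(attack(seen))   # ['engaging monster-1', 'engaging monster-2']
    # On a later tick with no monsters visible, `seen` is empty and Attack does nothing.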

Command and Consequence

Attack is a continuous behavior

Huh? You stated just above that the attack behavior - what happens as a consequence of the attack command - is generally not continuous; e.g. an attack with a bullet every 500 milliseconds is periodic and discrete, and discrete is certainly not continuous.

Yet it still makes sense to say that the attack command (or invocation) is a continuous behavior.

The interface between continuous and discrete behavior is one I've been trying with only moderate success to handle more effectively in my Temporal RDP models, so I'm perhaps a bit more sensitive to these distinctions than your normal audience. I think I get what you're trying to say, but I'm left struggling with the way you say it.

If I am "attacking"

If I am "attacking" something, I am doing it for a duration of time even if the steps involved in the attack are discrete. Its not a command, the robot is not commanding itself to attack, it is attacking and figuring out the best way to do that. Firing a gun every 500 milliseconds, on the other hand, is a discrete behavior that the continuous attack behavior might initiate. Attack is more than just firing every 500 milliseconds: target tracking (say by rotating the gun on a tripod) is a continuous sub-behavior of the attack behavior and continues even when we are not firing a bullet. So yes, attack is a continuous behavior, and it really couldn't be anything else without losing expressiveness.

Of course, there are many ways we can distinguish between continuous and discrete; we can even hide the distinction completely.
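Here is a rough sketch of that continuous/discrete split (hypothetical Python classes, not YinYang): the attack behavior runs every tick while its guard holds, tracking is continuous, and firing is a rate-limited discrete sub-behavior that it initiates.

    class Gun:
        def track(self, target):
            print(f"tracking {target}")

        def fire(self, target):
            print(f"firing at {target}")

    class AttackBehavior:
        FIRE_PERIOD = 0.5  # seconds between shots

        def __init__(self, gun):
            self.gun = gun
            self.last_shot = float("-inf")

        def tick(self, target, now):
            if target is None:
                return                      # guard failed: the behavior simply stops running
            self.gun.track(target)          # continuous: every tick, even between shots
            if now - self.last_shot >= self.FIRE_PERIOD:
                self.gun.fire(target)       # discrete: at most one shot per period
                self.last_shot = now

    attack = AttackBehavior(Gun())
    for t in (0.0, 0.2, 0.6):               # three ticks of simulated time
        attack.tick("monster-1", now=t)     # fires at t=0.0 and t=0.6, tracks throughout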

Word play

In karate, in fencing[1], in grappling, in battle, in every domain I've practiced or read about, 'attack' is used in a far more fine-grained manner. You can prepare to attack, or set up an attack, or dodge an attack. You can distinguish attacks from feints. The way you are using the word seems... atypical.

I agree that there are many behaviors that must be coordinated, planned and prepared in order to perform an attack. And I even agree with the basic position that command and control should be continuous. My RDP paradigm uses concurrent long-lived demands (which can serve as commands, queries, constraints, published data, etc.) as the sole basis for side-effects, though often the parameters in these demands will vary discretely.

But the only way I'm going to ever get what I'd call a 'continuous attack' is with something like a high-power super-cooled laser cannon. That's just how the word 'attack' is used.

Of course, there are many ways we can distinguish between continuous and discrete; we can even hide the distinction completely.

It isn't so easy to 'hide' these distinctions if you are aiming for a clean model where all behavior is well defined by the language semantics. Generating a discrete behavior from a continuous behavior is actually quite problematic.

...

[1] aside: I had never realized how famous my fencing instructor was (though fame is fleeting). I vaguely recall him mentioning a gold medal or two, but I (a gangly boy of 16) was much more interested in his two hot assistant instructors - young women with tight bodies and thick Russian accents. Anyhow, the instructor only had one good eye left because he had, some years before moving to Kansas City, foolishly left off his helmet while fencing, which allowed the other eye to be damaged by a misfortunately placed 'attack'.

Overloading at differing abstraction levels

Words like "attack" are so badly overloaded in general use that trying to make an argument like this is pointless without reference to the entity involved.

If I make an attack with a battalion, it is a very different thing from an attack by an individual, or an attack by an individual bullet. An attack with a battalion can last a significant time, and at the battalion level it is continuous over that time, but an attack by a single bullet can be regarded as instantaneous, i.e. discrete.

Clearly the difference is the level of abstraction. If your language can't take continuous behavior at the battalion level of abstraction and generate discrete behaviors at the bullet level, then my language that can do that is going to shoot yours to pieces :-)

If my battalions could

If my battalions could attack atomically (and I don't mean with a Davy Crockett), yours would be dead before they knew what hit them. ;-)

We do need to distinguish between discrete (countable) behaviors and atomic (instantaneous) behaviors.

Even for a battalion, an 'attack' is discrete, countable. Of course, its larger scope means it's more likely you'll adjust or abort the attack, replan tactics on the fly, expend more effort to coordinate the attack, and never fully succeed due to fog of war.

Good language support for reactively replanning or retracting commands on the fly and coordinating concurrent constraints and behaviors over time is a very useful feature for a language - one I've been pursuing quite vigorously. Sandro's reactive programming models always make for an interesting study.

Attack often has a

Attack often has a continuous meaning when used in discourse: "I began attacking" and "I stopped attacking." Most English verbs have a continuous meaning, even the ones that we generally think of as discrete, like "firing." But then everyone has an opinion, and we can't depend on language alone to describe our abstraction.

The "attack" tile that I've defined in YinYang is simply an abstraction that I have happened to assign the "attack" name, we can argue whether that name is appropriate but the tile's semantics are continuous. After we get beyond English naming, I believe that we aren't far apart in viewpoints: a good reactive language needs good support for expressing both continuous and discrete behaviors (actions, commands, or whatever you want to call it).

Brooks' original subsumption architecture is purely continuous; it just doesn't make sense to subsume discrete commands because, by definition, they can't happen concurrently. We can (and in the implementation I do) get rid of the distinction between discrete and continuous, where discrete is simply the special case of a one-point line. However, semantically, we want to expose these distinctions to the programmer.

Behavior definition

I think we have differing definitions of behavior. For me a behavior is a result of/results in multiple actions. Using the time-honored tradition of definition by example:

If a (non-warlike) robot is carrying material around a factory I want it to have a warning behavior when it detects a human in its path. This consists of flashing a light and beeping a horn.

Particular actions may be part of different behaviors, e.g. the flashing light may be used alone to indicate malfunctions.

A behavior can only be atomic if all its actions are atomic and occur at the same time.

But mostly a behavior exists continuously for a period of time, i.e. the robot continues warning until it no longer detects a human.

A behavior can only be considered discrete (countable) at its entry/exit: *a* warning.

Behaviors need not be the result of a stimulus (detecting the human); they can also be emergent, i.e. if some other set of behaviors results in the flashing light and the beeper, then the robot is performing warning behavior despite no human having been detected.
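As a small sketch of this sense of "behavior" (hypothetical robot API): a coordinated set of actions that persists while its condition holds, and is countable only at its entry and exit.

    class Robot:
        def __init__(self):
            self.human_in_path = False

        def flash_light(self):   # the same action may serve other behaviors,
            print("flash")       # e.g. flashing alone to indicate a malfunction

        def beep_horn(self):
            print("beep")

    class WarningBehavior:
        def active(self, robot):
            return robot.human_in_path

        def tick(self, robot):
            robot.flash_light()
            robot.beep_horn()

    robot, warning = Robot(), WarningBehavior()
    robot.human_in_path = True
    while warning.active(robot):     # continuous over the interval it is active
        warning.tick(robot)
        robot.human_in_path = False  # the human steps aside; the behavior exits (*a* warning)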

I read a book that developed a nice method of modeling both organism and machine performance based on these definitions, but I'm damned if I can remember the title or author and Google was no help :-( Does it sound familiar to anyone?

I don't attribute any

I don't attribute any particular properties to 'behavior' other than that behavior is distinct from a description or plan or need or demand for behavior. I.e. the critical distinction 'behavior' makes is the one between thinking or describing, and doing or actualizing. (And even that allows for meta-levels such as "the behavior of thinking".)

Other than that, you really need to describe the behavior. It isn't an error to say 'warning behavior', but you can't assume it means a state or mode; the phrase 'warning behavior' could just as easily describe a discrete fire-and-forget event. Only in the context of your robot can you say: "the warning behavior for this robot consists of flashing nearby humans and tooting its own horn".

You seem to be equating 'behavior' with some favorite model of behaviors that happens to include a potential stateful phase. Just keep in mind that there are plenty of 'models of behavior'... like, for example, pretty much everything we talk about at lambda-the-ultimate.

Time and state

I don't attribute any particular properties to 'behavior' other than that behavior is distinct from a description or plan or need or demand for behavior. I.e. the critical distinction 'behavior' makes is the one between thinking or describing, and doing or actualizing. (And even that allows for meta-levels such as "the behavior of thinking".)

Ok, your definition is also reasonable; it's just different from mine.

... like, for example, pretty much everything we talk about at lambda-the-ultimate.

Much of what is discussed on LtU does not involve any notion of time. As I understand it, the definition of "pure" explicitly excludes any notion of time, the result is always the same. Can you point me to any models which include time but don't require state?

Programming with Time

the definition of "pure" explicitly excludes any notion of time, the result is always the same

Time logically advancing during a computation definitely constitutes a 'side-effect'.

models which include time but don't require state

I understand 'state' to mean any dependency of the future on the present. I.e. a fixpoint function with delay, or integral function over time, is stateful (at least internally, similar to an ST monad).

My RDP uses stateless behavior models with time (for explicit delay or anticipation) for data plumbing, declarative relationships, and value transforms. But for practical application I must assume there are stateful entities (agents, sensors, actuators) with which the behaviors ultimately interact.

I don't understand the

I don't understand the difference between "dependency of the future on the present" and "behavior with time (for explicit delay)".

The calculation of the future behavior may be stateless, but delaying its application is stateful.

Note I'm definitely not saying that isolating state into monads, delays or external entities is a bad thing, there are many benefits that can be applied to the stateless part, but the PL community should not "ignore the man behind the curtain".

When applying PL to robotics and other real-time applications, a more integrated approach may be more rewarding; TRDP may be part of that approach.

Delay vs. State

I should clarify: state means a dependency of the future upon the present in a given space (i.e. such that the space is described by a step function or integral over time). You can delay a signal without introducing state so long as you simultaneously move said signal to a new space (i.e. such that the delayed signal cannot be observed in the originating space).

For example, you can introduce logical delays in a dataflow or pipeline (which you can understand as 'data' through a series of different 'spaces' that transform it) and the result will be 'stateless' unless you introduce a feedback loop.

My TRDP behaviors maintain this restriction, allowing logical delay and logical synchronization, but I prevent constructs that would introduce 'state' - at least for that programming layer. (My concern is ensuring eventual consistency and resilience to network disruption, so behaviors are carefully restricted to regenerable state - such as caching and memoization.)
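A rough sketch of the distinction (plain Python generators, not RDP itself): a pipeline stage that delays its input by one step is stateless from the originating space's point of view, because the delayed value is only observable downstream; feeding values through an accumulator makes the future depend on the present in the same space, which is exactly the "state" being excluded here.

    def delayed(stream, initial=0):
        # Delay a stream by one step; the delay is only observable downstream.
        prev = initial
        for x in stream:
            yield prev
            prev = x

    def integrate(stream, acc=0):
        # Feedback: each output depends on all previous inputs -- stateful.
        for x in stream:
            acc += x
            yield acc

    print(list(delayed([1, 2, 3])))    # [0, 1, 2] -- same values, shifted in time
    print(list(integrate([1, 2, 3])))  # [1, 3, 6] -- the future depends on the past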

You should checkout Brooks'

You should check out Brooks' work on behavior-based programming and subsumption architectures. The definition that I'm using for behavior is accepted in robotics, whereas I think the term is undefined or vague in PL.

The robotics community isn't

The robotics community isn't any more united, with respect to vocabulary, than is the PL community. I should know, since I'm part of the robotics community.

But I'll check out Brooks' work to see what this 'subsumption architecture' is all about.

How could you be in the

How could you be in the robotics community and not know Brooks? Before Brooks, robots were programmed via rigidly applied formal rules (think something like logic programming) and couldn't adapt very easily to "real" situations. Brooks threw all that away with a bunch of prioritized rules inspired by how animals behave in nature. All modern autonomous robots now have some variation of this architecture.
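A minimal sketch of the idea (hypothetical behaviors, not Brooks' actual formulation): behaviors are layered by priority, each one either produces an action or defers, and a higher layer subsumes the ones below it.

    def avoid_obstacle(sensors):
        if sensors["obstacle_ahead"]:
            return "turn_left"
        return None                      # defer to lower-priority layers

    def seek_target(sensors):
        if sensors["target_visible"]:
            return "move_toward_target"
        return None

    def wander(sensors):
        return "move_forward"            # default layer, always produces something

    LAYERS = [avoid_obstacle, seek_target, wander]   # highest priority first

    def control(sensors):
        for behavior in LAYERS:
            action = behavior(sensors)
            if action is not None:
                return action            # the higher layer subsumes the rest

    print(control({"obstacle_ahead": False, "target_visible": True}))
    # -> 'move_toward_target'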

How could you be in the

How could you be in the robotics community and not know Brooks?

Easily, apparently. Though his work might have influenced the developers of the tools I work with, his name doesn't exactly show up in the PowerPoint presentations or documentation. (But now that I've looked, I do recognize similarities between the subsumption architecture and one of the architectures used in our group.) I'm much more familiar with the name 'iRobot' than 'Rodney Brooks'.

Besides that, 'robotics' is a very broad domain. My work focuses more on the communications, command, and control... and I'm now getting into computer vision. Kinematics and motion planning interest me (I'm especially interested in coordinated multi-robot behavior), but I haven't had opportunity to work on them.

Always a good idea then to

Always a good idea then to check out the classics to really understand what is going on. I'll put out a post on the Elephants paper.

For multi-robot coordinated behavior, anti-object techniques might be applicable (though this is getting back to more robotics-inspired programming).

Scale and Complexity

IIUC more recent work has moved away from purely reactive models for the higher layers, because a purely reactive system can't model or benefit from memory, and can't learn.

It seems to me, observing across a number of disciplines, that many of them are (re)learning the lesson that physics learned: apparent models of behavior can vary with scale, Newtonian vs atomic vs quantum.

Biology is finding that the behavior of populations is different to the behavior of individuals is different to the behavior of cells even though each is the result of the layer below.

This is the fractal lesson: even simple small-scale behaviors can lead to apparently radically different, complex large-scale behaviors.

And this is being learned by the behavior-based robotics community: simple robots can produce complex behaviors with lots of apparently random variation, so the next-level paradigm can prune behaviors that have not previously succeeded or do not look like succeeding, making the overall robot, or population of robots, adapt better to meeting its goals.

But it seems to me that our programming languages and computation models don't yet acknowledge and support this change in paradigms with scale. Unless we change languages of course, but then the interactions between scales are usually constrained by the practicalities of inter-language communication.

At the moment it just makes my head hurt trying to think how to model variations in paradigm with scale and models of inter-paradigm communication.

Re: Scalable Programming Models

My interest in generative grammars is that we can embed them in a reactive system with just a little memoization of successful search paths for each indeterministic choice. Doing so gives us major advantages: learning, incremental computations, and a sort of "if it ain't broke, don't fix it" behavior stability.

Granted, this wouldn't be a 'purely reactive' system, but it also wouldn't rely on 'state' semantics - rather, it's just leveraging a bit of indeterminism in the grammar specifications (i.e. choice such as A | B) to be points where it may 'learn' to make good choices. Since the semantics aren't stateful, the model would stay resilient across network disruption. (That said, developers would still do well to grok that the overall system is stateful, and I'm quite wary about introducing indeterminism by default... though I presumably could control it by capability.)
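A rough sketch of the "learn at the choice points" idea (hypothetical code, not the actual model): each indeterministic choice (A | B) memoizes which alternative last succeeded and tries it first next time, which gives the stability and incrementality described above without adding state semantics to the surrounding program.

    class Choice:
        def __init__(self, *alternatives):
            self.alternatives = list(alternatives)
            self.preferred = 0                 # memoized index of the last success

        def run(self, attempt):
            # attempt(alternative) returns a result, or None on failure.
            order = [self.preferred] + [i for i in range(len(self.alternatives))
                                        if i != self.preferred]
            for i in order:
                result = attempt(self.alternatives[i])
                if result is not None:
                    self.preferred = i         # remember the successful search path
                    return result
            return None

    plan = Choice("take_bridge", "take_tunnel")
    print(plan.run(lambda alt: alt if alt == "take_tunnel" else None))
    # -> 'take_tunnel'; the next run tries the tunnel first ("if it ain't broke...").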

apparent models of behavior can vary with scale, Newtonian vs atomic vs quantum.

Physical scaling laws (e.g. strength increases with square, weight with cube, or relative strength of static electricity at a smaller scale) keep things interesting when we try to construct at different scales. The Newtonian model works for many more orders of magnitude than do our architectural idioms.

it seems to me that our programming languages and computation models don't yet acknowledge and support this change in paradigms with scale

The problem there is obvious: most state-of-the-art PLs and computation models simply weren't developed to be scalable, or otherwise have gaping flaws in their design. Even the vaunted actors model is horrible: two actors have different concurrency properties than putting all the messages and state into one actor, and reliable messaging is infeasible in an open distributed system. Most models are non-compositional or non-modular or otherwise non-scalable in critical ways. We could achieve a great deal more scalability than state-of-the-art supports by focusing on, say, algebraic closure and other compositional and extensional properties of our models, while keeping an eye on our ability to faithfully and securely implement the model in an open distributed system.

The PLs and 'paradigms' in common use today are more analogous to the architectural idioms than the underlying Newtonian physics: they don't scale because they were designed to work at a specific scale. Even pure functional programming languages compromise their scalability by using 'monads' to model communication. (Monads do not nicely compose concurrently.)

Biology is finding that the behavior of populations is different to the behavior of individuals is different to the behavior of cells [...] This is the fractal lesson: even simple small-scale behaviors can lead to apparently radically different, complex large-scale behaviors. And this is being learned by the robotics community

There is a significant difference between biology and robotics: in biology, we're passive observers of the behavior model, and so we're making a lossy best guess. In robotics, we're creating the behavior model. We have a lot more opportunity to avoid the biases and limitations of biology.

"Convergence in Language Design, ..."

But it seems to me that our programming languages and computation models don't yet acknowledge and support this change in paradigms with scale. Unless we change languages of course, but then the interactions between scales are usually constrained by the practicalities of inter-language communication.

At the moment it just makes my head hurt trying to think how to model variations in paradigm with scale and models of inter-paradigm communication.

See the (opinionated) paper Convergence in Language Design, a Case of Lightning Striking Four Times in the Same Place (PDF), by Peter Van Roy, 2006, which has been discussed a bit on LtU:

I present four case studies of substantial research projects that tackle important problems in four quite different areas: fault-tolerant programming, secure distributed programming, network-transparent distributed programming, and teaching programming as a unified discipline. All four projects had to think about language design. In this paper, I summarize the reasons why each project designed the language it did. It turns out that all four languages have a common structure. They can be seen as layered, with the following four layers in this order: a strict functional core, then deterministic concurrency, then message-passing concurrency, and finally shared-state concurrency (usually with transactions). This confirms the importance of functional programming and message passing as important defaults; however, global mutable state is also seen as an essential ingredient.

Yes, I also liked his

Yes, I also liked his "Programming Paradigms for Dummies", also on LtU, but the range of scaling covered doesn't really reach from the reactive response of a single insectoid robot leg to commanding a battalion of mixed robot types. The four concepts provide the mechanisms, but the example languages still leave you trying to program the battalion with the basic operations defined by the functional language.

Lambda before classes?

What I find strange is that BYOB seems to (from my cursory inspection) be focusing on the issue of turning the Scratch scripting language into a "real" programming language. Do they have evidence that this is what would benefit Scratch users the most?

In my (anecdotal and second-hand) experience, from seeing my wife teach Scratch to middle-school (US) students, the biggest pain point for Scratch users seems to come from the lack of a template/instance sort of mechanism for sprites.

Her students have felt the very real maintenance problems that come from copy-paste programming at the granularity of sprites, and recognize that the inability to create new sprites at run-time negatively impacts the kinds of projects they can build. I suspect that a class- or prototype-based system for sharing behavior would be well received (and thus teachable).

In contrast, I've never heard of one of her students wishing they could define their own subroutines. Is this a solution to a problem they don't have, or just to one they don't realize they have?

BYOB supports two mechanisms

BYOB supports two mechanisms to achieve what you're describing: prototypes and lambda. Having both seems redundant, and having seen both, I'd actually advocate dropping prototypes.

great expectations

When they are programming in Scratch, they have certain expectations about what they can do, and any frustration will result from those expectations not being met. Because they are not introduced to procedures, they don't really miss them; rather, they run into problems without them and look for other solutions (templating, instancing).

This is really the first tenet of design: users aren't going to tell us what feature will solve their problem; we have to do that hard work on our own.

BYOB prototypes != Self prototypes

I may have accidentally used a loaded word (although it took me quite a while to find any use of the word "prototype" in this sense in BYOB-related documents).

To quote a BYOB Manual when they talk about OOP:

Scratch comes with things that are basically objects: its sprites.

but then they take a 90-degree logical turn and say:

In [BYOB's] approach to OOP we are representing both classes and instances as procedures.

And, showing that they understand the very point I've made above:

We have not addressed the question of sprite inheritance. Scratch users have long wanted a clone block that would create a new sprite sharing behavior with an existing sprite.

BYOB is definitely focused on a functional/procedural model of computation (to the point where they encode objects as lambdas to illustrate "OOP"), but that seems like a poor match for the design of the existing Scratch system.

Scratch without BYOB invites an object-oriented interpretation, and the clone operation that users have "long wanted" clearly takes us one step toward Self-style prototype-based OOP.

I understand the claim that users can't ask for features they don't yet understand. It is just that in my (limited) experience, Scratch users run up against the lack of OOP-style class/prototype abstractions long before they create scripts complex enough to call for procedural abstractions.

BYOB prototypes = Lieberman prototypes

Hi Tim,

the manual you're referring to is about the "old" BYOB 3.0 release. We're currently hammering v3.1 out the door, which introduces first-class sprites along with real inheritance through attribute delegation at the sprite level, plus the CLONE block you've been missing.

Thanks, I had been trying to

Thanks, I had been trying to remember the reference :) Lieberman provides a fairly direct explanation for what was desired here.

The more I think about it, classical prototypes, as well as Scratch/BYOB's general concurrent animation model, are too mutation-heavy if we want to talk pedagogy. A value-based approach (e.g., dataflow, FRP) that automatically propagates changes over parameterized inputs (time, user interaction, etc.) seems like it would be much simpler: we'd eliminate these messy features from the language and automatically exploit what's left. Tangible functional programming is a great early realization of this.

Garnet

The whole procedural programming thing is the problem, not the prototypes. It's possible to do prototypes in a declarative system; check out Garnet:

Most programming in the Garnet system uses a declarative style that eliminates the need to write new methods. One implication is that the interface to objects is typically through their data values. This contrasts significantly with other object systems where writing methods is the central mechanism of programming. Four features are combined in a unique way in Garnet to make this possible: the use of a prototype-instance object system with structural inheritance, a retained-object model where most objects persist, the use of constraints to tie the objects together, and a new input model that makes writing event handlers unnecessary. The result is that code is easier to write for programmers, and also easier for tools, such as interactive, direct manipulation interface builders, to generate.

I think these earlier OO declarative programming models could be more relevant today in an end-user programming language.
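As a small illustration of the prototype-instance plus constraints style the quote describes (a Python sketch with hypothetical names, not Garnet's actual Common Lisp API): instances inherit slots structurally from their prototype, and a slot may hold a constraint that is recomputed from other slots instead of being set imperatively.

    class Proto:
        def __init__(self, parent=None, **slots):
            self.parent = parent
            self.slots = slots

        def get(self, name):
            obj = self
            while obj is not None:
                if name in obj.slots:
                    value = obj.slots[name]
                    # A callable slot is a constraint: evaluate it against the receiver.
                    return value(self) if callable(value) else value
                obj = obj.parent            # structural inheritance from the prototype
            raise AttributeError(name)

        def instance(self, **overrides):
            return Proto(parent=self, **overrides)

    rectangle = Proto(left=10, width=50,
                      right=lambda o: o.get("left") + o.get("width"))  # constraint
    r = rectangle.instance(left=100)        # inherits width and the right-constraint
    print(r.get("right"))                   # -> 150, recomputed from r's own slots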

Propagation vs. Delegation

I'm not deep into pedagogy, but it's my impression that Scratch's/BYOB's "concurrency lure" is actually quite nifty and effectively gets kids to think in parallel. Regarding the issue of delegation vs. propagation, I probably in part agree with you. An interesting point is that, in order to visualize changes in delegation-inherited properties in a graphical environment such as BYOB, they have to be either propagated or continuously polled :-)