Programming with touch?

So the iPad has been out for a while now, and I can only imagine that slates and similar multi-touch devices (e.g., tables and walls) will become more and more popular. Although these devices supposedly focus on content consumption, they seem to have a lot of potential for creating things...like programs. Has anyone else thought about what a programming language and environment for a multi-touch device would look like?

I think free-form textual syntax would probably not work as it is too tedious to type much text in on a virtual keyboard. However, structured editing might be feasible given the direct screen access. Perhaps such a language would be visual, based on patches and data-flow connections, maybe Conal's tangible functional programming, or block-based à la Scratch? As far as I know, there are no touch-optimized VPLs out there yet; they are all designed for mice and keyboards. Even at that, most VPLs are for end users, whereas I'm wondering if we could do serious programming on something like an iPad.

Tactile programming languages

The official name for this in the visual language community is a tactile programming language. See the linked Wikipedia article for references.

Of course, if you want in-depth recommendations on this, you really should be asking Alan Kay or a member of the end-user programming community, not necessarily here.

However, structured editing might be feasible given the direct screen access.

If you are going to give me structured editing, then you will also want to give me voice control over the user interface. After all, structured editors use templates that allow more free-form names. I would also argue the target audience for this could be people with disabilities.

However, I don't think structured editing is a great replacement for free-form textual syntax.

I never heard Repenning use

I never heard Repenning use tactile programming to describe this, so I'm assuming it was a comment/opinion sneaked in by whoever edited the tactile programming language Wikipedia page. Tactile programming is not a common enough term to be very notable yet, but it is a nice name for this area. As far as I know, nobody in the VPL research community is looking at this problem right now. I would be very interested if there was anyone...

I wonder if a slate programming language can be close to as efficient as a keyboard-based language. Perhaps the syntax is still "free-form", just not free-form text. So we could use tiles but place them freely in a 2D space, with overlapping and such, to build programs. Unfortunately, I don't have anything else to compare to; e.g., there are no word processors that allow one to efficiently compose reports on a tablet without being typing-centric (perhaps speech could help). Speech and touch are an interesting combination to consider.

Some Ideas

I've been experimenting with the iPad, mainly interested in using Squeak via Gaucho or something like it.

I have a couple of ideas that might be worth investigation:

  • Replace ASCII-style block indentation and braces with gestures to select "lines" and indent/outdent (see the sketch after this list).
  • As Z-Bo stated, allow arbitrary phrase identifiers, possibly speakable, but let them be optional, allowing for graphical nodes with visual distinctions like color and shape.
  • Use some entirely graphical dataflow environment like Quartz Composer, just adding multitouch support for direct manipulation.
  • Adapting Common Lisp's SLIME interaction protocol so that any values displayed are directly inspectable as objects rather than just printouts.
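
A minimal sketch of the first idea, in TypeScript (all names here are hypothetical; this is not from any real toolkit): a horizontal swipe over a selected range of lines maps to indent or outdent, replacing typed whitespace and braces.

```typescript
// Hypothetical sketch: map horizontal swipe gestures over a selected
// line range to indent/outdent, instead of typing whitespace or braces.

type Swipe = { direction: "left" | "right"; startLine: number; endLine: number };

const INDENT = "  "; // one indentation unit

function applySwipe(lines: string[], swipe: Swipe): string[] {
  return lines.map((line, i) => {
    if (i < swipe.startLine || i > swipe.endLine) return line; // outside selection
    if (swipe.direction === "right") return INDENT + line;     // indent
    // Outdent: strip one indentation unit if present.
    return line.startsWith(INDENT) ? line.slice(INDENT.length) : line;
  });
}

// Example: a swipe right over line 1 indents the body of the conditional.
const src = ["if (ready) {", "start();", "}"];
console.log(applySwipe(src, { direction: "right", startLine: 1, endLine: 1 }));
// -> ["if (ready) {", "  start();", "}"]
```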

Of course, my Slate project was always intended for this purpose, just less mature. :-)

Adapting Common Lisp's SLIME

Adapting Common Lisp's SLIME interaction protocol so that any values displayed are directly inspectable as objects rather than just printouts.

If you like SLIME, you may want to look at Symbolics Genera & Garnet. Genera is generally considered to be the better of the two in terms of "directly inspectable as objects".

Symbolics Genera

I am a licensee, owning a MacIvory II and having, uh, experimented with the virtual machine lately. So, yeah, I'm all over that! It inspired a lot of Slate's "Chalk" UI design document, which basically tries to combine Morphic-style direct manipulation with CLIM-style direct manipulation.

I've never owned a Symbolics Lisp machine or Open Genera system

But I have read comp.lang.lisp archives, and posts such as this one.

SLIME won't work.

Seriously, who would want to program by touching 'slime'?

[above the age of 8, I mean]

Agreed :-)

Clearly, a more marketable/palatable term could be used.

I liked "Chalk" in the context of Slate, but of course people don't like touching that with their fingers, either. And "Paint" has artistic connotations.

In any case, a successful "touch" UI has to just appear to afford exactly what the user thinks it should do, and not worry about metaphor until users really get involved with it.

I think free-form textual

I think free-form textual syntax would probably not work as it is too tedious to type much text in on a virtual keyboard.

I wonder if autocompletion could be a starting point for a radical new virtual keyboard design. Autocompletion is basically navigation through a word list which changes as you type. The standard UI widget for it is accordingly a listbox, a widget optimized for scrolling with the mouse. We are either typing and modifying the word list (two hands) or we are scrolling with the mouse (one hand). But what if we used both hands for navigation as well as typing, and optimized the navigational aspect such that the typing/navigation ratio decreases?
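
A rough sketch of that idea in TypeScript (purely illustrative; the class and method names are made up): treat completion as two-handed interaction, where one hand extends the prefix and shrinks the word list while the other moves a selection cursor through what remains.

```typescript
// Hypothetical sketch: completion as navigation through a shrinking word list.
// One "hand" extends the prefix; the other moves a selection cursor.

class CompletionNavigator {
  private prefix = "";
  private cursor = 0;

  constructor(private vocabulary: string[]) {}

  // Typing hand: extend the prefix, which narrows the candidate list.
  type(ch: string): void {
    this.prefix += ch;
    this.cursor = 0;
  }

  // Navigating hand: step through the current candidates.
  move(delta: number): void {
    const n = this.candidates().length;
    if (n > 0) this.cursor = (this.cursor + delta + n) % n;
  }

  candidates(): string[] {
    return this.vocabulary.filter(w => w.startsWith(this.prefix));
  }

  // Commit the currently selected candidate.
  accept(): string | undefined {
    return this.candidates()[this.cursor];
  }
}

// Example: two keystrokes plus one navigation step select "reduceRight".
const nav = new CompletionNavigator(["map", "filter", "reduce", "reduceRight"]);
nav.type("r");
nav.type("e");
nav.move(1);
console.log(nav.accept()); // "reduceRight"
```

The interesting design question is how much of the work the navigating hand can absorb: the better the prediction, the fewer keystrokes the typing hand has to supply.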

I guess at least the Vim fans among us wouldn't shy away from two-dimensional modal navigation. I have at least enough confidence in this idea to believe that we won't give up the alphabet in favour of clicking on icons and drawing boxes and arrows. Since we left our caves at the end of the ice age, the faculties of drawing images and writing text have slowly and inevitably separated from each other. Programmers tend not to be very sensual/visual people; they are not Picassos who can think well in images. So icon-oriented programming interfaces might just be a lost cause.

Forms and LtU's origin in outlining software

I think there's an angle here for combining autocompletion with a form-based GUI, where a form is simply something that can be displayed graphically (typically as a rectangle, nested within other forms, and having sub-forms of its own).

You'd type the beginning of a function/operator name and choose one from the autocompletion list. This would add a custom form (widget) for that operator to the source. The form would then have a custom GUI for interacting with (parameterizing) that operator invocation/function call. For example, a call to a search function could have GUI knobs and options for setting its parameters.
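
One way to picture that is as a small data structure, sketched here in TypeScript (the types and the "search" operator are invented for illustration): choosing a completion inserts a form whose fields describe the operator's parameters, and the GUI renders each field as an appropriate widget.

```typescript
// Hypothetical sketch: a "form" for an operator invocation, built when the
// operator is chosen from the completion list. Each parameter becomes a widget.

type Field =
  | { kind: "text"; name: string; value: string }
  | { kind: "knob"; name: string; value: number; min: number; max: number }
  | { kind: "toggle"; name: string; value: boolean };

interface Form {
  operator: string;  // the function/operator this form invokes
  fields: Field[];   // one widget per parameter
  children: Form[];  // nested forms for sub-expressions
}

// Picking "search" from the completion list instantiates its form.
function makeSearchForm(): Form {
  return {
    operator: "search",
    fields: [
      { kind: "text", name: "query", value: "" },
      { kind: "knob", name: "maxResults", value: 10, min: 1, max: 100 },
      { kind: "toggle", name: "caseSensitive", value: false },
    ],
    children: [],
  };
}

// Rendering back to text shows the form is still just structured source code.
function render(form: Form): string {
  const args = form.fields.map(f => `${f.name}=${JSON.stringify(f.value)}`);
  return `${form.operator}(${args.join(", ")})`;
}

console.log(render(makeSearchForm()));
// -> search(query="", maxResults=10, caseSensitive=false)
```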

I find the Lisp connection interesting here, too. In Lisp, expressions are already called forms, and are clearly demarcated, analogous to widget borders in GUIs. (Of course, outlining, which seems to pop up in every attempt to make source code more "hyper" - e.g. in the Dylan tools, and UserLand Frontier - goes back all the way to InterLisp. And InterLisp was where Dave Winer first saw outlining, which led to Frontier, which was the software on which the original Lambda the Ultimate ran. Lovely, huh? ;))

Squeak

I forgot to mention that there is an active effort to get Squeak's EToys and Scratch into Apple's App Store.

Dasher

This thread isn't complete without someone mentioning the Dasher text entry system.

Dasher is a zooming interface. You point where you want to go, and the display zooms in wherever you point. The world into which you are zooming is painted with letters, so that any point you zoom in on corresponds to a piece of text. The more you zoom in, the longer the piece of text you have written. You choose what you write by choosing where to zoom.

...I could see this being more programming language friendly by using the structured editor approach.
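
A toy sketch of that combination in TypeScript (the grammar context and the weights are invented): carve the zooming space up among the tokens the grammar allows next, sized by likelihood, so the most probable continuation is the easiest target to hit.

```typescript
// Hypothetical sketch: Dasher-style zooming where the space you point into is
// divided among the tokens legal in the current grammar context, with each
// region sized by an (invented) likelihood weight.

type Interval = { token: string; from: number; to: number }; // within [0, 1)

// Tokens assumed legal after `if (` in some toy language, with made-up weights.
const nextTokens: Record<string, number> = {
  identifier: 0.5,
  "(": 0.2,
  "!": 0.2,
  literal: 0.1,
};

function layout(weights: Record<string, number>): Interval[] {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  const intervals: Interval[] = [];
  let from = 0;
  for (const [token, w] of Object.entries(weights)) {
    const to = from + w / total;
    intervals.push({ token, from, to });
    from = to;
  }
  return intervals;
}

// Zooming toward position p in [0, 1) selects the token whose region contains p.
function pick(intervals: Interval[], p: number): string {
  return intervals.find(i => p >= i.from && p < i.to)!.token;
}

const space = layout(nextTokens);
console.log(pick(space, 0.25)); // "identifier" -- the likeliest region is largest
```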

Dasher

I remember reading about it before but had not considered it in this context.

Presumably, one single gesture could trace out a "word" (or phrase of keywords?). And the lexical path space could be context-dependent.

One could prototype such a thing in JS itself, using one of the multi-touch abstraction layers in development. I can't decide whether I should spend an afternoon on this myself, but it'd be good if someone could put it together to see if there's a feasible usage mode.

JavaScript iPad IDE

Haven't played w/ it, but sounds like it is relevant:

http://www.jepstone.net/blog/2010/04/16/processing-js-mini-ide-for-ipad-iphone-android-chrome/

Paragraf

Interesting thread. I've released a small app, Paragraf, for the iPhone, for writing graphical programs using GLSL, but so far I've only made sure that the included functions have short names so that they are easy to type.

But a structured editor, or at least something in that direction, would be nice. A small step would be to have good auto-completion, as mentioned above, and to make sure that a touch on the code area places the cursor in a good position. Currently I almost always need to hold down a finger and move the cursor into position; the algorithm for choosing the cursor position seems to prefer natural language over code.
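
A tiny sketch of what code-aware cursor placement might look like, in TypeScript (purely illustrative; not how Paragraf actually works): instead of dropping the cursor at the raw touch offset, snap it to the nearest token boundary.

```typescript
// Hypothetical sketch: snap a touch point to the nearest token boundary
// instead of the raw character offset, which suits code better than prose.

function tokenBoundaries(line: string): number[] {
  // Boundaries fall before and after runs of word characters or symbols.
  const boundaries = [0, line.length];
  const re = /\w+|[^\w\s]+/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(line)) !== null) {
    boundaries.push(m.index, m.index + m[0].length);
  }
  return Array.from(new Set(boundaries)).sort((a, b) => a - b);
}

function snapCursor(line: string, touchOffset: number): number {
  // Pick the boundary closest to where the finger actually landed.
  return tokenBoundaries(line).reduce((best, b) =>
    Math.abs(b - touchOffset) < Math.abs(best - touchOffset) ? b : best
  );
}

const line = "vec4 color = texture2D(tex, uv);";
console.log(snapCursor(line, 14)); // 13 -- the start of "texture2D"
```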

I wonder if you could implement code snippets like TextMate's, but with gestures?

TurtleScript

I've done some thinking about this topic. I think that traditional programming languages actually have syntax which is compact and well suited. I think we'll just be using touch to drag our syntax trees and variable names around, rather than typing everything. More thoughts at http://cscott.net/Projects/TurtleScript/.

Hopefully not

Drag-and-drop is horrible according to Fitts' law, and totally inappropriate for touch (it's not that great for the mouse either). Hopefully, whatever comes out has decent UX design.

Doesn't immediately follow....

I can't see why drag-and-drop would be horrible according to Fitts' law. Are you making some kind of assumption about the size of the target area that you could expand upon a little?

Space and context

Three fundamental problems with drag and drop:

  1. The distance from where you are dragging to where you are dropping can be long.
  2. The palette used to drag from consumes valuable screen real estate even when it is not being used.
  3. Because you select the value (to drag) before you select the input site (where you drop), you cannot take advantage of any context at the input site.

Problem 2 can be mitigated by hiding the palette when not in use, which is what you see on tablets: you have a small button that you tap to bring up the palette when you want to drag something from it. This can also mitigate problem 1 somewhat if the button is placed relative to the screen rather than to the canvas (when you pan around, the button stays in the same place). On a tablet, this is almost acceptable, except if you have a large number of options and scrolling is involved in the palette, which is pretty slow and will never become much faster as the programmer learns the system.

However, on a phone-sized display, you don't really have enough space to materialize a decent-sized palette and still expect your intended drop site to be visible. You could hide the palette once you've selected what to drag, but then you still have the scrolling-palette problem, exacerbated because the screen is much smaller. I don't think you see much drag-and-drop in phone apps because it is fundamentally unworkable.

The big problem with drag-and-drop is 3, the inability to leverage context. If you know the context of the site where you are going to drop, you can offer a concise palette with only the options applicable to that drop site. But since you choose the value to drag first, the computer just doesn't know, and the programmer has to do all that filtering themselves. Now, if you are talking about a dynamically typed language, maybe there isn't much context that the computer can know about, but with a statically typed language this is a serious loss.

Context menus don't have any of these problems: you tap on the input site first to bring up a menu. You can make the menu hierarchical to avoid scrolling and to let programmers input faster as they learn the system (because they eventually navigate the hierarchy faster). On the other hand, context menus are less direct than drag-and-drop, so there is a usability tradeoff to be made.
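
A small sketch of point 3 in TypeScript (the types, labels, and palette are invented): once you tap the input site first, the editor knows the expected type and can shrink the menu to just the entries that fit, whereas a drag-first palette has to show everything.

```typescript
// Hypothetical sketch: context menus filtered by the static type expected at
// the tapped input site, versus a drag palette that cannot know the target yet.

type Ty = "Int" | "String" | "Bool";

interface PaletteEntry { label: string; produces: Ty; }

const palette: PaletteEntry[] = [
  { label: "0", produces: "Int" },
  { label: "length(s)", produces: "Int" },
  { label: "\"hello\"", produces: "String" },
  { label: "true", produces: "Bool" },
];

// Tap-first: the expected type of the input site is known, so the menu is short.
function contextMenu(expected: Ty): PaletteEntry[] {
  return palette.filter(e => e.produces === expected);
}

// Drag-first: the value is chosen before the target, so everything must be shown.
function dragPalette(): PaletteEntry[] {
  return palette;
}

console.log(contextMenu("Int").map(e => e.label)); // ["0", "length(s)"]
console.log(dragPalette().length);                 // 4 -- no filtering possible
```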

I'm currently working on a paper that talks about this; I'll post something in about a week or so that touches on this.

Wouldn't it be feasible to

Wouldn't it be feasible to touch a pair of elements and raise a relationship-oriented context menu?

I've seen some work done with a 'hand' metaphor rather than a 'mouse' metaphor. You can tag items, group them, and perform actions that relate elements within the group or to some external element. Strategy games use this approach a great deal, and a clipboard is a related concept.

You could drag one element

You could drag one element to another and then activate a context menu related to what you could do with those two elements. But those elements have to be close to each other, on the same screen even, for that to be efficient.

I didn't say anything about

I didn't say anything about 'dragging'. Just touching.

Ah, I found the initial reference that got me interested in the hands metaphor - Hand vs Pointer [c2].

Very interesting, thanks for

Very interesting, thanks for the reference!

I can see the potential of a hand; for touch, we could create slots in the chrome that you could place things into to build up hand context. It does seem a bit more complicated, but I'll have to think about it more.

Thanks

That's a pretty detailed analysis. Your paper sounds interesting, I'll look forward to seeing a link.

Nice

I'll post something in about a week or so that touches on this.

Nice pun!

Any progress on that paper?

Any progress on that paper? I'm very interested to read more. You're bringing up some good issues -- although palette-size issues are less of a concern on tablet devices, and the size of the palette can depend a lot on the syntactic complexity of the language. Anyway, I'd love to read more, and especially to hear what you think a *good* touch-oriented programming interface might be like.

(We've had good experiences with TurtleArt/TurtleBlocks -- which isn't touch-oriented, but is definitely palette oriented. Etoys and Scratch have also been very successful; both palette-oriented.)

The submission is due Friday, so

The submission is due Friday, so I'll post it after that.

I do think EToys and Scratch have been successful in their targeted domains, but not beyond that. They lack an efficiency that programmers tend to get only with keyboard languages. Getting back that efficiency on a touch-based device is one of my goals.

Found via

Found via reddit: Touch-friendly Real-time Programming System Coming to iPad.

It's YouTube, so I can't view it yet, but it sounds very interesting.