Remora: An Array-Oriented Language with Static Rank Polymorphism
The paper: An Array-Oriented Language with Static Rank Polymorphism, Justin Slepak, Olin Shivers, and Panagiotis Manolios, Northeastern University, 2014.

Abstract. The array-computational model pioneered by Iverson’s languages APL and J offers a simple and expressive solution to the “von Neumann bottleneck.” It includes a form of rank, or dimensional, polymorphism, which renders much of a program’s control structure implicit by lifting base operators to higher-dimensional array structures. We present the first formal semantics for this model, along with the first static type system that captures the full power of the core language.
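The lifting the abstract describes is easy to imitate in miniature. Below is a rough Haskell sketch (my own illustration, not Remora syntax and not the paper's semantics), with "rank" approximated by list nesting depth and the helper names liftVec, liftMat, and extend invented for the example:

```haskell
-- Sketch only: a scalar operator applied to higher-"rank" arguments is
-- mapped over their cells, so the control structure (the loops) stays implicit.

-- Lift a scalar binary operator to vectors and to matrices.
liftVec :: (a -> b -> c) -> [a] -> [b] -> [c]
liftVec = zipWith

liftMat :: (a -> b -> c) -> [[a]] -> [[b]] -> [[c]]
liftMat = zipWith . zipWith

-- "Scalar extension": a scalar argument is replicated across the frame of
-- the higher-rank argument, as in APL/J.
extend :: (a -> b -> c) -> a -> [[b]] -> [[c]]
extend f x = map (map (f x))

main :: IO ()
main = do
  print (liftVec (+) [1,2,3] [10,20,30])           -- [11,22,33]
  print (liftMat (*) [[1,2],[3,4]] [[5,6],[7,8]])  -- [[5,12],[21,32]]
  print (extend (+) 100 [[1,2],[3,4]])             -- [[101,102],[103,104]]
```

In Remora, as in APL and J, the lifting is driven by the arguments' actual ranks and checked by the type system rather than hard-coded per nesting depth; the sketch only shows the flavor of the implicit control structure.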
Binary Representation - Is it something to rise above?

There is a tension in programming language design. Are we describing operations, or are we describing semantics? I have a strong opinion that these are separate spheres - that representation should be either fully above or fully below the language's native level of abstraction - and that it is therefore hard for a language to do both well.

At one extreme, C, Pascal, Fortran, etc. -- all those old classic statically compiled languages -- are primarily for describing machine operations, and specifically machine operations on bits and bytes. To the extent that they have type systems at all, the types each refer to a specific range of memory - bits and bytes - mapped directly to a template that specifies the bit-level encoding of each value.

At the other end is R4RS Scheme - I think that was the last version this mostly applied to. R4RS Scheme was, aside from a few warts, a language in which caring how something was represented in bits and bytes just wasn't a relevant question. The programming model was a simple one in which things were values, or containers for values, and if you cared how a character or a number was represented in bits, it could only be because you were making a mistake. Numbers were numeric values, not a specific number of bits with a specific mapping of bit patterns to numbers. Characters were character values, not a specific number of bits with a specific mapping; they had no numeric values. In principle you didn't know, or even care, whether you were working on a machine where everything was represented in nine-trit "trytes" at the bottom level; there was just no reference to representation anywhere in the language's semantics, because the abstraction barrier was drawn above the level of representation.

I recently observed that C was a language for describing machine operations, and Scheme a language for describing semantics, and this is what I meant. The C programming model makes no sense if types don't represent specific memory-aligned areas with specific bit-pattern-to-value mappings. R4RS Scheme's programming model would have made no sense if they did. They operate at entirely different levels of abstraction.

Another topic recently asked what the characteristics of a good successor language to Scheme, or of a future direction for Scheme, would be - and I think that representational abstraction barrier is an important property of any high-level language, one which ought to be preserved. If you're not building a binary interface to something (like hardware or a communications channel that's specified in binary), and you're not working in a hardware-oriented language where those units and specific mappings of bit patterns to values ARE the sole underlying paradigm of your whole type system, and yet the language still forces you to care how values are represented as bits, then I submit that there is a deficiency in the design of the language.
Conversely (and here R4RS Scheme failed horribly), in such a language, when you *are* building an interface to something, the patterns of bits you read or write should get interpreted according to a very specific mapping. A language that abstracts above representation issues for other operations cannot be used to interface with anything unless the programmer specifies explicitly what mapping of bits to values is to be used for I/O. And this is because you're interfacing with things that *DON'T* have a common language for communication that abstracts above the level of representation, so you can't keep the abstraction barrier above that level for the channel itself.

A particularly egregious failure of this principle happened in a graphics library written in Scheme. Scheme provided no 'blob' type, and there was no way to specify a bits-to-values mapping on the I/O channels, so the implementor used strings to hold bytes, assuming that the I/O would be done in ASCII + code page 1386. And of course the code exploded when used on an interface where reading and writing characters was by default done with a different code page or in UTF-16.

So... in designing a programming language to operate above the representation abstraction barrier, I would absolutely ensure that the bit and byte width of representations entered into the language semantics only when invoked specifically for I/O purposes. If the word "bit" came up in any chapter of the documentation other than the one about how to do I/O, I would be certain that I had made a mistake. If it were used referring to something the programmer actually needs to know to get useful work done, I would be certain that the mistake was egregious. And this is why I am so completely certain that "uniform vectors" are a mistake in Scheme: they mean the programmer has to care about binary representation for some reason other than I/O.

Good books on theoretical aspects of type theory as it applies to computer science and languages

I was wondering if anyone could recommend some good books on the more theoretical aspects of type theory as it pertains to computer science and the guarantees it provides w.r.t. certain properties in language theory. The only books I know of are the two by Pierce, but they seem more on the applied side, focusing on very specific applications of various theories that in many cases have a long history not really covered by either book. Are there better books covering some of the theory more in depth? Thanks.

By Carter Cheng at 2016-04-06 19:47
Some questions concerning P != NP

I am actually kind of perplexed by this. My understanding is that if P != NP, there should exist problems in NP that are not in P and yet are not NP-complete. But is the reverse also true? That is, if there were a problem that is in NP but not in P and not NP-complete, would this imply that P != NP? For example, it would seem to me at first blush that certain problems do fall in this class if one allows for imperfect knowledge of subroutine code (which occurs in practice). Password cracking would be one example, assuming the verification subroutine runs in polynomial time.

1. It is in NP, since one can nondeterministically guess all strings and verify each in polynomial time.

My understanding of this subject is limited, but it would seem that this problem is guaranteed to take polynomial-times-exponential time on a TM and polynomial time on an NTM, and that there is no room for improvement on a TM - so why doesn't this show that P != NP? It seems like something that must have been considered by many people. Thanks in advance.

P.S. I asked a CS professor already and he wasn't sure about it, since it isn't his specialty. So I thought it might be worth asking on a general forum.

Best successor to Scheme?

I do not mean to start a completely subjective navel-gazing thread. I mean to learn, "if I loved Scheme when I learned it (and I really did, back in college in 1989), what else could I look into in these modern times?" (apropos the other current Scheme thread). If you have thoughts, please mention something like:

- the language name,
- the language web site,
- the bullet-point reasons you think it is at all related to where Scheme comes from spiritually,
- the bullet-point reasons you think it is "better" than Scheme,
- the bullet-point cons (if any, ha ha!) of the language.

(Or tell me this is a very bad idea for a thread, and delete it...) Wishing to simultaneously learn and yet avoid any rat-holing flame-wars. :-)

IEEE Scheme expiring soon

Scheme has an IEEE standard, IEEE 1178-1990, which describes a version of the language slightly later than R4RS. I see three reasonable courses of action:

1) Do the work to make R7RS-small the new edition of IEEE Scheme.
2) Do the (lesser amount of) work to reaffirm IEEE Scheme unchanged for the second time.
3) Do nothing and allow the IEEE standard to expire.

Does anyone still care about having an official, citable standard for Scheme? (When I brought the question up on #scheme, someone asked what R7RS-small implementations exist. Currently there are Chibi, Chicken (partial), Foment, Gauche, Guile (partial), Husk, Kawa, Larceny, Mosh (partial), Picrin, Sagittarius.)

SFI Talk: Four the hard way: Computer design and living software

Just a funny talk I found on the Interwebs by Dave Ackley: SFI Talk: Four the hard way: Computer design and living software. I don't think his ideas are particularly new, but it's good to see them stated directly in a talk for once. The end result is something seemingly tangential to what is possible at the moment: scalable hardware with a programming language on top.

By marco at 2016-04-02 21:54
Best opening sentence ever, in a paper on S-Exprs for IDEs

The paper Haskell-Like S-Expression-Based Language Designed for an IDE appeals to me at the moment because I actually like the regularity of s-exprs (to side-step the inevitable technopsychobabble corners people paint themselves into otherwise) but still really want static type checking (with some inference, probably). The intro is great and I won't spoil it; here's what comes just after the opening jab.
Plus, any paper that mentions Shen and Elm and Flow et al. can't be all bad. P.S. I think this video is very nice, too. (Some extra keywords: xixixao, Shem, Golem.)

¿How can a dynamically typed language not actively prevent static checking?

With all due serious respect to TypedRacket, TypedLua, Dialyzer, TypeScript, Flow, et al., I see most if not all attempts to add some form of static type checking to a dynamic language as indictments of the whole venture. Sure, they might get close to having something, but the something is so far away from 'the goal' that it is proof of the tilting-at-windmills nature of it all - from my Day Job Joe Programmer Static Typing Bigot perspective. Cf. the long and complicated list of Flow issues on GitHub.

I realize that building a language with typing from the ground up is empirically hard to do, or empirically hard to conceive of doing, anyway. Thus, instead of asking for Magic Pixie Dust Retroactive Static Typing libraries, I think I would like to ask: how can dynamic language creators at least make their dynamic language more rather than less amenable to some possible future static typing analysis? (Of course the answer is to use some static typing engine to lint the grammar/possible productions/whatever, which is obvious and sad and ironic and undermines any such dream ever being fruitful.) Or should we just admit that anybody who thinks we can have our dynamic cake and eat it too is barking up the wrong tree?

Feedback requested: A sample implementation of L-systems in Haskell

A while back I published a draft of an article I wrote to explore L-systems, specifically D0L systems, and would very much enjoy any feedback (positive or negative) on what I have written: A sample implementation of L-systems in Haskell. Primarily, the content that gives me pause is the mathematics revolving around morphic words, as I have yet to fully grok them in any greater sense than the one I have used in my implementation. Please critique my work and point out any errors. Kind regards,
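As context for D0L systems (a minimal sketch of my own, not code taken from the article above): the rewriting rules form a pure function from symbol to replacement word, and each generation rewrites every symbol of the current word in parallel. The algae rules (A -> AB, B -> A) are Lindenmayer's classic example.

```haskell
-- A minimal D0L system: deterministic, context-free rewriting of every
-- symbol of the current word in parallel.

-- Lindenmayer's algae example: A -> AB, B -> A.
algae :: Char -> String
algae 'A' = "AB"
algae 'B' = "A"
algae c   = [c]  -- symbols without a rule are copied unchanged

-- One rewriting step applies the rules to every symbol at once.
step :: (Char -> String) -> String -> String
step rules = concatMap rules

-- The (infinite) sequence of generations from an axiom.
generations :: (Char -> String) -> String -> [String]
generations rules = iterate (step rules)

main :: IO ()
main = mapM_ putStrLn (take 6 (generations algae "A"))
-- A, AB, ABA, ABAAB, ABAABABA, ABAABABAABAAB
```

The word lengths here follow the Fibonacci numbers, which is one of the standard entry points into the combinatorics-on-words (morphic word) material the post mentions.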