Why I Like PLT Scheme
started 3/19/2004; 8:02:18 AM - last post 3/26/2004; 10:20:42 AM
Chris Rathman - Why I Like PLT Scheme
3/19/2004; 8:02:18 AM (reads: 10744, responses: 19)
Why I Like PLT Scheme
So, we've written a multithreaded port scanner, nicely bundled up as a library for use in our other programs. In the process we've touched on a lot of my favorite features of Scheme (and PLT Scheme in particular): its clever loop constructs, its higher-order functions, concurrency, macros, modules, and contracts.
Scheme is fresh on my mind, as I'm over the halfway point in the SICP videos (part 6a on Streams is the best one thus far). The author of the K5 article is no Abelson or Sussman, but does try to make the language accessible.
Posted to general by Chris Rathman on 3/19/04; 8:03:53 AM
Luke Gorrie - Re: Why I Like PLT Scheme
3/19/2004; 1:31:46 PM (reads: 1300, responses: 0)
Very good!
Ehud Lamm - Re: Why I Like PLT Scheme
3/19/2004; 2:49:18 PM (reads: 1285, responses: 3)
The highlight seems to be higher-order functions. Not really unique to Scheme or anything.
Jacob Matthews - Re: Why I Like PLT Scheme
3/20/2004; 7:01:24 PM (reads: 1101, responses: 2)
True enough, Ehud. I wrote the article mainly for people who only ever use mainstream languages, as the intro hopefully conveyed. I certainly didn't mean to give the impression that PLT Scheme was the only language ever to have higher-order functions or anything like that, just that having them was one of the reasons I liked it. Definitely "Why I Like PLT Scheme Better than GHC and SML/NJ" would be a more interesting article to the sort of people who read LtU, but that's an article for another day. :)
Ehud Lamm - Re: Why I Like PLT Scheme
3/21/2004; 12:45:22 PM (reads: 1002, responses: 1)
"Why I Like PLT Scheme Better than GHC and SML/NJ" would be a more interesting article...
That wasn't my point. I was reflecting on the fact that higher-order functions are now a common technique in mainstream languages (e.g., delegates).
Dominic Fox - Re: Why I Like PLT Scheme
3/22/2004; 3:06:02 AM (reads: 934, responses: 0)
Actually using C#'s delegates the way one often wants to use a h.o.f. in Scheme (e.g. with some value bound inside a closure) is still a pain, though. Compare the following:
(define get-multiplier
  (lambda (x)
    (lambda (y)
      (* x y))))
and
public delegate int MultiplierDelegate(int y);

public class Multiplier
{
    private int x;

    private Multiplier(int x) { this.x = x; }
    private int Multiply(int y) { return x * y; }

    public static MultiplierDelegate GetDelegate(int x)
    {
        Multiplier newMultiplier = new Multiplier(x);
        return new MultiplierDelegate(newMultiplier.Multiply);
    }
}

[...]

[Test]
public void TestMultiplier()
{
    MultiplierDelegate m = Multiplier.GetDelegate(6);
    System.Console.WriteLine(m(9));
}
Does anyone know a cleaner (pre-Whidbey) equivalent for the C# code?
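(For reference, Whidbey's anonymous methods would collapse most of this boilerplate into roughly the following sketch - the compiler generates the closure class behind the scenes - which is exactly why I'm asking about pre-Whidbey:)

public delegate int MultiplierDelegate(int y);

public class Multiplier
{
    // C# 2.0 anonymous method: the captured x ends up in a
    // compiler-generated closure, much like the Scheme lambda above.
    public static MultiplierDelegate GetDelegate(int x)
    {
        return delegate(int y) { return x * y; };
    }
}

So Multiplier.GetDelegate(6)(9) gives 54, just like ((get-multiplier 6) 9).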
Luke Gorrie - Re: Why I Like PLT Scheme
3/22/2004; 6:00:53 AM (reads: 879, responses: 2)
What happens if you spawn 65535 port-probing threads in C#?
Bryn Keller - Re: Why I Like PLT Scheme
3/22/2004; 9:24:31 AM (reads: 811, responses: 0)
Why would you do that, knowing that .Net uses heavyweight threads? I imagine you'd use the (included) asynchronous IO libraries to solve this particular problem.
If you spawn 65535 threads in Erlang, are they all condemned to run on a single processor? They are in PLT Scheme, for sure.
Tradeoffs...
Dominic Fox - Re: Why I Like PLT Scheme
3/22/2004; 10:05:36 AM (reads: 801, responses: 0)
I wrote a port-scanner in C# to find out. Result: it runs out of memory. That's with a gig of RAM, btw. 1000 threads at a time goes quite nicely tho'.
(Added) I did post the code here, but then thought better of it - chiefly because it was lousy.
(PPS) The asynchronous I/O stuff in .Net is actually still thread-based (every callback happens on a separate thread allocated from the thread pool), and I'm quite mystified as to what the underlying mechanism is. An asynchronous send on a socket, for instance, seems to write the outgoing data to a system buffer somewhere and then call you back on a pooled thread once it has actually been sent. I've yet to see a really clear explanation of how these libraries behave.
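(For concreteness, the pattern I mean is the Begin/End callback style - a bare sketch of a single async connect, not the scanner itself, and what happens underneath it is precisely the part I can't find documented:)

using System;
using System.Net;
using System.Net.Sockets;

class AsyncConnectSketch
{
    static void Main()
    {
        Socket s = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream, ProtocolType.Tcp);
        // BeginConnect returns immediately; the callback fires later
        // on a thread-pool thread once the connection attempt completes.
        s.BeginConnect(new IPEndPoint(IPAddress.Loopback, 80),
                       new AsyncCallback(OnConnected), s);
        Console.ReadLine(); // keep the process alive long enough to see the result
    }

    static void OnConnected(IAsyncResult ar)
    {
        Socket s = (Socket) ar.AsyncState;
        try
        {
            s.EndConnect(ar);               // throws if the connect failed
            Console.WriteLine("Port 80 is open.");
        }
        catch (SocketException)
        {
            Console.WriteLine("Port 80 is closed.");
        }
        finally { s.Close(); }
    }
}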
Luke Gorrie - Re: Why I Like PLT Scheme
3/23/2004; 6:43:52 AM (reads: 646, responses: 0)
I didn't particularly mean to knock C#'s threads (of which I know nothing), but to draw attention to the rather stylish use of concurrency in the Scheme program. Higher-order functions or no, there are few languages that let you write code like that. I was impressed.
I also think that the program does take advantage of SMP. All it does is make a series of system calls, so that the SMP-savvy kernel does the real work. But I don't think it's an issue for this program one way or the other: what matters is sending the SYN packets in parallel and being prepared to receive the replies in any order.
Dominic, I'd be curious to see the C# code, lousy or not :-). I've never seen a C# program.
Dominic Fox - Re: Why I Like PLT Scheme
3/23/2004; 7:33:31 AM (reads: 633, responses: 0)
Here it is minus the egregiously incorrect threading behaviour (that I spotted - there may yet be more...):
// project created on 22/03/2004 at 17:04
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class MainClass
{
    public const int THREAD_MAX = 1024;

    public static void Main(string[] args)
    {
        Thread myThread;
        PortScanner myScanner = new PortScanner(IPAddress.Loopback);
        System.Console.Write("Starting port-scanning threads");
        for (int i = 0; i < THREAD_MAX; i++)
        {
            myThread = new Thread(new ThreadStart(myScanner.Scan));
            myThread.Start();
        }
        myScanner.ScannedAllPorts.WaitOne();
        System.Console.WriteLine();
        for (int i = 0; i < THREAD_MAX; i++)
        {
            if (myScanner.Ports[i]) System.Console.WriteLine("Port " + i + " is open.");
        }
    }

    private class PortScanner
    {
        private IPAddress remoteHost;
        private bool[] ports;
        private int portsScanned;
        private int portToScan;

        public bool[] Ports
        { get { return ports; } }

        public AutoResetEvent ScannedAllPorts = new AutoResetEvent(false);

        public PortScanner(IPAddress remoteHost)
        {
            this.remoteHost = remoteHost;
            Reset();
        }

        public void Reset()
        {
            portsScanned = 0;
            portToScan = 0;
            ports = new bool[THREAD_MAX];
        }

        public void Scan()
        {
            int currentPort = Interlocked.Increment(ref portToScan) - 1;
            TcpClient c = new TcpClient();
            try
            {
                c.Connect(remoteHost, currentPort);
                ports[currentPort] = true;
            }
            catch {}
            c.Close();
            System.Console.Write(".");
            int portsScanned = Interlocked.Increment(ref this.portsScanned);
            if (portsScanned == THREAD_MAX) this.ScannedAllPorts.Set();
        }
    }
}
Looks a lot like Java, doesn't it?
Luke Gorrie - Re: Why I Like PLT Scheme
3/23/2004; 7:41:29 AM (reads: 633, responses: 3)
If you spawn 65535 threads in Erlang, are they all condemned to run on a single processor?
Certainly not! They're not even confined to run on the same machine.
But a distinction should be drawn between concurrent programming style and parallel execution, because they are separate problems. Erlang/CML/MzScheme concurrency is about writing programs that do a lot of things at once, the same problems that other languages often tackle with asynchronous libraries. It is not in itself about squeezing maximum performance out of multiprocessor hardware.
That said, Erlang is no slouch for multiprocessor execution, whether those processors are on the same machine or not. Below is a distributed version of this port scanner: you say which host to scan, which ports to scan, and a list of machines to scan from. If one of those machines has several CPUs, just include its name in the list several times. For example, to scan the first thousand ports of "dodo" using one node on that machine and two on its neighbour "kookaburra":
(erlang@dodo)11> scan:run("dodo", lists:seq(1, 1000), ["dodo", "kookaburra", "kookaburra"]).
[9,13,21,22,23,25,37,79,80,111,113,139,199,512,513,514,515,720,955]
The code is not the shortest imaginable, but I think it's not bad given how much it's doing.
-module(scan).
-compile(export_all).
-import(lists, [split/2, sublist/3, nthtail/2, flatmap/2]).

%% Return the list of the Ports on TargetHost that are open.
%% Scan concurrently from all of SourceHosts.
run(TargetHost, Ports, SourceHosts) ->
    Nodes = start_worker_nodes(SourceHosts),
    upload_scanner_code(Nodes),
    OpenPorts = scan_open_ports(Nodes, TargetHost, Ports),
    stop_worker_nodes(Nodes),
    OpenPorts.

%% Scan open Ports of Target with load sharing between Nodes.
scan_open_ports(Nodes, Target, Ports) ->
    NodeAllocations = share(Ports, Nodes),
    flatmap(fun({Node, PortsToCheck}) ->
                    order_portscan(Node, Target, PortsToCheck)
            end, NodeAllocations).

%% share(List, Recipients) -> [{Recipient, SubList}]
%% Share List equally amongst Recipients.
%% Example: share([1,2,3,4,5], [a, b, c]) => [{a, [1,2]}, {b, [3,4]}, {c, [5]}]
share([], _) ->
    [];
share(List, R = [H|T]) ->
    {SubList, Rest} = split(length(List) div length(R), List),
    [{H, SubList} | share(Rest, T)].

%%
%% Distribution and worker-node management

%% Start slave (worker) nodes on Hosts. Return the list of node names.
start_worker_nodes(Hosts) ->
    start_slaves(Hosts, 1).

start_slaves([Host|Hosts], Num) ->
    {ok, Node} = slave:start_link(Host, "slave" ++ integer_to_list(Num)),
    [Node | start_slaves(Hosts, Num+1)];
start_slaves([], _) -> [].

stop_worker_nodes(Nodes) -> rpc:multicall(Nodes, erlang, halt, []).

%% Load the current module onto Nodes.
upload_scanner_code(Nodes) ->
    {Module, Code, Filename} = code:get_object_code(?MODULE),
    rpc:multicall(Nodes, code, load_binary, [Module, Filename, Code]).

%% Order Node to find open Ports on Target. Return the list of open ports.
order_portscan(Node, Target, Ports) ->
    rpc:call(Node, ?MODULE, portscan, [Target, Ports]).

%%
%% Networking

portscan(Target, Ports) ->
    Flags = parmap(fun(Port) -> port_is_open(Target, Port) end, Ports),
    collect(Ports, Flags).

%% Predicate: Is Port of Host open?
port_is_open(Host, Port) ->
    case gen_tcp:connect(Host, Port, [], 5000) of
        {ok, Socket} -> gen_tcp:close(Socket), true;
        {error, _}   -> false
    end.

%%
%% Utility

%% Map function F over list L in parallel.
parmap(F, L) ->
    Parent = self(),
    [receive {Pid, Result} -> Result end
     || Pid <- [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- L]].

%% collect(List, FlagList) -> Sublist
%% Return each element of List for which the corresponding FlagList
%% element is true.
collect([], []) -> [];
collect([H|T], [true|Flags]) -> [H|collect(T, Flags)];
collect([H|T], [false|Flags]) -> collect(T, Flags).
Jacob Matthews - Re: Why I Like PLT Scheme
3/23/2004; 9:13:52 AM (reads: 607, responses: 0)
a distinction should be drawn between concurrent programming style and parallel execution, because they are separate problems. Erlang/CML/MzScheme concurrency is about writing programs that do a lot of things at once, the same problems that other languages often tackle with asynchronous libraries. It is not in itself about squeezing maximum performance out of multiprocessor hardware.
I think this point deserves to be more widely understood than it is, so I thought I'd highlight it. 'Parallelism' is about making algorithms faster by taking advantage of fancy hardware, and in an ideal world the mythical Sufficiently Smart Compiler would take care of it entirely. 'Concurrency,' on the other hand, is about expressing naturally nondeterministic algorithms in a clean way, and has nothing in particular to do with fancy hardware or compiler optimizations. I've seen a lot of smart people get confused over this point.
(As a sidenote, I'm really excited to see people take such an interest in this article, which I thought was a little too basic for this crowd. Thanks for your interest, everybody!)
Luke Gorrie - Re: Why I Like PLT Scheme
3/23/2004; 9:29:27 AM (reads: 589, responses: 0)
Thanks for posting the code Dominic! It does indeed look a lot like Java :-). I guess the ref business is the "explicit boxing" I've heard about?
Bryn Keller - Re: Why I Like PLT Scheme
3/23/2004; 10:30:35 AM (reads: 571, responses: 1)
Certainly not! They're not even confined to run on the same machine.
This is a very interesting example, thanks!
To my way of thinking, though, the answer to my question, "If you spawn 65535 threads in Erlang, are they all condemned to run on a single processor?" is really "Yes, they are, but distributed programming in Erlang is so easy that you'd load the same code into different processes and/or machines and start some of the threads there instead." That is, one Erlang process (OS process, I mean - doesn't Erlang call its threads 'processes' too?) can spawn a huge number of threads, but they'll all run on the same processor: they're not OS threads, so the OS won't distribute the load over multiple processors. Of course you can call up other OS processes and tell them to work on the same problem. That's fine, and I think the Erlang approach is excellent; I just want to be clear.
The question I was really trying to ask might have been better put as, "I know Erlang provides its own threading, but can it use more than one OS thread per OS process to run all those userspace threads?"
The reason I ask is that I sometimes wonder whether it's better to support cheap userspace threads (like Erlang, PLT, OCaml, etc.) and make people who want to write interfaces to C responsible for preventing all 65535 threads from being paused when one of them calls into the C library, or whether it's better to do what's sometimes called M*N threading, that is, using M OS threads to run N user threads. It seems to me that M*N threading would make life simpler for the users of the language (calls to C would automatically not suspend all the threads, all processors would automatically be available for use, and so on). I'm not aware of any language implementations that actually provide this, however, so I'm assuming the barriers to implementation must be formidable.
Do you know of any materials which might shed light on this question?
Were M*N threads considered but rejected for Erlang, for instance?
Dominic Fox - Re: Why I Like PLT Scheme
3/23/2004; 2:01:25 PM (reads: 527, responses: 0)
ref is for passing by reference (by value is the default). It has a somewhat peculiar meaning when what you're passing is a reference type. See MSDN for clarification (sort of).
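To illustrate (a toy example, nothing to do with the scanner): Interlocked.Increment takes a ref parameter because it has to update the caller's variable in place, atomically.

using System;
using System.Threading;

class RefDemo
{
    // The parameter is a copy of the argument; assigning to it
    // has no effect on the caller's variable.
    static void ByValue(int n) { n = 42; }

    // The parameter is an alias for the caller's variable.
    static void ByRef(ref int n) { n = 42; }

    static void Main()
    {
        int x = 0;
        ByValue(x);
        Console.WriteLine(x);   // still 0
        ByRef(ref x);
        Console.WriteLine(x);   // now 42

        // Interlocked.Increment needs ref for the same reason:
        // it must update the caller's variable in place, atomically.
        Interlocked.Increment(ref x);
        Console.WriteLine(x);   // 43
    }
}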
Luke Gorrie - Re: Why I Like PLT Scheme
3/24/2004; 9:02:06 AM (reads: 391, responses: 0)
The question I was really trying to ask might have been better put as, "I know Erlang provides its own threading, but can it use more than one OS thread per OS process to run all those userspace threads?"
You're right that I creatively interpreted your question for my own ends :-). Literally you are quite correct regarding the standard implementation of Erlang.
To interface C code with Erlang you typically write a separate C program and talk to it via a pipe. This is an excellent fit with the Erlang "shared nothing" philosophy of code isolation. Erlang has a port type that makes those programs look a lot like Erlang processes.
Adding C code to the Erlang runtime system is definite Real Man territory, and most people never do it. You do have to make everything absolutely non-blocking. If you really want to, you can use threads for your own purposes: the "file driver" does, due to the lack of non-blocking system calls for file I/O.
A multithreaded Erlang runtime system has been implemented for research, but it was never merged into the main system. It's a beautiful fit with Erlang as a language, but I don't think there is huge practical demand for it. I have heard rumour that someone is going to have another go at this soon though.
Patrick Logan - Re: Why I Like PLT Scheme
3/25/2004; 7:24:04 AM (reads: 313, responses: 0)
Adding C code to the Erlang runtime system is definite Real Man territory, and most people never do it. You do have to make everything absolutely non-blocking. If you really want to, you can use threads for your own purposes: the "file driver" does, due to the lack of non-blocking system calls for file I/O.
This is similar for other mission-critical or high-performance systems. For example, the Gemstone Smalltalk persistent application server can run C code in the same process, but this is almost never recommended because someone's bad C can screw up all the nice "managed code" (as the kids call it these days). Screwing up someone's C# text editor is one thing, but screwing up a bank transaction or a 911 phone call is something else.
Christopher Hendrie - Re: Why I Like PLT Scheme
3/25/2004; 8:14:17 PM (reads: 285, responses: 0)
My understanding is that GHC's implementation of Concurrent Haskell supports a "pool" of OS threads (at least experimentally) and a much larger number of user threads, which I believe is the M*N threading you're looking for.
I think there was some relevant discussion associated with Simon Marlow's Haskell Web Server (ICFP 2000?), but I'm not sure how far things have progressed since then.
GordonWeakliem - Re: Why I Like PLT Scheme
3/26/2004; 10:20:42 AM (reads: 240, responses: 0)