3-body Problem in Erlang
I've been learning Erlang, and trying to understand the pros and cons of its approach to concurrency, particularly for physical simulations that are computationally intensive and perhaps hard to parallelize. The press packets keep saying that Erlang makes it easy: if you want to model something that has n parts, just fork off n processes that each do the work of one part, communicating when they need to. This is by no means the only way to build a model of such a system, but it's what the Erlang evangelists tout. To start testing the wisdom of that, I made a simulation of a simple gravitational system. What follows are the tribulations of a neophyte, and there may well be better ways to do it. What's the upshot? I'm not sold on Erlang's ease of use for one-to-one modeling of systems.
The first conceptual problem I hit was the question of synchronization. For the sim to be a decent finite approximation to the continuous world of physics, we need to break it into discrete time steps, each of which depends on the previous one (at least, that's the only way I know of to get a fair approximation). This means that each particle can't just crunch away at its own speed, working as fast as it can to calculate its position at various times. The particles could easily drift out of sync with one another, making an inaccurate physical model. So right at the start, Erlang's asynchronous model seems to be a disadvantage.
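(Concretely, the update performed by the code below is a plain Euler step, with unit mass and a unit time step: v(t+1) = v(t) + the sum of the forces acting at time t, and x(t+1) = x(t) + v(t). Every particle's t+1 state depends on every particle's time-t state, which is exactly what forces the processes to march in lockstep.)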
Another problem is that in this particular model, every process depends on every other. As such, it would be nice if each process could simply fetch the needed information about the others. In Erlang, that's not trivial, since it would require each particle acting as a server, responding to requests for its position data, and each one acting as a client, sending off requests and waiting for responses. That sounded overly complicated, so I chose a different approach: each particle broadcasts its position as soon as it has been calculated. This basically dispenses with one half of the client-server interaction; but it means that each process can receive messages that it doesn't immediately care about, so it has to manage that information.
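To make the contrast concrete, here's a rough sketch of the two styles. This is illustration only, not part of the program below; the get_pos/pos_is request-reply messages and the styles module are invented, though the posof broadcast has the same shape as the real code.

-module(styles).
-export([request_positions/2, broadcast_position/5]).

% Client-server style: ask every neighbor for its position, then block
% until all of the replies have arrived.
request_positions(Neighbors, T) ->
    [P ! {get_pos, self(), T} || P <- Neighbors],
    collect(length(Neighbors), []).

collect(0, Acc) -> Acc;
collect(N, Acc) ->
    receive
        {pos_is, Sender, X, Y} -> collect(N - 1, [{Sender, X, Y}|Acc])
    end.

% Broadcast style (the approach I took): push your own position out as
% soon as it is known, and let each receiver do its own bookkeeping.
broadcast_position(Neighbors, Name, T, X, Y) ->
    [P ! {posof, Name, T, X, Y} || P <- Neighbors].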
I solved the synchronization problem by attaching a time stamp (in model time, not real time) to each position message. When it has enough information about time t, a process calculates its position for time t+1 and broadcasts a message that says, "My position at t+1 is (x, y)." The others capture this information, and when they know all they need to know about time t+1, they each calculate their position at time t+2, and so on ad nauseam.
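Stripped of all the bookkeeping, the rule each process follows is roughly the one sketched below. This is a simplified illustration, not the real code: it leans on Erlang's selective receive to leave early messages sitting in the mailbox, whereas the actual program further down manages them in explicit lists, and compute_next/2 is a stand-in for the physics.

-module(lockstep).
-export([loop/4]).

% A process at model time T announces its own time-T position, then waits
% for a time-T position from every other particle; messages stamped with a
% later time just sit in the mailbox (selective receive) until we get there.
loop(Name, T, MyPos, Others) ->
    [P ! {posof, Name, T, MyPos} || P <- Others],
    TheirPos = gather(T, length(Others), []),
    loop(Name, T + 1, compute_next(MyPos, TheirPos), Others).

gather(_T, 0, Acc) -> Acc;
gather(T, N, Acc) ->
    receive
        {posof, Sender, T, Pos} ->          % only matches messages for time T
            gather(T, N - 1, [{Sender, Pos}|Acc])
    end.

compute_next(Pos, _TheirPos) -> Pos.        % dummy update; physics omitted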
As I said, a process can receive messages that it doesn't care about yet. In the code, you'll see that each process keeps track of a list of "current" positions of particles, and also a list of "future" positions. In this particular model, I know that a process can only receive data that's at most one time step ahead of it. That's because a process that is "at" time t (i.e., it has not yet computed its t+1 position) cannot receive information about others' t+2 positions, because those t+2 positions would depend on its own t+1 position. That makes the future-data management a little easier in this case. A different model might not have that constraint, and would require better management of the future data.
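If that one-step-ahead guarantee didn't hold, one reasonable way to manage the future data would be to key the pending messages by timestep, say in a map. A rough sketch, with hypothetical helpers that are not used in the code below:

-module(pending).
-export([queue_future/4, take_step/2]).

% Stash a position report destined for a future timestep T.
queue_future(T, Sender, Pos, Pending) ->
    maps:update_with(T, fun(L) -> [{Sender, Pos}|L] end, [{Sender, Pos}], Pending).

% Pull out everything known about timestep T once we finally reach it.
take_step(T, Pending) ->
    {maps:get(T, Pending, []), maps:remove(T, Pending)}.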
This model is particularly pernicious, though, since it has total mutual interdependence. I'm interested in how well Erlang would work for big game worlds; maybe in those worlds each object has a small neighborhood of other objects that can affect it. But I expect that the coding style would be the same within that neighborhood. What's more, if objects can move around and change neighborhoods, then there will be the issue of managing the set of objects in each process's neighborhood. Is this unnecessary overhead?
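For what it's worth, the per-process bookkeeping needn't be elaborate; something like the sketch below, with enter/leave notifications coming from whatever part of the world tracks object positions, might suffice. The message names here are invented for illustration.

-module(neighborhood).
-export([update/2]).

% Maintain the set of nearby PIDs as objects move between neighborhoods.
update({enter, Pid}, Neighbors) -> ordsets:add_element(Pid, Neighbors);
update({leave, Pid}, Neighbors) -> ordsets:del_element(Pid, Neighbors).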
A final note about the paradigm: a lot of my work went into managing the messages, since they can be received at any time and in any order. The machine does a lot of work in handling the messages, too: at each time step there are only n distinct position updates, but the machine has to deliver n² messages (each particle sends its update to every other particle and to the monitor, so with n = 3 that's 9 deliveries per step). In this case, the particle positions are only written by one process, and they are read by all the others. A shared-memory approach might have an advantage here, since locking per se might not be needed.
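Within Erlang itself, the nearest analogue to that shared-memory approach is probably a public ETS table: each particle inserts its own row and reads everyone else's, and the per-step message traffic goes away. This is just a sketch of the idea; the synchronization question would still have to be solved separately, and none of it appears in the code below.

-module(shared).
-export([init_table/0, publish/4, read_all/0]).

% One public table, keyed by particle name; each particle only ever
% writes its own row, and single-object ETS operations are atomic.
init_table() ->
    ets:new(positions, [named_table, public, set]).

publish(Name, T, X, Y) ->
    ets:insert(positions, {Name, T, X, Y}).

read_all() ->
    ets:tab2list(positions).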
At last, the code. To try it out, start up erl, type c(atoms). to compile the module, and then atoms:start().
-module(atoms).
-export([start/0, particle_loop/10, monitor/2]).

% sum_vec()
% Sum a list of 2-D vectors, starting from the accumulator Init.
sum_vec([], Init) -> Init;
sum_vec([{X, Y}|Etc], Init) ->
    {Rx, Ry} = sum_vec(Etc, Init),
    {Rx + X, Ry + Y}.

% force()
% Calculate the force between two particles, assumed to be of the same mass
force(Ax, Ay, Bx, By) ->
    DistSq = ((Bx - Ax) * (Bx - Ax) + (By - Ay) * (By - Ay)),
    {0.0001 * (Bx - Ax) / DistSq, 0.0001 * (By - Ay) / DistSq}.

% timestep(X, Y, Vx, Vy, Others)
% Calculate the next position and velocity for a particle at X, Y,
% moving at velocity Vx, Vy, acted upon by the forces of the
% particles in Others
timestep(X, Y, Vx, Vy, Others) ->
    {NewVx, NewVy} = sum_vec(lists:map(fun({_N, Nx, Ny}) -> force(X, Y, Nx, Ny) end, Others),
                             {Vx, Vy}),
    {NewX, NewY} = sum_vec([{Vx, Vy}], {X, Y}),
    {NewX, NewY, NewVx, NewVy}.

% inform_pos()
% Inform another process about my position at this time
inform_pos(To, From, T, X, Y) ->
    To ! {posof, From, T, X, Y}.

% particle_loop()
% This routine is the main control loop for each particle process.
% It keeps track of the current time (according to this particle),
% the particle's own current position, a list of the other particle
% processes, and a set of messages that have been received from
% others, regarding their current and future positions.
%
% Synchronization is achieved by keeping this explicit time
% parameter, T.  No particle can process a time step until it gets
% all the messages for all the other particles for that time step.
% In the event it receives a message about a future time step, it
% queues that up in the ParticlesLater set.  Such a future message
% can never be for a time later than T+1, because the other particle
% must be waiting for this one to issue its position for T+1.
% A slightly more robust model would allow information to be received
% about any future time step at any time, and would carefully
% determine which information is relevant to the iteration it's
% presently calculating.
particle_loop(Name, T, X, Y, Vx, Vy, Monitor, Procs, ParticlesNow, ParticlesLater) ->
    receive
        % the 'neighbors' message just lets a process know the PIDs of all
        % the other particles.
        {neighbors, MsgProcs} ->
            Others = [P || P <- MsgProcs, P =/= self()],
            [inform_pos(P, Name, T, X, Y) || P <- [Monitor|Others]],
            particle_loop(Name, T, X, Y, Vx, Vy, Monitor, Others, ParticlesNow, ParticlesLater);
        {posof, Sender, MsgT, OtherX, OtherY} ->
            if
                MsgT == T ->
                    NowParticlesNow = [{Sender, OtherX, OtherY}|ParticlesNow],
                    if
                        length(NowParticlesNow) == length(Procs) ->
                            {NewX, NewY, NewVx, NewVy} =
                                timestep(X, Y, Vx, Vy, NowParticlesNow),
                            [P ! {posof, Name, T+1, NewX, NewY} || P <- [Monitor|Procs]],
                            particle_loop(Name, T+1, NewX, NewY, NewVx, NewVy,
                                          Monitor, Procs, ParticlesLater, []);
                        true ->
                            particle_loop(Name, T, X, Y, Vx, Vy, Monitor, Procs,
                                          NowParticlesNow, ParticlesLater)
                    end;
                MsgT == T+1 ->
                    particle_loop(Name, T, X, Y, Vx, Vy, Monitor, Procs, ParticlesNow,
                                  [{Sender, OtherX, OtherY}|ParticlesLater]);
                true ->
                    io:format("posof recd, no match\n")
            end;
        Msg ->
            io:format("rec'd unknown message"),
            io:write(Msg),
            io:nl()
    end.

start() ->
    Monitor = spawn(atoms, monitor, [0, []]),
    Procs = [spawn(atoms, particle_loop, [N, 0, N, 0, 0, 0, Monitor, [], [], []])
             || N <- [1, 2, 3]],
    [P ! {neighbors, Procs} || P <- Procs],
    io:format("initialized\n").
% monitor()
% Collects the position reports for one time step from all the particles
% and prints them as a single line, sorted by particle name.
monitor(T, State) ->
    receive
        {posof, Particle, _PT, X, Y} ->
            NewState = [{Particle, X, Y}|State],
            if
                length(NewState) == 3 ->    % fixme: assumes exactly 3 particles
                    Particles = lists:keysort(1, NewState),
                    Posns = [{Px, Py} || {_Name, Px, Py} <- Particles],
                    io:format("~p\t~p\t~p\n", Posns),
                    monitor(T+1, []);
                true ->
                    monitor(T, NewState)
            end
    end.