A future is a programming mechanism for doing some computation on the side while the main program goes on. One construct that appears in this context is the "promise", which represents a handle for the computation on the side. In Python this could look like:
```python
def side_computation():
    return do_stuff()

promise = future(side_computation)
do_other_stuff()
result = promise.force()
```
The side_computation function will do stuff and take some time, but the main program flow doesn't need the result at once, so side_computation is put into a future and the returned promise is kept while other stuff is being done. After a while the result of the side computation is needed, so the value is forced out of the promise.
On a multicore computer do_stuff and do_other_stuff can be processed in parallel. Futures are a simple abstraction for concurrent computation. The upcoming C++0x standard, for example, will include them to make C++ a better platform for concurrent programming, but the design isn't perfect yet.
Suppose you start several threads to perform calculations or retrieve data in parallel. You want to communicate with those threads using futures. And here’s the problem: you may block on any individual future but not on all of them. While you are blocked on one, other futures might become ready.
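To make the problem concrete, here is a small sketch using Python's standard concurrent.futures module (a later addition to the language than the hypothetical future/force API above). The function names slow and fast are made up for this illustration:

```python
import concurrent.futures
import time

def slow():
    time.sleep(0.5)
    return "slow"

def fast():
    return "fast"

with concurrent.futures.ThreadPoolExecutor() as pool:
    slow_future = pool.submit(slow)
    fast_future = pool.submit(fast)
    # We can only block on one future at a time. Blocking on the slow
    # one first wastes the whole time during which the fast one is
    # already done.
    first_result = slow_future.result()   # blocks for ~0.5 s
    second_result = fast_future.result()  # was ready long ago
```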
Bartosz Milewski complains about the lack of composability. I'd like to propose an outline of how this can be implemented, so that code like the following can be written:
```python
promises = [future(x) for x in list_of_computations]
first_promise = force_any(promises)
```
Suppose there is a list of computations and every computation is handled by a future, so there is now a list of promises. We'd like to implement the force_any function, which returns the first promise that finishes its computation. A naive active-waiting function could just loop over the promises until one finishes, but this burns CPU cycles that could be put to better use. A solution that blocks the waiting thread is preferred.
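For contrast, the naive active-waiting version could look like this (a sketch; it only assumes each promise exposes the finished flag used throughout this article):

```python
import time

def force_any_naive(promise_list):
    # Busy-wait: repeatedly poll every promise until one is finished.
    # This burns CPU cycles that could be put to better use.
    while True:
        for promise in promise_list:
            if promise.finished:
                return promise
        time.sleep(0)  # yield the CPU briefly, but keep polling
```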
```python
def force_any(promise_list):
    s = Semaphore()
    for promise in promise_list:
        promise.register_wake_up(s)
        if promise.finished:
            return promise
    s.down()  # block here
    for promise in promise_list:
        if promise.finished:
            return promise
```
We use the common semaphore mechanism to do the blocking and unblocking. A new semaphore s is created and registered with every promise. Afterwards the down() method call blocks the current thread until a finished promise wakes it up again. The last part is to find out which promise woke us and return that one. This code isn't perfect, since it isn't guaranteed that the promise that woke us up is the one that is returned, but that doesn't really matter for the behavior of force_any: any finished promise is a valid result.
Note the check for finished promises directly after registering the semaphore. It covers the case that a promise finished before its semaphore was registered: such a promise will never call up() on it, so without the check down() could block forever.
This algorithm can't be built on top of an arbitrary futures implementation, because it depends on the semaphore registering. After its computation ends, the future needs to do this:
```python
try:
    result = side_computation()  # takes a long time
except Exception as e:
    exception = e
finished = True  # mark the promise as finished before waking waiters
for semaphore in registered_wake_ups:
    semaphore.up()  # may unblock a waiter
```
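Putting the pieces together, here is a minimal runnable sketch of such an extended promise on top of Python's threading module. The names Promise, future, force, register_wake_up and force_any follow the article's usage; everything else (the lock, the daemon thread, the extra done-semaphore for force) is an assumption of this sketch, not a fixed API:

```python
import threading

class Promise:
    """Handle for a computation running in a background thread."""

    def __init__(self, computation):
        self.finished = False
        self.result = None
        self.exception = None
        self._lock = threading.Lock()
        self._wake_ups = []             # semaphores registered by waiters
        self._done = threading.Semaphore(0)
        t = threading.Thread(target=self._run, args=(computation,))
        t.daemon = True                 # don't keep the process alive
        t.start()

    def _run(self, computation):
        try:
            self.result = computation()   # takes a long time
        except Exception as e:
            self.exception = e
        with self._lock:
            self.finished = True
            wake_ups = list(self._wake_ups)
        self._done.release()              # unblock force()
        for semaphore in wake_ups:
            semaphore.release()           # may unblock a waiter

    def register_wake_up(self, semaphore):
        with self._lock:
            if self.finished:
                semaphore.release()       # already done: wake immediately
            else:
                self._wake_ups.append(semaphore)

    def force(self):
        self._done.acquire()
        self._done.release()              # allow force() to be called again
        if self.exception is not None:
            raise self.exception
        return self.result

def future(computation):
    return Promise(computation)

def force_any(promise_list):
    s = threading.Semaphore(0)
    for promise in promise_list:
        promise.register_wake_up(s)
        if promise.finished:
            return promise
    s.acquire()                           # block until some promise wakes us
    for promise in promise_list:
        if promise.finished:
            return promise
```

Registering a semaphore on an already-finished promise releases it immediately, so the race discussed above (a promise finishing before registration) can't deadlock the waiter.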
Hopefully this article showed that waiting for multiple futures isn't difficult to implement, but it needs extended promises.