
## Modern Java Recipes: Simple Solutions to Difficult Problems in Java 8 and 9, 1st Edition (ASIN B074R6B13N, ISBN-13: 978-1491973172)

See: Modern Java Recipes: Simple Solutions to Difficult Problems in Java 8 and 9, 1st Edition. Publisher: O’Reilly Media; 1st edition (August 11, 2017).

The introduction of functional programming concepts in Java SE 8 was a drastic change for this venerable object-oriented language. Lambda expressions, method references, and streams fundamentally changed the idioms of the language, and many developers have been trying to catch up ever since. This cookbook will help. With more than 70 detailed recipes, author Ken Kousen shows you how to use the newest features of Java to solve a wide range of problems.

For developers comfortable with previous Java versions, this guide covers nearly all of Java SE 8, and includes a chapter focused on changes coming in Java 9. Need to understand how functional idioms will change the way you write code? This cookbook—chock full of use cases—is for you.

Recipes cover:

• The basics of lambda expressions and method references
• Interfaces in the java.util.function package
• Stream operations for transforming and filtering data
• Comparators and Collectors for sorting and converting streaming data
• Combining lambdas, method references, and streams
• Creating instances and extracting values from Java’s Optional type
• New I/O capabilities that support functional streams
• The Date-Time API that replaces the legacy Date and Calendar classes
• Mechanisms for experimenting with concurrency and parallelism

Ken Kousen is an independent consultant and trainer specializing in Spring, Hibernate, Groovy, and Grails. He holds numerous technical certifications, along with degrees in Mathematics, Mechanical and Aerospace Engineering, and Computer Science.

## Resources

Errata Page: http://oreilly.com/catalog/0636920056669/errata


## Actor model – Actor-based concurrency


The actor model in computer science is a mathematical model of concurrent computation that treats the actor as the universal primitive of concurrent computation. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can affect each other only indirectly through messaging (removing the need for lock-based synchronization).

The actor model originated in 1973.[1] It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.

## History

Main article: History of the Actor model

According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp and Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was “motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network.”[2] Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model.

Following Hewitt, Bishop, and Steiger’s 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research.[3] Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems.[4][5] Other major milestones include William Clinger’s 1981 dissertation introducing a denotational semantics based on power domains[2] and Gul Agha’s 1985 dissertation which further developed a transition-based semantic model complementary to Clinger’s.[6] This resulted in the full development of actor model theory.

Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation.

Research on the actor model has been carried out at the California Institute of Technology, Kyoto University Tokoro Laboratory, Microelectronics and Computer Technology Corporation (MCC), MIT Artificial Intelligence Laboratory, SRI, Stanford University, University of Illinois at Urbana–Champaign,[7] Pierre and Marie Curie University (University of Paris 6), University of Pisa, University of Tokyo Yonezawa Laboratory, Centrum Wiskunde & Informatica (CWI) and elsewhere.

## Fundamental concepts

The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages.

An actor is a computational entity that, in response to a message it receives, can concurrently:

• send a finite number of messages to other actors;
• create a finite number of new actors;
• designate the behavior to be used for the next message it receives.

There is no assumed sequence to the above actions and they could be carried out in parallel.

Decoupling the sender from communications sent was a fundamental advance of the actor model enabling asynchronous communication and control structures as patterns of passing messages.[8]

Recipients of messages are identified by address, sometimes called a “mailing address”. Thus an actor can only communicate with actors whose addresses it has. It can obtain an address from a message it receives, or because the address belongs to an actor it has itself created.

The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.
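The mailbox-plus-behavior structure described above can be sketched in a few lines of Python. This is a hypothetical minimal sketch, not the API of any real actor library: the `Actor` class, the `behavior(actor, message)` signature, and the `None` stop sentinel are all illustrative assumptions.

```python
import queue
import threading

class Actor:
    """Hypothetical minimal actor: a private mailbox serviced by one thread."""

    def __init__(self, behavior):
        self.mailbox = queue.Queue()   # simulates the mailing address/mailbox
        self.behavior = behavior
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        """Asynchronous send: no handshake with the recipient."""
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:        # demo-only stop sentinel
                break
            # The behavior may send messages, create new actors, and designate
            # (by returning) the behavior for the next message.
            next_behavior = self.behavior(self, message)
            if next_behavior is not None:
                self.behavior = next_behavior

results = []

def logging_behavior(actor, message):
    results.append(message)            # a local decision; keeps current behavior

a = Actor(logging_behavior)
a.send("hello")
a.send("world")
a.send(None)
a.thread.join(timeout=1)
print(results)  # prints ['hello', 'world']
```

Because the only point of contact between actors is the mailbox, the behavior itself needs no lock-based synchronization.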

## Formal systems

Over the years, several different formal systems have been developed which permit reasoning about systems in the actor model. These include:

There are also formalisms that are not fully faithful to the actor model in that they do not formalize the guaranteed delivery of messages including the following (See Attempts to relate actor semantics to algebra and linear logic):

## Applications

The actor model can be used as a framework for modeling, understanding, and reasoning about a wide range of concurrent systems. For example:

• Electronic mail (email) can be modeled as an actor system. Accounts are modeled as actors and email addresses as actor addresses.
• Web services can be modeled with Simple Object Access Protocol (SOAP) endpoints modeled as actor addresses.
• Objects with locks (e.g., as in Java and C#) can be modeled as a serializer, provided that their implementations are such that messages can continually arrive (perhaps by being stored in an internal queue). A serializer is an important kind of actor defined by the property that it is continually available to the arrival of new messages; every message sent to a serializer is guaranteed to arrive.
• Testing and Test Control Notation (TTCN), both TTCN-2 and TTCN-3, follows the actor model rather closely. In TTCN an actor is a test component: either a parallel test component (PTC) or the main test component (MTC). Test components can send and receive messages to and from remote partners (peer test components or the test system interface), the latter being identified by its address. Each test component has a behaviour tree bound to it; test components run in parallel and can be dynamically created by parent test components. Built-in language constructs allow the definition of actions to be taken when an expected message is received from the internal message queue, such as sending a message to another peer entity or creating new test components.

## Message-passing semantics

The actor model is about the semantics of message passing.

### Unbounded nondeterminism controversy

Arguably, the first concurrent programs were interrupt handlers. During the course of its normal operation a computer needed to be able to receive information from outside (characters from a keyboard, packets from a network, etc.). When the information arrived, the execution of the computer was interrupted and special code (called an interrupt handler) was invoked to put the information in a data buffer where it could be subsequently retrieved.

In the early 1960s, interrupts began to be used to simulate the concurrent execution of several programs on one processor.[15] Having concurrency with shared memory gave rise to the problem of concurrency control. Originally, this problem was conceived as being one of mutual exclusion on a single computer. Edsger Dijkstra developed semaphores and later, between 1971 and 1973,[16] Tony Hoare[17] and Per Brinch Hansen[18] developed monitors to solve the mutual exclusion problem. However, neither of these solutions provided a programming language construct that encapsulated access to shared resources. This encapsulation was later accomplished by the serializer construct ([Hewitt and Atkinson 1977, 1979] and [Atkinson 1980]).

The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976]; see Event orderings versus global state). Each computational step was from one global state of the computation to the next global state. The global state approach was continued in automata theory for finite-state machines and pushdown stack machines, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts.

Edsger Dijkstra further developed the nondeterministic global state approach. Dijkstra’s model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra’s model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism.

Hewitt argued otherwise: there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability (electronics)).[19] Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received and in the meantime the computer could traverse an unbounded number of states.

The actor model features unbounded nondeterminism, which was captured in a mathematical model by Will Clinger using domain theory.[2] In the actor model, there is no global state.

### Direct communication and asynchrony

Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the “ether” or the “environment”. Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient.

### Actor creation plus addresses in messages means variable topology

A natural development of the actor model was to allow addresses in messages. Influenced by packet switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication.

For example, an actor might need to send a message to a recipient actor from which it later expects to receive a response, but the response will actually be handled by a third actor component that has been configured to receive and handle the response (for example, a different actor implementing the observer pattern). The original actor could accomplish this by sending a communication that includes the message it wishes to send, along with the address of the third actor that will handle the response. This third actor that will handle the response is called the resumption (sometimes also called a continuation or stack frame). When the recipient actor is ready to send a response, it sends the response message to the resumption actor address that was included in the original communication.

So, the ability of actors to create new actors with which they can exchange communications, along with the ability to include the addresses of other actors in messages, gives actors the ability to create and participate in arbitrarily variable topological relationships with one another, much as the objects in Simula and other object-oriented languages may also be relationally composed into variable topologies of message-exchanging objects.
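The resumption pattern described above can be sketched as follows, again using a hypothetical minimal actor (a mailbox plus a thread; none of these names come from a real library). The original communication carries both a value and the address of the third actor that will handle the response.

```python
import queue
import threading

class Actor:
    """Hypothetical minimal actor: a private mailbox serviced by one thread."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.mailbox.put(message)        # asynchronous, fire-and-forget

    def _run(self):
        while True:
            self.behavior(self, self.mailbox.get())

# Recipient: doubles a value, then replies to whatever address the original
# communication named as the resumption.
def doubling(self, message):
    value, resumption = message
    resumption.send(value * 2)

# Third actor, configured in advance to receive and handle the response.
result = []
done = threading.Event()

def collecting(self, message):
    result.append(message)
    done.set()

worker = Actor(doubling)
handler = Actor(collecting)
# The communication carries the value and the resumption's address.
worker.send((21, handler))
done.wait(timeout=1)
print(result)  # prints [42]
```

Note that `worker` never learns who sent the request; it only knows the resumption address included in the communication itself.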

### Inherently concurrent

As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation as explained in actor model theory.

### No requirement on order of message arrival

Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. If output message ordering is desired, then it can be modeled by a queue actor that provides this functionality. Such a queue actor would queue the messages that arrived so that they could be retrieved in FIFO order. So if an actor X sent a message M1 to an actor Y, and later X sent another message M2 to Y, there is no requirement that M1 arrives at Y before M2.

In this respect the actor model mirrors packet switching systems which do not guarantee that packets must be received in the order sent. Not providing the order of delivery guarantee allows packet switching to buffer packets, use multiple paths to send packets, resend damaged packets, and to provide other optimizations.

For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing. Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining. Of course, it is possible to perform the pipeline optimization incorrectly in some implementations, in which case unexpected behavior may occur.
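A queue actor of the kind mentioned above can be sketched as follows, as a hypothetical example on top of a minimal mailbox-and-thread actor; the message shapes ("put", …) and ("get", requester) are illustrative assumptions. It buffers arriving messages and releases them in FIFO order to whoever asks.

```python
import queue
import threading

class Actor:
    """Hypothetical minimal actor: a private mailbox serviced by one thread."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.mailbox.put(message)

    def _run(self):
        while True:
            self.behavior(self, self.mailbox.get())

def make_fifo_queue_behavior():
    buffered = []                          # private to this queue actor
    def behavior(self, message):
        kind, payload = message
        if kind == "put":
            buffered.append(payload)
        elif kind == "get":                # payload is the requester's address
            payload.send(buffered.pop(0))  # release the oldest message first
    return behavior

received = []
done = threading.Event()

def collecting(self, message):
    received.append(message)
    if len(received) == 2:
        done.set()

fifo = Actor(make_fifo_queue_behavior())
consumer = Actor(collecting)
fifo.send(("put", "M1"))                   # M1 sent before M2 ...
fifo.send(("put", "M2"))
fifo.send(("get", consumer))
fifo.send(("get", consumer))
done.wait(timeout=1)
print(received)  # prints ['M1', 'M2']
```

In this sketch Python's `Queue` happens to preserve order anyway; in the actor model proper, the queue actor is what restores FIFO order on top of a transport that guarantees none.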

### Locality

Another important characteristic of the actor model is locality.

Locality means that in processing a message, an actor can send messages only to addresses that it receives in the message, addresses that it already had before it received the message, and addresses for actors that it creates while processing the message. (But see Synthesizing addresses of actors.)

Also locality means that there is no simultaneous change in multiple locations. In this way it differs from some other models of concurrency, e.g., the Petri net model in which tokens are simultaneously removed from multiple locations and placed in other locations.

### Composing actor systems

The idea of composing actor systems into larger ones is an important aspect of modularity that was developed in Gul Agha’s doctoral dissertation,[6] developed later by Gul Agha, Ian Mason, Scott Smith, and Carolyn Talcott.[9]

### Behaviors

A key innovation was the introduction of behavior specified as a mathematical function to express what an actor does when it processes a message, including specifying a new behavior to process the next message that arrives. Behaviors provided a mechanism to mathematically model the sharing in concurrency.

Behaviors also freed the actor model from implementation details, e.g., the Smalltalk-72 token stream interpreter. However, it is critical to understand that the efficient implementation of systems described by the actor model require extensive optimization. See Actor model implementation for details.
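The idea of a behavior as a mathematical function can be sketched with closures: each behavior, applied to a message, returns the behavior for the next message, so state changes without any mutable fields. A hypothetical sketch (all names are illustrative):

```python
import queue
import threading

class Actor:
    """Hypothetical sketch: the behavior is a function from a message to the
    behavior used for the next message."""
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        self.mailbox.put(message)

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:          # demo-only stop sentinel
                break
            self.behavior = self.behavior(message)   # "become" the new behavior

seen = []

def counting(n):
    """A family of behaviors: the count n lives in the closure, so each
    message is handled by a pure function rather than by mutating a field."""
    def behavior(message):
        seen.append((message, n))
        return counting(n + 1)           # designate the next behavior
    return behavior

a = Actor(counting(0))
for m in ["a", "b", "c"]:
    a.send(m)
a.send(None)
a.thread.join(timeout=1)
print(seen)  # prints [('a', 0), ('b', 1), ('c', 2)]
```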

### Modeling other concurrency systems

Other concurrency systems (e.g., process calculi) can be modeled in the actor model using a two-phase commit protocol.[20]

### Computational Representation Theorem

There is a Computational Representation Theorem in the actor model for systems which are closed in the sense that they do not receive communications from outside. The mathematical denotation of a closed system S is constructed from an initial behavior ⊥_S and a behavior-approximating function progression_S. These obtain increasingly better approximations and construct a denotation (meaning) for S as follows [Hewitt 2008; Clinger 1981]:

Denote_S ≡ lim_{i→∞} progression_S^i(⊥_S)

In this way, S can be mathematically characterized in terms of all its possible behaviors (including those involving unbounded nondeterminism). Although Denote_S is not an implementation of S, it can be used to prove a generalization of the Church–Turing–Rosser–Kleene thesis [Kleene 1943]:

A consequence of the above theorem is that a finite actor can nondeterministically respond with an uncountable number of different outputs.

### Relationship to logic programming

One of the key motivations for the development of the actor model was to understand and deal with the control structure issues that arose in the development of the Planner programming language. Once the actor model was initially defined, an important challenge was to understand the power of the model relative to Robert Kowalski’s thesis that “computation can be subsumed by deduction”. Hewitt argued that Kowalski’s thesis turned out to be false for concurrent computation in the actor model (see Indeterminacy in concurrent computation).

Nevertheless, attempts were made to extend logic programming to concurrent computation. However, Hewitt and Agha [1991] claimed that the resulting systems were not deductive in the following sense: computational steps of the concurrent logic programming systems do not follow deductively from previous steps (see Indeterminacy in concurrent computation). Recently, logic programming has been integrated into the actor model in a way that maintains logical semantics.[19]

### Migration

Migration in the actor model is the ability of actors to change locations. E.g., in his dissertation, Aki Yonezawa modeled a post office that customer actors could enter, change locations within while operating, and exit. An actor that can migrate can be modeled by having a location actor that changes when the actor migrates. However, the faithfulness of this modeling is controversial and the subject of research.

### Security

The security of actors can be protected in the following ways:

A delicate point in the actor model is the ability to synthesize the address of an actor. In some cases security can be used to prevent the synthesis of addresses (see Security). However, if an actor address is simply a bit string then clearly it can be synthesized although it may be difficult or even infeasible to guess the address of an actor if the bit strings are long enough. SOAP uses a URL for the address of an endpoint where an actor can be reached. Since a URL is a character string, it can clearly be synthesized although encryption can make it virtually impossible to guess.

Synthesizing the addresses of actors is usually modeled using mapping. The idea is to use an actor system to perform the mapping to the actual actor addresses. For example, the memory structure of a computer can be modeled as an actor system that does the mapping. In the case of SOAP addresses, the mapping is performed by DNS and the rest of the URL-resolution machinery.

### Contrast with other models of message-passing concurrency

Robin Milner’s initial published work on concurrency[21] was also notable in that it was not based on composing sequential processes. His work differed from the actor model because it was based on a fixed number of processes of fixed topology communicating numbers and strings using synchronous communication. The original communicating sequential processes (CSP) model[22] published by Tony Hoare differed from the actor model because it was based on the parallel composition of a fixed number of sequential processes connected in a fixed topology, and communicating using synchronous message-passing based on process names (see Actor model and process calculi history). Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner’s work on the calculus of communicating systems and the π-calculus.

These early models by Milner and Hoare both had the property of bounded nondeterminism. Modern, theoretical CSP ([Hoare 1985] and [Roscoe 2005]) explicitly provides unbounded nondeterminism.

Petri nets and their extensions (e.g., coloured Petri nets) are like actors in that they are based on asynchronous message passing and unbounded nondeterminism, while they are like early CSP in that they define fixed topologies of elementary processing steps (transitions) and message repositories (places).

## Influence

The actor model has been influential on both theory development and practical software development.

### Theory

The actor model has influenced the development of the π-calculus and subsequent process calculi. In his Turing lecture, Robin Milner wrote:[23]

Now, the pure lambda-calculus is built with just two kinds of thing: terms and variables. Can we achieve the same economy for a process calculus? Carl Hewitt, with his actors model, responded to this challenge long ago; he declared that a value, an operator on values, and a process should all be the same kind of thing: an actor.

This goal impressed me, because it implies the homogeneity and completeness of expression … But it was long before I could see how to attain the goal in terms of an algebraic calculus…

So, in the spirit of Hewitt, our first step is to demand that all things denoted by terms or accessed by names—values, registers, operators, processes, objects—are all of the same kind of thing; they should all be processes.

### Practice

The actor model has had extensive influence on commercial practice. For example, Twitter has used actors for scalability.[24] Also, Microsoft has used the actor model in the development of its Asynchronous Agents Library.[25] There are many other actor libraries listed in the actor libraries and frameworks section below.

According to Hewitt [2006], the actor model addresses issues in computer and communications architecture, concurrent programming languages, and Web services including the following:

• Scalability: the challenge of scaling up concurrency both locally and nonlocally.
• Transparency: bridging the chasm between local and nonlocal concurrency. Transparency is currently a controversial issue. Some researchers have advocated a strict separation between local concurrency using concurrent programming languages (e.g., Java and C#) and nonlocal concurrency using SOAP for Web services. Strict separation produces a lack of transparency that causes problems when it is desirable or necessary to change between local and nonlocal access to Web services (see Distributed computing).
• Inconsistency: inconsistency is the norm because all very large knowledge systems about human information system interactions are inconsistent. This inconsistency extends to the documentation and specifications of very large systems (e.g., Microsoft Windows software, etc.), which are internally inconsistent.

Many of the ideas introduced in the actor model are now also finding application in multi-agent systems for these same reasons [Hewitt 2006b 2007b]. The key difference is that agent systems (in most definitions) impose extra constraints upon the actors, typically requiring that they make use of commitments and goals.

## Programming with actors

A number of different programming languages employ the actor model or some variation of it. These languages include:

### Actor libraries and frameworks

Actor libraries or frameworks have also been implemented to permit actor-style programming in languages that don’t have actors built-in. Some of these frameworks are:

## References

1. ^ Hewitt, Carl; Bishop, Peter; Steiger, Richard (1973). “A Universal Modular Actor Formalism for Artificial Intelligence”. IJCAI.
2. a b c d William Clinger (June 1981). “Foundations of Actor Semantics”. Mathematics Doctoral Dissertation. MIT. hdl:1721.1/6935.
3. a b Irene Greif (August 1975). “Semantics of Communicating Parallel Processes”. EECS Doctoral Dissertation. MIT.
4. a b Henry Baker; Carl Hewitt (August 1977). “Laws for Communicating Parallel Processes”. IFIP.
5. ^ “Laws for Communicating Parallel Processes” (PDF). 10 May 1977.
6. a b c Gul Agha (1986). “Actors: A Model of Concurrent Computation in Distributed Systems”. Doctoral Dissertation. MIT Press. hdl:1721.1/6952.
7. ^ “Home”. Osl.cs.uiuc.edu. Archived from the original on 2013-02-22. Retrieved 2012-12-02.
8. ^ Carl Hewitt. Viewing Control Structures as Patterns of Passing Messages Journal of Artificial Intelligence. June 1977.
9. a b Gul Agha; Ian Mason; Scott Smith; Carolyn Talcott (January 1993). “A Foundation for Actor Computation”. Journal of Functional Programming.
10. ^ Carl Hewitt (2006-04-27). “What is Commitment? Physical, Organizational, and Social” (PDF). COIN@AAMAS.
11. ^ Mauro Gaspari; Gianluigi Zavattaro (May 1997). “An Algebra of Actors” (PDF). Formal Methods for Open Object-Based Distributed Systems. Technical Report UBLCS-97-4. University of Bologna. pp. 3–18. doi:10.1007/978-0-387-35562-7_2. ISBN 978-1-4757-5266-3.
12. ^ M. Gaspari; G. Zavattaro (1999). “An Algebra of Actors”. Formal Methods for Open Object Based Systems.
13. ^ Gul Agha; Prasanna Thati (2004). “An Algebraic Theory of Actors and Its Application to a Simple Object-Based Language” (PDF). From OO to FM (Dahl Festschrift) LNCS 2635. Archived from the original (PDF) on 2004-04-20.
14. ^ John Darlington; Y. K. Guo (1994). “Formalizing Actors in Linear Logic”. International Conference on Object-Oriented Information Systems.
15. ^ Hansen, Per Brinch (2002). The Origins of Concurrent Programming: From Semaphores to Remote Procedure Calls. Springer. ISBN 978-0-387-95401-1.
16. ^ Hansen, Per Brinch (1996). “Monitors and Concurrent Pascal: A Personal History”. Communications of the ACM: 121–172.
17. ^ Hoare, Tony (October 1974). “Monitors: An Operating System Structuring Concept”. Communications of the ACM 17 (10): 549–557. doi:10.1145/355620.361161. S2CID 1005769.
18. ^ Hansen, Per Brinch (July 1973). Operating System Principles. Prentice-Hall.
19. a b Hewitt, Carl (2012). “What is computation? Actor Model versus Turing’s Model”. In Zenil, Hector (ed.). A Computable Universe: Understanding Computation & Exploring Nature as Computation. Dedicated to the memory of Alan M. Turing on the 100th anniversary of his birth. World Scientific Publishing Company.
20. ^ Frederick Knabe. A Distributed Protocol for Channel-Based Communication with Choice PARLE 1992.
21. ^ Robin Milner. Processes: A Mathematical Model of Computing Agents in Logic Colloquium 1973.
22. ^ C.A.R. Hoare. Communicating sequential processes. CACM. August 1978.
23. ^ Milner, Robin (1993). “Elements of interaction”. Communications of the ACM 36: 78–89. doi:10.1145/151233.151240.
24. ^ “How Twitter Is Scaling « Waiming Mok’s Blog”. Waimingmok.wordpress.com. 2009-06-27. Retrieved 2012-12-02.
25. ^ “Actor-Based Programming with the Asynchronous Agents Library”. MSDN. September 2010.
26. ^ Henry Lieberman (June 1981). “A Preview of Act 1”. MIT AI memo 625. hdl:1721.1/6350.
27. ^ Henry Lieberman (June 1981). “Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1”. MIT AI memo 626. hdl:1721.1/6351.
28. ^ Jean-Pierre Briot. Acttalk: A framework for object-oriented concurrent programming-design and experience 2nd France-Japan workshop. 1999.
29. ^ Ken Kahn. A Computational Theory of Animation MIT EECS Doctoral Dissertation. August 1979.
30. ^ William Athas and Nanette Boden Cantor: An Actor Programming System for Scientific Computing in Proceedings of the NSF Workshop on Object-Based Concurrent Programming. 1988. Special Issue of SIGPLAN Notices.
31. ^ Darrell Woelk. Developing InfoSleuth Agents Using Rosette: An Actor Based Language Proceedings of the CIKM ’95 Workshop on Intelligent Information Agents. 1995.
32. ^ Dedecker J., Van Cutsem T., Mostinckx S., D’Hondt T., De Meuter W. Ambient-oriented Programming in AmbientTalk. In “Proceedings of the 20th European Conference on Object-Oriented Programming (ECOOP), Dave Thomas (Ed.), Lecture Notes in Computer Science Vol. 4067, pp. 230-254, Springer-Verlag.”, 2006
33. ^ Darryl K. Taft (2009-04-17). “Microsoft Cooking Up New Parallel Programming Language”. Eweek.com. Retrieved 2012-12-02.
34. ^ “Humus”. Dalnefre.com. Retrieved 2012-12-02.
35. ^ Brandauer, Stephan; et al. (2015). “Parallel objects for multicores: A glimpse at the parallel language encore”. Formal Methods for Multicore Programming. Springer International Publishing: 1–56.
36. ^ “The Pony Language”.
37. ^ Clebsch, Sylvan; Drossopoulou, Sophia; Blessing, Sebastian; McNeil, Andy (2015). “Deny capabilities for safe, fast actors”. Proceedings of the 5th International Workshop on Programming Based on Actors, Agents, and Decentralized Control – AGERE! 2015. pp. 1–12. doi:10.1145/2824815.2824816. ISBN 9781450339018. S2CID 415745.
38. ^ “The P Language”. 2019-03-08.
39. ^ “The P# Language”. 2019-03-12.
40. ^ Carlos Varela and Gul Agha (2001). “Programming Dynamically Reconfigurable Open Systems with SALSA”. ACM SIGPLAN Notices. OOPSLA’2001 Intriguing Technology Track Proceedings 36.
41. ^ Philipp Haller and Martin Odersky (September 2006). “Event-Based Programming without Inversion of Control”(PDF). Proc. JMLC 2006.
42. ^ Philipp Haller and Martin Odersky (January 2007). “Actors that Unify Threads and Events” (PDF). Technical report LAMP 2007. Archived from the original (PDF) on 2011-06-07. Retrieved 2007-12-10.
43. ^ “acteur – 0.9.1· David Bonet · Crates.io”. crates.io. Retrieved 2020-04-16.
44. ^ Bulut, Mahmut (2019-12-15). “Bastion on Crates.io”Crates.io. Retrieved 2019-12-15.
45. ^ “actix – 0.10.0· Rob Ede · Crates.io”. crates.io. Retrieved 2021-02-28.
46. ^ “Releases · zakgof/actr · GitHub”. Github.com. Retrieved 2019-04-16.
47. ^ “Akka 2.5.23 Released · Akka”. Akka. 2019-05-21. Retrieved 2019-06-03.
48. ^ Akka.NET v1.4.10 Stable Release GitHub – akkadotnet/akka.net: Port of Akka actors for .NET., Akka.NET, 2020-10-01, retrieved 2020-10-01
49. ^ Srinivasan, Sriram; Alan Mycroft (2008). “Kilim: Isolation-Typed Actors for Java” (PDF). European Conference on Object Oriented Programming ECOOP 2008. Cyprus. Retrieved 2016-02-25.
50. ^ “Releases · kilim/kilim · GitHub”. Github.com. Retrieved 2019-06-03.
51. ^ “Commit History · stevedekorte/ActorKit · GitHub”. Github.com. Retrieved 2016-02-25.
52. ^ “Commit History · haskell-distributed/distributed-process · GitHub”. Github.com. Retrieved 2012-12-02.
53. ^ “Releases · CloudI/CloudI · GitHub”. Github.com. Retrieved 2021-06-21.
54. ^ “Tags · GNOME/clutter · GitLab”. gitlab.gnome.org. Retrieved 2019-06-03.
55. ^ “Releases · ncthbrt/nact · GitHub”. Retrieved 2019-06-03.
56. ^ “Changes – retlang – Message based concurrency in .NET – Google Project Hosting”. Retrieved 2016-02-25.
57. ^ “jetlang-0.2.9-bin.zip – jetlang – jetlang-0.2.9-bin.zip – Message based concurrency for Java – Google Project Hosting”. 2012-02-14. Retrieved 2016-02-25.
58. ^ “GPars Releases”. GitHub. Retrieved 2016-02-25.
59. ^ “Releases · oosmos/oosmos · GitHub”. GitHub. Retrieved 2019-06-03.
60. ^ “Pulsar Design and Actors”. Archived from the originalon 2015-07-04.
61. ^ “Pulsar documentation”. Archived from the original on 2013-07-26.
62. ^ “Changes – Pykka 2.0.0 documentation”. pykka.org. Retrieved 2019-06-03.
63. ^ “Theron – Ashton Mason”. Retrieved 2018-08-29.
64. ^ “Theron – Version 6.00.02 released”. Theron-library.com. Archived from the original on 2016-03-16. Retrieved 2016-02-25.
65. ^ “Theron”. Theron-library.com. Archived from the originalon 2016-03-04. Retrieved 2016-02-25.
66. ^ “Releases · puniverse/quasar · GitHub”. Retrieved 2019-06-03.
67. ^ “Changes – actor-cpp – An implementation of the actor model for C++ – Google Project Hosting”. Retrieved 2012-12-02.
68. ^ “Commit History · s4/s4 · Apache”. apache.org. Archived from the original on 2016-03-06. Retrieved 2016-01-16.
69. ^ “Releases · actor-framework/actor-framework · GitHub”. Github.com. Retrieved 2020-03-07.
70. ^ “celluloid | RubyGems.org | your community gem host”. RubyGems.org. Retrieved 2019-06-03.
71. ^ “Community: Actor Framework, LV 2011 revision (version 3.0.7)”. Decibel.ni.com. 2011-09-23. Retrieved 2016-02-25.
72. ^ “Releases · orbit/orbit · GitHub”. GitHub. Retrieved 2019-06-03.
73. ^ “QP Real-Time Embedded Frameworks & Tools – Browse Files at”. Sourceforge.net. Retrieved 2019-06-03.
74. ^ “Releases · Stiffstream/sobjectizer · GitHub”. GitHub. Retrieved 2019-06-19.
75. ^ “Releases · basiliscos/cpp-rotor· GitHub”. GitHub. Retrieved 2020-10-10.
76. ^ “Releases · dotnet/orleans · GitHub”. GitHub. Retrieved 2019-06-03.
77. ^ “FunctionalJava releases”. GitHub. Retrieved 2018-08-23.

” (WP)

Categories

## Programming Concurrency on the JVM: Mastering Synchronization, STM, and Actors, 1st Edition – B00A32NZEI

See: Programming Concurrency on the JVM: Mastering Synchronization, STM, and Actors, 1st Edition

Categories

## Futures and promises

Not to be confused with Promise theory.

In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.

The term promise was proposed in 1976 by Daniel P. Friedman and David Wise,[1] and Peter Hibbard called it eventual.[2] The somewhat similar concept of a future was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.[3]

The terms future, promise, delay, and deferred are often used interchangeably, although some differences in usage between future and promise are treated below. Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single-assignment container which sets the value of the future. Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, and the promise is the function that sets the value.
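The read/write split described above can be seen in Python's standard library, where a single concurrent.futures.Future object carries both roles: set_result() is the promise (write) side and result() the future (read) side. A minimal sketch of single assignment:

```python
from concurrent.futures import Future

# One Future object carries both roles in Python's standard library:
# set_result() is the "promise" (write) side, result() the "future" (read) side.
f = Future()

# The holder of the write side resolves the future exactly once.
f.set_result(42)

# Holders of the read side can now obtain the value.
print(f.result())  # 42

# A future is single-assignment: resolving it again fails
# (Python 3.8+ raises InvalidStateError).
try:
    f.set_result(43)
    reassigned = True
except Exception:
    reassigned = False
print(reassigned)  # False
```

Languages that distinguish the two sides hand out separate objects for them, as discussed under read-only views below.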

## Applications

Futures and promises originated in functional programming and related paradigms (such as logic programming) to decouple a value (a future) from how it was computed (a promise), allowing the computation to be done more flexibly, notably by parallelizing it. Later, it found use in distributed computing, in reducing the latency from communication round trips. Later still, it gained more use by allowing writing asynchronous programs in direct style, rather than in continuation-passing style.

## Implicit vs. explicit

Use of futures may be implicit (any use of the future automatically obtains its value, as if it were an ordinary reference) or explicit (the user must call a function to obtain the value, such as the get method of java.util.concurrent.Future in Java). Obtaining the value of an explicit future can be called stinging or forcing. Explicit futures can be implemented as a library, whereas implicit futures are usually implemented as part of the language.
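Python's concurrent.futures, for instance, offers explicit futures in this sense: submit() returns a future immediately, and calling result() forces it. A small sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    return n * n

with ThreadPoolExecutor() as pool:
    # submit() returns an explicit future immediately; the computation
    # runs on a pool thread in the background.
    fut = pool.submit(slow_square, 12)

    # "Forcing" the future: result() blocks until the value is ready.
    print(fut.result())  # 144
```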

The original Baker and Hewitt paper described implicit futures, which are naturally supported in the actor model of computation and pure object-oriented programming languages like Smalltalk. The Friedman and Wise paper described only explicit futures, probably reflecting the difficulty of efficiently implementing implicit futures on stock hardware. The difficulty is that stock hardware does not deal with futures for primitive data types like integers. For example, an add instruction does not know how to deal with 3 + future factorial(100000). In pure actor or object languages this problem can be solved by sending future factorial(100000) the message +[3], which asks the future to add 3 to itself and return the result. Note that the message passing approach works regardless of when factorial(100000) finishes computation and that no stinging/forcing is needed.

## Promise pipelining

The use of futures can dramatically reduce latency in distributed systems. For instance, futures enable promise pipelining,[4][5] as implemented in the languages E and Joule, which was also called call-stream[6] in the language Argus.

Consider an expression involving conventional remote procedure calls, such as:

 t3 := ( x.a() ).c( y.b() )


which could be expanded to

 t1 := x.a();
t2 := y.b();
t3 := t1.c(t2);


Each statement needs a message to be sent and a reply received before the next statement can proceed. Suppose, for example, that x, y, t1, and t2 are all located on the same remote machine. In this case, two complete network round-trips to that machine must take place before the third statement can begin to execute. The third statement will then cause yet another round-trip to the same remote machine.

Using futures, the above expression could be written

 t3 := (x <- a()) <- c(y <- b())


which could be expanded to

 t1 := x <- a();
t2 := y <- b();
t3 := t1 <- c(t2);


The syntax used here is that of the language E, where x <- a() means to send the message a() asynchronously to x. All three variables are immediately assigned futures for their results, and execution proceeds to subsequent statements. Later attempts to resolve the value of t3 may cause a delay; however, pipelining can reduce the number of round-trips needed. If, as in the prior example, x, y, t1, and t2 are all located on the same remote machine, a pipelined implementation can compute t3 with one round-trip instead of three. Because all three messages are destined for objects which are on the same remote machine, only one request need be sent and only one response need be received containing the result. The send t1 <- c(t2) would not block even if t1 and t2 were on different machines to each other, or to x or y.

Promise pipelining should be distinguished from parallel asynchronous message passing. In a system supporting parallel message passing but not pipelining, the message sends x <- a() and y <- b() in the above example could proceed in parallel, but the send of t1 <- c(t2) would have to wait until both t1 and t2 had been received, even when x, y, t1, and t2 are on the same remote machine. The relative latency advantage of pipelining becomes even greater in more complicated situations involving many messages.
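True promise pipelining requires runtime support for batching remote sends, but its non-blocking flavor can be sketched with callback chaining on ordinary futures. The helper then() below is hypothetical (not a standard API); it registers the dependent call before the input future resolves, so the caller never blocks:

```python
from concurrent.futures import Future

def then(fut, fn):
    """Return a future for fn(value), wired up before fut resolves --
    analogous to sending a message to a promise (a sketch, not true pipelining)."""
    out = Future()
    def cb(f):
        try:
            out.set_result(fn(f.result()))
        except Exception as e:
            out.set_exception(e)
    fut.add_done_callback(cb)
    return out

t1 = Future()
t3 = then(t1, lambda v: v + 1)   # the dependent call is queued immediately

t1.set_result(10)                # resolving t1 triggers the chained step
print(t3.result())  # 11
```

Note that this only removes blocking on the client side; unlike E-style pipelining, the chained call still runs locally after the result has crossed the network.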

Promise pipelining also should not be confused with pipelined message processing in actor systems, where it is possible for an actor to specify and begin executing a behaviour for the next message before having completed processing of the current message.

In some programming languages such as Oz, E, and AmbientTalk, it is possible to obtain a read-only view of a future, which allows reading its value when resolved, but does not permit resolving it:

• In Oz, the !! operator is used to obtain a read-only view.
• In E and AmbientTalk, a future is represented by a pair of values called a promise/resolver pair. The promise represents the read-only view, and the resolver is needed to set the future’s value.
• In C++11 a std::future provides a read-only view. The value is set directly by using a std::promise, or set to the result of a function call using std::packaged_task or std::async.
• In the Dojo Toolkit’s Deferred API as of version 1.5, a consumer-only promise object represents a read-only view.[7]
• In Alice ML, futures provide a read-only view, whereas a promise contains both a future and the ability to resolve the future.[8][9]
• In .NET Framework 4.0 System.Threading.Tasks.Task<T> represents a read-only view. Resolving the value can be done via System.Threading.Tasks.TaskCompletionSource<T>.

Support for read-only views is consistent with the principle of least privilege, since it enables the ability to set the value to be restricted to subjects that need to set it. In a system that also supports pipelining, the sender of an asynchronous message (with result) receives the read-only promise for the result, and the target of the message receives the resolver.
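An E-style promise/resolver pair can be sketched over Python's concurrent.futures; the ReadOnlyFuture wrapper and make_promise_pair helper here are illustrative names, not a standard API:

```python
from concurrent.futures import Future

class ReadOnlyFuture:
    """A view exposing only the read side of an underlying future."""
    def __init__(self, fut):
        self._fut = fut
    def result(self, timeout=None):
        return self._fut.result(timeout)

def make_promise_pair():
    fut = Future()
    resolver = fut.set_result        # the write capability, handed out separately
    return ReadOnlyFuture(fut), resolver

view, resolve = make_promise_pair()
resolve("done")                      # only the resolver holder can set the value
print(view.result())  # done

# The view carries no write capability, in line with least privilege.
print(hasattr(view, "set_result"))  # False
```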

Some languages, such as Alice ML, define futures that are associated with a specific thread that computes the future’s value.[9] This computation can start either eagerly when the future is created, or lazily when its value is first needed. A lazy future is similar to a thunk, in the sense of a delayed computation.

Alice ML also supports futures that can be resolved by any thread, and calls these promises.[8] This use of promise is different from its use in E as described above. In Alice, a promise is not a read-only view, and promise pipelining is unsupported. Instead, pipelining naturally happens for futures, including ones associated with promises.

## Blocking vs non-blocking semantics

If the value of a future is accessed asynchronously, for example by sending a message to it, or by explicitly waiting for it using a construct such as when in E, then there is no difficulty in delaying until the future is resolved before the message can be received or the wait completes. This is the only case to be considered in purely asynchronous systems such as pure actor languages.

However, in some systems it may also be possible to attempt to immediately or synchronously access a future’s value. Then there is a design choice to be made:

• the access could block the current thread or process until the future is resolved (possibly with a timeout). This is the semantics of dataflow variables in the language Oz.
• the attempted synchronous access could always signal an error, for example throwing an exception. This is the semantics of remote promises in E.[10]
• potentially, the access could succeed if the future is already resolved, but signal an error if it is not. This would have the disadvantage of introducing nondeterminism and the potential for race conditions, and seems to be an uncommon design choice.

As an example of the first possibility, in C++11, a thread that needs the value of a future can block until it is available by calling the wait() or get() member functions. A timeout can also be specified on the wait using the wait_for() or wait_until() member functions to avoid indefinite blocking. If the future arose from a call to std::async, then a blocking wait (without a timeout) may cause synchronous invocation of the function to compute the result on the waiting thread.
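The same blocking-with-timeout choice exists in Python's concurrent.futures, where result() accepts an optional timeout; a sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow():
    time.sleep(0.5)
    return "ready"

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow)
    # A bounded wait: result(timeout=...) raises TimeoutError if the
    # future is not resolved in time, instead of blocking indefinitely.
    try:
        fut.result(timeout=0.05)
    except TimeoutError:
        print("not ready yet")
    print(fut.result())  # blocks until resolved: ready
```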

## Related constructs

A future is a particular case of an event (synchronization primitive) that can be completed only once. In general, an event can be reset to its initial empty state and thus completed as many times as desired.[11]

An I-var (as in the language Id) is a future with blocking semantics as defined above. An I-structure is a data structure containing I-vars. A related synchronization construct that can be set multiple times with different values is called an M-var. M-vars support atomic operations to take or put the current value, where taking the value also sets the M-var back to its initial empty state.[12]
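An M-var's take/put behaviour can be sketched as a one-slot container guarded by a condition variable; the MVar class below is an illustrative sketch, not Haskell's actual implementation:

```python
import threading

class MVar:
    """A sketch of an M-var: a one-slot container whose take() empties it."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._cond:
            while self._full:            # block while the slot is occupied
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()

    def take(self):
        with self._cond:
            while not self._full:        # block while the slot is empty
                self._cond.wait()
            value, self._value, self._full = self._value, None, False
            self._cond.notify_all()
            return value

box = MVar()
box.put(7)
print(box.take())  # 7 -- the slot is empty again and can be refilled
box.put(8)
print(box.take())  # 8
```

Unlike a future, the slot here is reusable: taking the value atomically resets it to empty.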

A concurrent logic variable[citation needed] is similar to a future, but is updated by unification, in the same way as logic variables in logic programming. Thus it can be bound more than once to unifiable values, but cannot be set back to an empty or unresolved state. The dataflow variables of Oz act as concurrent logic variables, and also have blocking semantics as mentioned above.

A concurrent constraint variable is a generalization of concurrent logic variables to support constraint logic programming: the constraint may be narrowed multiple times, indicating smaller sets of possible values. Typically there is a way to specify a thunk that should run whenever the constraint is narrowed further; this is needed to support constraint propagation.

## Relations between the expressiveness of different forms of future

Eager thread-specific futures can be straightforwardly implemented in non-thread-specific futures, by creating a thread to calculate the value at the same time as creating the future. In this case it is desirable to return a read-only view to the client, so that only the newly created thread is able to resolve this future.

To implement implicit lazy thread-specific futures (as provided by Alice ML, for example) in terms of non-thread-specific futures requires a mechanism to determine when the future’s value is first needed (for example, the WaitNeeded construct in Oz[13]). If all values are objects, then the ability to implement transparent forwarding objects is sufficient, since the first message sent to the forwarder indicates that the future’s value is needed.

Non-thread-specific futures can be implemented in thread-specific futures, assuming that the system supports message passing, by having the resolving thread send a message to the future’s own thread. However, this can be viewed as unneeded complexity. In programming languages based on threads, the most expressive approach seems to be to provide a mix of non-thread-specific futures, read-only views, and either a WaitNeeded construct, or support for transparent forwarding.

## Evaluation strategy

Further information: Call by future

The evaluation strategy of futures, which may be termed call by future, is non-deterministic: the value of a future will be evaluated at some time between when the future is created and when its value is used, but the precise time is not determined beforehand and can change from run to run. The computation can start as soon as the future is created (eager evaluation) or only when the value is actually needed (lazy evaluation), and may be suspended part-way through, or executed in one run. Once the value of a future is assigned, it is not recomputed on future accesses; this is like the memoization used in call by need.

A lazy future is a future that deterministically has lazy evaluation semantics: the computation of the future’s value starts when the value is first needed, as in call by need. Lazy futures are of use in languages whose evaluation strategy is not lazy by default. For example, in C++11 such lazy futures can be created by passing the std::launch::deferred launch policy to std::async, along with the function to compute the value.
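A lazy future with call-by-need memoization can be sketched in a few lines; LazyFuture here is an illustrative name, and this sketch is not thread-safe:

```python
class LazyFuture:
    """A sketch of a lazy future: the thunk runs on first access and the
    result is memoized, as in call by need."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._done = False
        self._value = None

    def result(self):
        if not self._done:
            self._value = self._thunk()   # computed only when first needed
            self._done = True
            self._thunk = None            # drop the thunk; never recomputed
        return self._value

calls = []
lf = LazyFuture(lambda: calls.append("run") or 6 * 7)
print(len(calls))     # 0 -- nothing computed yet
print(lf.result())    # 42
print(lf.result())    # 42 -- memoized, the thunk ran exactly once
print(len(calls))     # 1
```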

## Semantics of futures in the actor model

In the actor model, an expression of the form future <Expression> is defined by how it responds to an Eval message with environment E and customer C as follows: The future expression responds to the Eval message by sending the customer C a newly created actor F (the proxy for the response of evaluating <Expression>) as a return value concurrently with sending <Expression> an Eval message with environment E and customer C. The default behavior of F is as follows:

• When F receives a request R, then it checks to see if it has already received a response (that can either be a return value or a thrown exception) from evaluating <Expression> proceeding as follows:
1. If it already has a response V, then
• If V is a return value, then it is sent the request R.
• If V is an exception, then it is thrown to the customer of the request R.
2. If it does not already have a response, then R is stored in the queue of requests inside F.
• When F receives the response V from evaluating <Expression>, then V is stored in F and
• If V is a return value, then all of the queued requests are sent to V.
• If V is an exception, then it is thrown to the customer of each of the queued requests.

However, some futures can deal with requests in special ways to provide greater parallelism. For example, the expression 1 + future factorial(n) can create a new future that will behave like the number 1+factorial(n). This trick does not always work. For example, the following conditional expression:

 if m > future factorial(n) then print("bigger") else print("smaller")

suspends until the future for factorial(n) has responded to the request asking if m is greater than itself.
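The default behaviour of F described above (queue requests until the response arrives, then forward them) can be sketched as follows; FutureProxy is an illustrative name, and the exception path is left out for brevity:

```python
class FutureProxy:
    """A sketch of the actor-model behaviour of F: requests received before
    the response arrives are queued; once resolved, they are forwarded."""
    def __init__(self):
        self._response = None
        self._resolved = False
        self._pending = []   # queued requests (callables taking the value)

    def send(self, request):
        if self._resolved:
            request(self._response)        # forward to the value immediately
        else:
            self._pending.append(request)  # queue until resolution

    def resolve(self, value):
        self._response, self._resolved = value, True
        for request in self._pending:      # flush the queue to the value
            request(value)
        self._pending.clear()

log = []
f = FutureProxy()
f.send(lambda v: log.append(v + 1))   # arrives before the response: queued
f.resolve(10)                         # queued request now runs, logging 11
f.send(lambda v: log.append(v * 2))   # arrives after: runs at once, logging 20
print(log)  # [11, 20]
```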

## History

The future and/or promise constructs were first implemented in programming languages such as MultiLisp and Act 1. The use of logic variables for communication in concurrent logic programming languages was quite similar to futures. These began in Prolog with Freeze and IC Prolog, and became a true concurrency primitive with Relational Language, Concurrent Prolog, guarded Horn clauses (GHC), Parlog, Strand, Vulcan, Janus, Oz-Mozart, Flow Java, and Alice ML. The single-assignment I-var from dataflow programming languages, originating in Id and included in Reppy’s Concurrent ML, is much like the concurrent logic variable.

The promise pipelining technique (using futures to overcome latency) was invented by Barbara Liskov and Liuba Shrira in 1988,[6] and independently by Mark S. Miller, Dean Tribble and Rob Jellinghaus in the context of Project Xanadu circa 1989.[14]

The term promise was coined by Liskov and Shrira, although they referred to the pipelining mechanism by the name call-stream, which is now rarely used.

Both the design described in Liskov and Shrira’s paper, and the implementation of promise pipelining in Xanadu, had the limit that promise values were not first-class: an argument to, or the value returned by a call or send could not directly be a promise (so the example of promise pipelining given earlier, which uses a promise for the result of one send as an argument to another, would not have been directly expressible in the call-stream design or in the Xanadu implementation). It seems that promises and call-streams were never implemented in any public release of Argus,[15] the programming language used in the Liskov and Shrira paper. Argus development stopped around 1988.[16] The Xanadu implementation of promise pipelining only became publicly available with the release of the source code for Udanax Gold[17] in 1999, and was never explained in any published document.[18] The later implementations in Joule and E support fully first-class promises and resolvers.

Several early actor languages, including the Act series,[19][20] supported both parallel message passing and pipelined message processing, but not promise pipelining. (Although it is technically possible to implement the last of these features in the first two, there is no evidence that the Act languages did so.)

After 2000, a major revival of interest in futures and promises occurred, due to their use in responsiveness of user interfaces, and in web development, due to the request–response model of message-passing. Several mainstream languages now have language support for futures and promises, most notably popularized by FutureTask in Java 5 (announced 2004)[21] and the async and await constructions in .NET 4.5 (announced 2010, released 2012)[22][23] largely inspired by the asynchronous workflows of F#,[24] which dates to 2007.[25] This has subsequently been adopted by other languages, notably Dart (2014),[26] Python (2015),[27] Hack (HHVM), and drafts of ECMAScript 7 (JavaScript), Scala, and C++.

## List of implementations

Some programming languages support futures, promises, concurrent logic variables, dataflow variables, or I-vars, either through direct language support or through the standard library.

### List of concepts related to futures and promises by programming language

Languages also supporting promise pipelining include E and Joule, as noted above.

### Coroutines

Futures can be implemented in coroutines[27] or generators,[99] resulting in the same evaluation strategy (e.g., cooperative multitasking or lazy evaluation).
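In Python's asyncio, for example, awaiting a future suspends a coroutine cooperatively until a producer coroutine resolves it, with no OS thread blocked in the meantime; a sketch:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()          # an asyncio future

    async def producer():
        await asyncio.sleep(0.01)
        fut.set_result("hello")         # the promise side, resolved from a coroutine

    # Awaiting the future suspends main() cooperatively until it is
    # resolved; the event loop runs producer() in the meantime.
    asyncio.ensure_future(producer())
    return await fut

result = asyncio.run(main())
print(result)  # hello
```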

### Channels

Main article: Channel (programming)

Futures can easily be implemented in channels: a future is a one-element channel, and a promise is a process that sends to the channel, fulfilling the future.[100][101] This allows futures to be implemented in concurrent programming languages with support for channels, such as CSP and Go. The resulting futures are explicit, as they must be accessed by reading from the channel, rather than only evaluation.
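This encoding can be sketched using Python's thread-safe queues; the helper names are illustrative, and the put-back in force() is a simplification that makes the value re-readable:

```python
import queue
import threading

def make_channel_future():
    """A future as a one-element channel: the promise is whoever sends on it."""
    chan = queue.Queue(maxsize=1)
    def fulfil(value):
        chan.put(value)          # the promise side: send on the channel
    def force():
        value = chan.get()       # the future side: block until a value arrives
        chan.put(value)          # put it back so later reads also succeed
        return value
    return fulfil, force

fulfil, force = make_channel_future()
threading.Thread(target=fulfil, args=(99,)).start()
print(force())  # 99
print(force())  # 99 -- re-readable because the value is replaced after each get
```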

## References

1. ^ Friedman, Daniel; David Wise (1976). The Impact of Applicative Programming on Multiprocessing. International Conference on Parallel Processing. pp. 263–272.
2. ^ Hibbard, Peter (1976). Parallel Processing Facilities. New Directions in Algorithmic Languages, (ed.) Stephen A. Schuman, IRIA, 1976.
3. ^ Henry Baker; Carl Hewitt (August 1977). The Incremental Garbage Collection of Processes. Proceedings of the Symposium on Artificial Intelligence Programming Languages. ACM SIGPLAN Notices 12, 8. pp. 55–59.
4. ^ Promise Pipelining at erights.org
5. ^ Promise Pipelining on the C2 wiki
6. a b Barbara Liskov; Liuba Shrira (1988). “Promises: Linguistic Support for Efficient Asynchronous Procedure Calls in Distributed Systems”. Proceedings of the SIGPLAN ’88 Conference on Programming Language Design and Implementation; Atlanta, Georgia, United States. ACM. pp. 260–267. doi:10.1145/53990.54016. ISBN 0-89791-269-1. Also published in ACM SIGPLAN Notices 23(7).
7. ^ Robust promises with Dojo deferred, Site Pen, 3 May 2010
8. a b “Promise”, Alice Manual, DE: Uni-SB
9. a b “Future”, Alice Manual, DE: Uni-SB
10. ^ Promise, E rights
11. ^ 500 Lines or Less, “A Web Crawler With asyncio Coroutines” by A. Jesse Jiryu Davis and Guido van Rossum, which notes: “implementation uses an asyncio.Event in place of the Future shown here. The difference is an Event can be reset, whereas a Future cannot transition from resolved back to pending.”
12. ^ Control Concurrent MVar, Haskell, archived from the original on 18 April 2009
13. ^ WaitNeeded, Mozart Oz
14. ^ Promise, Sunless Sea, archived from the original on 23 October 2007
15. ^ Argus, MIT
16. ^ Liskov, Barbara, Distributed computing and Argus, Oral history, IEEE GHN
17. ^ Gold, Udanax, archived from the original on 11 October 2008
18. ^ Pipeline, E rights
19. ^ Henry Lieberman (June 1981). “A Preview of Act 1”. MIT AI memo 625.
20. ^ Henry Lieberman (June 1981). “Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1”. MIT AI memo 626.
21. ^ Goetz, Brian (23 November 2004). “Concurrency in JDK 5.0”.
22. a b “Async in 4.5: Worth the Await – .NET Blog – Site Home – MSDN Blogs”. Blogs.msdn.com. Retrieved 13 May 2014.
23. a b c “Asynchronous Programming with Async and Await (C# and Visual Basic)”. Msdn.microsoft.com. Retrieved 13 May 2014.
24. ^ Tomas Petricek (29 October 2010). “Asynchronous C# and F# (I.): Simultaneous introduction”.
25. ^ Don Syme; Tomas Petricek; Dmitry Lomov (21 October 2010). “The F# Asynchronous Programming Model, PADL 2011”.
26. a b Gilad Bracha (October 2014). “Dart Language Asynchrony Support: Phase 1”.
27. a b “PEP 0492 – Coroutines with async and await syntax”.
28. ^ Kenjiro Taura; Satoshi Matsuoka; Akinori Yonezawa (1994). “ABCL/f: A Future-Based Polymorphic Typed Concurrent Object-Oriented Language – Its Design and Implementation.”. In Proceedings of the DIMACS workshop on Specification of Parallel Algorithms, number 18 in Dimacs Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society. pp. 275–292. CiteSeerX 10.1.1.23.1161.
29. ^ “Dart SDK dart async Completer”.
31. ^ Steve Dekorte (2005). “Io, The Programming Language”.
32. ^ “Using promises”. Mozilla Developer Network. Retrieved 23 February 2021.
33. ^ “Making asynchronous programming easier with async and await”. Mozilla Developer Network. Retrieved 23 February 2021.
34. ^ Rich Hickey (2009). “changes.txt at 1.1.x from richhickey’s clojure”.
35. ^ Seif Haridi; Nils Franzen. “Tutorial of Oz”. Mozart Global User Library. Retrieved 12 April 2011.
36. ^ Python 3.2 Release
37. ^ Python 3.5 Release
38. ^ “Parallelism with Futures”. PLT. Retrieved 2 March 2012.
39. ^ Promise class in Perl 6
40. ^ Common Lisp Blackbird
41. ^ Common Lisp Eager Future2
42. ^ Lisp in parallel – A parallel programming library for Common Lisp
43. ^ Common Lisp PCall
44. ^ “Chapter 30. Thread 4.0.0”. Retrieved 26 June 2013.
45. ^ “Dlib C++ Library #thread_pool”. Retrieved 26 June 2013.
46. ^ “GitHub – facebook/folly: An open-source C++ library developed and used at Facebook”. 8 January 2019.
47. ^ “HPX”. 10 February 2019.
48. ^ “Threads Slides of POCO” (PDF).
49. ^ “QtCore 5.0: QFuture Class”. Qt Project. Archived from the original on 1 June 2013. Retrieved 26 June 2013.
50. ^ “Seastar”. Seastar project. Retrieved 22 August 2016.
51. ^ “stlab is the ongoing work of what was Adobe’s Software Technology Lab. The Adobe Source Libraries (ASL), Platform Libraries, and new stlab libraries are hosted on github”. 31 January 2021.
52. ^ Groovy GPars Archived 12 January 2013 at the Wayback Machine
53. ^ Cujo.js
54. ^ JavaScript when.js
55. ^ Promises/A+ specification
56. ^ promises
57. ^ JavaScript MochKit.Async
58. ^ JavaScript Angularjs
59. ^ JavaScript node-promise
60. ^ JavaScript Q
61. ^ JavaScript RSVP.js
62. ^ YUI JavaScript class library
63. ^ YUI JavaScript promise class
64. ^ JavaScript Bluebird
65. ^ Java JDeferred
66. ^ Java ParSeq
67. ^ Objective-C MAFuture GitHub
68. ^ Objective-C MAFuture mikeash.com
69. ^ Objective-C RXPromise
70. ^ ObjC-CollapsingFutures
71. ^ Objective-C PromiseKit
72. ^ Objective-C objc-promise
73. ^ Objective-C OAPromise
74. ^ OCaml Lazy
75. ^ Perl Future
76. ^ Perl Promises
77. ^ Perl Reflex
78. ^ Perl Promise::ES6
79. ^ “Promise::XS – Fast promises in Perl – metacpan.org”. metacpan.org. Retrieved 14 February 2021.
80. ^ PHP React/Promise
81. ^ Python built-in implementation
82. ^ pythonfutures
83. ^ Twisted Deferreds
84. ^ R package future
85. ^ future
86. ^ Ruby Promise gem
87. ^ Ruby libuv
88. ^ Ruby Celluloid gem
89. ^ Ruby future-resource
90. ^ futures-rs crate
92. ^ Swift Async
93. ^ Swift FutureKit
94. ^ Swift Apple GCD
95. ^ Swift FutureLib
96. ^ bignerdranch/Deferred
97. ^ Thomvis/BrightFutures
98. ^ tcl-promise
99. ^ Does async/await solve a real problem?
100. ^ Go language patterns Futures
101. ^ Go Language Patterns