Agency, Agentic AI and Multi-agent Systems

We explore the concepts of "Agency" and "Agentic AI", highlighting how they differ from each other and from multi-agent systems (MAS), in order to clarify AI development requirements. Clearer requirements can help accelerate the creation of AI applications and products that address real-world challenges.

ℹ️
The terminology in this post is framed within the context of computer science, AI and multi-agent systems.
ℹ️
An agent is a computational entity (refer to Agent and Multi-Agent System).

Introduction

Real-world applications across diverse domains, such as military operations, economic management, agriculture, finance, smart transportation, and smart cities, are increasingly benefiting from advancements in Artificial Intelligence (AI) research, particularly through the development of multi-agent systems (MAS) with AI-driven agents [1, 2, 3]. These domains share common requirements, including the ability to handle uncertainty, support autonomy, and scale effectively, and these needs align closely with the capabilities offered by multi-agent systems. The development of such systems motivates the following research hypothesis:

💡
AI-driven multi-agent systems can address human-defined problems with minimal guidance on how to achieve objectives, requiring less human intervention and enabling more independent execution.

This hypothesis drives the research community to further explore levels of agency in AI, bringing "Agentic AI" to the forefront of both research and industry. However, the definitions of "Agency" and "Agentic AI" remain ambiguous and are often conflated with multi-agent systems. This conflation leads to unclear requirements and unrealistic expectations for AI systems tailored to specific domains. In this post, we aim to clarify the distinctions between "Agency", "Agentic AI", and "MAS" to foster clearer AI development requirements, potentially accelerating the development of AI applications and products that solve real-world problems. Given the broad interpretations of "Agency" and "Agentic AI" across disciplines, we focus specifically on their meanings within the context of computer science, AI, and MAS.

Agency

Figure 1. An overview of agency

Agency, in its general sense, refers to an entity, such as a person, business, or organization, that causes an event on behalf of another person, business, or group. In agency theory, agency is also considered a property of an agent [4], serving as a measure of the range of interactions that an agent can effectively handle [5, 6].

To illustrate the concept of agency intuitively, consider two scenarios where Alice uses a device to set an alarm:

  1. Alice manually sets an alarm each night. Here, the device has no agency in the task.
  2. Alice sets an alarm once, and the device then uses sensory data, such as Alice’s sleep hours and patterns, to automatically set an optimal wake-up time. In this case, the device demonstrates agency in performing the task.

The difference between these scenarios lies in delegation, autonomy, and context awareness. In the second scenario, the device leverages sensory data as contextual information (context awareness) to complete the task autonomously (autonomy) on behalf of Alice (delegation). This further suggests that the level of agency depends in part on the organizational structure of agents, which also involves delegation, autonomy, and context awareness [5, 6]. A higher level of agency can increase system complexity, necessitating advanced self-orchestration and self-maintenance processes (see also Figure 1) [5].
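
To make this distinction concrete, the following Python sketch contrasts the two scenarios. The class names, the sensor fields, and the simple target-sleep-hours rule are illustrative assumptions rather than a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, time, timedelta

# Scenario 1: no agency. Alice decides the wake-up time; the device only stores it.
class ManualAlarm:
    def set_alarm(self, wake_time: time) -> time:
        self.wake_time = wake_time  # the device merely records Alice's choice
        return self.wake_time

# Scenario 2: agency. Alice delegates the goal ("wake me well rested"), and the
# device uses contextual sensory data to act autonomously on her behalf.
@dataclass
class SleepRecord:
    fell_asleep: datetime
    target_sleep_hours: float = 8.0  # assumed preference, learned or configured

class AgencyAlarm:
    def set_alarm(self, record: SleepRecord) -> datetime:
        # Context awareness: read when Alice actually fell asleep.
        # Autonomy: compute the wake-up time without further instruction.
        # Delegation: the decision is made on Alice's behalf.
        return record.fell_asleep + timedelta(hours=record.target_sleep_hours)

if __name__ == "__main__":
    manual = ManualAlarm()
    print(manual.set_alarm(time(7, 0)))       # Alice chose 07:00 herself

    agentic = AgencyAlarm()
    last_night = SleepRecord(fell_asleep=datetime(2024, 1, 1, 23, 30))
    print(agentic.set_alarm(last_night))      # device infers 07:30 the next morning
```

In the first class, Alice retains all of the agency herself; in the second, she delegates the goal and the device decides on her behalf using contextual data.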

Agentic AI

Figure 2. Level of "agenticness" based on level of self-direction, autonomy, and adaptability

An Agentic AI system is one that adapts to a dynamic environment and pursues its goals with minimal specific instruction or human intervention [7, 8]. The system's level of "agenticness" is assessed by factors such as goal complexity, environmental complexity, the system's ability to handle uncertainties, and its degree of autonomy (see also Figure 2) [8].
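
As a rough illustration of how these factors could be recorded and compared, the sketch below encodes them as a simple rubric. The field names, the 0 to 1 scale, and the unweighted average are assumptions made here for illustration; [8] does not prescribe a numeric scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class AgenticnessProfile:
    """Factors from the text, each rated on an assumed 0..1 scale."""
    goal_complexity: float
    environmental_complexity: float
    uncertainty_handling: float
    autonomy: float

    def score(self) -> float:
        # Illustrative aggregate only: a plain, unweighted average.
        parts = (self.goal_complexity, self.environmental_complexity,
                 self.uncertainty_handling, self.autonomy)
        return sum(parts) / len(parts)

# Hypothetical ratings for the two scenarios described later in this post:
ac_scheduler = AgenticnessProfile(0.3, 0.4, 0.4, 0.6)   # scenario 1 below
flight_booker = AgenticnessProfile(0.7, 0.6, 0.6, 0.7)  # scenario 2 below
assert ac_scheduler.score() < flight_booker.score()
```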

An Agentic AI system may comprise a single agent or multiple agents. Since multiple agents often handle environmental dynamics more effectively than a single agent, "Agentic AI" is frequently associated with multi-agent systems.

We assume that an Agentic AI system is created by a system designer to accomplish a set of goals on behalf of the designer, another system, a business, or an organization. Such a system therefore has agency. The following examples illustrate Agentic AI systems:

  1. Bob wants his air conditioner (AC) to turn on and off at intervals that enhance comfort while reducing energy costs. An effective solution would be an Agentic AI system capable of scheduling the AC on Bob's behalf using various sensory data inputs, such as patterns in Bob's comfort preferences, weather forecasts, energy prices, and room temperature, potentially gathered from multiple sources or agents. Based on this information, the Agentic AI system autonomously adjusts the AC settings and can incorporate Bob's feedback over time for continuous optimization (a minimal sketch of this scenario follows this list).
  2. Bob uses a travel website to book a flight that fits within his budget and has the shortest possible duration. To accomplish this, an Agentic AI system performs several tasks: (1) gathering flight information from multiple vendors, (2) filtering flights according to Bob's preferences, (3) presenting an ordered list of suitable flights, (4) learning from Bob’s selections over time, and (5) completing the booking with the chosen vendor. Each of these tasks may involve multiple sub-tasks, adding complexity to the system’s goal and increasing the degree of "agenticness" it demonstrates.
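
Below is a minimal Python sketch of the first scenario. The sensor fields, thresholds, and decision rule are assumptions chosen for illustration; a deployed system would learn Bob's preferences and adapt the rule from his feedback.

```python
from dataclasses import dataclass

@dataclass
class Context:
    room_temp_c: float           # current room temperature
    forecast_high_c: float       # assumed weather-forecast input
    energy_price_per_kwh: float  # assumed real-time tariff input
    preferred_temp_c: float      # learned or configured comfort preference

def decide_ac(ctx: Context, price_ceiling: float = 0.40) -> str:
    """Autonomously decide the AC state on Bob's behalf (delegation + autonomy).

    The rule is illustrative only: cool when the room is warmer than Bob likes
    and energy is not too expensive, pre-cool ahead of a hot day, otherwise stay off.
    """
    too_warm = ctx.room_temp_c > ctx.preferred_temp_c + 1.0
    affordable = ctx.energy_price_per_kwh <= price_ceiling
    hot_day_ahead = ctx.forecast_high_c >= 32.0

    if too_warm and affordable:
        return "on"
    if hot_day_ahead and affordable and ctx.room_temp_c > ctx.preferred_temp_c:
        return "pre-cool"
    return "off"

# Context awareness: the decision uses sensory data rather than a fixed schedule.
print(decide_ac(Context(room_temp_c=27.5, forecast_high_c=34.0,
                        energy_price_per_kwh=0.25, preferred_temp_c=24.0)))  # -> "on"
```

The point is not the rule itself but the shape of the loop: contextual inputs come in, and the system decides and acts on Bob's behalf without him specifying a schedule.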

The difference between these scenarios is the level of "agenticness": it is lower in the first scenario than in the second because the first goal is less complex. Note that high goal complexity can lengthen the time needed to achieve the goal, even with multiple agents. In the second scenario, we assume one agent per task (sketched below), and each agent may need to complete several sub-tasks to accomplish its assigned goal, which can further increase the time required for completion. Therefore, when developing an Agentic AI system, it is advisable to start with a lower degree of "agenticness" and increase it gradually as development progresses.
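
A minimal sketch of the second scenario, with one function standing in for each task-specific agent, is shown below. The vendor names, flight data, and ranking rule are hypothetical placeholders, not a real booking API, and the learning step (task 4) is deliberately stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    vendor: str
    price: float
    duration_h: float

def gather_agent() -> list[Flight]:
    # Task 1: collect offers from multiple vendors (stubbed with static data).
    return [Flight("VendorA", 420.0, 9.5), Flight("VendorB", 380.0, 12.0),
            Flight("VendorC", 450.0, 8.0)]

def filter_agent(flights: list[Flight], budget: float) -> list[Flight]:
    # Task 2: keep only flights within Bob's budget.
    return [f for f in flights if f.price <= budget]

def rank_agent(flights: list[Flight]) -> list[Flight]:
    # Task 3: present the shortest flights first.
    return sorted(flights, key=lambda f: f.duration_h)

def booking_agent(flight: Flight) -> str:
    # Task 5: complete the booking with the chosen vendor (stubbed).
    return f"booked {flight.vendor} at ${flight.price:.0f}"

if __name__ == "__main__":
    candidates = rank_agent(filter_agent(gather_agent(), budget=430.0))
    # Task 4 (learning from Bob's choices over time) is omitted here; the
    # top-ranked option simply stands in for Bob's selection.
    print(booking_agent(candidates[0]))  # -> "booked VendorA at $420"
```

Even in this toy form, each agent could be split into further sub-tasks (pagination, currency conversion, payment), which is how goal complexity, and with it the degree of "agenticness", tends to grow.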

Open Research Questions

  1. Is agency a situational component of AI systems?
  2. What is the relationship between agency and autonomous systems?
  3. Are there scenarios in which an Agentic AI system acts on behalf of no party other than itself?

References

  1. Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y. and Zhao, W.X., 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6), p.186345.
  2. Liu, Y., Lo, S.K., Lu, Q., Zhu, L., Zhao, D., Xu, X., Harrer, S. and Whittle, J., 2024. Agent Design Pattern Catalogue: A Collection of Architectural Patterns for Foundation Model based Agents. arXiv preprint arXiv:2405.10467.
  3. Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N.V., Wiest, O. and Zhang, X., 2024. Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680.
  4. Leslie, A.M., 1993. A theory of agency. Rutgers Univ. Center for Cognitive Science.
  5. Sørensen, M.H. and Ziemke, T., 2007. Agents without agency?. Cognitive Semiotics, 1, pp.102-124.
  6. Dattathrani, S. and De’, R., 2023. The Concept of Agency in the era of Artificial Intelligence: dimensions and degrees. Information Systems Frontiers, 25(1), pp.29-54.
  7. Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M. and Lin, M., 2023, June. Harms from increasingly agentic algorithmic systems. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 651-666).
  8. Shavit, Y., Agarwal, S., Brundage, M., Adler, S., O’Keefe, C., Campbell, R., Lee, T., Mishkin, P., Eloundou, T., Hickey, A. and Slama, K., 2023. Practices for governing agentic AI systems. Research Paper, OpenAI, December.