Motivation:

For better or worse, 2025 is poised to be the year of AI agents. Many technologists and experts who evangelize generative AI for solving real-world problems use the term "agents" as a convenient umbrella covering everything from an if-else statement to a self-correcting reasoning agent (also just an if-else engine?). Some academics, however, lament that this ignores decades of research on 'agents', especially in the context of reinforcement learning, and that the meaning of the term is diluted.

On the same note but not quite on the same page, I wanted to take a look at 'Agent-Based Modelling', a technique used primarily in the social sciences for modelling systems built from rule-based interactions between multiple autonomous agents, and at how generative AI technologies like Large Language Models (LLMs) could be useful here.

Introduction:

The field of Agent-Based Modelling (ABM) deals with building computational models that simulate the actions and interactions of autonomous agents (often people), and with studying how changes in individual attributes (say, economic status) can lead to unexpected outcomes in the overall system (the society at large), a phenomenon termed emergence. Here is the Wikipedia article on the topic. In this post, I focus on Schelling’s Model of Segregation, an ABM created by economist Thomas Schelling that demonstrates the unexpected consequences of mild in-group preferences.

How does it work?

The original model consists of an N x N grid. Typically, there are two types of agents, and they occupy spaces in the grid. Spaces that are not occupied by any agent are empty. Each agent desires that at least a certain fraction of the agents in its neighbourhood (the 4 or 8 adjacent cells) belongs to its own group; this desired fraction is denoted $\tau$.
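As a concrete illustration, here is one way to compute an agent's same-group neighbour fraction for the 8-cell (Moore) neighbourhood. The grid encoding (0 for empty, 1 and 2 for the two groups) and the function name are my own choices for this sketch, not part of the original model:

```python
import numpy as np

# Assumed encoding: 0 = empty cell, 1 and 2 = the two agent groups.
def same_group_fraction(grid: np.ndarray, row: int, col: int) -> float:
    """Fraction of occupied Moore-neighbourhood (8-cell) neighbours
    that belong to the same group as the agent at (row, col)."""
    n = grid.shape[0]
    group = grid[row, col]
    same = occupied = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the agent's own cell
            r, c = row + dr, col + dc
            if 0 <= r < n and 0 <= c < n and grid[r, c] != 0:
                occupied += 1
                same += grid[r, c] == group
    # Convention here: an agent with no occupied neighbours counts as satisfied.
    return same / occupied if occupied else 1.0
```

Cells on the grid's edge simply have fewer neighbours; only in-bounds, occupied cells enter the denominator.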

As in most ABM systems, the simulation proceeds in rounds. In each round, every agent checks its neighbourhood to see whether the fraction of neighbours belonging to its own group $(B_a)$ is greater than or equal to the desired fraction $(\tau)$. If this criterion is not met, the agent is classified as unsatisfied, and it moves to a randomly chosen free location in the grid at the end of the round.

The rounds continue until all agents are satisfied, a state that can be considered a stable equilibrium, or until the maximum number of rounds is reached; if that cap is set large enough, hitting it suggests the system never settles and keeps churning.

Here is the pseudocode for the process:

[Image: pseudocode for the simulation loop]

What was discovered?

The landmark observation from the model was that a desired fraction of only about 1/3 was sufficient for a “segregated” population to emerge. This belies the conventional intuition that strong individual preferences are needed to produce strong segregation, making it a classic example of emergent behaviour.
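One simple way to quantify how "segregated" a final grid is: average, over all agents, the share of their occupied neighbours that come from their own group. A run with mild preferences ($\tau \approx 1/3$) typically ends with this index far above $\tau$. The helper below is hypothetical and again assumes the 0/1/2 grid encoding:

```python
import numpy as np

def mean_same_group_share(grid: np.ndarray) -> float:
    """Average over all agents of the same-group share of their
    occupied Moore neighbours: a simple segregation index.
    Assumes 0 = empty cell, nonzero = group id."""
    n = grid.shape[0]
    shares = []
    for r in range(n):
        for c in range(n):
            if grid[r, c] == 0:
                continue  # only agents contribute to the index
            same = occ = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < n and 0 <= cc < n \
                            and grid[rr, cc]:
                        occ += 1
                        same += grid[rr, cc] == grid[r, c]
            if occ:
                shares.append(same / occ)
    return float(np.mean(shares)) if shares else 0.0
```

An index of 1.0 means every agent's neighbours are all from its own group; a well-mixed grid sits near 0.5 for two equal-sized groups.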

Check out the code here for an implementation of the classic version.

Given below is a sample progression of a 15x15 grid with $\tau = 40\%$ over 13 rounds:

[Image: initial state of the grid]