
The Theorem of Good and Evil

Traditional ethics has functioned for millennia as a set of abstract philosophical axioms, leaving room for subjective interpretations and ambiguity. Through a rigorous analytical system, we can transform these concepts into measurable quantitative variables, where “good” and “evil” become clear directional vectors in a defined topological space.

The mathematics of non-cooperative game theory provides the initial framework for this complex translation. The Nash equilibrium—the point at which all agents refuse to unilaterally change their strategy—describes a state of stability within the human network, though not necessarily the most virtuous one. In this analytical context, good corresponds to a Pareto optimum, a superior state of the system where overall utility reaches its maximum possible value without degrading the condition of any individual. Evil, consequently, becomes a strict deviation from this optimal trajectory, representing a maximization of selfish local utility at the cost of the entire global system’s collapse.
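This gap between stability and virtue is easiest to see in the prisoner's dilemma. The following sketch uses the conventional illustrative payoffs (not values derived from the essay's model) and enumerates all strategy profiles, checking each for the Nash and Pareto properties: mutual defection turns out to be the unique Nash equilibrium, while mutual cooperation is the Pareto-superior outcome.

```python
import itertools

# Conventional prisoner's dilemma payoffs (row player, column player).
# These numbers are illustrative assumptions, chosen so that defection
# dominates individually while mutual cooperation is jointly better.
C, D = 0, 1  # cooperate, defect
payoff = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(profile):
    """True if no player can gain by unilaterally switching strategies."""
    for player in (0, 1):
        for alt in (C, D):
            deviated = list(profile)
            deviated[player] = alt
            if payoff[tuple(deviated)][player] > payoff[profile][player]:
                return False
    return True

def is_pareto_optimal(profile):
    """True if no other profile helps someone without hurting anyone."""
    u = payoff[profile]
    for other in payoff:
        v = payoff[other]
        if all(v[i] >= u[i] for i in (0, 1)) and any(v[i] > u[i] for i in (0, 1)):
            return False
    return True

for p in itertools.product((C, D), repeat=2):
    print(p, "Nash" if is_nash(p) else "-", "Pareto" if is_pareto_optimal(p) else "-")
```

Note how the two properties land on different profiles: (D, D) is stable but inefficient, exactly the distinction the equilibrium/optimum contrast above draws.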

Human interactions organically evolve into a payoff matrix with strict rules and long-term consequences. Good acts as a force of negative entropy, a vector permanently oriented toward structuring, maintaining internal order, and facilitating the flow of data. Evil represents a maximization of informational entropy, accelerating the degradation and structural fragmentation of the community by introducing lies and betrayal as pure forms of white noise.
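The "white noise" framing can be made concrete with Shannon's measure. In this small sketch (the two probability distributions are illustrative assumptions), a structured, truthful signal concentrates probability mass and carries little entropy, while uniform noise attains the maximum of $\log_2 n$ bits:

```python
import numpy as np

def shannon_entropy(p):
    """H(p) = -sum(p_i * log2(p_i)), skipping zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A structured signal concentrates probability mass on one message...
structured = [0.97, 0.01, 0.01, 0.01]
# ...while lies-as-white-noise spread it uniformly over all four messages.
noise = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(structured))  # low: about 0.24 bits
print(shannon_entropy(noise))       # maximal: log2(4) = 2.0 bits
```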

To map these trajectories precisely, we define $S(t)$ as the state of the moral system at time $t$. Its evolution is dictated by the interaction of two fundamental operators: the cooperation gradient $\nabla C(S)$ and the Shannon entropy function $H(S)$. The fundamental equation of ethics takes the following analytical form:

$$\frac{dS}{dt} = \alpha \nabla C(S) - \beta \nabla H(S)$$

In this paradigm, the coefficients $\alpha$ and $\beta$ weight the agent’s direct intent. A high value of the $\beta$ parameter, which describes immediate and irrational consumption, invariably destroys the network’s topological links. To demonstrate the stability of this model, we turn to Lyapunov Stability Theory. By constructing the moral value function $V(S)$ as a strict Lyapunov function, we observe that the internal energy of a system oriented toward “good” asymptotically approaches a minimum of conflict. We formulate the function $V(S) = \frac{1}{2} \sum_{i=1}^{N} (x_i - x^*)^2$, where $x^*$ is the ideal cooperative state.

The derivative of this function with respect to time, $\frac{dV(S)}{dt} < 0$, clearly demonstrates the convergence of moral actions toward a state of lasting harmony and balance. Evil completely reverses the sign of this derivative, generating systemic divergence and an explosion of internal conflict. When the network leaves the region of stability, the fluid dynamics of society degenerate into a completely chaotic force field, requiring a massive injection of rational energy from the outside to prevent thermodynamic collapse.
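The sign condition on $\frac{dV}{dt}$ can be spot-checked numerically. This sketch assumes the same toy gradients used in the simulation later in this piece, $\nabla C = 100 - S$ and $\nabla H = S$, and measures $V$ around the resulting equilibrium $S^*$ (an assumption of the illustration, not a general proof):

```python
# Numerical spot check that V(S) = 0.5 * (S - S_star)**2 decreases
# along the cooperative trajectory dS/dt = alpha*(100 - S) - beta*S.
alpha, beta = 0.3, 0.05
S_star = 100 * alpha / (alpha + beta)  # equilibrium, about 85.7

def dS_dt(S):
    return alpha * (100 - S) - beta * S

def dV_dt(S):
    # By the chain rule: dV/dt = (S - S_star) * dS/dt
    return (S - S_star) * dS_dt(S)

# Away from the equilibrium the derivative is strictly negative,
# so V shrinks and the state converges toward S_star.
print([round(dV_dt(S), 2) for S in (10, 50, 84, 99)])
```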

The time horizon radically alters the mathematical structure of this moral equation. Repeated games with an indeterminate number of rounds activate strategies in which reciprocal altruism decisively wins the competition on an extended time axis. Evil operates exclusively in games with a finite and known number of rounds, relying on the rapid extraction of value before the inevitable collapse of interhuman trust. This phenomenon of evaluating the future is impeccably modeled by Markov Decision Processes (MDPs), where the discount factor $\gamma$ calibrates the weight of future rewards. A value close to zero defines a myopic agent—the archetype of human psychopathy—who totally ignores long-term consequences. In contrast, a value close to one defines heroic sacrifice: the computational power capable of projecting the decision tree to its final ramifications to protect the integrity of the group.
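The effect of $\gamma$ can be sketched in a few lines. The reward streams here are hypothetical: defection pays a one-off 5 and then only 1 per round once trust is gone, while cooperation pays a steady 3. A myopic agent ($\gamma$ near zero) prefers defection; a far-sighted one ($\gamma$ near one) prefers cooperation:

```python
# Discounted return of a reward stream: sum of r_t * gamma**t.
def discounted_return(rewards, gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

horizon = 50
defect = [5] + [1] * (horizon - 1)   # grab value now, live with broken trust
cooperate = [3] * horizon            # steady payoff from reciprocal altruism

for gamma in (0.1, 0.9):
    d = discounted_return(defect, gamma)
    c = discounted_return(cooperate, gamma)
    print(f"gamma={gamma}: defect={d:.2f}, cooperate={c:.2f}")
```

With these assumed streams, the preference flips as $\gamma$ crosses from the myopic to the far-sighted regime, which is the entire content of the discount-factor argument above.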

Acts of destruction or extreme selfishness instantly generate topological singularities on the differentiable manifold of our decision-making space. The immense gravitational tension within an immoral singularity tears the structural fabric of the community, a systemic cost excellently illustrated by the concept of the Price of Anarchy in algorithmic game theory. This metric quantifies the massive loss of efficiency between a virtuous society and a decentralized network trapped in greed.
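The Price of Anarchy has a canonical worked example, Pigou's two-road network (a standard illustration from algorithmic game theory, not specific to this essay): a unit mass of traffic chooses between a fixed-cost road and a congestible one, and selfish routing ends up costing exactly 4/3 of the social optimum.

```python
# Pigou's example: fraction x takes the congestible road (cost = x),
# the remaining 1 - x takes the fixed road (cost = 1).
def average_cost(x):
    return x * x + (1 - x) * 1

# Selfish equilibrium: the congestible road never costs more than 1,
# so every driver takes it (x = 1).
selfish = average_cost(1.0)           # 1.0

# Social optimum: minimize x**2 + (1 - x); setting 2x - 1 = 0 gives x = 1/2.
optimal = average_cost(0.5)           # 0.75

price_of_anarchy = selfish / optimal  # 4/3
print(price_of_anarchy)
```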

Extending the analysis into the fascinating realm of quantum game theory provides the prospect of a superior solution. Quantum entanglement functions as an invisible ethical bond, mathematically demonstrating that an advanced strategy completely eliminates any advantage of betrayal, forcing rational agents to operate in perfect harmony with the fundamental laws of the universe.

To transform this demonstration into a testable instrument, the following Python code simulates the fundamental differential equation $\frac{dS}{dt} = \alpha \nabla C(S) - \beta \nabla H(S)$, tracking the health of the moral system over time based on the balance between altruistic and destructive behaviors.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# The fundamental equation of synthetic ethics
# S:     the harmony state of the system (0 = collapse, 100 = Pareto optimum)
# t:     time
# alpha: cooperation coefficient (Good)
# beta:  entropy / selfishness coefficient (Evil)
def moral_system(S, t, alpha, beta):
    # The cooperation gradient pulls the system toward the Pareto optimum (100);
    # harmony increases in proportion to the remaining potential.
    gradient_C = 100 - S
    # The entropy gradient pushes the system toward degradation;
    # it hits harder the more the system has to lose.
    gradient_H = S
    return (alpha * gradient_C) - (beta * gradient_H)

# Time vector (evolution over 50 time units)
time = np.linspace(0, 50, 200)

# Initial condition: society starts from a neutral state of fragile equilibrium
S0 = 50

# Scenario 1: Good prevails (reciprocal altruism activated)
alpha_good, beta_good = 0.3, 0.05
positive_evolution = odeint(moral_system, S0, time, args=(alpha_good, beta_good))

# Scenario 2: Evil prevails (temptation of immediate gain, discount factor gamma near 0)
alpha_evil, beta_evil = 0.05, 0.4
negative_evolution = odeint(moral_system, S0, time, args=(alpha_evil, beta_evil))

# Generate the visual representation
plt.figure(figsize=(10, 6))
plt.plot(time, positive_evolution, 'g-', linewidth=2, label='Moral Trajectory (Cooperation > Entropy)')
plt.plot(time, negative_evolution, 'r--', linewidth=2, label='Immoral Trajectory (Entropy > Cooperation)')
plt.axhline(100, color='blue', linestyle=':', label='Pareto Optimum (Maximum harmony)')
plt.axhline(0, color='black', linestyle='-', label='Structural Collapse (Maximum white noise)')
plt.title('The Theorem of Good and Evil: Dynamics of Ethical Systems', fontsize=14)
plt.xlabel('Time Horizon (t)', fontsize=12)
plt.ylabel('System State $S(t)$', fontsize=12)
plt.legend(loc='best')
plt.grid(True, alpha=0.3)
plt.show()

Here is the visual interpretation of this code, run on Replit:

[Figure: the two simulated trajectories of $S(t)$ over the time horizon]

About this graphic:

  • Green line (Moral Trajectory): With α = 0.30, β = 0.05, cooperation dominates entropy. The system rises from S₀ = 50 and converges to a stable equilibrium around S ≈ 85.7, approaching (but not quite reaching) the Pareto Optimum.
  • Red dashed line (Immoral Trajectory): With α = 0.05, β = 0.40, entropy overwhelms cooperation. The system collapses from S₀ = 50 down to a degraded equilibrium near S ≈ 11.1.

The equilibrium point in each scenario follows analytically by setting $\frac{dS}{dt} = 0$: with the gradients above, $\alpha(100 - S) = \beta S$, so $S^* = \frac{100\alpha}{\alpha + \beta}$. Good therefore settles at $100 \cdot 0.3 / 0.35 \approx 85.7$, and Evil at $100 \cdot 0.05 / 0.45 \approx 11.1$.

Academic References:

  • Nash, J. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences.
  • Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal.
  • Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.
  • Nicolis, G., & Prigogine, I. (1977). Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations. Wiley.
  • Roughgarden, T. (2005). Selfish Routing and the Price of Anarchy. MIT Press.
  • Bellman, R. (1957). Dynamic Programming. Princeton University Press.