
Monte Carlo

Monte Carlo methods are a class of computational algorithms that rely on random sampling to obtain numerical results. They are widely used in physics, chemistry, and computer science to solve problems that are difficult or impossible to treat analytically. In atomistic simulation, unlike molecular dynamics (MD), which solves the equations of motion of the particles over time, Monte Carlo simulations rely on random (stochastic) sampling to explore the configuration space of a system.

Figure: Example of a Monte Carlo simulation estimating the value of π using random sampling. The right panel shows the convergence of the estimated value of π as a function of the number of samples.

A simple example of a Monte Carlo simulation is the estimation of the value of π using random sampling. The idea is to randomly sample points in a square and count the fraction of points that fall within a circle inscribed in the square (equivalently, a quarter circle in the unit square). Since the ratio of the area of the quarter circle to the area of the square is π/4, four times the fraction of points falling inside the circle gives an estimate of π. As the number of random samples increases, the estimate converges to the true value.
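The Python sketch below is a minimal version of this estimator, assuming NumPy is available; the function name `estimate_pi`, the seed, and the sample counts are illustrative choices, not part of the original text.

```python
import numpy as np

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square."""
    rng = np.random.default_rng(seed)
    # Draw (x, y) pairs uniformly in [0, 1) x [0, 1)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    # Points inside the quarter circle of radius 1 centered at the origin
    inside = (x**2 + y**2) <= 1.0
    # Area ratio (quarter circle / square) = pi/4, so multiply the fraction by 4
    return 4.0 * inside.mean()

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

The estimate fluctuates from run to run; its statistical error decreases roughly as one over the square root of the number of samples, which is what the convergence plot in the figure illustrates.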

Metropolis Algorithm

In principle, you could randomly sample the whole phase space to calculate the properties of a system. In practice this is not feasible because of the vast number of possible configurations: a system with 100 particles has on the order of 10^150 of them, an astronomically large number. Most of these configurations have a very low probability (low Boltzmann factor) and contribute little to the properties of the system. We therefore need a method that samples phase space efficiently, so that accurate statistical averages and thermodynamic properties can be obtained.

The Metropolis algorithm is a Monte Carlo method that samples phase space efficiently by accepting or rejecting proposed moves according to a probability criterion. It is widely used in Monte Carlo simulations to generate a representative set of configurations from which ensemble averages can be calculated.

The Metropolis algorithm consists of the following steps:

  1. Initialization: Start with an initial configuration of the system.

  2. Propose a Move: Randomly select a particle and propose a move to a new position.

  3. Calculate Energy Change: Calculate the change in energy, ΔE, resulting from the proposed move.

  4. Acceptance Criterion: Accept the move with a probability given by the Metropolis criterion:

    P(\text{accept}) = \begin{cases} 1 & \text{if } \Delta E \leq 0 \\ \exp(-\Delta E / k_B T) & \text{if } \Delta E > 0 \end{cases}

    where k_B is the Boltzmann constant and T is the temperature. The energy of each configuration can be evaluated with a force field.

  5. Update Configuration: If the move is accepted, update the configuration of the system; if it is rejected, keep the current configuration (it is counted again in the averages). Steps 2-5 are repeated until enough configurations have been sampled.

In general, the Metropolis algorithm allows you to explore the configuration space of a system by accepting moves that lower the energy of the system and occasionally accepting moves that increase the energy. This stochastic process ensures that the system samples configurations according to their Boltzmann weights and allows you to calculate ensemble averages and thermodynamic properties.
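A minimal Python sketch of these steps is shown below, applied to a single particle in a one-dimensional harmonic potential as a stand-in for a force field; the function names, step size, and kT value are illustrative assumptions, not part of the algorithm itself.

```python
import numpy as np

def metropolis(energy_fn, x0, n_steps, step_size, kT, seed=0):
    """Sample configurations of a 1D system with the Metropolis algorithm.

    energy_fn : callable returning the potential energy of a configuration
    x0        : initial configuration (a single coordinate, for simplicity)
    step_size : maximum displacement of a trial move
    kT        : Boltzmann constant times temperature, in the energy units of energy_fn
    """
    rng = np.random.default_rng(seed)
    x, e = x0, energy_fn(x0)
    samples = []
    n_accepted = 0
    for _ in range(n_steps):
        # Propose a move: random displacement within [-step_size, step_size]
        x_new = x + rng.uniform(-step_size, step_size)
        e_new = energy_fn(x_new)
        delta_e = e_new - e
        # Metropolis criterion: always accept downhill moves,
        # accept uphill moves with probability exp(-dE / kT)
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / kT):
            x, e = x_new, e_new
            n_accepted += 1
        # The current (possibly unchanged) configuration counts toward averages
        samples.append(x)
    return np.array(samples), n_accepted / n_steps

# Harmonic potential as a toy "force field": E(x) = 0.5 * k * x^2
harmonic = lambda x: 0.5 * 1.0 * x**2

samples, acc = metropolis(harmonic, x0=0.0, n_steps=50_000, step_size=1.0, kT=1.0)
print(f"acceptance ratio: {acc:.2f}, <E>: {harmonic(samples).mean():.2f}")  # ~0.5*kT expected
```

In practice the step size is tuned so that neither almost all nor almost no trial moves are accepted; too small a step explores the configuration space slowly, while too large a step is rarely accepted.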

At higher temperatures, the system is more likely to accept moves that increase the energy, allowing it to explore a wider range of configurations. At low temperatures, such moves are rarely accepted, leading to a more focused exploration of the low-energy configurations.
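As a small illustration of this temperature dependence, assuming an energy-raising trial move with ΔE = 0.1 eV (an arbitrary value chosen for this example) and k_B in eV/K, the acceptance probability changes sharply with temperature:

```python
import numpy as np

k_B = 8.617e-5   # Boltzmann constant in eV/K
delta_E = 0.1    # illustrative energy increase of a trial move, in eV

for T in (100, 300, 1000):  # temperatures in K
    p_accept = np.exp(-delta_E / (k_B * T))
    print(f"T = {T:4d} K  ->  P(accept) = {p_accept:.3f}")
```

The same uphill move is essentially never accepted at 100 K but is accepted roughly a third of the time at 1000 K.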

Applications

Comparison with Molecular Dynamics

| Feature | Monte Carlo (MC) | Molecular Dynamics (MD) |
| --- | --- | --- |
| Sampling Method | Random sampling | Deterministic integration of equations of motion |
| Time Evolution | No explicit time evolution | Explicit time evolution |
| Efficiency | Efficient for equilibrium properties | Efficient for dynamic properties |
| System Size | Can handle larger systems | Limited by computational cost |
| Temperature Dependence | Can easily simulate different temperatures | Requires separate simulations for each temperature |
| Energy Landscape | Can escape local minima | May get trapped in local minima |
| Computational Cost | Generally lower | Generally higher |
| Applications | Equilibrium properties, phase transitions | Dynamic properties, transport phenomena |
| Nonequilibrium Processes | Cannot simulate nonequilibrium processes | Can simulate nonequilibrium processes |