Summer School on Computational Materials Science:

Basic Simulation Techniques


Lecture 2: Molecular Dynamics

 
Boundary Conditions in Space and Time

Spatial Boundary Conditions

What spatial boundary conditions should we use for the simulations? Boundary conditions strongly influence the structure. Some possibilities are free boundaries (an isolated cluster), hard walls, and periodic boundary conditions (PBC).
These are different ways of representing the system. In periodic boundary conditions, if a particle goes out one side of the box it should eventually be put back in on the other side (although sometimes one does not want to do this).

How do we calculate distances and energies in periodic boundary conditions? In the minimum image convention we use the nearest distance among all the images. This implies that we must smoothly truncate the potential at r = L/2; otherwise energy will not be conserved as particles switch from one image to another. See the pseudocode for the force calculation in periodic boundary conditions. We will discuss below what happens to long-ranged potentials in PBC.
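In the spirit of that pseudocode, here is a minimal Python sketch of such a force loop (function name and units are our own). It uses a Lennard-Jones potential truncated and shifted at rc, the simplest way to make the energy continuous at the cutoff; smoother schemes also shift the force.

```python
import numpy as np

def lj_forces(pos, L, rc):
    """Lennard-Jones forces and energy with the minimum image convention.
    The potential is truncated and shifted so V(rc) = 0 (a simple choice)."""
    N = len(pos)
    F = np.zeros_like(pos)
    U = 0.0
    vc = 4*(rc**-12 - rc**-6)            # shift so the potential vanishes at rc
    for i in range(N - 1):
        d = pos[i] - pos[i+1:]           # separation vectors to later particles
        d -= L*np.round(d/L)             # minimum image convention
        r2 = (d*d).sum(axis=1)
        mask = r2 < rc*rc                # ignore pairs beyond the cutoff
        r2 = r2[mask]
        dm = d[mask]
        inv6 = r2**-3
        U += (4*(inv6*inv6 - inv6) - vc).sum()
        f = (24*(2*inv6*inv6 - inv6)/r2)[:, None]*dm   # force on particle i
        F[i] += f.sum(axis=0)
        F[i+1:][mask] -= f               # Newton's third law
    return F, U
```

At the Lennard-Jones minimum, r = 2^(1/6), the force vanishes; that is an easy sanity check on the loop.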

It is possible to use any space-filling cell derived from a Bravais lattice for the boundary conditions, for example the truncated octahedron, which is more nearly spherical than a cube. Non-cubic cells are also necessary when simulating non-cubic crystal structures.

It has been found that one hundred atoms in PBC can behave like an infinite system in many respects. The corrections are of order 1/N, and the coefficient can be small. One cannot calculate behavior at long distances (r > L/2) or at large times (longer than the traversal time of a sound wave across the box). Angular momentum is no longer conserved. Also, PBC can influence the stability of phases; for example, PBC favor cubic lattice structures that fit well into the box.

Boundary Conditions in Time

How do we start up the system? What positions and velocities do we use? In molecular dynamics one normally works at constant energy. Remember that in classical statistical mechanics the average kinetic energy per particle is (3/2)kBT. The best boundary conditions in both space and time depend critically on the physics of the system. If you make the wrong choice, the simulation may no longer be feasible!
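As a concrete illustration of a common start-up (a sketch; the function name and units with kB = 1 are assumptions): draw Maxwell-Boltzmann velocities, remove the center-of-mass drift, and rescale so the kinetic energy matches the target temperature exactly.

```python
import numpy as np

def initial_velocities(N, T, m=1.0, seed=0):
    """Start-up velocities for N particles at temperature T (kB = 1 assumed):
    Gaussian draw, zero total momentum, then rescale to KE = (3/2) N T."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(T/m), size=(N, 3))   # Maxwell-Boltzmann draw
    v -= v.mean(axis=0)                              # remove center-of-mass drift
    ke = 0.5*m*(v**2).sum()
    v *= np.sqrt(1.5*N*T/ke)                         # rescale to the target KE
    return v
```

The rescaling fixes the initial kinetic energy; during the constant-energy run the instantaneous temperature will of course fluctuate about its equilibrium value.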

Neighbor Tables for Short Ranged Potentials

We have already discussed how to cut off short-ranged potentials at the box edge, necessitated by the need for a continuous force in molecular dynamics. But what happens as the number of particles grows? In the straightforward double loop over all pairs, the operation count, and hence the CPU time, scales as N^2. This is its complexity.

But as the system gets bigger we do not have to keep increasing the cutoff radius. Once the cutoff radius gets beyond the second-neighbor distance, or once V(rc) < kBT, we can fix rc. Then a more efficient scheme can reduce the complexity to O(N).
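One such scheme is a cell list: sort the particles into cells of side at least rc, so that for each particle only the 27 neighboring cells need to be searched. A minimal sketch (names and interface are our own):

```python
import numpy as np
from itertools import product

def neighbor_pairs_cells(pos, L, rc):
    """Find all pairs closer than rc (minimum image) using a cell list.
    Cost is O(N) at fixed density, instead of O(N^2) for the double loop."""
    ncell = int(L // rc)                 # cells per side, each of width >= rc
    width = L / ncell
    cells = {}
    for i, r in enumerate(pos):
        idx = tuple((r // width).astype(int) % ncell)
        cells.setdefault(idx, []).append(i)
    pairs = set()
    for idx, members in cells.items():
        for off in product((-1, 0, 1), repeat=3):    # the 27 surrounding cells
            nidx = tuple((idx[d] + off[d]) % ncell for d in range(3))
            for i in members:
                for j in cells.get(nidx, ()):
                    if j <= i:
                        continue
                    d = pos[i] - pos[j]
                    d -= L*np.round(d/L)             # minimum image
                    if d @ d < rc*rc:
                        pairs.add((i, j))
    return pairs
```

A direct check is to compare the pair list against the brute-force O(N^2) loop on a small random configuration; the two must agree exactly.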

Long Ranged Potentials

Readings: A & T pgs. 155-165.

How should a long-ranged potential be defined in periodic boundary conditions, and how can it be evaluated efficiently? The answer to the first question is usually (but not always) the image potential:

Vimage(r) = Σn v(r + nL) - background,

where the sum runs over all periodic translations nL of the cell.

The background term is put in to make the sum converge. If one sums over spherical shells, the series is conditionally convergent. However, one has a choice about surface charges. A particle interacting with another particle actually sees an infinite array of images of that particle, and one has to decide whether this is physically correct. Inhomogeneous systems, such as a single charged impurity, are problematic: one is essentially computing the interaction at a low but finite concentration. The Ewald method has the great advantage that it respects the Poisson equation, which implies that the long-wavelength limits are correct.

The second question, how to evaluate this sum efficiently, is a little more subtle. Essentially the idea is to split the interaction into a short-ranged piece that converges quickly in real space and a smooth piece that converges quickly in Fourier space.

This technique, known as the Ewald summation method, turns the conditionally convergent, long-ranged Coulomb series into an absolutely convergent sum of a real-space and a k-space contribution (see, e.g., Kittel, Introduction to Solid State Physics, for a simple 1-D example). See the more detailed pdf description for particulars. The complexity of the method described there is N^(3/2): worse than for a short-ranged potential, but still better than the N^2 of the simplest direct evaluation of a pairwise sum. The coefficient isn't so bad because the new work consists of Fourier transforms, which are fast. We enclose a code fragment as an example of how to use the Ewald sums. The code of Allen and Tildesley is also useful to follow; it is a straightforward implementation of the method in their book (see code from A&T).

What tests can you make for your Ewald code? You can put the charges on a perfect lattice and make sure that you get the known Madelung constants.
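As a concrete version of such a test, the fragment below (a sketch; the splitting parameter alpha and the k-space cutoff are assumptions tuned for this small cell) evaluates the standard Ewald energy of a rocksalt arrangement of unit charges and extracts the NaCl Madelung constant from it.

```python
import numpy as np
from math import erfc, exp, pi, sqrt

L = 2.0                     # conventional rocksalt cell; nearest-neighbor distance 1
V = L**3
alpha = 3.5                 # splitting parameter: erfc(alpha*L/2) ~ 1e-7

# four +1 charges on the fcc sites, four -1 charges on the interleaved sites
pos = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0],
                [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
q = np.array([1, 1, 1, 1, -1, -1, -1, -1], float)
N = len(q)

# real-space part: minimum image suffices since erfc(alpha*r) is tiny beyond L/2
U_real = 0.0
for i in range(N):
    for j in range(i + 1, N):
        d = pos[i] - pos[j]
        d -= L*np.round(d/L)
        r = sqrt(d @ d)
        U_real += q[i]*q[j]*erfc(alpha*r)/r

# reciprocal-space part over the allowed wavevectors k = (2*pi/L)*n
U_recip = 0.0
nmax = 10
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        for nz in range(-nmax, nmax + 1):
            if nx == ny == nz == 0:
                continue
            k = (2*pi/L)*np.array([nx, ny, nz])
            k2 = k @ k
            S = np.sum(q*np.exp(1j*pos.dot(k)))          # charge structure factor
            U_recip += (2*pi/V)*exp(-k2/(4*alpha**2))/k2*abs(S)**2

U_self = -alpha/sqrt(pi)*np.sum(q**2)                    # remove self-interaction
U = U_real + U_recip + U_self

madelung = -U/4             # 4 ion pairs at nearest-neighbor distance 1
print(madelung)             # should come out close to 1.7476
```

The energy per ion pair of the perfect lattice is -M/d with d the nearest-neighbor distance, so the computed M should reproduce the known NaCl value, 1.747565, to the accuracy of the chosen cutoffs.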

In the last ten years, methods have been developed to do this computation with complexity O(N) or O(N ln N). The older methods are the particle-mesh methods; see R. Hockney and J. Eastwood, Computer Simulation Using Particles (McGraw-Hill, NY, 1981). More recently the fast multipole method has been developed. There is also the Fast Fourier Poisson method (e.g., D. York and W. Yang, J. Chem. Phys. 101, 3298 (1994)), which is O(N ln N). For a good overview of these and other methods, see the review by Toukmaji and Board at Duke.

Static Correlations and Order Parameters

Today I will discuss some of the most important information that comes from simulations: the static and dynamic correlation functions. You should develop the habit of looking at these quantities routinely, since they tell you basic information about what the particles are doing, namely where they are and how they arrange themselves.

Most of the information from this talk is on the formula page pdf.

The types of static correlations can generally be divided into one-body functions (the density), two-body functions (pair correlations), and order parameters.

The density

The density tells you where the particles are (or have been). In a translationally invariant system it will be constant except for noise if you average long enough.

You find it by histogramming, that is, by subdividing the total volume into a set of subvolumes and counting the particles in each.

How to histogram: for each sampled configuration, compute the bin index of each particle and increment that bin's counter; at the end, divide each count by the bin volume and by the number of configurations sampled.

How do we set the histogram bin size? By balancing the statistical error against the systematic error. The statistical error goes inversely as the square root of the bin volume times the length of the run; it blows up at small r for radial functions on a uniform mesh, since the bin volume goes as r^2. The systematic error depends on the physical system: if the system is spread out, the density is pretty smooth and only a few bins are needed.

Three-dimensional histograms are a problem because the number of hits per bin is low, they are hard to visualize, and they take a lot of memory. You might want to use symmetry or projections to reduce them to a 1- or 2-dimensional density.
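For example, a 1-D density profile obtained by projecting onto one box axis can be sketched as follows (function name and interface are our own):

```python
import numpy as np

def density_profile(configs, L, axis=2, nbins=20):
    """Number density n(z): project the particles onto one axis, histogram,
    and normalize by the slab volume and the number of configurations."""
    edges = np.linspace(0.0, L, nbins + 1)
    hist = np.zeros(nbins)
    for pos in configs:
        h, _ = np.histogram(pos[:, axis], bins=edges)
        hist += h
    slab_volume = L*L*(L/nbins)          # volume of each slab-shaped bin
    z = 0.5*(edges[1:] + edges[:-1])     # bin centers
    return z, hist/(len(configs)*slab_volume)
```

For a homogeneous system the profile should be flat, equal to N/V up to the statistical noise discussed above.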

It is also good to examine the density in Fourier space. In periodic boundary conditions the allowed k-vectors lie on a grid, k = 2πn/L. The Fourier density can be used either to 1) smooth the real-space density, by setting to zero the Fourier components beyond some cutoff and transforming back (disadvantage: the resulting density might go negative), or 2) look for periodic structures.

Pair correlations

The pair correlation function (also called the radial distribution function, or g(r)) is the density of other particles around a given particle. Its first peak occurs around r ≈ ρ^(-1/3), the mean interparticle spacing. The normalization is usually defined such that at large r, g(r) = 1. The theory of liquids is based on pair correlations because you can calculate the energy, pressure, etc. if you know g(r) (as long as the interaction is a pair potential). (Digression into functional derivatives.) In a liquid or gas, g(r) depends only on the distance between two particles, not on the angle. Actually, PBC break isotropy: g(r) could differ between the box axis and the body diagonal. Usually one ignores this and computes a spherically averaged g(r) for r less than half the box side. The pair correlation function is also needed to apply 'tail' corrections to the potential energy and pressure, which arise because of the truncation at the edge of the box.
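A minimal estimator for g(r) (a sketch; names and interface are our own) histograms the minimum-image pair distances out to L/2 and divides by the count an ideal gas of the same density would give, so that g(r) → 1 at large r:

```python
import numpy as np

def pair_correlation(configs, L, nbins=20):
    """Estimate the spherically averaged g(r) up to L/2 by histogramming
    minimum-image pair distances, normalized by the ideal-gas pair count."""
    edges = np.linspace(0.0, L/2, nbins + 1)
    hist = np.zeros(nbins)
    N = len(configs[0])
    for pos in configs:
        for i in range(N - 1):
            d = pos[i] - pos[i+1:]
            d -= L*np.round(d/L)                 # minimum image
            r = np.sqrt((d*d).sum(axis=1))
            h, _ = np.histogram(r, bins=edges)   # distances beyond L/2 dropped
            hist += h
    shell = 4*np.pi/3*(edges[1:]**3 - edges[:-1]**3)   # spherical shell volumes
    ideal = len(configs)*(N*(N - 1)/2)*shell/L**3      # ideal-gas pair counts
    return 0.5*(edges[1:] + edges[:-1]), hist/ideal
```

An easy test is an ideal gas (random positions), for which g(r) must be 1 at all r within the noise; note how noisy the smallest bins are, as discussed above.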

The Fourier transform of g(r) is called the static structure factor, Sk. It is important to calculate because it can be compared directly with experiment: either neutron or X-ray scattering can measure it. (There are very expensive instruments at Argonne, Grenoble, and Brookhaven dedicated to measuring Sk and its time-dependent generalization.) If the agreement is good, it gives a lot of confidence in the potential model. But don't get carried away; agreement to 5% is typical if the potential is at all reasonable.

The first peak of Sk is around k ≈ 2π ρ^(1/3). The normalization is such that at large k, Sk → 1. It can be computed in two ways: as a Fourier transform of g(r), or directly from the particle coordinates. The direct way is slower but better at low k, or at reciprocal lattice vectors of a possible solid structure, because it avoids the systematic errors of ignoring anisotropic correlations and of histogramming. Also, g(r) must be extended to large r to do the Fourier transform.

One should use the peak of Sk to decide whether a system is solid. For a liquid, Sk is smooth and of order 1 everywhere. A solid shows delta-function peaks, with heights a finite fraction of the number of particles, at the reciprocal lattice vectors of the solid. Solid structure shows up in g(r) in a more subtle way: instead of nice damped oscillations, there are bumps at the nearest-neighbor, next-nearest-neighbor, etc. distances (where they are depends on the type of lattice). See the figures for illustrations of g(r) in liquid helium.

The structure factor also signals phase separation in a two-phase region. An example is the formation of droplets, as occurs when the number of particles corresponds to neither a stable liquid nor a stable gas, but a mixture of liquid and gas. In that case the structure function will diverge at low wave vector. The value at exactly zero (S0) equals N by definition (in the canonical ensemble), but this can be different from the limit k → 0 of Sk, which is proportional to the compressibility of the system.

Order parameters

A key concept from the theory of phase transitions is the idea of an "order parameter" which characterizes a phase. For the moment we will just explain how this works for the two most common phase transitions: liquid-gas and liquid-solid.

The liquid-gas transition: the order parameter is the density. A liquid and a gas are identical except that the liquid has a higher density; at the critical point the difference disappears. There is a first-order phase transition between liquid and gas, so it is quite possible to do a simulation in the two-phase region. What will happen? Liquid droplets will form, assuming 1) the surface tension allows it and 2) the simulation is long enough. When this happens, Sk will have a characteristic behavior.

The liquid-solid transition: a solid is characterized by a periodic density, so the Fourier transform of the density will be very large at certain wavevectors (the reciprocal lattice vectors). CLAMPS allows you to input a particular set of wavevectors to monitor melting and freezing (keyword READ_K file).

Correlation Functions for Dynamical Response

See the pdf formula page. Readings A & T pgs. 58-64, 185-191

Now consider the calculation of dynamical properties and of the response to a perturbation. For static properties it would probably be more convenient to use Monte Carlo methods, since they are more robust. Dynamics is the reason we do MD.

The simplest dynamical properties are the linear response to an external perturbation. Most macroscopic measurements are in the linear regime because macroscopic perturbations are very small on the scale of microscopic forces.

Onsager made a well-known conjecture (1935), called the regression hypothesis (also known as the fluctuation-dissipation theorem). It states that the linear response of a system to a time-varying perturbation is the same as the decay of the fluctuations that occur naturally in statistical equilibrium. So there are two basic ways to calculate the response: either wait for the system to fluctuate by itself, or apply a perturbation and see what happens. (The first is often referred to as the Kubo method.)

Static perturbation

First let us consider static perturbations. Suppose our total Hamiltonian has the form:

H = H0 + c A

where H0 is the unperturbed Hamiltonian, A is a perturbation, and c is the coupling constant. As shown on the formula page, one can write the free energy as a power series in c. The linear response is simply the expectation of A in the unperturbed system. The second-order term is the fluctuation of A about its mean. In fact, if we neglect all terms above first order we always get an upper bound to the free energy. Finally, to calculate the free energy difference between the systems at c=0 and c=1, we just need to integrate the ensemble average of A over values of c between 0 and 1.
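This coupling-constant integration can be checked on a toy model where everything is analytic (a sketch, with kB = 1 assumed; in a real simulation the averages <A>_c would come from separate runs at several values of c): take H0 = x^2/2 and A = x^2 in one dimension, for which <A>_c = T/(1+2c) and F(1) - F(0) = (T/2) ln 3 exactly.

```python
import numpy as np

# H = H0 + c*A with H0 = x^2/2 and A = x^2 (classical 1-D model, kB = 1).
# The Boltzmann weight is exp(-(1/2 + c) x^2 / T), so <A>_c = T/(1 + 2c),
# and the exact free energy difference is F(1) - F(0) = (T/2) ln 3.
T = 1.0
cs = np.linspace(0.0, 1.0, 101)
A_avg = T/(1.0 + 2.0*cs)                    # <A>_c, known in closed form here
dF = np.sum(0.5*(A_avg[1:] + A_avg[:-1])*np.diff(cs))   # trapezoid rule in c
print(dF, 0.5*T*np.log(3.0))                # the two numbers agree closely
```

Replacing the closed-form <A>_c by ensemble averages measured at each c turns this into the standard thermodynamic (coupling-constant) integration.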

We can also find the response of some property B to the perturbation; it equals the correlation between A and B. For example, one can estimate the effect of cutting off the potential. Generally one expects that if the neglected tail of the potential is smooth, it is nearly independent of the phase-space point R, and hence terms beyond first order can be neglected. A second example is to apply a sinusoidal potential and look at the response of the density; it is simply Sk, the structure factor. That is essentially how neutrons and X-rays measure the structure factor.

Diffusion Constant

Now consider the simplest dynamical experiment: mix two types of atoms and see how quickly diffusion makes the system a homogeneous mixture. The diffusion constant D, together with the diffusion equation, governs this process on a macroscopic scale. The Green's function of the diffusion equation is a spreading Gaussian with mean-squared displacement equal to 6Dt (in three dimensions). A little thought shows that a way to calculate D microscopically is to study the diffusion of a single atom. Again, the equation is on the formula page. One can easily transform this expression into the integral of the velocity autocorrelation function.

The transformation assumes that one is using the unwound coordinates, i.e., the positions of the atoms such that:

r(t+h) = r(t) + h v(t) + ...

Do not use periodic boundary conditions to put these particles back in the central cell; accumulate the true displacements instead.
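The bookkeeping can be sketched with synthetic random-walk data (the trajectories here are generated, not taken from MD, purely to illustrate the estimator): compute the mean-squared displacement of the unwound coordinates and read off D from MSD = 6Dt.

```python
import numpy as np

rng = np.random.default_rng(1)
D_true, dt, nstep, nwalk = 0.5, 1.0, 200, 2000

# synthetic unwound 3-D trajectories: independent Gaussian steps whose
# variance per coordinate is 2*D*dt, so that <|r(t) - r(0)|^2> = 6*D*t
steps = rng.normal(0.0, np.sqrt(2*D_true*dt), size=(nstep, nwalk, 3))
r = np.cumsum(steps, axis=0)               # displacement since t = 0
msd = (r**2).sum(axis=2).mean(axis=1)      # mean-squared displacement vs time
t = dt*np.arange(1, nstep + 1)
D_est = msd[-1]/(6*t[-1])                  # estimate from the late-time slope
```

With real MD data one would average over time origins as well as particles, and fit the slope only in the late-time diffusive regime.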

One of the key discoveries in the early days of computer simulation, by Alder and Wainwright, was the long-time tail of the velocity-velocity correlation function: it decays algebraically in time, as t^(-d/2), where d is the spatial dimension. This implies that the diffusion constant may not really exist in 2D. This is not Markovian (memoryless) behavior, which would decay exponentially in time. The momentum of a fast-moving particle gets stored in the nearby liquid as a vortex, which helps keep the particle moving in the same direction for a very long time. (citation)

Response to a time varying perturbation

If we measure the response of A to a perturbation B, it is the correlation between A and the time derivative of B (or vice versa) which enters. The coefficient c of the susceptibility is the linear response, or transport coefficient. Some examples are the electrical and thermal conductivities, the bulk and shear viscosities, the mass diffusion coefficient, and the dynamic structure factor. In linear response, the perturbation and the response are at the same frequency and wavelength. Macroscopically, one is often interested only in the zero-frequency, zero-wavelength limit of the response. A peak at finite frequency shows the presence of a collective mode, for example a sound wave. The derivation of the fluctuation-dissipation theorem is in many textbooks; the general result for a perturbation is on the formula page.

The dynamic structure factor Sk(ω) (also known as the density-density response function) is a key quantity. If you push the system at place and time (r,t), how does it respond at (r',t')? This is what neutron scattering measures. It is not hard to see that sound waves of various kinds will show up as prominent features of Sk(ω); this is a good way to see them. The book Theory of Simple Liquids by Hansen and McDonald contains a thorough treatment of the theoretical properties of this function.
 
 
 

David Ceperley, University of Illinois, May 2001