Ising Model

Sites on a lattice with \(s_i \in \{\pm 1\}\) indexed by lattice site. Then, \(E = -\sum_{ij} J_{ij} s_i s_j - B\sum_i s_i\).

C.f. quantum mechanics: \(-J\hat{\mathbf{S}}_i\cdot\hat{\mathbf{S}}_j = -\frac{J}{2}\left(\hat{S}_{ij}^2 - \hat{S}_i^2 - \hat{S}_j^2\right)\) (with \(\hat{\mathbf{S}}_{ij} = \hat{\mathbf{S}}_i + \hat{\mathbf{S}}_j\)) can be projected to a classical Heisenberg model \(-J\,\mathbf{s}_i\cdot\mathbf{s}_j\) and then projected down to one component, giving the Ising model.

In our thermodynamic models, we can transition between a gas and a liquid without a phase transition… how? The Ising model elucidates how.

[Figure: 20230130110456-statistical_mechanics.org_20230421_115956.png]

Often we consider nearest-neighbor interactions only: \(E = -J\sum_{\langle ij\rangle} s_i s_j - B\sum_i s_i\). This model cannot be used for dynamics, hence we do not write a Hamiltonian, just an energy \(E\). Qualitatively, what happens?

Suppose \(B=0\). If \(J>0\) and the temperature is low, then spins tend to be aligned with their nearest neighbors: \(\langle s_i\rangle \to \pm 1\). If \(J>0\) and the temperature is high, then all states are roughly equally likely, so the spins are essentially randomly oriented (there are far more such states): \(\langle s_i\rangle \to 0\).

Suppose \(J=0\). Then the spins align with the magnetic field, hence at low temperature \(\langle s_i\rangle \to \pm 1\) for \(B \gtrless 0\).

Solve \(B>0\) and \(J=0\): \(E = -B\sum_i s_i = -BM\), so \(E = N\epsilon\) with \(M = Nm\). Choose \(N\) spins and a volume \(V\); note that the volumetric dependence sits in \(J\), since it relates to the neighbors. Define \(m(T) = \langle s_i\rangle(T) = \langle M\rangle/N\) with \(M = \sum_i s_i\), and \(\langle M\rangle = \frac1Z\sum_\alpha M_\alpha\exp(-E_\alpha/kT)\). For a single spin, \(m = \langle s\rangle = \frac{1}{Z_1}\sum_{s=\pm1} s\,\exp(\beta Bs) = \frac{\exp(\beta B) - \exp(-\beta B)}{\exp(\beta B) + \exp(-\beta B)} = \tanh(\beta B)\).
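
As a quick numerical check of the last result, here is a minimal sketch (Python; units with \(k = 1\); the values of \(\beta\) and \(B\) are arbitrary examples, not from the notes) that recovers \(m = \tanh(\beta B)\) by enumerating the two states of a single spin:

  import math

  def m_single_spin(beta, B):
      # <s> = sum_s s*exp(beta*B*s) / sum_s exp(beta*B*s), with s = +/-1
      num = sum(s * math.exp(beta * B * s) for s in (+1, -1))
      den = sum(math.exp(beta * B * s) for s in (+1, -1))
      return num / den

  beta, B = 1.3, 0.7
  print(m_single_spin(beta, B), math.tanh(beta * B))  # the two numbers agree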

Let \(J>0\) and \(B=0\): \(E = -J\sum_{\langle i,j\rangle} s_i s_j = -\frac{J}{2}\sum_i\sum_{k\in\{\text{n.n. of site }i\}} s_i s_k = -\frac{J}{2}\sum_i\big(s_i\sum_{\text{n.n.}}s_k\big) \approx -\frac{J}{2}\sum_i\big(s_i\sum_{\text{n.n.}}\langle s\rangle\big) \sim -\tilde B\sum_i s_i\) with \(\tilde B = Jz\langle s\rangle\), where \(z\) is the number of nearest neighbors (the factor \(\tfrac12\) only corrects the double counting of bonds and drops out of the effective field on a single spin). This reduces the problem to a single independent spin in the field \(\tilde B\). Then \(m \approx \tanh(\beta\tilde B) = \tanh(\beta Jzm)\). This is called a (naive) mean-field approximation. Note that the LHS is linear in \(m\) and the RHS is a tanh, so \(m=0\) is always a solution. Further, \(\tanh x \approx x - \frac13 x^3\), so there can be a crossing at finite \(m(T)\). As \(T\) increases, the slope of the RHS at the origin, \(\beta Jz\), decreases; at low temperature we therefore get a finite \(m\). When the slopes at the origin match, we arrive at the critical temperature: \(T\to T_c\): \(\frac{\partial\,\text{LHS}}{\partial m} = \frac{\partial\,\text{RHS}}{\partial m} \Rightarrow 1 = \frac{Jz}{kT_c}\), hence \(kT_c = Jz\). Solving for \(m(T)\) near \(T_c\): \(m \approx \frac{Jz}{kT}m - \frac13\left(\frac{Jz}{kT}m\right)^3\). Then \(\left(1 - \frac{Jz}{kT}\right) \approx -\frac13\left(\frac{Jz}{kT}\right)^3 m^2\), so \(m(T)^2 \approx 3\left(\frac{Jz}{kT} - 1\right)\left(\frac{kT}{Jz}\right)^3 = 3\left(\frac{T_c}{T} - 1\right)\left(\frac{T}{T_c}\right)^3\). [Figure: 20230130110456-statistical_mechanics.org_20230421_115916.png]
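
The self-consistency equation \(m = \tanh(\beta Jzm)\) is easy to solve numerically. A sketch (Python; assumed units \(k = J = 1\) and \(z = 4\), so \(kT_c = 4\)) using fixed-point iteration:

  import math

  def mean_field_m(T, z=4, J=1.0, m0=0.9, iters=2000):
      # iterate m -> tanh(z*J*m/T); converges to the stable solution
      m = m0
      for _ in range(iters):
          m = math.tanh(z * J * m / T)
      return m

  for T in (2.0, 3.0, 3.9, 4.1, 5.0):      # kTc = z*J = 4 in these units
      print(T, round(mean_field_m(T), 4))  # nonzero below Tc, ~0 above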

Equivalently, \(1 - \frac{T_C}{T} = -\frac13\left(\frac{T_C}{T}\right)^3 m^2\). Thus \(m(T) = \pm\sqrt{3}\,T_C^{-3/2}\,T\,\sqrt{T_C - T}\). Then \(m \propto (T_C - T)^\beta\) with \(\beta = \frac12\), for \(T \to T_C^-\).

The high-temperature phase has more symmetry than the low-temperature phase: in the disordered state, rotating your perspective around the lattice (or flipping all spins) leaves things looking the same, whereas in the ordered low-temperature state such an operation can be distinguished. The high-temperature phase has the same symmetry as the Hamiltonian (the energy function).

\(\beta\) is an example of a critical exponent. With \(m = M/N\), the standard set is:

\(\alpha\): \(C_B \propto |T - T_C|^{-\alpha}\), heat capacity at \(B = 0\); \(\alpha \gtrsim 0\).
\(\beta\): \(m \propto (T_C - T)^{\beta}\), magnetization, \(T \to T_C\), \(T < T_C\), \(B = 0\).
\(\gamma\): \(\chi \propto |T - T_C|^{-\gamma}\), susceptibility \(\chi = \frac1N\left(\frac{\partial\langle M\rangle}{\partial B}\right)_{B=0}\), \(T \to T_C\).
\(\delta\): \(m \propto B^{1/\delta}\), \(B \to 0\), \(T = T_C\).
\(\eta\): \(C_C^{(2)}(r) \propto \frac{1}{r^{d-2+\eta}}\), \(T = T_C\), \(B = 0\).
\(\nu\): \(\xi \propto |T - T_C|^{-\nu}\) (correlation length), \(T\) near \(T_C\), \(B = 0\), with \(C_C^{(2)} \propto \exp(-r/\xi(T))\).

\(C^{(2)}(r,\tau)\) is a correlation function, \(\langle S(x,t)\,S(x+r,t+\tau)\rangle\). The equal-time correlation is \(C^{(2)}(r,0)\). At infinite separation, \(C^{(2)}(\infty,0) = m^2\), since the spins are then uncorrelated. \(C_C^{(2)}\) is the connected correlation function, \(C_C^{(2)} = C^{(2)} - C^{(2)}(\infty,0)\). [Figure: 20230130110456-statistical_mechanics.org_20230424_112213.png] As the temperature is lowered, regions of like spins start to appear.

Mean-field predictions:

1D: \(z=2\), \(kT_C = 2J\), \(\beta = 1/2\).

2D: \(z=4\), \(kT_C = 4J\), \(\beta = 1/2\).

3D: \(z = 6\ldots12\), \(kT_C \approx 8J\), \(\beta = 1/2\).

4D and higher: \(\beta = 1/2\).

Exact solutions (and experimental values):

1D: no transition.

2D: \(kT_C = \frac{2J}{\ln(1+\sqrt2)} \approx 2.269\,J\), \(\beta = 1/8\).

3D, ferromagnets (experiment): Fe: \(\beta \approx 0.37\); Ni: \(\beta \approx 0.36\).

3D (Ising): \(\beta \approx 0.32(9)\).

4D and higher: \(\beta = 1/2\).

Generalizations to more than two states per site are called Potts models.

Approximate Solutions for Low Temperature

Low temperature is when the occupation is dominated by the ground state and the lowest elementary excitations (states very close to the ground state).

For the 1D ground state, all spins are aligned, with energy \(\epsilon_0\). Flipping one spin in the interior breaks two bonds and gives \(\Delta\epsilon = 4J\), hence \(\epsilon = \epsilon_0 + 4J\). But wait: flipping a whole block of spins out to the end of the chain breaks only one bond, \(\epsilon_1 = \epsilon_0 + 2J\), hence the single-spin flip is actually the second excitation; the lowest excitation is a single domain wall.

\(F(T,V,N) = E - TS\) with \(E = -J\sum_{i=1}^{N-1} s_i s_{i+1}\). \(F = -kT\ln Z\) with \(Z = \sum_\alpha \exp(-\beta E_\alpha)\).

There are 2 fully aligned states and \(2(N-1)\) states with exactly one anti-aligned neighbor pair (one domain wall). Averaged over wall positions, the magnetization of the domain-wall states is zero.

\(\Delta F = F_1 - F_0 = E_1 - E_0 - T(S_1 - S_0) = 2J - T\left(k\ln(N-1) - k\ln 1\right) = 2J - kT\ln(N-1)\). For \(N\to\infty\), \(\Delta F < 0\) at any finite temperature: domain walls proliferate, so there is no ordered phase in 1D.

For a 2D system, consider a domain wall: a boundary running through the middle of the region, separating flipped from unflipped spins. Each unit of wall costs an energy difference of \(2J\), so \(\Delta E = 2JL\), where \(L\) is the length of the domain wall, and \(L\) grows with \(N\) for a system-spanning wall. Here \(\tilde c_2\), roughly 2 to 3, denotes the number of choices of direction to the next lattice site, the number of starting points is \(\tilde c_3 N\), and the number of domain walls of length \(L\) is \(\sim(\tilde c_2)^L\). Take \(\tilde c_2 = 2\), \(\tilde c_3 = 1\).

For \(\tilde c_2 = 2\) we get reasonably close to the exact result.

\(m = \langle s_0\rangle = f(\langle s_k\rangle)\); translational symmetry implies \(\langle s_0\rangle = \langle s_k\rangle\). Then \(m = \tanh(\beta(zJm + B))\).

A similar approach can be used to calculate the Helmholtz free energy \(F = U - TS\) directly. HW: Bragg-Williams approximation, see the alloy section.

C.f. QM: variational derivation of mean-field theory, MFT. \(Z = \sum_\alpha\exp(-\beta H(\alpha))\), where \(\alpha\) denotes a configuration. Split the Hamiltonian into two parts, \(H = H_0 + H_1\), such that we can evaluate \(Z_0\) for \(H_0\). Then we write \(\frac{Z}{Z_0} = \frac{\sum_\alpha\exp(-\beta(H_0+H_1)(\alpha))}{\sum_\alpha\exp(-\beta H_0(\alpha))} = \sum_\alpha\frac{\exp(-\beta H_0(\alpha))}{\sum_{\alpha'}\exp(-\beta H_0(\alpha'))}\exp(-\beta H_1(\alpha)) = \sum_\alpha p_0(\alpha)\exp(-\beta H_1(\alpha))\). This is an ensemble average with respect to the probability distribution \(p_0\): \(\frac{Z}{Z_0} = \langle\exp(-\beta H_1)\rangle_0\). Now, by convexity, \(\langle\exp(f)\rangle \ge \exp\langle f\rangle\).

Then \(\frac{Z}{Z_0} \ge \exp(-\beta\langle H_1\rangle_0)\). Hence \(\ln Z \ge \ln Z_0 - \beta\langle H_1\rangle_0\), and with \(F = -kT\ln Z\) we get \(F \le F_0 + \langle H_1\rangle_0\). These are the Bogoliubov (or Gibbs-Bogoliubov-Feynman) inequalities. Use \(F_0 = \langle H_0\rangle_0 - TS_0\) with \(S_0 = -k\sum_\alpha p_0(\alpha)\ln p_0(\alpha)\). Then \(F \le \langle H_0\rangle_0 + \langle H_1\rangle_0 - TS_0\). Hence \(F \le \langle H\rangle_0 - TS_0\).

So, if you cannot evaluate the full probability distribution, but you can for one part of the Hamiltonian, you can still compute an upper bound on \(F\) with what you do have.

Use independent particles (spins): \(H_0 = \lambda\tilde H_0\), where \(\lambda\) is a variational parameter. Then \(F_{\mathrm{var}}(\{\lambda_i\}) \equiv F_0(\lambda) + \langle H_1(\lambda)\rangle_0 \ge F\). For our Ising model take \(H_0 = -\lambda\sum_i s_i\) (the field \(B\) could be kept separate, but for simplicity's sake we absorb it into \(\lambda\)). Then \(Z_0(\lambda) = [2\cosh(\beta\lambda)]^N\), since for one spin \(Z_1 = 2\cosh(\beta\tilde B)\) and \(\langle s\rangle_1 = \frac{2\sinh(\beta\tilde B)}{2\cosh(\beta\tilde B)} = \tanh(\beta\tilde B)\). Then \(H_1 = H - H_0 = -J\sum_{\langle ij\rangle}s_is_j + (\lambda - B)\sum_i s_i\). Hence \(\langle H_1\rangle_0 = -\frac12 zJN\langle s\rangle_0^2 + N(\lambda - B)\tanh(\beta\lambda) = -\frac12 NzJ\tanh^2(\beta\lambda) + N(\lambda - B)\tanh(\beta\lambda)\). Note \(F_1(\lambda) = -kT\ln Z_1(\lambda)\), so \(F_0 = NF_1\). So \(F_{\mathrm{var}}(\lambda) = N\left(-\frac1\beta\ln(2\cosh\beta\lambda) - \frac12 zJ\tanh^2(\beta\lambda) + (\lambda - B)\tanh(\beta\lambda)\right)\). Minimizing, \(\frac{\partial F_{\mathrm{var}}}{\partial\lambda} = 0\) gives \(\lambda_{\min} - B = zJ\tanh(\beta\lambda_{\min})\). Inserting this into the original expression, \(F_{\mathrm{var}}(\lambda_{\min}) = -\frac N\beta\ln(2\cosh(\beta\lambda_{\min})) + N\frac{(\lambda_{\min} - B)^2}{2zJ}\). Then \(m = -\frac1N\frac{dF}{dB} = -\frac1N\left(\frac{\partial F}{\partial B} + \frac{\partial F}{\partial\lambda}\frac{\partial\lambda}{\partial B}\right) = \frac{\lambda_{\min} - B}{zJ} = \tanh(\beta(zJm + B))\), using \(\partial F/\partial\lambda = 0\) at the minimum.
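
A sketch (Python; \(k = 1\) assumed, with \(z = 4\), \(J = 1\), \(B = 0.1\), \(\beta = 0.5\) chosen arbitrarily) that minimizes \(F_{\mathrm{var}}(\lambda)\) per spin on a grid and checks the stationarity condition \(\lambda_{\min} - B = zJ\tanh(\beta\lambda_{\min})\):

  import math

  def f_var(lam, beta, z=4, J=1.0, B=0.1):
      # variational free energy per spin
      t = math.tanh(beta * lam)
      return (-math.log(2 * math.cosh(beta * lam)) / beta
              - 0.5 * z * J * t * t + (lam - B) * t)

  beta, z, J, B = 0.5, 4, 1.0, 0.1          # T = 2 is below kTc = zJ = 4
  grid = [i * 1e-3 for i in range(8001)]    # crude search over lambda in [0, 8]
  lam_min = min(grid, key=lambda l: f_var(l, beta, z, J, B))
  print(lam_min - B, z * J * math.tanh(beta * lam_min))   # nearly equal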

To improve on the MF approach, the general idea is cluster approximation(s): pick a small 'core' of the system and solve it well ('exactly'), then treat the surroundings in an averaged way.

One can also do this as a function of dimension: DMFT, Dynamical Mean Field Theory.

Example: Bethe Approximation.

One central spin: in MFT everything else is treated as an average. In the Bethe approximation we keep the central spin \(s_0\) and its \(z\) neighbors \(s_k\) explicitly (e.g. \(z = 4\) on the square lattice); the neighbors-of-neighbors are mean-field averages, so the shell of neighbors is a sort of intermediate layer. Full model: \(E = -J\sum_{\langle ij\rangle}s_is_j - B\sum_i s_i\). The cluster energy is \(E_c = -J\sum_{k=1}^z s_0 s_k - Bs_0 - \tilde B\sum_{k=1}^z s_k\). Compare MF: \(E_1 = -\tilde B s_0\), \(\tilde B = Jzm\). Now we calculate \(\langle s_0\rangle\) and \(\langle s_k\rangle\); since all spins are equivalent (translational symmetry), self-consistency requires \(\langle s_0\rangle = \langle s_k\rangle\) (HW). Continuing this (for \(B = 0\)), one obtains \(\frac{\cosh^{z-1}(\beta(J + \tilde B))}{\cosh^{z-1}(\beta(J - \tilde B))} = \exp(2\beta\tilde B)\). A nonzero solution \(\tilde B\) first appears when the slopes at \(\tilde B = 0\) match, \(\frac{\partial\,\text{LHS}}{\partial\tilde B}\big|_{\tilde B=0} = \frac{\partial\,\text{RHS}}{\partial\tilde B}\big|_{\tilde B=0}\), which gives \(\beta_c J = \frac12\ln\frac{z}{z-2}\). For a 2D square lattice (\(z=4\)), \(kT_c = \frac{2J}{\ln 2} \approx 2.885\,J\), where the exact value is \(2.269\,J\). One still always obtains \(m(T)\propto(T_c - T)^\beta\) with \(\beta = 1/2\), but we get a better \(kT_c\).
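
A quick numerical comparison (Python; \(k = J = 1\) assumed) of the Bethe critical temperature \(kT_c = 2J/\ln\frac{z}{z-2}\) with the naive mean-field value \(kT_c = zJ\) and the exact 2D result:

  import math

  z, J = 4, 1.0
  kTc_bethe = 2 * J / math.log(z / (z - 2))
  kTc_mf = z * J
  kTc_exact_2d = 2 * J / math.log(1 + math.sqrt(2))
  print(kTc_bethe, kTc_mf, kTc_exact_2d)   # ~2.885, 4.0, ~2.269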

Setup for Next Time

Exact solutions:

  1. Open chain: \(E = -J\sum_{i=1}^{N-1}s_is_{i+1}\). \(Z_N = \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1}\exp\left(\beta J\sum_{i=1}^{N-1}s_is_{i+1}\right)\). So we get lots of products.

Since we have an open chain, we can start summing from the last spin: the Boltzmann factor is a product \(\exp(\beta Js_1s_2)\exp(\beta Js_2s_3)\cdots\exp(\beta Js_{N-1}s_N)\), and only the last factor involves \(s_N\). So \(\sum_{s_N=\pm1}\exp(\beta Js_{N-1}s_N) = \exp(\beta Js_{N-1}) + \exp(-\beta Js_{N-1}) = 2\cosh(\beta J)\), independent of \(s_{N-1}\). Now \(s_{N-1}\) is the new last spin, and we can keep doing this and sum up. (HW) \(Z_N = 2(2\cosh(\beta J))^{N-1}\), \(F = -kT\ln Z_N\), \(U = -\frac{\partial\ln Z}{\partial\beta}\), \(C_V = \frac{\partial U}{\partial T}\).
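
A brute-force check of \(Z_N = 2(2\cosh\beta J)^{N-1}\) for a small open chain (Python; \(k = 1\), with arbitrary example values of \(\beta\), \(J\), \(N\)):

  import itertools, math

  def Z_brute(N, beta, J=1.0):
      # sum exp(-beta*E) over all 2^N spin configurations of the open chain
      Z = 0.0
      for spins in itertools.product((+1, -1), repeat=N):
          E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
          Z += math.exp(-beta * E)
      return Z

  beta, J, N = 0.7, 1.0, 8
  print(Z_brute(N, beta, J), 2 * (2 * math.cosh(beta * J)) ** (N - 1))  # agree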

Returning to the 2D domain-wall argument: to avoid counting walls with loops, take \(1 < \tilde c_2 < 3\); write \(c \equiv \tilde c_2 \approx 2\). The number of domain walls of length \(L\) is roughly \(N_{DW} = N\,c^L\) (\(N\) starting points, \(c\) choices per step). Then \(\Delta F = 2JL - kT\ln(Nc^L) = 2JL - kT\ln N - kTL\ln c\); for large \(L\) the term proportional to \(L\) dominates, \(\Delta F \approx L(2J - kT\ln c)\). The ferromagnetic state is stable when \(\Delta F > 0\), i.e. \(kT < \frac{2J}{\ln c}\), so \(kT_c = \frac{2J}{\ln c}\) (which requires \(\ln c > 0\), i.e. \(c > 1\)). Numerically, \(kT_c \approx 2.9J\) for \(c = 2\) and \(kT_c \approx 1.8J\) for \(c = 3\).

Transfer Matrix Solution of the One-Dimensional Ising Model

Consider the transfer matrix solution to the 1D Ising model in a ring arrangement (periodic boundary conditions, \(s_{N+1}\equiv s_1\)): \(E = -J\sum_{i=1}^N s_is_{i+1} - B\sum_{i=1}^N s_i\).

Symmetrizing in \(i \leftrightarrow i+1\): \(E = -\sum_{i=1}^N\left(Js_is_{i+1} + \frac B2(s_i + s_{i+1})\right)\). So \(Z_N = \sum_\alpha\exp(-\beta E_\alpha)\), \(\alpha = \{s_1=\pm1, s_2=\pm1,\ldots,s_N=\pm1\}\). Then \(Z_N = \sum_{\{s_i\}}\prod_{i=1}^N\exp\left(\beta\left(Js_is_{i+1} + \frac B2(s_i+s_{i+1})\right)\right)\). Hence \(Z_N = \mathrm{Tr}(T^N)\), with \(T\) the transfer matrix, solves our system. We get the solution by diagonalizing the matrix, \(D = UTU^{-1}\), hence \(Z_N = \mathrm{Tr}(T^N) = \mathrm{Tr}(D^N)\). Then \(Z_N = \lambda_1^N + \lambda_2^N + \cdots\) and \(F_N = -kT\ln Z_N = -kT\ln(\lambda_1^N + \lambda_2^N + \cdots)\). Let \(\lambda_1\) be the largest eigenvalue. If we only have a 2×2 matrix, \(F_N = -NkT\ln\lambda_1 - kT\ln\left(1 + \frac{\lambda_2^N}{\lambda_1^N}\right) \to -NkT\ln\lambda_1\) in the thermodynamic limit. Hence we only need the largest eigenvalue.

For \(N=2\) (a ring of two sites, where both bonds connect the same pair): \(Z_2 = \sum_{s_1=\pm1}\sum_{s_2=\pm1}\exp\left(\beta\left(Js_1s_2 + \tfrac B2(s_1+s_2)\right)\right)\exp\left(\beta\left(Js_2s_1 + \tfrac B2(s_2+s_1)\right)\right)\). In general we get \(2^N\) terms, each a product of \(N\) exponentials.

From the board, labeling each factor by its spin pair: \(e^{(++)}e^{(++)} + e^{(+-)}e^{(-+)} + e^{(-+)}e^{(+-)} + e^{(--)}e^{(--)}\). So \(Z_2 = \exp(2\beta(J+B)) + 2\exp(-2\beta J) + \exp(2\beta(J-B))\).

So \(\mathrm{Tr}(T^2) = \mathrm{Tr}\begin{pmatrix}t_{11} & t_{12}\\ t_{21} & t_{22}\end{pmatrix}\begin{pmatrix}t_{11} & t_{12}\\ t_{21} & t_{22}\end{pmatrix} = t_{11}^2 + 2t_{12}t_{21} + t_{22}^2\). Matching terms, \(T = \begin{pmatrix}\exp(\beta(J+B)) & \exp(-\beta J)\\ \exp(-\beta J) & \exp(\beta(J-B))\end{pmatrix}\). The eigenvalues are then \(\lambda_{1/2} = \exp(\beta J)\cosh(\beta B) \pm \sqrt{\exp(2\beta J)\sinh^2(\beta B) + \exp(-2\beta J)}\). \(F = -NkT\ln\lambda_1\), and \(m = -\frac1N\frac{\partial F}{\partial B} = \frac{kT}{\lambda_1}\frac{\partial\lambda_1}{\partial B} = \frac{\sinh(\beta B)}{\sqrt{\sinh^2(\beta B) + \exp(-4\beta J)}}\). For \(B = 0\), \(m = 0\) at any finite temperature. But at low temperature even a slight field gives a nearly saturated magnetization: for any fixed \(B \ne 0\), as \(T\to0\) we get \(\sinh^2(\beta B) \gg \exp(-4\beta J)\), hence \(m \to \pm1\).
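
A numerical sketch of the transfer-matrix result (Python, with numpy assumed available; \(k = 1\), arbitrary example values of \(\beta\), \(J\), \(B\)). It checks \(Z_N = \mathrm{Tr}(T^N)\) against brute-force enumeration of a small ring, and compares the magnetization from the largest eigenvalue with the closed form above:

  import itertools, math
  import numpy as np

  def transfer_matrix(beta, J, B):
      return np.array([[math.exp(beta * (J + B)), math.exp(-beta * J)],
                       [math.exp(-beta * J),      math.exp(beta * (J - B))]])

  def Z_ring_brute(N, beta, J, B):
      Z = 0.0
      for s in itertools.product((+1, -1), repeat=N):
          E = -sum(J * s[i] * s[(i + 1) % N] + B * s[i] for i in range(N))
          Z += math.exp(-beta * E)
      return Z

  beta, J, B, N = 0.6, 1.0, 0.3, 10
  T = transfer_matrix(beta, J, B)
  print(np.trace(np.linalg.matrix_power(T, N)), Z_ring_brute(N, beta, J, B))  # agree

  # m = -(1/N) dF/dB = (1/beta) d(ln lambda_1)/dB, via numerical derivative
  h = 1e-6
  lam_p = max(np.linalg.eigvalsh(transfer_matrix(beta, J, B + h)))
  lam_m = max(np.linalg.eigvalsh(transfer_matrix(beta, J, B - h)))
  m_numeric = (math.log(lam_p) - math.log(lam_m)) / (2 * h) / beta
  m_closed = math.sinh(beta * B) / math.sqrt(math.sinh(beta * B)**2 + math.exp(-4 * beta * J))
  print(m_numeric, m_closed)  # agree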

Calculate ensemble averages with a computer.

Monte Carlo

Ising model with \(N_S\) sites: the number of configurations is \(N_\alpha = 2^{N_S}\). For a 10×10 lattice we get \(N_\alpha = 2^{100} \approx 1.3\times10^{30}\), far too many to enumerate directly.

If we choose states uniformly at random, we tend not to recover the ensemble average well, since most states have roughly zero magnetization: as we take more samples, the sampled distribution just becomes a narrower peak around zero magnetization. With the heat-bath method we no longer need to calculate the full probability distribution (Boltzmann factor and normalization over all states) for random samples.

Heat Bath MC

Pick a spin in the lattice, \(i = (n_x, n_y)\). Count the number of up spins among its neighbors, \(n_i\), and compute \(m_i = \sum_{j\in\mathrm{n.n.}(i)} s_j\):

\(m_i\)   \(n_i\)
4 4
2 3
0 2
-2 1
-4 0

Compute energy of spin i given the environment.

\(E_+ = -Jm_i - B\) (energy if \(s_i = +1\))

\(E_- = +Jm_i + B\) (energy if \(s_i = -1\))

Set spin \(s_i = +1\) with probability \(p_+ = \frac{\exp(-\beta E_+)}{\exp(-\beta E_+) + \exp(-\beta E_-)}\), or \(s_i = -1\) with \(p_- = \frac{\exp(-\beta E_-)}{\exp(-\beta E_+) + \exp(-\beta E_-)}\).

Record \(E_n\) and \(M_n\).

Repeat N times.

\(\langle E\rangle = \frac1N\sum_{n=1}^N E_n\), \(\langle M\rangle = \frac1N\sum_{n=1}^N M_n\).

Also keep track of \(E_n^2\) and \(M_n^2\) for the variances and response functions (heat capacity, susceptibility).
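
A minimal heat-bath sketch for a small 2D lattice with periodic boundaries (Python; assumptions: \(k = 1\), \(J = 1\), \(B = 0\), \(T = 2.0\), with lattice size and sweep counts chosen arbitrarily; updates are grouped into sweeps and \(|m|\) is measured once per sweep, a common variant of the procedure above):

  import math, random

  L, J, B, T = 10, 1.0, 0.0, 2.0
  beta = 1.0 / T
  spins = [[random.choice((+1, -1)) for _ in range(L)] for _ in range(L)]

  def heat_bath_sweep():
      for _ in range(L * L):
          x, y = random.randrange(L), random.randrange(L)
          m_i = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y] +
                 spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
          E_up, E_dn = -J * m_i - B, +J * m_i + B      # energies for s_i = +1 / -1
          p_up = math.exp(-beta * E_up) / (math.exp(-beta * E_up) + math.exp(-beta * E_dn))
          spins[x][y] = +1 if random.random() < p_up else -1

  samples = []
  for n in range(2000):
      heat_bath_sweep()
      if n > 500:                                       # discard equilibration
          samples.append(abs(sum(map(sum, spins))) / (L * L))
  print("<|m|> ~", sum(samples) / len(samples))         # sizable below Tc ~ 2.27 J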

Monte Carlo

Sampling: estimate \(\langle x\rangle\) by randomly generating \(B\) configurations of the system. Let \(E[x]\) be the estimator of \(\langle x\rangle\). Drawing configurations \(\alpha_i\) and reweighting by the Boltzmann factor, \(E[x] = \bar x_B = \frac{\frac1B\sum_{i=1}^B x_{\alpha_i}\exp(-\beta E_{\alpha_i})}{\frac1B\sum_{i=1}^B\exp(-\beta E_{\alpha_i})}\). Unbiased (uniform) sampling means \(p_\alpha = \frac{1}{N_\alpha}\). Then the expected values of denominator and numerator are \(\frac1B\sum_{i=1}^B\sum_{n=1}^{N_\alpha}\frac{1}{N_\alpha}\exp(-\beta E_{\alpha_n}) = \frac BB\frac{1}{N_\alpha}Z\) and \(\frac1B\sum_{i=1}^B\sum_{n=1}^{N_\alpha}\frac{x_{\alpha_n}}{N_\alpha}\exp(-\beta E_{\alpha_n}) = \frac BB\frac{1}{N_\alpha}Z\langle x\rangle\). So for this unbiased sampling the estimator converges, \(E[x]\to\langle x\rangle\).

Alternatively, choose configuration \(\alpha_i\) with probability \(\propto\exp(-\beta E_{\alpha_i})\); then \(E[x] = \frac1B\sum_{i=1}^B x_{\alpha_i}\), and again \(E[x]\to\langle x\rangle\). This is an example of importance sampling.
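
A sketch contrasting the two estimators on a tiny system where the Boltzmann distribution can be sampled exactly (Python; an open 4-spin chain with \(k = J = 1\); all parameter values are chosen only for illustration):

  import itertools, math, random

  beta, J, n_spins, n_samples = 1.5, 1.0, 4, 20000
  configs = list(itertools.product((+1, -1), repeat=n_spins))
  E = [-J * sum(s[i] * s[i + 1] for i in range(n_spins - 1)) for s in configs]
  w = [math.exp(-beta * e) for e in E]
  exact = sum(e * wi for e, wi in zip(E, w)) / sum(w)   # exact <E>

  # (a) uniform sampling, reweighting each sample by its Boltzmann factor
  us = [random.randrange(len(configs)) for _ in range(n_samples)]
  est_uniform = (sum(E[i] * math.exp(-beta * E[i]) for i in us)
                 / sum(math.exp(-beta * E[i]) for i in us))

  # (b) importance sampling: draw configurations with Boltzmann probability
  bs = random.choices(range(len(configs)), weights=w, k=n_samples)
  est_importance = sum(E[i] for i in bs) / n_samples

  print(exact, est_uniform, est_importance)             # all close to each other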

Markov Chain

Definition: Sequence of configurations generated by a Markov step.

Definition of a Markov step: generates the next configuration based on the previous one only, not on any earlier ones. No memory effects.

We can describe a Markov step by the \(N_\alpha^2\) transition probabilities \(P(\beta\leftarrow\alpha) = P_{\beta\alpha}\). Conditions for a Markov chain to sample a desired probability distribution \(p_\alpha\):

  1. \(\sum_\beta P(\beta\leftarrow\alpha) = 1\), i.e. the system must go somewhere.
  2. Accessibility condition: from a given configuration you must be able to reach any other configuration in a finite number of steps.
  3. Detailed balance: \(p_\alpha P(\beta\leftarrow\alpha) = p_\beta P(\alpha\leftarrow\beta)\). So \(\frac{P(\beta\leftarrow\alpha)}{P(\alpha\leftarrow\beta)} = \frac{p_\beta}{p_\alpha}\). For a Boltzmann distribution (MB statistics) this ratio is \(\exp\left(-\frac{E_\beta - E_\alpha}{kT}\right)\).

Let \(\rho_\alpha(n)\) be the probability that the \(n\)-th element of the Markov chain is configuration \(\alpha\). We need to show that \(\rho_\alpha(n)\) converges to \(p_\alpha\) with increasing \(n\), and that once we are at convergence, \(\rho_\alpha(n + \delta n)\) remains equal to \(p_\alpha\).

Formulation in terms of vectors and matrices: \(\vec\rho(n) = \begin{pmatrix}\rho_1(n)\\ \rho_2(n)\\ \vdots\\ \rho_{N_\alpha}(n)\end{pmatrix}\), \(P = \big(P_{\beta\alpha}\big) = \begin{pmatrix}P(1\leftarrow1) & P(1\leftarrow2) & \cdots\\ P(2\leftarrow1) & \ddots & \\ \vdots & & \end{pmatrix}\).

Proving the second statement first: suppose \(\rho_\alpha(n) = p_\alpha\). Then \(\rho_{\alpha'}(n+1) = \sum_\alpha P(\alpha'\leftarrow\alpha)\,p_\alpha = \sum_\alpha P(\alpha\leftarrow\alpha')\,p_{\alpha'} = p_{\alpha'}\sum_\alpha P(\alpha\leftarrow\alpha') = p_{\alpha'}\), using detailed balance and then normalization.

Proving the first statement: suppose \(\rho_\alpha(n) \ne p_\alpha\). Let \(D_n \equiv \sum_\alpha|\rho_\alpha(n) - p_\alpha|\) be the norm we will work with, and compute \(D_{n+1}\). We want \(D_{n+1} \le D_n\). So,

\(D_{n+1} = \sum_{\alpha'}|\rho_{\alpha'}(n+1) - p_{\alpha'}| = \sum_{\alpha'}\Big|\sum_\alpha\rho_\alpha(n)P(\alpha'\leftarrow\alpha) - p_{\alpha'}\Big| = \sum_{\alpha'}\Big|\sum_\alpha\rho_\alpha(n)P(\alpha'\leftarrow\alpha) - \sum_\alpha P(\alpha\leftarrow\alpha')\,p_{\alpha'}\Big| = \sum_{\alpha'}\Big|\sum_\alpha\rho_\alpha(n)P(\alpha'\leftarrow\alpha) - \sum_\alpha P(\alpha'\leftarrow\alpha)\,p_\alpha\Big| = \sum_{\alpha'}\Big|\sum_\alpha\big(\rho_\alpha(n) - p_\alpha\big)P(\alpha'\leftarrow\alpha)\Big| \le \sum_{\alpha'}\sum_\alpha\big|\big(\rho_\alpha(n) - p_\alpha\big)P(\alpha'\leftarrow\alpha)\big| = \sum_{\alpha'}\sum_\alpha|\rho_\alpha(n) - p_\alpha|\,P(\alpha'\leftarrow\alpha) = \sum_\alpha|\rho_\alpha(n) - p_\alpha|\sum_{\alpha'}P(\alpha'\leftarrow\alpha) = \sum_\alpha|\rho_\alpha(n) - p_\alpha| = D_n\)
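
A small numerical illustration (Python, with numpy assumed available; five arbitrary energy levels, \(k = 1\)): build a transition matrix obeying detailed balance (Metropolis-style rates), verify detailed balance, and watch \(D_n\) decrease as \(\vec\rho(n)\) approaches the Boltzmann distribution \(p\):

  import math
  import numpy as np

  E = np.array([0.0, 0.3, 1.1, 2.0, 0.7])   # arbitrary example energies
  beta = 1.0
  p = np.exp(-beta * E); p /= p.sum()        # target Boltzmann distribution

  n = len(E)
  P = np.zeros((n, n))                       # column-stochastic: P[b, a] = P(b <- a)
  for a in range(n):
      for b in range(n):
          if b != a:
              P[b, a] = (1.0 / (n - 1)) * min(1.0, math.exp(-beta * (E[b] - E[a])))
      P[a, a] = 1.0 - P[:, a].sum()          # leftover probability: stay put

  assert np.allclose(P * p, (P * p).T)       # detailed balance: P(b<-a) p_a = P(a<-b) p_b

  rho = np.zeros(n); rho[3] = 1.0            # start far from equilibrium
  for step in range(10):
      print(step, np.abs(rho - p).sum())     # D_n: non-increasing
      rho = P @ rho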

Metropolis Monte Carlo

Situation: identical to heat-bath MC.

We can get the transition probabilities from the probability distributions of the initial and final states.

  1. Pick random spin
  2. Count how many neighbors are the same as the picked spin
  3. Make a list of the possible energy changes (below)

    Δn   ΔE    aligned neighbors after flip   aligned neighbors before flip
    -4   8J    0                              4
    -2   4J    1                              3
     0   0     2                              2
     2   -4J   3                              1
     4   -8J   4                              0

    Metropolis choice: if \(\Delta E \le 0\), flip the spin; otherwise flip it with probability \(\exp(-\beta\Delta E)\).

    Showing detailed balance: Case 1, the move lowers the energy, \(E_\beta \le E_\alpha\): then \(P(\beta\leftarrow\alpha) = 1\) and \(P(\alpha\leftarrow\beta) = \exp\left(-\frac{E_\alpha - E_\beta}{kT}\right)\), so \(\frac{P(\beta\leftarrow\alpha)}{P(\alpha\leftarrow\beta)} = \exp\left(-\frac{E_\beta - E_\alpha}{kT}\right) = \frac{p_\beta}{p_\alpha}\). Case 2, the move raises the energy, follows by exchanging the roles of \(\alpha\) and \(\beta\).

    Note: if you want to simulate a high-temperature system, it is better to use another algorithm (see Glauber dynamics), since at high temperature \(\exp(-\beta\Delta E)\to1\) and Metropolis ends up flipping essentially every randomly chosen spin.
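
A minimal Metropolis sketch for the same setup as the heat-bath example (Python; \(k = 1\), \(J = 1\), \(B = 0\), with lattice size, temperature, and sweep counts chosen arbitrarily):

  import math, random

  L, J, T = 10, 1.0, 2.0
  beta = 1.0 / T
  spins = [[random.choice((+1, -1)) for _ in range(L)] for _ in range(L)]

  def metropolis_sweep():
      for _ in range(L * L):
          x, y = random.randrange(L), random.randrange(L)
          nn = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y] +
                spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
          dE = 2 * J * spins[x][y] * nn      # energy change if this spin is flipped
          if dE <= 0 or random.random() < math.exp(-beta * dE):
              spins[x][y] *= -1

  mags = []
  for n in range(2000):
      metropolis_sweep()
      if n > 500:                            # discard equilibration
          mags.append(abs(sum(map(sum, spins))) / (L * L))
  print("<|m|> ~", sum(mags) / len(mags))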

\(\rho_\beta(n+1) = \sum_\alpha P(\beta\leftarrow\alpha)\rho_\alpha(n) = \sum_\alpha P_{\beta\alpha}\rho_\alpha(n)\). So \(\vec\rho(n+1) = P\vec\rho(n)\), hence at steady state \(\vec\rho^* = P\vec\rho^*\), with \(P\) now a matrix and \(\vec\rho^*\) an eigenvector with eigenvalue 1. Theorem (from conditions 1+2): at least one eigenvalue \(\lambda\) equals 1, and the eigenvectors of all other eigenvalues are orthogonal to it, \(\vec\rho_\lambda\cdot\vec\rho_{\lambda'} = 0\) for \(\lambda\ne\lambda'\). Theorem (from condition 3): \(\lambda = 1\) is the largest eigenvalue and all others have \(|\lambda| < 1\). Theorem (4): ergodic systems have only one eigenvalue \(\lambda = 1\).

\(\vec\rho(n) = P\vec\rho(n-1) = P^n\vec\rho(0)\). Expanding \(\vec\rho(0)\) around \(\vec\rho^*\) in eigenvectors of \(P\), \(\vec\rho(0) = a_1\vec\rho^* + \sum_{|\lambda|<1}a_\lambda\vec\rho_\lambda\), so \(\vec\rho(n) = a_1\vec\rho^* + \sum_{|\lambda|<1}a_\lambda\lambda^n\vec\rho_\lambda \to a_1\vec\rho^*\) as \(n\to\infty\) (normalization fixes \(a_1 = 1\)); the contributions of all other eigenvectors decay geometrically.

Markovian System that does not obey Detailed Balance

COMPUTATIONAL PROBLEM.

Assume we have 1000 bacteria, 500 green and 500 red, indistinguishable to the predator. Every hour:

  • Every bacterium divides
  • A colorblind predator eats exactly 1000 bacteria, chosen at random

1001 possible states (0-1000 of one color).

Stationary state: with probability 1/2 there are 0 of one color and with probability 1/2 there are 1000 of that color, \(p_{\mathrm{st}} = \frac12 R_0 + \frac12 R_{1000}\), where \(R_k\) denotes the state with \(k\) bacteria of that color.
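
A simulation sketch of this chain (Python, with numpy assumed available). Each hour all bacteria divide, and the predator's catch of 1000 out of 2000 is a draw without replacement (hypergeometric), so the number of greens performs an unbiased random walk until it is absorbed at 0 or 1000:

  import numpy as np

  rng = np.random.default_rng(0)

  def final_green_count(n_green=500, n_total=1000):
      g = n_green
      while 0 < g < n_total:
          doubled_green = 2 * g
          doubled_red = 2 * (n_total - g)
          # greens among the 1000 eaten: hypergeometric (sampling without replacement)
          eaten_green = rng.hypergeometric(doubled_green, doubled_red, n_total)
          g = doubled_green - eaten_green
      return g

  results = [final_green_count() for _ in range(20)]
  print(results)    # each run ends at 0 or 1000; by symmetry, roughly half and half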

Author: Christian Cunningham

Created: 2024-05-30 Thu 21:18
