Statistical Mechanics

Probability

Probability Density

Random Walks

Microcanonical Ensemble

Large Numbers

Intensive: \(O(N^0) = O(1)\).

Extensive: \(O(N^1) = O(N)\). Note: Ext/Ext = Int, Int × Ext = Ext.

Exponentially Large: \(O(e^N)\).

How many microstates are there in a simple system? Consider sites with 2 states each on a 4×4 grid: \(N = N_s^2 = 4^2 = 16\).

Ising model: \(s_i \in \{-1, +1\}\).

In the 4×4 grid, we have \(2^{16} = 65536\) states. An exaflop is \(10^{18}\) floating point operations per second, which is roughly \(2^{60}\).

Estimate: 1000 exaflops of total compute in the world, a factor of \(2^{10}\).

A year is about 30 million seconds, \(3\times 10^7\), roughly \(2^{25}\). Then \(2^{60}\cdot 2^{10}\cdot 2^{25} = 2^{95}\) operations per year, so a system of about 95 sites can be enumerated in a year. If compute doubles in a year, we can calculate one more site.
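As a sanity check on this arithmetic, a minimal sketch (assuming the round numbers above: \(10^{18}\) flops per exaflop, 1000 exaflops, \(3\times 10^7\) seconds per year, and one operation per microstate):

#+begin_src python
import math

flops_per_exaflop = 10**18        # ~2^60 operations per second
world_exaflops = 1000             # ~2^10, rough estimate
seconds_per_year = 3 * 10**7      # ~2^25

ops_per_year = flops_per_exaflop * world_exaflops * seconds_per_year
max_sites = math.floor(math.log2(ops_per_year))  # 2^N states needs ~2^N ops
print(max_sites)  # 94, consistent with the ~2^95 power-of-two estimate above
#+end_src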

Roughly \(10^{80} \approx 2^{266}\) atoms in the universe.

\(S = \sum_\alpha \exp(-E_\alpha/kT)\), with \(E_\alpha = \sum_{i=1}^N \epsilon_i\).

\(S = \sum_{\alpha=1}^{N_P} Y_\alpha\), with \(0 \le Y_\alpha \le O(\exp(N\phi_\alpha))\). We can choose the lower bound to be any number, but zero is typically most convenient. Then \(0 \le Y_\alpha \le y_{\max}\).

So, \(y_{\max} \le S \le N_P\, y_{\max}\).

Consider \(\frac{\ln S}{N}\). Then \(\frac{\ln y_{\max}}{N} \le \frac{\ln S}{N} \le \frac{\ln N_P}{N} + \frac{\ln y_{\max}}{N} \sim \frac{p\ln N}{N} + \frac{\ln y_{\max}}{N}\) for \(N_P = O(N^p)\). As \(N\to\infty\), \(\ln N/N \to 0\), so \(\frac{\ln S}{N} \to \frac{\ln y_{\max}}{N}\), hence \(\ln S \approx \ln y_{\max}\).

Integrals

\(I = \int dx\, \exp(N\phi(x))\). We approximate these by the maximum value of the integrand, \(\exp(N\phi(x_{\max}))\), attained at \(x = x_{\max}\).

Taylor expanding around this point, \(I \approx \int dx\, \exp\!\left(N\left(\phi(x_{\max}) + \tfrac12\phi''(x_{\max})(x - x_{\max})^2 + \cdots\right)\right)\), with \(\phi'(x_{\max}) = 0\) and \(\phi''(x_{\max}) < 0\) since \(x_{\max}\) is a maximum.

\(I \approx \exp(N\phi(x_{\max}))\int dx\, \exp\!\left(-\tfrac{N}{2}|\phi''(x_{\max})|(x - x_{\max})^2\right) = \sqrt{\frac{2\pi}{N|\phi''(x_{\max})|}}\,\exp(N\phi(x_{\max}))\).

2 major corrections are needed:

  • Higher order terms: powers of 1/N
  • Other local maxima: a secondary maximum \(x'_{\max}\) will have a different size. (We assumed a single maximum, but there may be others, higher or lower.) The relative size of a lower maximum goes to zero as \(N\to\infty\): \(\frac{\exp(N\phi(x'))}{\exp(N\phi(x))} = \exp(-N(\phi(x) - \phi(x')))\) has a positive inner argument and an overall minus sign.

Write \(\phi(x) = \ln x - \frac{x}{N}\). Then \(\int_0^\infty \exp(N\phi(x))\,dx = \int_0^\infty x^N \exp(-x)\,dx = \Gamma(N+1)\). \(\phi'(x) = \frac1x - \frac1N\), so the maximum is at \(x = N\), with \(\phi(x_{\max}) = \ln N - 1\). \(\phi''(x) = -\frac{1}{x^2} \Rightarrow \phi''(x_{\max}) = -\frac{1}{N^2}\). So, \(N! = \int_0^\infty x^N \exp(-x)\,dx \approx \exp(N\phi(x_{\max}))\int dx\,\exp\!\left(-\tfrac N2|\phi''(x_{\max})|(x - x_{\max})^2\right) = N^N e^{-N}\sqrt{2\pi N}\).
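A quick numerical check of this Laplace-method result, comparing \(\ln N!\) from the exact gamma function against \(N\ln N - N + \tfrac12\ln(2\pi N)\); the residual shrinks like \(1/(12N)\), the first of the \(1/N\) corrections mentioned above:

#+begin_src python
import math

for N in (5, 10, 50, 100):
    exact = math.lgamma(N + 1)                    # ln N! exactly
    laplace = N*math.log(N) - N + 0.5*math.log(2*math.pi*N)
    print(N, exact - laplace, 1/(12*N))           # residual ~ 1/(12N)
#+end_src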

Gamma Function

\(\int_0^\infty dx\, x^n \exp(-x) = \Gamma(n+1) = n!\).

Partition Function

\(Z = \sum_\alpha \exp(-\beta E_\alpha)\). \(p_\alpha = \frac{\exp(-E_\alpha/kT)}{Z}\).

\(\langle E\rangle = \sum_\alpha p_\alpha E_\alpha = -\frac1Z\frac{\partial Z}{\partial\beta} = -\frac{\partial\ln Z}{\partial\beta}\).

\(\frac{\partial\langle E\rangle}{\partial T} = \frac{\partial\langle E\rangle}{\partial\beta}\frac{\partial\beta}{\partial T}\).

Cf. \(C_V = \left(\frac{\partial U}{\partial T}\right)_{V,N}\).

\(C_V = f(\langle E\rangle, \langle E^2\rangle) = \gamma(\langle E^2\rangle - \langle E\rangle^2)\), where (as derived below) \(\gamma = \frac{1}{k_B T^2}\).
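As a concrete illustration, a minimal sketch for a two-level system with splitting \(\epsilon\) (an assumed toy spectrum, with \(k_B = 1\)), computing \(Z\), \(\langle E\rangle\), and \(C_V\) from the fluctuation formula:

#+begin_src python
import numpy as np

eps = 1.0                          # assumed two-level splitting, k_B = 1
T = np.linspace(0.05, 5, 200)
beta = 1.0 / T

E = np.array([0.0, eps])
w = np.exp(-np.outer(beta, E))     # Boltzmann weights, shape (T, states)
Z = w.sum(axis=1)                  # partition function at each T
p = w / Z[:, None]                 # p_alpha = exp(-beta E_alpha)/Z

E_avg = (p * E).sum(axis=1)
E2_avg = (p * E**2).sum(axis=1)
C_V = (E2_avg - E_avg**2) / T**2   # gamma = 1/(k_B T^2), k_B = 1
print(T[np.argmax(C_V)])           # Schottky peak near T ~ 0.42*eps
#+end_src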

Equations of State

Disorder

Entropy of Mixing

Consider a system with two parts of equal volume \(V\), total volume \(2V\), and \(N\) total particles. Put the particles equally distributed in both regions, \(N/2\) each. Color the left white and the right black. The entropy for the right or left atoms is then \(S_{W,B} = k_B\ln\frac{V^{N/2}}{(N/2)!}\) (keeping only the configurational part; the momentum factor, shown below, is unchanged by mixing and cancels).

\(\Omega(E) = \left(\frac{V^N}{N!}\right)\left(\frac{(2\pi m E)^{3N/2}}{(3N/2)!\, h^{3N}}\right)\) = (configuration space) × (momentum space).

\(S_{\text{unmixed}} = 2S_{W,B} = 2k_B\ln\frac{V^{N/2}}{(N/2)!}\), \(S_{\text{mixed}} = 2\left(k_B\ln\frac{(2V)^{N/2}}{(N/2)!}\right)\). \(\Delta S_{\text{mixing}} = S_{\text{mixed}} - S_{\text{unmixed}} = 2k_B\ln\frac{(2V)^{N/2}}{V^{N/2}} = N k_B\ln 2\).

Gibbs Paradox

If the particles were both black: \(S_{\text{unmixed}} = 2S_{W,B} = 2k_B\ln\frac{V^{N/2}}{(N/2)!}\), \(S_{\text{mixed}} = k_B\ln\frac{(2V)^N}{N!}\). \(\Delta S_{\text{mixing}} = S_{\text{mixed}} - S_{\text{unmixed}} = k_B\ln\frac{2^N\left((N/2)!\right)^2}{N!}\). Using the Stirling approximation, \(\Delta S_{\text{mixing}}/k_B = N\ln 2 + 2\left(\frac N2\ln\frac N2 - \frac N2\right) - (N\ln N - N) = N\ln 2 + N\ln N - N\ln 2 - N - N\ln N + N = 0\).
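The cancellation can be checked with exact factorials (via lgamma). The leading extensive terms cancel as above; the exact \(\Delta S_{\text{mixing}}/k_B\) grows only like \(\tfrac12\ln(\pi N/2)\), which is subextensive, so the mixing entropy per particle for identical particles vanishes:

#+begin_src python
import math

def dS_over_kB(N):
    # exact: ln( 2^N ((N/2)!)^2 / N! ), N even
    return N*math.log(2) + 2*math.lgamma(N//2 + 1) - math.lgamma(N + 1)

for N in (10, 100, 10000, 10**6):
    print(N, dS_over_kB(N), dS_over_kB(N)/N)  # total ~ 0.5*ln(pi*N/2), per particle -> 0
#+end_src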

Materials

Amorphous Solids

Glasses

Solids are glasses if they have a glass transition.

If you heat up to a temperature \(T\) and then cool slowly, you pass through the liquid-crystal phase transition at \(T_{\text{melt}}\) as you cool. If instead you cool quickly to a temperature below \(T_{\text{glass}}\) (\(T_{\text{glass}} < T_{\text{melt}}\)), you get a glass. The viscosity changes abruptly. Examples: covalent network glasses such as \(\text{SiO}_2\); polymer glasses.

In order to reset a glass, you have to go back above \(T_{\text{melt}}\) and recool below \(T_g\).

You get roughly a unit of entropy (\(k_B\)) per molecular unit of material.

Theoretical glass models:

  • Spin glasses. Consider a lattice of sites with spins ±1 and some random interaction \(J s_i s_j\). If \(J\) is positive, then you need an even number of antiferromagnetic bonds around each loop to avoid frustration; e.g., on a hexagonal lattice you get frustration. Liquid \(\xrightarrow{\text{slow}}\) Crystal; Liquid \(\xrightarrow{\text{fast}}\) Glass. The residual entropy is \(S_{\text{res}} = S(T) - \int \frac{1}{T'}\frac{dQ}{dt}\,dt = S(T) - \int_0^T \frac{1}{T'}\frac{dQ}{dT'}\,dT' \approx k_B \times \#\text{molecular units}\). Remark: \(\frac{dQ}{dT}\) is the heat capacity.
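A minimal sketch of the frustration criterion with hypothetical random bonds: a loop is frustrated when the product of the bond signs around it is negative, so no spin assignment satisfies every bond simultaneously:

#+begin_src python
import random

def frustrated(couplings):
    """A loop is frustrated if the product of its bond signs is negative."""
    prod = 1
    for J in couplings:
        prod *= (1 if J > 0 else -1)
    return prod < 0

random.seed(0)
square = [random.choice([-1, 1]) for _ in range(4)]  # random +/-J bonds on a square
triangle = [-1, -1, -1]                              # uniform antiferromagnetic triangle
print(frustrated(square), frustrated(triangle))      # triangle is always frustrated
#+end_src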

(Figure: double-well potential with barrier \(V_i\) and asymmetry \(\delta_i\) between the two minima.)

Assume \(V_i \gg \delta_i\). If kinetics is important, the two minima will be occupied roughly equally, \(\Delta S_i = k_B\ln 2\) per unit. [Then when you cool from the high-temperature state (assuming \(V_i \gg \delta_i\)), some of the 2 states will remain occupied, since they cannot traverse the barrier to reach the lower state.]

The energy stored in the higher state by accident is \(\delta_i\) per occupied unit (about 50% of them), with \(\Delta S_i \approx \frac{\Delta Q_i}{T}\). Note, we only have two temperature scales, \(T_1 \sim \frac{V_i}{k_B}\) and \(T_2 \sim \frac{\delta_i}{k_B} \ll \frac{V_i}{k_B}\). In our case \(T = T_2\), which we can call \(T_{\text{freeze}}\). So \(\frac{\delta_i}{k_B} \lesssim T_{\text{freeze}} \lesssim \frac{V_i}{k_B}\).

The time scale for this hopping is \(\tau \sim 10^{-12}\) s.

Lennard-Jones potential, annealing MC, 39 particles. Naive numerics do not give good results; one can only solve the system classically and then simulate the dynamics to get reasonable results.
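A minimal annealing-MC sketch in the same spirit, using a hypothetical 7-particle cluster in 2D rather than the 39-particle 3D case, with \(\epsilon = \sigma = 1\):

#+begin_src python
import math, random

def lj(r2):                        # Lennard-Jones pair energy, eps = sigma = 1
    inv6 = 1.0 / r2**3
    return 4.0 * (inv6*inv6 - inv6)

def energy(xs):
    E = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            dx = xs[i][0] - xs[j][0]; dy = xs[i][1] - xs[j][1]
            E += lj(dx*dx + dy*dy)
    return E

random.seed(1)
n = 7                              # small cluster, assumed for speed
xs = [(2*random.random(), 2*random.random()) for _ in range(n)]
E, T = energy(xs), 1.0
for step in range(20000):
    T = max(1e-3, T * 0.9997)      # slow exponential cooling schedule
    i = random.randrange(n)
    old = xs[i]
    xs[i] = (old[0] + 0.1*(random.random()-0.5),
             old[1] + 0.1*(random.random()-0.5))
    Enew = energy(xs)
    if Enew > E and random.random() > math.exp(-(Enew - E)/T):
        xs[i] = old                # Metropolis reject: restore the move
    else:
        E = Enew                   # accept
print(E)                           # a low-energy cluster if cooled slowly enough
#+end_src

Cooling too fast leaves the configuration stuck in a higher local minimum, which is exactly the glass phenomenology described above.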

Information Theory

Initial Remarks

If you cannot obtain or use the information, then the entropy of the system does not change. (The information is to be used in the next process.)

Entropy measures the information you can gain about the system. It is external to the system, but it is a state variable.

Lecture

\(S_{\text{discrete}} = -k_B\langle\ln p_i\rangle = -k_B\sum_i p_i\ln p_i = -k_B\sum_i \frac1W\ln\frac1W = k_B\ln W\). In the continuum, \(S = -k_B\int_{E<\mathcal{H}(\mathbb{P},\mathbb{Q})<E+\delta E}\frac{d\mathbb{P}\,d\mathbb{Q}}{\delta_N}\,\rho(\mathbb{P},\mathbb{Q})\ln\rho(\mathbb{P},\mathbb{Q}) = -k_B\operatorname{Tr}(\hat\rho\ln\hat\rho)\), with \(\hat\rho\) the density matrix. Some texts call \(-\ln p\) the surprisal function; entropy is then the ensemble average of 'surprise'.
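A small numeric sketch of the discrete formula (with \(k_B = 1\)): the uniform distribution over \(W\) states gives \(S = \ln W\), and any more peaked distribution gives less:

#+begin_src python
import numpy as np

def S(p, kB=1.0):                       # S = -kB * sum p ln p, with 0 ln 0 := 0
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -kB * np.sum(nz * np.log(nz))

W = 8
print(S(np.full(W, 1/W)), np.log(W))    # uniform: S = ln W
print(S([0.97, 0.01, 0.01, 0.01]))      # peaked: much less average surprise
#+end_src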

Examples

Box: 3 balls: 1 red, 2 green.

For the first draw: \(p(R) = \frac13\), \(p(G) = \frac23\).

If we put the ball back, we get independent draws. For the second draw (without putting back): \(p(R_2) = p(R_1)p(R_2|R_1) + p(G_1)p(R_2|G_1) = \frac13\cdot 0 + \frac23\cdot\frac12 = \frac13\), and \(p(G_2) = p(R_1)p(G_2|R_1) + p(G_1)p(G_2|G_1) = \frac13(1) + \frac23\cdot\frac12 = \frac23\).

For the third draw (without putting back): \(p(R_3) = p(G_1 G_2)p(R_3|G_1 G_2) = p(G_1)p(G_2|G_1)p(R_3|G_1 G_2) = \frac23\cdot\frac12\cdot 1 = \frac13\). \(p(G_3) = 1 - p(R_3) = \frac23\).
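A quick Monte Carlo check that, without replacement, each draw individually has \(p(R) = 1/3\):

#+begin_src python
import random

random.seed(0)
trials = 100000
counts = [0, 0, 0]
for _ in range(trials):
    box = ['R', 'G', 'G']
    random.shuffle(box)             # a shuffle = drawing all 3 without replacement
    for draw in range(3):
        if box[draw] == 'R':
            counts[draw] += 1
print([c/trials for c in counts])   # each entry ~ 1/3
#+end_src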

Measure of Surprise

What is information…

You won the lottery! -> Huge surprise, because it is highly unlikely: \(p\to 0\), so the surprise \(\to\infty\).

You didn't win the lottery! -> 'Almost' no surprise, since it is highly likely: \(p\to 1\), so the surprise \(\to 0\).

Note, \(-\ln p\) has this behavior.

Information

Notation: outcomes \(A_k\), \(k = 1,\dots,\Omega\); \(p_k = p(A_k)\); \(\sum_k p_k = 1\); \(B_\ell\), \(\ell = 1,\dots,M\); \(q_\ell = p(B_\ell)\); \(\sum_\ell q_\ell = 1\). For instance, the \(A_k\) could be the possible locations of your keys, and the \(B_\ell\) the locations where you last saw the keys.

  1. Information is maximal if all the outcomes are equally likely: \(S_I\!\left(\frac1\Omega,\dots,\frac1\Omega\right) \ge S_I(p_1,\dots,p_\Omega)\), with equality only if \(p_i = \frac1\Omega\ \forall i\).
  2. Information does not change if we add outcomes \(A_j\) with \(p_j = 0\): \(S_I(p_1,\dots,p_{\Omega-1}) = S_I(p_1,\dots,p_{\Omega-1},0)\).
  3. If we obtain (measure) partial information about a joint outcome \(C = A\cap B\) by measuring \(B\), then \(\langle S_I(A|B_\ell)\rangle_B = S_I(C) - S_I(B)\), where \(\langle S_I(A|B_\ell)\rangle_B = \sum_{\ell=1}^M S_I(A|B_\ell)\,q_\ell\). Note \(C_{k\ell} = A_k\cap B_\ell\), \(r_{k\ell} = p(C_{k\ell})\). We use the average since we don't know the answer a priori, so we must consider all of the possibilities. Once we have a specific \(B_\ell\), that \(q_\ell = 1\) and the rest are zero, so this still holds.

\(c_{k\ell} = P(A_k|B_\ell) = \frac{P(B_\ell|A_k)P(A_k)}{P(B_\ell)} = \frac{P(A_k\cap B_\ell)}{p(B_\ell)} = \frac{r_{k\ell}}{q_\ell}\).

\(\sum_k c_{k\ell} = \sum_k P(A_k|B_\ell) = 1\).

Before you talk to your roommate, \(S_I(A\cap B) = S_I(r_{11}, r_{12},\dots,r_{1M}, r_{21},\dots,r_{\Omega M}) = S_I(c_{11}q_1, c_{12}q_2,\dots,c_{\Omega M}q_M)\).

After you talk to your roommate, \(S_I(A|B_\ell)\). The expected change in \(S_I\) upon measuring is given by rule 3: \(\langle S_I(A|B_\ell)\rangle_B = S_I(C) - S_I(B) = \sum_{\ell=1}^{M} S_I(A|B_\ell)\,q_\ell\).

Proving 3. \(S_I(C) = S_I(A\cap B) = -k_S\sum_{k\ell} r_{k\ell}\ln r_{k\ell} = -k_S\sum_{k\ell} c_{k\ell}q_\ell\ln(c_{k\ell}q_\ell) = -k_S\sum_\ell\left[\sum_k c_{k\ell}q_\ell\ln c_{k\ell} + \sum_k c_{k\ell}q_\ell\ln q_\ell\right] = \sum_\ell q_\ell\left(-k_S\sum_k c_{k\ell}\ln c_{k\ell}\right) - k_S\sum_\ell\left[q_\ell\ln q_\ell\sum_k c_{k\ell}\right] = \sum_\ell q_\ell S_I(A|B_\ell) + S_I(B) = \langle S_I(A|B_\ell)\rangle_B + S_I(B)\).
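The identity \(S_I(C) = S_I(B) + \langle S_I(A|B_\ell)\rangle_B\) can be checked numerically on a random joint distribution (a sketch with \(k_S = 1\) and arbitrary sizes \(\Omega = 4\), \(M = 3\)):

#+begin_src python
import numpy as np

def S(p):                                # k_S = 1; assumes all entries > 0
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
r = rng.random((4, 3)); r /= r.sum()     # joint p(A_k, B_l)
q = r.sum(axis=0)                        # marginal p(B_l)
c = r / q                                # conditional p(A_k | B_l); columns sum to 1

lhs = S(r.flatten())
rhs = S(q) + sum(q[l] * S(c[:, l]) for l in range(3))
print(lhs, rhs)                          # equal up to rounding
#+end_src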

Proving 2. \(S_I = -k_S\sum_k^{\Omega} p_k\ln p_k = -k_S\sum_k^{\Omega-1} p_k\ln p_k - k_S\lim_{p\to 0} p\ln p = S_I^{(\Omega-1)}\), since \(\lim_{p\to 0} p\ln p = 0\).

Proving 1. Let \(f(p) = -p\ln p\); this is the information for one outcome. \(\frac{df}{dp} = -\ln p - 1\), and \(\frac{d^2f}{dp^2} = -\frac1p < 0\) for \(p\in(0,1)\). Thus, \(f(p)\) is concave.

For any points \(a, b\), consider the weighted average \(\lambda a + (1-\lambda)b\) with \(\lambda\in[0,1]\). Then, by concavity,

\((\ast)\quad f(\lambda a + (1-\lambda)b) \ge \lambda f(a) + (1-\lambda)f(b)\).

Jensen's inequality: \(f\!\left(\frac1\Omega\sum_k p_k\right) \ge \frac1\Omega\sum_k f(p_k)\).

Consider \(\Omega = 2\). In \((\ast)\) choose \(\lambda = \frac12\), \(a = p_1\), \(b = p_2\). Then \(f\!\left(\frac{p_1+p_2}{2}\right) \ge \frac12\left(f(p_1) + f(p_2)\right)\).

For a general \(\Omega\), choose \(\lambda = \frac{\Omega-1}{\Omega}\), \(a = \frac{1}{\Omega-1}\sum_k^{\Omega-1} p_k\), and \(b = p_\Omega\). Then, assuming Jensen's inequality holds for \(\Omega-1\) (induction hypothesis), \(f\!\left(\frac1\Omega\sum_k p_k\right) = f\!\left(\frac{\Omega-1}{\Omega}\sum_k^{\Omega-1}\frac{p_k}{\Omega-1} + \frac1\Omega p_\Omega\right) \ge \frac{\Omega-1}{\Omega}f\!\left(\sum_k^{\Omega-1}\frac{p_k}{\Omega-1}\right) + \frac1\Omega f(p_\Omega) \ge \frac1\Omega\sum_k^{\Omega-1} f(p_k) + \frac1\Omega f(p_\Omega) = \frac1\Omega\sum_k f(p_k)\), which is the result we want.

So, \(S_I(p_1,\dots,p_\Omega) = -k_S\sum_k p_k\ln p_k = k_S\sum_k f(p_k) = k_S\,\Omega\,\frac1\Omega\sum_k f(p_k) \le k_S\,\Omega\, f\!\left(\frac1\Omega\sum_k p_k\right) = k_S\,\Omega\, f\!\left(\frac1\Omega\right) = -k_S\,\Omega\,\frac1\Omega\ln\frac1\Omega = -k_S\ln\frac1\Omega = S_I\!\left(\frac1\Omega,\dots,\frac1\Omega\right)\).

Conditional Probability

Definition: p(B|A) is probability of event B given event A was observed.

Joint

\(p(A\cap B) \equiv p(AB)\).

Bayes Rule

\(p(AB) = p(B|A)\,p(A) = p(A|B)\,p(B)\).

Independent

For independent events, \(p(B|A) = p(B)\) and \(p(A|B) = p(A)\), so \(p(AB) = p(A)p(B)\).

Beyond the Microcanonical Ensemble

Information Theory Approach

The maximize-entropy, minimize-bias principle.

Maximize \(S = -k_B\sum_\alpha p_\alpha\ln p_\alpha\) subject to constraints, using Lagrange multipliers.

Constraints:

  1. \(\sum_\alpha p_\alpha = 1\)
  2. \(\langle E\rangle = \sum_\alpha p_\alpha E_\alpha \equiv E \equiv U\)

If you impose just (1), then you get \(p_\alpha = \frac1N\ \forall\alpha\), the uniform (microcanonical) distribution.

If you impose both (1+2), then \(\mathcal{L} = -k_B\sum_\alpha p_\alpha\ln p_\alpha + \lambda_1 k_B\left(1 - \sum_\alpha p_\alpha\right) + \lambda_2 k_B\left(U - \sum_\alpha p_\alpha E_\alpha\right)\).

\(\frac{\partial\mathcal{L}}{\partial p_\alpha} = -k_B(\ln p_\alpha + 1) - \lambda_1 k_B - \lambda_2 k_B E_\alpha = 0 \;\Rightarrow\; p_\alpha \propto \exp(-\lambda_2 E_\alpha)\).

How does the temperature enter?

\(\lambda_2 = \beta = \frac{1}{k_B T}\).
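A minimal numeric sketch of this construction: given an assumed toy spectrum and a target mean energy \(U\), bisect on \(\beta = \lambda_2\) so that the Gibbs distribution satisfies the constraint \(\langle E\rangle = U\):

#+begin_src python
import numpy as np

E = np.array([0.0, 1.0, 2.0, 5.0])   # assumed toy spectrum
U = 1.2                              # target mean energy (constraint 2)

def avgE(beta):
    w = np.exp(-beta * E)
    return (w @ E) / w.sum()         # <E> under the Gibbs distribution

lo, hi = -50.0, 50.0                 # avgE(beta) is monotonically decreasing
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if avgE(mid) > U else (lo, mid)
beta = 0.5*(lo + hi)
p = np.exp(-beta*E); p /= p.sum()    # maximum-entropy distribution
print(beta, p, p @ E)                # p is Boltzmann, with <E> = U
#+end_src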

From the Microcanonical Ensemble

Recall the partitioned system: subsystem 1 with microstates \(s_1\) and energy \(E_1\), and subsystem 2 with energy \(E - E_1\), where the probability of a particular microstate \(s_1\) is \(\rho(s_1) = \frac{\Omega_2(E - E_1)}{\Omega(E)}\).

\(\rho(E_1) = \frac{\Omega_1(E_1)\,\Omega_2(E - E_1)}{\Omega(E)}\). Maximizing \(\rho(E_1)\) leads to \(\frac{1}{\Omega_1}\left(\frac{d\Omega_1}{dE_1}\right)_{E_1} = \frac{1}{\Omega_2}\left(\frac{d\Omega_2}{dE_2}\right)_{E-E_1}\).

Define \(S\): \(\left(\frac{dS_1}{dE_1}\right)_{E_1, V_1, N_1} = \left(\frac{dS_2}{dE_2}\right)_{E-E_1, V_2, N_2}\).

Define \(T\): \(\frac{1}{T_1} = \frac{1}{T_2}\).

If we consider 1 as the system and 2 as the heat bath/environment, then \(\rho(s) = \frac{\Omega_2(E - E_s)}{\Omega(E)} = \frac{1}{\Omega(E)}\exp\!\left(\frac{S_2(E - E_s)}{k_B}\right)\).

Comparing two states, \(s_A, E_A\) and \(s_B, E_B\): assume \(E_B > E_A\), but this is not required.

\(\frac{\rho(s_B)}{\rho(s_A)} = \frac{\Omega_2(E - E_B)}{\Omega_2(E - E_A)} = \exp\!\left(\frac{S_2(E - E_B) - S_2(E - E_A)}{k_B}\right)\).

Note, \(S_2(E - E_B) - S_2(E - E_A) \approx (E_A - E_B)\left(\frac{\partial S}{\partial E}\right)\).

This comes from \(\Delta S_2 = \frac{\Delta E_2}{T}\). So \(\Delta E_{\text{sys}} = \Delta E_1 = E_B - E_A\), and \(\Delta E_{\text{env}} = -\Delta E_{\text{sys}} = E_A - E_B\).

Then, the probability of being in a particular state is \(\rho(s) \propto \exp\!\left(-\frac{E_s}{k_B T}\right) = \exp(-\beta E_s)\).

Normalization: \(\sum_\alpha \rho(s_\alpha) = 1 \;\Rightarrow\; \rho(s) = \frac{\exp(-\beta E_s)}{\sum_\alpha \exp(-\beta E_\alpha)} = \frac1Z\exp(-\beta E_s)\).

\(Z = \sum_\alpha \exp(-\beta E_\alpha) = \int\frac{d\mathbb{P}_1\,d\mathbb{Q}_1}{h^{3N_1}}\exp\!\left(-\frac{\mathcal{H}_1(\mathbb{P}_1,\mathbb{Q}_1)}{kT}\right) = \int dE_1\,\Omega_1(E_1)\exp\!\left(-\frac{E_1}{kT}\right)\).

Internal energy: \(\langle E\rangle = \sum_\alpha p_\alpha E_\alpha = \frac1Z\sum_\alpha E_\alpha\exp(-\beta E_\alpha) = -\frac{\partial\ln Z}{\partial\beta}\).

Heat capacity at constant volume: \(c_V = \left(\frac{\partial U}{\partial T}\right)_{V,N} = \frac{\partial\langle E\rangle}{\partial T} = \frac{\partial\langle E\rangle}{\partial\beta}\frac{\partial\beta}{\partial T} = -\frac{1}{k_B T^2}\frac{\partial\langle E\rangle}{\partial\beta} = \frac{1}{k_B T^2}\frac{\partial^2\ln Z}{\partial\beta^2}\). Also \(c_V = T\left(\frac{\partial S}{\partial T}\right)_V\).

\(\frac{\partial\langle E\rangle}{\partial\beta} = \frac{\partial}{\partial\beta}\frac{\sum_\alpha E_\alpha \exp(-\beta E_\alpha)}{\sum_\alpha \exp(-\beta E_\alpha)} = \frac{1}{Z^2}\left(\sum_\alpha E_\alpha \exp(-\beta E_\alpha)\right)\left(\sum_\alpha E_\alpha \exp(-\beta E_\alpha)\right) - \frac1Z\sum_\alpha E_\alpha^2\exp(-\beta E_\alpha) = \langle E\rangle^2 - \langle E^2\rangle = -\langle(E - \langle E\rangle)^2\rangle = -(\langle E^2\rangle - \langle E\rangle^2)\).

The total heat capacity is then \(C_V = N\tilde{c}_V = \frac{1}{k_B T^2}\left(\langle E^2\rangle - \langle E\rangle^2\right) = \frac{\sigma_E^2}{k_B T^2}\).
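A quick check of the fluctuation formula by direct sampling: \(N\) independent two-level units (an assumed toy model, \(k_B = 1\)), comparing \(\sigma_E^2/T^2\) from samples against the exact Schottky heat capacity:

#+begin_src python
import numpy as np

rng = np.random.default_rng(1)
eps, T, N = 1.0, 0.5, 1000                    # k_B = 1; N independent two-level units
p1 = np.exp(-eps/T) / (1 + np.exp(-eps/T))    # P(unit excited)

samples = rng.random((20000, N)) < p1         # direct sampling of microstates
E = eps * samples.sum(axis=1)                 # total energy of each sample
C_fluct = E.var() / T**2                      # C_V = (<E^2> - <E>^2)/(k_B T^2)

x = eps / T                                   # exact Schottky result per unit
C_exact = N * x*x * np.exp(x) / (1 + np.exp(x))**2
print(C_fluct, C_exact)                       # agree to sampling error
#+end_src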

The energy fluctuation per particle is \(\frac{\sigma_E}{N} = \frac{\sqrt{\langle E^2\rangle - \langle E\rangle^2}}{N} = \frac{\sqrt{N\tilde{c}_V k_B T^2}}{N} = \sqrt{k_B T}\,\sqrt{\frac{\tilde{c}_V T}{N}}\). The second square root is related to the amount of heat required to raise individual particle temperatures.

So, for large systems the energy fluctuations per particle remain small, and you can analyze the system canonically or microcanonically.

\(S = -k_B\sum_\alpha p_\alpha\ln p_\alpha = -k_B\sum_\alpha\frac{\exp(-\beta E_\alpha)}{Z}\ln\!\left(\frac{\exp(-\beta E_\alpha)}{Z}\right) = -k_B\frac1Z\sum_\alpha\exp(-\beta E_\alpha)\left(-\beta E_\alpha - \ln Z\right) = k_B\beta\langle E\rangle + k_B\ln Z\sum_\alpha\frac{\exp(-\beta E_\alpha)}{Z} = k_B\ln Z + \frac{\langle E\rangle}{T}\). So \(-k_B T\ln Z = \langle E\rangle - TS = U - TS = F\).

Thus, we get the Helmholtz free energy from the partition function.

Equivalently, \(Z = \exp(-\beta F)\).

So, connecting a system to a heat bath is the most basic assumption needed to derive the familiar thermodynamic quantities.

Exercise

We have \(Z = \sum\exp(-\beta E) = \sum\exp(-\beta\mathcal{H})\). Consider a conjugate pair \(X, y\) (i.e. \(y\) is a generalized force and \(X\) is a generalized displacement), with work term \(Xy\). We then get \(Z = \sum\exp(-\beta(\mathcal{H} - Xy))\), so that \(\langle X\rangle = \frac1\beta\frac{\partial\ln Z}{\partial y}\). So, \(E = E(X)\) for the internal energy.

Quantum Statistical Mechanics

\(N(\mu) = \int_0^\infty d\epsilon\, g(\epsilon)\,\langle n(\epsilon;\mu,T)\rangle\), where \(\langle n\rangle\) is the mean occupation.

For 3D: \(g(\epsilon) = (4\pi p^2)\left(\frac{d|p|}{d\epsilon}\right)\left(\frac{L}{2\pi\hbar}\right)^3\). For 2D: \(g(\epsilon) = (2\pi p)\left(\frac{d|p|}{d\epsilon}\right)\left(\frac{L}{2\pi\hbar}\right)^2\). For 1D: \(g(\epsilon) = (2)\left(\frac{d|p|}{d\epsilon}\right)\left(\frac{L}{2\pi\hbar}\right)\).
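A numeric sketch of the 3D formula for free particles, \(\epsilon = p^2/2m\) (natural units \(\hbar = m = L = 1\), spinless, and \(T = 0\) fermions so the occupancy is a step function at \(\mu\)), comparing \(\int_0^\mu g(\epsilon)\,d\epsilon\) to the closed form \(N = \frac{V(2m\mu)^{3/2}}{6\pi^2\hbar^3}\):

#+begin_src python
import numpy as np

hbar = m = L = 1.0                       # natural units, assumed

def g3d(eps):                            # g = 4 pi p^2 (dp/d eps) (L/(2 pi hbar))^3
    p = np.sqrt(2*m*eps)
    dp_deps = m / p
    return 4*np.pi*p**2 * dp_deps * (L/(2*np.pi*hbar))**3

mu = 2.0                                 # chemical potential; occupation = 1 below mu
eps = np.linspace(1e-6, mu, 100001)
N_numeric = np.trapz(g3d(eps), eps)      # N(mu) at T = 0
N_exact = L**3 * (2*m*mu)**1.5 / (6*np.pi**2*hbar**3)
print(N_numeric, N_exact)                # agree to integration error
#+end_src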

Density Matrices

Gibbs Phase Rules

Thermal Equilibrium

Particles in thermal equilibrium follow a Boltzmann distribution.

Ising Model

Symmetries and Order Parameters

Correlations

Susceptibility

Nucleation

We are considering droplet formation in the abrupt gas-to-liquid phase transition.

\(\Delta G = G_{\text{gas}} - G_{\text{liquid}}\). Dividing by \(N\): \(\Delta\mu = \frac{\Delta G}{N} = \frac{G_{\text{gas}} - G_{\text{liquid}}}{N}\).

\(\Delta G \approx \frac{\partial(G_g - G_\ell)}{\partial T}\Delta T = (S_\ell - S_g)\Delta T\). So \(\Delta\mu \approx \frac{(S_\ell - S_g)\Delta T}{N}\). And \(\Delta S_{LG} = \frac{Q}{T_v} = \frac{LN}{T_v}\), with \(L\) the latent heat per particle and \(T_v\) the transition temperature. So \(\Delta\mu = \frac{L\Delta T}{T_v}\).

Consider a droplet of volume \(V_D\). The bulk gain \(\propto V_D\) lowers the free energy; the surface costs energy \(\sigma A_D\), where \(\sigma\) is the surface tension, raising the free energy.

\(\Delta G = 4\pi R^2\sigma - \frac43\pi R^3\frac NV\Delta\mu = 4\pi R^2\sigma - \frac43\pi R^3 n\Delta\mu\). Between these two competing free energies we get a critical radius: \(R_c = \frac{2\sigma}{n\Delta\mu} = \frac{2\sigma T_v}{nL\Delta T}\). Hence, for lower latent heat the critical radius increases. The barrier is \(B = \Delta G(R_c) = \frac{16\pi\sigma^3 T_v^2}{3n^2L^2(\Delta T)^2}\).
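A sketch evaluating these expressions with hypothetical illustrative parameters (not values from the lecture), confirming that \(\Delta G(R)\) peaks at \(R_c\) with barrier height \(B\):

#+begin_src python
import numpy as np

# Hypothetical illustrative parameters:
sigma = 0.1     # surface tension
n = 1.0         # number density of the droplet phase
L = 1.0         # latent heat per particle
Tv = 1.0        # coexistence temperature
dT = 0.05       # undercooling

dmu = L * dT / Tv
Rc = 2 * sigma / (n * dmu)                               # critical radius
B = 16*np.pi*sigma**3 * Tv**2 / (3*n**2*L**2*dT**2)      # barrier height

R = np.linspace(0, 3*Rc, 301)
dG = 4*np.pi*R**2*sigma - (4/3)*np.pi*R**3*n*dmu
print(Rc, B, dG.max())                                   # dG peaks at ~B near R = Rc
#+end_src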

A conserved order parameter (e.g. mass) requires conservation of the quantity during the dynamics; a non-conserved order parameter (e.g. magnetization) does not.

Introduction to Phenomenological Scaling

  1. \(\alpha\): \(C_B \propto \left|\frac{T-T_c}{T_c}\right|^{-\alpha}\) where \(T\to T_c\) and \(B=0\).
  2. \(\beta\): \(m \propto (T_c - T)^\beta\) where \(T\to T_c\), \(T<T_c\), and \(B=0\).
  3. \(\gamma\): \(\chi \propto |T_c - T|^{-\gamma}\) where \(T\to T_c\) and \(B=0\).
  4. \(\delta\): \(m \propto B^{1/\delta}\) where \(T=T_c\) and \(B\to 0\).
  5. \(\eta\): \(C_c^{(2)}(r) \propto \frac{1}{r^{d-2+\eta}}\) where \(T=T_c\) and \(B=0\).
  6. \(\nu\): \(C_c^{(2)}(r) \sim \exp(-r/\xi)\) with \(\xi \propto |T-T_c|^{-\nu}\), for \(B=0\) and \(T\) near \(T_c\).

Widom Scaling Hypothesis. The free energy scales as \(f(T,B) = t^{1/y}\,\psi(B\,t^{-x/y})\), where \(t = \frac{|T-T_c|}{T_c}\). Setting \(z = B\,t^{-x/y}\) gives \(t^{1/y} = B^{1/x}z^{-1/x}\), so \(f(T,B) = B^{1/x}z^{-1/x}\psi(z) = B^{1/x}\tilde\psi(B\,t^{-x/y})\). Then \(m_{B=0} = \left(\frac{\partial f}{\partial B}\right)_T = t^{(1-x)/y}\,\psi'(0)\), so \(\beta = \frac{1-x}{y}\). From this, we can calculate the first 4 exponents, which come in combinations of \(x\) and \(y\): for \(\chi\), \(\gamma = \frac{2x-1}{y}\); for \(C_B\), \(\alpha = 2 - \frac1y\); and \(\frac1\delta = \frac1x - 1\), i.e. \(\delta = \frac{x}{1-x}\).
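As a consistency check, the exact 2D Ising exponents \(\alpha = 0\) and \(\beta = 1/8\) determine \(x\) and \(y\), which then reproduce the known \(\gamma = 7/4\) and \(\delta = 15\):

#+begin_src python
from fractions import Fraction as F

alpha, beta = F(0), F(1, 8)   # 2D Ising: alpha = 0, beta = 1/8 (Onsager)

y = 1 / (2 - alpha)           # from alpha = 2 - 1/y
x = 1 - beta * y              # from beta = (1 - x)/y

gamma = (2*x - 1) / y         # susceptibility exponent
delta = x / (1 - x)           # from 1/delta = 1/x - 1
print(x, y, gamma, delta)     # 15/16, 1/2, 7/4, 15
#+end_src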

Hausdorff Scaling Hypothesis: \(C_c^{(2)}(r,t) = \frac{\psi\!\left(r\,t^{(2-\alpha)/d}\right)}{r^{d-2+\eta}}\). So \(\nu d = 2 - \alpha\) (hyperscaling). From all of these, we see that exponents 5-6 also reduce to the same 2 parameters, so only 2 independent parameters are needed.

Renormalization Group

Author: Christian Cunningham

Created: 2024-05-30 Thu 21:16
