I. First “Law” of Thermodynamics
   A. Temperature Sec 1.1
      1. Energy
      2. Thermal Equilibrium
      3. Thermometers
   B. Work
      1. First “Law” of Thermodynamics Sec 1.4
      2. Compressive Work Sec 1.5
      3. Other Works
   C. Heat Capacity Sec 1.6
      1. Changing Temperature
      2. Heat Capacity and Degrees of Freedom
II. Second “Law” of Thermodynamics
   A. Combinatorics
      1. Two State Systems Sec 2.1
      2. Einstein Solid Sec 2.2, 2.3
   B. Entropy
      1. Large Systems Sec 2.4
      2. Second “Law” Sec 2.6
   C. Creating Entropy
      1. Temperature Sec 3.1, 3.2
      2. Pressure Sec 3.4
      3. Chemical Potential Sec 3.5
      4. Expanding & Mixing Sec 2.6
III. Processes
   A. Cyclic Processes
      1. Heat Engines & Heat Pumps Sec 4.1, 4.2
      2. Otto, Diesel, & Rankine Sec 4.3
   B. Non-cyclic Processes or Thermodynamic Potentials
      1. Thermodynamic Potentials Sec 5.1
      2. Toward Equilibrium Sec 5.2
      3. Phase Transformations Sec 5.3
IV. Statistical Mechanics
   A. Partition Function
      1. Boltzmann Sec 6.1
      2. Probability Sec 6.2
      3. A Couple of Applications
   B. Adding up the States Sec 1.2, 1.7, 2.5, 6.6, 6.7
      1. Two-State Paramagnet Sec 6.6
      2. Ideal Gas Sec 6.7
      3. Thermodynamic Properties of the Ideal Monatomic Gas
      4. Solids Sec 2.2, 3.3, 7.5
      5. Photons Sec 7.4
      6. Specific Heat (Heat Capacity) of Solids Sec 7.5
I. First “Law” of Thermodynamics
A. Temperature Sec 1.1
1. Energy
a. Definition of energy
Energy is a
fundamental physical concept. My
favorite dictionary gives as its 4th definition of energy: 4. Physics. Capacity for performing work. So, now we go to the W section for the 13th
definition of work: 13. Mech.
The transference of energy by a process involving the motion of the
point of application of a force, as when there is movement against a resisting
force or when a body is given acceleration; it is measured by the product of
the force and the displacement of its point of application in the line of
action. [How about the definition of
work number 10. The foam or froth
caused by fermentation, as in cider, in making vinegar, etc.]
That definition of work is adequate as a definition of
mechanical work, but that definition of the word energy is nearly useless. Of course, that’s what dictionaries do,
define words in terms of other words in endless circles—they are about usages
more than meanings. A fundamental
concept cannot be defined in terms of other concepts; that is what fundamental means.
b. Conservation of energy
We can list the forms that energy might take. In effect, we are saying that if such and
such happens in or to a system, then the energy of the system changes. There is potential energy, which is related
to the positions of parts of the system.
There is kinetic energy, which is related to the movements of parts of
the system. There is rest energy, which
is related to the amount of matter in the system. There is electromagnetic energy, chemical
energy, and nuclear energy, and more.
We find that for an isolated
system, the total amount of energy in the system does not change. Within the system, energy may change from one
form to another, but the total energy of all forms is a conserved
quantity. Now, if a system is not
isolated from the rest of the universe, energy may be transferred into or out
of the system, so the total energy of such a system may rise or fall.
2. Thermal Equilibrium
a. Temperature
Consider two objects, each consisting of a very large number
of atoms and/or molecules. Here, very
large means at least several multiples of Avogadro’s Number of particles. We call such an object a macroscopic object. Consider
that these two objects (they may be two blocks of aluminum, for instance,
though they need not be the same material—they might be aluminum and wood, or
anything) are isolated from the rest of the universe, but are in contact with each other.

We observe that energy flows spontaneously from one
block (A) to the other (B). We say that
block A has a higher temperature than block B. In fact, we say that the energy flow occurs because
the blocks have different temperatures.
We further observe that after the lapse of some time, called the relaxation time, the flow of energy from
A to B ceases, after which there is zero net transfer of energy between the
blocks. At this point the two blocks are
in thermal equilibrium with each
other, and we would say that they have the same temperature.
b. Heat
The word heat
refers to energy that is transferred, or energy that flows, spontaneously by
virtue of a difference in temperature.
We often say heat flows into a system or out of a system, as for
instance heat flowed from block A to block B above. It is incorrect to say that heat
resides in a system, or that a system contains a certain amount of heat.
There are three mechanisms of energy transfer: conduction,
convection, and radiation. Two objects,
or two systems, are said to be in contact
if energy can flow from one to the other.
The most obvious example is two aluminum blocks sitting side by side,
literally touching. However, another
example is the Sun and the Earth, exchanging energy by radiation. The Sun has the higher temperature, so there
is a net flow of energy from the Sun to the Earth. The Sun and the Earth are in contact.
c. Zeroth “Law” of Thermodynamics
Two systems in thermal equilibrium with each other have the
same temperature. Clearly, if we
consider three systems, A, B, & C, if A & B are in thermal equilibrium,
and A & C are in thermal equilibrium, then B & C are also in thermal equilibrium,
and all three have the same temperature.

3. Thermometers
a. Temperature scales
What matters is temperature differences. We can feel that one object is hotter than
another, but we would like to have a quantitative measure of temperature. A number of temperature scales have been
devised, based on the temperature difference between two easily recognized
conditions, such as the freezing and boiling of water. Beyond that, the definition of a degree of
temperature is more or less arbitrary.
The Fahrenheit scale has 180 degrees between the freezing and boiling
points, while the Celsius scale has 100.
Naturally, we find 100 more convenient than 180. On the other hand, it turns out that the
freezing and boiling points of water are affected by other variables,
particularly air pressure. Perhaps some
form of absolute scale would be more useful.
Such a scale is the Kelvin scale, called also the absolute temperature scale.
The temperature at which the pressure of a dilute gas at fixed volume
would go to zero is called the absolute
zero temperature. Kelvin
temperatures are measured up from that lowest limit. The unit of absolute temperature is the kelvin (K), equal in size to a degree
Celsius. It turns out that 0 K = −273.15 °C. [The text continues to label non-absolute
temperatures with the degree symbol: °C, etc., as does the introductory University Physics
textbook. The latter also claims that temperature intervals are labeled with the degree
symbol following the letter, as C°. That’s silly.]
b. Devices
Devices to measure temperature take advantage of a thermal
property of matter—material substances expand or contract with changes in
temperature. The electrical conductivity
of numerous materials changes with temperature.
In each case, the thermometer must itself be brought into thermal equilibrium
with the system, so that the system and the thermometer are at the same
temperature. We read a number from the
thermometer scale, and impute that value to the temperature of the system. There are bulb thermometers, and bi-metallic
strip thermometers, and gas thermometers, and thermometers that detect the
radiation emitted by a surface. All
these must be calibrated, and all have limitations on their accuracies and
reliabilities and consistencies.
B. Work
1. First “Law” of Thermodynamics Sec 1.4
a. Work
Heat is defined as the spontaneous flow of energy into or
out of a system caused by a difference in temperature between the system and
its surroundings, or between two objects whose temperatures are different. Any other transfer of energy
into or out of a system is called work. Work takes many forms: moving a piston,
stirring, or running an electrical current through a resistance. Work is the non-spontaneous transfer
of energy. Question: is lighting a
Bunsen burner under a beaker of water work?
The hot gases of the flame are in contact with the beaker, so that’s
heat. But, the gases are made hot by
combustion, so that’s work.
b. Internal energy
There are two ways, then, that the total energy inside a
system may change—heat and/or work. We use
the term internal energy for the
total energy inside a system, and the symbol U.
Q and W will stand for heat and work, respectively. Energy conservation gives us the First “Law”
of Thermodynamics:
ΔU = Q + W.

Now, we have to be careful with the algebraic signs. In this case, Q is positive as the heat entering the system, and W is positive as the work done on
the system. So a positive Q and a
positive W both cause an increase of internal energy, U.
2. Compressive Work Sec 1.5
a. PV diagrams
Consider a system enclosed in a cylinder oriented along the
x-axis, with a moveable piston at one end.
The piston has a cross sectional area A in contact with the system.
We may as well imagine the system is a volume of gas, though it may be
liquid or solid. A force applied to the
piston from right to left (-x direction) applies a pressure on the gas of
P = F/A. If the piston is displaced a distance Δx, then the work done by the force is
W = F Δx. If the displacement is slow enough, the system can adjust so that the pressure
is uniform over the area of the piston. In that case, called quasistatic, the work becomes

W = PA Δx = −P ΔV,

where ΔV is the change in volume of the gas (negative for a compression, so that the work done on the gas is positive).





Now it is quite possible, even likely, that the pressure
will change as the volume changes. So we
imagine the compression (or expansion) occurring in infinitesimal steps, in
which case the work becomes an integral:

W = −∫ P(V) dV  (from V_i to V_f).
Naturally, to carry out the integral, we need to have a
specific functional form for P(V).
On a PV diagram, then, the work is the area under the P(V)
curve. In addition, the P(V)
curve is traversed in a particular direction—compression or expansion, so the
work will be positive or negative accordingly.
Notice that over a closed path on a PV diagram the work is not
necessarily zero.

b. Internal energy of the Ideal Gas Sec 1.2
As an example of computing compressive work, consider an
ideal gas. But first, we need to address
the issue of the internal energy of an ideal gas. Begin with the empirical equation of
state:

PV = NkT = nRT.

In an ideal gas, the particles do not interact with each other. They interact with the
walls of a container only to collide elastically with them. The Boltzmann Constant is
k = 1.381×10⁻²³ J/K, while the Gas Constant is R = 8.315 J/(mol·K); N is the number
of particles in the system, n is the number of moles in the system.

[figure: a single gas atom, with velocity component v_x, bouncing elastically between the walls of a cylinder of length L and cross-sectional area A]
On the microscopic level, the atoms of the gas collide from
time to time with the walls of the container.
Let us consider a single atom, as shown in the figure above. It collides elastically with the wall of the
container and experiences a change in momentum Δp_x = −2mv_x. That is, the wall
exerts a force on the atom in the minus x direction. The atom exerts an equal and
opposite force on the wall in the positive x direction. The time-averaged force
exerted by a single atom on the wall is F̄ = Δp_x/Δt = 2mv_x/Δt. Now, in order that
the atom collides with the wall only once during Δt, we set Δt = 2L/v_x, whence the
force becomes F̄ = mv_x²/L. Next, the pressure, P, is the force averaged over the area
of the wall: P = F̄/A = mv_x²/(AL) = mv_x²/V, where V is the volume of the container.
Normally, there are many atoms in the gas, say N of them. Each atom has a different
velocity. Therefore, PV = Nm⟨v_x²⟩, where ⟨v_x²⟩ is the square of the x-component of
velocity, averaged over the N atoms. Finally, we invoke the ideal gas law, PV = NkT,
whence m⟨v_x²⟩ = kT, or ½m⟨v_x²⟩ = ½kT.

The translational kinetic energy of one atom is

½m⟨v²⟩ = (3/2)kT,

since ⟨v²⟩ = ⟨v_x²⟩ + ⟨v_y²⟩ + ⟨v_z²⟩ = 3⟨v_x²⟩.


For an ideal gas of spherical particles (having no internal structure)
there is no potential energy and the internal energy is just the total kinetic energy:

U = N·(3/2)kT = (3/2)NkT.
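As a quick numerical check of these results, here is a minimal Python sketch (not from the text): it samples velocity components from the equilibrium Gaussian distribution, whose variance kT/m is a standard fact, and verifies that the average kinetic energy per atom comes out near (3/2)kT. The temperature and the helium mass are arbitrary illustration values.

```python
import math
import random

k = 1.381e-23   # Boltzmann constant, J/K
T = 300.0       # temperature, K (illustrative)
m = 6.65e-27    # mass of a helium atom, kg (illustrative)
N = 100_000     # number of sampled atoms

# At equilibrium each velocity component is Gaussian with variance kT/m.
sigma = math.sqrt(k * T / m)

ke_sum = 0.0
vx2_sum = 0.0
for _ in range(N):
    vx = random.gauss(0.0, sigma)
    vy = random.gauss(0.0, sigma)
    vz = random.gauss(0.0, sigma)
    vx2_sum += vx * vx
    ke_sum += 0.5 * m * (vx * vx + vy * vy + vz * vz)

print("<KE> per atom :", ke_sum / N, "J")
print("(3/2) kT      :", 1.5 * k * T, "J")
print("m <vx^2>      :", m * vx2_sum / N, "J (compare kT =", k * T, ")")
```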

c. Isothermal & adiabatic processes
Imagine a system in thermal contact with its
environment. The environment is much
larger than the system of interest, so that heat flow into or out of the system
has no effect on the temperature of the environment. We speak of the system being in contact with
a heat bath so that no matter what
happens to the system, its temperature remains constant. If such a system is compressed slowly enough,
its temperature is unchanged during the compression. The system is compressed isothermally. As an example,
consider an ideal gas:

W = −∫ P dV = −NkT ∫ dV/V = NkT ln(V_i/V_f).

Since T is constant, ΔU = 0, and the heat absorbed is Q = −W = NkT ln(V_f/V_i).
On the other hand, the compression may be so fast that no
heat is exchanged with the environment (or, the system is isolated from the
environment) so that Q = 0. Such a process is adiabatic. Naturally, the
temperature of the system will increase.
Staying with the example of an ideal gas,

V T^(f/2) = constant.

[Notice that the text uses f for final and for degrees of freedom.]
Substituting for T with the ideal gas law, we can write

V^γ P = constant.

That exponent of V is called the adiabatic exponent, γ. In general, γ = (f + 2)/f.




An isotherm is a curve of constant temperature, T, on the PV diagram. An arrow
indicates the direction that the system is changing with time. For an ideal gas, an
isotherm is a hyperbola, since P = NkT/V; that’s a special case. A curve along which
Q = 0 is called an adiabat.
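To make the comparison concrete, here is a minimal sketch (all numbers are illustrative assumptions, roughly one mole of a monatomic gas at 300 K) that computes the work done on the gas when its volume is halved isothermally and then adiabatically.

```python
import math

k = 1.381e-23        # Boltzmann constant, J/K
N = 6.02e23          # one mole of atoms (illustrative)
T = 300.0            # initial temperature, K
f = 3                # degrees of freedom, monatomic gas

Vi = 0.0246          # initial volume, m^3 (about 1 mol at 300 K and 1 atm)
Vf = Vi / 2          # compress to half the volume

# Isothermal: W = NkT ln(Vi/Vf); T is unchanged, so heat Q = -W flows out.
W_iso = N * k * T * math.log(Vi / Vf)

# Adiabatic: V T^(f/2) = constant, so Tf = T (Vi/Vf)^(2/f),
# and with Q = 0 all the work goes into internal energy: W = (f/2) Nk (Tf - T).
Tf = T * (Vi / Vf) ** (2 / f)
W_ad = (f / 2) * N * k * (Tf - T)

print(f"isothermal: W = {W_iso:.0f} J (heat {W_iso:.0f} J expelled)")
print(f"adiabatic : W = {W_ad:.0f} J, final T = {Tf:.0f} K")
```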

3. Other Works
In our discussion of energy conservation, we spoke of work
as being any energy flow into or out of the system that was not heat. We spoke of compressive work (sometimes
called piston work) and “all other
forms of work.” The all other forms of
work included stirring (called shaft work)
and combustion and electrical currents and friction. It would also include any work done by
external forces beyond the compressive work, particularly work done by the
force of gravity. We also have been
assuming that the center of mass of the system is not moving, so there was no
kinetic energy associated with translation of the entire system. A general form of the First “Law” of
Thermodynamics ought to include all the energy of the system, not only its
internal energy. Thus for instance, the
total energy of a system might be E = U + ½Mv² + Mgy, where M is the mass of the
system, v the speed of its center of mass, and y its elevation.

a. Steady flow process
We might consider a situation in which a fluid is flowing
steadily without friction, but with heat flow into the fluid and a change in
elevation and changes in volume and pressure and some stirring. The energy balance
for such a flow reads

Q + W_shaft + W_compressive = ΔU + Δ(½Mv²) + Δ(Mgy).
In engineering real devices, all the various sources of work
have to be taken into account. In any
specific device, some works can be neglected and other works not.
b. The turbine
In a turbine, a fluid flows through a pipe or tube so
quickly that Q = 0, and normally the
entry and exit heights are virtually the same.
In the case of an electrical generator, the moving fluid turns a fan, so
that the shaft work is negative. The
energy balance equation for a volume element of the fluid having a mass, M, would look something like this:

W_shaft = (U_f + P_f V_f + ½Mv_f²) − (U_i + P_i V_i + ½Mv_i²).
An equation like this tells us how to design our turbine to
maximize the shaft work.
c. Bernoulli’s Equation
Suppose both Q and W_shaft are zero. If the fluid is incompressible, then the volume
is constant, and we can divide through by V to obtain Bernoulli’s Equation. The
internal energy is also constant because Q = 0 and no compressive work is done.
[ρ is the mass density of the fluid.]

P + ½ρv² + ρgy = constant.



C. Heat Capacity Sec 1.6
1. Changing Temperature
a. Definitions
By definition, the heat capacity of an object is C = Q/ΔT. The specific heat capacity
is the heat capacity per unit mass, c = C/m. This definition is not specific enough,
however, since Q = ΔU − W depends on how the energy enters the system. A heat capacity
could be computed for any combination of conditions—constant V,
constant P, constant P & V, etc.



b. Constant pressure heat capacity
If pressure is constant, then

C_P = (∂U/∂T)_P + P(∂V/∂T)_P.

The second term on the right is the energy expended to
expand the system, rather than increase the temperature.
c. Constant volume heat capacity

C_V = (∂U/∂T)_V.
2. Heat Capacity and Degrees of Freedom
a. Degrees of freedom
A degree of freedom is essentially a variable whose
value may change. In the case of a
physical system, the positions of the particles that comprise the system are
degrees of freedom. For a single
particle in 3-dimensional space, there are three degrees of freedom. Three coordinates are required to specify its
location. We are particularly interested
in variables that determine the energy of the system—the velocities determine
the kinetic energy, the positions determine the potential energy, etc. In other words, we expect to associate some
kinetic energy and some potential energy with each degree of freedom.
In effect, this text treats the kinetic and potential
energies as degrees of freedom. An
isolated single particle, having no internal structure, but able to move in
three-dimensional space, has three degrees of freedom which may have energy
associated with them: the three
components of its velocity. Since the
particle is not interacting with any other particle, we do not count its position
coordinates as degrees of freedom. On
the other hand, a three-dimensional harmonic oscillator has potential energy as
well as kinetic energy, so it has 6 degrees of freedom. Molecules in a gas have more degrees of freedom than simple
spherical particles. A molecule can
rotate as well as translate and its constituent parts can vibrate. A water molecule is comprised of three atoms,
arranged in the shape of a triangle. The
molecule can translate in three dimensions, and rotate around three different
axes. That’s 3+3 = 6 degrees of freedom
for an isolated water molecule. Within
the molecules, the atoms can vibrate relative to the center of mass in three
distinct ways, or modes. That’s another
2x3 = 6 degrees of freedom. Now, finally
if the water molecule is interacting with other water molecules, then there is
interaction between the molecules, and the degrees of freedom are 2x3+2x3+2x3 =
18. Notice that if we regard the
molecule as three interacting atoms, not as a rigid shape, there are 3x2x3 = 18
degrees of freedom.
A system of N particles, such as a solid made of N
harmonic oscillators, has 6 degrees of freedom per particle for a total of 6N
degrees of freedom.
The idea is that each degree of freedom, as it were,
contains some energy. The total internal
energy of a system is the sum of all the energies of all the degrees of
freedom. Conversely, the amount of
energy that may be transferred into a system is affected by how many degrees of
freedom the system has.
b. Exciting a degree of freedom
Consider a harmonic oscillator. The energy required to raise the HO from its
ground state to the first excited state is ΔE = hf, where h is Planck’s Constant and
f is the oscillator frequency. If the system temperature is such that kT << hf, then
that oscillator will never be excited. It’s as if that degree of freedom does not
exist. We say that the degree of freedom has been frozen out.



The Equipartition Theorem says that the average energy in
each quadratic degree of freedom is ½kT. By quadratic degree of freedom, we mean that
the kinetic and potential energy terms all depend on the position and velocity
components squared. If that is the case, the internal energy of a system of N
harmonic oscillators in a solid is

U = Nf(½kT),

where f is the number of degrees of freedom per particle, which we would expect to
be 6. From this, we obtain the constant volume heat capacity of Dulong-Petit,

C_V = (∂U/∂T)_V = 3Nk.
This result is independent of temperature. The measured heat capacity for a solid
is not the same for all temperatures. In
fact, as temperature decreases, the heat capacity decreases toward zero. Working backward, it would appear that as
temperature decreases, the number of degrees of freedom available
decreases. At low temperature, energy
cannot be put into the degrees of freedom that have been frozen out. The reason is that the quantity kT is
smaller than the spacing between discrete energy levels. This applies not only to a solid. For instance, one of the intramolecular
vibrational modes of the water molecule is shown here in the sketch.

Its frequency is in the neighborhood of 10¹⁴ Hz. Assuming harmonic vibration, the
energy level spacing for that mode is about hf ≈ 6.6×10⁻²⁰ J. On the other hand, at
T = 300 K, the quantity kT ≈ 4.1×10⁻²¹ J, more than an order of magnitude smaller;
we would not expect the intramolecular modes to be excitable, as it were, at 300 K.
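The comparison in the last paragraph is easy to script. A minimal sketch, assuming the rough 10¹⁴ Hz figure quoted above:

```python
h = 6.626e-34   # Planck's constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K
f = 1.0e14      # vibrational frequency of the mode, Hz (rough figure)

dE = h * f      # energy level spacing, about 6.6e-20 J
for T in (300, 1000, 5000):
    print(f"T = {T:4d} K: kT = {k*T:.2e} J, hf/kT = {dE/(k*T):5.1f}")
# At 300 K the spacing is ~16 kT, so the mode is frozen out;
# only at several thousand kelvins would it become excitable.
```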


As an example of computing the constant volume heat capacity
of something other than an ideal gas, consider liquid water and ice. An effective potential energy function is
assumed to represent the interaction between water molecules. The total kinetic and potential energy is
computed at a range of temperatures, with the volume kept fixed.

The C_V is estimated by numerically evaluating the slope of the graph:
C_V ≈ (ΔE/ΔT)_V. In this set of
molecular dynamics simulations, the intermolecular
energy, E, leaves out the kinetic
energies of the hydrogen and oxygen atoms with respect to the molecular center
of mass as well as the intramolecular potential energies—that is, the degrees
of freedom of the atoms within each molecule, such as the vibrational
mode shown above, are frozen out. The
results for the liquid phase and for the solid phase are shown on the graph. The
purpose of the study was to test the effective potential function—would it show a
melting transition at the correct temperature, and would it give correct heat
capacities?



II. Second “Law” of Thermodynamics
A. Combinatorics
1. Two State Systems Sec 2.1
a. Micro- and macro-states
Consider a system of three coins, as described in the
text. The macrostate of this
system is described by the number of heads facing up. There are four such macrostates, labeled 0,
1, 2, & 3. We might even call these
energy levels 0, 1, 2, & 3.
Specifying the orientation of each individual coin defines a
microstate. We can list the
microstates, using H for heads and T for tails:
TTT, HTT, THT, TTH, HHT, HTH, THH, HHH.
Now, we sort the microstates into the macrostate energy
levels.
energy level | microstates    | multiplicity, Ω
0            | TTT            | 1
1            | HTT, THT, TTH  | 3
2            | HHT, HTH, THH  | 3
3            | HHH            | 1
The multiplicity is the number of distinct ways that
a specified macrostate can be realized.
The total multiplicity of the system is the total of all the possible
microstates. For these three coins,
that’s 2³ = 8.

b. Two-state paramagnet
Consider a large number of non-interacting magnetic
dipole moments, let’s say N of them.
These dipoles may point in one of only two ways: up or down. If an external uniform magnetic field is
applied, say in the up direction, each dipole will experience a torque tending
to rotate it to the up direction also.
That is to say, parallel alignment with the external field is a lower
energy state than is anti-parallel alignment.
The energy of the system is characterized by the number of
dipoles aligned with the external field, q. But, we don’t care which q
dipoles of the N total are in the up state. Having q dipoles up specifies the
energy macrostate, which may be realized by the selection of any q
dipoles out of N to be up. The
number of microstates for each macrostate is just the number of combinations,
the number of ways of choosing q objects from a collection of N objects:

Ω(N, q) = N!/(q!(N − q)!).

Now, what are the odds of observing this paramagnet to be in a particular energy
macrostate? Assuming every microstate is equally likely, then we have

P(q) = Ω(N, q)/2^N.

Notice that the total multiplicity is 2^N because each dipole has only two possible states.

Here is a microstate for a system of N = 10 dipoles, with q = 6 (6 dipoles point up):

↑↑↓↑↓↑↓↑↓↑

The probability function, P(q), for this system is peaked at the middle
[figure: P(q) vs. q]. The most probable macrostate has one-half the dipoles
pointed up, q = 5.
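A minimal Python sketch of this probability function for N = 10, using the built-in combination count (math.comb):

```python
from math import comb

N = 10  # number of dipoles

# P(q) = C(N, q) / 2^N: multiplicity of the macrostate with q dipoles up,
# divided by the total number of microstates.
for q in range(N + 1):
    print(f"q = {q:2d}  multiplicity = {comb(N, q):3d}  P = {comb(N, q) / 2**N:.4f}")
# The peak is at q = 5, with P = 252/1024, about 0.246.
```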
2. Einstein Solid Sec 2.2, 2.3
a. More than two states
A harmonic oscillator has energy levels that are uniformly
spaced in steps of ε = hf, where h is Planck’s Constant and f is the frequency of
the oscillator. We imagine a solid made of N such harmonic oscillators. The total
energy of the solid is U = qε, where q is an integer which in this case may well be
greater than N. As shown in the text, the multiplicity of the macrostate having
energy U = qε is the number of ways q items can be selected from q + N − 1 items:

Ω(N, q) = (q + N − 1)!/(q!(N − 1)!).

Then the total multiplicity is the sum of all the Ω(N, q).


b. Interacting systems
We are interested in the transfer of energy from one such
Einstein solid to another. Now we want
the multiplicity of q energy units distributed over both systems.

Let’s say we have N_A, N_B, q_A, and q_B, such that q_A + q_B = q is fixed. The total
multiplicity for the two systems in contact is

Ω_total = Ω(N_A, q_A)·Ω(N_B, q_B).

Assuming that all microstates are equally probable, then the macrostate having the
greatest multiplicity is the most probable to be observed. As q, N_A, and N_B are
made larger, the multiplicity curve is taller, and more narrowly peaked at (if
N_A = N_B) q_A = q/2. Say that initially q_A is smaller than its equilibrium value.
Then over time, as energy is exchanged more or less randomly between oscillators in
the two systems, there will be a net flow of energy from system B to system A, from
a macrostate of lower multiplicity to a macrostate of greater multiplicity.


The text has one numerical example on page 57. Let’s look at a case in which
N_A ≠ N_B, namely N_A = 6 and N_B = 8 and q = 8.
q_A | Ω_A  | q_B | Ω_B  | Ω_A·Ω_B | P(q_A)
0   | 1    | 8   | 6435 | 6435    | 0.031623
1   | 6    | 7   | 3432 | 20592   | 0.101194
2   | 21   | 6   | 1716 | 36036   | 0.177090
3   | 56   | 5   | 792  | 44352   | 0.217957
4   | 126  | 4   | 330  | 41580   | 0.204334
5   | 252  | 3   | 120  | 30240   | 0.148607
6   | 462  | 2   | 36   | 16632   | 0.081734
7   | 792  | 1   | 8    | 6336    | 0.031137
8   | 1287 | 0   | 1    | 1287    | 0.006325
total multiplicity: 203490

The probability peaks at q_A = 3, rather than at q/2 = 4; that is, at about
q_A = q·N_A/(N_A + N_B) ≈ 3.4, the value that gives each oscillator the same share
of the energy.



[I did the calculation of Ω using the COMBIN function in Excel.]
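The same calculation is quick in Python, where math.comb plays the role of Excel’s COMBIN. This sketch reproduces the table above, with N_A = 6, N_B = 8, and q = 8 (values inferred from the table entries):

```python
from math import comb

def omega(N, q):
    # Multiplicity of an Einstein solid: ways to distribute
    # q energy units among N oscillators, C(q + N - 1, q).
    return comb(q + N - 1, q)

NA, NB, q = 6, 8, 8   # values inferred from the table above
rows = [(qA, omega(NA, qA), q - qA, omega(NB, q - qA)) for qA in range(q + 1)]
total = sum(wA * wB for _, wA, _, wB in rows)

print(" qA  omega_A  qB  omega_B  omega_A*omega_B    P(qA)")
for qA, wA, qB, wB in rows:
    print(f"{qA:3d} {wA:8d} {qB:3d} {wB:8d} {wA*wB:15d} {wA*wB/total:9.6f}")
print("total multiplicity:", total)   # 203490
```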

As we increase the numbers, the Ωs become very large very quickly, as illustrated by
the text example on pages 58 & 59.

B. Entropy
1. Large Systems Sec 2.4
a. Very large numbers
Macroscopic systems contain multiples of Avogadro’s number, N_A ≈ 6.02×10²³, perhaps
many, many multiples. The factorials of such large numbers are even larger—very large
numbers. We’ll use Stirling’s Approximation to evaluate the factorials:

N! ≈ N^N e^(−N) √(2πN).

Ultimately, we will want the logarithm of N!: ln N! ≈ N ln N − N.
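A quick numerical check of Stirling’s Approximation, using math.lgamma for the exact ln N! (N! itself overflows long before N reaches Avogadro-scale values):

```python
from math import lgamma, log, pi

for N in (10, 100, 10_000, 1_000_000):
    exact = lgamma(N + 1)                  # exact ln(N!)
    simple = N * log(N) - N                # ln N! ~ N ln N - N
    full = simple + 0.5 * log(2 * pi * N)  # including the sqrt(2 pi N) factor
    print(f"N = {N:>9}: exact = {exact:.6e}, NlnN-N = {simple:.6e}, full = {full:.6e}")
```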

b. Multiplicity function
Consider an Einstein solid with a large number of
oscillators, N, and energy units, q.
The multiplicity function is

Ω(N, q) = (q + N − 1)!/(q!(N − 1)!) ≈ (q + N)!/(q!N!).

Take the logarithm, using Stirling’s formula:

ln Ω ≈ (q + N)ln(q + N) − q ln q − N ln N.

Now further assume that q >> N. In that case,
ln(q + N) = ln q + ln(1 + N/q) ≈ ln q + N/q.

The ln Ω becomes ln Ω ≈ N ln(q/N) + N (dropping a negligible term N²/q), whence

Ω(N, q) ≈ (eq/N)^N.

c. Interacting systems
The multiplicity function for a pair of interacting Einstein
solids is the product of their separate multiplicity functions. Let’s say
N_A = N_B = N and q_A + q_B = q.

Then

Ω_total = (e q_A/N)^N (e q_B/N)^N = (e/N)^(2N) (q_A q_B)^N.

If we were to graph this function vs. q_A, what would it look like? Firstly, we
expect a peak at q_A = q/2 with a height of (eq/2N)^(2N). That’s a very large
number. How about the width of the curve? In the text, the author shows that the
curve is a Gaussian:

Ω = Ω_max e^(−N(2x/q)²), where x = q_A − q/2.

The origin has been shifted to the location of the peak. The point at which Ω falls
to Ω_max/e occurs when x = q/(2√N). Now, this is a large number, but compared to the
scale of the horizontal axis (0 to q), that peak is very narrow, since N is a large
number in itself. That is, the half width of the peak is only a fraction 1/(2√N) of
the whole range of the independent variable.











The upshot is that as N
and q become large, the multiplicity
function peak becomes narrower and narrower.
The most probable macrostate becomes more and more probable relative to
the other possible macrostates. Put
another way, fluctuations from the most probable macrostate are very small in
large systems.
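One can see this sharpening numerically by working with ln Ω (via lgamma) rather than Ω itself. A sketch, assuming two equal Einstein solids with N = 1000 oscillators each sharing q = 10000 units (arbitrary illustration values):

```python
from math import lgamma, exp

def ln_omega(N, q):
    # ln of the Einstein-solid multiplicity C(q + N - 1, q)
    return lgamma(q + N) - lgamma(q + 1) - lgamma(N)

N, q = 1000, 10_000
ln_peak = ln_omega(N, q // 2) + ln_omega(N, q // 2)

# Relative probability of macrostates away from the peak at qA = 5000:
for qA in (5000, 5100, 5300, 5500, 6000):
    ln_w = ln_omega(N, qA) + ln_omega(N, q - qA)
    print(f"qA = {qA}: Omega/Omega_max = {exp(ln_w - ln_peak):.3e}")
# Already at qA = 6000 the multiplicity is down by roughly 18 orders of magnitude.
```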
2. Second “Law” Sec 2.6
a. Definition of entropy
We define the entropy
of a system to be

S ≡ k ln Ω.

The units of entropy are the units of the Boltzmann Constant, J/K.

The total entropy of two interacting systems, such as the two Einstein solids above,
is S_total = k ln(Ω_A Ω_B) = S_A + S_B.

The Second “Law” of Thermodynamics says: Systems tend to evolve in the direction of
increasing multiplicity. That is,
entropy tends to increase. This is
simply because the macrostate of maximum multiplicity is the most probable to
be observed by the time the system has reached thermal equilibrium.
b. Irreversible
The concept of entropy was introduced originally to explain
why certain processes only went one way spontaneously. When heat enters a system at
temperature T, its entropy increases by dS = Q/T. When heat leaves a
system, the system’s entropy decreases.

Consider two identical blocks of aluminum, initially one
hotter than the other. When brought into
thermal contact, heat will flow from the warmer block(A) to the cooler(B) until
they have reached the same temperature.
The total entropy of the two blocks will have increased. Incrementally,
dS_total = −Q/T_A + Q/T_B > 0. In effect, because T_A is greater than T_B, the
entropy of block A decreases less than the entropy of block B increases.
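For a finite temperature difference we integrate dS = C dT/T for each block. A minimal sketch, assuming a constant heat capacity (an idealization; real C varies with T) and arbitrary initial temperatures:

```python
import math

C = 900.0                # heat capacity of each block, J/K (illustrative)
TA, TB = 400.0, 300.0    # initial temperatures, K

# Two identical blocks with constant C equilibrate at the mean temperature.
Tf = (TA + TB) / 2

# dS = C dT/T integrates to C ln(Tf/Ti) for each block.
dSA = C * math.log(Tf / TA)   # negative: block A cools
dSB = C * math.log(Tf / TB)   # positive: block B warms
print(f"dS_A = {dSA:+.1f} J/K, dS_B = {dSB:+.1f} J/K, total = {dSA + dSB:+.1f} J/K")
```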

Processes that create new entropy cannot happen
spontaneously in reverse. Heat flow from
a warmer object to a cooler object creates entropy, and is irreversible. Mixing two
different gases creates entropy, and is irreversible. Rapid expansion or compression creates
entropy. On the other hand, quasistatic
volume change can be reversible, depending on what other processes are taking
place at the same time. Very slow heat
flow produces very little new entropy and may be regarded as practically
reversible.
An irreversible process may be reversed by doing work on the
system, but that also increases the total amount of entropy in the
universe.
C. Creating Entropy
1. Temperature Sec 3.1, 3.2
a. Thermal equilibrium
When two objects are in thermal equilibrium, their
temperatures are the same. According to
the Second “Law”, their total entropy is at its maximum.
Consider two objects in contact, exchanging energy. The total energy of the two objects is
fixed. At equilibrium, the total entropy is maximal with respect to exchanges of energy:

∂S_total/∂U_A = 0, which implies ∂S_A/∂U_A = ∂S_B/∂U_B.

The quantity ∂S/∂U has units of K⁻¹, so perhaps we can define the temperature in
terms of the entropy as

1/T ≡ (∂S/∂U)_{N,V}.


b. Heat capacities
We cannot measure entropy directly, but we can measure
changes in entropy indirectly, through the heat capacity. For instance, if no work is being done on the
system,

dS = Q/T = C_V dT/T, whence ΔS = ∫ C_V dT/T.

Of course, we need to know C_V as a function of T. This is obtained by measuring Q
or U vs. T. In general, the heat capacity decreases with decreasing temperature. At
higher temperatures, the heat capacity approaches the constant 3Nk (Dulong-Petit).
For instance, the C_V vs. T for a monatomic substance would look like this:
[figure: C_V rising from zero at T = 0 toward the constant 3Nk]


The Third “Law” of Thermodynamics says that S → 0 as T → 0, or alternatively, that
S = 0 when T = 0 K. In reality, there remains residual entropy in a system at
T = 0 K—near absolute zero, the relaxation time for the system to settle into its
very lowest energy state is very, very long.

Now notice, if indeed S → 0 as T → 0, then absolute zero cannot be attained in a
finite number of steps, since C_V → 0 as T → 0. It’s like the famous example of
approaching a wall in a series of steps, each one half the previous step.




For example, let us say that we wish to cool an ideal gas to
absolute zero. We’d have to “get rid” of
the entropy in the gas in a series of steps.
i) isothermal compression—heat and entropy is transferred to
a reservoir
ii) adiabatic expansion—temperature decreases, entropy is
constant, Q = 0
repeat
Now if we were to graph these S(T) points we have
generated we would see two curves. But
the curves are not parallel; they appear to converge at T = 0. As a result, the
temperature drop ΔT gets smaller for each successive two-stage step, the closer we
get to T = 0.


In practice, a real gas would condense at some point. The text describes three real-life high-tech
coolers. In any case, there will be a
series of ever smaller steps downward between converging curves on the S(T)
graph, toward absolute zero.
2. Pressure Sec 3.4
a. Mechanical equilibrium
Consider two systems whose volumes can change as they
interact. An example might be two gases
separated by a moveable membrane. The
total energy and volume of the two systems are fixed, but the systems may
exchange energy and volume. Therefore,
the entropy is a function of the volumes as well as the internal energies. However, we will be keeping the numbers of
particles in each system fixed.

At the equilibrium point, ∂S_total/∂U_A = 0 and ∂S_total/∂V_A = 0.

As we did with temperature, we can identify the pressure with the derivative of
entropy with volume, thusly:

P ≡ T(∂S/∂V)_{U,N}.

b. Thermodynamic identity
Now if we envision a system whose internal energy and volume
are changing, we would write the change in entropy (a function of both U and of V) as follows:

dS = (∂S/∂U)dU + (∂S/∂V)dV = (1/T)dU + (P/T)dV, i.e., dU = T dS − P dV.
c. Creating entropy with mechanical work
Remember that compressive work (−P dV) is just one form of work. If the compression
is slow, and no other form of work is done on the system, then the volume change is
quasistatic, and W = −P dV exactly. In such a case, we are allowed to combine the
First “Law” with the thermodynamic identity to obtain

Q = T dS, or dS = Q/T.

But, if the work done on the system is greater than −P dV, then dS > Q/T. In other words, the
amount of entropy created in the system is more than that accounted for
by the heat flow into the system. This
might happen, for instance, with a compression that occurs faster than the
pressure can equalize throughout the volume of the system. It will happen if other forms of work are
being done, such as mixing, or stirring.
In a similar vein, if a gas is allowed to expand freely into a vacuum,
no work is done by the gas, and no heat flows into or out of the gas. Yet the gas is occupying a larger volume, so
its entropy is increased.


3. Chemical Potential Sec 3.5
Now consider a case in which the systems can exchange
particles as well as energy and volume.
a. Diffusive equilibrium


At diffusive equilibrium, ∂S_A/∂N_A = ∂S_B/∂N_B. Define the chemical potential as

μ ≡ −T(∂S/∂N)_{U,V}.

Evidently, the minus sign is attached so that particles will tend to diffuse from
higher toward lower chemical potential.

b. Generalized thermodynamic identity
For infinitesimal changes in the system,

dU = T dS − P dV + μ dN.

This equation contains within it all three of the partial-derivative formulas for
T, P and for μ. For instance, assume that entropy and volume are fixed. Then the
thermodynamic identity says dU = μ dN, whence we can write μ = (∂U/∂N)_{S,V}. To apply the
partial-derivative formulae to a particular case, we need specific expressions
for the interdependence of the variables, i.e., U as a function of N.



4. Expanding & Mixing Sec 2.6
a. Free expansion
Imagine a container of volume 2V, isolated from its surroundings, and with a partition that
divides the container in half. An ideal
gas is confined to one side of the container.
The gas is in equilibrium, with temperature T and Pressure P. Now, imagine removing the partition. Over time, the gas molecules will diffuse to
fill the larger volume.

However, in expanding the gas does no work, hence the phrase
free expansion. Because the container is isolated, no heat
flows into or out of the gas, nor does the number of molecules, N, change.

However, the entropy increases:

ΔS = Nk ln(V_f/V_i) = Nk ln 2.

[The expression for the entropy of an ideal gas is derived in Sec. IV B 3. It is the
Sackur-Tetrode equation, S = Nk(ln[(V/N)(4πmU/3Nh²)^(3/2)] + 5/2).]
b. Entropy of mixing
In a similar vein, we might imagine a container divided into
two chambers, each with a different ideal gas in it. When the partition is removed, both gases
diffuse to fill the larger volume. Since
the gases are ideal, one gas doesn’t really “notice” the presence of the other
as they mix. The entropy of both gases
increases, so the entropy change of the whole system is ΔS_total = 2Nk ln 2, assuming
of course that we started with the same numbers of molecules of both gases, etc.
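The magnitudes are easy to evaluate. A short sketch for a mole of gas doubling its volume, and for the mixing of a mole of each of two gases:

```python
import math

k = 1.381e-23   # Boltzmann constant, J/K
N = 6.02e23     # one mole of molecules

dS_free = N * k * math.log(2)   # free expansion into twice the volume
dS_mix = 2 * dS_free            # two different gases, each doubling its volume
print(f"free expansion: {dS_free:.2f} J/K;  mixing: {dS_mix:.2f} J/K")
```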

III. Processes
A. Cyclic Processes
1. Heat Engines & Heat Pumps Sec 4.1, 4.2
A heat engine is a device that absorbs heat from a
reservoir and converts part of it to work.
The engine carries a working substance through a PVT cycle,
returning to the state at which it starts.
It expels “waste” heat into a cold reservoir, or into its
environment. It must do this in order
that the entropy of the engine itself does not increase with every cycle.

a. Efficiency
The efficiency of the heat engine is defined as the ratio of
work done by the engine to the heat absorbed by the engine:

e ≡ W/Q_h = (Q_h − Q_c)/Q_h = 1 − Q_c/Q_h.

We’d like to express e in terms of the temperatures of the hot and cold reservoirs.
The First “Law” says that W = Q_h − Q_c. The Second “Law” says that
Q_c/Q_h ≥ T_c/T_h. Putting these together, we obtain

e ≤ 1 − T_c/T_h.

Firstly, notice that e cannot be greater than one. Secondly, e cannot be one unless
T_c = 0 K, which cannot be achieved. Thirdly, 1 − T_c/T_h is the greatest e can
be—in practice, e is less than the theoretical limit, since always Q_c/Q_h > T_c/T_h.





b. Carnot cycle
Can a cycle be devised for which e = 1 − T_c/T_h? That’s the Carnot cycle, which uses
a gas as the working substance.

i) the gas absorbs heat from the hot reservoir. To minimize the entropy created, we
need the gas temperature to be only infinitesimally below T_h; the gas is allowed to
expand isothermally in order to maintain that temperature as heat flows in.


ii) the gas expands adiabatically, doing work, and cools
from Th to Tc.
iii) the gas is compressed isothermally, during which step
heat is transferred to the cold reservoir.
iv) the gas is compressed adiabatically, and warms from Tc
to Th.
Now, for the total change in entropy to be very small, the
temperature differences between the gas and the reservoirs must be very
small. But that means that the heat
transfers are very slooow. Therefore, the
Carnot cycle is not very useful in producing useful work. [Empirically, the
rate at which heat flows is proportional to the temperature difference — Q/Δt ∝ ΔT.]

c. Heat pump
The purpose of a heat pump is to transport energy
from a cold reservoir to a hot one by doing work on the working substance. The work is necessary because the temperature
of the working fluid must be raised above that of the hot reservoir in order
for heat to flow in the desired direction.
Likewise, at the other side of the cycle the working fluid must be made
colder than the cold reservoir.
Rather than efficiency, the corresponding parameter for a
heat pump is the coefficient of performance,

COP ≡ Q_c/W.

The First “Law” says W = Q_h − Q_c. The Second “Law” says Q_c/Q_h ≤ T_c/T_h. Putting
these together, we obtain

COP ≤ T_c/(T_h − T_c).

A Carnot cycle running in reverse will give the maximum COP.
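Both limits are one-line formulas. A sketch with illustrative reservoir temperatures:

```python
def carnot_efficiency(Th, Tc):
    # Maximum efficiency of a heat engine between reservoirs at Th and Tc (K).
    return 1 - Tc / Th

def max_cop(Th, Tc):
    # Maximum coefficient of performance of a heat pump between Th and Tc (K).
    return Tc / (Th - Tc)

print(carnot_efficiency(500.0, 300.0))   # 0.4
print(max_cop(298.0, 255.0))             # about 5.9
```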



2. Otto, Diesel, & Rankine Sec 4.3
Real heat engines need to produce work at a more rapid rate
than a Carnot engine. Consequently,
their efficiencies are lower than that of a Carnot engine. Of course, real engines do not achieve even
their theoretical efficiencies due to friction and conductive heat loss through
the cylinder walls and the like.
a. Otto cycle
The Otto cycle is the basis for the ordinary 4-stroke gasoline
engine.
i) air-fuel mixture is compressed adiabatically from V1 to V2; pressure rises from P1 to P2.
ii) air-fuel mixture is ignited, the pressure rises
isochorically from P2 to P3.
iii) combustion products expand adiabatically from V2 to V1; pressure falls from P3 to P4.
iv) pressure falls isochorically from P4 to P1.

The temperatures also change from step to step. The efficiency is given by

e = 1 − (V_2/V_1)^(γ−1).

The quotient V_1/V_2 is the compression ratio. The greater the compression ratio,
the greater is the efficiency of the engine. However, T_3 is greater as well. If
T_3 is too great, the air-fuel mixture will ignite
prematurely, before the piston reaches the top of its stroke. This reduces power, and damages the piston and
cylinder. Up to a point, chemical
additives to the fuel can alleviate the premature detonation.

Notice that there is no hot reservoir per se; rather the heat source is the chemical energy released by
the combustion of the fuel.
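A sketch of the Otto efficiency as a function of compression ratio, taking air as a diatomic ideal gas with vibration frozen out (f = 5, γ = 7/5); the ratios are illustrative:

```python
f = 5                  # degrees of freedom for air (diatomic, vibration frozen out)
gamma = (f + 2) / f    # adiabatic exponent, 7/5

def otto_efficiency(r):
    # e = 1 - (V2/V1)^(gamma - 1) = 1 - r^(1 - gamma), with r = V1/V2.
    return 1 - r ** (1 - gamma)

for r in (4, 8, 10, 20):
    print(f"compression ratio {r:2d}: e = {otto_efficiency(r):.3f}")
# r = 8 gives e ~ 0.56, the theoretical figure quoted later for a gasoline engine.
```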
b. Diesel cycle
The Diesel cycle differs from the Otto cycle in that the air
is first compressed adiabatically in the cylinder, then the fuel is injected
into the hot air and ignited spontaneously, without need of a spark. The fuel injection takes place as the piston
has begun to move downward, so that constant pressure is maintained during the
fuel injection. Since the fuel is not in
the cylinder during the compression, much higher compression ratios can be
used, leading to greater efficiencies.

c. Rankine cycle
In some ways the steam engine is a more nearly exact example
of a heat engine than is the Otto engine.
No chemical reaction or combustion takes place within the working fluid,
and at least in principle the working fluid is not replaced at the beginning of
each cycle.
i) water is pumped to a high pressure into a boiler.
ii) the water is heated at constant pressure and changes to
steam (water vapour).
iii) the steam expands adiabatically, driving a piston or a
turbine, and cools and begins to condense.
iv) the partially cooled steam/water mixture is cooled
further by contact with the cold reservoir.

The efficiency of the steam engine is e = 1 − Q_c/Q_h. At constant pressure,
Q = ΔH, whence,

e = 1 − (H_4 − H_1)/(H_3 − H_2) ≈ 1 − (H_4 − H_1)/(H_3 − H_1).

Now, H_2 ≈ H_1 since the water is not compressed as it is pumped, and only a little
energy is added to the water (e.g., it’s not accelerated). So we look up the
enthalpies on the tables of enthalpy & entropy vs. temperature & pressure—page 136.




d. Throttling and refrigerators Sec 4.4
For a refrigerator to work, the temperature of the working
fluid must be made less than that of the cold reservoir. This is done through what is called a throttling process.
The working fluid passes through a narrow opening from a region
of high pressure into a region of low pressure.
In doing so, it expands adiabatically (Q = 0) and cools. As the
fluid expands, the negative potential energy of interaction among the
atoms/molecules increases and the kinetic energy decreases.

From the First “Law”, with Q = 0 and only compressive work,

U_f + P_f V_f = U_i + P_i V_i, i.e., H_f = H_i.

In a dense gas or a liquid, the potential energy of interaction is negative.
Therefore, as the gas expands, the potential energy rises, the kinetic energy falls,
and the gas/liquid cools.


Subsequently, the chilled fluid absorbs heat from the cold
reservoir and vaporizes. Therefore, the
working fluid must be a substance with a low boiling point. The compressor does the work of compressing
the gas to raise its temperature, as well as maintains the pressure difference
required for the throttle valve to work.
B. Non-cyclic Processes or Thermodynamic Potentials
1. Thermodynamic Potentials Sec 5.1
A number of thermodynamic quantities have been
defined—useful under differing conditions of fixed pressure, volume,
temperature, particle number, etc. These
are the enthalpy, the Helmholtz free energy, and the Gibbs free energy. Together with the internal energy, these are
referred to as thermodynamic potentials.
a. Enthalpy Sec 1.6
The total energy required to create a system of particles at
sea level air pressure would include the expansive work done in displacing the
air.
We define the enthalpy to be

H ≡ U + PV.

The enthalpy is useful when a change takes place in a system while pressure is constant:

ΔH = ΔU + PΔV = Q + W_other.

Now, if no other work is done, then ΔH = Q exactly. In practice, tables of measured enthalpies
for various processes, usually chemical reactions or phase transitions, are
compiled. The text mentions the enthalpy
of formation for liquid water.
Evidently, when oxygen and hydrogen gases are combined to form a mole of
liquid water, the change in enthalpy is -286 kJ. In other words, burning hydrogen at constant
pressure releases this much energy.

[figure: PV diagram]
Problem 4-29 (throttling of a refrigerant through an expansion valve):

From Table 4.3, at 12 bar the boiling point is 46.3 °C.
a) At P_f = 1 bar: H_liquid = 16 kJ, T = −26.4 °C, and H_gas = 231 kJ.
b) Starting with all liquid at P_i, the enthalpy is unchanged by the throttling, so
the fraction x that vaporizes follows from H_i = x H_gas + (1 − x) H_liquid.

b. Helmholtz
Let’s say the system is in contact with a heat bath, so that
the temperature is constant. The
pressure may not be constant. To create
the system, some of its total energy can be taken from the environment in the
form of heat. So the total work required
to create the system is not all of U,
but less than U. Define the Helmholtz Free Energy of the system as
F ≡ U − TS.

Any change in a system at constant temperature will entail a change in F:

ΔF ≤ W,

where W is all the work done on the system.
c. Gibbs
Now, if the system is at constant pressure as well as
constant temperature, then the extra work needed to create the system is the Gibbs Free Energy,
G ≡ U − TS + PV.

If pressure is constant, we use the Gibbs free energy:

ΔG ≤ W_other.

Here W_other is the work done on the system other than the compressive work −PΔV.
In a paragraph above, we burned some hydrogen. The 286 kJ released could be used to run an
Otto cycle for instance. The theoretical
efficiency of an Otto engine is about 56%.
So, at most 160 kJ are used to drive the car. It’s possible to run that reaction in a more
controlled way and extract electrical work, in a hydrogen fuel cell.

The maximum electrical work is the change in the Gibbs free energy, |ΔG| = 237 kJ
per mole of water formed. The remaining TΔS = 49 kJ has to be expelled to the
environment, and the maximum efficiency of the fuel cell alone is 237/286 ≈ 83%.
The fuel cell generates current which can run
an electrical motor or charge a battery.
Of course, in both instances, there are numerous losses of energy along
the way to driving the car.

d. Identities
If we envision infinitesimal changes in thermodynamic
variables, we can derive thermodynamic identities for the thermodynamic
potentials. We have already the thermodynamic identity for internal energy,

dU = T dS − P dV + μ dN.

Now, consider the enthalpy, H:

dH = dU + P dV + V dP = T dS + V dP + μ dN.

For instance, if dP = 0 and dN = 0, then we could write dH = T dS = Q, which is
equivalent to the ΔH = Q that we obtained earlier.

We can do the same for F and for G:

dF = −S dT − P dV + μ dN,
dG = −S dT + V dP + μ dN.

From this equation we can derive relationships like S = −(∂G/∂T)_{P,N},
V = (∂G/∂P)_{T,N}, and μ = (∂G/∂N)_{T,P}.


2. Toward Equilibrium Sec 5.2
a. System and its environment
An isolated system tends to evolve toward an equilibrium
state of maximum entropy. That is, any
spontaneous rearrangements within the system increase the entropy of the
system. Now, consider a system which is
in thermal contact with its environment.
The system will tend to evolve, by exchanging energy with the
environment. The entropy of the system
may increase or decrease, but the total entropy of the universe increases in
the process.
Let’s say that the system evolves toward equilibrium
isothermally. The environment is such a
large reservoir of energy that it can exchange energy with the system without
changing temperature. It’s a heat
bath. The total change in entropy
involved with an exchange of energy would be

dS_total = dS + dS_R, where R labels the reservoir.

Assuming the V and N for the environment are fixed, and recalling that dU_R = −dU
and T = T_R, then

dS_total = dS − dU/T = −(1/T)(dU − T dS) = −dF/T.

The increase in total entropy under conditions of constant T, V, and N is equivalent
to a decrease in the Helmholtz free energy of the system. In a similar vein, if the
system volume is not fixed, but the pressure is constant, then we have

dS_total = −(1/T)(dU − T dS + P dV) = −dG/T.
The increase in total entropy under conditions
of constant T, P, and N is equivalent to
a decrease in the Gibbs free energy of the system.
system condition             | system tendency
isolated—constant U, V, & N  | entropy increases
constant T, V, and N         | Helmholtz free energy decreases
constant T, P, and N         | Gibbs free energy decreases
b. Extensive & Intensive
The several properties of a system can be divided into two
classes—those that depend on the amount of mass in the system, and those that
do not. We imagine a system with volume V in equilibrium. The system is characterized by its mass,
number of particles, pressure, temperature, volume, chemical potential,
density, entropy, enthalpy, internal energy, Helmholtz and Gibbs free
energies. Now imagine slicing the system
in half, forming two identical systems with volumes V/2. Some properties of the
two systems are unchanged—temperature, pressure, density, and chemical potential. These are the intensive properties. The
rest are extensive—they are halved
when the original system was cut in half.
The usefulness of this concept is in checking the validity
of thermodynamic relationships. All the
terms in a thermodynamic equation must be the same type, because an extensive
quantity cannot be added to an intensive quantity. The product of an intensive quantity and an
extensive quantity is extensive. On the
other hand, dividing an extensive quantity by another yields an intensive
quantity, as in mass divided by volume gives the density.
3. Phase transformations Sec 5.3
We are familiar with water, or carbon dioxide or alcohol
changing from liquid to vapour, from solid to liquid, etc. We are aware that some metals are liquid at
room temperature while most are solid and melt if the temperature is much
higher. These are familiar phase
changes.
More generally, a phase
transformation is a discontinuous change in the properties of a substance,
not limited to changing physical structure from solid to liquid to gas, that
takes place when PVT conditions are changed only slightly.
a. Phase diagram
Which phase of a substance is the stable phase depends on
temperature and pressure. A phase diagram is a plot showing the
conditions under which each phase is the stable phase.
For something like water or carbon dioxide, the phase
diagram is divided into three regions—the solid, liquid, and gas (vapour)
regions. If we trace the P,T
values at which changes in phase take place, we trace the phase boundaries on the plot.
At those particular values of P,T the two phases can coexist in
equilibrium. At the triple point, all three phases coexist in equilibrium. The pressure on a gas-liquid or gas-solid
boundary is called the vapour pressure
of the liquid or solid.

Notice that the phase boundary between gas and liquid has an
end point, called the critical point. This signifies that at pressures and/or
temperatures beyond the critical point there is no physical distinction between
liquid and gas. The density of the gas
is so great and the thermal motion of the liquid is so great that gas and
liquid are the same.
Other sorts of phase transformations are possible, as for
instance at very high pressures there are different solid phases of ice. Similarly for carbon, there is more than one
solid phase—diamond and graphite.
Diamond is the stable phase at very high pressure while graphite is the
more stable phase at sea level air pressure.
The glittery diamonds people pay so much to possess are ever so slowly
changing into pencil lead.
Still other phase transformations are related not to
pressure, but to magnetic field strength as in the case of ferromagnets and
superconductors.
Here’s a phase diagram for water, showing the several solid
phases. They differ in crystal structure
and density, as well as other properties such as electrical conductivity.

D. Eisenberg & W. Kauzmann, The Structure and Properties of Water, Oxford Univ.
Press, 1969.
The phase diagram for water shown in the text figure 5.11 is
a teensy strip along the T axis near P = 0 on this figure. [One bar is about one atmosphere of air
pressure, so a kbar is 1000 atm.]
Here’s a phase diagram for a ferromagnet, subjected to an
external magnetic field, B. The phase boundary is just the straight segment along
the T axis. The segment ends at the critical point, at T = T_c, the Curie temperature.



b. van der Waals model
There are phases because the particles interact with each
other, in contrast to an ideal gas. The
interactions are complicated (quantum mechanics), so we create simplified,
effective models of the interparticle interactions in order to figure out what
properties of the interactions lead to the observed phase diagrams. For instance, the van der Waals model:
The model of a non-ideal gas is constructed as follows. Firstly, the atoms have nonzero volume—they
are not really point particles. So, the
volume of the system cannot go to zero, no matter how great the pressure or low
the temperature. The smallest V can possibly be, let’s say, is Nb. It’s like shifting the origin from V = 0 to V – Nb = 0. Secondly, the atoms exert forces on each
other. At short range, but not too
short, the forces are attractive—the atoms tend to pull one another closer. This has the tendency to reduce the pressure
that the system exerts outward on its environment (or container). We introduce a “correction” to the pressure
that is proportional to the density N/V and to the number of atoms in the system, N.
That is,

ΔP = aN²/V².




With the new V and
P, the gas law becomes the van der
Waals equation of state:

(P + aN²/V²)(V − Nb) = NkT.

Now, the b and a are adjustable parameters, whose values are different for different
substances. They have to be fitted to empirical data.

There are countless other equations of state. For instance, there is the virial expansion,
which is an infinite series,

PV = nRT(1 + B(T)/(V/n) + C(T)/(V/n)² + ⋯).

There is the Beattie-Bridgeman equation of state, with five adjustable parameters.
All represent “corrections” to the ideal gas equation of
state.
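A minimal sketch of a van der Waals isotherm, using molar constants roughly those of carbon dioxide (the a and b values are assumptions for illustration, not fitted values from the text). Below the critical temperature the computed P(V) is not monotonic, which is the loop discussed below:

```python
R = 8.315  # gas constant, J/(mol K)

def P_vdw(V, T, n=1.0, a=0.364, b=4.27e-5):
    # van der Waals pressure; a, b roughly the CO2 values (illustrative)
    return n * R * T / (V - n * b) - a * n**2 / V**2

T = 270.0  # K, below the ~304 K critical temperature these constants imply
for i in range(10):
    V = 5e-5 + i * 2e-5            # molar volume, m^3
    print(f"V = {V:.2e} m^3: P = {P_vdw(V, T) / 1e5:9.1f} bar")
# The pressure falls, rises, and falls again as V increases:
# the non-monotonic stretch marks the liquid-gas transition region.
```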
c. Gibbs free energy—Clausius-Clapeyron—PV diagram
Which phase is stable at a given temperature and pressure is
that phase with the lowest Gibbs free energy.
On a phase boundary, where two phases coexist, there must be a common
Gibbs free energy. That is, on the
boundary between liquid and gas, G_liquid = G_gas. Imagine changing the pressure and
temperature by small amounts in such a way as to remain on the phase boundary. Then
the Gibbs free energy changes in such a way that dG_l = dG_g, i.e.,
−S_l dT + V_l dP = −S_g dT + V_g dP, whence

dP/dT = (S_g − S_l)/(V_g − V_l).

We’ve assumed that dN
= 0. This result is the slope of the
phase boundary curve on the PT
diagram.
Commonly, we express the change in entropy in terms of the
latent heat of the transformation, L = TΔS, thusly

dP/dT = L/(TΔV).

This is the Clausius-Clapeyron relation, applicable to any PT phase boundary.
Finally, we compute the Gibbs free energy for the van der
Waals model at a variety of temperatures and pressures to determine which phase
is stable in each case.


Let dN = 0, and fix the temperature, varying only P. Then dG = V dP, so

(∂G/∂V)_{N,T} = V(∂P/∂V)_{N,T}.

Integrate over V, using the van der Waals expression for P(V), to obtain G up to a
constant:

G = ∫ V (∂P/∂V)_T dV + c(T).

We have now expressions for both P and G as functions of V, at a fixed temperature, T.
Firstly, we plot G vs. P at that fixed T. This yields a graph with a loop in it. The
loop represents unstable states, since the Gibbs free energy is not a minimum.
Integrating dG around the closed loop should give zero.

We plot the same free-energy points on a PV diagram, and obtain an isotherm something like that shown on the diagram below right.


The pressure at which the phase transition occurs, at
temperature T, is that value of P where the two shaded areas
cancel. So, tracking along an isotherm
from right to left, the gas is compressed and pressure rises until that
horizontal section is reached. At that
point, further compression does not increase the pressure because the gas is
condensing to liquid. When the phase
transition is complete, further compression causes a steep increase in
pressure, with little decrease of volume, as the liquid is much less
compressible than the gas. During the
transition, both gas and liquid phases coexist.
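Here’s a minimal numerical sketch of that equal-area construction, in the reduced units where the critical point sits at $P_c = V_c = T_c = 1$ and the isotherm is $P = 8T/(3V-1) - 3/V^2$. [The search brackets and tolerances are illustrative choices, not values from the text.]

    import numpy as np
    from scipy.optimize import brentq, minimize_scalar

    def P_vdw(V, T):
        # van der Waals isotherm in reduced units
        return 8*T/(3*V - 1) - 3/V**2

    def maxwell_pressure(T):
        # spinodal points: the local min and max of the isotherm (T < 1 assumed)
        Vmin = minimize_scalar(lambda V: P_vdw(V, T), bounds=(0.34, 1.0), method='bounded').x
        Vmax = minimize_scalar(lambda V: -P_vdw(V, T), bounds=(1.0, 20.0), method='bounded').x

        def area_difference(P0):
            # outermost intersections of the line P = P0 with the isotherm
            V1 = brentq(lambda V: P_vdw(V, T) - P0, 1/3 + 1e-9, Vmin)
            V3 = brentq(lambda V: P_vdw(V, T) - P0, Vmax, 1e3)
            V = np.linspace(V1, V3, 4001)
            # vanishes when the two shaded areas cancel
            return np.trapz(P_vdw(V, T) - P0, V)

        return brentq(area_difference, P_vdw(Vmin, T) + 1e-9, P_vdw(Vmax, T) - 1e-9)

    print(maxwell_pressure(0.9))   # ~0.647, the transition pressure at T = 0.9 Tc

Repeating this for a range of T < Tc and plotting the resulting (T, P) pairs traces out the liquid-gas phase boundary on the PT diagram.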


If the temperature is high enough, there is no phase
transition as V decreases. On a PT diagram, we see a phase
boundary between the liquid and gas phases, up to the critical point, where the
boundary terminates. Above the critical
point, there is no distinction between gas and liquid. That would correspond to the isotherms having
no flat segment on the PV diagram.
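Incidentally, the model locates that critical point itself: on the critical isotherm, $\left(\dfrac{\partial P}{\partial V}\right)_T = \left(\dfrac{\partial^2 P}{\partial V^2}\right)_T = 0$. Carrying out the derivatives of $P = \dfrac{NkT}{V - Nb} - \dfrac{aN^2}{V^2}$ and solving the pair of equations gives the standard result

$V_c = 3Nb, \qquad kT_c = \dfrac{8a}{27b}, \qquad P_c = \dfrac{a}{27b^2}$

so measured critical constants determine the fitted a and b.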
The van der Waals model is not very accurate in reality, but
it does illustrate how the observed phase behavior arises from the interactions
among the atoms or molecules of the substance.
IV. Statistical Mechanics
A. Partition Function
1. Boltzmann Sec 6.1
a. Multiplicity
Consider a system, in contact with an energy reservoir, consisting of N weakly interacting identical particles. The energy of each particle is quantized, the energy levels labeled by $E_i$. Previously, we associated a multiplicity, $g_i$, with each different energy level. But, we could just as well list each microstate separately. That is, each particle energy level, $E_i$, has multiplicity of 1, but some energy values occur $g_i$ times in the list. At equilibrium, the total internal energy of the system of N particles is constant (apart from small fluctuations):

$U = \sum_i N_i E_i = \text{constant}, \qquad N = \sum_i N_i = \text{constant}$

where $N_i$ is the number of particles occupying the level $E_i$.

b. Equilibrium
The equilibrium state of the system is the state that maximizes the multiplicity, $\Omega\{N_i\}$, subject to the constraints N = constant and U = constant. We want to solve for the $N_i$ that maximize $\Omega$.

With every state listed separately, the multiplicity for distinguishable particles is

$\Omega = \dfrac{N!}{\prod_i N_i!}$

For indistinguishable particles,

$\Omega = \dfrac{1}{\prod_i N_i!}$

We’ll apply the method of undetermined multipliers to determine the equilibrium distribution of particles among the states, that is, the $N_i$. Using the Stirling approximation, a small redistribution $\{dN_i\}$ changes $\ln\Omega$ by

$d\ln\Omega = -\sum_i \ln N_i\,dN_i = 0$

[At this point, notice that because dN = 0, the $d\ln\Omega$ for distinguishable particles is exactly the same as for indistinguishable particles. So, we only have to do this once.]

The two constraints, multiplied by the undetermined multipliers $\alpha$ and $-\beta$, read $\alpha\sum_i dN_i = 0$ and $-\beta\sum_i E_i\,dN_i = 0$.

Add the three equations.

$\sum_i\left(-\ln N_i + \alpha - \beta E_i\right)dN_i = 0$

Each term must be zero separately.

$\ln N_i = \alpha - \beta E_i$

Solve for Ni.

$N_i = e^{\alpha}\,e^{-\beta E_i}$

Now we determine the multipliers by applying the constraints. The first one is easy:

$N = \sum_i N_i = e^{\alpha}\sum_i e^{-\beta E_i} \quad\Longrightarrow\quad e^{\alpha} = \dfrac{N}{\sum_i e^{-\beta E_i}}$

The second one, $\beta$, is a bit more complicated. If the system were to exchange a small amount of energy, dU, with the reservoir, the entropy would change as small alterations occur in the $\{N_i\}$:

$dS = k\,d\ln\Omega = -k\sum_i \ln N_i\,dN_i = k\sum_i\left(\beta E_i - \alpha\right)dN_i = k\beta\,dU$

using $\sum_i dN_i = 0$. Since $\partial S/\partial U = 1/T$, it follows that $\beta = 1/kT$, and

$P(E_i) = \dfrac{N_i}{N} = \dfrac{e^{-E_i/kT}}{\sum_i e^{-E_i/kT}}$

This is the Boltzmann probability distribution. The greater the energy of a particle state, the less likely a particle is to be in that state. [For atoms, we are making the ground state zero, $E_1 = 0$.]



As T increases,
though, the likelihood of a higher-energy state being occupied increases as
well. The exponential decay of P is slower at higher temperatures.
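Here’s a minimal numerical illustration of that trend. [The three level energies, in eV, are made-up values chosen for display, not from the text.]

    import numpy as np

    k_eV = 8.617e-5                      # Boltzmann constant in eV/K
    E = np.array([0.0, 0.05, 0.20])      # energy levels, ground state set to zero

    for T in (100, 300, 1000, 5000):
        boltz = np.exp(-E/(k_eV*T))      # Boltzmann factors
        P = boltz/boltz.sum()            # divide by the partition function
        print(f"T = {T:5d} K   P = {np.round(P, 4)}")

At 100 K essentially only the ground state is occupied; by 5000 K the occupancies of the three states are approaching one another.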
c. Partition function
The sum over states (Zustandssumme) is called the partition function:

$Z = \sum_s e^{-E_s/kT}$

In principle, it’s a sum over all the particle states of a system, and therefore contains the statistical information about the system. All of the thermodynamic properties of the system are derivable from the partition function. Anticipating the identification $F = -kT\ln Z$ made in Sec 6.5 below:

Entropy: $S = -\left(\dfrac{\partial F}{\partial T}\right)_{V,N} = k\ln Z + kT\left(\dfrac{\partial \ln Z}{\partial T}\right)_{V,N}$

Pressure: $P = -\left(\dfrac{\partial F}{\partial V}\right)_{T,N} = kT\left(\dfrac{\partial \ln Z}{\partial V}\right)_{T,N}$
2. Probability Sec 6.2
a. Ensembles (awnsombles)
microcanonical
In an isolated system, every microstate has equal
probability of being occupied. The total
energy is fixed. The collection of all
the possible microstates of the system is called the microcanonical ensemble
of states.
canonical
If the probability distribution is the Boltzmann
distribution, the collection of energy states is called the canonical
ensemble. We’ve seen that such a
distribution applies to a system at constant temperature and fixed number of
particles, in contact with an energy reservoir.
The internal energy is not fixed, but we expect only small fluctuations
from an equilibrium value.
If the number of particles is allowed to change, then we
have to sum also over all possible numbers of particles, as well as all
possible energy states, giving the grand canonical ensemble.
b. Average values
The probability that a particle will be observed to occupy a particular energy state is given by the Boltzmann distribution, $P(s) = e^{-E_s/kT}/Z$. The average value of any quantity X that depends on the state is therefore

$\bar{X} = \sum_s X(s)\,P(s) = \dfrac{1}{Z}\sum_s X(s)\,e^{-E_s/kT}$

For instance, the average energy is $\bar{E} = \dfrac{1}{Z}\sum_s E_s\,e^{-E_s/kT} = -\dfrac{\partial \ln Z}{\partial \beta}$, with $\beta = 1/kT$.
3. A Couple of Applications
a. Equipartition Theorem
Sec 6.3
A particle’s kinetic energy is proportional to the square of its velocity components. In Cartesian coordinates, $K = \frac{1}{2}mv_x^2 + \frac{1}{2}mv_y^2 + \frac{1}{2}mv_z^2$. In a similar vein, the rotational kinetic energy of a rigid body is also proportional to the square of the angular velocity, thus $K_{rot} = \frac{1}{2}I\omega^2$. In the case of a harmonic oscillator, the potential energy is also proportional to a square, namely the displacement components, $U = \frac{1}{2}k_s x^2$. Very often, we approximate the real force acting on a particle with the linear restoring force of the harmonic oscillator. Let us consider a generalized quadratic degree of freedom, $E(q) = cq^2$. Each value that q takes on represents a distinct particle state. The energy is quantized, so the q-values are discrete, with spacing $\Delta q$.

The partition function for this “system” is a sum over those q-states:

$Z = \sum_q e^{-\beta cq^2} = \dfrac{1}{\Delta q}\sum_q e^{-\beta cq^2}\,\Delta q$

In the classical limit $\Delta q$ is small, and the sum goes over to an integral:

$Z = \dfrac{1}{\Delta q}\int_{-\infty}^{\infty} e^{-\beta cq^2}\,dq = \dfrac{1}{\Delta q}\sqrt{\dfrac{\pi}{\beta c}} = C\,\beta^{-1/2}$

The average energy is

$\bar{E} = -\dfrac{\partial \ln Z}{\partial \beta} = \dfrac{1}{2\beta} = \dfrac{1}{2}kT$

So, each quadratic degree of freedom, at equilibrium, will have the same average energy, $\frac{1}{2}kT$. But, this equipartition of energy theorem is valid only in the classical, high-temperature limit: that is, when the energy level spacing is small compared to kT. We saw earlier that degrees of freedom can be “frozen out” as temperature declines.
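Here’s a quick numerical check of both limits, summing the partition function directly over the discrete q-states. [Units with $c = \Delta q = k = 1$ are an illustrative choice, not from the text.]

    import numpy as np

    def average_energy(T, n_max=20000):
        q = np.arange(-n_max, n_max + 1)     # the discrete q-states
        E = q.astype(float)**2               # E = c q^2, with c = 1
        w = np.exp(-E/T)                     # Boltzmann weights
        return (E*w).sum()/w.sum()

    for T in (0.1, 1.0, 10.0, 100.0):
        print(f"T = {T:6.1f}   <E> = {average_energy(T):8.4f}   kT/2 = {T/2:6.2f}")

At low T the degree of freedom is frozen out ($\bar{E} \approx 0$); once kT is large compared to the level spacing, $\bar{E} \to kT/2$.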
b. Maxwell Speed Distribution Sec 6.4
The distribution function has two parts. The first is the probability of an atom having a particular velocity vector, $\vec{v}$. That’s given by the Boltzmann probability distribution, since the kinetic energy is $\frac{1}{2}mv^2$:

$P(\vec{v}) \propto e^{-mv^2/2kT}$

The second factor is a multiplicity factor: how many velocity vectors have the same magnitude, v. The number of velocity vectors that have the same magnitude is obtained by computing the surface area of a sphere of radius v in velocity space. That is, $4\pi v^2$. Therefore

$D(v) = C\,v^2\,e^{-mv^2/2kT}$

The C is a proportionality constant, which we evaluate by normalizing the distribution function:

$\int_0^{\infty} D(v)\,dv = 1 \quad\Longrightarrow\quad C = 4\pi\left(\dfrac{m}{2\pi kT}\right)^{3/2}, \qquad D(v) = \left(\dfrac{m}{2\pi kT}\right)^{3/2} 4\pi v^2\,e^{-mv^2/2kT}$
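From the normalized distribution follow the standard characteristic speeds: setting $dD/dv = 0$ gives the most probable speed, direct integration gives the mean, and a Gaussian integral gives the rms speed:

$v_{mp} = \sqrt{\dfrac{2kT}{m}}, \qquad \bar{v} = \sqrt{\dfrac{8kT}{\pi m}}, \qquad v_{rms} = \sqrt{\dfrac{3kT}{m}}$

(The rms result is just equipartition: $\frac{1}{2}m\overline{v^2} = \frac{3}{2}kT$.)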



B. Adding up the States Sec 1.2, 1.7, 2.5, 6.6, 6.7
1. Two-State Paramagnet Sec 6.6
The specifics of computing the partition function for a
system depend on the nature of the system—the specifics of its energy
levels. For instance, each dipole moment
in an ideal two-state paramagnet in an external magnetic field has two discrete
states.
a. Single dipole
There are two states in a system consisting of a single magnetic dipole: aligned with the field, with energy $-\mu B$, and anti-aligned, with energy $+\mu B$. Therefore, the partition function is

$Z_1 = e^{+\mu B/kT} + e^{-\mu B/kT} = 2\cosh\left(\dfrac{\mu B}{kT}\right)$
b. Two or more dipoles
If the system consists of two non-interacting, distinguishable dipoles, then there are four states: $\uparrow\uparrow,\ \uparrow\downarrow,\ \downarrow\uparrow,\ \downarrow\downarrow$, and

$Z = e^{2\mu B/kT} + 1 + 1 + e^{-2\mu B/kT} = Z_1^2$

Now if the dipoles are indistinguishable, there are fewer distinct states, namely three for N = 2, so $Z = e^{2\mu B/kT} + 1 + e^{-2\mu B/kT}$. This is because the states $\uparrow\downarrow$ and $\downarrow\uparrow$ are the same state if the dipoles are indistinguishable.

Extending to N dipoles,

$Z = Z_1^N$ for distinguishable dipoles;

$Z \approx \dfrac{Z_1^N}{N!}$ for indistinguishable dipoles, if N is large.
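Everything thermodynamic about the ideal paramagnet then follows from $Z_1$. For instance, the average energy per dipole and the net magnetization of N dipoles are the standard results

$\bar{E} = -\dfrac{\partial \ln Z_1}{\partial \beta} = -\mu B\tanh\left(\dfrac{\mu B}{kT}\right), \qquad M = N\mu\tanh\left(\dfrac{\mu B}{kT}\right)$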


2. Ideal Gas Sec 6.7
a. One molecule
For a single molecule, the partition function factors:

$Z_1 = Z_{tr}\,Z_{rot}\,Z_{int}$

with the translational, rotational, and internal (vibrational) factors worked out below.
b. Two or more molecules
If the molecules are not interacting, then as before, the partition function for N molecules is just $Z = Z_1^N$ or $Z = \dfrac{Z_1^N}{N!}$, depending on whether the molecules are distinguishable or not.
In the ideal gas, the molecules are not distinguishable one from
another. If the molecules were in a
solid, then they would be distinguishable because their positions would be
distinctly different.


[Note that in the text Section 6.7, the rotational partition
function is lumped in with the internal partition function.]
c. Internal partition function
The internal partition function sums over the internal vibrations of the constituent atoms. We would usually approximate the energy levels by harmonic oscillator energy levels (measuring each mode’s energy from its ground state):

$Z_{int} = \prod_i \sum_{n=0}^{\infty} e^{-n h f_i/kT} = \prod_i \dfrac{1}{1 - e^{-h f_i/kT}}$

The index i labels the vibrational modes, while n labels the uniformly spaced energy levels for each mode. For instance, the water molecule has three intramolecular modes. A molecule having more atoms has more modes. A diatomic molecule has just one mode of vibration.
d. Rotational partition function
A molecule is constrained to a particular shape (internal vibrational motions apart), which we regard as rotating like a rigid body. The angular momentum, and therefore the rotational kinetic energy, is quantized, thusly: $E_{rot} = \dfrac{L^2}{2I}$, where I is the moment of inertia of the molecule about the rotational axis. Classically, if $\omega$ is the angular velocity and L is the magnitude of the angular momentum, then the kinetic energy of rotation is $\frac{1}{2}I\omega^2 = \dfrac{L^2}{2I}$. Quantum mechanically, the angular momentum is quantized, so that $L^2 = j(j+1)\hbar^2$, with j equaling an integer. The rotational energy levels are

$E(j) = \dfrac{j(j+1)\hbar^2}{2I}, \qquad j = 0, 1, 2, \ldots$

each with degeneracy $2j + 1$. Unless the temperature is very low, the sum over levels goes over to an integral:

$Z_{rot} = \sum_j (2j+1)\,e^{-j(j+1)\hbar^2/2IkT} \approx \int_0^{\infty} (2j+1)\,e^{-j(j+1)\hbar^2/2IkT}\,dj = \dfrac{2IkT}{\hbar^2}$
Now, this applies to each distinct axis of rotation. In three dimensions, we start with three axes, but the symmetry of the molecule may reduce that number. The water molecule has three axes, but a carbon monoxide molecule, being linear, has only one distinct moment of inertia. Basically, we look for axes about which the molecule has a different moment of inertia, I. But it goes beyond that. If the symmetry of the molecule is such that we couldn’t tell, so to speak, whether the molecule was turning, then that axis does not count. That’s why there are no states for rotation about the axis that runs through the carbon and oxygen atoms of carbon monoxide.
Therefore, a rotational partition function will look something like this for three axes:

$Z_{rot} \sim \left(\dfrac{2I_1kT}{\hbar^2}\cdot\dfrac{2I_2kT}{\hbar^2}\cdot\dfrac{2I_3kT}{\hbar^2}\right)^{1/2} \propto T^{3/2}$

(up to a numerical factor of order one, and divided by a symmetry number when some orientations of the molecule are indistinguishable).
e. Translational partition function
In an ideal gas, the molecules are not interacting with each other. So the energy associated with the molecular center of mass is just the kinetic energy, $E = \dfrac{p^2}{2m}$. The molecule is confined to a finite volume, V, so that kinetic energy is quantized also.

First consider a molecule confined to a finite “box” of length $L_x$ on the x-axis. The wave function is limited to standing wave patterns of wavelengths $\lambda_n = \dfrac{2L_x}{n_x}$, where $n_x$ = 1, 2, 3, 4, . . . This means that the x-component of the momentum is limited to the discrete values $p_x = \dfrac{h}{\lambda_n} = \dfrac{h n_x}{2L_x}$. The allowed values of kinetic energy follow as

$E_{n_x} = \dfrac{p_x^2}{2m} = \dfrac{h^2 n_x^2}{8mL_x^2}$
Naturally, the same argument holds for motion along the y- and z-axes.

Unless the temperature is very low, or the volume V is very small, the spacing between energy levels is small and we can go over to integrals:

$Z_x = \sum_{n_x} e^{-h^2n_x^2/8mL_x^2kT} \approx \int_0^{\infty} e^{-h^2n_x^2/8mL_x^2kT}\,dn_x = \dfrac{L_x}{h}\sqrt{2\pi mkT}$

and likewise for the y- and z-directions, so $Z_{tr} = Z_xZ_yZ_z$. The quantity

$v_Q = \left(\dfrac{h}{\sqrt{2\pi mkT}}\right)^3$

is the quantum volume of a single molecule. It’s a box whose side is proportional to the de Broglie wavelength of the molecule. In terms of that, the $Z_{tr} = \dfrac{V}{v_Q}$.

[Actually, in the classical form of the partition function, we are integrating over the possible (continuous) values of particle momentum and position:

$Z_{cl} = \int e^{-E(\vec{r},\vec{p})/kT}\,d^3r\,d^3p = h^3\,\dfrac{V}{v_Q}$

The classical partition functions differ from the classical limit of the quantum mechanical partition functions by factors of h. Because h is constant, this makes no difference in the derivatives of the logarithm of Z.]
Putting the parts all together, for a collection of N indistinguishable molecules:

$Z = \dfrac{1}{N!}\left(\dfrac{V}{v_Q}\,Z_{rot}\,Z_{int}\right)^N$
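As a sanity check that an ordinary gas sits deep in this classical regime, compare $v_Q$ with the volume per molecule [helium at room conditions; the constants are standard values]:

    import numpy as np

    h, k = 6.626e-34, 1.381e-23           # SI units
    m = 6.646e-27                         # mass of a helium atom, kg
    T, P = 300.0, 1.013e5

    v_Q = (h/np.sqrt(2*np.pi*m*k*T))**3   # quantum volume
    v_per_molecule = k*T/P                # V/N from the ideal gas law

    print(f"v_Q = {v_Q:.2e} m^3,  V/N = {v_per_molecule:.2e} m^3,"
          f"  ratio = {v_per_molecule/v_Q:.1e}")

The ratio comes out around $10^5$, so Boltzmann counting (the 1/N!) is an excellent approximation here; quantum statistics would matter only at very low temperature or very high density.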

3. Thermodynamic Properties of the Ideal Monatomic Gas
a. Helmholtz free energy Sec 6.5
Consider the derivative of the (logarithm of the) partition function with respect to temperature:

$\dfrac{\partial \ln Z}{\partial T} = \dfrac{1}{Z}\sum_s \dfrac{E_s}{kT^2}\,e^{-E_s/kT} = \dfrac{U}{kT^2}$

On the other hand, recall the definition of the Helmholtz free energy:

$F = U - TS, \qquad dF = -S\,dT - P\,dV + \mu\,dN \quad\Longrightarrow\quad S = -\left(\dfrac{\partial F}{\partial T}\right)_{V,N}$

so that $U = F + TS = -T^2\left(\dfrac{\partial (F/T)}{\partial T}\right)_{V,N}$. Comparing the two expressions, $-F/kT$ and $\ln Z$ have the same temperature derivative. Evidently, we can identify the Helmholtz free energy in terms of the partition function thusly:

$F = -kT\ln Z$

For the monatomic ideal gas, $Z = \dfrac{1}{N!}\left(\dfrac{V}{v_Q}\right)^N$, whence (using the Stirling approximation)

$F = -kT\ln Z = -NkT\left[\ln\left(\dfrac{V}{Nv_Q}\right) + 1\right]$
b. Energy & heat capacity
From $U = kT^2\,\dfrac{\partial \ln Z}{\partial T}$, and since $v_Q \propto T^{-3/2}$ makes $\ln Z = \frac{3}{2}N\ln T$ plus terms independent of T,

$U = \dfrac{3}{2}NkT, \qquad C_V = \left(\dfrac{\partial U}{\partial T}\right)_V = \dfrac{3}{2}Nk$
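It is worth recording the entropy as well, since the free-expansion and mixing results of Sec 2.6 use it. Carrying out $S = -\left(\partial F/\partial T\right)_{V,N}$ on the F above (again with $v_Q \propto T^{-3/2}$) gives the Sackur-Tetrode equation:

$S = Nk\left[\ln\left(\dfrac{V}{Nv_Q}\right) + \dfrac{5}{2}\right]$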

4. Solids Sec 2.2, 3.3, 7.5
5. Photons Sec 7.4

What we have here is the energy per unit volume per unit photon energy,

$u(\epsilon) = \dfrac{8\pi}{(hc)^3}\,\dfrac{\epsilon^3}{e^{\epsilon/kT} - 1}$

also called the spectrum of the photons. It’s named the Planck spectrum, after the fellow who first worked it out, Max Planck.

Notice that the total energy density is

$\dfrac{U}{V} = \int_0^{\infty} u(\epsilon)\,d\epsilon = \dfrac{8\pi^5}{15}\,\dfrac{(kT)^4}{(hc)^3} \propto T^4$

and that the spectrum peaks at $\epsilon \approx 2.82\,kT$. These “Laws” had been obtained empirically, and called the Stefan-Boltzmann “Law” and Wien’s Displacement “Law.”
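The peak location comes from setting $du/d\epsilon = 0$, which reduces to $x = 3\left(1 - e^{-x}\right)$ for $x = \epsilon/kT$; a few lines of fixed-point iteration confirm the 2.82 [a quick sketch, not from the text]:

    from math import exp

    x = 3.0                      # starting guess for x = epsilon/kT
    for _ in range(50):          # iterate x = 3(1 - e^(-x)); converges since |g'| < 1
        x = 3*(1 - exp(-x))
    print(round(x, 4))           # 2.8214, i.e. the spectrum peaks at ~2.82 kT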


d. Black body radiation
Of course, the experimentalists were measuring the spectra
of radiation from various material bodies at various temperatures. Perhaps we should verify that the radiation
emitted by a material object is the same as the spectrum of photon energies in
the oven. So, consider an oven at
temperature T, and imagine a small
hole in one side. What is the spectrum
of photons that escape through that hole?
Well, the spectrum of the escaping photons must be the same as the
photon gas in the oven, since all photons travel at the same speed, c.
By a similar token, the energy emitted through the hole is proportional
to T4.

Finally, we might consider a perfectly absorbing material
object exchanging energy by radiation with the hole in the oven. In equilibrium (at the same T as the oven), the material object (the
black body) must radiate the same power and spectrum as the hole, else they
would be violating the Second “Law” of thermodynamics.
6. Specific Heat (Heat Capacity) of Solids Sec. 7.5
b. Debye theory of specific heat
The oscillators do not vibrate independently. Rather, there are collective modes of vibration in the crystal lattice. We’ll treat the situation as elastic waves propagating in the solid. Firstly, we consider that the energy residing in an elastic wave of frequency $f$ is quantized, in units of $hf$; quanta of elastic vibrations are called phonons. Secondly, there is an upper limit to the frequency that can exist in the crystal: the cut-off frequency.

So, consider sound waves propagating in the crystal, with the dispersion relation $f = c_s/\lambda$. The total vibrational energy of the crystal will be

$U = \sum_{modes} \bar{E}(f)$

The average energy of a mode is still

$\bar{E}(f) = \dfrac{hf}{e^{hf/kT} - 1}$

and in a continuous medium the number of modes with frequency between $f$ and $f + df$ (three polarizations included) is $g(f)\,df = \dfrac{12\pi V}{c_s^3}\,f^2\,df$. Therefore,

$U = \int g(f)\,\bar{E}(f)\,df = \dfrac{12\pi Vh}{c_s^3}\int \dfrac{f^3}{e^{hf/kT} - 1}\,df$

Now, what is the range of frequency? Not $0 \le f < \infty$, but $0 \le f \le f_D$, where $f_D$ is the Debye frequency, or cut-off frequency. The cut-off frequency arises because the shortest possible wavelength is determined by the inter-atomic spacing. Put another way, the maximum possible number of vibrational modes in the crystal is equal to the number of atoms (let’s say a mole) in the crystal, times 3. I.e.,

$3N = \int_0^{f_D} g(f)\,df = \dfrac{4\pi V}{c_s^3}\,f_D^3$

which fixes $f_D = c_s\left(\dfrac{3N}{4\pi V}\right)^{1/3}$. Defining the Debye temperature $T_D = hf_D/k$ and substituting $x = hf/kT$, the energy becomes

$U = \dfrac{9NkT^4}{T_D^3}\int_0^{T_D/T}\dfrac{x^3}{e^x - 1}\,dx$

At high temperature this reproduces $U = 3NkT$ and the Dulong-Petit heat capacity; at low temperature it gives $C_V \propto T^3$.
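A minimal numerical sketch of the resulting heat capacity, $C_V = \partial U/\partial T$, per mole. [The Debye temperature $T_D = 428$ K is a commonly quoted literature value for aluminum, assumed here for illustration; it is not from the text.]

    from math import exp, expm1
    from scipy.integrate import quad

    k_B = 1.380649e-23       # J/K
    N_A = 6.022e23           # atoms per mole

    def c_v_debye(T, T_D):
        # C_V = 9 N k (T/T_D)^3 * integral_0^{T_D/T} of x^4 e^x / (e^x - 1)^2 dx
        integral, _ = quad(lambda x: x**4 * exp(x) / expm1(x)**2, 0.0, T_D/T)
        return 9 * N_A * k_B * (T/T_D)**3 * integral

    for T in (10, 50, 100, 300, 1000):
        print(f"T = {T:5d} K   C_V = {c_v_debye(T, 428.0):7.3f} J/K per mole")

At high temperature the output approaches the Dulong-Petit value $3R \approx 24.9$ J/K per mole, and at low temperature it falls off as $T^3$, as measured.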









c. Reduced temperature
I. First “Law” of Thermodynamics
A. Temperature Sec 1.1
1. Energy
a. Definition of energy
Energy is a
fundamental physical concept. My
favorite dictionary gives as its 4th definition of energy: 4. Physics. Capacity for performing work. So, now we go to the W section for the 13th
definition of work: 13. Mech.
The transference of energy by a process involving the motion of the
point of application of a force, as when there is movement against a resisting
force or when a body is given acceleration; it is measured by the product of
the force and the displacement of its point of application in the line of
action. [How about the definition of
work number 10. The foam or froth
caused by fermentation, as in cider, in making vinegar, etc.]
That definition of work is adequate as a definition of
mechanical work, but that definition of the word energy is nearly useless. Of course, that’s what dictionaries do,
define words in terms of other words in endless circles—they are about usages
more than meanings. A fundamental
concept cannot be defined in terms of other concepts; that is what fundamental means.
b. Conservation of energy
We can list the forms that energy might take. In effect, we are saying that if such and
such happens in or to a system, then the energy of the system changes. There is potential energy, which is related
to the positions of parts of the system.
There is kinetic energy, which is related to the movements of parts of
the system. There is rest energy, which
is related to the amount of matter in the system. There is electromagnetic energy, chemical
energy, and nuclear energy, and more.
We find that for an isolated
system, the total amount of energy in the system does not change. Within the system, energy may change from one
form to another, but the total energy of all forms is a conserved
quantity. Now, if a system is not
isolated from the rest of the universe, energy may be transferred into or out
of the system, so the total energy of such a system may rise or fall.
2. Thermal Equilibrium
a. Temperature
Consider two objects, each consisting of a very large number
of atoms and/or molecules. Here, very
large means at least several multiples of Avogadro’s Number of particles. We call such an object a macroscopic object. Consider
that these two objects (they may be two blocks of aluminum, for instance,
though they need not be the same material—they might be aluminum and wood, or
anything) are isolated from the rest of the universe, but are in contact with each other.

We observe that energy flows spontaneously from one
block (A) to the other (B). We say that
block A has a higher temperature than block B. In fact, we say that the energy flow occurs because
the blocks have different temperatures.
We further observe that after the lapse of some time, called the relaxation time, the flow of energy from
A to B ceases, after which there is zero net transfer of energy between the
blocks. At this point the two blocks are
in thermal equilibrium with each
other, and we would say that they have the same temperature.
b. Heat
The word heat
refers to energy that is transferred, or energy that flows, spontaneously by
virtue of a difference in temperature.
We often say heat flows into a system or out of a system, as for
instance heat flowed from block A to block B above. It is incorrect to say that heat
resides in a system, or that a system contains a certain amount of heat.
There are three mechanisms of energy transfer: conduction,
convection, and radiation. Two objects,
or two systems, are said to be in contact
if energy can flow from one to the other.
The most obvious example is two aluminum blocks sitting side by side,
literally touching. However, another
example is the Sun and the Earth, exchanging energy by radiation. The Sun has the higher temperature, so there
is a net flow of energy from the Sun to the Earth. The Sun and the Earth are in contact.
c. Zeroth “Law” of Thermodynamics
Two systems in thermal equilibrium with each other have the
same temperature. Clearly, if we
consider three systems, A, B, & C, if A & B are in thermal equilibrium,
and A & C are in thermal equilibrium, then B & C are also in thermal equilibrium,
and all three have the same temperature.

3. Thermometers
a. Temperature scales
What matter are temperature differences. We can feel that one object is hotter than
another, but we would like to have a quantitative measure of temperature. A number of temperature scales have been
devised, based on the temperature difference between two easily recognized
conditions, such as the freezing and boiling of water. Beyond that, the definition of a degree of
temperature is more or less arbitrary.
The Fahrenheit scale has 180 degrees between the freezing and boiling
points, while the Celsius scale has 100.
Naturally, we find 100 more convenient than 180. On the other hand, it turns out that the
freezing and boiling points of water are affected by other variables,
particularly air pressure. Perhaps some
form of absolute scale would be more useful.
Such a scale is the Kelvin scale, called also the absolute temperature scale.
The temperature at which the pressure of a dilute gas at fixed volume
would go to zero is called the absolute
zero temperature. Kelvin
temperatures are measured up from that lowest limit. The unit of absolute temperature is the kelvin (K), equal in size to a degree
Celsius. It turns out that 0 K = -273.15 oC. [The text continues to label non-absolute
temperatures with the degree symbol: oC, etc., as does the
introductory University Physics
textbook. The latter also claims that temperature
intervals are labeled with the degree symbol following the letter, as Co.
That’s silly.]
b. Devices
Devices to measure temperature take advantage of a thermal
property of matter—material substances expand or contract with changes in
temperature. The electrical conductivity
of numerous materials changes with temperature.
In each case, the thermometer must itself be brought into thermal equilibrium
with the system, so that the system and the thermometer are at the same
temperature. We read a number from the
thermometer scale, and impute that value to the temperature of the system. There are bulb thermometers, and bi-metallic
strip thermometers, and gas thermometers, and thermometers that detect the
radiation emitted by a surface. All
these must be calibrated, and all have limitations on their accuracies and
reliabilities and consistencies.
B. Work
1. First “Law” of Thermodynamics Sec 1.4
a. Work
Heat is defined as the spontaneous flow of energy into or
out of a system caused by a difference in temperature between the system and
its surroundings, or between two objects whose temperatures are different. Any other transfer of energy
into or out of a system is called work. Work takes many forms, moving a piston, or
stirring, or running an electrical current through a resistance. Work is the non-spontaneous transfer
of energy. Question: is lighting a
Bunsen burner under a beaker of water work?
The hot gasses of the flame are in contact with the beaker, so that’s
heat. But, the gases are made hot by
combustion, so that’s work.
b. Internal energy
There are two ways, then, that the total energy inside a
system may change—heat and/or work. We use
the term internal energy for the
total energy inside a system, and the symbol U.
Q and W will stand for heat and work, respectively. Energy conservation gives us the First “Law”
of Thermodynamics:
.

Now, we have to be careful with the algebraic signs. In this case, Q is positive as the heat entering the system, and W is positive as the work done on
the system. So a positive Q and a
positive W both cause an increase of internal energy, U.
2. Compressive Work Sec 1.5
a. PV diagrams
Consider a system enclosed in a cylinder oriented along the
x-axis, with a moveable piston at one end.
The piston has a cross sectional area A in contact with the system.
We may as well imagine the system is a volume of gas, though it may be
liquid or solid. A force applied to the
piston from right to left (-x direction) applies a pressure on the gas of
. If the piston is
displaced a distance
, then the work done by the force is
If the displacement
is slow enough, the system can adjust so that the pressure is uniform over the
area of the piston. In that case, called
quasistatic, the work becomes
.





Now it is quite possible, even likely, that the pressure
will change as the volume changes. So we
imagine the compression (or expansion) occurring in infinitesimal steps, in
which case the work becomes an integral:

Naturally, to carry out the integral, we need to have a
specific functional form for P(V).
On a PV diagram, then, the work is the area under the P(V)
curve. In addition, the P(V)
curve is traversed in a particular direction—compression or expansion, so the
work will be positive or negative accordingly.
Notice over a closed path on a PV diagram the work is not
necessarily zero.

b. Internal energy of the Ideal Gas Sec 1.2
As an example of computing compressive work, consider an
ideal gas. But first, we need to address
the issue of the internal energy of an ideal gas. Begin with the empirical equation of
state:
. In an ideal gas, the
particles do not interact with each other.
They interact with the walls of a container only to collide elastically
with them. The Boltzmann Constant is
, while the Gas Constant is
; N is the number
of particles in the system, n is the
number of moles in the system.





On the microscopic level, the atoms of the gas collide from
time to time with the walls of the container.
Let us consider a single atom, as shown in the figure above. It collides elastically with the wall of the
container and experiences a change in momentum
. That is, the wall
exerts a force on the atom in the minus x
direction. The atom exerts an equal and
opposite force on the wall in the positive x
direction. The time-averaged force
exerted by a single atom on the wall is
. Now, in order that
the atom collides with the wall only once during
, we set
, whence the force becomes
. Next, the pressure, P, is the force averaged over the area
of the wall:
, where V is the
volume of the container. Normally, there
are many atoms in the gas, say N of
them. Each atom has a different
velocity. Therefore,
, where
is the square of the x-component of velocity, averaged over
the N atoms. Finally, we invoke the ideal gas law,
.











The translational kinetic energy of one atom is
, since
.


For an ideal gas of spherical particles (having no internal structure)
there is no potential energy and the internal energy is just the kinetic
energy.

c. Isothermal & adiabatic processes
Imagine a system in thermal contact with its
environment. The environment is much
larger than the system of interest, so that heat flow into or out of the system
has no effect on the temperature of the environment. We speak of the system being in contact with
a heat bath so that no matter what
happens to the system, its temperature remains constant. If such a system is compressed slowly enough,
its temperature is unchanged during the compression. The system is compressed isothermally. As an example,
consider an ideal gas:
|

On the other hand, the compression may be so fast that no
heat is exchanged with the environment (or, the system is isolated from the
environment) so that Q = 0. Such a process is adiabatic. Naturally, the
temperature of the system will increase.
Staying with the example of an ideal gas,

[Notice that the text uses f for final and for degrees of freedom.]
Substituting for T
with the ideal gas law, we can write
. That exponent of V is called the adiabatic exponent,
. In general,
.




An isotherm is a
curve of constant temperature, T, on
the PV diagram. An arrow indicates the
direction that the system is changing with time. For an ideal gas, an isotherm is parabolic,
since
; that’s a special case.
A curve along which Q = 0 is
called an adiabat.

3. Other Works
In our discussion of energy conservation, we spoke of work
as being any energy flow into or out of the system that was not heat. We spoke of compressive work (sometimes
called piston work) and “all other
forms of work.” The all other forms of
work included stirring (called shaft work)
and combustion and electrical currents and friction. It would also include any work done by
external forces beyond the compressive work, particularly work done by the
force of gravity. We also have been
assuming that the center of mass of the system is not moving, so there was no
kinetic energy associated with translation of the entire system. A general form of the First “Law” of
Thermodynamics ought to include all the energy of the system, not only its
internal energy. Thus for instance, the
total energy of a system might be
.

a. Steady flow process
We might consider a situation in which a fluid is flowing
steadily without friction, but with heat flow into the fluid and a change in
elevation and changes in volume and pressure and some stirring.



In engineering real devices, all the various sources of work
have to be taken into account. In any
specific device, some works can be neglected and other works not.
b. The turbine
In a turbine, a fluid flows through a pipe or tube so
quickly that Q = 0, and normally the
entry and exit heights are virtually the same.
In the case of an electrical generator, the moving fluid turns a fan, so
that the shaft work is negative. The
energy balance equation for a volume element of the fluid having a mass, M, would look something like this:

An equation like this tells us how to design our turbine to
maximize the shaft work.
c. Bernoulli’s Equation
Suppose both Q and
Wshaft are zero.

If the fluid is incompressible, then the volume is constant,
and we can divide through by V to
obtain Bernoulli’s Equation. The
internal energy is also constant because Q
= 0 and no compressive work is done. [
is the mass density of
the fluid.]



C. Heat Capacity Sec 1.6
1. Changing Temperature
By definition, the heat
capacity of an object is
. The specific heat capacity is the heat
capacity per unit mass,
. This definition is
not specific enough, however, since
. A heat capacity could
be computed for any combination of conditions—constant V,
constant P, constant P & V, etc.



b. Constant pressure heat capacity
If pressure is constant, then

The second term on the right is the energy expended to
expand the system, rather than increase the temperature.
c. Constant volume heat capacity


2. Heat Capacity and Degrees of Freedom
a. Degrees of freedom
A degree of freedom is essentially a variable whose
value may change. In the case of a
physical system, the positions of the particles that comprise the system are
degrees of freedom. For a single
particle in 3-dimensional space, there are three degrees of freedom. Three coordinates are required to specify its
location. We are particularly interested
in variables that determine the energy of the system—the velocities determine
the kinetic energy, the positions determine the potential energy, etc. In other words, we expect to associate some
kinetic energy and some potential energy with each degree of freedom.
In effect, this text treats the kinetic and potential
energies as degrees of freedom. An
isolated single particle, having no internal structure, but able to move in
three-dimensional space, has three degrees of freedom which may have energy
associated with them: the three
components of its velocity. Since the
particle is not interacting with any other particle, we do not count its position
coordinates as degrees of freedom. On
the other hand, a three-dimensional harmonic oscillator has potential energy as
well as kinetic energy, so it has 6 degrees of freedom. Molecules in a gas have more degrees of freedom than simple
spherical particles. A molecule can
rotate as well as translate and its constituent parts can vibrate. A water molecule is comprised of three atoms,
arranged in the shape of a triangle. The
molecule can translate in three dimensions, and rotate around three different
axes. That’s 3+3 = 6 degrees of freedom
for an isolated water molecule. Within
the molecules, the atoms can vibrate relative to the center of mass in three
distinct ways, or modes. That’s another
2x3 = 6 degrees of freedom. Now, finally
if the water molecule is interacting with other water molecules, then there is
interaction between the molecules, and the degrees of freedom are 2x3+2x3+2x3 =
18. Notice that if we regard the
molecule as three interacting atoms, not as a rigid shape, there are 3x2x3 = 18
degrees of freedom.
A system of N particles, such as a solid made of N
harmonic oscillators, has 6 degrees of freedom per particle for a total of 6N
degrees of freedom.
The idea is that each degree of freedom, as it were,
contains some energy. The total internal
energy of a system is the sum of all the energies of all the degrees of
freedom. Conversely, the amount of
energy that may be transferred into a system is affected by how many degrees of
freedom the system has.
b. Exciting a degree of freedom
Consider a harmonic oscillator. The energy required to raise the HO from its
ground state to the first excited state is
, where h is Planck’s Constant and
is the oscillator
frequency. If the system temperature is
such that
, then that oscillator will never be excited. It’s as if that degree of freedom does not
exist. We say that the degree of
freedom has been frozen out.



The Equipartition Theorem says that the average energy in
each quadratic degree of freedom is
. By quadratic degree
of freedom, we mean that the kinetic and potential energy terms all depend on
the position and velocity components squared.
If that is the case, the internal energy of a system of N
harmonic oscillators in a solid is


where f is the number of degrees of freedom per
particle, which we would expect to be 6.
From this, we obtain the constant volume heat capacity of Dulong-Petit,

This result is independent of temperature. The measured heat capacity for a solid
is not the same for all temperatures. In
fact, as temperature decreases, the heat capacity decreases toward zero. Working backward, it would appear that as
temperature decreases, the number of degrees of freedom available
decreases. At low temperature, energy
cannot be put into the degrees of freedom that have been frozen out. The reason is that the quantity kT is
smaller than the spacing between discrete energy levels. This applies not only to a solid. For instance, one of the intramolecular
vibrational modes of the water molecule is shown here in the sketch.

Its frequency is in the neighborhood of 1014
Hz. Assuming harmonic vibration, the
energy level spacing for that mode is about
.On the other hand, at T
= 300 K, the quantity
; we would not expect the intramolecular modes to be
excitable, as it were, at 300K.


As an example of computing the constant volume heat capacity
of something other than an ideal gas, consider liquid water and ice. An effective potential energy function is
assumed to represent the interaction between water molecules. The total kinetic and potential energy is
computed at a range of temperatures, with the volume kept fixed.

The CV
is estimated by numerically evaluating the slope of the graph:
. In this set of
molecular dynamics simulations, the intermolecular
energy, E, leaves out the kinetic
energies of the hydrogen and oxygen atoms with respect to the molecular center
of mass as well as the intramolecular potential energies—that is, the degrees
of freedom of the atoms within each molecule, such as the vibrational
mode shown above, are frozen out. The
results shown on the graph are
for the liquid phase
and
for the solid
phase. The purpose of the study was to
test the effective potential function—would it show a melting transition at the
correct temperature, and would it give correct heat capacities.



|
II. Second “Law” of Thermodynamics
A. Combinatorics
1. Two State Systems Sec 2.1
a. Micro- and macro-states
Consider a system of three coins, as described in the
text. The macrostate of this
system is described by the number of heads facing up. There are four such macrostates, labeled 0,
1, 2, & 3. We might even call these
energy levels 0, 1, 2, & 3.
Specifying the orientation of each individual coin defines a
microstate. We can list the
microstates, using H for heads and T for tails:
TTT, HTT, THT, TTH, HHT, HTH, THH, HHH.
Now, we sort the microstates into the macrostate energy
levels.
energy level
|
microstates
|
multiplicity,
![]() |
0
|
TTT
|
1
|
1
|
HTT, THT, TTH
|
3
|
2
|
HHT, HTH, THH
|
3
|
3
|
HHH
|
1
|
The multiplicity is the number of distinct ways that
a specified macrostate can be realized.
The total multiplicity of the system is the total of all the possible
microstates. For these three coins,
that’s
.

b. Two-state paramagnet
Consider a large number of non-interacting magnetic
dipole moments, let’s say N of them.
These dipoles may point in one of only two ways: up or down. If an external uniform magnetic field is
applied, say in the up direction, each dipole will experience a torque tending
to rotate it to the up direction also.
That is to say, parallel alignment with the external field is a lower
energy state than is anti-parallel alignment.
The energy of the system is characterized by the number of
dipoles aligned with the external field, q. But, we don’t care which q
dipoles of the N total are in the up state. Having q dipoles up specifies the
energy macrostate, which may be realized by the selection of any q
dipoles out of N to be up. The
number of microstates for each macrostate is just the number of combinations,
the number of ways of choosing q objects from a collection of N
objects.

Now, what are the odds of observing this paramagnet to be in
a particular energy macrostate? Assuming
every microstate is equally likely, then we have

Notice that the total multiplicity is
because each dipole
has only two possible states.

Here is a microstate for a system of N = 10 dipoles, with q = 6 (6 dipoles
point up).

The probability function, P(q), for this
system looks like this:

The most probable macrostate has one-half the dipoles
pointed up, q = 5.
2. Einstein Solid Sec 2.2, 2.3
a. More than two states
A harmonic oscillator has energy levels that are uniformly
spaced in steps of
, where h is Planck’s Constant and
is the frequency of
the oscillator. We imagine a solid made
of N such harmonic oscillators.
The total energy of the solid is
, where q is an integer which in this case may well be
greater than N. As shown in the
text, the multiplicity of the macrostate having energy
is the number of ways q
items can be selected from q + N – 1 items.





Then
is the sum of all the
.


b. Interacting systems
We are interested in the transfer of energy from one such
Einstein solid to another. Now we want
the multiplicity of q energy units distributed over both systems.

Let’s say we have NA, NB,
qA, and qB, such that
. The total
multiplicity for the two systems in contact is


Assuming that all microstates are equally probable, then the
macrostate having the greatest multiplicity is the most probable to be
observed. As q, NA,
and NB are made larger, the multiplicity curve is taller, and
more narrowly peaked at (if NA = NB)
. Say that initially,
. Then over time, as
energy is exchanged more or less randomly between oscillators in the two
systems, there will be a net flow of energy from system B to system A,
from a macrostate of lower multiplicity to a macrostate of greater
multiplicity.


The text has one numerical example on page 57. In that example,
and
. The maximum
multiplicity occurs at
. Let’s look at a case
in which
, namely
and
and
.







qA
|
![]() |
qB
|
![]() |
![]() |
P(qA)
|
0
|
1
|
8
|
6435
|
6435
|
0.031623
|
1
|
6
|
7
|
3432
|
20592
|
0.101194
|
2
|
21
|
6
|
1716
|
36036
|
0.17709
|
3
|
56
|
5
|
792
|
44352
|
0.217957
|
4
|
126
|
4
|
330
|
41580
|
0.204334
|
5
|
252
|
3
|
120
|
30240
|
0.148607
|
6
|
462
|
2
|
36
|
16632
|
0.081734
|
7
|
792
|
1
|
8
|
6336
|
0.031137
|
8
|
1287
|
0
|
1
|
1287
|
0.006325
|
|
|
|
![]() |
203490
|
|

The probability peaks at about
, rather than
, that is, at
.



[I did the calculation of
using the COMBIN
function in Excel.]

As we increase the numbers, the
s become very large very quickly, as illustrated by the text
example on pages 58 & 59.

B. Entropy
1. Large Systems Sec 2.4
a. Very large numbers
Macroscopic systems contain multiples of Avogadro’s number,
, perhaps many, many multiples. The factorials of such large numbers are even
larger—very large numbers. We’ll use Stirling’s Approximation to evaluate the factorials:


Ultimately, we will want the logarithm of N!:
.

b. Multiplicity function
Consider an Einstein solid with a large number of
oscillators, N, and energy units, q.
The multiplicity function is
.

Take the logarithm, using Stirling’s
formula.

Now further assume that q
>> N. In that case,
.

The
becomes
, whence



c. Interacting systems
The multiplicity function for a pair of interacting Einstein
solids is the product of their separate multiplicity functions. Let’s say
and
.


Then
. If we were to graph
this function vs. qA, what
would it look like? Firstly, we expect a
peak at
with a height of
. That’s a very large
number. How about the width of the
curve? In the text, the author shows
that the curve is a Gaussian:
, where
. The origin has been
shifted to the location of
. The point at which
occurs when
. Now, this is a large
number, but compared to the scale of the horizontal axis
, that peak is very narrow, since N is a large number in itself.
That is, the half width of the peak is
of the whole range of
the independent variable.











The upshot is that as N
and q become large, the multiplicity
function peak becomes narrower and narrower.
The most probable macrostate becomes more and more probable relative to
the other possible macrostates. Put
another way, fluctuations from the most probable macrostate are very small in
large systems.
2. Second “Law” Sec 2.6
a. Definition of entropy
We define the entropy
of a system to be
. The units of entropy
are the units of the Boltzmann Constant, J/K.

The total entropy of two interacting systems, such as the
two Einstein solids above, is
.

The Second “Law” of Thermodynamics says: Systems tend to evolve in the direction of
increasing multiplicity. That is,
entropy tends to increase. This is
simply because the macrostate of maximum multiplicity is the most probable to
be observed by the time the system has reached thermal equilibrium.
b. Irreversible
The concept of entropy was introduced originally to explain
why certain processes only went one way spontaneously. When heat enters a system at temperature, T, its entropy increases by
. When heat leaves a
system, the system’s entropy decreases.

Consider two identical blocks of aluminum, initially one
hotter than the other. When brought into
thermal contact, heat will flow from the warmer block(A) to the cooler(B) until
they have reached the same temperature.
The total entropy of the two blocks will have increased. Incrementally,
. In effect, because TA
is greater than TB, the entropy of block A decreases less
than the entropy of block B increases.

Processes that create new entropy cannot happen
spontaneously in reverse. Heat flow from
a warmer object to a cooler object creates entropy, and is irreversible. Mixing two
different gases creates entropy, and is irreversible. Rapid expansion or compression creates
entropy. On the other hand quasistatic
volume change can be reversible, depending what other processes are taking
place at the same time. Very slow heat
flow produces very little new entropy and may be regarded as practically
reversible.
An irreversible process may be reversed by doing work on the
system, but that also increases the total amount of entropy in the
universe.
C. Creating Entropy
1. Temperature Sec 3.1, 3.2
a. Thermal equilibrium
When two objects are in thermal equilibrium, their
temperatures are the same. According to
the Second “Law”, their total entropy is at its maximum.
Consider two objects in contact, exchanging energy. The total energy of the two objects is
fixed. At equilibrium,

The quantity
has units of K-1,
so perhaps we can define the temperature in terms of the entropy as
.


b. Heat capacities
We cannot measure entropy directly, but we can measure
changes in entropy indirectly, through the heat capacity. For instance, if no work is being done on the
system,

Of course, we need to know CV as a function of T. This is obtained by measuring Q or U
vs. T. In general, the heat capacity decreases with
decreasing temperature. At higher
temperatures, the heat capacity approaches the constant
(Dulong-Petit). For instance, the CV vs. T for a
monatomic substance would look like this:


The Third “Law” of Thermodynamics says that
as
, or alternatively, that S
= 0 when T = 0K. In reality, there remains residual entropy in
a system at T = 0K—near absolute zero, the relaxation time for the system to
settle into its very lowest energy state is very, very long.


Now notice, if indeed
as
, then absolute zero can not be attained in a finite number
of steps, since
as
. It’s like the famous
example of approaching a wall in a series of steps, each one half the previous
step.




For example, let us say that we wish to cool an ideal gas to
absolute zero. We’d have to “get rid” of
the entropy in the gas in a series of steps.
i) isothermal compression—heat and entropy is transferred to
a reservoir
ii) adiabatic expansion—temperature decreases, entropy is
constant, Q = 0
repeat
Now if we were to graph these S(T) points we have
generated we would see two curves. But
the curves are not parallel; they appear to converge at T = 0. As a result, the
gets smaller for each
successive two-stage step, the closer we get to T = 0.


In practice, a real gas would condense at some point. The text describes three real-life high-tech
coolers. In any case, there will be a
series of ever smaller steps downward between converging curves on the S(T)
graph, toward absolute zero.
2. Pressure Sec 3.4
a. Mechanical equilibrium
Consider two systems whose volumes can change as they
interact. An example might be two gases
separated by a moveable membrane. The
total energy and volume of the two systems are fixed, but the systems may
exchange energy and volume. Therefore,
the entropy is a function of the volumes as well as the internal energies. However, we will be keeping the numbers of
particles in each system fixed.

At the equilibrium point,
and
.



As we did with temperature, we can identify the pressure
with the derivative of entropy with volume, thusly:
.

b. Thermodynamic identity
Now if we envision a system whose internal energy and volume
are changing, we would write the change in entropy (a function of both U and of V) as follows:

c. Creating entropy with mechanical work
Remember that compressive work (
) is just one form of work.
If the compression is slow, and no other form of
work is done on the system, then the volume change is quasistatic,
and
. In such a case, we
are allowed to combine the First “Law” with the thermodynamic identity to
obtain



But, if the work done on the system is greater than
, then
. In other words, the
amount of entropy created in the system is more than that accounted for
by the heat flow into the system. This
might happen, for instance, with a compression that occurs faster than the
pressure can equalize throughout the volume of the system. It will happen if other forms of work are
being done, such as mixing, or stirring.
In a similar vein, if a gas is allowed to expand freely into a vacuum,
no work is done by the gas, and no heat flows into or out of the gas. Yet the gas is occupying a larger volume, so
its entropy is increased.


3. Chemical Potential Sec 3.5
Now consider a case in which the systems can exchange
particles as well as energy and volume.
a. Diffusive equilibrium


Define the chemical potential as
. Evidently, the minus
sign is attached so that particles will tend to diffuse from higher toward
lower chemical potential.

b. Generalized thermodynamic identity
For infinitesimal changes in the system,

This equation contains within it all three of the
partial-derivative formulas for T, P and for
. For instance, assume
that entropy and volume are fixed. Then
the thermodynamic identity says
, whence we can write
. To apply the
partial-derivative formulae to a particular case, we need specific expressions
for the interdependence of the variables, i.e., U as a function of N.



4. Expanding & Mixing Sec 2.6
a. Free expansion
Imagine a container of volume 2V, isolated from its surroundings, and with a partition that
divides the container in half. An ideal
gas is confined to one side of the container.
The gas is in equilibrium, with temperature T and Pressure P. Now, imagine removing the partition. Over time, the gas molecules will diffuse to
fill the larger volume.

However, in expanding the gas does no work, hence the phrase
free expansion. Because the container is isolated, no heat
flows into or out of the gas, nor does the number of molecules, N, change..

However, the entropy increases.
.

[The expression for the entropy of an ideal gas is derived
in Sec. IV B 3. It is

b. Entropy of mixing
In a similar vein, we might imagine a container divided into
two chambers, each with a different ideal gas in it. When the partition is removed, both gases
diffuse to fill the larger volume. Since
the gases are ideal, one gas doesn’t really “notice” the presence of the other
as they mix. The entropy of both gases
increases, so the entropy change of the whole system is
, assuming of course that we started with the same numbers of
molecules of both gases, etc.

III. Processes
A. Cyclic Processes
1. Heat Engines & Heat Pumps Sec 4.1, 4.2
A heat engine is a device that absorbs heat from a
reservoir and converts part of it to work.
The engine carries a working substance through a PVT cycle,
returning to the state at which it starts.
It expels “waste” heat into a cold reservoir, or into its
environment. It must do this in order
that the entropy of the engine itself does not increase with every cycle.

a. Efficiency
The efficiency of the heat engine is defined as the ratio of
work done by the engine to the heat absorbed by the engine.

We’d like to express e in terms of the temperatures
of the hot and cold reservoirs. The
First “Law” says that
. The Second “Law”
says that
. Putting these
together, we obtain
. Firstly, notice that
e cannot be greater than one.
Secondly, e cannot be one unless Tc = 0
K, which cannot be achieved. Thirdly,
is the greatest
e can be—in practice, e is less than the theoretical limit, since
always
.





b. Carnot cycle
Can a cycle be devised for which
? That’s the Carnot
cycle, which uses a gas as the working substance.


i) the gas absorbs heat from the hot reservoir. To minimize dS, we need
; the gas is allowed to expand isothermally in order to
maintain the
.


ii) the gas expands adiabatically, doing work, and cools
from Th to Tc.
iii) the gas is compressed isothermally, during which step
heat is transferred to the cold reservoir.
iv) the gas is compressed adiabatically, and warms from Tc
to Th.
Now, for the total change in entropy to be very small, the
temperature differences between the gas and the reservoirs must be very
small. But that means that the heat
transfers are very slooow. Therefore, the
Carnot cycle is not very useful in producing useful work. [Empirically, the
rate at which heat flows is proportional to the temperature difference —
.]

c. Heat pump
The purpose of a heat pump is to transport energy
from a cold reservoir to a hot one by doing work on the working substance. The work is necessary because the temperature
of the working fluid must be raised above that of the hot reservoir in order
for heat to flow in the desired direction.
Likewise, at the other side of the cycle the working fluid must be made
colder than the cold reservoir.
Rather than efficiency, the corresponding parameter for a
heat pump is the coefficient of performance,


The First “Law” says
. The Second “Law”
says
. Putting these
together, we obtain
. A Carnot cycle
running in reverse will give the maximum COP.



2. Otto, Diesel, & Rankine Sec 4.3
Real heat engines need to produce work at a more rapid rate
than a Carnot engine. Consequently,
their efficiencies are lower than that of a Carnot engine. Of course, real engines do not achieve even
their theoretical efficiencies due to friction and conductive heat loss through
the cylinder walls and the like.
a. Otto cycle
The Otto cycle is the basis for the ordinary 4-stroke gasoline
engine.
i) air-fuel mixture is compressed adiabatically from V1 to V2; pressure rises from P1 to P2.
ii) air-fuel mixture is ignited, the pressure rises
isochorically from P2 to P3.
iii) combustion products expand adiabatically from V2 to V1; pressure falls from P3 to P4.
iv) pressure falls isochorically from P4 to P1.

The temperatures also change from step to step. The efficiency is given by

The quotient
is the compression ratio. The greater the compression ratio, the
greater is the efficiency of the engine.
However, so is T3
greater. If T3 is too great, the air-fuel mixture will ignite
prematurely, before the piston reaches the top of its stroke. This reduces power, and damages the piston and
cylinder. Up to a point, chemical
additives to the fuel can alleviate the premature detonation.

Notice that there is no hot reservoir per se; rather the heat source is the chemical energy released by
the combustion of the fuel.
b. Diesel cycle
The Diesel cycle differs from the Otto cycle in that the air
is first compressed adiabatically in the cylinder, then the fuel is injected
into the hot air, where it ignites spontaneously, without need of a spark. The fuel injection takes place just as the piston begins to move downward, so that constant pressure is maintained during the injection. Since the fuel is not in
the cylinder during the compression, much higher compression ratios can be
used, leading to greater efficiencies.

c. Rankine cycle
In some ways the steam engine is a more nearly exact example
of a heat engine than is the Otto engine.
No chemical reaction or combustion takes place within the working fluid,
and at least in principle the working fluid is not replaced at the beginning of
each cycle.
i) water is pumped to a high pressure into a boiler.
ii) the water is heated at constant pressure and changes to
steam (water vapour).
iii) the steam expands adiabatically, driving a piston or a
turbine, and cools and begins to condense.
iv) the partially cooled steam/water mixture is cooled
further by contact with the cold reservoir.

The efficiency of the steam engine is $e = \frac{W}{Q_h} = 1 - \frac{Q_c}{Q_h}$. At constant pressure, $Q = \Delta H$, whence
$$e = 1 - \frac{H_4 - H_1}{H_3 - H_2}.$$
Now, $H_2 \approx H_1$, since the water is not compressed as it is pumped, and only a little energy is added to the water (e.g., it's not accelerated), so
$$e \approx 1 - \frac{H_4 - H_1}{H_3 - H_1}.$$
So we look up the enthalpies in the tables of enthalpy & entropy vs. temperature & pressure (page 136).




d. Throttling and refrigerators Sec 4.4
For a refrigerator to work, the temperature of the working
fluid must be made less than that of the cold reservoir. This is done through what is called a throttling process.
The working fluid passes through a narrow opening from a region
of high pressure into a region of low pressure.
In doing so, it expands adiabatically (Q = 0) and cools. As the
fluid expands, the negative potential energy of interaction among the
atoms/molecules increases and the kinetic energy decreases.

From the First "Law", with $Q = 0$ and the net work on the fluid equal to $P_iV_i - P_fV_f$,
$$U_f = U_i + P_iV_i - P_fV_f, \quad\text{i.e.,}\quad H_f = H_i:$$
the enthalpy is conserved in a throttling process. In a dense gas or a liquid, $U = U_{kinetic} + U_{potential}$ with $U_{potential} < 0$. Therefore, as the gas expands, $U_{potential}$ rises toward zero, $U_{kinetic}$ falls, and the gas/liquid cools.


Subsequently, the chilled fluid absorbs heat from the cold
reservoir and vaporizes. Therefore, the
working fluid must be a substance with a low boiling point. The compressor does the work of compressing the gas to raise its temperature, as well as maintaining the pressure difference required for the throttle valve to work.
B. Non-cyclic Processes or Thermodynamic Potentials
1. Thermodynamic Potentials Sec 5.1
A number of thermodynamic quantities have been
defined—useful under differing conditions of fixed pressure, volume,
temperature, particle number, etc. These
are the enthalpy, the Helmholtz free energy, and the Gibbs free energy. Together with the internal energy, these are
referred to as thermodynamic potentials.
a. Enthalpy Sec 1.6
The total energy required to create a system of particles at
sea level air pressure would include the expansive work done in displacing the
air.
We define the enthalpy to be
$$H = U + PV.$$
The enthalpy is useful when a change takes place in a system while pressure is constant: then $\Delta H = Q + W_{other}$, where $W_{other}$ is any work done on the system other than the compressive work. Now, if no other work is done, then $\Delta H = Q$ exactly. In practice, tables of measured enthalpies for various processes, usually chemical reactions or phase transitions, are compiled. The text mentions the enthalpy
of formation for liquid water.
Evidently, when oxygen and hydrogen gases are combined to form a mole of
liquid water, the change in enthalpy is -286 kJ. In other words, burning hydrogen at constant
pressure releases this much energy.

[PV diagram]
Problem 4-29

From Table 4.3, at 12 bar the boiling point of the refrigerant is 46.3 °C.
a) At $P_f = 1$ bar: $T = -26.4$ °C, $H_{liquid} = 16$ kJ and $H_{gas} = 231$ kJ.
b) Starting with all liquid at $P_i$,
b. Helmholtz
Let's say the system is in contact with a heat bath, so that the temperature is constant. The pressure may not be constant. To create the system, some of its total energy can be taken from the environment in the form of heat. So the total work required to create the system is not all of U, but less than U. Define the Helmholtz free energy of the system as
$$F = U - TS.$$

Any change in a system at constant temperature will entail a change in F,
$$\Delta F = \Delta U - T\Delta S \leq W,$$
where W is all the work done on the system.
c. Gibbs
Now, if the system is at constant pressure as well as constant temperature, then the extra work needed to create the system is the Gibbs free energy,
$$G = U - TS + PV.$$

If pressure is constant, we use the Gibbs free energy:
$$\Delta G \leq W_{other},$$
where $W_{other}$ is the work done on the system other than the compressive work (at constant pressure, the environment does the compressive work automatically).
In a paragraph above, we burned some hydrogen. The 286 kJ released could be used to run an Otto cycle, for instance. The theoretical efficiency of an Otto engine is about 56%, so at most 160 kJ are used to drive the car. It's possible to run that reaction in a more controlled way and extract electrical work, in a hydrogen fuel cell. For the reaction, $\Delta G = -237$ kJ, so up to 237 kJ of electrical work can be extracted; the remaining $T\Delta S = 49$ kJ has to be expelled to the environment as heat, and the efficiency of the fuel cell alone is $237/286 \approx 83\%$. The fuel cell generates current which can run an electrical motor or charge a battery. Of course, in both instances, there are numerous losses of energy along the way to driving the car.

d. Identities
If we envision infinitesimal changes in thermodynamic variables, we can derive thermodynamic identities for the thermodynamic potentials. We have already the thermodynamic identity for internal energy,
$$dU = TdS - PdV + \mu dN.$$

Now, consider the enthalpy, $H = U + PV$:
$$dH = dU + PdV + VdP = TdS + VdP + \mu dN.$$
For instance, if dP = 0 and dN = 0, then we could write $dH = TdS = Q$, which is equivalent to the $\Delta H = Q$ that we obtained earlier.

We can do the same for F and for G:
$$dF = -SdT - PdV + \mu dN, \qquad dG = -SdT + VdP + \mu dN.$$
From this equation we can derive relationships like
$$S = -\left(\frac{\partial G}{\partial T}\right)_{P,N}, \quad V = \left(\frac{\partial G}{\partial P}\right)_{T,N}, \quad \mu = \left(\frac{\partial G}{\partial N}\right)_{T,P}.$$

2. Toward Equilibrium Sec 5.2
a. System and its environment
An isolated system tends to evolve toward an equilibrium
state of maximum entropy. That is, any
spontaneous rearrangements within the system increase the entropy of the
system. Now, consider a system which is
in thermal contact with its environment.
The system will tend to evolve, by exchanging energy with the
environment. The entropy of the system
may increase or decrease, but the total entropy of the universe increases in
the process.
Let's say that the system evolves toward equilibrium isothermally. The environment is such a large reservoir of energy that it can exchange energy with the system without changing temperature. It's a heat bath. The total change in entropy involved with an exchange of energy would be
$$dS_{total} = dS + dS_R.$$

Assuming the V and N for the environment are fixed, and recalling that $dU_R = -dU$ and $T = T_R$, then
$$dS_{total} = dS + \frac{dU_R}{T_R} = dS - \frac{dU}{T} = -\frac{1}{T}\left(dU - TdS\right) = -\frac{dF}{T}.$$
The increase in total entropy under conditions of constant T, V, and N is equivalent to a decrease in the Helmholtz free energy of the system.
In a similar vein, if the system volume is not fixed, but the pressure is constant, then we have
$$dS_{total} = -\frac{1}{T}\left(dU - TdS + PdV\right) = -\frac{dG}{T}.$$
The increase in total entropy under conditions of constant T, P, and N is equivalent to a decrease in the Gibbs free energy of the system.
system condition                 system tendency
isolated (constant U, V, & N)    entropy increases
constant T, V, & N               Helmholtz free energy decreases
constant T, P, & N               Gibbs free energy decreases
b. Extensive & Intensive
The several properties of a system can be divided into two
classes—those that depend on the amount of mass in the system, and those that
do not. We imagine a system with volume V in equilibrium. The system is characterized by its mass,
number of particles, pressure, temperature, volume, chemical potential,
density, entropy, enthalpy, internal energy, Helmholtz and Gibbs free
energies. Now imagine slicing the system
in half, forming two identical systems with volumes V/2. Some properties of the
two systems are unchanged—temperature, pressure, density, and chemical potential. These are the intensive properties. The
rest are extensive—they are halved when the original system is cut in half.
The usefulness of this concept is in checking the validity
of thermodynamic relationships. All the terms in a thermodynamic equation must be of the same type, because an extensive quantity cannot be added to an intensive quantity. The product of an intensive quantity and an
extensive quantity is extensive. On the
other hand, dividing an extensive quantity by another yields an intensive
quantity, as in mass divided by volume gives the density.
3. Phase transformations Sec 5.3
We are familiar with water, or carbon dioxide or alcohol
changing from liquid to vapour, from solid to liquid, etc. We are aware that some metals are liquid at
room temperature while most are solid and melt if the temperature is much
higher. These are familiar phase
changes.
More generally, a phase
transformation is a discontinuous change in the properties of a substance,
not limited to changing physical structure from solid to liquid to gas, that
takes place when PVT conditions are changed only slightly.
a. Phase diagram
Which phase of a substance is the stable phase depends on
temperature and pressure. A phase diagram is a plot showing the
conditions under which each phase is the stable phase.
For something like water or carbon dioxide, the phase
diagram is divided into three regions—the solid, liquid, and gas (vapour)
regions. If we trace the P,T
values at which changes in phase take place, we trace the phase boundaries on the plot.
At those particular values of P,T the two phases can coexist in
equilibrium. At the triple point, all three phases coexist in equilibrium. The pressure on a gas-liquid or gas-solid
boundary is called the vapour pressure
of the liquid or solid.

Notice that the phase boundary between gas and liquid has an end point, called the critical point. This signifies that at pressures and/or temperatures beyond the critical point there is no physical distinction between liquid and gas: the density of the gas and the thermal motion in the liquid both become so great that the two phases merge into one.
Other sorts of phase transformations are possible, as for
instance at very high pressures there are different solid phases of ice. Similarly for carbon, there is more than one
solid phase—diamond and graphite.
Diamond is the stable phase at very high pressure while graphite is the
more stable phase at sea level air pressure.
The glittery diamonds people pay so much to possess are ever so slowly
changing into pencil lead.
Still other phase transformations are related not to
pressure, but to magnetic field strength as in the case of ferromagnets and
superconductors.
Here’s a phase diagram for water, showing the several solid
phases. They differ in crystal structure
and density, as well as other properties such as electrical conductivity.

D. Eisenberg & W. Kauzmann, The Structure and Properties of Water, Oxford Univ.
Press, 1969.
The phase diagram for water shown in the text figure 5.11 is
a teensy strip along the T axis near P = 0 on this figure. [One bar is about one atmosphere of air
pressure, so a kbar is 1000 atm.]
Here's a phase diagram for a ferromagnet, subjected to an external magnetic field, $\vec{B}$. The phase boundary is just the straight segment along the T axis, at $B = 0$. The segment ends at the critical point, at the Curie temperature $T_c$.



b. van der Waals model
There are phases because the particles interact with each
other, in contrast to an ideal gas. The
interactions are complicated (quantum mechanics), so we create simplified,
effective models of the interparticle interactions in order to figure out what
properties of the interactions lead to the observed phase diagrams. For instance, the van der Waals model:
The model of a non-ideal gas is constructed as follows. Firstly, the atoms have nonzero volume—they
are not really point particles. So, the
volume of the system cannot go to zero, no matter how great the pressure or low
the temperature. The smallest V can possibly be, let’s say, is Nb. It’s like shifting the origin from V = 0 to V – Nb = 0. Secondly, the atoms exert forces on each
other. At short range, but not too
short, the forces are attractive—the atoms tend to pull one another closer. This has the tendency to reduce the pressure
that the system exerts outward on its environment (or container). We introduce a "correction" to the pressure that is proportional to the density, N/V, and to the number of atoms in the system, N. That is,
$$P \rightarrow P + \frac{aN^2}{V^2}.$$




With the new V and P, the gas law becomes the van der Waals equation of state:
$$\left(P + \frac{aN^2}{V^2}\right)\left(V - Nb\right) = NkT.$$

Now, the b and a are adjustable parameters, whose values are different for different substances. They have to be fitted to empirical data.

There are countless other equations of state. For instance, there is the virial expansion, which is an infinite series,
$$PV = NkT\left(1 + \frac{B(T)}{V/N} + \frac{C(T)}{(V/N)^2} + \cdots\right).$$
There is also the Beattie-Bridgeman equation of state, with five adjustable constants.
All represent “corrections” to the ideal gas equation of
state.
c. Gibbs free energy—Clausius-Clapeyron—PV diagram
Which phase is stable at a given temperature and pressure is that phase with the lowest Gibbs free energy. On a phase boundary, where two phases coexist, there must be a common Gibbs free energy. That is, on the boundary between liquid and gas, $G_{liquid} = G_{gas}$. Imagine changing the pressure and temperature by small amounts in such a way as to remain on the phase boundary. Then the Gibbs free energies change in such a way that $dG_{liquid} = dG_{gas}$:
$$-S_l\,dT + V_l\,dP = -S_g\,dT + V_g\,dP \quad\Longrightarrow\quad \frac{dP}{dT} = \frac{S_g - S_l}{V_g - V_l}.$$

We've assumed that dN = 0. This result is the slope of the phase boundary curve on the PT diagram. Commonly, we express the change in entropy in terms of the latent heat L of the transformation, $S_g - S_l = L/T$, thusly
$$\frac{dP}{dT} = \frac{L}{T\,\Delta V}.$$
This is the Clausius-Clapeyron relation, applicable to any PT phase boundary.
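As a sanity check (my example, not the text's), here is the slope of the liquid-gas boundary for water at its normal boiling point, using round numbers for the latent heat and the volume change:

def clausius_clapeyron_slope(L, T, delta_v):
    """dP/dT = L / (T * delta_v); SI units (J/kg, K, m^3/kg -> Pa/K)."""
    return L / (T * delta_v)

# Water at 1 atm: L ~ 2.26e6 J/kg, and one kg of steam occupies ~1.67 m^3.
print(clausius_clapeyron_slope(2.26e6, 373.0, 1.67))   # ~3.6e3 Pa/K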
Finally, we compute the Gibbs free energy for the van der Waals model at a variety of temperatures and pressures to determine which phase is stable in each case. Let dN = 0, and fix the temperature, varying only P. Then $dG = V\,dP$, so
$$\left(\frac{\partial G}{\partial V}\right)_T = V\left(\frac{\partial P}{\partial V}\right)_T.$$
Integrate, using the van der Waals equation for P:
$$G = -NkT\ln(V - Nb) + \frac{(NkT)(Nb)}{V - Nb} - \frac{2aN^2}{V} + c(T).$$
We now have expressions for both P and G as functions of V, at a fixed temperature, T. Firstly, we plot G vs. P at some fixed T. This yields a graph with a loop in it. The loop represents unstable states, since the Gibbs free energy is not a minimum. Integrating dG around the closed loop should give zero.

We plot out on a PV
diagram the same points of free energy, and obtain an isotherm something like
that shown on the diagram below right.


The pressure at which the phase transition occurs, at
temperature T, is that value of P where the two shaded areas
cancel. So, tracking along an isotherm
from right to left, the gas is compressed and pressure rises until that
horizontal section is reached. At that
point, further compression does not increase the pressure because the gas is
condensing to liquid. When the phase
transition is complete, further compression causes a steep increase in
pressure, with little decrease of volume, as the liquid is much less
compressible than the gas. During the
transition, both gas and liquid phases coexist.
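Here is a rough numerical sketch (mine) of that equal-area construction, written in reduced van der Waals units ($P_r = P/P_c$, etc.), where the equation of state becomes $P = 8T/(3V-1) - 3/V^2$; it assumes numpy/scipy are available, and the pressure bracket passed to the root-finder was chosen by eye for $T_r = 0.9$.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def p_vdw(v, t):
    """Reduced van der Waals equation of state."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

def equal_area_mismatch(p, t):
    """Area under the isotherm between the liquid and gas volumes, minus the
    rectangle p*(Vgas - Vliq); this vanishes at the transition pressure."""
    # Volumes where the isotherm crosses pressure p: real roots of
    # 3 p V^3 - (p + 8 t) V^2 + 9 V - 3 = 0.
    roots = np.roots([3.0 * p, -(p + 8.0 * t), 9.0, -3.0])
    roots = np.sort(roots[abs(roots.imag) < 1e-9].real)
    v_liq, v_gas = roots[0], roots[-1]
    area, _ = quad(p_vdw, v_liq, v_gas, args=(t,))
    return area - p * (v_gas - v_liq)

t = 0.9                                                       # reduced temperature T/Tc
print(brentq(equal_area_mismatch, 0.45, 0.70, args=(t,)))     # reduced vapour pressure, ~0.65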


If the temperature is high enough, there is no phase
transition as V decreases. On a PT diagram, we see a phase
boundary between the liquid and gas phases, up to the critical point, where the
boundary terminates. Above the critical
point, there is no distinction between gas and liquid. That would correspond to the isotherms having
no flat segment on the PV diagram.
The van der Waals model is not very accurate in reality, but
it does illustrate how the observed phase behavior arises from the interactions
among the atoms or molecules of the substance.
IV. Statistical Mechanics
A. Partition Function
1. Boltzmann Sec 6.1
a. Multiplicity
Consider a system, in contact with an energy reservoir, consisting of N weakly interacting identical particles. The energy of each particle is quantized, the energy levels labeled by Ei. Previously, we associated a multiplicity, $\Omega$, with each different energy level. But, we could just as well list each microstate separately. That is, each particle energy level, Ei, has multiplicity of 1, but some energy values occur $g_i$ times in the list. At equilibrium, the total internal energy of the system of N particles is constant (apart from small fluctuations):
$$U = \sum_i N_iE_i = \text{constant}, \qquad N = \sum_i N_i = \text{constant},$$
where $N_i$ is the number of particles occupying level $E_i$.


b. Equilibrium
The equilibrium state of the system is the state that maximizes $\Omega$, subject to the constraints N = constant and U = constant. We want to solve for the Ni that maximize $\ln\Omega$.

For indistinguishable particles,
$$\Omega = \prod_i \frac{1}{N_i!} \quad\Longrightarrow\quad \ln\Omega \approx -\sum_i\left(N_i\ln N_i - N_i\right),$$
using the Stirling approximation. We'll apply the method of undetermined multipliers to determine the equilibrium distribution of particles among the states, that is, the Ni.

[At this point, notice that because dN = 0, the $d\ln\Omega$ for distinguishable particles ($\Omega = N!\prod_i 1/N_i!$) is exactly the same as for indistinguishable particles. So, we only have to do this once.]

$$d\ln\Omega = -\sum_i \ln N_i\,dN_i = 0, \qquad \alpha\,dN = \alpha\sum_i dN_i = 0, \qquad -\beta\,dU = -\beta\sum_i E_i\,dN_i = 0.$$
Add the three equations.
$$\sum_i\left(-\ln N_i + \alpha - \beta E_i\right)dN_i = 0$$
Each term must be zero separately.
$$\ln N_i = \alpha - \beta E_i$$
Solve for Ni.
$$N_i = e^{\alpha}e^{-\beta E_i}$$
Now we determine the multipliers by applying the constraints. The first one is easy:
$$N = \sum_i N_i = e^{\alpha}\sum_i e^{-\beta E_i} \quad\Longrightarrow\quad e^{\alpha} = \frac{N}{\sum_i e^{-\beta E_i}}.$$
The second one, $\beta$, is a bit more complicated. If the system were to exchange a small amount of energy, dU, with the reservoir, the entropy would change as small alterations occur in the {Ni}:
$$dS = k\,d\ln\Omega = -k\sum_i\left(\alpha - \beta E_i\right)dN_i = k\beta\,dU,$$
since $\sum_i dN_i = 0$. Comparing with $dS = dU/T$ gives $\beta = \frac{1}{kT}$.

This is the Boltzmann probability distribution:
$$P(E_i) = \frac{N_i}{N} = \frac{e^{-E_i/kT}}{\sum_i e^{-E_i/kT}}.$$
The greater the energy of a particle state, the less likely a particle is to be in that state. [For atoms, we are making the ground state zero, $E_0 = 0$.]



As T increases,
though, the likelihood of a higher-energy state being occupied increases as
well. The exponential decay of P is slower at higher temperatures.
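A minimal numerical sketch (mine) of this behaviour; the three-level spectrum below is hypothetical, with the ground state taken as zero.

import math

def boltzmann_probs(energies_eV, temperature_K):
    """Occupation probabilities P(E) = exp(-E/kT) / Z for a list of particle states."""
    k_eV = 8.617e-5                          # Boltzmann constant in eV/K
    weights = [math.exp(-E / (k_eV * temperature_K)) for E in energies_eV]
    Z = sum(weights)
    return [w / Z for w in weights]

levels = [0.0, 0.05, 0.10]                   # eV; hypothetical three-level atom
print(boltzmann_probs(levels, 300.0))        # ground state dominates
print(boltzmann_probs(levels, 3000.0))       # higher states much more likely at higher T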
c. Partition function
The sum over states (Zustandssumme) is called the partition function,
$$Z = \sum_s e^{-E_s/kT}.$$
In principle, it's a sum over all the particle states of a system, and therefore contains the statistical information about the system. All of the thermodynamic properties of the system are derivable from the partition function. For instance (anticipating the identification $F = -kT\ln Z$ made in Sec 6.5 below):

Entropy:
$$S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} = k\ln Z + kT\left(\frac{\partial \ln Z}{\partial T}\right)_{V,N}$$

Pressure:
$$P = -\left(\frac{\partial F}{\partial V}\right)_{T,N} = kT\left(\frac{\partial \ln Z}{\partial V}\right)_{T,N}$$
2. Probability Sec 6.2
a. Ensembles (awnsombles)
microcanonical
In an isolated system, every microstate has equal
probability of being occupied. The total
energy is fixed. The collection of all
the possible microstates of the system is called the microcanonical ensemble
of states.
canonical
If the probability distribution is the Boltzmann
distribution, the collection of energy states is called the canonical
ensemble. We’ve seen that such a
distribution applies to a system at constant temperature and fixed number of
particles, in contact with an energy reservoir.
The internal energy is not fixed, but we expect only small fluctuations
from an equilibrium value.
If the number of particles is allowed to change, then we
have to sum also over all possible numbers of particles, as well as all
possible energy states, giving the grand canonical ensemble.
b. Average values
The probability that a particle will be observed to occupy a particular energy state is given by the Boltzmann distribution. The average value of a quantity X over the ensemble is therefore
$$\bar{X} = \sum_s X(s)P(s) = \frac{1}{Z}\sum_s X(s)\,e^{-E_s/kT}.$$

3. A Couple of Applications
a. Equipartition Theorem Sec 6.3
A particle's kinetic energy is proportional to the square of its velocity components. In Cartesian coordinates, $KE = \frac{1}{2}m\left(v_x^2 + v_y^2 + v_z^2\right)$. In a similar vein, the rotational kinetic energy of a rigid body is also proportional to the square of the angular velocity, thus $\frac{1}{2}I\omega^2$. In the case of a harmonic oscillator, the potential energy is also proportional to a square, namely the displacement components, $\frac{1}{2}k_sx^2$. Very often, we approximate the real force acting on a particle with the linear restoring force of the harmonic oscillator. Let us consider a generalized quadratic degree of freedom, $E(q) = cq^2$. Each value that q takes on represents a distinct particle state. The energy is quantized, so the q-values are discrete, with spacing $\Delta q$.





The partition function for this "system" is a sum over those q-states,
$$Z = \sum_q e^{-\beta cq^2}.$$
In the classical limit $\Delta q$ is small, and the sum goes over to an integral:
$$Z = \frac{1}{\Delta q}\int_{-\infty}^{\infty} e^{-\beta cq^2}\,dq = \frac{1}{\Delta q}\sqrt{\frac{\pi}{\beta c}}.$$

The average energy is
$$\bar{E} = -\frac{\partial \ln Z}{\partial \beta} = \frac{1}{2\beta} = \frac{1}{2}kT.$$

So, each quadratic degree of freedom, at equilibrium, will have the same amount of energy, $\frac{1}{2}kT$. But this equipartition of energy theorem is valid only in the classical, high-temperature limit, that is, when the energy level spacing is small compared to kT. We saw earlier that degrees of freedom can be "frozen out" as temperature declines.
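A quick numerical check (mine, in arbitrary units): sum the Boltzmann weights over discrete q-states and compare the average energy with kT/2. The spacing and cutoff below are arbitrary choices.

import math

def average_energy(c=1.0, dq=1.0, kT=10.0, qmax=2000):
    """Average energy of one quadratic degree of freedom E = c*q^2,
    summing over discrete q-states spaced by dq."""
    qs = [n * dq for n in range(-qmax, qmax + 1)]
    weights = [math.exp(-c * q * q / kT) for q in qs]
    Z = sum(weights)
    return sum(w * c * q * q for w, q in zip(weights, qs)) / Z

print(average_energy(kT=10.0))   # ~5.0 = kT/2: level spacing << kT, classical limit
print(average_energy(kT=0.1))    # << kT/2: the degree of freedom freezes out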
b. Maxwell Speed Distribution Sec 6.4
The distribution function has two parts. The first is the probability of an atom having speed v. That's given by the Boltzmann probability distribution, $e^{-mv^2/2kT}$. The second factor is a multiplicity factor—how many velocity vectors have the same magnitude, v.

The number of velocity vectors that have the same magnitude is obtained by computing the surface area of a sphere of radius v in velocity space. That is, $4\pi v^2$. So
$$D(v) = C\cdot 4\pi v^2\,e^{-mv^2/2kT}.$$
The C is a proportionality constant, which we evaluate by normalizing the distribution function:
$$\int_0^{\infty} D(v)\,dv = 1 \quad\Longrightarrow\quad C = \left(\frac{m}{2\pi kT}\right)^{3/2}.$$
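A minimal sketch (mine) checking the normalization numerically; the nitrogen mass is approximate, and the crude Riemann sum is my own choice.

import math

def maxwell_speed_pdf(v, m, T):
    """D(v) = (m/2 pi k T)^{3/2} * 4 pi v^2 * exp(-m v^2 / 2 k T); SI units."""
    k = 1.381e-23
    a = m / (2.0 * math.pi * k * T)
    return a ** 1.5 * 4.0 * math.pi * v * v * math.exp(-m * v * v / (2.0 * k * T))

m_N2, T = 4.65e-26, 300.0                    # kg (approximate N2 mass), K
dv = 1.0                                     # m/s
total = sum(maxwell_speed_pdf(i * dv, m_N2, T) * dv for i in range(4000))
print(total)                                 # ~1.0, as required of a probability density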



B. Adding up the States Sec 1.2, 1.7, 2.5, 6.6, 6.7
1. Two-State Paramagnet Sec 6.6
The specifics of computing the partition function for a
system depend on the nature of the system—the specifics of its energy
levels. For instance, each dipole moment
in an ideal two-state paramagnet in an external magnetic field has two discrete
states.
a. Single dipole
There are two states in a system consisting of a single magnetic dipole, with energies $E = \mp\mu B$ (moment parallel or antiparallel to the field). Therefore, the partition function is
$$Z_1 = e^{\beta\mu B} + e^{-\beta\mu B} = 2\cosh(\beta\mu B).$$

b. Two or more dipoles
If the system consists of two non-interacting, distinguishable dipoles, then there are four states: $\uparrow\uparrow,\ \uparrow\downarrow,\ \downarrow\uparrow,\ \downarrow\downarrow$, so
$$Z = e^{2\beta\mu B} + 2 + e^{-2\beta\mu B} = Z_1^2.$$

Now if the dipoles are indistinguishable, there are fewer distinct states, namely three for N = 2, so $Z = e^{2\beta\mu B} + 1 + e^{-2\beta\mu B}$. This is because the states $\uparrow\downarrow$ and $\downarrow\uparrow$ are the same state if the dipoles are indistinguishable.

Extending to N dipoles, $Z = Z_1^N$ for distinguishable dipoles; $Z \approx \frac{Z_1^N}{N!}$ for indistinguishable dipoles, if N is large.
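A small check (mine) that the four-state enumeration reproduces $Z_1^2$ for two distinguishable dipoles; x stands for $\mu B/kT$ and is an arbitrary value.

import math

def two_state_Z(N, x, distinguishable=True):
    """Partition function of N non-interacting two-state dipoles; x = mu*B/kT.
    Z1 = 2*cosh(x); Z = Z1**N if distinguishable, ~ Z1**N/N! if not (N large)."""
    Z1 = 2.0 * math.cosh(x)
    return Z1 ** N if distinguishable else Z1 ** N / math.factorial(N)

x = 0.7
states = [-2.0 * x, 0.0, 0.0, 2.0 * x]       # energies/kT: up-up, up-down, down-up, down-down
print(sum(math.exp(-E) for E in states), two_state_Z(2, x))   # the two agree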


2. Ideal Gas Sec 6.7
a. One molecule
The partition function of a single molecule factors into translational, rotational, and internal (vibrational) parts:
$$Z_1 = Z_{tr}\,Z_{rot}\,Z_{int}.$$
b. Two or more molecules
If the molecules are not interacting, then as before, the partition function for N molecules is just $Z = Z_1^N$ or $Z = \frac{Z_1^N}{N!}$, depending on whether the molecules are distinguishable or not. In the ideal gas, the molecules are not distinguishable one from another. If the molecules were in a solid, then they would be distinguishable because their positions would be distinctly different.


[Note that in the text Section 6.7, the rotational partition
function is lumped in with the internal partition function.]
c. Internal partition function
The internal partition function sums over the internal vibrations of the constituent atoms. We would usually approximate the energy levels by harmonic oscillator energy levels (measured from the ground state):
$$Z_{int} = \prod_i \sum_n e^{-n\hbar\omega_i/kT}.$$
The index i labels the vibrational modes, while n labels the uniformly spaced energy levels for each mode. For instance, the water molecule has three intramolecular vibrational modes. A molecule having more atoms has more modes. A diatomic molecule has just one mode of vibration.
d. Rotational partition function
A molecule is constrained to a particular shape (internal vibrational motions apart), which we regard as rotating like a rigid body. The angular momentum, and therefore the rotational kinetic energy, is quantized, thusly
$$E_{rot} = \frac{L^2}{2I},$$
where I is the moment of inertia of the molecule about the rotational axis. Classically, if $\omega$ is the angular velocity and L is the magnitude of the angular momentum, then the kinetic energy of rotation is
$$KE_{rot} = \frac{1}{2}I\omega^2 = \frac{L^2}{2I}.$$
Quantum mechanically, the angular momentum is quantized, so that
$$L^2 = j(j+1)\hbar^2,$$
with j equaling an integer; each level is $(2j+1)$-fold degenerate.
Now, this applies to each distinct axis of rotation. In three dimensions, we start with three
axes, but the symmetry of the molecule may reduce that number. The water molecule has three axes, but a
carbon monoxide molecule has only one.
Basically, we look for axes about which the molecule has a different
moment of inertia, I. But it goes beyond that. If the symmetry of the molecule is such that
we couldn’t tell, so to speak, whether the molecule was turning, then that axis
does not count. That’s why there are no
states for an axis that runs through the carbon and oxygen atoms of carbon
monoxide.
Therefore, a rotational partition function will look something like this for three axes:
$$Z_{rot} = \left[\sum_j (2j+1)\,e^{-j(j+1)\hbar^2/2I_1kT}\right]\times\left[\text{similar factors for the other independent axes}\right].$$
e. Translational partition function
In an ideal gas, the molecules are not interacting with each other. So the energy associated with the molecular center of mass is just the kinetic energy, $E = \frac{p^2}{2m}$. The molecule is confined to a finite volume, V, so that kinetic energy is quantized also.

First consider a molecule confined to a finite "box" of length Lx on the x-axis. The wave function is limited to standing wave patterns of wavelengths $\lambda = \frac{2L_x}{n_x}$, where nx = 1, 2, 3, 4, . . . This means that the x-component of the momentum is limited to the discrete values $p_x = \frac{h}{\lambda} = \frac{hn_x}{2L_x}$. The allowed values of kinetic energy follow as
$$E_{n_x} = \frac{p_x^2}{2m} = \frac{h^2n_x^2}{8mL_x^2}.$$

Naturally, the same argument holds for motion along the y- and z-axes.

Unless the temperature is very low, or the volume V is very small, the spacing between energy levels is small and we can go over to integrals:
$$Z_x = \sum_{n_x} e^{-E_{n_x}/kT} \approx \int_0^{\infty} e^{-h^2n_x^2/8mL_x^2kT}\,dn_x = \frac{\sqrt{2\pi mkT}}{h}\,L_x = \frac{L_x}{\ell_Q}.$$

The quantity
$$v_Q = \ell_Q^3 = \left(\frac{h}{\sqrt{2\pi mkT}}\right)^3$$
is the quantum volume of a single molecule. It's a box whose side is proportional to the de Broglie wavelength of the molecule. In terms of that, the translational partition function is
$$Z_{tr} = Z_xZ_yZ_z = \frac{V}{v_Q}.$$
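A minimal sketch (mine) of the classical-limit check; the helium mass is approximate, and the comparison volume (per molecule at 1 atm, 300 K) is computed from the ideal gas law.

import math

def quantum_volume(m, T):
    """v_Q = (h / sqrt(2 pi m k T))**3 in m^3; m in kg, T in K."""
    h, k = 6.626e-34, 1.381e-23
    return (h / math.sqrt(2.0 * math.pi * m * k * T)) ** 3

m_He, T = 6.65e-27, 300.0
v_q = quantum_volume(m_He, T)
v_per_molecule = 1.381e-23 * T / 1.0e5       # kT/P at 1 atm (~1e5 Pa)
print(v_q, v_per_molecule)                   # v_Q is millions of times smaller: classical limit holds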


[Actually, in the classical form of the partition function, we are integrating over the possible (continuous) values of particle momentum and position:
$$Z_{cl} = \int e^{-E(x,p)/kT}\,d^3x\,d^3p.$$
The classical partition functions differ from the classical limit of the quantum mechanical partition functions by factors of h. Because h is constant, this makes no difference in the derivatives of the logarithm of Z.]
Putting the parts all together, for a collection of N indistinguishable molecules,
$$Z = \frac{1}{N!}\left(\frac{V}{v_Q}\,Z_{rot}\,Z_{int}\right)^N.$$
3. Thermodynamic Properties of the Ideal Monatomic Gas
a. Helmholtz free energy Sec 6.5
Consider the derivative of the partition function with respect to temperature:
$$kT^2\frac{\partial \ln Z}{\partial T} = \frac{1}{Z}\sum_s E_s\,e^{-E_s/kT} = U.$$
On the other hand, recall the definition of the Helmholtz free energy, $F = U - TS$ with $S = -\left(\frac{\partial F}{\partial T}\right)_{V,N}$, so that
$$U = F - T\frac{\partial F}{\partial T} = -T^2\frac{\partial}{\partial T}\left(\frac{F}{T}\right).$$
Evidently, we can identify the Helmholtz free energy in terms of the partition function thusly,
$$F = -kT\ln Z.$$
For the monatomic ideal gas, $Z = \frac{1}{N!}\left(\frac{V}{v_Q}\right)^N$, whence (using the Stirling approximation)
$$F = -NkT\left[\ln\left(\frac{V}{Nv_Q}\right) + 1\right].$$
b. Energy & heat capacity
$$U = kT^2\frac{\partial \ln Z}{\partial T} = \frac{3}{2}NkT, \qquad C_V = \left(\frac{\partial U}{\partial T}\right)_{V} = \frac{3}{2}Nk.$$
4. Solids Sec 2.2, 3.3, 7.5
5. Photons Sec 7.4

What we have here is the energy per unit volume per unit energy,
$$u(\epsilon) = \frac{8\pi}{(hc)^3}\,\frac{\epsilon^3}{e^{\epsilon/kT} - 1},$$
also called the spectrum of the photons. It's named the Planck spectrum, after the fellow who first worked it out, Max Planck.

Notice that the total energy per unit volume, $\int_0^{\infty} u(\epsilon)\,d\epsilon$, is proportional to $T^4$, and that the spectrum peaks at $\epsilon \approx 2.82\,kT$. These "Laws" had been obtained empirically, and were called the Stefan-Boltzmann "Law" and Wien's Displacement "Law."


d. Black body radiation
Of course, the experimentalists were measuring the spectra
of radiation from various material bodies at various temperatures. Perhaps we should verify that the radiation
emitted by a material object is the same as the spectrum of photon energies in
the oven. So, consider an oven at
temperature T, and imagine a small
hole in one side. What is the spectrum
of photons that escape through that hole?
Well, the spectrum of the escaping photons must be the same as the
photon gas in the oven, since all photons travel at the same speed, c.
By the same token, the energy emitted through the hole is proportional to $T^4$.

Finally, we might consider a perfectly absorbing material
object exchanging energy by radiation with the hole in the oven. In equilibrium (at the same T as the oven), the material object (the
black body) must radiate the same power and spectrum as the hole, else they
would be violating the Second “Law” of thermodynamics.
6. Specific Heat (Heat Capacity) of Solids Sec. 7.5
b. Debye theory
of specific heat
The oscillators do not vibrate independently. Rather, there are collective modes of vibration in the crystal lattice. We'll treat the situation as elastic waves propagating in the solid. We consider that the energy residing in an elastic wave of frequency $\omega$ is quantized; quanta of elastic vibrations are called phonons. Secondly, there is an upper limit to the frequency that can exist in the crystal—the cut-off frequency.

So, consider sound waves propagating in the crystal, with the dispersion relation $\omega = c_sk$. The total vibrational energy of the crystal will be
$$U = \int \bar{E}(\omega)\,g(\omega)\,d\omega.$$
The average energy of a mode is still
$$\bar{E} = \frac{\hbar\omega}{e^{\hbar\omega/kT} - 1},$$
and in a continuous medium the density of modes is $g(\omega) = \frac{3V\omega^2}{2\pi^2c_s^3}$ (counting all three polarizations). Therefore,
$$U = \int \frac{3V\omega^2}{2\pi^2c_s^3}\,\frac{\hbar\omega}{e^{\hbar\omega/kT} - 1}\,d\omega.$$
Now, what is the range of frequency? Not $0 \leq \omega < \infty$, but $0 \leq \omega \leq \omega_D$, where $\omega_D$ is the Debye frequency, or cut-off frequency. The cut-off frequency arises because the shortest possible wavelength is determined by the inter-atomic spacing. Put another way, the maximum possible number of vibrational modes in the crystal is equal to the number of atoms (let's say a mole) in the crystal, times 3. I.e.,
$$3N = \int_0^{\omega_D} g(\omega)\,d\omega = \frac{V\omega_D^3}{2\pi^2c_s^3}, \quad\text{whence}\quad \omega_D = c_s\left(\frac{6\pi^2N}{V}\right)^{1/3}.$$
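A numerical sketch (mine) of the resulting heat capacity, using the standard Debye form $C_V = 9Nk\,(T/T_D)^3\int_0^{T_D/T} x^4e^x/(e^x - 1)^2\,dx$; the Debye temperature below is roughly that of copper, quoted from memory, and the midpoint-rule integration is a crude but adequate choice.

import math

def debye_heat_capacity(T, T_debye, steps=2000):
    """C_V in units of Nk. High T -> 3 (equipartition);
    low T -> (12 pi^4 / 5)(T/T_debye)**3, the Debye T-cubed law."""
    x_d = T_debye / T
    dx = x_d / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx                 # midpoint rule; integrand ~ x**2 near 0
        total += x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 * dx
    return 9.0 * (T / T_debye)**3 * total

T_D = 343.0                                # K, roughly the Debye temperature of copper
for T in (10.0, 100.0, 300.0, 1000.0):
    print(T, debye_heat_capacity(T, T_D))  # rises from ~0 toward the classical value 3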

c. Reduced temperature