I just need something clarified. In the text (page 17) they describe the momentum operator p as <math>\frac{\hbar}{i}\frac{\partial}{\partial x}</math>, but on the front cover of the book, it shows i in the numerator instead of the denominator. I understand the change in notation between partial derivatives and the del operator, but is there any particular significance to the use of <math>\frac{1}{i}</math> instead of i?
By definition, <math>\frac{1}{i}</math> is equal to -i; that is where the negative sign in the equation on the front cover comes from.
That also had me confused at first, but since i is the square root of -1 it doesn't really matter whether it is placed in the denominator or the numerator. Once you square i you get back -1.
Pluto 4ever, maybe it doesn't matter whether i is in the denominator or numerator in the specific case where you are squaring it, but its position does matter in general, since, for example, <math>\frac {1}{i}</math> * i gives a different result (1) than i * i (-1). Perhaps that is what you meant; I just wanted to clarify.
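To make the equivalence explicit (just restating the algebra above): since <math>i\cdot(-i)=1</math>, we have <math>\frac{1}{i}=-i</math>, and therefore <math>\frac{\hbar}{i}\frac{\partial}{\partial x}=-i\hbar\frac{\partial}{\partial x}</math>, which is the form that appears on the front cover.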
Can anyone explain what happened to the cross terms in the forms given for <math>\sigma^2</math> in lecture Sept. 9th? Thanks.
The equation we have for the standard deviation is <math>\sigma^2=\int(x-\bar{x})^2P(x)dx</math>. Expand the polynomial to get <math>\int(x^2-2x\bar{x}+\bar{x}^2)P(x)dx=\int x^2P(x)dx-\int 2x\bar{x}P(x)dx+\int \bar{x}^2P(x)dx</math>.
Then we can move the constants outside the integrals. An important point here is to remember that <math>\bar{x}</math> is not a function of x, so it can also be moved outside the integrals.
So we have <math>\int x^2P(x)dx - 2\bar{x}\int xP(x)dx + \bar{x}^2\int P(x)dx</math>.
Assuming all the integrals are being evaluated from <math>-\infty</math> to <math>\infty</math>, <math>\int P(x)dx = 1</math> (since a probability density must be normalized), and <math>\int xP(x)dx = \bar{x}</math>.
The final result is <math>\int x^2P(x)dx - 2\bar{x}^2 + \bar{x}^2 = \int x^2P(x)dx - \bar{x}^2</math>.
Recall that <math>\int x^2P(x)dx</math> is the average of <math>x^2</math> (i.e. <math>\bar{x^2}</math>) for the distribution, so we can also write this result as <math>\sigma^2 = \bar{x^2} - \bar{x}^2</math>.
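If it helps, here is a quick numerical sanity check of that result (my own sketch, not from the lecture), using an arbitrary normalized density — a triangular one on [0, 2]:

<code python>
import numpy as np

# Check that sigma^2 = <x^2> - <x>^2 for an arbitrary normalized density P(x).
x = np.linspace(0.0, 2.0, 200001)
dx = x[1] - x[0]
P = np.where(x <= 1.0, x, 2.0 - x)           # triangular density on [0, 2]
P /= np.sum(P) * dx                           # normalize so that integral of P dx = 1

mean_x  = np.sum(x * P) * dx                  # <x>
mean_x2 = np.sum(x**2 * P) * dx               # <x^2>
var_def = np.sum((x - mean_x)**2 * P) * dx    # sigma^2 from the definition

print(var_def, mean_x2 - mean_x**2)           # the two agree to numerical precision
</code>

Both numbers come out the same (about 1/6 for this density), as the algebra above says they must.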
In chapter 1 it mentions on various occasions that the measurement of a wave or particle forces it to take a position or momentum, depending on the measurement. It then says that repeated measurements in quick succession would yield the same result, but if you wait long enough the wave or particle will return to its original state. We will probably return to this again in the book, but looking at the Schrodinger equation given thus far, I am having troubles wrapping my head around where the return to original state is built into the equation.
Can anyone qualitatively explain this? I am sure we will cover it again later so just a quick summary would be appreciated.
I also have that question…I think the particle has a particular wave function that it is assumed to naturally reside at, but is it just the nature of the particle to want to return to its original state some time after it has been forced to resolve into a particular value, or is there something in the equation that forces it back?
This may just be a brain fart on my part, but could someone explain how one goes from Eq[1.29] to [1.31] (that's pg 15-16)? I guess I don't quite understand how one can “peel a derivative.” If there's an adequate link to answer this, I will be satisfied, thanks.
I've attempted to answer this brain fart and only arrived at my own. The partial derivative of x with respect to x is 1; then I used integration by parts on eq. 1.30 and arrived at something like 1.31. Can somebody show how integrating by parts on 1.30 leads to 1.31, or let me know if I have overlooked anything?
A similar problem is seen in eqs. 1.25 and 1.26, only without the factor of x. I don't know where the partial derivative of x goes between eq. 1.25 and 1.26. Again, what have I overlooked?
For Eq. 1.30 to 1.31: you are only integrating by parts on the second term of eq. 1.30 to yield an integral that is identical to the first term (notice that the 1/2 is gone in 1.31). If you look at the footnote on page 16, the integration by parts is explained. As Griffiths states, the boundary term is zero because the wave function goes to zero at +/- infinity.
Going from Eq. 1.25 to 1.26: you integrate both sides of eq. 1.25 with respect to x. The integral of a partial derivative of something simply yields that something.
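In case the footnote is still cryptic, the generic step (writing f for the bracketed expression inside the integral in 1.30) is <math>\int_{-\infty}^{\infty} x\,\frac{\partial f}{\partial x}\,dx = \Big[x\,f\Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f\,dx = -\int_{-\infty}^{\infty} f\,dx</math>, where the boundary term vanishes because the wave function (and hence f) goes to zero at <math>\pm\infty</math>. Without the factor of x, the first part alone, <math>\int_{-\infty}^{\infty} \frac{\partial f}{\partial x}\,dx = \Big[f\Big]_{-\infty}^{\infty}</math>, is the step described in the reply above for going from 1.25 to 1.26.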
Is the probability density for a wave function the same as a probability density function (http://en.wikipedia.org/wiki/Probability_density_function)?
On pg. 19 Griffiths touches on the uncertainty principle and explains the de Broglie formula <math>p={h\over{\lambda}}</math> with the statement “thus a spread in wavelength corresponds to a spread in momentum.” I guess I'm having a hard time understanding this statement given that momentum and wavelength are inversely related in the formula. Could someone explain?
See the question submitted by prest121 above and the associated discussion. It's the last question under 09/09/09. If that doesn't answer your question, maybe it will help you determine a more specific question.
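One way to make the “spread corresponds to spread” statement concrete (a small addition, not from the text): treat the de Broglie relation as a function of λ and differentiate, <math>p=\frac{h}{\lambda}\;\Rightarrow\;\Delta p \approx \left|\frac{dp}{d\lambda}\right|\Delta\lambda = \frac{h}{\lambda^2}\,\Delta\lambda.</math> The inverse relation only changes how large the momentum spread is for a given wavelength spread; it doesn't prevent a spread in one from implying a spread in the other.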
I am just curious about how the Gaussian function was found by mathematicians. I tried this way: <math>\int x^2 f(x)\,\mathrm dx = \sigma^2</math>. But I don't know how to get the solution for f(x).
Really good question!! Well, you know: <math>\sigma^2= <x^2>-<x>^2=\int x^2 f(x)\,\mathrm dx</math>. By differentiating both sides and solving for f(x), you should get the Gaussian distribution. Can you give me the link that gives you the definition <math>\int x^2 f(x)\,\mathrm dx = \sigma^2</math>? I didn't explicitly solve for f(x) myself, but I think these are the right steps to getting f(x).
The equation you are suggesting to differentiate does not depend on any variable, so if you differentiate, you will get zero on both sides. Note that the integral is a definite integral, not an indefinite one. So after the integral is done, it represents just a number.
With just the one constraint that Hardy presented, one cannot determine a function. There are not enough constraints to do so.
If I remember correctly, the Gaussian function is special in the sense that it is completely determined by its mean (1st moment) and variance (2nd central moment): every higher central moment <math>\int (x-<x>)^n f(x)\,\mathrm dx</math>, n > 2, is either zero (odd n) or fixed by <math>\sigma</math> (even n), so they carry no new information. With these constraints, one can in principle determine the function.
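If anyone wants to check that numerically, here is a quick sketch (my own, not from the text) computing central moments of a Gaussian density: the odd ones vanish and the 4th comes out as <math>3\sigma^4</math>, i.e. determined by <math>\sigma</math> alone.

<code python>
import numpy as np

# Central moments of a Gaussian density, computed numerically.
mu, sigma = 1.5, 0.7
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
dx = x[1] - x[0]
f = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def central_moment(n):
    """integral of (x - <x>)^n f(x) dx."""
    xbar = np.sum(x * f) * dx
    return np.sum((x - xbar)**n * f) * dx

print(central_moment(2), sigma**2)        # 2nd moment: sigma^2
print(central_moment(3))                  # 3rd moment: ~0
print(central_moment(4), 3 * sigma**4)    # 4th moment: 3 sigma^4, fixed by sigma
</code>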
We will wrap up Chapter 1 this Monday, and move on to Chap 2. I am hoping to move fairly fast through Chap 2 since I am assuming that the infinite square well, etc., are familiar to you from the Modern Physics class. If I am wrong about this, please let me know. If you want me to move slower because you are not sure about some of the material presented, please let me know that, too. If you spend time coming up with specific questions, which would help you understand the material better, that would be great. Yuichi
Has anyone come up with an explanation for the question raised at the end of Friday's lecture? How do we interpret the physical meaning of eigenvalues? I understand how it works mathematically and how to solve for them. What kinds of physical quantities have eigenvalues — the wavefunction, energy, momentum? In the summation <math>\sum_{n=1}^{\infty} c_n\psi_n </math>, is <math>\psi_n</math> an eigenfunction?
Eigen means characteristic. To be an eigenfunction, the function and its derivatives must be single valued, continuous, and finite. That being said, consider the time-independent Schroedinger eq., which was obtained from separation of variables. The eigenfunction is ψ(x) and the eigenvalue is E. Therefore operating on an eigenfunction returns the eigenfunction multiplied by its eigenvalue. The wave function seen in this equation is a time-independent solution determined by Ψ(x,t)=ψ(x)EXP(-iEt/ħ), where ψ(x) is the space dependence of the solutions Ψ(x,t) to the Sch. eq. Or:
“A particular quantum mechanical system is described by a particular (time-independent) potential energy function. The Sch. eq. of this potential is also time independent, and acceptable solutions exist only for certain values of the energy (eigenvalues) of the given potential. Corresponding to each eigenvalue is an eigenfunction ψ(x) which is a solution to the time-independent Sch. eq. For each eigenvalue there is also a corresponding wavefunction Ψ(x,t), all of which will have the same quantum numbers. Since the eq. is linear in the wave function, linear combinations of these functions will also be solutions, i.e. <math>\sum_{n=1}^{\infty} c_n\psi_n </math>. The time-independent Sch. eq. is also linear; therefore an arbitrary linear combination of different solutions will satisfy the equation only if they have the same E (eigenvalue).”
Ref.: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, second edition, by Robert Eisberg and Robert Resnick, ch. 5 and 6.
I am not sure if this is answering Can's question, but it does address why finding the eigenfunctions of the Hamiltonian is important. In fact, when we talk about “solving the Schrodinger equation,” it is the same as finding the eigenfunctions and associated eigenvalues of the Hamiltonian, which are the allowed energy levels.
When the spatial part of the wave function is an eigenfunction of the Hamiltonian, the associated energy is well defined, and therefore the time dependence can be written as EXP(-iEt/ħ). This cannot be done for an arbitrary wave function since there is no well-defined energy associated with it.
If a particle is found to be in some quantum state at time t=0 (set to zero for convenience without losing generality), this implies that the wave function is known at t=0. Or Ψ(x,0) is known. However, since often this is NOT an eigenfunction of the Hamiltonian, its energy is not a well defined quantity, and as a result, we cannot say the wave function at a later time, Ψ(x,t), equals Ψ(x,0)EXP(-iEt/ħ). We wouldn't know what value we should assign to “E”.
However, when we solve the Schrodinger equation, meaning that we find the eigenfunctions, the set of eigenfunctions is “complete”: any function can be expressed as a linear combination of them. I.e. we can always expand Ψ(x,0) in the form <math>\Psi(x,0)=\sum_{n=1}^{\infty} c_n\psi_n </math>. Once this is done, the time dependence of each term in the expansion can be determined: <math>{\rm e}^{-iE_n t/\hbar}</math>.
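For anyone who likes to see this procedure in numbers, here is a minimal sketch (my own example using the infinite square well; the initial “tent” wave function and the cutoff N = 50 are arbitrary choices): compute the <math>c_n</math> from Ψ(x,0), then attach the phase <math>{\rm e}^{-iE_n t/\hbar}</math> to each term.

<code python>
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0               # units chosen so hbar = m = a = 1
x = np.linspace(0.0, a, 2001)
dx = x[1] - x[0]

def psi_n(n, x):
    """Normalized eigenfunction of the infinite square well on [0, a]."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

def E_n(n):
    """Energy eigenvalue of the n-th state."""
    return (n * np.pi * hbar)**2 / (2.0 * m * a**2)

# Some initial state Psi(x, 0): here a normalized "tent" function.
Psi0 = np.minimum(x, a - x)
Psi0 = Psi0 / np.sqrt(np.sum(np.abs(Psi0)**2) * dx)

# Expansion coefficients c_n = integral of psi_n(x) * Psi(x, 0) dx
N = 50
c = np.array([np.sum(psi_n(n, x) * Psi0) * dx for n in range(1, N + 1)])

def Psi(t):
    """Psi(x, t) = sum_n c_n psi_n(x) exp(-i E_n t / hbar)."""
    return sum(c[n - 1] * psi_n(n, x) * np.exp(-1j * E_n(n) * t / hbar)
               for n in range(1, N + 1))

print(np.sum(np.abs(Psi(0.0))**2) * dx)   # ~1: normalization at t = 0
print(np.sum(np.abs(Psi(3.0))**2) * dx)   # ~1: preserved, each term only gets a phase
</code>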
As you will see in Chapter 3, once you express your wave function as a linear combination of energy eigenfunctions, many of the calculations are reduced to calculating <math>\int \psi^*_n H \psi_m\,dx,</math> which can be considered the dot product of the <math>\psi_n</math> vector with the <math>\psi_m</math> vector after it has been “twisted” by the Hamiltonian operator. But since <math>\psi_m</math> is an eigenfunction of the Hamiltonian, its direction does not change under the “twist.” (Usually, operators such as the Hamiltonian, angular momentum, etc. rotate, flip, or change the length of a wave function vector. But when the vector is an eigenvector of the operator, there is no rotation; just the length of the vector changes.) Because it turns out that the <math>\psi_n</math> and <math>\psi_m</math> vectors are orthogonal (when m and n are not the same), the above “dot product” reduces to zero unless m and n are the same. So these calculations are often greatly simplified once wave functions are expanded as a linear combination of energy eigenfunctions.
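In symbols (just restating the paragraph above): if <math>H\psi_m = E_m\psi_m</math> and the eigenfunctions are orthonormal, then <math>\int \psi^*_n H \psi_m\,dx = E_m\int \psi^*_n \psi_m\,dx = E_m\,\delta_{nm}</math>, so the whole array of “dot products” is diagonal.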
Can someone please explain what the difference is between a poorly defined wave and a single pulse?
If I understand the question, I believe this correspondence is shown on page 19. Figure 1.7 shows a wave that has no well-defined position, and Figure 1.8 shows a pulse which has a fairly well-defined position but a poorly defined wavelength.
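If a picture in numbers helps, here is a rough sketch of the same trade-off (my own, not from the book): a long wave train has a narrow spectrum (well-defined wavelength), while a narrow pulse has a broad spectrum (poorly defined wavelength).

<code python>
import numpy as np

x = np.linspace(-200.0, 200.0, 8192)
dx = x[1] - x[0]
k0 = 2.0 * np.pi / 5.0                                         # central wavelength = 5

long_train   = np.exp(1j * k0 * x) * np.exp(-(x / 30.0)**2)    # spread out in x
narrow_pulse = np.exp(1j * k0 * x) * np.exp(-(x / 1.0)**2)     # localized in x

def wavelength_spread(f):
    """RMS width in k of |FFT(f)|^2 -- a crude measure of the spread in wavelength."""
    F = np.abs(np.fft.fftshift(np.fft.fft(f)))**2
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    F = F / F.sum()
    kbar = np.sum(k * F)
    return np.sqrt(np.sum((k - kbar)**2 * F))

print(wavelength_spread(long_train))    # small: wavelength well defined (like Fig. 1.7)
print(wavelength_spread(narrow_pulse))  # large: wavelength poorly defined (like Fig. 1.8)
</code>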
Quick thought on Expectation values: At the end of page 14 and start of page 15, it is stated that the expectation value is absolutely NOT the average if you measure the position of a particle over and over again. As long as the time scale is short enough, repeated measurements will measure the same position. However, the expectation value IS the average if you measure many particles all with the same wavefunction. My question, then: if the timescale for the repeated measurements was sufficiently long, would the average of the one particle's measurements then match the expectation value?
Good question. Griffiths declares that averaged measurements on a single particle won't yield the expectation value (but doesn't specify for all time intervals). Consider an example: a particle lies somewhere between 0 and 10 with all points of equal probability and expectation value 5. Suppose your first measurement of the non-stationary particle shows its location at 9. Griffiths says that if you measure it a very short time later, say instantaneously later, you'll get 9 again. A slightly longer time interval might allow that single particle to move to position 8, and longer time intervals might allow it to move anywhere else in the interval.

For longer time intervals and repeated measurements of a continuously moving particle, I would say that over a long enough time interval between a large number of measurements you would obtain an average position very close to the expectation value. A non-moving particle or short intervals would definitely skew the results, and you could average a trillion measurements that are very far from the expectation value (obviously for a stationary particle or infinitesimal measurement intervals).

The main benefit I see with measurements on multiple particles with the same wavefunction is that moving versus stationary particles and long versus short time intervals between measurements don't matter. Each separate particle can be anywhere in the interval at any point in time, and infinitesimal measurement intervals could easily yield values on opposite sides of the interval (whereas on a single particle all of these measured values would be the same). Thus, statistically, you are essentially guaranteed a “random” spread in position measurements.

In conclusion, to make a long answer endless, I would support the idea that long enough time intervals between each of a large number of measurements on a non-stationary particle would yield an average that is very similar to the expectation value.
Can anyone tell me how Griffiths goes from <math> a_{j+2} \approx \frac{2}{j}a_{j} </math> to <math> a_j \approx \frac{C}{(j/2)!} </math>, where C is some constant, at the bottom of page 53 and the top of page 54?
Thanks!
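One quick way to see it (a consistency check rather than a derivation): plug the proposed form back into the recursion. If <math>a_j \approx \frac{C}{(j/2)!}</math>, then <math>a_{j+2} \approx \frac{C}{\left(\frac{j}{2}+1\right)!} = \frac{C}{\left(\frac{j}{2}+1\right)(j/2)!} \approx \frac{2}{j}\,a_j</math> for large j, which is exactly the approximate recursion Griffiths starts from, so the form is consistent up to corrections that are small when j is large.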
“ya, ya, that's right po, I been Schrodinger's Dog”
The focus of Chapter 2 of the text is the time-independent Schrodinger equation. A quick allusion is made to the time-dependent factor (<math>\phi(t)</math>) of the separable solutions to the Schrodinger equation, then the text continues into a discussion of the significance of the time-independent part. What is the significance of the time-dependent factor?
I don't understand the 1st paragraph of page 29… why doesn't the exponential part cancel in the general solution like it does in the separable solutions?
Across pages 26 and 27 Griffiths shows that Stationary States have a definite total energy which is constant in time. The exponential factor depends only on time and the energy; because a stationary state has a single definite energy E, the factors <math>{\rm e}^{+iEt/\hbar}</math> and <math>{\rm e}^{-iEt/\hbar}</math> in <math>\Psi^*\Psi</math> cancel exactly. The key is that this only works for Stationary States, which are the separable solutions. Remember that Separable Solutions represent a very small class of solutions to the Schrodinger equation. There are many more solutions that aren't Separable Solutions, and for those the energy is not required to be definite and constant in time; with more than one energy present, the combined exponential terms will not exactly cancel.

The exact solutions we obtain with the method of Separable Solutions are based on a highly specific and restrictive class of problems, representing a very small portion of the potential cases there are. It's a lot like division problems in arithmetic that early mathematicians restricted to only working with whole numbers. They could easily solve problems exactly that were expressed in whole numbers, but fractional numbers posed conceptual and technical issues. They had greatly simplified mathematics compared to the full range of arithmetical problems involving all Rational numbers or all Real Numbers (or the real and imaginary numbers that we use today), but their analysis of potential problems was greatly restricted. Just as it took more analytical development to be able to work with all Real Numbers, right now we have exact solutions in QM only for Separable Solutions until we develop our mathematics to correctly handle more classes of solutions. (I hope that description and analogy helps.)
There are cross terms:
<math>(ax+by+cz)\left(\frac{a}{x}+\frac{b}{y}+\frac{c}{z}\right)=a^2+b^2+c^2+ax\left(\frac{b}{y}+\frac{c}{z}\right)+by\left(\frac{a}{x}+\frac{c}{z}\right)+cz\left(\frac{a}{x}+\frac{b}{y}\right)\neq a^2+b^2+c^2.</math>
Here x, y, and z play the role of the time-dependent factors of the separable solutions, and a, b, and c play the role of the constant and position-dependent factors. The sum is the general solution.
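To spell the same point out with the actual wave functions (a small addition, for a two-term combination): if <math>\Psi(x,t)=c_1\psi_1(x){\rm e}^{-iE_1 t/\hbar}+c_2\psi_2(x){\rm e}^{-iE_2 t/\hbar}</math>, then <math>|\Psi|^2=|c_1\psi_1|^2+|c_2\psi_2|^2+2\,{\rm Re}\!\left[c_1^*c_2\,\psi_1^*\psi_2\,{\rm e}^{-i(E_2-E_1)t/\hbar}\right],</math> so the cross term keeps an oscillating time dependence unless <math>E_1=E_2</math> — this is exactly why the exponentials don't cancel in the general solution.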
I have a question regarding separable solutions. On page 28 it states, “It is simply a matter of finding the right constants <math>(c_1, c_2, \ldots)</math> so as to fit the initial conditions for the problem at hand.” Can anyone give an example of a real problem to better illustrate this point?
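One standard illustration (not an official answer, just a sketch using the infinite square well of Chapter 2): suppose the particle starts in <math>\Psi(x,0)=A\left[\psi_1(x)+\psi_2(x)\right]</math>, where <math>\psi_n(x)=\sqrt{2/a}\,\sin(n\pi x/a)</math>. Normalization fixes <math>A=1/\sqrt{2}</math>, so the “right constants” are <math>c_1=c_2=1/\sqrt{2}</math> and <math>c_n=0</math> otherwise, giving <math>\Psi(x,t)=\frac{1}{\sqrt{2}}\left[\psi_1(x){\rm e}^{-iE_1t/\hbar}+\psi_2(x){\rm e}^{-iE_2t/\hbar}\right].</math> More generally, for any given <math>\Psi(x,0)</math> the constants follow from the orthonormality of the eigenfunctions: <math>c_n=\int\psi_n^*(x)\,\Psi(x,0)\,dx</math> (Fourier's trick), which is the same integral the numerical sketch a few posts above computes.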