===== Sept 11 (Fri) - continue to think about Chap 1 materials =====

**Return to Q&A main page: [[Q_A]]**\\
**Q&A for the previous lecture: [[Q_A_0909]]**\\
**Q&A for the next lecture: [[Q_A_0914]]**

**If you want to see the lecture notes, click [[lec_notes]]**

**Main class wiki page: ** [[home]]

==== joh04684 - 13:46 9/10/09 ====
I just need something clarified. In the text (page 17) they describe the momentum operator //p// as \frac{\hbar}{i}\frac{\partial}{\partial x}, but on the front cover of the book //i// appears in the numerator instead of the denominator. I understand the change in notation between partial derivatives and the //del// operator, but is there any particular significance to the use of \frac{1}{i} instead of //i//?

=== Andromeda 16:05 9/10/09 ===
By definition, \frac{1}{i} is equal to //-i//; there is a negative sign in the equation on the front cover.

=== Pluto 4ever - 19:37 9/10/09 ===
That also had me confused at first, but since //i// is the square root of -1 it doesn't really matter whether it is placed in the denominator or the numerator. Once you square //i//, you get -1 either way.

=== Daniel Faraday - 20:37 9/10/09 ===
Pluto 4ever, maybe it doesn't matter whether //i// is in the denominator or the numerator in the specific case where you are squaring it, but its position does matter in general, since, for example, \frac{1}{i} \cdot i gives a different result (1) than i \cdot i (-1). Perhaps that is what you meant; I just wanted to clarify.

==== East End 19:15 9/10/09 ====
Can anyone explain what happened to the cross terms in the forms given for \sigma^2 in the Sept. 9th lecture? Thanks.

=== prest121 20:24 9/10/09 ===
The equation we have for the variance (the square of the standard deviation) is \sigma^2=\int(x-\bar{x})^2P(x)dx. Expand the polynomial to get \int(x^2-2x\bar{x}+\bar{x}^2)P(x)dx=\int x^2P(x)dx-\int 2x\bar{x}P(x)dx+\int \bar{x}^2P(x)dx. Then we can move the constants outside the integrals.
An important point here is to remember that \bar{x} is not a function of x, so it can also be moved outside the integrals. So we have \int x^2P(x)dx - 2\bar{x}\int xP(x)dx + \bar{x}^2\int P(x)dx. Assuming all the integrals are evaluated from -\infty to \infty, \int P(x)dx = 1 (since P(x) is a probability density) and \int xP(x)dx = \bar{x}. The final result is \int x^2P(x)dx - 2\bar{x}^2 + \bar{x}^2 = \int x^2P(x)dx - \bar{x}^2. Recall that \int x^2P(x)dx is the average of x^2 (written \bar{x^2}), so we can also write this result as \sigma^2 = \bar{x^2} - \bar{x}^2.

==== Captain America 21:52 - 9/10/09 ====
Chapter 1 mentions on various occasions that the measurement of a wave or particle forces it to take a definite position or momentum, depending on the measurement. It then says that repeated measurements in quick succession would yield the same result, but that if you wait long enough the wave or particle will return to its original state. We will probably return to this later in the book, but looking at the Schrodinger equation given thus far, I am having trouble wrapping my head around where the return to the original state is built into the equation. Can anyone qualitatively explain this? I am sure we will cover it again later, so just a quick summary would be appreciated.

=== joh04684 - 10:50 9/11/09 ===
I also have that question. I think the particle has a particular wave function that it is assumed to naturally reside in, but is it just the nature of the particle to want to return to its original state some time after it has been forced to resolve into a particular value, or is there something in the equation that forces it back?

==== ice IX 23:39 - 9/10/09 ====
This may just be a brain fart on my part, but could someone explain how one goes from Eq. [1.29] to [1.31] (that's pp. 15-16)? I guess I don't quite understand how one can "peel a derivative." If there's an adequate link that answers this, I will be satisfied. Thanks.
=== spillane 7:30 9/11 ===
I've attempted to answer this brain fart and only arrived at my own. The partial derivative of x with respect to x is 1; then I used integration by parts on Eq. 1.30 and arrived at something like 1.31. Can somebody show how integrating by parts on 1.30 leads to 1.31, or let me know if I have overlooked anything? A similar step appears between Eqs. 1.25 and 1.26, only without the factor of x. I don't know where the partial derivative goes between Eqs. 1.25 and 1.26. Again, what have I overlooked?

=== Mercury 8:10 09/11/09 ===
For Eq. 1.30 to 1.31: you integrate by parts only on the second term of Eq. 1.30, which yields an integral identical to the first term (notice that the 1/2 is gone in 1.31). The integration by parts is explained in the footnote on page 16. As Griffiths states, the boundary term is zero because the wave function goes to zero at \pm\infty. Going from Eq. 1.25 to 1.26: you integrate both sides of Eq. 1.25 with respect to x, and the integral of a partial derivative of something simply yields that something back.

=== vinc0053 13:10 9/18 ===
I had similar difficulty, but with the proof on page 13 in general. I assume Griffiths is skipping over simple steps, but I can't fill in what they may be. Could someone break these steps down Barney style?

==== ralph 10:25 9/11/09 ====
Is the probability density for a wave function the same as a probability density function (http://en.wikipedia.org/wiki/Probability_density_function)?

=== chavez 10:50 9/14/09 ===
Yes, with f = |\psi|^2.

==== ralph 10:35 9/11/09 ====
On pg. 19 Griffiths touches on the uncertainty principle and explains the de Broglie formula p={h\over{\lambda}} with the statement "thus a spread in wavelength corresponds to a spread in momentum." I'm having a hard time understanding this statement given that momentum and wavelength are inversely related in the formula. Could someone explain?
=== Anaximenes 15:35 09/11/09 ===
See the question submitted by prest121 above and the associated discussion; it's the last question under 09/09/09. If that doesn't answer your question, maybe it will help you formulate a more specific one.

==== Hardy 16:35 09/11/09 ====
I am just curious about how the Gaussian function was found by mathematicians. I tried this way: \int x^2 f(x)\,\mathrm dx = \sigma^2 (taking the mean to be zero). But I don't know how to get the solution for f(x).

=== Schrodinger's Dog 1:21 09/12/09 ===
Really good question!! Well, you know: \sigma^2 = \bar{x^2} - \bar{x}^2 = \int x^2 f(x)\,\mathrm dx - \bar{x}^2. By differentiating both sides and solving for f(x), you should get the Gaussian distribution. Can you give me the link that gives you the definition \int x^2 f(x)\,\mathrm dx = \sigma^2? I didn't explicitly solve for f(x) myself, but I think these are the right steps to getting f(x).

=== Yuichi ===
The equation you are suggesting to differentiate does not depend on any variable, so if you differentiate, you will get zero on both sides. Note that the integral is a definite integral, not an indefinite one, so after the integral is done it represents just a number. With only the one constraint that Hardy presented, one cannot determine a function; there are not enough constraints to do so. If I remember correctly, the Gaussian function is special in the sense that its mean (1st cumulant) and variance (2nd cumulant) are non-zero, but all of its higher cumulants vanish. (Note that the higher //central moments// \int (x-\bar{x})^n f(x)\,\mathrm dx for //n// > 2 are not all zero: the odd ones vanish by symmetry, but the 4th, for example, is 3\sigma^4.) With these constraints, one can in principle determine the function.
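
As a numerical sanity check on this thread, here is a short Python sketch (not from the lecture; the mean and width values are chosen arbitrarily) that approximates the integrals above with a midpoint Riemann sum. It confirms the identity \sigma^2 = \bar{x^2} - \bar{x}^2 derived earlier on this page, and also Yuichi's point that the higher central moments of a Gaussian do not vanish: the 3rd is zero by symmetry, but the 4th comes out to 3\sigma^4.

```python
import math

def gaussian(x, mu, sigma):
    # Normalized Gaussian density with mean mu and standard deviation sigma.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def integral(g, lo=-40.0, hi=40.0, steps=100000):
    # Midpoint Riemann sum of g over [lo, hi]; the Gaussian tails beyond
    # this range are negligible for the parameters used below.
    dx = (hi - lo) / steps
    return sum(g(lo + (k + 0.5) * dx) for k in range(steps)) * dx

mu, sigma = 1.0, 2.0  # arbitrary illustrative values
f = lambda x: gaussian(x, mu, sigma)

norm   = integral(f)                               # int f dx           ~ 1
mean   = integral(lambda x: x * f(x))              # int x f dx         ~ mu
meansq = integral(lambda x: x * x * f(x))          # int x^2 f dx
var    = meansq - mean ** 2                        # x^2-bar - xbar^2   ~ sigma^2
m3     = integral(lambda x: (x - mu) ** 3 * f(x))  # 3rd central moment ~ 0
m4     = integral(lambda x: (x - mu) ** 4 * f(x))  # 4th central moment ~ 3 sigma^4

print(norm, mean, var, m3, m4)
```

With mu = 1 and sigma = 2, this prints values close to 1, 1, 4, 0, and 48, matching \sigma^2 = \bar{x^2} - \bar{x}^2 and the 3\sigma^4 fourth moment.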