===== Sept 14 (Mon) Chap 1 to Chap 2, up to section 2? ===== **Return to Q&A main page: [[Q_A]]**\\ **Q&A for the previous lecture: [[Q_A_0911]]**\\ **Q&A for the next lecture: [[Q_A_0916]]** **If you want to see lecture notes, click [[lec_notes]]** **Main class wiki page: ** [[home]]

We will wrap up Chapter 1 this Monday and move on to Chapter 2. I am hoping to move fairly fast through Chapter 2, since I am assuming that the infinite square well, etc., are familiar to you from the Modern Physics class. If I am wrong about this, please let me know. If you want me to move slower because you are not sure about some of the material presented, please let me know that, too. If you spend time coming up with specific questions that would help you understand the material better, that would be great. //Yuichi//

==== Can 10:28 09/11/09 ====

Has anyone come up with an explanation for the question raised at the end of Friday's lecture? How do we interpret the physical meaning of eigenvalues? I understand how it works mathematically and how to solve for them. What kinds of physical quantities have eigenvalues: the wave function, energy, momentum? In the summation \sum_{n=1}^{\infty} c_n\psi_n, is \psi_n an eigenfunction?

===spillane 11:40 9/12===

"Eigen" means characteristic. To be an eigenfunction, the function and its derivatives must be single valued, continuous, and finite. That being said, consider the time-independent Schroedinger equation, which is obtained from separation of variables. The eigenfunction is ψ(x) and the eigenvalue is E, so operating on an eigenfunction returns the same eigenfunction multiplied by its eigenvalue. The wave function seen in this equation is built from a time-independent solution via Ψ(x,t)=ψ(x)exp(-iEt/ħ), where ψ(x) is the space dependence of the solutions Ψ(x,t) to the Schroedinger equation. Or, quoting: "A particular quantum mechanical system is described by a particular (time-independent) potential energy function. The Schroedinger equation for this potential is also time independent, and acceptable solutions exist only for certain values of the energy (the eigenvalues) of the given potential. Corresponding to each eigenvalue is an eigenfunction ψ(x), which is a solution to the time-independent Schroedinger equation. For each eigenvalue there is also a corresponding wave function Ψ(x,t), all of which have the same quantum numbers. Since the equation is linear in the wave function, linear combinations of these functions are also solutions, //i.e.// \sum_{n=1}^{\infty} c_n\psi_n. The time-independent Schroedinger equation is also linear, but an arbitrary linear combination of different solutions satisfies it only if they all have the same E (eigenvalue)." (ref. //Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles//, 2nd edition, by Robert Eisberg and Robert Resnick, ch. 5 and 6)

===Yuichi 22:45 9/12/09===

I am not sure if this is answering Can's question, but it does address why finding the eigenfunctions of the Hamiltonian is important. In fact, when we talk about "solving the Schrodinger equation," it is the same as finding the eigenfunctions of the Hamiltonian and the associated eigenvalues, which are the allowed energy levels. When the spatial part of the wave function is an eigenfunction of the Hamiltonian, the associated energy is well defined, and therefore the time dependence can be written as exp(-iEt/ħ). This cannot be done for an arbitrary wave function, since there is no well-defined energy associated with it.

If a particle is found to be in some quantum state at time t=0 (set to zero for convenience without losing generality), this implies that the wave function is known at t=0, //i.e.// Ψ(x,0) is known. However, since this is often NOT an eigenfunction of the Hamiltonian, its energy is not a well-defined quantity, and as a result we cannot say that the wave function at a later time, Ψ(x,t), equals Ψ(x,0)exp(-iEt/ħ). We wouldn't know what value we should assign to "E".
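(A small numerical sketch of this point, my own and not from the text; the infinite square well with ħ = m = 1 and well width 1 is assumed. Applying the Hamiltonian to an energy eigenfunction just rescales it by E, while applying it to a superposition does not rescale it by any single number.)

```python
import numpy as np

# Infinite square well on [0, 1] with hbar = m = 1 (assumed units).
N = 1000
x = np.linspace(0, 1, N + 2)[1:-1]   # interior grid points
dx = x[1] - x[0]

def psi(n):
    """Normalized eigenfunction psi_n(x) = sqrt(2) sin(n pi x)."""
    return np.sqrt(2) * np.sin(n * np.pi * x)

def H(f):
    """Hamiltonian H f = -(1/2) f'' by finite differences, with f = 0 at the walls."""
    fp = np.concatenate(([0.0], f, [0.0]))
    d2 = (fp[2:] - 2 * fp[1:-1] + fp[:-2]) / dx**2
    return -0.5 * d2

# For an eigenfunction, (H psi_1)/psi_1 is the same number everywhere: E_1.
ratio = H(psi(1)) / psi(1)
print(ratio.min(), ratio.max())      # both close to pi^2/2 ~ 4.93

# For a superposition, (H f)/f varies wildly with x: no single "E" exists.
f = (psi(1) + psi(2)) / np.sqrt(2)
ratio2 = H(f) / f
print(ratio2.min(), ratio2.max())
```

This is why the phase exp(-iEt/ħ) can be attached to ψ₁ but not to the superposition: there is no one E to put in the exponent.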
However, when we solve the Schrodinger equation, meaning that we find the eigenfunctions, these eigenfunctions are "complete": any function can be expressed as a linear combination of them. //i.e.// we can always expand Ψ(x,0) in the form \Psi(x,0)=\sum_{n=1}^{\infty} c_n\psi_n. Once this is done, the time dependence of each term in the expansion can be determined: {\rm e}^{-iE_n t/\hbar}.

As you will see in Chapter 3, once you express your wave function as a linear combination of energy eigenfunctions, many calculations reduce to evaluating \int \psi^*_n H \psi_m, which can be considered the dot product of the \psi_n vector with the \psi_m vector after the latter is "twisted" by the Hamiltonian operator. But since the \psi_m vector is an eigenfunction of the Hamiltonian, __its direction does not change after the "twist."__ (Usually, operators such as the Hamiltonian, angular momentum, etc. rotate, flip, or change the length of the wave function vector. But when the vector is an eigenvector of the operator, there is no rotation; just the length of the vector changes.) Because it turns out that the \psi_n and \psi_m vectors are orthogonal (when //m// and //n// are not the same), the above "dot product" reduces to zero unless //m// and //n// are the same. So these calculations will often be simplified once wave functions are expanded as a linear combination of energy eigenfunctions.

==== Dagny 11:46 09/11/09 ====

Can someone please explain the difference between a poorly defined wave and a single pulse?

===spillane 12:40 9/12===

If I understand the question, I believe this comparison is shown on page 19: Figure 1.7 shows a wave that has a well-defined wavelength but no clean position, and Figure 1.8 shows a pulse which has a fairly definite position but a poorly definable wavelength.

===The Doctor 21:50 9/14/09===

You could probably say that a single pulse is a poorly defined wave in that its wavelength is ill-defined.
But then you would also be saying that a regular periodic wave is a poorly defined wave in that you don't know its position. I actually don't remember seeing "poorly defined wave" used anywhere, and it may be an unimportant definition. Mostly you want to be looking at the poorly defined wavelength/position tradeoff.

==== Dark Helmet 12:33am 09/13 ====

Quick thought on expectation values: at the end of page 14 and the start of page 15, it is stated that the expectation value is absolutely NOT the average you get if you measure the position of one particle over and over again; as long as the time scale is short enough, repeated measurements will return the same position. However, the expectation value IS the average if you measure many particles all with the same wave function. My question, then: if the time scale for the repeated measurements were sufficiently long, would the average of the one particle's measurements then match the expectation value?

=== Zeno 9/13 10:45 ===

Good question. Griffiths declares that averaged measurements on a single particle won't yield the expectation value (but doesn't specify //for all time intervals//). Consider an example: a particle lies somewhere between 0 and 10, with all points equally probable and expectation value 5. Suppose your first measurement of the //non-stationary// particle shows its location at 9. Griffiths says that if you measure it a //very// short time later, say instantaneously later, you'll get 9 again. A slightly longer time interval might allow that single particle to move to position 8, and longer time intervals might allow it to move anywhere else in the interval. For repeated measurements of a //continuously moving// particle, I would say that over a long enough time interval between a large number of measurements, you would obtain an average position very close to the expectation value.
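(A toy simulation of this argument, my own sketch rather than anything from the text: a particle whose position on [0, 10] decorrelates over a timescale tau, measured either rapidly on one particle or once each on many particles. The model and the name `trajectory` are invented for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(n_steps, dt, tau):
    """Positions of one particle sampled every dt; the particle is
    re-randomized on [0, 10] once the elapsed time exceeds tau."""
    pos = rng.uniform(0, 10)
    out = []
    t_since = 0.0
    for _ in range(n_steps):
        out.append(pos)
        t_since += dt
        if t_since >= tau:              # particle has had time to "move anywhere"
            pos = rng.uniform(0, 10)
            t_since = 0.0
    return np.array(out)

tau = 1.0
fast = trajectory(10_000, dt=0.001, tau=tau)   # measurements much faster than tau
slow = trajectory(10_000, dt=10.0, tau=tau)    # long waits between measurements
ensemble = rng.uniform(0, 10, 10_000)          # one measurement each on many particles

print(fast.mean())      # mostly repeats of a few positions; can land far from 5
print(slow.mean())      # close to 5, as argued above
print(ensemble.mean())  # close to 5: the expectation value <x>
```

The fast-sampling average is built from only a handful of independent positions, which is exactly why rapid repeated measurements on one particle need not approach the expectation value.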
A non-moving particle, or short intervals, would definitely skew the results: you could average a trillion measurements and still be very far from the expectation value (obviously so for a stationary particle or infinitesimal measurement intervals). The main benefit I see in measurements on multiple particles with the same energy level is that moving versus stationary particles, and long versus short time intervals between measurements, don't matter. Each separate particle can be anywhere in the interval at any point in time, and even infinitesimal measurement intervals could easily yield values on opposite sides of the interval (whereas on a single particle all of these measured values would be the same). Thus, statistically, you are essentially guaranteed a "random" spread in position measurements. In conclusion, to make a long answer endless, I would support the idea that long enough time intervals between each of a large number of measurements on a non-stationary particle would yield an average very similar to the expectation value.

==== Schrodinger's Dong 4:50am 09/13 ====

Can anyone tell me how Griffiths goes from a_{j+2} \approx \frac{2}{j}a_{j} to a_j \approx \frac{C}{(j/2)!}, where C is some constant, at the bottom of page 53 and the top of page 54? Thanks! "ya, ya, that's right po, I been Schrodinger's Dog"

=== East End 10:20 am 9/14 ===

I can't look at it now (maybe after school today), but if you have a book on discrete math, look up recurrence relations. You can go from a recurrence relation to a function that gives you the nth (here jth) term without too much difficulty. A quick sketch: write j = 2k for the even terms, so the relation becomes a_{2k+2} \approx \frac{2}{2k}a_{2k} = \frac{1}{k}a_{2k}. The guess a_{2k} \approx \frac{C}{k!} satisfies this for large k, since \frac{1}{k}\cdot\frac{C}{k!} = \frac{C}{k\cdot k!} \approx \frac{C}{(k+1)!}. Substituting k = j/2 back gives a_j \approx \frac{C}{(j/2)!}. I'll look more into it tonight.

==== Esquire 3:07pm 09/13 ====

The focus of Chapter 2 of the text is the time-independent Schrodinger equation. A quick allusion is made to the time-dependent factor \phi(t) of solutions to the Schrodinger equation, and then the text continues into a discussion of the significance of the time-independent factor.
What is the significance of the time-dependent factor?

=== John Galt 15:00 9/14/09===

From Wikipedia:

i\hbar {\partial \over \partial t}\Psi=-{\hbar^2 \over 2m}\nabla^2\Psi + V(x)\Psi

This is the **time-dependent Schrödinger equation**. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting:

E\rightarrow i\hbar {\partial\over \partial t} \;\;\;\;\;\; p\rightarrow -i\hbar {\partial\over \partial x}

Schrödinger studied the standing-wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:

\Psi(x,t) = \psi(x) e^{- iEt / \hbar }

Substituting, the time-dependent equation becomes the standing-wave equation:

{E}\psi(x) = - {\hbar^2 \over 2m} \nabla^2 \psi(x) + V(x) \psi(x)

which is the original **time-independent Schrödinger equation**.

----

This doesn't entirely answer your question... also, it seems the majority of the chapter deals with the time-dependent version...

====Andromeda 15:34 9/13/09====

I don't understand the first paragraph of page 29... why doesn't the exponential part cancel in the general solution like it does in the separable solutions?

=== Zeno 9/13 10:00pm? ===

Across pages 26 and 27, Griffiths shows that //stationary states// have a definite total energy which is constant in time. The exponential factor contains only the time and the energy; for a stationary state the energy is a single constant, so the phases cancel in |\Psi|^2. The key is that this only works for stationary states, which are the separable solutions. Remember that separable solutions represent a //very small// class of solutions to the Schrodinger equation.
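(A quick numerical illustration of the cancellation point, my own sketch and not from Griffiths; the infinite square well with ħ = m = 1 and well width 1 is assumed. For a single stationary state |Ψ|² is frozen in time, while for a two-term superposition the cross term oscillates at (E₂-E₁)/ħ and the density sloshes.)

```python
import numpy as np

# Infinite square well, hbar = m = 1, width 1 (assumed units).
x = np.linspace(0, 1, 500)

def E(n):
    return (n * np.pi) ** 2 / 2          # energy levels E_n = n^2 pi^2 / 2

def psi(n):
    return np.sqrt(2) * np.sin(n * np.pi * x)

def density(t, c1, c2):
    """|Psi(x,t)|^2 for Psi = c1 psi_1 e^{-i E1 t} + c2 psi_2 e^{-i E2 t}."""
    Psi = (c1 * psi(1) * np.exp(-1j * E(1) * t)
           + c2 * psi(2) * np.exp(-1j * E(2) * t))
    return np.abs(Psi) ** 2

t_half = np.pi / (E(2) - E(1))           # (E2 - E1) t = pi: cross term flips sign

# Single stationary state: the phase cancels, density unchanged.
d_single = np.max(np.abs(density(t_half, 1, 0) - density(0, 1, 0)))

# Equal superposition: the exponentials do NOT cancel; the cross term moves.
c = 1 / np.sqrt(2)
d_super = np.max(np.abs(density(t_half, c, c) - density(0, c, c)))

print(d_single)   # ~ 0 (round-off only)
print(d_super)    # about 3: the density has sloshed to the other side
```

This is exactly the cross-term effect: the surviving factor cos((E₂-E₁)t/ħ) is what refuses to cancel in the general solution.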
There are potentially infinitely many more solutions that //aren't// separable, and for these the energy is not required to be definite and constant in time; with more than one energy present, the exponential factors will not exactly cancel. The exact solutions we obtain by separation of variables come from a highly specific and restrictive class of problems, representing a very small portion of the potential cases there are. It's a lot like the division problems in arithmetic that early mathematicians restricted to whole numbers. They could easily solve exactly those problems expressed in whole numbers, but fractional numbers posed conceptual and technical issues. This greatly simplified their mathematics compared with the full range of arithmetical problems involving all rational numbers or all real numbers (or the real and imaginary numbers we use today), but it also greatly restricted the problems they could analyze. Just as it took more analytical development to work with all real numbers, right now we have exact solutions in QM only for the separable cases, until we develop our mathematics to correctly handle more classes of solutions. (I hope that description and analogy helps.)

=== Anaximenes - 22:35 09/13/09 ===

There are cross terms:

\left(ax+by+cz\right)\left(\frac{a}{x}+\frac{b}{y}+\frac{c}{z}\right)=a^2+ax\left(\frac{b}{y}+\frac{c}{z}\right)+b^2+by\left(\frac{a}{x}+\frac{c}{z}\right)+c^2+cz\left(\frac{a}{x}+\frac{b}{y}\right)\\ \neq a^2+b^2+c^2

Here x, y, and z play the role of the time-dependent factors in the separable solutions, and a, b, and c the constant and position-dependent factors. The sum is the general solution.

====Green Suit 09/13 ====

I have a question regarding separable solutions. On page 28 it states, "It is simply a matter of finding the right constants \(c_1, c_2 . . .\) so as to fit the initial conditions for the problem at hand."
Can anyone give an example of a real problem to better illustrate this point?

=== Captain America 09/14 10:36 ===

I think the easiest way to think about this concept is to first remember that in quantum mechanics everything is treated as a wave. What Griffiths is saying with this sentence is that since a Fourier series can describe any wave (or, by Dirichlet's theorem, //any// function, p. 34), choosing the right constants \(c_1, c_2 . . .\) must likewise let you fit any initial condition. Relating it to Fourier series should make it a bit clearer. A real problem that hopefully helps you visualize it: describing a specific wave shape, say a square wave, as a sum of many sinusoidal waves. This is a real-world situation for sound waves, and it works for describing the quantum mechanical view of particles as well.

===vinc0053 09/20 17:35===

I like to think of the simplest case, where you know the particle is only in the ground state. Then c_1 equals 1 and all the other constants equal zero. You can then add the next state by, for example, letting c_1 and c_2 hold values reflecting the states' proportional make-up, with all other constants equal to zero.

====John Galt 11:02 9/14/09====

What causes the uncertainty in the position of a photon? Since the wave function (edit: probability function) does not spread out over time, I would assume that it is the same as it was at its original emission. Does the uncertainty reflect an inability to measure the time of emission properly, or just the lack of resolution of the devices measuring the position of photons? I understand that position is uncertain due to the velocity of the photon, but it seems to me that since the function does not spread out over time, an accurate enough recording of the time of emission would lessen the uncertainty in the position of the photon (narrowing the probability function) at any given point. Is this a reasonable assumption?
=== Spherical Chicken 14:13 9/14/09===

I think it would be fair to say that if photons had a set energy and could not be excited to higher states, they would not be terribly uncertain, since we would not change them (by adding energy) in observing them; viewing a photon by using a photon wouldn't add energy to it. However, it is my understanding that photons come in different energies even though they have a definite speed. So although we don't, per se, change a photon's velocity, we can change its energy, and hence its momentum. Thus its position and momentum are uncertain, as are its energy and time. Unlike an electron, whose velocity changes with the photon we observe it with, a photon doesn't change speed (since //c// is constant), but its energy does change. Is this correct? Or am I also misunderstanding the concept?

==== time to move on ==== It's time to move on to the next Q_A: [[Q_A_0916]]