Why is it so important in quantum mechanics for a matrix to be diagonalizable in order for its eigenvectors to span the space? Can you perhaps give me a proof that diagonalizability implies that the eigenvectors span the vector space? (Griffiths didn't show this.)
About the relationship between diagonalization and spanning: Griffiths says on page 452 in Appendix A, just below equation A.80, that “…a matrix is diagonalizable if and only if its eigenvectors span the space.” This is a feature of linear algebra in general, not just as applied to quantum mechanics. I or someone else could probably dig up a formal proof if you want, but think of it this way: each vector in a set that can't be expressed as a linear combination of the other vectors in the set represents a dimension. Your set of vectors spans the space if and only if the set describes the same number of dimensions as the space. The eigenvectors of a diagonalized matrix clearly can't be expressed as linear combinations of one another, and the number of eigenvectors equals the number of dimensions of the space.
Also, this is for all you math-crazy folks again: when does a partial derivative turn into an ordinary derivative? Is there a rule for when this happens? A link would be fine on this.
Thanks! -It was the dog who killed the cat…-Schrödinger's Dog
I'm not sure about the first question, but a partial derivative is a derivative of a multidimensional function with respect to a single variable. This means you examine the function as only one variable changes. The total derivative of a multivariable function means taking the derivative, but allowing the other variables to change also. If f is a function of x and y, then the total derivative with respect to x would be: <math>\Large \frac{d f}{d x}=\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y} \frac{\partial y}{\partial x}</math> For a function which depends only on a single variable, only the first term in the above equation exists.
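If it helps, here is a minimal sympy sketch of that relation (the particular f and the dependence y(x) are just made-up examples, not from the text):
<code python>
# Sketch: total vs. partial derivative using sympy (names are illustrative).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)      # y depends on x
f = x**2 + 3*y               # an example f(x, y)

# The total derivative df/dx picks up both terms:
print(sp.diff(f, x))         # 2*x + 3*Derivative(y(x), x)
# i.e. df/dx = (partial f / partial x) + (partial f / partial y) * dy/dx
</code>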
Yuichi - The above relation makes sense when f depends on both x and y, but some additional consideration introduces a relation between x and y, i.e. <math>y = y(x)</math>.
I thought it should be just <math>\Large \frac{d f}{d x}=\frac{\partial f}{\partial y}\frac{\partial y}{\partial x}</math>. There is no more <math>\frac{\partial f}{\partial x}</math> on the right side of the equal sign.
Yuichi - This reduction is part of what Poit0009 is saying but in a special case where f does not have an explicit dependence on x. i.e. f only depends on y and y depends only on x.
Hmmm, ok, but I don't see how this works in equation 1.21. When the derivative was outside the integral, it was ordinary, and when it came inside the integral, it became a partial derivative. There are plenty of other examples where Griffiths does this.
-“Cats & Dogs”-Schrödinger's Dog
Sch's Dog, in 1.21, it's because the derivative passes into the integral. This is legit if you look at limit laws (recall that derivatives and definite integrals are really limits). On the left, you do the integral before differentiating. You would end up with the derivative of a function of t, since you integrated with respect to x. The x's are gone - evaluated. On the right hand side, though, the derivative is inside, and applies to a function of more than one variable, so it's a partial derivative.
I think you are just thinking about this too hard. If the function you are differentiating is a function of one variable, it's a total derivative. If it's a function of more than one variable, it's a partial derivative. Griffiths also explains this in the paragraph immediately after 1.21.
Since the momentum operator is complex, does that mean that we never talk about momentum itself in QM? <math> {p }={-i\hbar }\frac{\partial }{\partial x}</math>. Same question for total energy: <math> {E }={i\hbar }\frac{\partial }{\partial t}</math>. Due to the uncertainty principle, is the expectation value the one that we care about? I interpret it this way intuitively, but was not sure about it. Can anyone come up with a more rigorous explanation?
We can talk about momentum/energy, but since there is an uncertainty to it, we have to describe it through an expectation value. This expectation value tells us where most of the action is happening, in a statistical sense. In the book they say that you shouldn't talk about the expectation value as an “average”, but as an average “repeated over and over again, with identical ensembles of particles”; I take it loosely to be the “average”, and think, in a sense, that the expectation value tells us what the momentum/energy is in a non-classical way.
This may not be a rigorous argument though…give me a couple days…have to work on MXP :P.
Just because the operator is complex does not mean that it is not useful to us. The wave function itself is, in general, a complex function, but it contains all of the information that we can extract from the state, so it must be useful! An operator acting on the wave function represents the measurement of that state, and in order for these measurements to make sense, we need to get real values, not complex ones. You are right, it would not make sense to have complex values for the momentum of a state. So, in order to get real values, we require that all observables (operators) in quantum mechanics be hermitian; they must have real eigenvalues. The Hamiltonian, something that we are familiar with, is an operator that yields energy eigenvalues, and these are always real because we require the Hamiltonian to be hermitian. Also, the i in the momentum operator is there specifically to make it hermitian, as when you try to use the operator <math>\frac{d}{dx}</math> by itself, it is not hermitian, so it cannot be a possible observable.
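To see the hermiticity point concretely, here is a rough numerical sketch (my own illustration, not from the text): discretize d/dx as a central-difference matrix with periodic boundaries, set ħ = 1, and compare each operator with its conjugate transpose.
<code python>
# Sketch: -i*d/dx is hermitian, while d/dx alone is not (hbar = 1 here).
import numpy as np

N, dx = 200, 0.1
# Central-difference derivative matrix with periodic boundary conditions
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)

p = -1j * D                              # momentum operator with hbar = 1
print(np.allclose(p, p.conj().T))        # True  -> hermitian
print(np.allclose(D, D.conj().T))        # False -> d/dx by itself is not
</code>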
In discussion, we looked at a particular wave function, and after normalization, obtained <math>A=\sqrt{\frac{3}{b}}</math>, if I recall. Our TA pointed out that it is actually more correct to write <math>A=\sqrt{\frac{3}{b}}e^{i\theta}</math> (right?).
My question is this: how can we come to this result without knowing it ahead of time? How can this come out of the math?
I don't think he said <math>A=\sqrt{\frac{3}{b}}e^{i\theta}</math> is more correct…I think he was explaining that if you had a wave function with that normalization factor and you squared it, the exponential part would cancel…
What I understood was that we make an assumption when we write <math>A=\sqrt{\frac{3}{b}}</math>: that our normalization constant isn't complex. It isn't necessarily “more” correct, because there is no reason to assume the solution should be complex, as the part of the equation we were given didn't have any complex numbers. I think he was actually just showing that we should be aware of the assumption we made, and know that if we didn't want to make that assumption we should include the phase factor.
When <math> |z|^2 = 1 </math>, the answer, if z is real, is <math> z = \pm 1</math>, as you may be familiar with. If z is complex, the answer is <math>z={\rm e}^{i\theta}</math>, where <math>\theta</math> is any real number. But if you don't know (or remember) this answer already, you can think of it in the following way.
If you express z as <math>z=x+iy</math>, with x and y real, <math> |z|^2 = 1 </math> leads to <math> x^2 + y^2 = 1 </math>. This represents a unit circle centered at the origin in the complex plane. So if you express z in polar form (<math>z=r{\rm e}^{i\theta}</math>), the radius r is 1, while there is no constraint on the polar angle, and therefore <math>z={\rm e}^{i\theta}</math>.
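A quick numerical illustration of why the overall phase doesn't matter for normalization (b = 2 and the theta values are arbitrary choices):
<code python>
# Sketch: the phase exp(i*theta) drops out of |A|^2, so any theta normalizes
# the wave function equally well.
import numpy as np

b = 2.0
for theta in [0.0, 0.7, np.pi / 3, 2.5]:
    A = np.sqrt(3 / b) * np.exp(1j * theta)
    print(theta, abs(A)**2)      # always 3/b = 1.5, independent of theta
</code>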
My question from chapter 1 is more about the definitions on page 4; it says “observations not only disturb what is to be measured, they PRODUCE it…” and that “the particle wasn't really anywhere.” Photons from the measurement device could disturb the particle we are measuring, but do they really produce it in some cases also?
With regard to Andromeda's question about chapter 1, according to the Copenhagen interpretation of quantum mechanics, measurements do produce the states that you observe. Before the measurement, the state is in a superposition of all possible eigenstates that could be yielded from a measurement. After measurement takes place, the state collapses into one eigenstate. There is no way to tell ahead of time with certainty which eigenstate will be observed, unless the state was cooked up ahead of time, in which case a measurement of the same observable will, with certainty, yield the eigenstate that you prepared. Also, after making a measurement and collapsing the state, if you try to measure the state with an incompatible observable with respect to what you just measured, you will destroy the information of the state.
I have never understood why the probability density function is defined the way it is. Is it just an arbitrary choice to have the integral of the squared wave function be the probability density function or is there deeper reasoning behind it?
This may help: http://en.wikipedia.org/wiki/Probability_amplitude
Yuichi - 22:00 9/9/09
I think chavez is asking why it was sensible to speculate that <math>|\psi|^2</math> represents the probability density, and what theories and/or experiments support this speculation.
It would make sense that some sort of squaring would be necessary. <math>\psi</math> is a complex function, so some sort of operation should be required to find a physical probability.
I was questioning the same thing and went back to our freshman quantum book. Section 5-3 of that book (Born's interpretation of the wave function) says, and I quote: “Since the measurable quantity probability density is real and non-negative, whereas the wave function is complex, it is obviously not possible to equate probability density to wave function. However, since the wave function squared is always real and non-negative, Born was not inconsistent in equating it to probability density.”
I was reading the same part of that text, Andromeda. Another part I found interesting here was: “Since the motion of a particle is connected with the propagation of an associated wave function, these two entities must be associated in space. That is, the particle must be at some location where the waves have an appreciable amplitude.” It goes on to say that <math>P(x,t)</math> must have an appreciable value where <math>\Psi(x,t)</math> has an appreciable value.
For probability density functions in general, there are two main requirements:
1. A probability density function <math>P(x)\geq 0</math> for all x.
2. <math>\int_{-\infty}^{\infty}P(x)\,dx = 1</math>. This means that P(x) has to go to 0 as x goes to <math>\pm\infty</math>.
Since <math>|\Psi|^2</math> fulfills both of these (after normalization), it can be thought of as a probability density function. It doesn't appear that there is any direct evidence to say that it is the p.d.f. of the wavefunction, but there seems to be a lot of indirect evidence that pushes us towards that conclusion.
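Here is a small numerical sketch of those two requirements, using an arbitrary example wave function (not one from the text):
<code python>
# Sketch: normalize |psi|^2 on a grid and check the two requirements above.
import numpy as np

x, dx = np.linspace(-10, 10, 2001, retstep=True)
psi = np.exp(-x**2 / 2)            # un-normalized example wave function
P = np.abs(psi)**2
P /= np.sum(P) * dx                # normalize so the integral is ~1

print(np.all(P >= 0))              # requirement 1: P(x) >= 0 everywhere
print(np.sum(P) * dx)              # requirement 2: integrates to 1
print(P[0], P[-1])                 # ~0 at the edges (x -> +/- infinity)
</code>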
This isn't exactly on top of what chapter 1 discusses, but it's something I was thinking about. We mentioned the exponentials with complex operators that may or may not be discussed in this class, since they are complex and don't always have a hold in physical interpretations of energy and momentum, etc. It seems like there are so many times we ignore the negative solutions or the complex solutions because they don't have any 'physical interpretation' in the real world. Then someone like Feynman or Dirac comes along and applies the negative solutions to something else, and voila: what mathematicians and physicists had ignored for so long was sitting right under their noses, waiting for someone to find the positron or opposite time directions, etc. What are we sitting on when we ignore the complex solutions to the wave equation? Different types of space or time? — Physicist: “I found out why your chicken is sick – but it requires a spherical chicken in a vacuum”
I don't have enough command in quantum mechanics to say anything beyond the following: unless you can come up with an interpretation for complex eigenvalues, you are likely out of luck. I think it would be worth discussing in class though, as I would be interested in what a senior physicist, or somebody who just knows more than I do, would say.
A question regarding the uncertainty principle: on page 19, Griffiths writes that “a spread in wavelength corresponds to a spread in momentum,” since <math>p=\frac{h}{\lambda}</math>. Based on that equation, it seems to me that a larger spread in wavelength would correspond to a smaller momentum spread. So, as seen in Figure 1.8, a wave with a well-defined position has a large uncertainty in <math>\lambda</math> and, it would seem, a small uncertainty in momentum. This clearly contradicts the uncertainty principle, so what am I overlooking?
As far as my understanding of the uncertainty principle goes, “The more precise a wave's position is, the less precise is its wavelength, and vice versa.” So in Figure 1.8 there is a well-defined position, and consequently a poorly-defined wavelength (since it isn't periodic). So I think it has a small uncertainty in position and a large uncertainty in momentum, which agrees with the conclusions the uncertainty principle makes.
Recall that the position-space and momentum-space wave functions are related by a Fourier transform. Thus, you can think of a wide distribution in x-space corresponding to a narrow distribution in p-space, and vice versa. You can think of it in this way or in terms of what the uncertainty principle dictates. Another situation to consider is the following: “Which state has the least uncertainty, i.e. least spread in position and/or momentum?” The answer is a Gaussian, which makes sense when you try to apply the Fourier transform to it. I hope this answers your question.
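A rough numerical sketch of that Fourier-transform picture (the Gaussian widths, and ħ = 1 so that p and k coincide, are arbitrary choices for illustration): a narrow packet in x comes out wide in k, and vice versa.
<code python>
# Sketch: widths (standard deviations of |f|^2) of a Gaussian packet in
# x-space and of its Fourier transform in k-space; their product stays ~1/2.
import numpy as np

x, dx = np.linspace(-50, 50, 4096, retstep=True)
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dk = k[1] - k[0]

def width(f, s, ds):
    w = np.abs(f)**2
    w /= np.sum(w) * ds
    mean = np.sum(s * w) * ds
    return np.sqrt(np.sum((s - mean)**2 * w) * ds)

for sigma_x in (0.2, 2.0):
    psi = np.exp(-x**2 / (4 * sigma_x**2))
    phi = np.fft.fftshift(np.fft.fft(psi))
    print(sigma_x, width(psi, x, dx), width(phi, k, dk))  # narrow <-> wide
</code>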
There is an important distinction between the variables in the DeBroglie hypothesis to which you first refer, <math>p=\frac{h}{\lambda}</math> , and the Uncertainty Principle. The Uncertainty Principle states that you may not know two independent properties of a particle that fully define the particle's position and speed, and furthermore that you can only narrow the true values down to a certain minimum. The Uncertainty Principle relates the spread, or standard deviation, of variables whereas DeBroglie's formula relates the true values of the variables. The DeBroglie hypothesis represents momentum as a function of wavelength, <math>p(\lambda)</math>; any time there is uncertainty in wavelength, the error “propagates” through to momentum because the value and spread of momentum depends on the value and spread of the wavelength. While the nominal value of momentum varies inversely with wavelength, the “spread” of uncertainty varies directly. If you plug in simple numbers you can check this: if you know the wavelength is exactly–say–3, then p = h/3; however, if you can't narrow the spread of the wavelength to more than a value of “<math>\lambda</math> is somewhere between 2 and 4,” then your uncertainty in p is directly increased, and you can't know p more precisely than some value between h/4 and h/2. Therefore increasing uncertainty in <math>\lambda</math> increases uncertainty in momentum, p. (I hope that explanation made sense)
To go along with what Zeno was saying, I went ahead and used the error propagation formula:
<math>\sigma_{f(x, y)}^2=\sigma_{x}^2 \left(\frac{\partial f(x, y)}{\partial x}\right)^2 + \sigma_{y}^2 \left(\frac{\partial f(x, y)}{\partial y}\right)^2</math>

With <math>p=\frac{h}{\lambda}</math>, this gives

<math>\sigma_{p}^2=\sigma_{\lambda}^2\left(\frac{-h}{\lambda^2}\right)^2</math>
As you can see, the uncertainty in p is proportional to the uncertainty in λ.
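A quick numerical check of that proportionality (h = 1 and the wavelength numbers are arbitrary):
<code python>
# Sketch: propagated uncertainty in p = h/lambda, plus a Monte Carlo check.
import numpy as np

h, lam, sigma_lam = 1.0, 3.0, 0.5
sigma_p = sigma_lam * h / lam**2              # from the formula above
print(sigma_p)                                # ~0.056; doubles if sigma_lam doubles

# Cross-check: spread of h/lambda when lambda is drawn from N(3, 0.5^2)
samples = np.random.default_rng(0).normal(lam, sigma_lam, 100_000)
print(np.std(h / samples))                    # close to the value above
</code>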
I just need something clarified. In the text (page 17) they describe the momentum operator p as <math>\frac{\hbar}{i}\frac{\partial}{\partial x}</math>, but on the front cover of the book, it shows i in the numerator instead of the denominator. I understand the change in notation between partial derivatives and the del operator, but is there any particular significance to the use of <math>\frac {1}{i}</math> instead of i?
By definition, <math>\frac {1}{i}</math> is equal to -i; there is a negative sign in the equation on the front cover.
That also had me confused at first, but since i is the square root of -1 it doesn't really matter whether it is placed in the denominator or numerator. Once you square i you get back -1.
Pluto 4ever, maybe it doesn't matter whether i is in the denominator or numerator in the specific case where you are squaring it, but its position does matter in general, since, for example, <math>\frac {1}{i}</math> * i gives a different result (1) than i * i (-1). Perhaps that is what you meant; I just wanted to clarify.
Can anyone explain what happened to the cross terms in the forms given for <math>\sigma^2</math> in lecture Sept. 9th? Thanks.
The equation we have for the standard deviation is <math>\sigma^2=\int(x-\bar{x})^2P(x)dx</math>. Expand the polynomial to get <math>\int(x^2-2x\bar{x}+\bar{x}^2)P(x)dx=\int x^2P(x)dx-\int 2x\bar{x}P(x)dx+\int \bar{x}^2P(x)dx</math>.
Then we can move the constants to the outside of the integrals. An important point here is to remember that <math>\bar{x}</math> is not function of x, so it can also be moved outside of the integrals.
So we have <math>\int x^2P(x)dx - 2\bar{x}\int xP(x)dx + \bar{x}^2\int P(x)dx</math>.
Assuming all the integrals are being evaluated from <math>-\infty</math> to <math>\infty</math>, <math>\int P(x)dx = 1</math> (since P(x) is a normalized probability density), and <math>\int xP(x)dx = \bar{x}</math>.
The final result is <math>\int x^2P(x)dx - 2\bar{x}^2 + \bar{x}^2 = \int x^2P(x)dx - \bar{x}^2</math>.
Recall that <math>\int x^2P(x)dx</math> is the expectation value of <math>x^2</math> (written <math>\bar{x^2}</math>), so we can also write this result as <math>\sigma^2 = \bar{x^2} - \bar{x}^2</math>.
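If you want to convince yourself numerically, here is a small sketch with an arbitrary sample distribution:
<code python>
# Sketch: check <x^2> - <x>^2 = sigma^2 on a random sample.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)   # variance should be ~4

mean_x = x.mean()
mean_x2 = (x**2).mean()
print(mean_x2 - mean_x**2)    # ~4.0
print(x.var())                # same number: sigma^2 = <x^2> - <x>^2
</code>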
In chapter 1 it mentions on various occasions that the measurement of a wave or particle forces it to take a position or momentum, depending on the measurement. It then says that repeated measurements in quick succession would yield the same result, but if you wait long enough the wave or particle will return to its original state. We will probably return to this again in the book, but looking at the Schrodinger equation given thus far, I am having troubles wrapping my head around where the return to original state is built into the equation.
Can anyone qualitatively explain this? I am sure we will cover it again later so just a quick summary would be appreciated.
I also have that question…I think the particle has a particular wave function that it is assumed to naturally reside at, but is it just the nature of the particle to want to return to its original state some time after it has been forced to resolve into a particular value, or is there something in the equation that forces it back?
This may just be a brain fart on my part, but could someone explain how one goes from Eq[1.29] to [1.31] (that's pg 15-16)? I guess I don't quite understand how one can “peel a derivative.” If there's an adequate link to answer this, I will be satisfied, thanks.
I've attempted to answer this brain fart and only arrived at my own. The partial derivative of x with respect to x is 1; then I used integration by parts on eq. 1.30 and arrived at something like 1.31. Can somebody show how integrating by parts on 1.30 leads to 1.31, or let me know if I have overlooked anything?
A similar step is seen between eqs. 1.25 and 1.26, only without the factor of x. I don't know where the partial derivative goes from eq. 1.25 to 1.26. Again, what have I overlooked?
For Eq. 1.30 to 1.31: you are only integrating by parts on the second term of eq. 1.30 to yield an integral that is identical to the first term (notice that the 1/2 is gone in 1.31). If you look at the footnote on page 16, the integration by parts is explained. As Griffiths states, the boundary term is zero because the wave function goes to zero at +/- infinity.
Going from Eq. 1.25 to 1.26: you integrate both sides of eq. 1.25 with respect to x. The integral of a partial derivative of something simply yields that something.
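For anyone who wants to see the integration-by-parts step on a concrete function, here is a sympy sketch using an arbitrary well-behaved choice for <math>|\Psi|^2</math> (not the one in the text):
<code python>
# Sketch: for a function that vanishes at +/- infinity,
# integral of x * d(rho)/dx equals minus the integral of rho (by parts).
import sympy as sp

x = sp.symbols('x', real=True)
rho = sp.exp(-x**2)                                   # example |Psi|^2
lhs = sp.integrate(x * sp.diff(rho, x), (x, -sp.oo, sp.oo))
rhs = -sp.integrate(rho, (x, -sp.oo, sp.oo))
print(lhs, rhs)                                       # both equal -sqrt(pi)
</code>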
Is the probability density for a wave function the same as a probability density function (http://en.wikipedia.org/wiki/Probability_density_function)?
On pg. 19 Griffiths touches on the uncertainty principle and explains the de Broglie formula <math>p={h\over{\lambda}}</math> with the statement “thus a spread in wavelength corresponds to a spread in momentum.” I guess I'm having a hard time understanding this statement given that momentum and wavelength are inversely related in the formula. Could someone explain?
See the question submitted by prest121 above and the associated discussion. It's the last question under 09/09/09. If that doesn't answer your question, maybe it will help you determine a more specific question.
I am just curious about how the Gaussian function was found by mathematicians. I tried this way: <math>\int x^2 f(x)\,\mathrm dx = \sigma^2</math>. But I don't know how to get the solution for f(x).
Really good question!! Well, you know: <math>\sigma^2= <x^2>-<x>^2=\int x^2 f(x)\,\mathrm dx</math>. By differentiating both sides and solving for f(x), you should get the Gaussian distribution. Can you give me the link that gives you the definition <math>\int x^2 f(x)\,\mathrm dx = \sigma^2</math>? I didn't explicitly solve for f(x) myself, but I think these are the right steps to getting f(x).
The equation you are suggesting to differentiate does not have dependence on any variable, so if you differentiate, you will get a zero on both sides. Note that the integral is definite integral, not indefinite one. So after the integral is done, it represents just a number.
With just the one constraint that Hardy presented, one cannot determine a function. There are not enough constraints to do so.
If I remember correctly, the Gaussian function is special in the sense that its mean (1st moment) and variance (2nd moment) can be non-zero, while all of the higher cumulants (the parts of the higher moments around the mean, <math>\int (x-<x>)^n f(x)\,\mathrm dx</math> with n > 2, that aren't already fixed by the mean and variance) are zero. With these constraints, one can in principle determine the function.
We will wrap up Chapter 1 this Monday and move on to Chap 2. I am hoping to move fairly fast through chap 2 since I am assuming that the infinite square well, etc., are familiar to you from the Modern Physics class. If I am wrong about this, please let me know. If you want me to move slower because you are not sure about some of the material presented, please let me know that, too. If you spend time coming up with specific questions, which would help you understand the material better, that would be great. Yuichi
Did anyone come up with an explanation for the question raised at the end of Friday's lecture? How do we interpret the physical meaning of eigenvalues? I understand how it works mathematically and how to solve for them. What kinds of physical quantities have eigenvalues: the wave function, energy, momentum? In the summation <math>\sum_{n=1}^{\infty} c_n\psi_n </math>, is <math>\psi_n</math> an eigenfunction?
Eigen means characteristic. To be an eigenfunction, the function and its derivatives must be single-valued, continuous, and finite. That being said, consider the time-independent Schroedinger eq., which was obtained from separation of variables. The eigenfunction is ψ(x) and the eigenvalue is E, so operating on an eigenfunction returns the eigenfunction multiplied by its eigenvalue. The ψ(x) in this equation is the space dependence of the full solution Ψ(x,t)=ψ(x)EXP(-iEt/ħ) to the Sch. eq., or
“A particular quantum mechanical system is described by a particular (time-independent) potential energy function. The Sch. eq. for this potential is also time independent, and acceptable solutions exist only for certain values of the energy (the eigenvalues) of the given potential. Corresponding to each eigenvalue is an eigenfunction ψ(x) which is a solution to the time-independent Sch. eq. For each eigenvalue there is also a corresponding wave function Ψ(x,t), all of which will have the same quantum numbers. Since the eq. is linear in the wave function, linear combinations of these functions will also be solutions, i.e. <math>\sum_{n=1}^{\infty} c_n\psi_n </math>. The time-independent Sch. eq. is also linear; therefore an arbitrary linear combination of different solutions will satisfy the equation only if they have the same E (eigenvalue).”
Ref.: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, 2nd edition, by Robert Eisberg and Robert Resnick, ch. 5 and 6.
I am not sure if this is answering Can's question, but it does address why finding the eigenfunctions of the Hamiltonian is important. In fact, when we talk about “solving the Schrodinger equation,” it is the same as finding the eigenfunctions and associated eigenvalues of the Hamiltonian, which are the allowed energy levels.
When the spatial part of the wave function is an eigenfunction of the Hamiltonian, the associated energy is well defined, and therefore, the time dependence can be written as EXP(-iEt/ħ). This cannot be done for arbitrary wave functions since there is no well defined energy associated with it.
If a particle is found to be in some quantum state at time t=0 (set to zero for convenience without losing generality), this implies that the wave function is known at t=0. Or Ψ(x,0) is known. However, since often this is NOT an eigenfunction of the Hamiltonian, its energy is not a well defined quantity, and as a result, we cannot say the wave function at a later time, Ψ(x,t), equals Ψ(x,0)EXP(-iEt/ħ). We wouldn't know what value we should assign to “E”.
However, when we solve the Schrodinger equation, meaning that we find the eigenfunctions, they are “complete”: any function can be expressed as a linear combination of these eigenfunctions, i.e. we can always expand Ψ(x,0) in the form <math>\Psi(x,0)=\sum_{n=1}^{\infty} c_n\psi_n </math>. Once this is done, the time dependence of each term in the expansion can be determined: <math>{\rm e}^{-iE_n t/\hbar}</math>.
As you will see in Chapter 3, once you express your wave function as a linear combination of energy eigenfunctions, many of the calculations reduce to calculating <math>\int \psi^*_n H \psi_m,</math> which can be considered the dot product of the <math>\psi_n</math> vector with the <math>\psi_m</math> vector after it has been “twisted” by the Hamiltonian operator; but since <math>\psi_m</math> is an eigenfunction of the Hamiltonian, its direction does not change under the “twist.” (Usually, operators such as the Hamiltonian, angular momentum, etc. rotate, flip, or change the length of the wave function vector. But when the vector is an eigenvector of the operator, there is no rotation; just the length of the vector changes.) Because it turns out that the <math>\psi_n</math> and <math>\psi_m</math> vectors are orthogonal (when m and n are not the same), the above “dot product” reduces to zero unless m and n are the same. So these calculations will often be simplified once wave functions are expanded as a linear combination of energy eigenfunctions.
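As a concrete sketch of this (assuming the familiar infinite square well with width a = 1; the initial state below is an arbitrary choice): the eigenfunctions are orthonormal, so the expansion coefficients are just overlap integrals.
<code python>
# Sketch: orthonormality of square-well eigenfunctions and expansion of an
# arbitrary initial state in them.
import numpy as np

a = 1.0
x, dx = np.linspace(0, a, 2001, retstep=True)
psi = lambda n: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

print(np.sum(psi(1) * psi(2)) * dx)    # ~0  (orthogonal)
print(np.sum(psi(2) * psi(2)) * dx)    # ~1  (normalized)

# Expansion coefficients c_n = <psi_n | Psi(x,0)> for an example initial state
Psi0 = x * (a - x)
c = [np.sum(psi(n) * Psi0) * dx for n in range(1, 6)]
print(c)                               # large for n = 1, ~0 for even n
</code>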
Can someone please explain the difference between a poorly defined wave and a single pulse?
If I understand the question, I believe this correspondence is shown on page 19. Figure 1.7 shows a wave that has no well-defined position, and Figure 1.8 shows a pulse which has a fairly well-defined position but a poorly definable wavelength.
Quick thought on expectation values: at the end of page 14 and the start of page 15, it is stated that the expectation value is absolutely NOT the average if you measure the position of a particle over and over again. As long as the time scale is short enough, repeated measurements will return the same position. However, the expectation value IS the average if you measured many particles all with the same wave function. My question, then: if the timescale between the repeated measurements were sufficiently long, would the average of the one particle's measurements then match the expectation value?
Can anyone tell me how Griffiths goes from <math> a_{j+2} \approx \frac{2}{j}a_{j} </math> to <math> a_j \approx \frac{C}{(j/2)!} </math>, where C is some constant, at the bottom of page 53 and the top of page 54?
Thanks!
“ya, ya, that's right po, I been Schrodinger's Dog”
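Not a derivation, but here is a quick numerical check (with C = 1, an arbitrary choice) that the claimed form is consistent with the recursion for large even j:
<code python>
# Sketch: a_j = C/(j/2)! gives a_{j+2}/a_j = 1/(j/2 + 1), which approaches
# 2/j for large j, matching the recursion relation quoted above.
from math import factorial

def a(j):                    # C / (j/2)! with C = 1, j even
    return 1 / factorial(j // 2)

for j in (10, 50, 200):
    print(j, a(j + 2) / a(j), 2 / j)   # the two ratios agree better as j grows
</code>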
The focus of Chapter 2 of the text is the time-independent Schrodinger equation. A quick allusion is made to the time-dependent factor (<math>\phi(t)</math>) of solutions to the Schrodinger equation, then the text continues into a discussion of the significance of the time-independent part. What is the significance of the time-dependent factor?
I don't understand the first paragraph of page 29…why doesn't the exponential part cancel in the general solution like it does in the separable solutions?
Across pages 26 and 27 Griffiths shows that stationary states have a definite total energy which is constant in time. The exponential factor only involves time and energy, which are both constant for stationary states, and therefore cancels. The key is that this only works for stationary states, which are the separable solutions. Remember that separable solutions represent a very small class of solutions to the Schrodinger equation. There are potentially infinitely many more solutions that aren't separable, and for these the energy is not required to be definite and constant in time; with varying energy the combined exponential terms will not exactly cancel. The exact solutions we obtain with the method of separable solutions come from a highly specific and restrictive class of problems, a very small portion of the potential cases there are. It's a lot like division problems in arithmetic that early mathematicians restricted to only working with whole numbers. They could easily solve problems exactly that were expressed in whole numbers, but fractions posed conceptual and technical issues. They had greatly simplified mathematics compared to the full range of arithmetical problems involving all rational numbers or all real numbers (or the real and imaginary numbers that we use today), but their analysis of potential problems was greatly restricted. Just as it took more analytical development to work with all real numbers, right now we have exact solutions in QM only for separable solutions, until we develop our mathematics to correctly handle more classes of solutions. (I hope that description and analogy helps)
There are cross terms. <math>\displaystyle\left(ax+by+cz\right)\left(\frac{a}{x}+\frac{b}{y}+\frac{c}{z}\right)=a^2+ax\left(\frac{b}{y}+\frac{c}{z}\right)+b^2+by\left(\frac{a}{x}+\frac{c}{z}\right)+c^2+cz\left(\frac{a}{x}+\frac{b}{y}\right)\neq a^2+b^2+c^2</math>
The x, y, and z stand for the time-dependent factors in the separable solutions; the a, b, and c are the constant and position-dependent factors. The sum is the general solution.
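Here is a small numerical sketch of that point, using the first two infinite-square-well states with ħ = m = a = 1 (arbitrary illustrative choices): the cross term makes |Ψ|² change in time, even though each piece separately would be stationary.
<code python>
# Sketch: for a sum of two stationary states with different energies, the
# cross term in |Psi|^2 carries exp(-i(E1-E2)t/hbar) and does not cancel.
import numpy as np

x, dx = np.linspace(0, 1, 501, retstep=True)
psi1 = np.sqrt(2) * np.sin(np.pi * x)
psi2 = np.sqrt(2) * np.sin(2 * np.pi * x)
E1, E2 = np.pi**2 / 2, 4 * np.pi**2 / 2        # hbar = m = a = 1

def density(t):
    Psi = (psi1 * np.exp(-1j * E1 * t) + psi2 * np.exp(-1j * E2 * t)) / np.sqrt(2)
    return np.abs(Psi)**2

print(np.allclose(density(0.0), density(0.3)))  # False: |Psi|^2 oscillates in time
print(np.sum(density(0.3)) * dx)                # but total probability stays ~1
</code>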
I have a question regarding separable solutions. On page 28 it states, “It is simply a matter of finding the right constants <math>(c_1, c_2, \ldots)</math> so as to fit the initial conditions for the problem at hand.” Can anyone give an example of a real problem to better illustrate this point?