
Oct 9 (Fri) Reflection/Transmission with delta-function potential

Return to Q&A main page: Q_A
Q&A for the previous lecture: Q_A_1007
Q&A for the next lecture: Q_A_1012

If you want to see lecture notes, click lec_notes

Main class wiki page: home

Spherical Chicken 10/7 12:38

I'm sure this is just a misunderstanding of parameters, but take the second solution for the delta-function wells, the one with a positive peak and a negative peak. It was my understanding that, in the SHO for instance, the top of the peak was where the probability of finding the particle was highest. So when you have a negative-value peak and a positive-value peak, how is that different? I think I'm just looking at a graph and thinking it's graphing something else… but nonetheless I'm a little unsure, conceptually, why we can have a positive peak and a negative peak and both have the same meaning in terms of where the particle is found…

Daniel Faraday 10/7 13:00

The graphs we did today are all graphs of <math>\psi</math>. To get the expected value of the position, or of any other measurable quantity, you have to look at <math>|\psi|^2</math>, which gives you nonnegative probabilities everywhere.

Spherical 10/7 13:19

Of course. I knew it'd be straightforward. I think I'm just used to assuming graphs show <math>|\psi|^2</math> these days…

Ralph 10/08 11:15am

Just remember that a superposition of two waves with negative and positive peaks would make them cancel, and then <math>|\psi|^2</math> would be zero! So they are not the same thing, even if the probability of finding the particle at each peak individually is the same.

Daniel Faraday 10/7 12:45pm

This whole discussion has been moved to Q_A_1012


Spherical 10/8 21:26

About the delta function: I guess I just don't feel comfortable with it because I've used it so little… but do we never actually define the delta function, beyond the intuitive picture? I went and googled it, but do we only ever use it inside limits and integrals, where what matters is the limiting value of the integral rather than the value at a single point? Is it more that we're borrowing the concept of the delta function?

Pluto 4ever 10/8 10:13PM

I'm also confused about the delta function. To me it seems that the delta function's only real purpose is to make the potential well normalizable, so that we can get a practical function for the well.

chavez 10/8 11:45PM

The Dirac delta function is defined explicitly in Eq. 2.111. It's not so much a function as a mathematical construct. The way we are using it is analogous to how we used the infinite square well to simplify the finite square well, but instead of a well we are modeling something like an impulse (maybe a point charge or point mass).

Dark Helmet 10/09 12:05 am

The delta function isn't technically a function, because a function that is zero everywhere except at one point must have a total integral of zero, not one. It seems to be an abstract construct used to simplify calculations and approximate things that we can't describe in a more exact way. Something to get the job done, I guess.
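As a numerical sketch of that limiting picture (not from the text; the Gaussian stand-in and the numbers are made up): model <math>\delta(x)</math> by normalized Gaussians of shrinking width. The peak height blows up while the area stays 1, which is exactly the combination no ordinary function can deliver.

<code python>
# Approximate delta(x) by Gaussians of shrinking width eps.
# Each one still integrates to ~1 even as the peak blows up.
import numpy as np

x, dx = np.linspace(-5, 5, 200001, retstep=True)

def delta_approx(x, eps):
    """Normalized Gaussian of width eps; tends to delta(x) as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

for eps in [1.0, 0.1, 0.01]:
    d = delta_approx(x, eps)
    print(f"eps = {eps:5.2f}   peak height = {d.max():10.2f}   integral = {d.sum() * dx:.4f}")
</code>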

Mercury 10/09 4:46 am

I know this was asked on Wednesday's Q&A, but I'm still confused about how to form linear combinations and how this makes the scattering wave functions normalizable (pg. 75).

Daniel Faraday 10/09 7:15am

The only place Griffiths really talks about this is on p. 61 in the text and in the footnote on that page. He says that the individual stationary states of a free particle are sinusoidal and extend to infinity, so they aren't normalizable on their own. But if you add together many sinusoidal functions with different values of k, you can make those waves cancel out everywhere except in a very small region (which is where the particle is).

And, of course, by 'add together many… with different k' Griffiths means take an integral over all possible values of k. That's what gives you Equation 2.100.

Somebody correct me if I'm wrong, but I don't think he proves that this works. He just states it and gives a qualitative explanation of why it works.

This same reasoning also applies to solutions for the Dirac delta potential.
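Here is a small numerical sketch of that cancellation (an illustration with made-up numbers, not something from Griffiths): weight the plane waves <math>e^{ikx}</math> with a Gaussian <math>\phi(k)</math> as in Eq. 2.100 at t=0, and the superposition comes out localized and normalizable even though each individual plane wave is not.

<code python>
# Build Psi(x,0) = (1/sqrt(2*pi)) * Integral phi(k) e^{ikx} dk  (Griffiths 2.100 at t = 0)
# with a Gaussian phi(k) centered at k0. Each e^{ikx} extends to infinity, but the
# superposition is localized near x = 0 and integrates to 1.
import numpy as np

k0, sigma_k = 5.0, 0.5                                   # made-up center and spread in k
k, dk = np.linspace(k0 - 6*sigma_k, k0 + 6*sigma_k, 1201, retstep=True)
phi = np.exp(-(k - k0)**2 / (4 * sigma_k**2))            # Gaussian weight (unnormalized)
phi /= np.sqrt((np.abs(phi)**2).sum() * dk)              # now Integral |phi|^2 dk = 1

x, dx = np.linspace(-40, 40, 2001, retstep=True)
psi = (phi[None, :] * np.exp(1j * k[None, :] * x[:, None])).sum(axis=1) * dk / np.sqrt(2*np.pi)

prob = np.abs(psi)**2
print("Integral |Psi|^2 dx =", prob.sum() * dx)                      # ~1: normalizable
print("rms width of packet =", np.sqrt((x**2 * prob).sum() * dx))    # finite: localized near x = 0
</code>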

David Hilbert's Hat 10/09 1:00pm

It seems like the proof of what is said on p. 61 is just a Fourier-analysis trick: given some initial wave function, you can find <math>\phi(k)</math> by taking the Fourier transform of the initial wave function, and you know this works from Plancherel's theorem. That's a rather opaque argument, but put it this way: any given sine or cosine function is indeterminate at infinity, but in general you can build any well-behaved function out of a superposition of sine and cosine functions, which is the basis of Fourier analysis. And since some well-behaved functions do converge at infinity, the proper superposition of sine and cosine functions will converge as well. To build the proper superposition, you use 2.103.
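To make the Fourier trick concrete, here is a short sketch (with a made-up example wave function) of Eq. 2.103 together with a numerical check of Plancherel's theorem.

<code python>
# Given an example Psi(x,0), compute phi(k) from Griffiths 2.103,
# phi(k) = (1/sqrt(2*pi)) * Integral Psi(x,0) e^{-ikx} dx,
# and check Plancherel: Integral |phi(k)|^2 dk = Integral |Psi(x,0)|^2 dx.
import numpy as np

x, dx = np.linspace(-30, 30, 3001, retstep=True)
psi0 = np.exp(-x**2 / 2) * np.exp(3j * x)                # made-up initial wavefunction
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)            # normalize it

k, dk = np.linspace(-3, 9, 1201, retstep=True)           # k grid around the carrier k = 3
phi = (psi0[None, :] * np.exp(-1j * k[:, None] * x[None, :])).sum(axis=1) * dx / np.sqrt(2*np.pi)

print("Integral |Psi|^2 dx =", (np.abs(psi0)**2).sum() * dx)   # 1 by construction
print("Integral |phi|^2 dk =", (np.abs(phi)**2).sum() * dk)    # ~1, as Plancherel promises
</code>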

David Hilbert's Hat 10/09 1:00pm

If the Dirac delta function is defined as being either 0 or infinity, what is the point of multiplying it by some constant α? It seems like α wouldn't matter much, because the potential will be either 0 or infinite regardless of α. Yet α still somehow defines the strength of the potential. Does it just come in because it is related to the derivative of ψ, as in 2.125?

liux0756 10/09 3:33pm

I think that although the delta function itself is 0 or infinity, the integral of the function is what reflects the 'strength.' While the integral of the delta function is 1, the integral of the delta function multiplied by α is α.
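A small numerical sketch of that point (again using a narrow Gaussian as a stand-in for <math>\delta(x)</math>, with a made-up α): the peak looks equally "infinite" for any α, but every integral against it scales with α, and it is such integrals that enter physical results like the derivative jump in 2.125.

<code python>
# The "strength" lives in the integral: Integral alpha*delta(x) dx = alpha, and more
# generally Integral f(x)*alpha*delta(x) dx -> alpha*f(0) as the width goes to zero.
import numpy as np

x, dx = np.linspace(-5, 5, 400001, retstep=True)
f = np.cos(x)                                            # any smooth test function; f(0) = 1
alpha = 2.5                                              # made-up strength

def delta_approx(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

for eps in [0.5, 0.05, 0.005]:
    val = (f * alpha * delta_approx(x, eps)).sum() * dx
    print(f"eps = {eps:6.3f}   Integral f(x)*alpha*delta(x) dx = {val:.4f}   (alpha*f(0) = {alpha:.4f})")
</code>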

David Hilbert's Hat 10/10 9:20am

Ah, okay, that makes much more sense.

chap0326 10/09 14:59

Yuichi mentioned that the scattering problems focus on when E>0. Why is that?

Ekrpat 3:28pm 10/9

I'm not completely sure, but I think we used E>0 today because we chose <math>V(x)=- \alpha \delta (x)</math> and had to restrict E to positive values to get the scattering behavior we wanted.

liux0756 3:30pm 10/09

Yes. Because V=0 everywhere except x=0, E>0 corresponds to scattering states and E<0 to bound states.

Blackbox 8:00pm 10/11

To add a little to the two comments above: as you know, there are two different kinds of states, bound states and scattering states. In the case E>0, the energy is larger than the potential energy at every location. Unlike in the bound state, the particle can be transmitted past x=0, even though the potential has a negative infinite value at x=0. I guess that's the reason the scattering analysis focuses only on E>0.
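For reference, on the E>0 side of <math>V(x)=-\alpha\delta(x)</math> Griffiths arrives at <math>R=\frac{1}{1+2\hbar^2E/m\alpha^2}</math> and <math>T=\frac{1}{1+m\alpha^2/2\hbar^2E}</math> (Eq. 2.141). A tiny check, with <math>\hbar=m=1</math> and a made-up α:

<code python>
# Reflection and transmission for the delta well, Griffiths 2.141 with hbar = m = 1:
# R = 1/(1 + 2E/alpha^2), T = 1/(1 + alpha^2/(2E)).  R + T = 1, and T -> 1 at high E.
alpha = 1.0
for E in [0.1, 0.5, 1.0, 5.0, 25.0]:
    R = 1.0 / (1.0 + 2.0 * E / alpha**2)
    T = 1.0 / (1.0 + alpha**2 / (2.0 * E))
    print(f"E = {E:5.1f}   R = {R:.4f}   T = {T:.4f}   R + T = {R + T:.4f}")
</code>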

Hydra 10/9 21:30

OK, I just need some verification on the delta function….. It is not so much a “function” as a distribution. And we know that a distribution represents a probability….. But since the delta function is spiked at one point only….does this mean there is only one probable outcome? I apologize for my fragmented question full of dotted pauses….. but I think it effectively reflects my confusion & frustration with this seemingly hand-waving method. It works, but why?

Schrodinger's Dog 10/10 1:24am

Yup, you've got it! Why does it work? Well, integrate a probability density containing δ(x-a): you will find that you get 1, all of it coming from the single point x=a where the delta function spikes. Since the delta function is zero at every other point, the entire probability sits at that one point, so P=1 there.

David Hilbert's Hat 10/10 9:20am

I think Griffiths makes a reference to the delta function being used as the distribution of a point particle's mass or charge. Everywhere away from the particle it's zero, but at the particle it's infinite; when you integrate over all space you get 1, because you have one particle. That seems to be the easiest way to see the delta function: it is peculiar because it is zero everywhere except at one point, where it's infinite, yet it integrates to 1, like a point particle's density distribution (which seems a lot more familiar).

Anaximenes - 22:30 - 10/09/09

This question is about problem 2.34. The problem asks us in part (c) to show that <math>T=\sqrt{\frac{E-V_0}{E}}\frac{|F|^2}{|A|^2}</math>. However, Eq. 2.139 on page 75 says <math>T\equiv \frac{|F|^2}{|A|^2}</math>. That's with three lines, as in identically equal, so any statement that they're not equal (such as the prompt of 2.34(c)) is inconsistent with it. Now, I remember the professor said in class that <math>T=\frac{|F|^2}{|A|^2} \frac{k_2}{k_1}</math>, but how do we show that? We have no definition of T other than the (patently false) one in 2.139. For shame, Griffiths. For shame.

Yuichi 11:44 10/11

OK. Let's start from the simple case where the momentum before and after the obstacle represented by the potential is the same, to show that 2.139 is sensible (if you accept the idea that |A|^2 can be interpreted as the probability for the incoming particle, which should somehow be normalized to 1; even though you cannot really normalize a plane wave, close your eyes to that detail for now.*) If |A|^2 represents the total probability for the incoming particle (integrated over the entire space!) being one, then |B|^2 must be related to the similar probability for the reflected wave, and |C|^2 to that for the transmitted wave. Is this acceptable? Or, more quantitatively, |B/A|^2 is the probability that the particle is in the “reflected” state, and |C/A|^2 is the probability that the particle would be found in the transmitted state.

Now consider the more general case where the momentum changes after the collision with the potential well (or bump), and pay a bit more attention to what |A|^2, |B|^2 and |C|^2 are related to. They are related to the probability DENSITIES of finding the particle per unit length in the incident, reflected and transmitted states. But if you think about how reflection and transmission are measured, it is not the spatial densities of the particle in the three waves (incident, transmitted and reflected) that are directly relevant, but the temporal densities: i.e., how much of the particle (a fraction!) arrives per second in the incident wave, and leaves per second in the reflected and transmitted waves. When you convert a probability density per unit length into a probability per second, you need to consider |A|^2*k_1, etc. Hence the equation described in problem 2.34.
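As a quick numerical check of this flux argument (a sketch using the step potential of problem 2.34, with units <math>\hbar=m=1</math>): matching ψ and ψ' at x=0 gives <math>B/A=(k_1-k_2)/(k_1+k_2)</math> and <math>F/A=2k_1/(k_1+k_2)</math>, and only the flux-weighted combination adds up to 1 with R.

<code python>
# Step potential (E > V0): R = |B/A|^2 and the density ratio |F/A|^2 alone do not add to 1;
# including the k2/k1 flux factor (i.e. T = sqrt((E-V0)/E) * |F/A|^2) restores R + T = 1.
import numpy as np

V0 = 1.0                                   # step height (made up), units hbar = m = 1
for E in [1.5, 2.0, 5.0, 20.0]:
    k1 = np.sqrt(2 * E)
    k2 = np.sqrt(2 * (E - V0))
    R  = ((k1 - k2) / (k1 + k2))**2        # |B/A|^2
    F2 = (2 * k1 / (k1 + k2))**2           # |F/A|^2, a probability-density ratio
    T  = (k2 / k1) * F2                    # probability-per-second (flux) ratio
    print(f"E = {E:5.1f}   R + |F/A|^2 = {R + F2:.4f}   R + (k2/k1)|F/A|^2 = {R + T:.4f}")
</code>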

If you do not want to ignore the “detail” marked by * above, we have to consider wave packets for the incident, reflected and transmitted waves. Following the lecture notes for 10/9:

<math>\psi_I(x) = e^{ikx}</math> alone in the x<0 region, together with <math>\psi_{II}(x) = 0</math> for x>0, will not satisfy the Schrodinger equation and the boundary conditions. However,

<math>\psi_I(x) = e^{ikx}+\frac{B}{A}e^{-ikx}</math> for x<0

and

<math>\psi_{II}(x) = \frac{C}{A}e^{ikx}</math> for x>0

will, for any value of k. By taking a linear combination of these for different values of k, the result will still satisfy the Schrodinger equation. Actually, at this point we are really thinking about the time-dependent Schrodinger equation, so the time dependence should also be included in the wave function. So we are thinking about

<math>\Psi_I(x,t) = e^{i(kx-\omega t)}+\frac{B}{A}e^{-i(kx+\omega t)}</math> for x<0

and

<math>\Psi_{II}(x,t) = \frac{C}{A}e^{i(kx-\omega t)}</math> for x>0

before taking a linear combination (<math>\omega=\omega(k)</math> is a function of k and <math>\omega(k)\sim k^2</math>), and afterward

<math>\Psi_I(x,t) = \int \phi(k) [e^{i(kx-\omega t)}+\frac{B}{A}e^{-i(kx+\omega t)}] dk</math> for x<0

and

<math>\Psi_{II}(x,t) = \int \phi(k) [\frac{C}{A}e^{i(kx-\omega t)}] dk</math> for x>0.

<math>\phi(k)</math> is chosen such that <math>\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk</math> describes the wave packet for the incident particle at t « 0 (well before the collision), situated where? At x « 0. It will pass through the potential region around x ~ 0 at t ~ 0, and move on to x » 0 at t » 0, as if it were a free particle.

You can show (a challenge for you, if you are looking for one) that, given the situation described in the above paragraph, the term containing “B” is a wave packet located at x » 0 for t « 0 and at x « 0 for t » 0, while the “C” term corresponds to a wave packet located at x « 0 for t « 0 and at x » 0 for t » 0 (not dissimilar to the incident wave packet). Note that for t « 0, only the incident wave packet is in the x region appropriate for it (i.e. x < 0), and the other two wave packets are ghosts existing only in the equations, not in real life (not dissimilar to image charges in E&M, or, if you remember something from waves on a string, to reflected waves that appear to come from beyond the fixed end of the string, where waves cannot physically exist). For t » 0, the wave packet for the incident particle becomes a ghost sitting in the x » 0 region, while the other two wave packets are real.
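If you would rather watch the packet move than do the stationary-phase argument by hand, here is a rough numerical sketch (made-up packet parameters; units <math>\hbar=m=1</math>, so <math>\omega=k^2/2</math> and the group velocity is <math>d\omega/dk = k_0</math>): the incident packet sits at x « 0 for t « 0 and at x » 0 for t » 0.

<code python>
# Incident wave packet Integral phi(k) e^{i(kx - omega t)} dk with omega = k^2/2 (hbar = m = 1).
# Its center <x> tracks k0*t: far to the left for t << 0, far to the right for t >> 0.
import numpy as np

k0, sigma = 5.0, 0.5                                   # made-up packet parameters
k, dk = np.linspace(k0 - 6*sigma, k0 + 6*sigma, 1201, retstep=True)
phi = np.exp(-(k - k0)**2 / (4 * sigma**2))

x, dx = np.linspace(-40, 40, 1601, retstep=True)
for t in [-5.0, 0.0, 5.0]:
    phase = k[None, :] * x[:, None] - 0.5 * k[None, :]**2 * t
    psi = (phi[None, :] * np.exp(1j * phase)).sum(axis=1) * dk
    prob = np.abs(psi)**2
    x_center = (x * prob).sum() / prob.sum()
    print(f"t = {t:+5.1f}   packet center <x> = {x_center:+7.2f}   (group velocity * t = {k0*t:+7.2f})")
</code>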

Here are some more challenges for you: one can show that

  1. the total probability for the incident particle, <math>\int_{-\infty}^\infty |\Psi_{\rm inc}(x,t)|^2 dx</math> where <math>\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk</math>, remains unity for any time t, once <math>\phi(k)</math> is determined appropriately (so that the total probability is unity at some time t « 0).
  2. the total probability for the particle associated with the reflected wave, <math>\int_{-\infty}^\infty |\Psi_{\rm ref}(x,t)|^2 dx</math> where <math>\Psi_{\rm ref}(x,t) = \int \phi(k) \frac{B}{A}e^{-i(kx+\omega t)} dk</math>, is |B/A|^2 and remains the same for any time t. For R, the reflection coefficient, one is interested in the ratio of the total probability in the reflected wave packet for t » 0 to that in the incident wave packet for t « 0; the latter is 1 if <math>\phi(k)</math> is correctly determined, so R is |B/A|^2.

    To figure out the total probability associated with this wave packet, you need to modify this expression so that you can use the fact that the probability for the incident wave packet integrates to 1.
  3. A very similar claim can be made for the transmitted part of the wave. Note that, in case the momentum of the transmitted wave is different, the “k” in the plane-wave factor has to be changed to k', i.e. <math>e^{i(k'x-\omega t)}</math>, and the wave packet will look like

    <math>\Psi_{\rm trans}(x,t) = \int \phi(k) [\frac{C}{A}e^{i(k'x-\omega t)}] dk</math>. Note that only the “k” in the plane-wave factor is changed to “k'”, not the one in <math>\phi(k)\,dk</math>. (Think about why, if you don't want to just accept it from me.)

    To figure out the total probability for the transmitted wave packet, make the reasonable assumption that <math>\phi(k)</math> has significant non-zero values only for a small range of k around its peak at <math>k_0</math>, which corresponds to a transmitted wave number near <math>k_0'</math>. Then one can approximate

    <math>k' = k_0'+\frac{k_0}{k_0'}(k-k_0)</math>. I will leave the rest for you to prove.

The following may be useful for some of the calculations needed to prove the claims above. When the incident wave packet, <math>\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk</math>, is properly normalized, <math>\int_{-\infty}^\infty [\Psi_{\rm inc}(x,t)]^*\Psi_{\rm inc}(x,t) dx = 1</math>. Substitute the former into the latter, and change one occurrence of k to k_2 as a reminder that the two integrals involve different variables of integration:

<math>\int_{-\infty}^\infty \left[\int \phi(k_2) e^{i(k_2x-\omega_2 t)} dk_2\right]^*\int \phi(k) e^{i(kx-\omega t)} dk\, dx = 1</math>, where <math>\omega_2 \equiv \omega(k_2)</math>.

Since we are sloppy physicists, we freely change the order of the integrals. So rather than doing the k and k_2 integrals first, we decide to do the x integral first. This results in

<math>\int \phi^*(k_2)dk_2 \int \phi(k) dk \int_{-\infty}^\infty dx [e^{-i(k_2x-\omega_2 t)} e^{i(kx-\omega t)}] = 1</math>.

<math>\int \phi^*(k_2)dk_2 \int \phi(k) dk [e^{+i\omega_2 t} e^{-i\omega t}]\int_{-\infty}^\infty dx [e^{-ik_2x} e^{ikx}] = 1</math>.

Since the innermost integral is one way to express the delta function, <math>2\pi\delta(k-k_2)</math> (see Griffiths' hand-waving argument for this), the k_2 integral becomes easy: just replace k_2 with k.

<math> 2\pi \int \phi^*(k)\phi(k) dk [e^{+i\omega t} e^{-i\omega t}] = 2\pi \int |\phi(k)|^2 dk = 1</math>

This is the normalization condition for <math>\phi(k)</math>, similar to what we used to have for “A” in some practice problems, or to <math>\sum_n |c_n|^2 = 1 </math>.

I may have made some mistakes, like where the 2π goes, so please check carefully if you are interested in this issue.
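Here is one way to do that check numerically, in the convention used above where <math>\Psi_{\rm inc}(x,t)=\int\phi(k)e^{i(kx-\omega t)}dk</math> carries no <math>1/\sqrt{2\pi}</math> in front (a sketch with a made-up Gaussian <math>\phi(k)</math>):

<code python>
# If 2*pi * Integral |phi(k)|^2 dk = 1, then Integral |Psi_inc(x,0)|^2 dx should be 1,
# which is where the factor of 2*pi from the delta function ends up.
import numpy as np

k0, sigma = 5.0, 0.4                                       # made-up packet parameters
k, dk = np.linspace(k0 - 6*sigma, k0 + 6*sigma, 1201, retstep=True)
phi = np.exp(-(k - k0)**2 / (4 * sigma**2))
phi /= np.sqrt(2 * np.pi * (np.abs(phi)**2).sum() * dk)    # enforce 2*pi*Integral|phi|^2 dk = 1

x, dx = np.linspace(-50, 50, 2001, retstep=True)
psi = (phi[None, :] * np.exp(1j * k[None, :] * x[:, None])).sum(axis=1) * dk   # t = 0

print("2*pi * Integral |phi|^2 dk =", 2 * np.pi * (np.abs(phi)**2).sum() * dk)  # 1 by construction
print("Integral |Psi_inc|^2 dx    =", (np.abs(psi)**2).sum() * dx)              # ~1: the 2*pi checks out
</code>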

Can 10/21 10:52am

For Yuichi: instead of saying |A|^2 is a probability, I think it would make much more sense to think of |A|^2 as the incident intensity (just my personal opinion, not sure if it is rigorous). Recall that the square of the amplitude of the E field is proportional to the intensity; the same analogy can be made here. Then the transmission can be interpreted as transmitted intensity over incident intensity, <math>T=\frac{|F|^2}{|A|^2}</math>. At least, I think this might make transmission easier to understand intuitively, if not mathematically; one still has to work out all the math.

Liam Devlin 10/12, 10:45am

Why is k a known quantity in a scattering problem?

Can 10/22 12:15pm

Since for a scattering problem <math>E=\frac{\hbar^2k^2}{2m}</math>, which means <math>k=\frac{\sqrt{2mE}}{\hbar}</math>, and E is the incident energy of the particle, which can be tuned by hand. E is known, so k is known.
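For a concrete sense of scale, here is a tiny example (an electron at a few hypothetical incident energies, with SI constants from scipy; the numbers are not from the text):

<code python>
# k = sqrt(2 m E) / hbar for an electron at a few tunable incident energies.
import numpy as np
from scipy.constants import hbar, m_e, eV

for E_eV in [0.1, 1.0, 10.0]:
    k = np.sqrt(2 * m_e * E_eV * eV) / hbar            # wave number in 1/m
    print(f"E = {E_eV:5.1f} eV   k = {k:.3e} 1/m   de Broglie wavelength = {2*np.pi/k*1e9:.3f} nm")
</code>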

Aspirin 10/12, 2:30pm

Why does the coefficient D of the general solution <math> \psi_{II} = C e^{ikx} + D e^{-ikx} </math> drop out? Yuichi mentioned the reason, but I didn't fully get it.

Blackbox 10/12, 3:20pm

I'm not quite sure I understood this correctly, but here is how I think of it: when a particle moves from left to right, it is transmitted through the delta-function well, and in the region to the right of the well there is nothing traveling back toward the left, so the <math>e^{-ikx}</math> term (the D term) must be absent. In the opposite situation, a particle moving from right to left, it is in the first region, <math> \psi_I = Ae^{ikx} + Be^{-ikx} </math>, that a term is removed: A would be dropped, because nothing can come back moving to the right.

Blackbox 10/12, 6:00pm

I might have written it down wrong in my notes, but last Wednesday Yuichi mentioned that the unit of α is energy/length. Isn't it energy*length?

Return to Q&A main page: Q_A
Q&A for the previous lecture: Q_A_1007
Q&A for the next lecture: Q_A_1012
