School of Physics & Astronomy
School of Physics and Astronomy Wiki


## Oct 9 (Fri) Reflection/Transmission with delta-function potential

Q&A for the previous lecture: Q_A_1007
Q&A for the next lecture: Q_A_1012

If you want to see lecture notes, click lec_notes

Main class wiki page: home

### Spherical Chicken 10/7 12:38

I am sure this is just a misunderstanding of parameters, but when one has the second solution to the delta function wells, the one with the positive peak and the negative peak… it was my understanding that, for instance, in the SHO, the top of the peak was where the highest probability of finding the particle was. However, when you have a negative value peak and a positive value peak… how is this different? I think I'm just looking at a graph thinking it's graphing something else… but… nonetheless, I'm a little unsure, conceptually, why we can have a positive and a negative value peak, and both have the same meaning in terms of where the particle is found…

The graphs we did today are all graphs of $\psi$. To get the expected value of the position, or any other measurable quantity, you have to look at $\psi^2$ which gives you nonnegative probabilities everywhere.

#### Spherical 10/7 13:19

of course. I knew it'd be straightforward. I think I'm just used to assuming $\psi^2$ for graphs these days…

#### Ralph 10/08 11:15am

Just remember that superposition of two waves with negative and positive peaks would make them cancel and then $\psi^2$ would be zero! So they are not the same thing even if the probability of finding them individually is the same.
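Ralph's point can be checked numerically. Below is a minimal sketch (the two Gaussian peaks are a hypothetical illustration of opposite-sign wave functions, not the actual eigenfunctions from lecture): individually the two give the same $\psi^2$, but their superposition cancels.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)

# Hypothetical peaks of opposite sign but identical shape.
psi_plus = np.exp(-(x - 1.0) ** 2)
psi_minus = -np.exp(-(x - 1.0) ** 2)

# Individually they predict the same probability density ...
assert np.allclose(psi_plus ** 2, psi_minus ** 2)

# ... but their superposition cancels, so |psi|^2 vanishes everywhere.
assert np.allclose((psi_plus + psi_minus) ** 2, 0.0)
```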

This whole discussion has been moved to Q_A_1012

### Spherical 10/8 21:26

About the delta function: I guess I just don't feel comfortable with it because I've used it extremely little… but do we ever actually define the delta function, besides the intuitive case? I googled it, but do we only use it in terms of limits, solving integrals where the limit is a value vs. 0? Is it more like we're borrowing the concept of the delta function?

##### Pluto 4ever 10/8 10:13PM

I'm also confused about the delta function. To me, it seems that the delta function's only real purpose is to make the potential well normalizable so we can get a practical function for the well.

##### chavez 10/8 11:45PM

The Dirac delta function is defined explicitly in Eqn. 2.111. It's not so much a function as it is a mathematical construct. The way we are using it is analogous to how we used an infinite square well to simplify the finite square well, but instead of a well we are modeling something like an impulse (maybe a point charge/mass).

##### Dark Helmet 10/09 12:05 am

The delta function isn't technically a function because a function that is zero everywhere except at one point must have a total integral of zero, not one. It seems to just be an abstract concept used to simplify calculations and approximate things that we can't describe in a more accurate way. Something to just get the job done, I guess.
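This contrast can be seen numerically. A minimal sketch (the Gaussian regularization of the delta function is my choice of limit, not the book's): a genuine function with a single finite spike integrates to essentially zero, while a sequence of ever-narrower unit-area peaks keeps its integral at 1.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

# A genuine function that is zero except at a single point integrates to ~0 ...
f = np.zeros_like(x)
f[len(x) // 2] = 1.0          # finite spike at x = 0
assert f.sum() * dx < 1e-4    # -> 0 as dx -> 0

# ... while the delta "function" is the limit of ever-narrower unit-area peaks:
for sigma in (0.1, 0.01, 0.001):
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    assert abs(g.sum() * dx - 1.0) < 1e-3   # the integral stays 1
```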

### Mercury 10/09 4:46 am

I know this was asked on Wednesday's Q&A, but I'm still confused about how to form linear combinations and how this makes the scattering wave functions normalizable (pg. 75).

The only place Griffiths really talks about this is on p.61 in the text and the footnote on that page. He says that the individual stationary states of a free particle are sinusoidal and extend to infinity, so on their own they aren't normalizable. But if you add together many sinusoidal functions with different values of k, you can make those waves cancel out everywhere except in a very small region (which is where the particle is).

And, of course, by 'add together many…with different k' Griffiths means take an integral over all possible values of k. That's what gives you equation 2.100.

Somebody correct me if I'm wrong, but I don't think that he proves that this works. He just states it and gives a qualitative explanation as to why it works.

This same reasoning also applies to solutions to the Dirac delta potential.

#### David Hilbert's Hat 10/09 1:00pm

It seems like the proof of what is said on p.61 is just a Fourier analysis trick; given some initial wavefunction, you can find the values of φ(k) by taking the Fourier transform of the initial wavefunction, and you know this works from Plancherel's theorem. That is a rather opaque argument, but put it this way: any given sine or cosine function will be indeterminate at infinity, but in general you can compose any well-behaved function by superposition of sine and cosine functions, which is the basis of Fourier analysis. So since some well-behaved functions do converge at infinity, the proper superposition of sine and cosine functions will converge as well. To build the proper superposition, you use 2.103.
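The superposition argument can be made concrete with a small numerical sketch (the Gaussian profile for $\phi(k)$ and all the numbers are my arbitrary choices, discretizing the integral over k): each plane wave oscillates forever, but the weighted sum is localized near x = 0 and dies off far away.

```python
import numpy as np

k0 = 5.0            # center wavenumber (arbitrary)
dk = 0.5            # spread in k (arbitrary)
k = np.linspace(k0 - 5.0, k0 + 5.0, 2001)
phi = np.exp(-(k - k0) ** 2 / (2.0 * dk ** 2))   # Gaussian weight phi(k)

x = np.linspace(-50.0, 50.0, 1001)
# Discretized version of psi(x) = ∫ phi(k) e^{ikx} dk
psi = (phi[None, :] * np.exp(1j * np.outer(x, k))).sum(axis=1) * (k[1] - k[0])

peak = np.abs(psi).max()                         # large near x = 0 ...
far = np.abs(psi[np.argmin(np.abs(x - 40.0))])   # ... tiny far away
assert far / peak < 1e-6
```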

### David Hilbert's Hat 10/09 1:00pm

If the Dirac delta function is defined as being either 0 or infinity, what is the point of multiplying it by some constant α? It seems like α wouldn't matter much, because the potential will either be 0 or infinite, regardless of α. Yet α still somehow defines the strength of the potential. Does it just come in because it is related to the derivative of ψ, as in 2.125?

#### liux0756 10/09 3:33pm

I think although the delta function is 0 or infinity, the integral of the function reflects the 'strength': while the integral of the delta function is 1, the integral of the delta function multiplied by α is α.
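This can be checked numerically by standing in for δ(x) with a narrow normalized Gaussian (my choice of regularization; the value of α is arbitrary): the integral of α·δ(x) comes out to α.

```python
import numpy as np

alpha = 3.7                     # arbitrary "strength"
sigma = 1.0e-3                  # width of the Gaussian standing in for delta(x)
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

delta_approx = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

# ∫ delta(x) dx = 1  and  ∫ alpha*delta(x) dx = alpha
assert abs(delta_approx.sum() * dx - 1.0) < 1e-3
assert abs((alpha * delta_approx).sum() * dx - alpha) < 1e-2
```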

#### David Hilbert's Hat 10/10 9:20am

Ah, okay, that makes much more sense.

### chap0326 10/09 14:59

Yuichi mentioned that the scattering problems focus on when E>0. Why is that?

##### Ekrpat 3:28pm 10/9

I'm not completely sure but I think we used E>0 today because we chose $V(x)=- \alpha \delta (x)$ and had to limit E to positive to get the scattering we wanted.

##### liux0756 3:30pm 10/09

Yes: because V=0 when x is not 0, E>0 gives a scattering state and E<0 gives a bound state.

##### Blackbox 8:00pm 10/11

To add a little more to the above two opinions: there are two different states, bound and scattering, as you know. In the case of E>0, the energy is larger than the potential energy at every location. Unlike in the bound state, a particle can be transmitted past x=0 because the potential has a negative infinite value only at x=0. I guess that's the reason why the scattering state focuses only on E>0.

### Hydra 10/9 21:30

Ok, I just need some verification on the delta-function….. It is not so much a “function” but instead a distribution. And we know that a distribution represents a probability….. But since the delta-function is spiked at one point only….does this mean there is only one probable outcome? I apologize for my fragmented question full of dotted pauses….. but I think it effectively reflects my confusion & frustration with this seemingly hand-waving method. It works, but why?

#### Schrodinger's Dog 10/10 1:24am

Yup, you got everything down! Why does it work? Well, integrate the delta function δ(x-a): you will find that you get 1, with the spike at x=a. Since the delta function is zero at every other point, all of the probability is concentrated at that one point, so at x=a we get P=1.

#### David Hilbert's Hat 10/10 9:20am

I think Griffiths makes a reference to the delta function when it's used as a distribution of a point particle's mass/charge. Everywhere that's not located at the particle it's zero, but at the particle it's infinity; when you integrate over all space it comes out to be 1, because you have one particle. That seems to be the easiest way to see the delta function - it is very peculiar because it is zero everywhere except at one point it's infinity, but it integrates to 1, like a point particle's distribution (which seems a lot more familiar).

### Anaximenes - 22:30 - 10/09/09

This question is about problem 2.34. The problem asks us in part c to show that $T=\sqrt{\frac{E-V_0}{E}}\frac{|F|^2}{|A|^2}$. However, Eq. 2.139 on page 75 says $T\equiv \frac{|F|^2}{|A|^2}$. That's with three lines, as in identically equal, as in any statement that they're not equal (such as that in the prompt in 2.34c) is incongruent. Now, I remember the professor said in class that $T=\frac{|F|^2}{|A|^2} \frac{k_2}{k_1}$, but how do we show that? We have no definition of T other than the (patently false) one in 2.139. For shame, Griffiths. For shame.

#### Yuichi 11:44 10/11

OK. Let's start from a simple case where the momentum before and after the obstacle represented by the potential is the same, to show that 2.139 is sensible (if you accept the idea that |A|^2 can be interpreted as the probability for the incoming particle, which should be somehow normalized to 1. Even though you cannot really normalize a plane wave, close your eyes to that detail.*) If |A|^2 represents the total probability for the incoming particle (integrated over the entire space!) being one, then |B|^2 must be related to the similar probability for the reflected wave, and |C|^2 to that for the transmitted wave. Is this acceptable? Or more quantitatively, |B/A|^2 is the probability that the particle is in the "reflected" state, and |C/A|^2 is the probability that the particle would be found in the transmitted state.

Now consider a more general case where the momentum changes after the collision with the potential well (or bump), paying a bit more attention to what |A|^2, |B|^2 and |C|^2 are related to. They are related to the probability DENSITIES that the particle is found in a unit length in the incident, reflected and transmitted states. But if you think about how reflection and transmission are measured, it's not the spatial densities of the particle in the three waves (incident, transmitted and reflected) that are directly relevant, but the temporal densities, i.e. how many particles (fractional!) are arriving as the incident wave, and going out as the reflected and transmitted waves, per second. When you do this conversion from probability density per length to probability per second, you will need to consider |A|^2*k_1, etc. Hence the equation described in problem 2.34.
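The need for the extra factor of k can be checked against the standard step-potential matching results (the B/A and C/A below are the textbook answers for a potential step of height V_0 with E > V_0; the natural units and the particular E, V_0 values are my choices): probability flux is conserved only once the velocity factor k_2/k_1 is included.

```python
import numpy as np

hbar = m = 1.0                              # natural units, illustrative only
E, V0 = 2.0, 1.0                            # arbitrary energies with E > V0
k1 = np.sqrt(2.0 * m * E) / hbar            # incident wavenumber
k2 = np.sqrt(2.0 * m * (E - V0)) / hbar     # transmitted wavenumber

# Standard step-potential amplitude ratios from the matching conditions:
R = ((k1 - k2) / (k1 + k2)) ** 2            # |B/A|^2
T = (k2 / k1) * (2.0 * k1 / (k1 + k2)) ** 2 # (k2/k1)|C/A|^2 -- note the k2/k1!

assert abs(R + T - 1.0) < 1e-12             # flux is conserved

# Without the velocity factor, the densities alone do NOT add up to 1:
assert abs(R + (2.0 * k1 / (k1 + k2)) ** 2 - 1.0) > 0.1
```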

If you don't like to ignore the "detail" at *, we have to consider wave packets of incident, reflected and transmitted waves. Following the lecture note for 10/9,

$\psi_I(x) = e^{ikx}$ alone for the x<0 region with $\psi_{II}(x) = 0$ for x>0 will not satisfy the Schrodinger equation and boundary conditions. However,

$\psi_I(x) = e^{ikx}+\frac{B}{A}e^{-ikx}$ for x<0

and

$\psi_{II}(x) = \frac{C}{A}e^{ikx}$ for x>0

will, for any value of k. By taking a linear combination of these for different values of k, the result should still satisfy the Schrodinger equation. Actually, at this point, we are really thinking about the time-dependent Schrodinger equation, so the time dependence should also be included in the wave function. So we are thinking about

$\Psi_I(x,t) = e^{i(kx-\omega t)}+\frac{B}{A}e^{-i(kx+\omega t)}$ for x<0

and

$\Psi_{II}(x,t) = \frac{C}{A}e^{i(kx-\omega t)}$ for x>0

before taking a linear combination ($\omega=\omega(k)$ is a function of k and $\omega(k)\sim k^2$), and afterward

$\Psi_I(x,t) = \int \phi(k) [e^{i(kx-\omega t)}+\frac{B}{A}e^{-i(kx+\omega t)}] dk$ for x<0

and

$\Psi_{II}(x,t) = \int \phi(k) [\frac{C}{A}e^{i(kx-\omega t)}] dk$ for x>0.

$\phi(k)$ is chosen such that $\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk$ describes the wave packet for the incident particle at t « 0 (well before the collision), situated where? (x « 0). It will pass through the potential region x ~ 0 at t ~ 0, and move on to the x » 0 region at t » 0, as if it were a free particle.

You can show (a challenge for you, if you are looking for one) that, given the situation described in the above paragraph, the term containing "B" is a wave packet located at x » 0 at t « 0 and at x « 0 at t » 0, while the "C" term corresponds to a wave packet located at x « 0 for t « 0 and x » 0 for t » 0 (not dissimilar to the incident wave packet). Note that for t « 0, only the incident wave packet is in the x region which is appropriate for it (i.e. x < 0); the other two wave packets are ghosts existing only in the equations, but not in real life (not dissimilar to image charges in E&M, or, if you remember something from waves on a string, to reflected waves, which you were probably told come from a point beyond the fixed end of the string where waves cannot physically exist). For t » 0, the wave packet for the incident particle becomes a ghost sitting in the x » 0 region, while the other two wave packets are real.

Here are some more challenges for you, but one can show that

1. the total probability for the incident particle, $\int_{-\infty}^\infty |\Psi_{\rm inc}(x,t)|^2 dx$ where $\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk$, remains unity for any time, t, once $\phi(k)$ is determined appropriately (so that the total probability is unity at some time t « 0).
2. the total probability for the particle associated with the reflected wave, $\int_{-\infty}^\infty |\Psi_{\rm ref}(x,t)|^2 dx$ where $\Psi_{\rm ref}(x,t) = \int \phi(k) \frac{B}{A}e^{-i(kx+\omega t)} dk$, will be |B/A|^2 and remains the same for any time, t. For R, the reflection coefficient, one is interested in the total probability in the reflected wave packet for t » 0 divided by that in the incident wave packet for t « 0; the latter is 1 when $\phi(k)$ is correctly determined, so R is |B/A|^2.

   To figure out the total probability associated with this wave packet, you need to modify this expression so that you can use the fact that the probability for the incident wave packet integrates to 1.
3. A very similar claim can be made for the transmitted part of the wave. Note that in case the momentum of the transmitted wave is different, the "k" in the plane wave expression has to be changed so that it reads $e^{i(k'x-\omega t)}$, and the wave packet will look like

$\Psi_{\rm trans}(x,t) = \int \phi(k) [\frac{C}{A}e^{i(k'x-\omega t)}] dk$. Note that only the "k" in the plane wave is changed to "k'", not the other occurrences. (Think about why, if you don't want to just accept it from me.)

To figure out the total probability for the transmitted wave packet, make the reasonable assumption that $\phi(k)$ has significant non-zero values only for a small k' range around its peak at $k_0'$, which corresponds to $k_0$ for the incident wave. Then one can approximate that

$k' = k_0'+\frac{k_0}{k_0'}(k-k_0)$. I will leave the rest for you to prove.
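One way to see where this expansion comes from (assuming the transmitted region sits at a constant potential offset $-V_0$ relative to the incident one): both waves share the same energy E, so $\frac{\hbar^2 k'^2}{2m} - V_0 = \frac{\hbar^2 k^2}{2m} = E$. Differentiating with respect to k gives $k'\frac{dk'}{dk} = k$, i.e. $\frac{dk'}{dk} = \frac{k}{k'}$, and a first-order Taylor expansion of $k'(k)$ about $k_0$ then gives $k' \approx k_0' + \frac{k_0}{k_0'}(k-k_0)$.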

The following may be useful for some of the calculations needed to prove some of the claims above. When the incident wave packet, $\Psi_{\rm inc}(x,t) = \int \phi(k) e^{i(kx-\omega t)} dk$, is properly normalized, $\int_{-\infty}^\infty [\Psi_{\rm inc}(x,t)]^*\Psi_{\rm inc}(x,t) dx = 1$. Substituting the former into the latter, and changing one occurrence of k to k_2 to remember that the two integrals involve different variables of integration,

$\int_{-\infty}^\infty [\int \phi(k_2) e^{i(k_2x-\omega_2 t)} dk_2]^*\int \phi(k) e^{i(kx-\omega t)} dk\, dx = 1$, where $\omega_2 \equiv \omega(k_2)$.

Since we are sloppy physicists, we freely change the order of the integrals. So rather than doing k and k_2 integrals, we decide to do the x integral first. This will result in

$\int \phi^*(k_2)dk_2 \int \phi(k) dk \int_{-\infty}^\infty dx [e^{-i(k_2x-\omega_2 t)} e^{i(kx-\omega t)}] = 1$.

$\int \phi^*(k_2)dk_2 \int \phi(k) dk [e^{+i\omega_2 t} e^{-i\omega t}]\int_{-\infty}^\infty dx [e^{-ik_2x} e^{ikx}] = 1$.

Since the innermost integral is one way to express the delta function (times 2π), $\int_{-\infty}^\infty e^{i(k-k_2)x} dx = 2\pi\delta(k-k_2)$ (see Griffiths' hand-waving argument for this), the k_2 integral becomes easy: just replace k_2 with k.

$2\pi \int \phi^*(k)\phi(k) dk [e^{+i\omega t} e^{-i\omega t}] = 2\pi \int |\phi(k)|^2 dk = 1$

This is the normalization equation for $\phi(k)$ similar to what we used to have for “A” in some practice problems, or $\sum_n |c_n|^2 = 1$.

I may have made some mistakes like where 2pi goes, so please carefully check if you are interested in this issue.
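Here is a numerical spot-check of the final identity, with my arbitrary choice of a Gaussian $\phi(k)$ and discretized integrals: with $\Psi(x) = \int \phi(k)e^{ikx}dk$ (taking t = 0), one should find $\int |\Psi|^2 dx = 2\pi \int |\phi|^2 dk$, including the factor of 2π.

```python
import numpy as np

k = np.linspace(-10.0, 10.0, 1001)
dk = k[1] - k[0]
phi = np.exp(-k**2)                         # arbitrary smooth phi(k)

x = np.linspace(-40.0, 40.0, 2001)
dx = x[1] - x[0]
# psi(x) = ∫ phi(k) e^{ikx} dk at t = 0, discretized
psi = (phi[None, :] * np.exp(1j * np.outer(x, k))).sum(axis=1) * dk

lhs = (np.abs(psi) ** 2).sum() * dx                 # ∫ |psi|^2 dx
rhs = 2.0 * np.pi * (np.abs(phi) ** 2).sum() * dk   # 2π ∫ |phi|^2 dk
assert abs(lhs - rhs) / rhs < 1e-3                  # they agree, 2π included
```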

#### Can 10/21 10:52am

For Yuichi: instead of saying |A|^2 is a probability, I think it would make more sense to think of |A|^2 as the incident intensity; just my personal opinion, not sure if it is rigorous. Recall that the square of the amplitude of the E field is proportional to the intensity (E^2 is proportional to I); the same analogy can be made here. Then the transmission probability can be interpreted as transmitted intensity over incident intensity, $T=\frac{|F|^2}{|A|^2}$. At least, I think this might make transmission easier to understand intuitively, if not mathematically; one still has to work out all the math.

### Liam Devlin 10/12, 10:45am

Why is k a known quantity for a scattering problem?

#### Can 10/22 12:15pm

Since for a scattering problem $E=\frac{\hbar^2k^2}{2m}$, which means $k=\frac{\sqrt{2mE}}{\hbar}$, and E is the incident energy of the particle, which can be manually tuned. E is known, so k is known.
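As a quick numerical illustration (my numbers: a 1 eV electron in SI units; note it is $\hbar$, not h, in the formula):

```python
import math

hbar = 1.054571817e-34        # J*s (reduced Planck constant)
m_e = 9.1093837015e-31        # kg (electron mass)
E = 1.602176634e-19           # J (1 eV)

k = math.sqrt(2.0 * m_e * E) / hbar
assert 5.0e9 < k < 5.3e9      # about 5.1e9 m^-1, wavelength ~ 1.2 nm
```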

### Aspirin 10/12, 2:30pm

Why does the coefficient, D, of the general solution $\psi_{II} = C e^{ikx} + D e^{-ikx}$ drop out? Yuichi mentioned the reason, but I didn't fully get it.

#### Blackbox 10/12, 3:20pm

I'm not quite sure I understood that correctly, but say a particle moves from the left to the right: it will transmit through the delta function well, and in region II there is nothing physical traveling from the right to the left. If we think of the opposite situation, a particle moving from the right to the left, then in the first region, $\psi_I = Ae^{ikx} + Be^{-ikx}$, A would be removed because nothing can come back moving to the right.

### Blackbox 10/12, 6:00pm

I might have written it down wrong in my notes. Last Wednesday, Yuichi mentioned that the unit of α is energy/length. Isn't it energy*length?