  * <del>Giving</del> //If you calculate// <math>\sigma^2 \equiv <\hat{H}^2> - <\hat{H}>^2</math> //you will find it to be zero.  If you don't see it immediately, you should think about this until you get zero.//
  * <del>Since <math>\sigma_E^2 = 0</math>,</del> This implies that there is no fluctuation: you get the **same** measurement every time.
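The vanishing variance can be sketched as follows (this sketch is not from the lecture; it assumes a normalized stationary state <math>\psi_n</math> satisfying <math>\hat{H}\psi_n = E_n\psi_n</math>):

```latex
% Every expectation value in the state \psi_n reduces to a power of E_n,
% because \hat{H} acting on \psi_n just multiplies it by E_n:
\langle \hat{H} \rangle   = \int \psi_n^* \, \hat{H}\psi_n \, dx
                          = E_n \int \psi_n^* \psi_n \, dx = E_n ,
\qquad
\langle \hat{H}^2 \rangle = \int \psi_n^* \, \hat{H}^2\psi_n \, dx
                          = E_n^2 \int \psi_n^* \psi_n \, dx = E_n^2 .
% Hence the variance vanishes:
\sigma^2 = \langle \hat{H}^2 \rangle - \langle \hat{H} \rangle^2
         = E_n^2 - E_n^2 = 0 .
```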
== How to find C_i's <del>Linear Combination of Stationary States</del> ==
  
  * In general//, if we accept the idea that a set of stationary state wave functions forms a "complete" set, any wave function can be expressed as// <math>\psi(x) = \sum_n c_n\psi_n</math>
  * //A big question may be, "How can we find out what //<math>c_i</math>//'s are, knowing what the function //<math>\psi(x)</math>// is?"//
  * //To answer that question, Griffiths talks about "Fourier's trick."//
  * //I would say, this is analogous to finding the x component of some vector //<math>\vec{r}</math>//, which we can obtain by calculating //<math>\vec{r}\cdot\hat{x}</math>//.  This method may sound overly complex because when //<math>\vec{r}=[x, y, z]</math>//, there is no need for a fancy way to figure out what the x, y and z components are: they are obviously x, y and z.  However, we sometimes encounter a vector space in which no basis vectors are defined (wave functions are one such example), so that we cannot simply read off the components.  The method we are discussing works even in such a case, or when the basis used to define the vector space and the basis along which you want to decompose the vector are different, as long as the dot product is defined, which is always the case (since the dot product is how we define the lengths of vectors).//
    * //Suppose you have succeeded in decomposing a vector into its components; then the following should hold:// <math>\vec{r} = x\hat{x}+y\hat{y}+z\hat{z}</math>
    * This equation is a vector equation, which means that there are effectively 3 (or whatever the dimension of the vector space is) equations.  To find the values of //x//, //y// and //z//, it may be easier to convert it into individual equations.  Taking the dot product with this vector equation is one way to accomplish this goal.  If we take the dot product with <math>\hat{x}</math>, for example, we will get <math>\hat{x}\cdot \vec{r} = \hat{x}\cdot(x\hat{x}+y\hat{y}+z\hat{z})</math>.
    * If one realizes that <math>\hat{x}\cdot \hat{x}=1</math>, <math>\hat{x}\cdot \hat{y}=0</math> and <math>\hat{x}\cdot \hat{z}=0</math>, the right-hand side is just //x//, //i.e.// <math>x=\hat{x}\cdot \vec{r}</math>.
  * //We have not learned this yet, but it turns out that in QM, doing this: //<math>\int\psi_n^* ({\rm wave function})dx</math>// to the wave function// is <del>essentially</del> the dot product, where the subscript //n// implies the particular component (//i.e.// x, y, z, 1, 2, 3, etc.)
    * //For example, the dot product between //<math>\psi_n(x)</math>// and //<math>\psi_m(x)</math>// would be //<math><m|n>=\int\psi^*_m(x)\psi_n(x)dx=0</math> //unless// <math>m=n</math>.  //Note that in QM the notation //<math><m|n></math>// is used to represent the dot product between two wave functions.  Vectors are represented only by the indices, m and n, here.  We say that the two wave functions are orthogonal.  The dot product of two orthogonal vectors is zero, too.//
  * //Borrowing the idea used in the vector decomposition above, we take the dot product of //<math>\psi_m</math>// with both sides of //<math>\psi(x) = \sum_n c_n\psi_n</math>// and get //<math>\int\psi_m^*\psi(x)dx = \int\psi_m^*\left(\sum_n c_n\psi_n\right)dx</math>
  * //Since the //<math>c_n</math>//'s are constants, we can pull them out of the integral.  Then, due to orthogonality://
    * For <math>m \neq n</math>, we have <math>\int\psi_m^*\psi_n dx = 0</math>, and
    * For <math>m = n</math>, we have <math>\int\psi_n^*\psi_n dx = 1</math>.
  * The only term in the sum that survives is the one with <math>m = n</math>, leaving us with <math>\sum_n c_n\int\psi_m^*\psi_n dx = c_m</math>.
  * Thus, the left-hand side of the equation becomes: <math>LHS = \int\psi_m^*\psi(x)dx = c_m</math>
  * //In other words, //<math>c_m = \int\psi_m^*\psi(x)dx</math>// gives each coefficient in //<math>\psi(x) = \sum_n c_n\psi_n</math>.
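As a numerical illustration of "Fourier's trick" (not part of the lecture), the sketch below assumes the stationary states of the infinite square well on <math>[0, a]</math> with <math>a=1</math>, and an example wave function <math>\psi(x)\propto x(a-x)</math>; both the well and this particular <math>\psi</math> are assumptions chosen only for the demonstration.  The "dot product" is the integral <math>\int f^* g\, dx</math>, evaluated with the trapezoidal rule.

```python
import numpy as np

a = 1.0                          # well width -- an assumed, illustrative value
x = np.linspace(0.0, a, 20001)   # integration grid on [0, a]
dx = x[1] - x[0]

def inner(f, g):
    """Dot product <f|g> = integral f*(x) g(x) dx (trapezoidal rule)."""
    h = np.conj(f) * g
    return float(np.real(h[0] / 2 + h[1:-1].sum() + h[-1] / 2) * dx)

def psi_n(n):
    """Stationary states of the infinite square well on [0, a]."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

# An example wave function to decompose (already normalized); this particular
# psi(x) ~ x(a - x) is an assumption made only for this illustration.
psi = np.sqrt(30.0 / a**5) * x * (a - x)

# Orthonormality: <m|n> = delta_mn, the analogue of x_hat . y_hat = 0, etc.
overlap = np.array([[inner(psi_n(m), psi_n(n)) for n in range(1, 5)]
                    for m in range(1, 5)])

# Fourier's trick: c_m = integral psi_m* psi dx, i.e. a dot product with psi_m
c = np.array([inner(psi_n(m), psi) for m in range(1, 21)])

# Summing c_n psi_n back up reproduces the original wave function
psi_rebuilt = sum(c[n - 1] * psi_n(n) for n in range(1, 21))
```

With 20 terms the reconstruction already matches <math>\psi(x)</math> to a few parts in <math>10^4</math>, and <math>\sum_n |c_n|^2 \approx 1</math>, consistent with normalization.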
  
**To go back to the lecture note list, click [[lec_notes]]**\\
classes/2009/fall/phys4101.001/lec_notes_0916.1253483821.txt.gz · Last modified: 2009/09/20 16:57 by yk