Inner Product Spaces




  1. Inner Product Spaces • Physical properties of vectors, i.e. length and angles in the case of arrows • Let's use the dot product • Length of V: |V| = √(V·V) • Cosine of the angle between V and W: cos θ = V·W / (|V||W|) • How can we use the dot product to define length & angle when the dot product itself is defined using length and angle? • The main features of the dot product are symmetry, positive semidefiniteness, and linearity राघववर्मा
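Both formulas can be evaluated from components alone, which is the point the slide is building toward; a minimal Python sketch (the vectors v and w are illustrative values, not from the slide):

```python
import math

def dot(a, b):
    """Component-wise dot product of two real vectors."""
    return sum(x * y for x, y in zip(a, b))

v = [3.0, 4.0]
w = [4.0, 3.0]

length_v = math.sqrt(dot(v, v))   # |v| = sqrt(v . v)
cos_theta = dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w)))

print(length_v)    # 5.0
print(cos_theta)   # 0.96
```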

  2. Linearity Explained • Linearity requires that the projections along a given direction add: Pj + Pk = Pjk, i.e. the projection of the sum of two vectors equals the sum of their individual projections • Why do we require linearity? • How does this condition ensure linearity?

  3. Inner Product Space for Quantum Mechanics • The inner product in our special space will be denoted by ⟨V|W⟩ • A vector space with an inner product is called an inner product space • There is no explicit rule for evaluating the scalar product • The first axiom, ⟨V|W⟩ = ⟨W|V⟩*, is sensitive to the order of the two factors. This is what makes ⟨V|V⟩ real. • The second axiom is positive semidefiniteness: ⟨V|V⟩ ≥ 0, vanishing only if the vector does. It had better be, because from our generalization we are going to use it to define length • The third axiom is linearity of the inner product when a linear superposition a|W⟩ + b|Z⟩ ≡ |aW + bZ⟩ appears as the second vector in the scalar product: ⟨V|aW + bZ⟩ = a⟨V|W⟩ + b⟨V|Z⟩
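For the component-wise inner product the deck arrives at later, all three axioms can be checked directly; a small sketch (the vectors and coefficients are arbitrary illustrative values):

```python
def inner(v, w):
    """<V|W> = sum_i v_i* w_i (first factor conjugated)."""
    return sum(x.conjugate() * y for x, y in zip(v, w))

V = [1 + 2j, 3 - 1j]
W = [2 - 1j, 1j]
Z = [1j, 4 + 0j]
a, b = 2 - 3j, 1 + 1j

# Axiom 1: <V|W> = <W|V>*, which forces <V|V> to be real
assert abs(inner(V, W) - inner(W, V).conjugate()) < 1e-12
# Axiom 2: positive semidefiniteness of <V|V>
assert abs(inner(V, V).imag) < 1e-12 and inner(V, V).real >= 0
# Axiom 3: linearity in the second factor
superpos = [a * w + b * z for w, z in zip(W, Z)]
assert abs(inner(V, superpos) - (a * inner(V, W) + b * inner(V, Z))) < 1e-12
```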

  4. Asymmetry of Our Space • What if the first factor in the product is a linear superposition? • ⟨aW + bZ|V⟩ = a*⟨W|V⟩ + b*⟨Z|V⟩ expresses antilinearity of the inner product with respect to the first factor • The inner product of a linear superposition with another vector is the corresponding superposition of inner products if the superposition occurs in the second factor, • while it is the superposition with all its coefficients conjugated if the superposition occurs in the first factor. • This asymmetry is going to stay with us….
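The antilinearity in the first factor can be verified with the same component-wise inner product; a quick sketch (vectors and coefficients chosen arbitrarily):

```python
def inner(v, w):
    """<V|W> = sum_i v_i* w_i."""
    return sum(x.conjugate() * y for x, y in zip(v, w))

W = [1 + 1j, 2 + 0j]
Z = [0.5j, 1 - 1j]
V = [2 - 1j, 3j]
a, b = 1 + 2j, 3 - 1j

# superposition in the FIRST factor: coefficients come out conjugated
first = inner([a * w + b * z for w, z in zip(W, Z)], V)
assert abs(first - (a.conjugate() * inner(W, V) + b.conjugate() * inner(Z, V))) < 1e-12
```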

  5. Some More Properties of the Inner Product • Two vectors are orthogonal if their inner product vanishes: ⟨V|W⟩ = 0 • The norm of a vector in my vector space is |V| = √⟨V|V⟩ • A set of basis vectors, pairwise orthogonal and each of unit norm, will be called an orthonormal basis • If we use an orthonormal basis, only the diagonal terms survive. • We have defined all the properties of the inner product but have not specified how to compute it. • We say that the basis is orthonormal if ⟨i|j⟩ = δᵢⱼ

  6. Computing the Inner Product • Expanding both vectors in a basis gives the double sum ⟨V|W⟩ = Σᵢ Σⱼ vᵢ* wⱼ ⟨i|j⟩ • which, in an orthonormal basis, then collapses to ⟨V|W⟩ = Σᵢ vᵢ* wᵢ • So now you can appreciate why we defined ⟨V|W⟩ = ⟨W|V⟩* • If it were not defined this way then, forget about the norm being positive definite, it would not even have been real • We have already said that a vector is uniquely defined by its components. Let me denote it by an ordered set of numbers and use a column to represent its components, i.e. |V⟩ ↔ (v₁, v₂, …, vₙ)ᵀ and |W⟩ ↔ (w₁, w₂, …, wₙ)ᵀ
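The collapse of the double sum can be seen numerically by plugging in the standard orthonormal basis; a sketch with illustrative component values:

```python
def inner(v, w):
    return sum(x.conjugate() * y for x, y in zip(v, w))

n = 3
basis = [[1.0 if r == i else 0.0 for r in range(n)] for i in range(n)]  # |i>
V = [1 + 1j, 2 + 0j, -1j]
W = [3 + 0j, 1 - 1j, 2j]

# the full double sum over <i|j> ...
double = sum(V[i].conjugate() * W[j] * inner(basis[i], basis[j])
             for i in range(n) for j in range(n))
# ... collapses to a single sum because <i|j> = delta_ij
single = sum(v.conjugate() * w for v, w in zip(V, W))
assert abs(double - single) < 1e-12
```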

  7. Inner Product (contd..) • The inner product in our space has been defined using the dot product as reference. The inner product also defines the norm • We need a number • To get a number from a column matrix we need to multiply it by a row matrix • Hence if |V⟩ is a column matrix (v₁, v₂, …, vₙ)ᵀ • then ⟨W| is represented by a row matrix (w₁*, w₂*, …, wₙ*) • and the inner product ⟨W|V⟩ is represented by the row times the column, which is then Σᵢ wᵢ* vᵢ
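The row-times-column picture in code form; a minimal sketch (component values are illustrative):

```python
V = [1 + 2j, 3j, 2 + 0j]                    # |V> as a column of components
W = [2 + 0j, 1 - 1j, 4j]                    # |W> as a column

bra_W = [w.conjugate() for w in W]          # <W|: the conjugated row
ip = sum(b * v for b, v in zip(bra_W, V))   # row times column = <W|V>
assert abs(ip - sum(w.conjugate() * v for w, v in zip(W, V))) < 1e-12
```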

  8. Dual Space and the Dirac Notation • The column matrix associated with a vector in my space is unique • I can get a scalar from an inner product if I multiply a column vector by a row vector (its transpose conjugate) • Since the column vector is unique, the row vector by definition also becomes unique • This row vector is called a bra in Dirac's notation; the column vector is a ket • Thus there are two vector spaces, the space of kets and a dual space of bras • There is a ket for every bra and vice-versa • The inner product is defined only between bras and kets, hence between elements of two distinct but related spaces.

  9. Dual Space (contd…) • In my n-dimensional vector space there exist basis vectors |i⟩ • Similarly, in the dual of this vector space there exist basis vectors ⟨i| • In an orthonormal basis |i⟩ has all zeros and a 1 at the ith row • A ket can now be expanded in my orthonormal basis as |V⟩ = Σᵢ vᵢ |i⟩ • A bra in the dual space can then also be expressed as ⟨V| = Σᵢ vᵢ* ⟨i|
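The basis kets and the expansion |V⟩ = Σᵢ vᵢ|i⟩ in a small sketch (0-indexed for convenience; the component values are illustrative):

```python
n = 4

def ket(i):
    """|i>: a column with all zeros and a 1 in the i-th row (0-indexed here)."""
    return [1.0 if r == i else 0.0 for r in range(n)]

def bra(i):
    """<i|: the conjugate transpose of |i> (real, so the same numbers as a row)."""
    return [x.conjugate() for x in ket(i)]

V = [2 - 1j, 0.5 + 0j, 1j, 3 + 0j]

# |V> = sum_i v_i |i>: the expansion reproduces the column of components
expanded = [sum(V[i] * ket(i)[r] for i in range(n)) for r in range(n)]
assert all(abs(e - v) < 1e-12 for e, v in zip(expanded, V))
```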

  10. Dual Space (contd..) • Adjoint operation • If ⟨V| is the bra corresponding to the ket |V⟩, what bra corresponds to a|V⟩, where a is some scalar? It is a*⟨V| • This is the relation between bras and kets in a linear equation • To take the adjoint of a linear equation relating kets (bras), replace every ket (bra) by its bra (ket) & complex conjugate all its coefficients
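The adjoint rule for a scaled ket can be checked componentwise; a short sketch (scalar and components are illustrative):

```python
def bra_of(ket):
    """Adjoint: the bra is the conjugate of the ket, read as a row."""
    return [x.conjugate() for x in ket]

a = 2 - 3j
V = [1 + 1j, 2j, 0.5 + 0j]

# the bra corresponding to a|V> is a* <V|
lhs = bra_of([a * v for v in V])
rhs = [a.conjugate() * x for x in bra_of(V)]
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```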

  11. Orthonormal Basis • Why do we need an orthonormal basis? • Our requirement for a basis in an n-dimensional space is just n linearly independent vectors • Let's see in three-dimensional space • If the unit vectors u, v & w are orthonormal, the dot product reduces to three terms: A·B = a₁b₁ + a₂b₂ + a₃b₃ • If the basis is only orthogonal, then the squared lengths survive: A·B = a₁b₁|u|² + a₂b₂|v|² + a₃b₃|w|² • Moral of the story is that we need an orthonormal basis
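The difference can be seen numerically with an orthogonal but unnormalized basis; a sketch with illustrative basis lengths and components:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# an orthogonal but NOT normalized basis of R^3 (lengths 1, 2, 3)
u, v, w = [1.0, 0, 0], [0, 2.0, 0], [0, 0, 3.0]
A = [2.0, 1.0, -1.0]          # components of vecA in the basis (u, v, w)
B = [1.0, 3.0, 2.0]

vecA = [sum(c * e[r] for c, e in zip(A, (u, v, w))) for r in range(3)]
vecB = [sum(c * e[r] for c, e in zip(B, (u, v, w))) for r in range(3)]

naive = dot(A, B)             # three terms, as if the basis were orthonormal
correct = sum(a * b * dot(e, e) for a, b, e in zip(A, B, (u, v, w)))
assert dot(vecA, vecB) == correct   # the |e|^2 factors do not drop out
assert dot(vecA, vecB) != naive
```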

  12. Gram-Schmidt Orthonormalisation • Given a set of n linearly independent vectors we want to construct an orthonormal basis • Given n linearly independent but not orthonormal vectors |I⟩, |II⟩, |III⟩, … we want to construct a set of orthonormal vectors |1⟩, |2⟩, |3⟩, … • Rescale the first vector by its length. This forms the first unit vector |1⟩ • Remove the component of |II⟩ along |1⟩ to form |2⟩ • Normalise |2⟩ • Remove the components of |III⟩ along |1⟩ & |2⟩ to form |3⟩ • Normalise |3⟩
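The steps above translate directly into a loop; a minimal real-vector sketch (the three input vectors are illustrative, not the slide's example):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    """Orthonormalize linearly independent real vectors, in order."""
    ortho = []
    for v in vectors:
        # remove the components along the unit vectors built so far
        for e in ortho:
            proj = dot(e, v)
            v = [x - proj * ex for x, ex in zip(v, e)]
        # rescale what is left by its length
        norm = math.sqrt(dot(v, v))
        ortho.append([x / norm for x in v])
    return ortho

basis = gram_schmidt([[3.0, 0.0, 4.0], [1.0, 1.0, 1.0], [0.0, 1.0, 0.0]])
# the result is orthonormal: <i|j> = delta_ij
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        assert abs(dot(ei, ej) - (1.0 if i == j else 0.0)) < 1e-12
```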

  13. Example of Schmidt Orthonormalisation • Construct an orthonormal set of vectors from the vectors

  14. Subspaces • Given a vector space V, a subset of its elements that form a vector space among themselves is called a subspace. • We will denote a particular subspace i of dimensionality nᵢ by Vᵢⁿⁱ • Given two subspaces Vᵢⁿⁱ & Vⱼᵐʲ, we define their sum Vᵢⁿⁱ ⊕ Vⱼᵐʲ = Vₖᵐᵏ • This set contains all elements of Vⱼᵐʲ • all elements of Vᵢⁿⁱ • and all possible linear combinations of the above

  15. Linear Operators • An operator Ω is an instruction for transforming any given vector |V⟩ into another vector |V′⟩ • We have seen that multiplying a vector by a scalar just stretches the vector • The action of an operator on a vector, in general, transforms the vector • Our Stern-Gerlach experiment transformed the original vector, the spin state of the silver atom, which was oriented in all possible directions, into the direction of the inhomogeneous magnetic field. • We will restrict ourselves to operators which do not take the ket they act upon outside the vector space under consideration • If |V⟩ is a vector in my vector space, then so is Ω|V⟩ = |V′⟩

  16. |3 |2 + |3 R(|2 + |3) |2 |1 Linear Operators • Operators can act on bras as well • V| = V| • Linear Operators •  |Vi = |Vi • { |Vi + |Vj = |Vi + |Vj • Vi|  Vi| • (Vi| + Vj|)  Vi| + Vj| • Identity Operator I leaves the vector alone • Operator on V3® • R(/2 i)  Rotate all vectors by an angle /2 about the x axis राघववर्मा

  17. Operator R
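The matrix of R was shown as an image on the slide; for a rotation by π/2 about the x axis, the standard matrix in the |1⟩, |2⟩, |3⟩ basis is the one below, and a quick check confirms R|2⟩ = |3⟩ and R(|2⟩ + |3⟩) = |3⟩ − |2⟩ as in the slide 16 figure (a Python sketch, not the slide's own material):

```python
import math

def rot_x(theta):
    """Matrix of a rotation by theta about the x axis, in the |1>,|2>,|3> basis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0],
            [0, c, -s],
            [0, s, c]]

def apply(M, v):
    """Column vector transformed by the matrix M."""
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

R = rot_x(math.pi / 2)
# R|2> = |3>, and by linearity R(|2> + |3>) = |3> - |2>
assert all(abs(x - y) < 1e-12 for x, y in zip(apply(R, [0, 1, 0]), [0, 0, 1]))
assert all(abs(x - y) < 1e-12 for x, y in zip(apply(R, [0, 1, 1]), [0, -1, 1]))
```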
