fortran-lapack
la_lapack Module Reference

Data Types

interface  bbcsd
 BBCSD: computes the CS decomposition of a unitary matrix in bidiagonal-block form,

         [ B11 | B12 0  0 ]
     X = [  0  |  0 -I  0 ]
         [----------------]
         [ B21 | B22 0  0 ]
         [  0  |  0  0  I ]

                               [  C | -S  0  0 ]
                   [ U1 |    ] [  0 |  0 -I  0 ] [ V1 |    ]**H
                 = [---------] [---------------] [---------]   .
                   [    | U2 ] [  S |  C  0  0 ] [    | V2 ]
                               [  0 |  0  0  I ]

 X is M-by-M, its top-left block is P-by-Q, and Q must be no larger than P, M-P, or M-Q. (If Q is not the smallest index, then X must be transposed and/or permuted. This can be done in constant time using the TRANS and SIGNS options. See CUNCSD for details.) The bidiagonal matrices B11, B12, B21, and B22 are represented implicitly by angles THETA(1:Q) and PHI(1:Q-1). The unitary matrices U1, U2, V1T, and V2T are input/output. The input matrices are pre- or post-multiplied by the appropriate singular vector matrices. More...
 
interface  bdsdc
 BDSDC: computes the singular value decomposition (SVD) of a real N-by-N (upper or lower) bidiagonal matrix B: B = U * S * VT, using a divide and conquer method, where S is a diagonal matrix with non-negative diagonal elements (the singular values of B), and U and VT are orthogonal matrices of left and right singular vectors, respectively. BDSDC can be used to compute all singular values, and optionally, singular vectors or singular vectors in compact form. This code makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. See DLASD3 for details. The code currently calls DLASDQ if singular values only are desired. However, it can be slightly modified to compute singular values using the divide and conquer method. More...
 
interface  bdsqr
 BDSQR: computes the singular values and, optionally, the right and/or left singular vectors from the singular value decomposition (SVD) of a real N-by-N (upper or lower) bidiagonal matrix B using the implicit zero-shift QR algorithm. The SVD of B has the form B = Q * S * P**H where S is the diagonal matrix of singular values, Q is an orthogonal matrix of left singular vectors, and P is an orthogonal matrix of right singular vectors. If left singular vectors are requested, this subroutine actually returns U*Q instead of Q, and, if right singular vectors are requested, this subroutine returns P**H*VT instead of P**H, for given complex input matrices U and VT. When U and VT are the unitary matrices that reduce a general matrix A to bidiagonal form: A = U*B*VT, as computed by CGEBRD, then A = (U*Q) * S * (P**H*VT) is the SVD of A. Optionally, the subroutine may also compute Q**H*C for a given complex input matrix C. See "Computing Small Singular Values of Bidiagonal Matrices With Guaranteed High Relative Accuracy," by J. Demmel and W. Kahan, LAPACK Working Note #3 (or SIAM J. Sci. Statist. Comput. vol. 11, no. 5, pp. 873-912, Sept 1990) and "Accurate singular values and differential qd algorithms," by B. Parlett and V. Fernando, Technical Report CPAM-554, Mathematics Department, University of California at Berkeley, July 1992 for a detailed description of the algorithm. More...
 
interface  disna
 DISNA: computes the reciprocal condition numbers for the eigenvectors of a real symmetric or complex Hermitian matrix or for the left or right singular vectors of a general m-by-n matrix. The reciprocal condition number is the 'gap' between the corresponding eigenvalue or singular value and the nearest other one. The bound on the error, measured by angle in radians, in the I-th computed vector is given by DLAMCH( 'E' ) * ( ANORM / SEP( I ) ) where ANORM = 2-norm(A) = max( abs( D(j) ) ). SEP(I) is not allowed to be smaller than DLAMCH( 'E' )*ANORM in order to limit the size of the error bound. DISNA may also be used to compute error bounds for eigenvectors of the generalized symmetric definite eigenproblem. More...
 
interface  gbbrd
 GBBRD: reduces a complex general m-by-n band matrix A to real upper bidiagonal form B by a unitary transformation: Q**H * A * P = B. The routine computes B, and optionally forms Q or P**H, or computes Q**H*C for a given matrix C. More...
 
interface  gbcon
 GBCON: estimates the reciprocal of the condition number of a complex general band matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by CGBTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / ( norm(A) * norm(inv(A)) ). More...
 
interface  gbequ
 GBEQU: computes row and column scalings intended to equilibrate an M-by-N band matrix A and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest element in each row and column of the matrix B with elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of A but works well in practice. More...
 
interface  gbequb
 GBEQUB: computes row and column scalings intended to equilibrate an M-by-N band matrix A and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest element in each row and column of the matrix B with elements B(i,j)=R(i)*A(i,j)*C(j) have an absolute value of at most the radix. R(i) and C(j) are restricted to be a power of the radix between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of A but works well in practice. This routine differs from CGBEQU by restricting the scaling factors to a power of the radix. Barring over- and underflow, scaling by these factors introduces no additional rounding errors. However, the scaled entries' magnitudes are no longer approximately 1 but lie between sqrt(radix) and 1/sqrt(radix). More...
 
interface  gbrfs
 GBRFS: improves the computed solution to a system of linear equations when the coefficient matrix is banded, and provides error bounds and backward error estimates for the solution. More...
 
interface  gbsv
 GBSV: computes the solution to a complex system of linear equations A * X = B, where A is a band matrix of order N with KL subdiagonals and KU superdiagonals, and X and B are N-by-NRHS matrices. The LU decomposition with partial pivoting and row interchanges is used to factor A as A = L * U, where L is a product of permutation and unit lower triangular matrices with KL subdiagonals, and U is upper triangular with KL+KU superdiagonals. The factored form of A is then used to solve the system of equations A * X = B. More...
 
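A minimal usage sketch (assuming the generic gbsv interface mirrors the standard LAPACK ?gbsv argument list and that a double-precision real version is available; the tridiagonal test matrix is illustrative). Note the LAPACK band storage: A(i,j) is held in AB(KL+KU+1+i-j, j), with KL extra rows reserved for fill-in:

    program demo_gbsv
       use la_lapack, only: gbsv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 4, kl = 1, ku = 1, nrhs = 1, ldab = 2*kl + ku + 1
       real(dp) :: ab(ldab,n), b(n,nrhs)
       integer  :: ipiv(n), info, j
       ab = 0.0_dp
       do j = 1, n                               ! assemble tridiag(-1, 2, -1) in band storage
          if (j > 1) ab(kl+ku,   j) = -1.0_dp    ! superdiagonal A(j-1,j)
          ab(kl+ku+1, j) = 2.0_dp                ! diagonal      A(j,j)
          if (j < n) ab(kl+ku+2, j) = -1.0_dp    ! subdiagonal   A(j+1,j)
       end do
       b = 1.0_dp
       call gbsv(n, kl, ku, nrhs, ab, ldab, ipiv, b, n, info)
       if (info /= 0) print *, 'gbsv failed: info =', info
       print *, 'x =', b(:,1)
    end program demo_gbsv
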
interface  gbtrf
 GBTRF: computes an LU factorization of a complex m-by-n band matrix A using partial pivoting with row interchanges. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  gbtrs
 GBTRS: solves a system of linear equations A * X = B, A**T * X = B, or A**H * X = B with a general band matrix A using the LU factorization computed by CGBTRF. More...
 
interface  gebak
 GEBAK: forms the right or left eigenvectors of a complex general matrix by backward transformation on the computed eigenvectors of the balanced matrix output by CGEBAL. More...
 
interface  gebal
 GEBAL: balances a general complex matrix A. This involves, first, permuting A by a similarity transformation to isolate eigenvalues in the first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and second, applying a diagonal similarity transformation to rows and columns ILO to IHI to make the rows and columns as close in norm as possible. Both steps are optional. Balancing may reduce the 1-norm of the matrix, and improve the accuracy of the computed eigenvalues and/or eigenvectors. More...
 
interface  gebrd
 GEBRD: reduces a general complex M-by-N matrix A to upper or lower bidiagonal form B by a unitary transformation: Q**H * A * P = B. If m >= n, B is upper bidiagonal; if m < n, B is lower bidiagonal. More...
 
interface  gecon
 GECON: estimates the reciprocal of the condition number of a general complex matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by CGETRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / ( norm(A) * norm(inv(A)) ). More...
 
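An illustrative sketch of the factor-then-estimate workflow (assuming the generic getrf/gecon interfaces mirror the LAPACK dgetrf/dgecon argument lists; the matrix values are invented). The 1-norm of A must be taken before A is overwritten by its LU factors:

    program demo_gecon
       use la_lapack, only: getrf, gecon
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3
       real(dp) :: a(n,n), anorm, rcond, work(4*n)
       integer  :: ipiv(n), iwork(n), info
       a = reshape([4.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 3.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 2.0_dp], [n,n])
       anorm = maxval(sum(abs(a), dim=1))   ! 1-norm of A, before it is overwritten
       call getrf(n, n, a, n, ipiv, info)   ! LU factors of A
       call gecon('1', n, a, n, anorm, rcond, work, iwork, info)
       print *, 'estimated RCOND =', rcond
    end program demo_gecon
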
interface  geequ
 GEEQU: computes row and column scalings intended to equilibrate an M-by-N matrix A and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest element in each row and column of the matrix B with elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of A but works well in practice. More...
 
interface  geequb
 GEEQUB: computes row and column scalings intended to equilibrate an M-by-N matrix A and reduce its condition number. R returns the row scale factors and C the column scale factors, chosen to try to make the largest element in each row and column of the matrix B with elements B(i,j)=R(i)*A(i,j)*C(j) have an absolute value of at most the radix. R(i) and C(j) are restricted to be a power of the radix between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of A but works well in practice. This routine differs from CGEEQU by restricting the scaling factors to a power of the radix. Barring over- and underflow, scaling by these factors introduces no additional rounding errors. However, the scaled entries' magnitudes are no longer approximately 1 but lie between sqrt(radix) and 1/sqrt(radix). More...
 
interface  gees
 GEES: computes for an N-by-N complex nonsymmetric matrix A, the eigenvalues, the Schur form T, and, optionally, the matrix of Schur vectors Z. This gives the Schur factorization A = Z*T*(Z**H). Optionally, it also orders the eigenvalues on the diagonal of the Schur form so that selected eigenvalues are at the top left. The leading columns of Z then form an orthonormal basis for the invariant subspace corresponding to the selected eigenvalues. A complex matrix is in Schur form if it is upper triangular. More...
 
interface  geev
 GEEV: computes for an N-by-N complex nonsymmetric matrix A, the eigenvalues and, optionally, the left and/or right eigenvectors. The right eigenvector v(j) of A satisfies A * v(j) = lambda(j) * v(j) where lambda(j) is its eigenvalue. The left eigenvector u(j) of A satisfies u(j)**H * A = lambda(j) * u(j)**H where u(j)**H denotes the conjugate transpose of u(j). The computed eigenvectors are normalized to have Euclidean norm equal to 1 and largest component real. More...
 
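A sketch of computing eigenvalues and right eigenvectors (assuming the generic geev interface mirrors the complex LAPACK ?geev argument list, including the complex WORK and real RWORK arrays; the 2-by-2 rotation-like matrix is illustrative, with eigenvalues +/- i):

    program demo_geev
       use la_lapack, only: geev
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 2, lwork = 4*n
       complex(dp) :: a(n,n), w(n), vl(n,n), vr(n,n), work(lwork)
       real(dp)    :: rwork(2*n)
       integer     :: info
       ! column-major: A = [ 0  -1 ]
       !                   [ 1   0 ]
       a = reshape([(0.0_dp,0.0_dp), (1.0_dp,0.0_dp), &
                    (-1.0_dp,0.0_dp), (0.0_dp,0.0_dp)], [n,n])
       call geev('N', 'V', n, a, n, w, vl, n, vr, n, work, lwork, rwork, info)
       if (info == 0) print *, 'eigenvalues:', w   ! expect +i and -i
    end program demo_geev
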
interface  gehrd
 GEHRD: reduces a complex general matrix A to upper Hessenberg form H by a unitary similarity transformation: Q**H * A * Q = H. More...
 
interface  gejsv
 GEJSV: computes the singular value decomposition (SVD) of a complex M-by-N matrix [A], where M >= N. The SVD of [A] is written as [A] = [U] * [SIGMA] * [V]^*, where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N diagonal elements, [U] is an M-by-N (or M-by-M) unitary matrix, and [V] is an N-by-N unitary matrix. The diagonal elements of [SIGMA] are the singular values of [A]. The columns of [U] and [V] are the left and the right singular vectors of [A], respectively. The matrices [U] and [V] are computed and stored in the arrays U and V, respectively. The diagonal of [SIGMA] is computed and stored in the array SVA. More...
 
interface  gelq
 GELQ: computes an LQ factorization of a complex M-by-N matrix A: A = ( L 0 ) * Q, where Q is an N-by-N unitary matrix; L is a lower-triangular M-by-M matrix; and 0 is an M-by-(N-M) zero matrix, if M < N. More...
 
interface  gelqf
 GELQF: computes an LQ factorization of a complex M-by-N matrix A: A = ( L 0 ) * Q, where Q is an N-by-N unitary matrix; L is a lower-triangular M-by-M matrix; and 0 is an M-by-(N-M) zero matrix, if M < N. More...
 
interface  gelqt
 GELQT: computes a blocked LQ factorization of a complex M-by-N matrix A using the compact WY representation of Q. More...
 
interface  gelqt3
 GELQT3: recursively computes an LQ factorization of a complex M-by-N matrix A, using the compact WY representation of Q. Based on the algorithm of Elmroth and Gustavson, IBM J. Res. Develop., Vol. 44, No. 4, July 2000. More...
 
interface  gels
 GELS: solves overdetermined or underdetermined complex linear systems involving an M-by-N matrix A, or its conjugate-transpose, using a QR or LQ factorization of A. It is assumed that A has full rank. The following options are provided: More...
 
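A least-squares sketch: fitting a straight line y = c0 + c1*x in the 2-norm (assuming the generic gels interface mirrors the LAPACK dgels argument list; the data points are invented). On exit, the first N rows of B hold the solution:

    program demo_gels
       use la_lapack, only: gels
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 4, n = 2, nrhs = 1, lwork = 64
       real(dp) :: a(m,n), b(m,nrhs), work(lwork)
       integer  :: info
       a(:,1) = 1.0_dp                            ! intercept column
       a(:,2) = [1.0_dp, 2.0_dp, 3.0_dp, 4.0_dp]  ! abscissae
       b(:,1) = [2.1_dp, 3.9_dp, 6.2_dp, 7.8_dp]  ! observations
       call gels('N', m, n, nrhs, a, m, b, m, work, lwork, info)
       print *, 'c0, c1 =', b(1:n,1)
    end program demo_gels
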
interface  gelsd
 GELSD: computes the minimum-norm solution to a real linear least squares problem: minimize 2-norm(| b - A*x |) using the singular value decomposition (SVD) of A. A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The problem is solved in three steps: (1) Reduce the coefficient matrix A to bidiagonal form with Householder transformations, reducing the original problem into a "bidiagonal least squares problem" (BLS) (2) Solve the BLS using a divide and conquer approach. (3) Apply back all the Householder transformations to solve the original least squares problem. The effective rank of A is determined by treating as zero those singular values which are less than RCOND times the largest singular value. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  gelss
 GELSS: computes the minimum norm solution to a complex linear least squares problem: minimize 2-norm(| b - A*x |) using the singular value decomposition (SVD) of A. A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The effective rank of A is determined by treating as zero those singular values which are less than RCOND times the largest singular value. More...
 
interface  gelsy
 GELSY: computes the minimum-norm solution to a complex linear least squares problem: minimize || A * X - B || using a complete orthogonal factorization of A. A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The routine first computes a QR factorization with column pivoting:

     A * P = Q * [ R11 R12 ]
                 [  0  R22 ]

 with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/RCOND. The order of R11, RANK, is the effective rank of A. Then, R22 is considered to be negligible, and R12 is annihilated by unitary transformations from the right, arriving at the complete orthogonal factorization:

     A * P = Q * [ T11 0 ] * Z
                 [  0  0 ]

 The minimum-norm solution is then

     X = P * Z**H [ inv(T11)*Q1**H*B ]
                  [        0         ]

 where Q1 consists of the first RANK columns of Q. This routine is essentially identical to the original xGELSX except for three differences: the permutation of matrix B (the right hand side) is faster and simpler; the call to the subroutine xGEQPF has been replaced by a call to the subroutine xGEQP3, a Blas-3 version of the QR factorization with column pivoting; and matrix B (the right hand side) is updated with Blas-3. More...
 
interface  gemlq
 GEMLQ: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':   Q * C          C * Q
     TRANS = 'C':   Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of blocked elementary reflectors computed by the short-wide LQ factorization (CGELQ). More...
 
interface  gemlqt
 GEMLQT: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':   Q C            C Q
     TRANS = 'C':   Q**H C         C Q**H

 where Q is a complex unitary matrix defined as the product of K elementary reflectors: Q = H(1) H(2) . . . H(K) = I - V T V**H, generated using the compact WY representation as returned by CGELQT. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  gemqr
 GEMQR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':   Q * C          C * Q
     TRANS = 'C':   Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of blocked elementary reflectors computed by the tall-skinny QR factorization (CGEQR). More...
 
interface  gemqrt
 GEMQRT: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
     TRANS = 'N':   Q C            C Q
     TRANS = 'C':   Q**H C         C Q**H

 where Q is a complex unitary matrix defined as the product of K elementary reflectors: Q = H(1) H(2) . . . H(K) = I - V T V**H, generated using the compact WY representation as returned by CGEQRT. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  geqlf
 GEQLF: computes a QL factorization of a complex M-by-N matrix A: A = Q * L. More...
 
interface  geqr
 GEQR: computes a QR factorization of a complex M-by-N matrix A:

     A = Q * ( R ),
             ( 0 )

 where Q is an M-by-M unitary matrix; R is an upper-triangular N-by-N matrix; and 0 is an (M-N)-by-N zero matrix, if M > N. More...
 
interface  geqr2p
 GEQR2P: computes a QR factorization of a complex m-by-n matrix A:

     A = Q * ( R ),
             ( 0 )

 where Q is an m-by-m unitary matrix; R is an upper-triangular n-by-n matrix with nonnegative diagonal entries; and 0 is an (m-n)-by-n zero matrix, if m > n. More...
 
interface  geqrf
 GEQRF: computes a QR factorization of a complex M-by-N matrix A:

     A = Q * ( R ),
             ( 0 )

 where Q is an M-by-M unitary matrix; R is an upper-triangular N-by-N matrix; and 0 is an (M-N)-by-N zero matrix, if M > N. More...
 
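A sketch of the factorization call (assuming the generic geqrf interface mirrors the LAPACK ?geqrf argument list; the random matrix is just a stand-in). On exit, R sits in the upper triangle of A and the Householder reflectors, scaled by TAU, sit below it:

    program demo_geqrf
       use la_lapack, only: geqrf
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 4, n = 3, lwork = 64
       real(dp) :: a(m,n), tau(min(m,n)), work(lwork)
       integer  :: info
       call random_number(a)
       call geqrf(m, n, a, m, tau, work, lwork, info)
       print *, 'info =', info   ! R is now in the upper triangle of a
    end program demo_geqrf
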
interface  geqrfp
 GEQRFP: computes a QR factorization of a complex M-by-N matrix A:

     A = Q * ( R ),
             ( 0 )

 where Q is an M-by-M unitary matrix; R is an upper-triangular N-by-N matrix with nonnegative diagonal entries; and 0 is an (M-N)-by-N zero matrix, if M > N. More...
 
interface  geqrt
 GEQRT: computes a blocked QR factorization of a complex M-by-N matrix A using the compact WY representation of Q. More...
 
interface  geqrt2
 GEQRT2: computes a QR factorization of a complex M-by-N matrix A, using the compact WY representation of Q. More...
 
interface  geqrt3
 GEQRT3: recursively computes a QR factorization of a complex M-by-N matrix A, using the compact WY representation of Q. Based on the algorithm of Elmroth and Gustavson, IBM J. Res. Develop., Vol. 44, No. 4, July 2000. More...
 
interface  gerfs
 GERFS: improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solution. More...
 
interface  gerqf
 GERQF: computes an RQ factorization of a complex M-by-N matrix A: A = R * Q. More...
 
interface  gesdd
 GESDD: computes the singular value decomposition (SVD) of a complex M-by-N matrix A, optionally computing the left and/or right singular vectors, by using a divide-and-conquer method. The SVD is written A = U * SIGMA * conjugate-transpose(V) where SIGMA is an M-by-N matrix which is zero except for its min(m,n) diagonal elements, U is an M-by-M unitary matrix, and V is an N-by-N unitary matrix. The diagonal elements of SIGMA are the singular values of A; they are real and non-negative, and are returned in descending order. The first min(m,n) columns of U and V are the left and right singular vectors of A. Note that the routine returns VT = V**H, not V. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  gesv
 GESV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS matrices. The LU decomposition with partial pivoting and row interchanges is used to factor A as A = P * L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the system of equations A * X = B. More...
 
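A minimal driver sketch (assuming the generic gesv interface mirrors the LAPACK ?gesv argument list; the symmetric tridiagonal test system is illustrative). A is overwritten by its LU factors and B by the solution:

    program demo_gesv
       use la_lapack, only: gesv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3, nrhs = 1
       real(dp) :: a(n,n), b(n,nrhs)
       integer  :: ipiv(n), info
       a = reshape([2.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 2.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 2.0_dp], [n,n])
       b(:,1) = [1.0_dp, 0.0_dp, 1.0_dp]
       call gesv(n, nrhs, a, n, ipiv, b, n, info)
       if (info == 0) print *, 'x =', b(:,1)
    end program demo_gesv
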
interface  gesvd
 GESVD: computes the singular value decomposition (SVD) of a complex M-by-N matrix A, optionally computing the left and/or right singular vectors. The SVD is written A = U * SIGMA * conjugate-transpose(V) where SIGMA is an M-by-N matrix which is zero except for its min(m,n) diagonal elements, U is an M-by-M unitary matrix, and V is an N-by-N unitary matrix. The diagonal elements of SIGMA are the singular values of A; they are real and non-negative, and are returned in descending order. The first min(m,n) columns of U and V are the left and right singular vectors of A. Note that the routine returns V**H, not V. More...
 
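A sketch of a full SVD (assuming the generic gesvd interface mirrors the real LAPACK dgesvd argument list; for complex input an additional RWORK argument would be expected, as in zgesvd; the random matrix is a stand-in):

    program demo_gesvd
       use la_lapack, only: gesvd
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 4, n = 3, lwork = 128
       real(dp) :: a(m,n), s(n), u(m,m), vt(n,n), work(lwork)
       integer  :: info
       call random_number(a)
       call gesvd('A', 'A', m, n, a, m, s, u, m, vt, n, work, lwork, info)
       print *, 'singular values (descending):', s
    end program demo_gesvd
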
interface  gesvdq
 GESVDQ: computes the singular value decomposition (SVD) of a complex M-by-N matrix A, where M >= N. The SVD of A is written as A = U * SIGMA * V^*, where SIGMA is an N-by-N diagonal matrix, U is an M-by-N orthonormal matrix, and V is an N-by-N unitary matrix. The diagonal elements of SIGMA are the singular values of A. The columns of U and V are the left and the right singular vectors of A, respectively. More...
 
interface  gesvj
 GESVJ: computes the singular value decomposition (SVD) of a complex M-by-N matrix A, where M >= N. The SVD of A is written as A = U * SIGMA * V^*, where SIGMA is an N-by-N diagonal matrix, U is an M-by-N orthonormal matrix, and V is an N-by-N unitary matrix. The diagonal elements of SIGMA are the singular values of A. The columns of U and V are the left and the right singular vectors of A, respectively. More...
 
interface  getrf
 GETRF: computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges. The factorization has the form A = P * L * U where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). This is the right-looking Level 3 BLAS version of the algorithm. More...
 
interface  getrf2
 GETRF2: computes an LU factorization of a general M-by-N matrix A using partial pivoting with row interchanges. The factorization has the form A = P * L * U where P is a permutation matrix, L is lower triangular with unit diagonal elements (lower trapezoidal if m > n), and U is upper triangular (upper trapezoidal if m < n). This is the recursive version of the algorithm. It divides the matrix into four submatrices:

         [ A11 | A12 ]  where A11 is n1-by-n1 and A22 is n2-by-n2,
     A = [ ----|---- ]  with n1 = min(m,n)/2 and n2 = n-n1.
         [ A21 | A22 ]

                                           [ A11 ]
     The subroutine calls itself to factor [ --- ],
                                           [ A21 ]

                     [ A12 ]
     do the swaps on [ --- ], solve A12, update A22,
                     [ A22 ]

 then calls itself to factor A22 and do the swaps on A21. More...
 
interface  getri
 GETRI: computes the inverse of a matrix using the LU factorization computed by CGETRF. This method inverts U and then computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A). More...
 
interface  getrs
 GETRS: solves a system of linear equations A * X = B, A**T * X = B, or A**H * X = B with a general N-by-N matrix A using the LU factorization computed by CGETRF. More...
 
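A factor-once, solve-many sketch combining getrf and getrs (assuming the generic interfaces mirror the LAPACK ?getrf/?getrs argument lists; the matrix and right-hand sides are illustrative):

    program demo_getrf_getrs
       use la_lapack, only: getrf, getrs
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3, nrhs = 2
       real(dp) :: a(n,n), b(n,nrhs)
       integer  :: ipiv(n), info
       a = reshape([3.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 3.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 3.0_dp], [n,n])
       b = 1.0_dp
       call getrf(n, n, a, n, ipiv, info)                ! factor A = P*L*U once
       call getrs('N', n, nrhs, a, n, ipiv, b, n, info)  ! reuse the factors for all columns of B
       print *, 'solutions:', b
    end program demo_getrf_getrs
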
interface  getsls
 GETSLS: solves overdetermined or underdetermined complex linear systems involving an M-by-N matrix A, using a tall skinny QR or short wide LQ factorization of A. It is assumed that A has full rank. The following options are provided: More...
 
interface  getsqrhrt
 GETSQRHRT: computes an NB2-sized column blocked QR-factorization of a complex M-by-N matrix A with M >= N, A = Q * R. The routine internally uses an NB1-sized column blocked and MB1-sized row blocked TSQR-factorization and performs the reconstruction of the Householder vectors from the TSQR output. The routine also converts the R_tsqr factor from the TSQR-factorization output into the R factor that corresponds to the Householder QR-factorization, A = Q_tsqr * R_tsqr = Q * R. The output Q and R factors are stored in the same format as in CGEQRT (Q is in blocked compact WY-representation). See the documentation of CGEQRT for more details on the format. More...
 
interface  ggbak
 GGBAK: forms the right or left eigenvectors of a complex generalized eigenvalue problem A*x = lambda*B*x, by backward transformation on the computed eigenvectors of the balanced pair of matrices output by CGGBAL. More...
 
interface  ggbal
 GGBAL: balances a pair of general complex matrices (A,B). This involves, first, permuting A and B by similarity transformations to isolate eigenvalues in the first 1 to ILO-1 and last IHI+1 to N elements on the diagonal; and second, applying a diagonal similarity transformation to rows and columns ILO to IHI to make the rows and columns as close in norm as possible. Both steps are optional. Balancing may reduce the 1-norm of the matrices, and improve the accuracy of the computed eigenvalues and/or eigenvectors in the generalized eigenvalue problem A*x = lambda*B*x. More...
 
interface  gges
 GGES: computes for a pair of N-by-N complex nonsymmetric matrices (A,B), the generalized eigenvalues, the generalized complex Schur form (S, T), and optionally left and/or right Schur vectors (VSL and VSR). This gives the generalized Schur factorization (A,B) = ( (VSL)*S*(VSR)**H, (VSL)*T*(VSR)**H ) where (VSR)**H is the conjugate-transpose of VSR. Optionally, it also orders the eigenvalues so that a selected cluster of eigenvalues appears in the leading diagonal blocks of the upper triangular matrix S and the upper triangular matrix T. The leading columns of VSL and VSR then form an orthonormal basis for the corresponding left and right eigenspaces (deflating subspaces). (If only the generalized eigenvalues are needed, use the driver CGGEV instead, which is faster.) A generalized eigenvalue for a pair of matrices (A,B) is a scalar w or a ratio alpha/beta = w, such that A - w*B is singular. It is usually represented as the pair (alpha,beta), as there is a reasonable interpretation for beta=0, and even for both being zero. A pair of matrices (S,T) is in generalized complex Schur form if S and T are upper triangular and, in addition, the diagonal elements of T are non-negative real numbers. More...
 
interface  ggev
 GGEV: computes for a pair of N-by-N complex nonsymmetric matrices (A,B), the generalized eigenvalues, and optionally, the left and/or right generalized eigenvectors. A generalized eigenvalue for a pair of matrices (A,B) is a scalar lambda or a ratio alpha/beta = lambda, such that A - lambda*B is singular. It is usually represented as the pair (alpha,beta), as there is a reasonable interpretation for beta=0, and even for both being zero. The right generalized eigenvector v(j) corresponding to the generalized eigenvalue lambda(j) of (A,B) satisfies A * v(j) = lambda(j) * B * v(j). The left generalized eigenvector u(j) corresponding to the generalized eigenvalue lambda(j) of (A,B) satisfies u(j)**H * A = lambda(j) * u(j)**H * B where u(j)**H is the conjugate-transpose of u(j). More...
 
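A sketch of the generalized eigenproblem A*x = lambda*B*x (assuming the generic ggev interface mirrors the complex LAPACK ?ggev argument list, returning eigenvalues as (alpha, beta) pairs; the diagonal test pencil is illustrative):

    program demo_ggev
       use la_lapack, only: ggev
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 2, lwork = 4*n
       complex(dp) :: a(n,n), b(n,n), alpha(n), beta(n)
       complex(dp) :: vl(n,n), vr(n,n), work(lwork)
       real(dp)    :: rwork(8*n)
       integer     :: info, j
       a = (0.0_dp, 0.0_dp);  b = (0.0_dp, 0.0_dp)
       a(1,1) = (2.0_dp, 0.0_dp);  a(2,2) = (3.0_dp, 0.0_dp)
       b(1,1) = (1.0_dp, 0.0_dp);  b(2,2) = (1.0_dp, 0.0_dp)
       call ggev('N', 'V', n, a, n, b, n, alpha, beta, vl, n, vr, n, &
                 work, lwork, rwork, info)
       do j = 1, n   ! guard against beta = 0 (infinite eigenvalues)
          if (beta(j) /= (0.0_dp, 0.0_dp)) print *, 'lambda =', alpha(j)/beta(j)
       end do
    end program demo_ggev
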
interface  ggglm
 GGGLM: solves a general Gauss-Markov linear model (GLM) problem:

     minimize || y ||_2   subject to   d = A*x + B*y
         x

 where A is an N-by-M matrix, B is an N-by-P matrix, and d is a given N-vector. It is assumed that M <= N <= M+P, and rank(A) = M and rank( A B ) = N. Under these assumptions, the constrained equation is always consistent, and there is a unique solution x and a minimal 2-norm solution y, which is obtained using a generalized QR factorization of the matrices (A, B) given by

     A = Q*( R ),   B = Q*T*Z.
           ( 0 )

 In particular, if matrix B is square nonsingular, then the problem GLM is equivalent to the following weighted linear least squares problem

     minimize || inv(B)*(d-A*x) ||_2
         x

 where inv(B) denotes the inverse of B. More...
 
interface  gghrd
 GGHRD: reduces a pair of complex matrices (A,B) to generalized upper Hessenberg form using unitary transformations, where A is a general matrix and B is upper triangular. The form of the generalized eigenvalue problem is A*x = lambda*B*x, and B is typically made upper triangular by computing its QR factorization and moving the unitary matrix Q to the left side of the equation. This subroutine simultaneously reduces A to a Hessenberg matrix H: Q**H*A*Z = H and transforms B to another upper triangular matrix T: Q**H*B*Z = T in order to reduce the problem to its standard form H*y = lambda*T*y where y = Z**H*x. The unitary matrices Q and Z are determined as products of Givens rotations. They may either be formed explicitly, or they may be postmultiplied into input matrices Q1 and Z1, so that Q1 * A * Z1**H = (Q1*Q) * H * (Z1*Z)**H Q1 * B * Z1**H = (Q1*Q) * T * (Z1*Z)**H If Q1 is the unitary matrix from the QR factorization of B in the original equation A*x = lambda*B*x, then GGHRD reduces the original problem to generalized Hessenberg form. More...
 
interface  gglse
 GGLSE: solves the linear equality-constrained least squares (LSE) problem:

     minimize || c - A*x ||_2   subject to   B*x = d

 where A is an M-by-N matrix, B is a P-by-N matrix, c is a given M-vector, and d is a given P-vector. It is assumed that P <= N <= M+P, and

     rank(B) = P and  rank( (A) ) = N.
                           ( (B) )

 These conditions ensure that the LSE problem has a unique solution, which is obtained using a generalized RQ factorization of the matrices (B, A) given by B = (0 R)*Q, A = Z*T*Q. More...
 
interface  ggqrf
 GGQRF: computes a generalized QR factorization of an N-by-M matrix A and an N-by-P matrix B: A = Q*R, B = Q*T*Z, where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms:

     if N >= M,  R = ( R11 ) M  ,   or if N < M,  R = ( R11  R12 ) N,
                     (  0  ) N-M                         N   M-N
                        M

 where R11 is upper triangular, and

     if N <= P,  T = ( 0  T12 ) N,   or if N > P,  T = ( T11 ) N-P,
                      P-N  N                           ( T21 ) P
                                                          P

 where T12 or T21 is upper triangular. In particular, if B is square and nonsingular, the GQR factorization of A and B implicitly gives the QR factorization of inv(B)*A: inv(B)*A = Z**H * (inv(T)*R), where inv(B) denotes the inverse of the matrix B, and Z**H denotes the conjugate transpose of the matrix Z. More...
 
interface  ggrqf
 GGRQF: computes a generalized RQ factorization of an M-by-N matrix A and a P-by-N matrix B: A = R*Q, B = Z*T*Q, where Q is an N-by-N unitary matrix, Z is a P-by-P unitary matrix, and R and T assume one of the forms:

     if M <= N,  R = ( 0  R12 ) M,   or if M > N,  R = ( R11 ) M-N,
                      N-M  M                           ( R21 ) N
                                                          N

 where R12 or R21 is upper triangular, and

     if P >= N,  T = ( T11 ) N  ,   or if P < N,  T = ( T11  T12 ) P,
                     (  0  ) P-N                         N   N-P
                        N

 where T11 is upper triangular. In particular, if B is square and nonsingular, the GRQ factorization of A and B implicitly gives the RQ factorization of A*inv(B): A*inv(B) = (R*inv(T))*Z**H, where inv(B) denotes the inverse of the matrix B, and Z**H denotes the conjugate transpose of the matrix Z. More...
 
interface  gsvj0
 GSVJ0: is called from CGESVJ as a pre-processor and that is its main purpose. It applies Jacobi rotations in the same way as CGESVJ does, but it does not check convergence (stopping criterion). A few tuning parameters (marked by [TP]) are available for the implementer. More...
 
interface  gsvj1
 GSVJ1: is called from CGESVJ as a pre-processor and that is its main purpose. It applies Jacobi rotations in the same way as CGESVJ does, but it targets only particular pivots and it does not check convergence (stopping criterion). A few tuning parameters (marked by [TP]) are available for the implementer. Further details: GSVJ1 applies a few sweeps of Jacobi rotations in the column space of the input M-by-N matrix A. The pivot pairs are taken from the (1,2) off-diagonal block in the corresponding N-by-N Gram matrix A^T * A. The block entries (tiles) of the (1,2) off-diagonal block are marked by the [x]'s in the following scheme:

     |  *  *  * [x] [x] [x]|
     |  *  *  * [x] [x] [x]|    Row-cycling in the nblr-by-nblc [x] blocks.
     |  *  *  * [x] [x] [x]|    Row-cyclic pivoting inside each [x] block.
     |[x] [x] [x]  *  *  * |
     |[x] [x] [x]  *  *  * |
     |[x] [x] [x]  *  *  * |

 In terms of the columns of A, the first N1 columns are rotated 'against' the remaining N-N1 columns, trying to increase the angle between the corresponding subspaces. The off-diagonal block is N1-by-(N-N1) and it is tiled using quadratic tiles of side KBL. Here, KBL is a tuning parameter. The number of sweeps is given in NSWEEP and the orthogonality threshold is given in TOL. More...
 
interface  gtcon
 GTCON: estimates the reciprocal of the condition number of a complex tridiagonal matrix A using the LU factorization as computed by CGTTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  gtrfs
 GTRFS: improves the computed solution to a system of linear equations when the coefficient matrix is tridiagonal, and provides error bounds and backward error estimates for the solution. More...
 
interface  gtsv
 GTSV: solves the equation A*X = B, where A is an N-by-N tridiagonal matrix, by Gaussian elimination with partial pivoting. Note that the equation A**T * X = B may be solved by interchanging the order of the arguments DU and DL. More...
 
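A sketch of the tridiagonal solver (assuming the generic gtsv interface mirrors the LAPACK ?gtsv argument list; DL, D, and DU are overwritten during elimination, and B returns the solution):

    program demo_gtsv
       use la_lapack, only: gtsv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 5, nrhs = 1
       real(dp) :: dl(n-1), d(n), du(n-1), b(n,nrhs)
       integer  :: info
       dl = -1.0_dp;  d = 2.0_dp;  du = -1.0_dp   ! tridiag(-1, 2, -1)
       b  = 1.0_dp
       call gtsv(n, nrhs, dl, d, du, b, n, info)
       if (info == 0) print *, 'x =', b(:,1)
    end program demo_gtsv
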
interface  gttrf
 GTTRF: computes an LU factorization of a complex tridiagonal matrix A using elimination with partial pivoting and row interchanges. The factorization has the form A = L * U where L is a product of permutation and unit lower bidiagonal matrices and U is upper triangular with nonzeros in only the main diagonal and first two superdiagonals. More...
 
interface  gttrs
 GTTRS: solves one of the systems of equations A * X = B, A**T * X = B, or A**H * X = B, with a tridiagonal matrix A using the LU factorization computed by CGTTRF. More...
 
interface  hb2st_kernels
 HB2ST_KERNELS: is an internal routine used by the CHETRD_HB2ST subroutine. More...
 
interface  hbev
 HBEV: computes all the eigenvalues and, optionally, eigenvectors of a complex Hermitian band matrix A. More...
 
interface  hbevd
 HBEVD: computes all the eigenvalues and, optionally, eigenvectors of a complex Hermitian band matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  hbgst
 HBGST: reduces a complex Hermitian-definite banded generalized eigenproblem A*x = lambda*B*x to standard form C*y = lambda*y, such that C has the same bandwidth as A. B must have been previously factorized as S**H*S by CPBSTF, using a split Cholesky factorization. A is overwritten by C = X**H*A*X, where X = S**(-1)*Q and Q is a unitary matrix chosen to preserve the bandwidth of A. More...
 
interface  hbgv
 HBGV: computes all the eigenvalues, and optionally, the eigenvectors of a complex generalized Hermitian-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian and banded, and B is also positive definite. More...
 
interface  hbgvd
 HBGVD: computes all the eigenvalues, and optionally, the eigenvectors of a complex generalized Hermitian-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian and banded, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  hbtrd
 HBTRD: reduces a complex Hermitian band matrix A to real symmetric tridiagonal form T by a unitary similarity transformation: Q**H * A * Q = T. More...
 
interface  hecon
 HECON: estimates the reciprocal of the condition number of a complex Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  hecon_rook
 HECON_ROOK: estimates the reciprocal of the condition number of a complex Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF_ROOK. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  heequb
 HEEQUB: computes row and column scalings intended to equilibrate a Hermitian matrix A (with respect to the Euclidean norm) and reduce its condition number. The scale factors S are computed by the BIN algorithm (see references) so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has a condition number within a factor N of the smallest possible condition number over all possible diagonal scalings. More...
 
interface  heev
 HEEV: computes all eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix A. More...
 
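A Hermitian eigensolver sketch (assuming the generic heev interface mirrors the complex LAPACK ?heev argument list with its real RWORK array; the 2-by-2 Hermitian test matrix is illustrative, with exact eigenvalues 0 and 2):

    program demo_heev
       use la_lapack, only: heev
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 2, lwork = 4*n
       complex(dp) :: a(n,n), work(lwork)
       real(dp)    :: w(n), rwork(3*n-2)
       integer     :: info
       ! A = [ 1   i ]   (Hermitian: A = A**H)
       !     [-i   1 ]
       a(1,1) = (1.0_dp, 0.0_dp);  a(1,2) = (0.0_dp, 1.0_dp)
       a(2,1) = (0.0_dp,-1.0_dp);  a(2,2) = (1.0_dp, 0.0_dp)
       call heev('V', 'U', n, a, n, w, work, lwork, rwork, info)
       if (info == 0) print *, 'eigenvalues (ascending):', w   ! expect 0 and 2
    end program demo_heev
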
interface  heevd
 HEEVD: computes all eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  heevr
 HEEVR: computes selected eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix A. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. HEEVR first reduces the matrix A to tridiagonal form T with a call to CHETRD. Then, whenever possible, HEEVR calls CSTEMR to compute the eigenspectrum using Relatively Robust Representations. CSTEMR computes eigenvalues by the dqds algorithm, while orthogonal eigenvectors are computed from various "good" L D L^T representations (also known as Relatively Robust Representations). Gram-Schmidt orthogonalization is avoided as far as possible. More specifically, the various steps of the algorithm are as follows. For each unreduced block (submatrix) of T, (a) Compute T - sigma I = L D L^T, so that L and D define all the wanted eigenvalues to high relative accuracy. This means that small relative changes in the entries of D and L cause only small relative changes in the eigenvalues and eigenvectors. The standard (unfactored) representation of the tridiagonal matrix T does not have this property in general. (b) Compute the eigenvalues to suitable accuracy. If the eigenvectors are desired, the algorithm attains full accuracy of the computed eigenvalues only right before the corresponding vectors have to be computed, see steps c) and d). (c) For each cluster of close eigenvalues, select a new shift close to the cluster, find a new factorization, and refine the shifted eigenvalues to suitable accuracy. (d) For each eigenvalue with a large enough relative separation compute the corresponding eigenvector by forming a rank revealing twisted factorization. Go back to (c) for any clusters that remain. The desired accuracy of the output can be specified by the input parameter ABSTOL. For more details, see CSTEMR's documentation and: More...
 
interface  hegst
 HEGST: reduces a complex Hermitian-definite generalized eigenproblem to standard form. If ITYPE = 1, the problem is A*x = lambda*B*x, and A is overwritten by inv(U**H)*A*inv(U) or inv(L)*A*inv(L**H) If ITYPE = 2 or 3, the problem is A*B*x = lambda*x or B*A*x = lambda*x, and A is overwritten by U*A*U**H or L**H*A*L. B must have been previously factorized as U**H*U or L*L**H by CPOTRF. More...
 
interface  hegv
 HEGV: computes all the eigenvalues, and optionally, the eigenvectors of a complex generalized Hermitian-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be Hermitian and B is also positive definite. More...
 
interface  hegvd
 HEGVD: computes all the eigenvalues, and optionally, the eigenvectors of a complex generalized Hermitian-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be Hermitian and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  herfs
 HERFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian indefinite, and provides error bounds and backward error estimates for the solution. More...
 
interface  hesv
 HESV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**H, if UPLO = 'U', or A = L * D * L**H, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. More...
 
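A sketch of the Hermitian-indefinite driver (assuming the generic hesv interface mirrors the complex LAPACK ?hesv argument list; with UPLO = 'U' only the upper triangle of A is referenced; the values are illustrative):

    program demo_hesv
       use la_lapack, only: hesv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 2, nrhs = 1, lwork = 64
       complex(dp) :: a(n,n), b(n,nrhs), work(lwork)
       integer     :: ipiv(n), info
       a(1,1) = (2.0_dp, 0.0_dp);  a(1,2) = (0.0_dp, 1.0_dp)   ! upper triangle of A
       a(2,1) = (0.0_dp,-1.0_dp);  a(2,2) = (3.0_dp, 0.0_dp)
       b(:,1) = [(1.0_dp, 0.0_dp), (0.0_dp, 0.0_dp)]
       call hesv('U', n, nrhs, a, n, ipiv, b, n, work, lwork, info)
       if (info == 0) print *, 'x =', b(:,1)
    end program demo_hesv
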
interface  hesv_aa
 HESV_AA: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices. Aasen's algorithm is used to factor A as A = U**H * T * U, if UPLO = 'U', or A = L * T * L**H, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and T is Hermitian and tridiagonal. The factored form of A is then used to solve the system of equations A * X = B. More...
 
interface  hesv_rk
 HESV_RK: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices. The bounded Bunch-Kaufman (rook) diagonal pivoting method is used to factor A as A = P*U*D*(U**H)*(P**T), if UPLO = 'U', or A = P*L*D*(L**H)*(P**T), if UPLO = 'L', where U (or L) is a unit upper (or lower) triangular matrix, U**H (or L**H) is the conjugate transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. CHETRF_RK is called to compute the factorization of a complex Hermitian matrix. The factored form of A is then used to solve the system of equations A * X = B by calling the BLAS3 routine CHETRS_3. More...
 
interface  hesv_rook
 HESV_ROOK: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS matrices. The bounded Bunch-Kaufman ("rook") diagonal pivoting method is used to factor A as A = U * D * U**H, if UPLO = 'U', or A = L * D * L**H, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. CHETRF_ROOK is called to compute the factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The factored form of A is then used to solve the system of equations A * X = B by calling CHETRS_ROOK (uses BLAS 2). More...
 
interface  heswapr
 HESWAPR: applies an elementary permutation on the rows and the columns of a Hermitian matrix. More...
 
interface  hetf2_rk
 HETF2_RK: computes the factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method: A = P*U*D*(U**H)*(P**T) or A = P*L*D*(L**H)*(P**T), where U (or L) is a unit upper (or lower) triangular matrix, U**H (or L**H) is the conjugate transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the unblocked version of the algorithm, calling Level 2 BLAS. For more information see the Further Details section. More...
 
interface  hetf2_rook
 HETF2_ROOK: computes the factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method: A = U*D*U**H or A = L*D*L**H where U (or L) is a product of permutation and unit upper (lower) triangular matrices, U**H is the conjugate transpose of U, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the unblocked version of the algorithm, calling Level 2 BLAS. More...
 
interface  hetrd
 HETRD: reduces a complex Hermitian matrix A to real symmetric tridiagonal form T by a unitary similarity transformation: Q**H * A * Q = T. More...
 
interface  hetrd_hb2st
 HETRD_HB2ST: reduces a complex Hermitian band matrix A to real symmetric tridiagonal form T by a unitary similarity transformation: Q**H * A * Q = T. More...
 
interface  hetrd_he2hb
 HETRD_HE2HB: reduces a complex Hermitian matrix A to complex Hermitian band-diagonal form AB by a unitary similarity transformation: Q**H * A * Q = AB. More...
 
interface  hetrf
 HETRF: computes the factorization of a complex Hermitian matrix A using the Bunch-Kaufman diagonal pivoting method. The form of the factorization is A = U*D*U**H or A = L*D*L**H where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  hetrf_aa
 HETRF_AA: computes the factorization of a complex Hermitian matrix A using Aasen's algorithm. The form of the factorization is A = U**H*T*U or A = L*T*L**H where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and T is a Hermitian tridiagonal matrix. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  hetrf_rk
 HETRF_RK: computes the factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method: A = P*U*D*(U**H)*(P**T) or A = P*L*D*(L**H)*(P**T), where U (or L) is a unit upper (or lower) triangular matrix, U**H (or L**H) is the conjugate transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. For more information see the Further Details section. More...
 
interface  hetrf_rook
 HETRF_ROOK: computes the factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The form of the factorization is A = U*D*U**H or A = L*D*L**H where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  hetri
 HETRI: computes the inverse of a complex Hermitian indefinite matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF. More...
 
interface  hetri_rook
 HETRI_ROOK: computes the inverse of a complex Hermitian indefinite matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF_ROOK. More...
 
interface  hetrs
 HETRS: solves a system of linear equations A*X = B with a complex Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF. More...
 
interface  hetrs2
 HETRS2: solves a system of linear equations A*X = B with a complex Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF and converted by CSYCONV. More...
 
interface  hetrs_3
 HETRS_3: solves a system of linear equations A * X = B with a complex Hermitian matrix A using the factorization computed by CHETRF_RK or CHETRF_BK: A = P*U*D*(U**H)*(P**T) or A = P*L*D*(L**H)*(P**T), where U (or L) is a unit upper (or lower) triangular matrix, U**H (or L**H) is the conjugate transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This algorithm uses Level 3 BLAS. More...
 
interface  hetrs_aa
 HETRS_AA: solves a system of linear equations A*X = B with a complex Hermitian matrix A using the factorization A = U**H*T*U or A = L*T*L**H computed by CHETRF_AA. More...
 
interface  hetrs_rook
 HETRS_ROOK: solves a system of linear equations A*X = B with a complex Hermitian matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHETRF_ROOK. More...
 
interface  hfrk
 Level 3 BLAS-like routine for C in RFP format. HFRK: performs one of the Hermitian rank-k operations C := alpha*A*A**H + beta*C, or C := alpha*A**H*A + beta*C, where alpha and beta are real scalars, C is an n-by-n Hermitian matrix and A is an n-by-k matrix in the first case and a k-by-n matrix in the second case. More...
 
interface  hgeqz
 HGEQZ: computes the eigenvalues of a complex matrix pair (H,T), where H is an upper Hessenberg matrix and T is upper triangular, using the single-shift QZ method. Matrix pairs of this type are produced by the reduction to generalized upper Hessenberg form of a complex matrix pair (A,B): A = Q1*H*Z1**H, B = Q1*T*Z1**H, as computed by CGGHRD. If JOB='S', then the Hessenberg-triangular pair (H,T) is also reduced to generalized Schur form, H = Q*S*Z**H, T = Q*P*Z**H, where Q and Z are unitary matrices and S and P are upper triangular. Optionally, the unitary matrix Q from the generalized Schur factorization may be postmultiplied into an input matrix Q1, and the unitary matrix Z may be postmultiplied into an input matrix Z1. If Q1 and Z1 are the unitary matrices from CGGHRD that reduced the matrix pair (A,B) to generalized Hessenberg form, then the output matrices Q1*Q and Z1*Z are the unitary factors from the generalized Schur factorization of (A,B): A = (Q1*Q)*S*(Z1*Z)**H, B = (Q1*Q)*P*(Z1*Z)**H. To avoid overflow, eigenvalues of the matrix pair (H,T) (equivalently, of (A,B)) are computed as a pair of complex values (alpha,beta). If beta is nonzero, lambda = alpha / beta is an eigenvalue of the generalized nonsymmetric eigenvalue problem (GNEP) A*x = lambda*B*x and if alpha is nonzero, mu = beta / alpha is an eigenvalue of the alternate form of the GNEP mu*A*y = B*y. The values of alpha and beta for the i-th eigenvalue can be read directly from the generalized Schur form: alpha = S(i,i), beta = P(i,i). Ref: C.B. Moler and G.W. Stewart, "An Algorithm for Generalized Matrix Eigenvalue Problems", SIAM J. Numer. Anal., 10 (1973), pp. 241-256. More...
 
interface  hpcon
 HPCON: estimates the reciprocal of the condition number of a complex Hermitian packed matrix A using the factorization A = U*D*U**H or A = L*D*L**H computed by CHPTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  hpev
 HPEV: computes all the eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix in packed storage. More...
 
interface  hpevd
 HPEVD: computes all the eigenvalues and, optionally, eigenvectors of a complex Hermitian matrix A in packed storage. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  hpgst
 HPGST: reduces a complex Hermitian-definite generalized eigenproblem to standard form, using packed storage. If ITYPE = 1, the problem is A*x = lambda*B*x, and A is overwritten by inv(U**H)*A*inv(U) or inv(L)*A*inv(L**H) If ITYPE = 2 or 3, the problem is A*B*x = lambda*x or B*A*x = lambda*x, and A is overwritten by U*A*U**H or L**H*A*L. B must have been previously factorized as U**H*U or L*L**H by CPPTRF. More...
 
interface  hpgv
 HPGV: computes all the eigenvalues and, optionally, the eigenvectors of a complex generalized Hermitian-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be Hermitian, stored in packed format, and B is also positive definite. More...
 
interface  hpgvd
 HPGVD: computes all the eigenvalues and, optionally, the eigenvectors of a complex generalized Hermitian-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be Hermitian, stored in packed format, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  hprfs
 HPRFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian indefinite and packed, and provides error bounds and backward error estimates for the solution. More...
 
interface  hpsv
 HPSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian matrix stored in packed format and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**H, if UPLO = 'U', or A = L * D * L**H, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. More...
 
interface  hptrd
 HPTRD: reduces a complex Hermitian matrix A stored in packed form to real symmetric tridiagonal form T by a unitary similarity transformation: Q**H * A * Q = T. More...
 
interface  hptrf
 HPTRF: computes the factorization of a complex Hermitian packed matrix A using the Bunch-Kaufman diagonal pivoting method: A = U*D*U**H or A = L*D*L**H where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is Hermitian and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. More...
 
interface  hptri
 HPTRI: computes the inverse of a complex Hermitian indefinite matrix A in packed storage using the factorization A = U*D*U**H or A = L*D*L**H computed by CHPTRF. More...
 
interface  hptrs
 HPTRS: solves a system of linear equations A*X = B with a complex Hermitian matrix A stored in packed format using the factorization A = U*D*U**H or A = L*D*L**H computed by CHPTRF. More...
 
interface  hsein
 HSEIN: uses inverse iteration to find specified right and/or left eigenvectors of a complex upper Hessenberg matrix H. The right eigenvector x and the left eigenvector y of the matrix H corresponding to an eigenvalue w are defined by: H * x = w * x, y**h * H = w * y**h where y**h denotes the conjugate transpose of the vector y. More...
 
interface  hseqr
 HSEQR: computes the eigenvalues of a Hessenberg matrix H and, optionally, the matrices T and Z from the Schur decomposition H = Z T Z**H, where T is an upper triangular matrix (the Schur form), and Z is the unitary matrix of Schur vectors. Optionally Z may be postmultiplied into an input unitary matrix Q so that this routine can give the Schur factorization of a matrix A which has been reduced to the Hessenberg form H by the unitary matrix Q: A = Q*H*Q**H = (QZ)*T*(QZ)**H. More...
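A minimal sketch computing only the eigenvalues of a small upper Hessenberg matrix, assuming the generic hseqr interface mirrors the reference CHSEQR argument list (JOB, COMPZ, N, ILO, IHI, H, LDH, W, Z, LDZ, WORK, LWORK, INFO); JOB='E' requests eigenvalues only and COMPZ='N' skips the Schur vectors.

```fortran
! Hypothetical sketch: eigenvalues of a 3x3 upper Hessenberg matrix.
program demo_hseqr
   use la_lapack, only: hseqr
   implicit none
   integer, parameter :: sp = kind(1.0e0), n = 3
   complex(sp) :: h(n,n), w(n), z(1,1), work(3*n)
   integer     :: info
   h = reshape([(4.0,0.0), (2.0,0.0), (0.0,0.0), &   ! column 1 (one subdiagonal)
                (1.0,0.0), (3.0,0.0), (1.0,0.0), &   ! column 2
                (0.5,0.0), (0.2,0.0), (1.0,0.0)], [n,n])
   call hseqr('E', 'N', n, 1, n, h, n, w, z, 1, work, size(work), info)
   if (info == 0) print *, 'eigenvalues:', w
end program demo_hseqr
```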
 
interface  isnan
 ISNAN: returns .TRUE. if its argument is NaN, and .FALSE. otherwise. To be replaced by the Fortran 2003 intrinsic in the future. More...
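A tiny sketch, assuming the generic isnan interface takes a single real argument like LAPACK's DISNAN(DIN); the quiet NaN is produced with the standard ieee_arithmetic intrinsic module.

```fortran
! Hypothetical sketch: NaN test through the isnan interface.
program demo_isnan
   use la_lapack, only: isnan
   use, intrinsic :: ieee_arithmetic, only: ieee_value, ieee_quiet_nan
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real(dp) :: x
   x = ieee_value(x, ieee_quiet_nan)     ! a quiet NaN
   print *, isnan(x), isnan(1.0_dp)      ! expected: T F
end program demo_isnan
```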
 
interface  la_gbamv
 LA_GBAMV: performs one of the matrix-vector operations y := alpha*abs(A)*abs(x) + beta*abs(y), or y := alpha*abs(A)**T*abs(x) + beta*abs(y), where alpha and beta are scalars, x and y are vectors and A is an m by n matrix. This function is primarily used in calculating error bounds. To protect against underflow during evaluation, components in the resulting vector are perturbed away from zero by (N+1) times the underflow threshold. To prevent unnecessarily large errors for block-structure embedded in general matrices, "symbolically" zero components are not perturbed. A zero entry is considered "symbolic" if all multiplications involved in computing that entry have at least one zero multiplicand. More...
 
interface  la_gbrcond
 LA_GBRCOND: estimates the Skeel condition number of op(A) * op2(C), where op2 is determined by CMODE as follows: CMODE = 1: op2(C) = C; CMODE = 0: op2(C) = I; CMODE = -1: op2(C) = inv(C). The Skeel condition number cond(A) = norminf( |inv(A)||A| ) is computed by computing scaling factors R such that diag(R)*A*op2(C) is row equilibrated and computing the standard infinity-norm condition number. More...
 
interface  la_gbrcond_c
 LA_GBRCOND_C: computes the infinity norm condition number of op(A) * inv(diag(C)) where C is a REAL vector. More...
 
interface  la_gbrpvgrw
 LA_GBRPVGRW: computes the reciprocal pivot growth factor norm(A)/norm(U). The "max absolute element" norm is used. If this is much less than 1, the stability of the LU factorization of the (equilibrated) matrix A could be poor. This also means that the solution X, estimated condition numbers, and error bounds could be unreliable. More...
 
interface  la_geamv
 LA_GEAMV: performs one of the matrix-vector operations y := alpha*abs(A)*abs(x) + beta*abs(y), or y := alpha*abs(A)**T*abs(x) + beta*abs(y), where alpha and beta are scalars, x and y are vectors and A is an m by n matrix. This function is primarily used in calculating error bounds. To protect against underflow during evaluation, components in the resulting vector are perturbed away from zero by (N+1) times the underflow threshold. To prevent unnecessarily large errors for block-structure embedded in general matrices, "symbolically" zero components are not perturbed. A zero entry is considered "symbolic" if all multiplications involved in computing that entry have at least one zero multiplicand. More...
 
interface  la_gercond
 LA_GERCOND: estimates the Skeel condition number of op(A) * op2(C), where op2 is determined by CMODE as follows: CMODE = 1: op2(C) = C; CMODE = 0: op2(C) = I; CMODE = -1: op2(C) = inv(C). The Skeel condition number cond(A) = norminf( |inv(A)||A| ) is computed by computing scaling factors R such that diag(R)*A*op2(C) is row equilibrated and computing the standard infinity-norm condition number. More...
 
interface  la_gercond_c
 LA_GERCOND_C: computes the infinity norm condition number of op(A) * inv(diag(C)) where C is a REAL vector. More...
 
interface  la_gerpvgrw
 LA_GERPVGRW: computes the reciprocal pivot growth factor norm(A)/norm(U). The "max absolute element" norm is used. If this is much less than 1, the stability of the LU factorization of the (equilibrated) matrix A could be poor. This also means that the solution X, estimated condition numbers, and error bounds could be unreliable. More...
 
interface  la_heamv
 LA_HEAMV: performs the matrix-vector operation y := alpha*abs(A)*abs(x) + beta*abs(y), where alpha and beta are scalars, x and y are vectors and A is an n by n Hermitian matrix. This function is primarily used in calculating error bounds. To protect against underflow during evaluation, components in the resulting vector are perturbed away from zero by (N+1) times the underflow threshold. To prevent unnecessarily large errors for block-structure embedded in general matrices, "symbolically" zero components are not perturbed. A zero entry is considered "symbolic" if all multiplications involved in computing that entry have at least one zero multiplicand. More...
 
interface  la_hercond_c
 LA_HERCOND_C: computes the infinity norm condition number of op(A) * inv(diag(C)) where C is a REAL vector. More...
 
interface  la_herpvgrw
 LA_HERPVGRW: computes the reciprocal pivot growth factor norm(A)/norm(U). The "max absolute element" norm is used. If this is much less than 1, the stability of the LU factorization of the (equilibrated) matrix A could be poor. This also means that the solution X, estimated condition numbers, and error bounds could be unreliable. More...
 
interface  la_lin_berr
 LA_LIN_BERR: computes componentwise relative backward error from the formula max(i) ( abs(R(i)) / ( abs(op(A_s))*abs(Y) + abs(B_s) )(i) ) where abs(Z) is the componentwise absolute value of the matrix or vector Z. More...
 
interface  la_porcond
 LA_PORCOND: estimates the Skeel condition number of op(A) * op2(C), where op2 is determined by CMODE as follows: CMODE = 1: op2(C) = C; CMODE = 0: op2(C) = I; CMODE = -1: op2(C) = inv(C). The Skeel condition number cond(A) = norminf( |inv(A)||A| ) is computed by computing scaling factors R such that diag(R)*A*op2(C) is row equilibrated and computing the standard infinity-norm condition number. More...
 
interface  la_porcond_c
 LA_PORCOND_C: computes the infinity norm condition number of op(A) * inv(diag(C)) where C is a REAL vector. More...
 
interface  la_porpvgrw
 LA_PORPVGRW: computes the reciprocal pivot growth factor norm(A)/norm(U). The "max absolute element" norm is used. If this is much less than 1, the stability of the LU factorization of the (equilibrated) matrix A could be poor. This also means that the solution X, estimated condition numbers, and error bounds could be unreliable. More...
 
interface  la_syamv
 LA_SYAMV: performs the matrix-vector operation y := alpha*abs(A)*abs(x) + beta*abs(y), where alpha and beta are scalars, x and y are vectors and A is an n by n symmetric matrix. This function is primarily used in calculating error bounds. To protect against underflow during evaluation, components in the resulting vector are perturbed away from zero by (N+1) times the underflow threshold. To prevent unnecessarily large errors for block-structure embedded in general matrices, "symbolically" zero components are not perturbed. A zero entry is considered "symbolic" if all multiplications involved in computing that entry have at least one zero multiplicand. More...
 
interface  la_syrcond
 LA_SYRCOND: estimates the Skeel condition number of op(A) * op2(C), where op2 is determined by CMODE as follows: CMODE = 1: op2(C) = C; CMODE = 0: op2(C) = I; CMODE = -1: op2(C) = inv(C). The Skeel condition number cond(A) = norminf( |inv(A)||A| ) is computed by computing scaling factors R such that diag(R)*A*op2(C) is row equilibrated and computing the standard infinity-norm condition number. More...
 
interface  la_syrcond_c
 LA_SYRCOND_C: computes the infinity norm condition number of op(A) * inv(diag(C)) where C is a REAL vector. More...
 
interface  la_syrpvgrw
 LA_SYRPVGRW: computes the reciprocal pivot growth factor norm(A)/norm(U). The "max absolute element" norm is used. If this is much less than 1, the stability of the LU factorization of the (equilibrated) matrix A could be poor. This also means that the solution X, estimated condition numbers, and error bounds could be unreliable. More...
 
interface  la_wwaddw
 LA_WWADDW: adds a vector W into a doubled-single vector (X, Y). This works for all extant IBM hexadecimal and binary floating-point arithmetics, but not for decimal. More...
 
interface  labad
 LABAD: takes as input the values computed by DLAMCH for underflow and overflow, and returns the square root of each of these values if the log of LARGE is sufficiently large. This subroutine is intended to identify machines with a large exponent range, such as the Crays, and redefine the underflow and overflow limits to be the square roots of the values computed by DLAMCH. This subroutine is needed because DLAMCH does not compensate for poor arithmetic in the upper half of the exponent range, as is found on a Cray. More...
 
interface  labrd
 LABRD: reduces the first NB rows and columns of a complex general m by n matrix A to upper or lower real bidiagonal form by a unitary transformation Q**H * A * P, and returns the matrices X and Y which are needed to apply the transformation to the unreduced part of A. If m >= n, A is reduced to upper bidiagonal form; if m < n, to lower bidiagonal form. This is an auxiliary routine called by CGEBRD. More...
 
interface  lacgv
 LACGV: conjugates a complex vector of length N. More...
 
interface  lacon
 LACON: estimates the 1-norm of a square, complex matrix A. Reverse communication is used for evaluating matrix-vector products. More...
 
interface  lacpy
 LACPY: copies all or part of a two-dimensional matrix A to another matrix B. More...
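A minimal sketch copying only the upper triangle, assuming the generic lacpy interface mirrors the reference CLACPY argument list (UPLO, M, N, A, LDA, B, LDB); the program and data are hypothetical.

```fortran
! Hypothetical sketch: copy the upper triangle of A into B.
program demo_lacpy
   use la_lapack, only: lacpy
   implicit none
   integer, parameter :: sp = kind(1.0e0), n = 3
   complex(sp) :: a(n,n), b(n,n)
   integer :: i, j
   a = reshape([((cmplx(i + 10*j, 0.0, sp), i=1,n), j=1,n)], [n,n])
   b = (0.0_sp, 0.0_sp)
   call lacpy('U', n, n, a, n, b, n)   ! 'U': only entries with i <= j are copied
   do i = 1, n
      print '(3(f6.1,1x))', real(b(i,:))
   end do
end program demo_lacpy
```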
 
interface  lacrm
 LACRM: performs a very simple matrix-matrix multiplication: C := A * B, where A is M by N and complex; B is N by N and real; C is M by N and complex. More...
 
interface  lacrt
 LACRT: performs the operation [ x; y ] := [ c s; -s c ] * [ x; y ], where c and s are complex scalars and x and y are complex vectors. More...
 
interface  ladiv1
 
interface  ladiv2
 
interface  ladiv_f
 LADIV_F: computes X / Y, where X and Y are complex. The computation of X / Y will not overflow on an intermediate step unless the final result overflows. More...
 
interface  ladiv_s
 LADIV_S: performs the complex division in real arithmetic p + i*q = (a + i*b) / (c + i*d). The algorithm is due to Michael Baudin and Robert L. Smith and can be found in the paper "A Robust Complex Division in Scilab". More...
 
interface  laebz
 LAEBZ: contains the iteration loops which compute and use the function N(w), which is the count of eigenvalues of a symmetric tridiagonal matrix T less than or equal to its argument w. It performs a choice of two types of loops: IJOB=1, followed by IJOB=2: It takes as input a list of intervals and returns a list of sufficiently small intervals whose union contains the same eigenvalues as the union of the original intervals. The input intervals are (AB(j,1),AB(j,2)], j=1,...,MINP. The output interval (AB(j,1),AB(j,2)] will contain eigenvalues NAB(j,1)+1,...,NAB(j,2), where 1 <= j <= MOUT. IJOB=3: It performs a binary search in each input interval (AB(j,1),AB(j,2)] for a point w(j) such that N(w(j))=NVAL(j), and uses C(j) as the starting point of the search. If such a w(j) is found, then on output AB(j,1)=AB(j,2)=w. If no such w(j) is found, then on output (AB(j,1),AB(j,2)] will be a small interval containing the point where N(w) jumps through NVAL(j), unless that point lies outside the initial interval. Note that the intervals are in all cases half-open intervals, i.e., of the form (a,b] , which includes b but not a . To avoid underflow, the matrix should be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value. To assure the most accurate computation of small eigenvalues, the matrix should be scaled to be not much smaller than that, either. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966. Note: the arguments are, in general, not checked for unreasonable values. More...
 
interface  laed0
 Using the divide and conquer method, LAED0: computes all eigenvalues of a symmetric tridiagonal matrix that arises as one diagonal block in the reduction of a dense or band Hermitian matrix to tridiagonal form, together with the corresponding eigenvectors of the dense or band matrix. More...
 
interface  laed1
 LAED1: computes the updated eigensystem of a diagonal matrix after modification by a rank-one symmetric matrix. This routine is used only for the eigenproblem which requires all eigenvalues and eigenvectors of a tridiagonal matrix. DLAED7 handles the case in which eigenvalues only or eigenvalues and eigenvectors of a full symmetric matrix (which was reduced to tridiagonal form) are desired. T = Q(in) ( D(in) + RHO * Z*Z**T ) Q**T(in) = Q(out) * D(out) * Q**T(out) where Z = Q**T*u, u is a vector of length N with ones in the (CUTPNT)-th and (CUTPNT+1)-th elements and zeros elsewhere. The eigenvectors of the original matrix are stored in Q, and the eigenvalues are in D. The algorithm consists of three stages: The first stage consists of deflating the size of the problem when there are multiple eigenvalues or if there is a zero in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine DLAED2. The second stage consists of calculating the updated eigenvalues. This is done by finding the roots of the secular equation via the routine DLAED4 (as called by DLAED3). This routine also calculates the eigenvectors of the current problem. The final stage consists of computing the updated eigenvectors directly using the updated eigenvalues. The eigenvectors for the current problem are multiplied with the eigenvectors from the overall problem. More...
 
interface  laed4
 This subroutine computes the I-th updated eigenvalue of a symmetric rank-one modification to a diagonal matrix whose elements are given in the array d; it is assumed that D(i) < D(j) for i < j and that RHO > 0. This is arranged by the calling routine, and is no loss in generality. The rank-one modified system is thus diag( D ) + RHO * Z * Z_transpose, where we assume the Euclidean norm of Z is 1. The method consists of approximating the rational functions in the secular equation by simpler interpolating rational functions. More...
 
interface  laed5
 This subroutine computes the I-th eigenvalue of a symmetric rank-one modification of a 2-by-2 diagonal matrix diag( D ) + RHO * Z * transpose(Z) . The diagonal elements in the array D are assumed to satisfy D(i) < D(j) for i < j . We also assume RHO > 0 and that the Euclidean norm of the vector Z is one. More...
 
interface  laed6
 LAED6: computes the positive or negative root (closest to the origin) of f(x) = rho + z(1)/(d(1)-x) + z(2)/(d(2)-x) + z(3)/(d(3)-x). It is assumed that if ORGATI = .true. the root is between d(2) and d(3); otherwise it is between d(1) and d(2). This routine will be called by DLAED4 when necessary. In most cases, the root sought is the smallest in magnitude, though it might not be in some extremely rare situations. More...
 
interface  laed7
 LAED7: computes the updated eigensystem of a diagonal matrix after modification by a rank-one symmetric matrix. This routine is used only for the eigenproblem which requires all eigenvalues and optionally eigenvectors of a dense or banded Hermitian matrix that has been reduced to tridiagonal form. T = Q(in) ( D(in) + RHO * Z*Z**H ) Q**H(in) = Q(out) * D(out) * Q**H(out) where Z = Q**H*u, u is a vector of length N with ones in the (CUTPNT)-th and (CUTPNT+1)-th elements and zeros elsewhere. The eigenvectors of the original matrix are stored in Q, and the eigenvalues are in D. The algorithm consists of three stages: The first stage consists of deflating the size of the problem when there are multiple eigenvalues or if there is a zero in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine SLAED2. The second stage consists of calculating the updated eigenvalues. This is done by finding the roots of the secular equation via the routine SLAED4 (as called by SLAED3). This routine also calculates the eigenvectors of the current problem. The final stage consists of computing the updated eigenvectors directly using the updated eigenvalues. The eigenvectors for the current problem are multiplied with the eigenvectors from the overall problem. More...
 
interface  laed8
 LAED8: merges the two sets of eigenvalues together into a single sorted set. Then it tries to deflate the size of the problem. There are two ways in which deflation can occur: when two or more eigenvalues are close together or if there is a tiny element in the Z vector. For each such occurrence the order of the related secular equation problem is reduced by one. More...
 
interface  laed9
 LAED9: finds the roots of the secular equation, as defined by the values in D, Z, and RHO, between KSTART and KSTOP. It makes the appropriate calls to DLAED4 and then stores the new matrix of eigenvectors for use in calculating the next level of Z vectors. More...
 
interface  laeda
 LAEDA: computes the Z vector corresponding to the merge step in the CURLVLth step of the merge process with TLVLS steps for the CURPBMth problem. More...
 
interface  laein
 LAEIN: uses inverse iteration to find a right or left eigenvector corresponding to the eigenvalue W of a complex upper Hessenberg matrix H. More...
 
interface  laesy
 LAESY: computes the eigendecomposition of a 2-by-2 symmetric matrix ( ( A, B );( B, C ) ) provided the norm of the matrix of eigenvectors is larger than some threshold value. RT1 is the eigenvalue of larger absolute value, and RT2 of smaller absolute value. If the eigenvectors are computed, then on return ( CS1, SN1 ) is the unit eigenvector for RT1, hence [ CS1 SN1; -SN1 CS1 ] * [ A B; B C ] * [ CS1 -SN1; SN1 CS1 ] = [ RT1 0; 0 RT2 ]. More...
 
interface  laexc
 LAEXC: swaps adjacent diagonal blocks T11 and T22 of order 1 or 2 in an upper quasi-triangular matrix T by an orthogonal similarity transformation. T must be in Schur canonical form, that is, block upper triangular with 1-by-1 and 2-by-2 diagonal blocks; each 2-by-2 diagonal block has its diagonal elements equal and its off-diagonal elements of opposite sign. More...
 
interface  lagtf
 LAGTF: factorizes the matrix (T - lambda*I), where T is an n by n tridiagonal matrix and lambda is a scalar, as T - lambda*I = PLU, where P is a permutation matrix, L is a unit lower tridiagonal matrix with at most one non-zero sub-diagonal element per column and U is an upper triangular matrix with at most two non-zero super-diagonal elements per column. The factorization is obtained by Gaussian elimination with partial pivoting and implicit row scaling. The parameter LAMBDA is included in the routine so that LAGTF may be used, in conjunction with DLAGTS, to obtain eigenvectors of T by inverse iteration. More...
 
interface  lagtm
 LAGTM: performs a matrix-vector product of the form B := alpha * A * X + beta * B where A is a tridiagonal matrix of order N, B and X are N by NRHS matrices, and alpha and beta are real scalars, each of which may be 0., 1., or -1. More...
 
interface  lagts
 LAGTS: may be used to solve one of the systems of equations (T - lambda*I)*x = y or (T - lambda*I)**T*x = y, where T is an n by n tridiagonal matrix, for x, following the factorization of (T - lambda*I) as (T - lambda*I) = P*L*U , by routine DLAGTF. The choice of equation to be solved is controlled by the argument JOB, and in each case there is an option to perturb zero or very small diagonal elements of U, this option being intended for use in applications such as inverse iteration. More...
 
interface  lahef
 LAHEF: computes a partial factorization of a complex Hermitian matrix A using the Bunch-Kaufman diagonal pivoting method. The partial factorization has the form: A = [ I U12; 0 U22 ] * [ A11 0; 0 D ] * [ I 0; U12**H U22**H ] if UPLO = 'U', or A = [ L11 0; L21 I ] * [ D 0; 0 A22 ] * [ L11**H L21**H; 0 I ] if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. Note that U**H denotes the conjugate transpose of U. LAHEF is an auxiliary routine called by CHETRF. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  lahef_aa
 LAHEF_AA: factorizes a panel of a complex Hermitian matrix A using Aasen's algorithm. The panel consists of a set of NB rows of A when UPLO is U, or a set of NB columns when UPLO is L. In order to factorize the panel, Aasen's algorithm requires the last row, or column, of the previous panel. The first row, or column, of A is set to be the first row, or column, of an identity matrix, which is used to factorize the first panel. The resulting J-th row of U, or J-th column of L, is stored in the (J-1)-th row, or column, of A (without the unit diagonals), while the diagonal and subdiagonal of A are overwritten by those of T. More...
 
interface  lahef_rk
 LAHEF_RK: computes a partial factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method. The partial factorization has the form: A = [ I U12; 0 U22 ] * [ A11 0; 0 D ] * [ I 0; U12**H U22**H ] if UPLO = 'U', or A = [ L11 0; L21 I ] * [ D 0; 0 A22 ] * [ L11**H L21**H; 0 I ] if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. LAHEF_RK is an auxiliary routine called by CHETRF_RK. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  lahef_rook
 LAHEF_ROOK: computes a partial factorization of a complex Hermitian matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The partial factorization has the form: A = [ I U12; 0 U22 ] * [ A11 0; 0 D ] * [ I 0; U12**H U22**H ] if UPLO = 'U', or A = [ L11 0; L21 I ] * [ D 0; 0 A22 ] * [ L11**H L21**H; 0 I ] if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. Note that U**H denotes the conjugate transpose of U. LAHEF_ROOK is an auxiliary routine called by CHETRF_ROOK. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  lahqr
 LAHQR: is an auxiliary routine called by CHSEQR to update the eigenvalues and Schur decomposition already computed by CHSEQR, by dealing with the Hessenberg submatrix in rows and columns ILO to IHI. More...
 
interface  laic1
 LAIC1: applies one step of incremental condition estimation in its simplest version: Let x, with twonorm(x) = 1, be an approximate singular vector of a j-by-j lower triangular matrix L, such that twonorm(L*x) = sest. Then LAIC1 computes sestpr, s, c such that the vector xhat = [ s*x; c ] is an approximate singular vector of Lhat = [ L 0; w**H gamma ], in the sense that twonorm(Lhat*xhat) = sestpr. Depending on JOB, an estimate for the largest or smallest singular value is computed. Note that [ s c ]**H and sestpr**2 form an eigenpair of the system diag(sest*sest, 0) + [ alpha gamma ] * [ conjg(alpha); conjg(gamma) ], where alpha = x**H*w. More...
 
interface  laisnan
 This routine is not for general use. It exists solely to avoid over-optimization in DISNAN. LAISNAN: checks for NaNs by comparing its two arguments for inequality. NaN is the only floating-point value where NaN != NaN returns .TRUE. To check for NaNs, pass the same variable as both arguments. A compiler must assume that the two arguments are not the same variable, and the test will not be optimized away. Interprocedural or whole-program optimization may delete this test. The ISNAN functions will be replaced by the correct Fortran 2003 intrinsic once the intrinsic is widely available. More...
 
interface  lals0
 LALS0: applies back the multiplying factors of either the left or the right singular vector matrix of a diagonal matrix appended by a row to the right hand side matrix B in solving the least squares problem using the divide-and-conquer SVD approach. For the left singular vector matrix, three types of orthogonal matrices are involved: (1L) Givens rotations: the number of such rotations is GIVPTR; the pairs of columns/rows they were applied to are stored in GIVCOL; and the C- and S-values of these rotations are stored in GIVNUM. (2L) Permutation. The (NL+1)-st row of B is to be moved to the first row, and for J=2:N, PERM(J)-th row of B is to be moved to the J-th row. (3L) The left singular vector matrix of the remaining matrix. For the right singular vector matrix, four types of orthogonal matrices are involved: (1R) The right singular vector matrix of the remaining matrix. (2R) If SQRE = 1, one extra Givens rotation to generate the right null space. (3R) The inverse transformation of (2L). (4R) The inverse transformation of (1L). More...
 
interface  lalsa
 LALSA: is an intermediate step in solving the least squares problem by computing the SVD of the coefficient matrix in compact form (the singular vectors are computed as products of simple orthogonal matrices). If ICOMPQ = 0, LALSA applies the inverse of the left singular vector matrix of an upper bidiagonal matrix to the right hand side; and if ICOMPQ = 1, LALSA applies the right singular vector matrix to the right hand side. The singular vector matrices were generated in compact form by LALSA. More...
 
interface  lalsd
 LALSD: uses the singular value decomposition of A to solve the least squares problem of finding X to minimize the Euclidean norm of each column of A*X-B, where A is N-by-N upper bidiagonal, and X and B are N-by-NRHS. The solution X overwrites B. The singular values of A smaller than RCOND times the largest singular value are treated as zero in solving the least squares problem; in this case a minimum norm solution is returned. The actual singular values are returned in D in ascending order. This code makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  lamrg
 LAMRG: will create a permutation list which will merge the elements of A (which is composed of two independently sorted sets) into a single set which is sorted in ascending order. More...
 
interface  lamswlq
 LAMSWLQ: overwrites the general complex M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**H * C (SIDE = 'L', TRANS = 'C'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**H (SIDE = 'R', TRANS = 'C'), where Q is a complex unitary matrix defined as the product of blocked elementary reflectors computed by short wide LQ factorization (CLASWLQ). More...
 
interface  lamtsqr
 LAMTSQR: overwrites the general complex M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**H * C (SIDE = 'L', TRANS = 'C'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**H (SIDE = 'R', TRANS = 'C'), where Q is a complex unitary matrix defined as the product of blocked elementary reflectors computed by tall skinny QR factorization (CLATSQR). More...
 
interface  laneg
 LANEG: computes the Sturm count, the number of negative pivots encountered while factoring tridiagonal T - sigma I = L D L^T. This implementation works directly on the factors without forming the tridiagonal matrix T. The Sturm count is also the number of eigenvalues of T less than sigma. This routine is called from DLARRB. The current routine does not use the PIVMIN parameter but rather requires IEEE-754 propagation of Infinities and NaNs. This routine also has no input range restrictions but does require default exception handling such that x/0 produces Inf when x is non-zero, and Inf/Inf produces NaN. For more information, see: Marques, Riedy, and Voemel, "Benefits of IEEE-754 Features in Modern Symmetric Tridiagonal Eigensolvers," SIAM Journal on Scientific Computing, v28, n5, 2006. DOI 10.1137/050641624 (Tech report version in LAWN 172 with the same title.) More...
 
interface  langb
 LANGB: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n band matrix A, with kl sub-diagonals and ku super-diagonals. More...
 
interface  lange
 LANGE: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex matrix A. More...
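A minimal sketch, assuming the generic lange interface is a function mirroring the reference CLANGE argument list (NORM, M, N, A, LDA, WORK), where WORK is referenced only for the infinity norm; the program and data are hypothetical.

```fortran
! Hypothetical sketch: one-norm and max-abs of a small complex matrix.
program demo_lange
   use la_lapack, only: lange
   implicit none
   integer, parameter :: sp = kind(1.0e0), m = 2, n = 2
   complex(sp) :: a(m,n)
   real(sp)    :: work(m), anrm1, anrmm
   a = reshape([(1.0,0.0), (0.0,-2.0), (3.0,0.0), (0.0,4.0)], [m,n])
   anrm1 = lange('1', m, n, a, m, work)   ! maximum absolute column sum
   anrmm = lange('M', m, n, a, m, work)   ! largest absolute entry
   print *, 'one-norm =', anrm1, ' max-abs =', anrmm
end program demo_lange
```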
 
interface  langt
 LANGT: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex tridiagonal matrix A. More...
 
interface  lanhb
 LANHB: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n hermitian band matrix A, with k super-diagonals. More...
 
interface  lanhe
 LANHE: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex hermitian matrix A. More...
 
interface  lanhf
 LANHF: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex Hermitian matrix A in RFP format. More...
 
interface  lanhp
 LANHP: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex hermitian matrix A, supplied in packed form. More...
 
interface  lanhs
 LANHS: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg matrix A. More...
 
interface  lanht
 LANHT: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex Hermitian tridiagonal matrix A. More...
 
interface  lansb
 LANSB: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n symmetric band matrix A, with k super-diagonals. More...
 
interface  lansf
 LANSF: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric matrix A in RFP format. More...
 
interface  lansp
 LANSP: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex symmetric matrix A, supplied in packed form. More...
 
interface  lanst
 LANST: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric tridiagonal matrix A. More...
 
interface  lansy
 LANSY: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a complex symmetric matrix A. More...
 
interface  lantb
 LANTB: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n triangular band matrix A, with ( k + 1 ) diagonals. More...
 
interface  lantp
 LANTP: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a triangular matrix A, supplied in packed form. More...
 
interface  lantr
 LANTR: returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular matrix A. More...
 
interface  laorhr_col_getrfnp
 LAORHR_COL_GETRFNP: computes the modified LU factorization without pivoting of a real general M-by-N matrix A. The factorization has the form: A - S = L * U, where: S is an m-by-n diagonal sign matrix with the diagonal D, so that D(i) = S(i,i), 1 <= i <= min(M,N). The diagonal D is constructed as D(i)=-SIGN(A(i,i)), where A(i,i) is the value after performing i-1 steps of Gaussian elimination. This means that the diagonal element at each step of "modified" Gaussian elimination is at least one in absolute value (so that division-by-zero is not possible during the division by the diagonal element); L is an M-by-N lower triangular matrix with unit diagonal elements (lower trapezoidal if M > N); and U is an M-by-N upper triangular matrix (upper trapezoidal if M < N). This routine is an auxiliary routine used in the Householder reconstruction routine DORHR_COL. In DORHR_COL, this routine is applied to an M-by-N matrix A with orthonormal columns, where each element is bounded by one in absolute value. With the choice of the matrix S above, one can show that the diagonal element at each step of Gaussian elimination is the largest (in absolute value) in the column on or below the diagonal, so that no pivoting is required for numerical stability [1]. For more details on the Householder reconstruction algorithm, including the modified LU factorization, see [1]. This is the blocked right-looking version of the algorithm, calling Level 3 BLAS to update the submatrix. To factorize a block, this routine calls the recursive routine LAORHR_COL_GETRFNP2. [1] "Reconstructing Householder vectors from tall-skinny QR", G. Ballard, J. Demmel, L. Grigori, M. Jacquelin, H.D. Nguyen, E. Solomonik, J. Parallel Distrib. Comput., vol. 85, pp. 3-31, 2015. More...
 
interface  laorhr_col_getrfnp2
 LAORHR_COL_GETRFNP2: computes the modified LU factorization without pivoting of a real general M-by-N matrix A. The factorization has the form: A - S = L * U, where: S is an m-by-n diagonal sign matrix with the diagonal D, so that D(i) = S(i,i), 1 <= i <= min(M,N). The diagonal D is constructed as D(i)=-SIGN(A(i,i)), where A(i,i) is the value after performing i-1 steps of Gaussian elimination. This means that the diagonal element at each step of "modified" Gaussian elimination is at least one in absolute value (so that division-by-zero is not possible during the division by the diagonal element); L is an M-by-N lower triangular matrix with unit diagonal elements (lower trapezoidal if M > N); and U is an M-by-N upper triangular matrix (upper trapezoidal if M < N). This routine is an auxiliary routine used in the Householder reconstruction routine DORHR_COL. In DORHR_COL, this routine is applied to an M-by-N matrix A with orthonormal columns, where each element is bounded by one in absolute value. With the choice of the matrix S above, one can show that the diagonal element at each step of Gaussian elimination is the largest (in absolute value) in the column on or below the diagonal, so that no pivoting is required for numerical stability [1]. For more details on the Householder reconstruction algorithm, including the modified LU factorization, see [1]. This is the recursive version of the LU factorization algorithm. Denote A - S by B. The algorithm divides the matrix B into four submatrices: B = [ B11 B12; B21 B22 ], where B11 is n1-by-n1, B21 is (m-n1)-by-n1, B12 is n1-by-n2, and B22 is (m-n1)-by-n2, with n1 = min(m,n)/2 and n2 = n-n1. The subroutine calls itself to factor B11, solves for B21, solves for B12, updates B22, then calls itself to factor B22. For more details on the recursive LU algorithm, see [2]. LAORHR_COL_GETRFNP2 is called to factorize a block by the blocked routine DLAORHR_COL_GETRFNP, which uses blocked code calling Level 3 BLAS to update the submatrix. However, LAORHR_COL_GETRFNP2 is self-sufficient and can be used without DLAORHR_COL_GETRFNP. [1] "Reconstructing Householder vectors from tall-skinny QR", G. Ballard, J. Demmel, L. Grigori, M. Jacquelin, H.D. Nguyen, E. Solomonik, J. Parallel Distrib. Comput., vol. 85, pp. 3-31, 2015. [2] "Recursion leads to automatic variable blocking for dense linear algebra algorithms", F. Gustavson, IBM J. of Res. and Dev., vol. 41, no. 6, pp. 737-755, 1997. More...
 
interface  lapll
 Given two column vectors X and Y, let A = ( X Y ). The subroutine first computes the QR factorization of A = Q*R, and then computes the SVD of the 2-by-2 upper triangular matrix R. The smaller singular value of R is returned in SSMIN, which is used as a measure of the linear dependence of the vectors X and Y. More...
 
interface  lapmr
 LAPMR: rearranges the rows of the M by N matrix X as specified by the permutation K(1),K(2),...,K(M) of the integers 1,...,M. If FORWRD = .TRUE., forward permutation: X(K(I),*) is moved to X(I,*) for I = 1,2,...,M. If FORWRD = .FALSE., backward permutation: X(I,*) is moved to X(K(I),*) for I = 1,2,...,M. More...
 
interface  lapmt
 LAPMT: rearranges the columns of the M by N matrix X as specified by the permutation K(1),K(2),...,K(N) of the integers 1,...,N. If FORWRD = .TRUE., forward permutation: X(*,K(J)) is moved to X(*,J) for J = 1,2,...,N. If FORWRD = .FALSE., backward permutation: X(*,J) is moved to X(*,K(J)) for J = 1,2,...,N. More...
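A minimal sketch of a forward column permutation, assuming the generic lapmt interface mirrors the reference CLAPMT argument list (FORWRD, M, N, X, LDX, K) with a logical FORWRD flag; the program and data are hypothetical.

```fortran
! Hypothetical sketch: permute the columns of a 2x3 complex matrix.
program demo_lapmt
   use la_lapack, only: lapmt
   implicit none
   integer, parameter :: sp = kind(1.0e0), m = 2, n = 3
   complex(sp) :: x(m,n)
   integer     :: k(n) = [3, 1, 2]
   x = reshape([(1.0,0.0), (1.0,0.0), (2.0,0.0), (2.0,0.0), &
                (3.0,0.0), (3.0,0.0)], [m,n])
   call lapmt(.true., m, n, x, m, k)   ! forward: X(:,K(j)) is moved to X(:,j)
   print *, real(x(1,:))               ! expected: 3.0 1.0 2.0
end program demo_lapmt
```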
 
interface  laqgb
 LAQGB: equilibrates a general M by N band matrix A with KL subdiagonals and KU superdiagonals using the row and scaling factors in the vectors R and C. More...
 
interface  laqge
 LAQGE: equilibrates a general M by N matrix A using the row and column scaling factors in the vectors R and C. More...
 
interface  laqhb
 LAQHB: equilibrates a Hermitian band matrix A using the scaling factors in the vector S. More...
 
interface  laqhe
 LAQHE: equilibrates a Hermitian matrix A using the scaling factors in the vector S. More...
 
interface  laqhp
 LAQHP: equilibrates a Hermitian matrix A using the scaling factors in the vector S. More...
 
interface  laqps
 LAQPS: computes a step of QR factorization with column pivoting of a complex M-by-N matrix A by using Blas-3. It tries to factorize NB columns from A starting from the row OFFSET+1, and updates all of the matrix with Blas-3 xGEMM. In some cases, due to catastrophic cancellations, it cannot factorize NB columns. Hence, the actual number of factorized columns is returned in KB. Block A(1:OFFSET,1:N) is accordingly pivoted, but not factorized. More...
 
interface  laqr0
 LAQR0: computes the eigenvalues of a Hessenberg matrix H and, optionally, the matrices T and Z from the Schur decomposition H = Z T Z**H, where T is an upper triangular matrix (the Schur form), and Z is the unitary matrix of Schur vectors. Optionally Z may be postmultiplied into an input unitary matrix Q so that this routine can give the Schur factorization of a matrix A which has been reduced to the Hessenberg form H by the unitary matrix Q: A = Q*H*Q**H = (QZ)*T*(QZ)**H. More...
 
interface  laqr1
 Given a 2-by-2 or 3-by-3 matrix H, LAQR1: sets v to a scalar multiple of the first column of the product K = (H - s1*I)*(H - s2*I), scaling to avoid overflows and most underflows. This is useful for starting double implicit shift bulges in the QR algorithm. More...
 
interface  laqr4
 LAQR4: implements one level of recursion for CLAQR0. It is a complete implementation of the small bulge multi-shift QR algorithm. It may be called by CLAQR0 and, for large enough deflation window size, it may be called by CLAQR3. This subroutine is identical to CLAQR0 except that it calls CLAQR2 instead of CLAQR3. LAQR4 computes the eigenvalues of a Hessenberg matrix H and, optionally, the matrices T and Z from the Schur decomposition H = Z T Z**H, where T is an upper triangular matrix (the Schur form), and Z is the unitary matrix of Schur vectors. Optionally Z may be postmultiplied into an input unitary matrix Q so that this routine can give the Schur factorization of a matrix A which has been reduced to the Hessenberg form H by the unitary matrix Q: A = Q*H*Q**H = (QZ)*T*(QZ)**H. More...
 
interface  laqr5
 LAQR5: performs a single small-bulge multi-shift QR sweep; it is called by CLAQR0. More...
 
interface  laqsb
 LAQSB: equilibrates a symmetric band matrix A using the scaling factors in the vector S. More...
 
interface  laqsp
 LAQSP: equilibrates a symmetric matrix A using the scaling factors in the vector S. More...
 
interface  laqsy
 LAQSY: equilibrates a symmetric matrix A using the scaling factors in the vector S. More...
 
interface  laqtr
 LAQTR: solves the real quasi-triangular system op(T)*p = scale*c, if LREAL = .TRUE., or the complex quasi-triangular system op(T + iB)*(p+iq) = scale*(c+id), if LREAL = .FALSE., in real arithmetic, where T is upper quasi-triangular. If LREAL = .FALSE., then the first diagonal block of T must be 1 by 1, and B is the specially structured matrix whose first row is ( b(1) b(2) ... b(n) ) and whose only other nonzero entries are w in diagonal positions 2 through n. op(A) = A or A**T, where A**T denotes the transpose of matrix A. On input, X = [ c; d ]; on output, X = [ p; q ]. This subroutine is designed for the condition number estimation in routine DTRSNA. More...
 
interface  laqz0
 LAQZ0: computes the eigenvalues of a matrix pair (H,T), where H is an upper Hessenberg matrix and T is upper triangular, using the double-shift QZ method. Matrix pairs of this type are produced by the reduction to generalized upper Hessenberg form of a matrix pair (A,B): A = Q1*H*Z1**H, B = Q1*T*Z1**H, as computed by CGGHRD. If JOB='S', then the Hessenberg-triangular pair (H,T) is also reduced to generalized Schur form, H = Q*S*Z**H, T = Q*P*Z**H, where Q and Z are unitary matrices and P and S are upper triangular matrices. Optionally, the unitary matrix Q from the generalized Schur factorization may be postmultiplied into an input matrix Q1, and the unitary matrix Z may be postmultiplied into an input matrix Z1. If Q1 and Z1 are the unitary matrices from CGGHRD that reduced the matrix pair (A,B) to generalized upper Hessenberg form, then the output matrices Q1*Q and Z1*Z are the unitary factors from the generalized Schur factorization of (A,B): A = (Q1*Q)*S*(Z1*Z)**H, B = (Q1*Q)*P*(Z1*Z)**H. To avoid overflow, eigenvalues of the matrix pair (H,T) (equivalently, of (A,B)) are computed as a pair of values (alpha,beta), where alpha is complex and beta real. If beta is nonzero, lambda = alpha / beta is an eigenvalue of the generalized nonsymmetric eigenvalue problem (GNEP) A*x = lambda*B*x and if alpha is nonzero, mu = beta / alpha is an eigenvalue of the alternate form of the GNEP mu*A*y = B*y. Eigenvalues can be read directly from the generalized Schur form: alpha = S(i,i), beta = P(i,i). Ref: C.B. Moler and G.W. Stewart, "An Algorithm for Generalized Matrix Eigenvalue Problems", SIAM J. Numer. Anal., 10 (1973), pp. 241-256. Ref: B. Kagstrom and D. Kressner, "Multishift Variants of the QZ Algorithm with Aggressive Early Deflation", SIAM J. Numer. Anal., 29 (2006), pp. 199-227. Ref: T. Steel, D. Camps, K. Meerbergen, and R. Vandebril, "A multishift, multipole rational QZ method with aggressive early deflation". More...
 
interface  laqz1
 LAQZ1: chases a 1x1 shift bulge in a matrix pencil down a single position. More...
 
interface  laqz4
 LAQZ4: Executes a single multishift QZ sweep. More...
 
interface  lar1v
 LAR1V: computes the (scaled) r-th column of the inverse of the submatrix in rows B1 through BN of the tridiagonal matrix L D L**T - sigma I. When sigma is close to an eigenvalue, the computed vector is an accurate eigenvector. Usually, r corresponds to the index where the eigenvector is largest in magnitude. The following steps accomplish this computation: (a) Stationary qd transform, L D L**T - sigma I = L(+) D(+) L(+)**T, (b) Progressive qd transform, L D L**T - sigma I = U(-) D(-) U(-)**T, (c) Computation of the diagonal elements of the inverse of L D L**T - sigma I by combining the above transforms, and choosing r as the index where the diagonal of the inverse is (one of the) largest in magnitude. (d) Computation of the (scaled) r-th column of the inverse using the twisted factorization obtained by combining the top part of the stationary and the bottom part of the progressive transform. More...
 
interface  lar2v
 LAR2V: applies a vector of complex plane rotations with real cosines from both sides to a sequence of 2-by-2 complex Hermitian matrices, defined by the elements of the vectors x, y and z. For i = 1,2,...,n: [ x(i) z(i); conjg(z(i)) y(i) ] := [ c(i) conjg(s(i)); -s(i) c(i) ] * [ x(i) z(i); conjg(z(i)) y(i) ] * [ c(i) -conjg(s(i)); s(i) c(i) ]. More...
 
interface  larcm
 LARCM: performs a very simple matrix-matrix multiplication: C := A * B, where A is M by M and real; B is M by N and complex; C is M by N and complex. More...
 
interface  larf
 LARF: applies a complex elementary reflector H to a complex M-by-N matrix C, from either the left or the right. H is represented in the form H = I - tau * v * v**H where tau is a complex scalar and v is a complex vector. If tau = 0, then H is taken to be the unit matrix. To apply H**H (the conjugate transpose of H), supply conjg(tau) instead of tau. More...
 
interface  larfb
 LARFB: applies a complex block reflector H or its transpose H**H to a complex M-by-N matrix C, from either the left or the right. More...
 
interface  larfb_gett
 LARFB_GETT: applies a complex Householder block reflector H from the left to a complex (K+M)-by-N "triangular-pentagonal" matrix composed of two block matrices: an upper trapezoidal K-by-N matrix A stored in the array A, and a rectangular M-by-(N-K) matrix B, stored in the array B. The block reflector H is stored in a compact WY-representation, where the elementary reflectors are in the arrays A, B and T. See Further Details section. More...
 
interface  larfg
 LARFG: generates a complex elementary reflector H of order n, such that H**H * [ alpha; x ] = [ beta; 0 ] and H**H * H = I, where alpha and beta are scalars, with beta real, and x is an (n-1)-element complex vector. H is represented in the form H = I - tau * [ 1; v ] * [ 1 v**H ], where tau is a complex scalar and v is a complex (n-1)-element vector. Note that H is not Hermitian. If the elements of x are all zero and alpha is real, then tau = 0 and H is taken to be the unit matrix. Otherwise 1 <= real(tau) <= 2 and abs(tau-1) <= 1. More...
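A minimal sketch generating one reflector, assuming the generic larfg interface mirrors the reference CLARFG argument list (N, ALPHA, X, INCX, TAU); on exit ALPHA holds beta and X holds v(2:n). The program and data are hypothetical.

```fortran
! Hypothetical sketch: reflector mapping (3, 4i, 0) onto a multiple of e_1.
program demo_larfg
   use la_lapack, only: larfg
   implicit none
   integer, parameter :: sp = kind(1.0e0), n = 3
   complex(sp) :: alpha, x(n-1), tau
   alpha = (3.0,0.0)
   x     = [(0.0,4.0), (0.0,0.0)]
   call larfg(n, alpha, x, 1, tau)
   print *, 'beta =', alpha, ' tau =', tau   ! beta is real, |beta| = 5 here
end program demo_larfg
```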
 
interface  larfgp
 LARFGP: generates a complex elementary reflector H of order n, such that H**H * [ alpha; x ] = [ beta; 0 ] and H**H * H = I, where alpha and beta are scalars, beta is real and non-negative, and x is an (n-1)-element complex vector. H is represented in the form H = I - tau * [ 1; v ] * [ 1 v**H ], where tau is a complex scalar and v is a complex (n-1)-element vector. Note that H is not Hermitian. If the elements of x are all zero and alpha is real, then tau = 0 and H is taken to be the unit matrix. More...
 
interface  larft
 LARFT: forms the triangular factor T of a complex block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; if DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and H = I - V * T * V**H. If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and H = I - V**H * T * V. More...
 
interface  larfy
 LARFY: applies an elementary reflector, or Householder matrix, H, to an n x n Hermitian matrix C, from both the left and the right. H is represented in the form H = I - tau * v * v' where tau is a scalar and v is a vector. If tau is zero, then H is taken to be the unit matrix. More...
 
interface  largv
 LARGV: generates a vector of complex plane rotations with real cosines, determined by elements of the complex vectors x and y. For i = 1,2,...,n: [ c(i) s(i); -conjg(s(i)) c(i) ] * [ x(i); y(i) ] = [ r(i); 0 ], where c(i)**2 + ABS(s(i))**2 = 1. The following conventions are used (these are the same as in CLARTG, but differ from the BLAS1 routine CROTG): If y(i)=0, then c(i)=1 and s(i)=0. If x(i)=0, then c(i)=0 and s(i) is chosen so that r(i) is real. More...
 
interface  larnv
 LARNV: returns a vector of n random complex numbers from a uniform or normal distribution. More...
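A minimal sketch, assuming the generic larnv interface mirrors the reference CLARNV argument list (IDIST, ISEED, N, X), where ISEED holds four integers in [0,4095] with the last entry odd; the program is hypothetical.

```fortran
! Hypothetical sketch: four uniform random complex numbers.
program demo_larnv
   use la_lapack, only: larnv
   implicit none
   integer, parameter :: sp = kind(1.0e0), n = 4
   complex(sp) :: x(n)
   integer :: iseed(4) = [0, 0, 0, 1]   ! ISEED(4) must be odd
   call larnv(1, iseed, n, x)           ! IDIST=1: real/imag parts uniform (0,1)
   print *, x
end program demo_larnv
```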
 
interface  larra
 Compute the splitting points with threshold SPLTOL. LARRA: sets any "small" off-diagonal elements to zero. More...
 
interface  larrb
 Given the relatively robust representation(RRR) L D L^T, LARRB: does "limited" bisection to refine the eigenvalues of L D L^T, W( IFIRST-OFFSET ) through W( ILAST-OFFSET ), to more accuracy. Initial guesses for these eigenvalues are input in W, the corresponding estimate of the error in these guesses and their gaps are input in WERR and WGAP, respectively. During bisection, intervals [left, right] are maintained by storing their mid-points and semi-widths in the arrays W and WERR respectively. More...
 
interface  larrc
 Find the number of eigenvalues of the symmetric tridiagonal matrix T that are in the interval (VL,VU] if JOBT = 'T', and of L D L^T if JOBT = 'L'. More...
 
interface  larrd
 LARRD: computes the eigenvalues of a symmetric tridiagonal matrix T to suitable accuracy. This is an auxiliary code to be called from DSTEMR. The user may ask for all eigenvalues, all eigenvalues in the half-open interval (VL, VU], or the IL-th through IU-th eigenvalues. To avoid overflow, the matrix must be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966. More...
 
interface  larre
 To find the desired eigenvalues of a given real symmetric tridiagonal matrix T, LARRE: sets any "small" off-diagonal elements to zero, and for each unreduced block T_i, it finds (a) a suitable shift at one end of the block's spectrum, (b) the base representation, T_i - sigma_i I = L_i D_i L_i^T, and (c) eigenvalues of each L_i D_i L_i^T. The representations and eigenvalues found are then used by DSTEMR to compute the eigenvectors of T. The accuracy varies depending on whether bisection is used to find a few eigenvalues or the dqds algorithm (subroutine DLASQ2) to compute all and then discard any unwanted ones. As an added benefit, LARRE also outputs the n Gerschgorin intervals for the matrices L_i D_i L_i^T. More...
 
interface  larrf
 Given the initial representation L D L^T and its cluster of close eigenvalues (in a relative measure), W( CLSTRT ), W( CLSTRT+1 ), ... W( CLEND ), LARRF: finds a new relatively robust representation L D L^T - SIGMA I = L(+) D(+) L(+)^T such that at least one of the eigenvalues of L(+) D(+) L(+)^T is relatively isolated. More...
 
interface  larrj
 Given the initial eigenvalue approximations of T, LARRJ: does bisection to refine the eigenvalues of T, W( IFIRST-OFFSET ) through W( ILAST-OFFSET ), to more accuracy. Initial guesses for these eigenvalues are input in W, and the corresponding estimates of the error in these guesses are input in WERR. During bisection, intervals [left, right] are maintained by storing their mid-points and semi-widths in the arrays W and WERR respectively. More...
 
interface  larrk
 LARRK: computes one eigenvalue of a symmetric tridiagonal matrix T to suitable accuracy. This is an auxiliary code to be called from DSTEMR. To avoid overflow, the matrix must be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966. More...
 
interface  larrr
 Perform tests to decide whether the symmetric tridiagonal matrix T warrants expensive computations which guarantee high relative accuracy in the eigenvalues. More...
 
interface  larrv
 LARRV: computes the eigenvectors of the tridiagonal matrix T = L D L**T given L, D and APPROXIMATIONS to the eigenvalues of L D L**T. The input eigenvalues should have been computed by SLARRE. More...
 
interface  lartg
 LARTG: generates a plane rotation so that [ CS SN; -conjg(SN) CS ] * [ F; G ] = [ R; 0 ], where CS is real, SN and R are complex, and CS**2 + ABS(SN)**2 = 1. More...
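A minimal sketch, assuming the generic lartg interface mirrors the reference CLARTG argument list (F, G, C, S, R) with real C and complex S and R; the program and data are hypothetical.

```fortran
! Hypothetical sketch: rotation zeroing the second component of (3, 4).
program demo_lartg
   use la_lapack, only: lartg
   implicit none
   integer, parameter :: sp = kind(1.0e0)
   complex(sp) :: f, g, s, r
   real(sp)    :: c
   f = (3.0,0.0)
   g = (4.0,0.0)
   call lartg(f, g, c, s, r)
   ! [ c s; -conjg(s) c ] * [ f; g ] = [ r; 0 ]
   print *, 'c =', c, ' s =', s, ' r =', r   ! here r = (5,0)
end program demo_lartg
```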
 
interface  lartgp
 LARTGP: generates a plane rotation so that [ CS SN; -SN CS ] * [ F; G ] = [ R; 0 ], where CS**2 + SN**2 = 1. This is a slower, more accurate version of the Level 1 BLAS routine DROTG, with the following other differences: F and G are unchanged on return. If G=0, then CS=(+/-)1 and SN=0. If F=0 and (G .ne. 0), then CS=0 and SN=(+/-)1. The sign is chosen so that R >= 0. More...
 
interface  lartgs
 LARTGS: generates a plane rotation designed to introduce a bulge in Golub-Reinsch-style implicit QR iteration for the bidiagonal SVD problem. X and Y are the top-row entries, and SIGMA is the shift. The computed CS and SN define a plane rotation satisfying [ CS SN; -SN CS ] * [ X^2 - SIGMA; X * Y ] = [ R; 0 ], with R nonnegative. If X^2 - SIGMA and X * Y are 0, then the rotation is by PI/2. More...
 
interface  lartv
 LARTV: applies a vector of complex plane rotations with real cosines to elements of the complex vectors x and y. For i = 1,2,...,n: [ x(i); y(i) ] := [ c(i) s(i); -conjg(s(i)) c(i) ] * [ x(i); y(i) ]. More...
 
interface  laruv
 LARUV: returns a vector of n random real numbers from a uniform (0,1) distribution (n <= 128). This is an auxiliary routine called by DLARNV and ZLARNV. More...
 
interface  larz
 LARZ: applies a complex elementary reflector H to a complex M-by-N matrix C, from either the left or the right. H is represented in the form H = I - tau * v * v**H where tau is a complex scalar and v is a complex vector. If tau = 0, then H is taken to be the unit matrix. To apply H**H (the conjugate transpose of H), supply conjg(tau) instead of tau. H is a product of k elementary reflectors as returned by CTZRZF. More...
 
interface  larzb
 LARZB: applies a complex block reflector H or its transpose H**H to a complex distributed M-by-N matrix C from the left or the right. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. More...
 
interface  larzt
 LARZT: forms the triangular factor T of a complex block reflector H of order > n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; if DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and H = I - V * T * V**H. If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and H = I - V**H * T * V. Currently, only STOREV = 'R' and DIRECT = 'B' are supported. More...
 
interface  lascl
 LASCL: multiplies the M by N complex matrix A by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A may be full, upper triangular, lower triangular, upper Hessenberg, or banded. More...
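A minimal sketch of a safe rescaling, assuming the generic lascl interface mirrors the reference CLASCL argument list (TYPE, KL, KU, CFROM, CTO, M, N, A, LDA, INFO); TYPE='G' selects a general full matrix, in which case KL and KU are ignored. The program and data are hypothetical.

```fortran
! Hypothetical sketch: scale a full complex matrix by cto/cfrom = 1/2.
program demo_lascl
   use la_lapack, only: lascl
   implicit none
   integer, parameter :: sp = kind(1.0e0), n = 2
   complex(sp) :: a(n,n)
   integer :: info
   a = reshape([(1.0,0.0), (2.0,0.0), (3.0,0.0), (4.0,0.0)], [n,n])
   call lascl('G', 0, 0, 2.0_sp, 1.0_sp, n, n, a, n, info)  ! A := (1/2)*A
   if (info == 0) print *, a
end program demo_lascl
```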
 
interface  lasd0
 Using a divide and conquer approach, LASD0: computes the singular value decomposition (SVD) of a real upper bidiagonal N-by-M matrix B with diagonal D and offdiagonal E, where M = N + SQRE. The algorithm computes orthogonal matrices U and VT such that B = U * S * VT. The singular values S are overwritten on D. A related subroutine, DLASDA, computes only the singular values, and optionally, the singular vectors in compact form. More...
 
interface  lasd1
 LASD1: computes the SVD of an upper bidiagonal N-by-M matrix B, where N = NL + NR + 1 and M = N + SQRE. LASD1 is called from DLASD0. A related subroutine DLASD7 handles the case in which the singular values (and the singular vectors in factored form) are desired. LASD1 computes the SVD as follows: B = U(in) * [ D1(in) 0 0 0; Z1**T a Z2**T b; 0 0 D2(in) 0 ] * VT(in) = U(out) * [ D(out) 0 ] * VT(out), where Z**T = (Z1**T a Z2**T b) = u**T VT**T, and u is a vector of dimension M with ALPHA and BETA in the (NL+1)-th and (NL+2)-th entries and zeros elsewhere; and the entry b is empty if SQRE = 0. The left singular vectors of the original matrix are stored in U, and the transpose of the right singular vectors are stored in VT, and the singular values are in D. The algorithm consists of three stages: The first stage consists of deflating the size of the problem when there are multiple singular values or when there are zeros in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine DLASD2. The second stage consists of calculating the updated singular values. This is done by finding the square roots of the roots of the secular equation via the routine DLASD4 (as called by DLASD3). This routine also calculates the singular vectors of the current problem. The final stage consists of computing the updated singular vectors directly using the updated singular values. The singular vectors for the current problem are multiplied with the singular vectors from the overall problem. More...
 
interface  lasd4
This subroutine computes the square root of the I-th updated eigenvalue of a positive symmetric rank-one modification to a positive diagonal matrix whose entries are given as the squares of the corresponding entries in the array d. It is assumed that 0 <= D(i) < D(j) for i < j and that RHO > 0; this is arranged by the calling routine, and is no loss in generality. The rank-one modified system is thus diag(D) * diag(D) + RHO * Z * Z**T, where we assume the Euclidean norm of Z is 1. The method consists of approximating the rational functions in the secular equation by simpler interpolating rational functions. More...
 
interface  lasd5
 This subroutine computes the square root of the I-th eigenvalue of a positive symmetric rank-one modification of a 2-by-2 diagonal matrix diag( D ) * diag( D ) + RHO * Z * transpose(Z) . The diagonal entries in the array D are assumed to satisfy 0 <= D(i) < D(j) for i < j . We also assume RHO > 0 and that the Euclidean norm of the vector Z is one. More...
 
interface  lasd6
LASD6: computes the SVD of an updated upper bidiagonal matrix B obtained by merging two smaller ones by appending a row. This routine is used only for the problem which requires all singular values and optionally singular vector matrices in factored form. B is an N-by-M matrix with N = NL + NR + 1 and M = N + SQRE. A related subroutine, DLASD1, handles the case in which all singular values and singular vectors of the bidiagonal matrix are desired. LASD6 computes the SVD as follows: B = U(in) * ( D1(in) 0 0 0 ; Z1**T a Z2**T b ; 0 0 D2(in) 0 ) * VT(in) = U(out) * ( D(out) 0 ) * VT(out), where Z**T = ( Z1**T a Z2**T b ) = u**T VT**T, and u is a vector of dimension M with ALPHA and BETA in the (NL+1)-th and (NL+2)-th entries and zeros elsewhere; the entry b is empty if SQRE = 0. The singular values of B can be computed using D1, D2, the first components of all the right singular vectors of the lower block, and the last components of all the right singular vectors of the upper block. These components are stored and updated in VF and VL, respectively, in LASD6. Hence U and VT are not explicitly referenced. The singular values are stored in D. The algorithm consists of two stages: The first stage consists of deflating the size of the problem when there are multiple singular values or if there is a zero in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine DLASD7. The second stage consists of calculating the updated singular values. This is done by finding the roots of the secular equation via the routine DLASD4 (as called by DLASD8). This routine also updates VF and VL and computes the distances between the updated singular values and the old singular values. LASD6 is called from DLASDA. More...
 
interface  lasd7
 LASD7: merges the two sets of singular values together into a single sorted set. Then it tries to deflate the size of the problem. There are two ways in which deflation can occur: when two or more singular values are close together or if there is a tiny entry in the Z vector. For each such occurrence the order of the related secular equation problem is reduced by one. LASD7 is called from DLASD6. More...
 
interface  lasd8
 LASD8: finds the square roots of the roots of the secular equation, as defined by the values in DSIGMA and Z. It makes the appropriate calls to DLASD4, and stores, for each element in D, the distance to its two nearest poles (elements in DSIGMA). It also updates the arrays VF and VL, the first and last components of all the right singular vectors of the original bidiagonal matrix. LASD8 is called from DLASD6. More...
 
interface  lasda
 Using a divide and conquer approach, LASDA: computes the singular value decomposition (SVD) of a real upper bidiagonal N-by-M matrix B with diagonal D and offdiagonal E, where M = N + SQRE. The algorithm computes the singular values in the SVD B = U * S * VT. The orthogonal matrices U and VT are optionally computed in compact form. A related subroutine, DLASD0, computes the singular values and the singular vectors in explicit form. More...
 
interface  lasdq
 LASDQ: computes the singular value decomposition (SVD) of a real (upper or lower) bidiagonal matrix with diagonal D and offdiagonal E, accumulating the transformations if desired. Letting B denote the input bidiagonal matrix, the algorithm computes orthogonal matrices Q and P such that B = Q * S * P**T (P**T denotes the transpose of P). The singular values S are overwritten on D. The input matrix U is changed to U * Q if desired. The input matrix VT is changed to P**T * VT if desired. The input matrix C is changed to Q**T * C if desired. See "Computing Small Singular Values of Bidiagonal Matrices With Guaranteed High Relative Accuracy," by J. Demmel and W. Kahan, LAPACK Working Note #3, for a detailed description of the algorithm. More...
 
interface  laset
 LASET: initializes a 2-D array A to BETA on the diagonal and ALPHA on the offdiagonals. More...
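A short sketch (assuming the generic laset interface follows LAPACK's DLASET argument list UPLO, M, N, ALPHA, BETA, A, LDA), building an identity matrix:

    program demo_laset
       use la_lapack, only: laset
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(4,4)
       integer  :: i
       ! Any UPLO other than 'U' or 'L' sets the full matrix:
       ! ALPHA = 0 off the diagonal, BETA = 1 on it, i.e. the identity.
       call laset('A', 4, 4, 0.0_dp, 1.0_dp, a, 4)
       do i = 1, 4
          print '(4f6.2)', a(i,:)
       end do
    end program demo_laset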
 
interface  lasq1
 LASQ1: computes the singular values of a real N-by-N bidiagonal matrix with diagonal D and off-diagonal E. The singular values are computed to high relative accuracy, in the absence of denormalization, underflow and overflow. The algorithm was first presented in "Accurate singular values and differential qd algorithms" by K. V. Fernando and B. N. Parlett, Numer. Math., Vol-67, No. 2, pp. 191-230, 1994, and the present implementation is described in "An implementation of the dqds Algorithm (Positive Case)", LAPACK Working Note. More...
 
interface  lasq4
 LASQ4: computes an approximation TAU to the smallest eigenvalue using values of d from the previous transform. More...
 
interface  lasq5
LASQ5: computes one dqds transform in ping-pong form, with one version for IEEE machines and another for non-IEEE machines. More...
 
interface  lasq6
 LASQ6: computes one dqd (shift equal to zero) transform in ping-pong form, with protection against underflow and overflow. More...
 
interface  lasr
LASR: applies a sequence of real plane rotations to a complex matrix A, from either the left or the right. When SIDE = 'L', the transformation takes the form A := P*A and when SIDE = 'R', the transformation takes the form A := A*P**T where P is an orthogonal matrix consisting of a sequence of z plane rotations, with z = M when SIDE = 'L' and z = N when SIDE = 'R', and P**T is the transpose of P. When DIRECT = 'F' (Forward sequence), then P = P(z-1) * ... * P(2) * P(1) and when DIRECT = 'B' (Backward sequence), then P = P(1) * P(2) * ... * P(z-1) where P(k) is a plane rotation matrix defined by the 2-by-2 rotation R(k) = ( c(k) s(k) ; -s(k) c(k) ). When PIVOT = 'V' (Variable pivot), the rotation is performed for the plane (k,k+1), i.e. P(k) is the identity with R(k) appearing as a rank-2 modification in rows and columns k and k+1. When PIVOT = 'T' (Top pivot), the rotation is performed for the plane (1,k+1), so R(k) appears in rows and columns 1 and k+1 of the otherwise-identity P(k). Similarly, when PIVOT = 'B' (Bottom pivot), the rotation is performed for the plane (k,z), and R(k) appears in rows and columns k and z. The rotations are performed without ever forming P(k) explicitly. More...
 
interface  lasrt
 Sort the numbers in D in increasing order (if ID = 'I') or in decreasing order (if ID = 'D' ). Use Quick Sort, reverting to Insertion sort on arrays of size <= 20. Dimension of STACK limits N to about 2**32. More...
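A minimal sketch (assuming the generic lasrt interface matches LAPACK's DLASRT argument list ID, N, D, INFO):

    program demo_lasrt
       use la_lapack, only: lasrt
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: d(5) = [3.0_dp, 1.0_dp, 2.0_dp, 5.0_dp, 4.0_dp]
       integer  :: info
       call lasrt('I', 5, d, info)   ! 'I' = increasing, 'D' = decreasing
       print *, d                    ! 1 2 3 4 5
    end program demo_lasrt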
 
interface  lassq
LASSQ: returns the values scl and smsq such that (scl**2)*smsq = x(1)**2 + ... + x(n)**2 + (scale**2)*sumsq, i.e. it updates a scaled sum of squares in a way that avoids overflow and underflow. On exit, SCALE and SUMSQ are overwritten with scl and smsq, respectively. More...
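The scaled representation is what makes this safe for extreme values; a sketch (assuming the generic lassq interface matches LAPACK's DLASSQ argument list N, X, INCX, SCALE, SUMSQ):

    program demo_lassq
       use la_lapack, only: lassq
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: x(3) = [3.0e200_dp, 4.0e200_dp, 0.0_dp]
       real(dp) :: scl, sumsq
       scl   = 0.0_dp   ! conventional start for a fresh sum of squares
       sumsq = 1.0_dp
       call lassq(3, x, 1, scl, sumsq)
       ! x(1)**2 alone would overflow in double precision; the scaled form does not
       print *, '||x||_2 =', scl*sqrt(sumsq)   ! expect 5.0e200
    end program demo_lassq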
 
interface  laswlq
LASWLQ: computes a blocked Tall-Skinny LQ factorization of a complex M-by-N matrix A for M <= N: A = ( L 0 ) * Q, where: Q is an N-by-N orthogonal matrix, stored on exit in an implicit form in the elements above the diagonal of the array A and in the elements of the array T; L is a lower-triangular M-by-M matrix stored on exit in the elements on and below the diagonal of the array A. 0 is an M-by-(N-M) zero matrix, if M < N, and is not stored. More...
 
interface  laswp
 LASWP: performs a series of row interchanges on the matrix A. One row interchange is initiated for each of rows K1 through K2 of A. More...
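A sketch of applying a pivot vector (assuming the generic laswp interface matches LAPACK's DLASWP argument list N, A, LDA, K1, K2, IPIV, INCX):

    program demo_laswp
       use la_lapack, only: laswp
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(3,2)
       integer  :: ipiv(3) = [3, 2, 3], i
       a = reshape([(real(i,dp), i = 1, 6)], [3,2])
       ! For i = K1..K2, row i is swapped with row ipiv(i):
       ! here only rows 1 and 3 are interchanged.
       call laswp(2, a, 3, 1, 3, ipiv, 1)
       do i = 1, 3
          print '(2f6.1)', a(i,:)
       end do
    end program demo_laswp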
 
interface  lasyf
LASYF: computes a partial factorization of a complex symmetric matrix A using the Bunch-Kaufman diagonal pivoting method. The partial factorization has the form: A = ( I U12 ; 0 U22 ) * ( A11 0 ; 0 D ) * ( I 0 ; U12**T U22**T ) if UPLO = 'U', or A = ( L11 0 ; L21 I ) * ( D 0 ; 0 A22 ) * ( L11**T L21**T ; 0 I ) if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. Note that U**T denotes the transpose of U. LASYF is an auxiliary routine called by CSYTRF. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  lasyf_aa
LASYF_AA: factorizes a panel of a complex symmetric matrix A using Aasen's algorithm. The panel consists of a set of NB rows of A when UPLO is U, or a set of NB columns when UPLO is L. In order to factorize the panel, Aasen's algorithm requires the last row, or column, of the previous panel. The first row, or column, of A is set to be the first row, or column, of an identity matrix, which is used to factorize the first panel. The resulting J-th row of U, or J-th column of L, is stored in the (J-1)-th row, or column, of A (without the unit diagonals), while the diagonal and subdiagonal of A are overwritten by those of T. More...
 
interface  lasyf_rk
LASYF_RK: computes a partial factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method. The partial factorization has the form: A = ( I U12 ; 0 U22 ) * ( A11 0 ; 0 D ) * ( I 0 ; U12**T U22**T ) if UPLO = 'U', or A = ( L11 0 ; L21 I ) * ( D 0 ; 0 A22 ) * ( L11**T L21**T ; 0 I ) if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. LASYF_RK is an auxiliary routine called by CSYTRF_RK. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  lasyf_rook
LASYF_ROOK: computes a partial factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The partial factorization has the form: A = ( I U12 ; 0 U22 ) * ( A11 0 ; 0 D ) * ( I 0 ; U12**T U22**T ) if UPLO = 'U', or A = ( L11 0 ; L21 I ) * ( D 0 ; 0 A22 ) * ( L11**T L21**T ; 0 I ) if UPLO = 'L', where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. LASYF_ROOK is an auxiliary routine called by CSYTRF_ROOK. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L'). More...
 
interface  latbs
 LATBS: solves one of the triangular systems A * x = s*b, A**T * x = s*b, or A**H * x = s*b, with scaling to prevent overflow, where A is an upper or lower triangular band matrix. Here A**T denotes the transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine CTBSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned. More...
 
interface  latdf
LATDF: computes the contribution to the reciprocal Dif-estimate by solving for x in Z * x = b, where b is chosen such that the norm of x is as large as possible. It is assumed that the LU decomposition of Z has been computed by CGETC2. On entry RHS = f holds the contribution from earlier solved sub-systems, and on return RHS = x. The factorization of Z returned by CGETC2 has the form Z = P * L * U * Q, where P and Q are permutation matrices. L is lower triangular with unit diagonal elements and U is upper triangular. More...
 
interface  latps
 LATPS: solves one of the triangular systems A * x = s*b, A**T * x = s*b, or A**H * x = s*b, with scaling to prevent overflow, where A is an upper or lower triangular matrix stored in packed form. Here A**T denotes the transpose of A, A**H denotes the conjugate transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine CTPSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned. More...
 
interface  latrd
 LATRD: reduces NB rows and columns of a complex Hermitian matrix A to Hermitian tridiagonal form by a unitary similarity transformation Q**H * A * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of A. If UPLO = 'U', LATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; if UPLO = 'L', LATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. This is an auxiliary routine called by CHETRD. More...
 
interface  latrs
 LATRS: solves one of the triangular systems A * x = s*b, A**T * x = s*b, or A**H * x = s*b, with scaling to prevent overflow. Here A is an upper or lower triangular matrix, A**T denotes the transpose of A, A**H denotes the conjugate transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine CTRSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned. More...
 
interface  latrz
LATRZ: factors the M-by-(M+L) complex upper trapezoidal matrix [ A1 A2 ] = [ A(1:M,1:M) A(1:M,N-L+1:N) ] as ( R 0 ) * Z by means of unitary transformations, where Z is an (M+L)-by-(M+L) unitary matrix and R and A1 are M-by-M upper triangular matrices. More...
 
interface  latsqr
 LATSQR: computes a blocked Tall-Skinny QR factorization of a complex M-by-N matrix A for M >= N: A = Q * ( R ), ( 0 ) where: Q is a M-by-M orthogonal matrix, stored on exit in an implicit form in the elements below the diagonal of the array A and in the elements of the array T; R is an upper-triangular N-by-N matrix, stored on exit in the elements on and above the diagonal of the array A. 0 is a (M-N)-by-N zero matrix, and is not stored. More...
 
interface  launhr_col_getrfnp
LAUNHR_COL_GETRFNP: computes the modified LU factorization without pivoting of a complex general M-by-N matrix A. The factorization has the form: A - S = L * U, where: S is an m-by-n diagonal sign matrix with the diagonal D, so that D(i) = S(i,i), 1 <= i <= min(M,N). The diagonal D is constructed as D(i)=-SIGN(A(i,i)), where A(i,i) is the value after performing i-1 steps of Gaussian elimination. This means that the diagonal element at each step of "modified" Gaussian elimination is at least one in absolute value (so that division-by-zero is not possible during the division by the diagonal element); L is an M-by-N lower triangular matrix with unit diagonal elements (lower trapezoidal if M > N); and U is an M-by-N upper triangular matrix (upper trapezoidal if M < N). This routine is an auxiliary routine used in the Householder reconstruction routine CUNHR_COL. In CUNHR_COL, this routine is applied to an M-by-N matrix A with orthonormal columns, where each element is bounded by one in absolute value. With the choice of the matrix S above, one can show that the diagonal element at each step of Gaussian elimination is the largest (in absolute value) in the column on or below the diagonal, so that no pivoting is required for numerical stability [1]. For more details on the Householder reconstruction algorithm, including the modified LU factorization, see [1]. This is the blocked right-looking version of the algorithm, calling Level 3 BLAS to update the submatrix. To factorize a block, this routine calls the recursive routine LAUNHR_COL_GETRFNP2. [1] "Reconstructing Householder vectors from tall-skinny QR", G. Ballard, J. Demmel, L. Grigori, M. Jacquelin, H.D. Nguyen, E. Solomonik, J. Parallel Distrib. Comput., vol. 85, pp. 3-31, 2015. More...
 
interface  launhr_col_getrfnp2
LAUNHR_COL_GETRFNP2: computes the modified LU factorization without pivoting of a complex general M-by-N matrix A. The factorization has the form: A - S = L * U, where: S is an m-by-n diagonal sign matrix with the diagonal D, so that D(i) = S(i,i), 1 <= i <= min(M,N). The diagonal D is constructed as D(i)=-SIGN(A(i,i)), where A(i,i) is the value after performing i-1 steps of Gaussian elimination. This means that the diagonal element at each step of "modified" Gaussian elimination is at least one in absolute value (so that division-by-zero is not possible during the division by the diagonal element); L is an M-by-N lower triangular matrix with unit diagonal elements (lower trapezoidal if M > N); and U is an M-by-N upper triangular matrix (upper trapezoidal if M < N). This routine is an auxiliary routine used in the Householder reconstruction routine CUNHR_COL. In CUNHR_COL, this routine is applied to an M-by-N matrix A with orthonormal columns, where each element is bounded by one in absolute value. With the choice of the matrix S above, one can show that the diagonal element at each step of Gaussian elimination is the largest (in absolute value) in the column on or below the diagonal, so that no pivoting is required for numerical stability [1]. For more details on the Householder reconstruction algorithm, including the modified LU factorization, see [1]. This is the recursive version of the LU factorization algorithm. Denote A - S by B. The algorithm divides the matrix B into four submatrices: B = ( B11 B12 ; B21 B22 ), where B11 is n1-by-n1, B21 is (m-n1)-by-n1, B12 is n1-by-n2, and B22 is (m-n1)-by-n2, with n1 = min(m,n)/2 and n2 = n-n1. The subroutine calls itself to factor B11, solves for B21, solves for B12, updates B22, then calls itself to factor B22. For more details on the recursive LU algorithm, see [2]. LAUNHR_COL_GETRFNP2 is called to factorize a block by the blocked routine CLAUNHR_COL_GETRFNP, which uses blocked code calling Level 3 BLAS to update the submatrix. However, LAUNHR_COL_GETRFNP2 is self-sufficient and can be used without CLAUNHR_COL_GETRFNP. [1] "Reconstructing Householder vectors from tall-skinny QR", G. Ballard, J. Demmel, L. Grigori, M. Jacquelin, H.D. Nguyen, E. Solomonik, J. Parallel Distrib. Comput., vol. 85, pp. 3-31, 2015. [2] "Recursion leads to automatic variable blocking for dense linear algebra algorithms", F. Gustavson, IBM J. of Res. and Dev., vol. 41, no. 6, pp. 737-755, 1997. More...
 
interface  lauum
 LAUUM: computes the product U * U**H or L**H * L, where the triangular factor U or L is stored in the upper or lower triangular part of the array A. If UPLO = 'U' or 'u' then the upper triangle of the result is stored, overwriting the factor U in A. If UPLO = 'L' or 'l' then the lower triangle of the result is stored, overwriting the factor L in A. This is the blocked form of the algorithm, calling Level 3 BLAS. More...
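A small sketch (assuming the generic lauum interface matches LAPACK's DLAUUM argument list UPLO, N, A, LDA, INFO; in the real case the product is U * U**T):

    program demo_lauum
       use la_lapack, only: lauum
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(2,2)
       integer  :: info
       ! Upper factor U = [2 1; 0 3] stored in the upper triangle of A.
       a = reshape([2.0_dp, 0.0_dp, 1.0_dp, 3.0_dp], [2,2])
       call lauum('U', 2, a, 2, info)
       ! The upper triangle of A now holds U*U**T = [5 3; 3 9].
       print *, 'info =', info, a(1,1), a(1,2), a(2,2)
    end program demo_lauum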
 
interface  opgtr
 OPGTR: generates a real orthogonal matrix Q which is defined as the product of n-1 elementary reflectors H(i) of order n, as returned by DSPTRD using packed storage: if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), if UPLO = 'L', Q = H(1) H(2) . . . H(n-1). More...
 
interface  opmtr
OPMTR: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by DSPTRD using packed storage: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). More...
 
interface  orbdb
ORBDB: simultaneously bidiagonalizes the blocks of an M-by-M partitioned orthogonal matrix X: X = ( X11 X12 ; X21 X22 ) = ( P1 0 ; 0 P2 ) * ( B11 B12 0 0 ; 0 0 -I 0 ; B21 B22 0 0 ; 0 0 0 I ) * ( Q1 0 ; 0 Q2 )**T. X11 is P-by-Q. Q must be no larger than P, M-P, or M-Q. (If this is not the case, then X must be transposed and/or permuted. This can be done in constant time using the TRANS and SIGNS options. See DORCSD for details.) The orthogonal matrices P1, P2, Q1, and Q2 are P-by-P, (M-P)-by-(M-P), Q-by-Q, and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11, B12, B21, and B22 are Q-by-Q bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  orbdb1
ORBDB1: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns: ( X11 ; X21 ) = ( P1 0 ; 0 P2 ) * ( B11 ; 0 ; B21 ; 0 ) * Q1**T. X11 is P-by-Q, and X21 is (M-P)-by-Q. Q must be no larger than P, M-P, or M-Q. Routines DORBDB2, DORBDB3, and DORBDB4 handle cases in which Q is not the minimum dimension. The orthogonal matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are Q-by-Q bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  orbdb2
ORBDB2: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns: ( X11 ; X21 ) = ( P1 0 ; 0 P2 ) * ( B11 ; 0 ; B21 ; 0 ) * Q1**T. X11 is P-by-Q, and X21 is (M-P)-by-Q. P must be no larger than M-P, Q, or M-Q. Routines DORBDB1, DORBDB3, and DORBDB4 handle cases in which P is not the minimum dimension. The orthogonal matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are P-by-P bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  orbdb3
ORBDB3: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns: ( X11 ; X21 ) = ( P1 0 ; 0 P2 ) * ( B11 ; 0 ; B21 ; 0 ) * Q1**T. X11 is P-by-Q, and X21 is (M-P)-by-Q. M-P must be no larger than P, Q, or M-Q. Routines DORBDB1, DORBDB2, and DORBDB4 handle cases in which M-P is not the minimum dimension. The orthogonal matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are (M-P)-by-(M-P) bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  orbdb4
ORBDB4: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns: ( X11 ; X21 ) = ( P1 0 ; 0 P2 ) * ( B11 ; 0 ; B21 ; 0 ) * Q1**T. X11 is P-by-Q, and X21 is (M-P)-by-Q. M-Q must be no larger than P, M-P, or Q. Routines DORBDB1, DORBDB2, and DORBDB3 handle cases in which M-Q is not the minimum dimension. The orthogonal matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are (M-Q)-by-(M-Q) bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  orbdb5
ORBDB5: orthogonalizes the column vector X = ( X1 ; X2 ) with respect to the columns of Q = ( Q1 ; Q2 ). The columns of Q must be orthonormal. If the projection is zero according to Kahan's "twice is enough" criterion, then some other vector from the orthogonal complement is returned. This vector is chosen in an arbitrary but deterministic way. More...
 
interface  orbdb6
ORBDB6: orthogonalizes the column vector X = ( X1 ; X2 ) with respect to the columns of Q = ( Q1 ; Q2 ). The columns of Q must be orthonormal. If the projection is zero according to Kahan's "twice is enough" criterion, then the zero vector is returned. More...
 
interface  orcsd
ORCSD: computes the CS decomposition of an M-by-M partitioned orthogonal matrix X: X = ( X11 X12 ; X21 X22 ) = ( U1 0 ; 0 U2 ) * ( I 0 0 0 0 0 ; 0 C 0 0 -S 0 ; 0 0 0 0 0 -I ; 0 0 0 I 0 0 ; 0 S 0 0 C 0 ; 0 0 I 0 0 0 ) * ( V1 0 ; 0 V2 )**T. X11 is P-by-Q. The orthogonal matrices U1, U2, V1, and V2 are P-by-P, (M-P)-by-(M-P), Q-by-Q, and (M-Q)-by-(M-Q), respectively. C and S are R-by-R nonnegative diagonal matrices satisfying C^2 + S^2 = I, in which R = MIN(P,M-P,Q,M-Q). More...
 
interface  orcsd2by1
ORCSD2BY1: computes the CS decomposition of an M-by-Q matrix X with orthonormal columns that has been partitioned into a 2-by-1 block structure: X = ( X11 ; X21 ) = ( U1 0 ; 0 U2 ) * ( I1 0 0 ; 0 C 0 ; 0 0 0 ; 0 0 0 ; 0 S 0 ; 0 0 I2 ) * V1**T. X11 is P-by-Q. The orthogonal matrices U1, U2, and V1 are P-by-P, (M-P)-by-(M-P), and Q-by-Q, respectively. C and S are R-by-R nonnegative diagonal matrices satisfying C^2 + S^2 = I, in which R = MIN(P,M-P,Q,M-Q). I1 is a K1-by-K1 identity matrix and I2 is a K2-by-K2 identity matrix, where K1 = MAX(Q+P-M,0), K2 = MAX(Q-P,0). More...
 
interface  org2l
 ORG2L: generates an m by n real matrix Q with orthonormal columns, which is defined as the last n columns of a product of k elementary reflectors of order m Q = H(k) . . . H(2) H(1) as returned by DGEQLF. More...
 
interface  org2r
 ORG2R: generates an m by n real matrix Q with orthonormal columns, which is defined as the first n columns of a product of k elementary reflectors of order m Q = H(1) H(2) . . . H(k) as returned by DGEQRF. More...
 
interface  orgbr
 ORGBR: generates one of the real orthogonal matrices Q or P**T determined by DGEBRD when reducing a real matrix A to bidiagonal form: A = Q * B * P**T. Q and P**T are defined as products of elementary reflectors H(i) or G(i) respectively. If VECT = 'Q', A is assumed to have been an M-by-K matrix, and Q is of order M: if m >= k, Q = H(1) H(2) . . . H(k) and ORGBR returns the first n columns of Q, where m >= n >= k; if m < k, Q = H(1) H(2) . . . H(m-1) and ORGBR returns Q as an M-by-M matrix. If VECT = 'P', A is assumed to have been a K-by-N matrix, and P**T is of order N: if k < n, P**T = G(k) . . . G(2) G(1) and ORGBR returns the first m rows of P**T, where n >= m >= k; if k >= n, P**T = G(n-1) . . . G(2) G(1) and ORGBR returns P**T as an N-by-N matrix. More...
 
interface  orghr
 ORGHR: generates a real orthogonal matrix Q which is defined as the product of IHI-ILO elementary reflectors of order N, as returned by DGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1). More...
 
interface  orglq
 ORGLQ: generates an M-by-N real matrix Q with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k) . . . H(2) H(1) as returned by DGELQF. More...
 
interface  orgql
 ORGQL: generates an M-by-N real matrix Q with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) . . . H(2) H(1) as returned by DGEQLF. More...
 
interface  orgqr
 ORGQR: generates an M-by-N real matrix Q with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k) as returned by GEQRF. More...
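A typical pairing is geqrf followed by orgqr to form the thin Q explicitly; a sketch (assuming both generic interfaces mirror the LAPACK argument lists of DGEQRF and DORGQR):

    program demo_orgqr
       use la_lapack, only: geqrf, orgqr
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(4,2), tau(2), work(64)
       integer  :: info, i
       a = reshape([(real(i,dp), i = 1, 8)], [4,2])
       call geqrf(4, 2, a, 4, tau, work, size(work), info)    ! A = Q*R, compact form
       call orgqr(4, 2, 2, a, 4, tau, work, size(work), info) ! A := first 2 columns of Q
       do i = 1, 4
          print '(2f10.5)', a(i,:)
       end do
    end program demo_orgqr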
 
interface  orgrq
 ORGRQ: generates an M-by-N real matrix Q with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1) H(2) . . . H(k) as returned by DGERQF. More...
 
interface  orgtr
 ORGTR: generates a real orthogonal matrix Q which is defined as the product of n-1 elementary reflectors of order N, as returned by DSYTRD: if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), if UPLO = 'L', Q = H(1) H(2) . . . H(n-1). More...
 
interface  orgtsqr
 ORGTSQR: generates an M-by-N real matrix Q_out with orthonormal columns, which are the first N columns of a product of real orthogonal matrices of order M which are returned by DLATSQR Q_out = first_N_columns_of( Q(1)_in * Q(2)_in * ... * Q(k)_in ). See the documentation for DLATSQR. More...
 
interface  orgtsqr_row
ORGTSQR_ROW: generates an M-by-N real matrix Q_out with orthonormal columns from the output of DLATSQR. These N orthonormal columns are the first N columns of a product of real orthogonal matrices Q(k)_in of order M, which are returned by DLATSQR in a special format. Q_out = first_N_columns_of( Q(1)_in * Q(2)_in * ... * Q(k)_in ). The input matrices Q(k)_in are stored in row and column blocks in A. See the documentation of DLATSQR for more details on the format of Q(k)_in, where each Q(k)_in is represented by block Householder transformations. This routine calls an auxiliary routine DLARFB_GETT, where the computation is performed on each individual block. The algorithm first sweeps NB-sized column blocks from the right to left starting in the bottom row block and continues to the top row block (hence _ROW in the routine name). This sweep is in reverse order of the order in which DLATSQR generates the output blocks. More...
 
interface  orhr_col
 ORHR_COL: takes an M-by-N real matrix Q_in with orthonormal columns as input, stored in A, and performs Householder Reconstruction (HR), i.e. reconstructs Householder vectors V(i) implicitly representing another M-by-N matrix Q_out, with the property that Q_in = Q_out*S, where S is an N-by-N diagonal matrix with diagonal entries equal to +1 or -1. The Householder vectors (columns V(i) of V) are stored in A on output, and the diagonal entries of S are stored in D. Block reflectors are also returned in T (same output format as DGEQRT). More...
 
interface  orm2l
 ORM2L: overwrites the general real m by n matrix C with Q * C if SIDE = 'L' and TRANS = 'N', or Q**T * C if SIDE = 'L' and TRANS = 'T', or C * Q if SIDE = 'R' and TRANS = 'N', or C * Q**T if SIDE = 'R' and TRANS = 'T', where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(k) . . . H(2) H(1) as returned by DGEQLF. Q is of order m if SIDE = 'L' and of order n if SIDE = 'R'. More...
 
interface  orm2r
 ORM2R: overwrites the general real m by n matrix C with Q * C if SIDE = 'L' and TRANS = 'N', or Q**T* C if SIDE = 'L' and TRANS = 'T', or C * Q if SIDE = 'R' and TRANS = 'N', or C * Q**T if SIDE = 'R' and TRANS = 'T', where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by DGEQRF. Q is of order m if SIDE = 'L' and of order n if SIDE = 'R'. More...
 
interface  ormbr
If VECT = 'Q', ORMBR: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'). If VECT = 'P', ORMBR overwrites C with P * C, P**T * C, C * P, or C * P**T under the same SIDE/TRANS conventions. Here Q and P**T are the orthogonal matrices determined by DGEBRD when reducing a real matrix A to bidiagonal form: A = Q * B * P**T. Q and P**T are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the orthogonal matrix Q or P**T that is applied. If VECT = 'Q', A is assumed to have been an NQ-by-K matrix: if nq >= k, Q = H(1) H(2) . . . H(k); if nq < k, Q = H(1) H(2) . . . H(nq-1). If VECT = 'P', A is assumed to have been a K-by-NQ matrix: if k < nq, P = G(1) G(2) . . . G(k); if k >= nq, P = G(1) G(2) . . . G(nq-1). More...
 
interface  ormhr
ORMHR: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by DGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1). More...
 
interface  ormlq
ORMLQ: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(k) . . . H(2) H(1) as returned by DGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  ormql
ORMQL: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(k) . . . H(2) H(1) as returned by DGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  ormqr
ORMQR: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by DGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
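Applying Q**T without forming Q explicitly is the usual least-squares building block; a sketch (assuming the generic geqrf/ormqr interfaces mirror DGEQRF and DORMQR):

    program demo_ormqr
       use la_lapack, only: geqrf, ormqr
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(3,3), c(3,1), tau(3), work(64)
       integer  :: info, i
       a = reshape([(real(i*i,dp), i = 1, 9)], [3,3])
       c = reshape([1.0_dp, 0.0_dp, 0.0_dp], [3,1])
       call geqrf(3, 3, a, 3, tau, work, size(work), info)
       ! c := Q**T * c, using the reflectors held in a and tau
       call ormqr('L', 'T', 3, 1, 3, a, 3, tau, c, 3, work, size(work), info)
       print *, c(:,1)
    end program demo_ormqr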
 
interface  ormrq
ORMRQ: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by DGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  ormrz
ORMRZ: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by DTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  ormtr
ORMTR: overwrites the general real M-by-N matrix C with Q * C (SIDE = 'L', TRANS = 'N'), Q**T * C (SIDE = 'L', TRANS = 'T'), C * Q (SIDE = 'R', TRANS = 'N'), or C * Q**T (SIDE = 'R', TRANS = 'T'), where Q is a real orthogonal matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by DSYTRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). More...
 
interface  pbcon
 PBCON: estimates the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite band matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPBTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  pbequ
 PBEQU: computes row and column scalings intended to equilibrate a Hermitian positive definite band matrix A and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. More...
 
interface  pbrfs
 PBRFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and banded, and provides error bounds and backward error estimates for the solution. More...
 
interface  pbstf
PBSTF: computes a split Cholesky factorization of a complex Hermitian positive definite band matrix A. This routine is designed to be used in conjunction with CHBGST. The factorization has the form A = S**H*S where S is a band matrix of the same bandwidth as A with the block structure S = ( U 0 ; M L ), in which U is upper triangular of order m = (n+kd)/2, and L is lower triangular of order n-m. More...
 
interface  pbsv
 PBSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian positive definite band matrix and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular band matrix, and L is a lower triangular band matrix, with the same number of superdiagonals or subdiagonals as A. The factored form of A is then used to solve the system of equations A * X = B. More...
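A sketch solving a symmetric positive definite tridiagonal system in band storage (assuming the generic pbsv interface mirrors DPBSV's argument list UPLO, N, KD, NRHS, AB, LDAB, B, LDB, INFO):

    program demo_pbsv
       use la_lapack, only: pbsv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 4, kd = 1
       real(dp) :: ab(kd+1, n), b(n, 1)
       integer  :: info
       ! Upper band storage: diagonal in row kd+1, superdiagonal in row 1.
       ab(2, :) = 2.0_dp
       ab(1, :) = -1.0_dp    ! ab(1,1) is not referenced
       b(:, 1) = 1.0_dp
       call pbsv('U', n, kd, 1, ab, kd+1, b, n, info)
       print *, 'info =', info, '  x =', b(:, 1)
    end program demo_pbsv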
 
interface  pbtrf
 PBTRF: computes the Cholesky factorization of a complex Hermitian positive definite band matrix A. The factorization has the form A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular. More...
 
interface  pbtrs
 PBTRS: solves a system of linear equations A*X = B with a Hermitian positive definite band matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPBTRF. More...
 
interface  pftrf
 PFTRF: computes the Cholesky factorization of a complex Hermitian positive definite matrix A. The factorization has the form A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular. This is the block version of the algorithm, calling Level 3 BLAS. More...
 
interface  pftri
 PFTRI: computes the inverse of a complex Hermitian positive definite matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPFTRF. More...
 
interface  pftrs
 PFTRS: solves a system of linear equations A*X = B with a Hermitian positive definite matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPFTRF. More...
 
interface  pocon
 POCON: estimates the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPOTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  poequ
 POEQU: computes row and column scalings intended to equilibrate a Hermitian positive definite matrix A and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. More...
 
interface  poequb
 POEQUB: computes row and column scalings intended to equilibrate a Hermitian positive definite matrix A and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. This routine differs from CPOEQU by restricting the scaling factors to a power of the radix. Barring over- and underflow, scaling by these factors introduces no additional rounding errors. However, the scaled diagonal entries are no longer approximately 1 but lie between sqrt(radix) and 1/sqrt(radix). More...
 
interface  porfs
 PORFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite, and provides error bounds and backward error estimates for the solution. More...
 
interface  posv
 POSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian positive definite matrix and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as A = U**H* U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of A is then used to solve the system of equations A * X = B. More...
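A minimal dense solve (assuming the generic posv interface mirrors DPOSV's argument list UPLO, N, NRHS, A, LDA, B, LDB, INFO):

    program demo_posv
       use la_lapack, only: posv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(2,2), b(2,1)
       integer  :: info
       a = reshape([4.0_dp, 1.0_dp, 1.0_dp, 3.0_dp], [2,2])   ! symmetric positive definite
       b = reshape([1.0_dp, 2.0_dp], [2,1])
       call posv('U', 2, 1, a, 2, b, 2, info)   ! on exit b holds x, a holds the factor
       print *, 'info =', info, '  x =', b(:,1)
    end program demo_posv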
 
interface  potrf
 POTRF: computes the Cholesky factorization of a complex Hermitian positive definite matrix A. The factorization has the form A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular. This is the block version of the algorithm, calling Level 3 BLAS. More...
 
interface  potrf2
POTRF2: computes the Cholesky factorization of a Hermitian positive definite matrix A using the recursive algorithm. The factorization has the form A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular. This is the recursive version of the algorithm. It divides the matrix into four submatrices: A = ( A11 A12 ; A21 A22 ), where A11 is n1-by-n1 and A22 is n2-by-n2, with n1 = n/2 and n2 = n-n1. The subroutine calls itself to factor A11, updates and scales A21 or A12, updates A22, and then calls itself to factor A22. More...
 
interface  potri
 POTRI: computes the inverse of a complex Hermitian positive definite matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPOTRF. More...
 
interface  potrs
 POTRS: solves a system of linear equations A*X = B with a Hermitian positive definite matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPOTRF. More...
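Factor once, solve many times; a sketch combining potrf and potrs (assuming both generic interfaces mirror DPOTRF and DPOTRS):

    program demo_potrf_potrs
       use la_lapack, only: potrf, potrs
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(2,2), b(2,1)
       integer  :: info
       a = reshape([4.0_dp, 1.0_dp, 1.0_dp, 3.0_dp], [2,2])
       b = reshape([1.0_dp, 2.0_dp], [2,1])
       call potrf('U', 2, a, 2, info)            ! A = U**T * U in the real case
       if (info /= 0) stop 'A is not positive definite'
       call potrs('U', 2, 1, a, 2, b, 2, info)   ! reuse the factor to solve A*x = b
       print *, 'x =', b(:,1)
    end program demo_potrf_potrs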
 
interface  ppcon
 PPCON: estimates the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite packed matrix using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPPTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  ppequ
 PPEQU: computes row and column scalings intended to equilibrate a Hermitian positive definite matrix A in packed storage and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i)=1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j)=S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings. More...
 
interface  pprfs
 PPRFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and packed, and provides error bounds and backward error estimates for the solution. More...
 
interface  ppsv
 PPSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N Hermitian positive definite matrix stored in packed format and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of A is then used to solve the system of equations A * X = B. More...
 
interface  pptrf
 PPTRF: computes the Cholesky factorization of a complex Hermitian positive definite matrix A stored in packed format. The factorization has the form A = U**H * U, if UPLO = 'U', or A = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular. More...
 
interface  pptri
 PPTRI: computes the inverse of a complex Hermitian positive definite matrix A using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPPTRF. More...
 
interface  pptrs
 PPTRS: solves a system of linear equations A*X = B with a Hermitian positive definite matrix A in packed storage using the Cholesky factorization A = U**H*U or A = L*L**H computed by CPPTRF. More...
 
interface  pstrf
PSTRF: computes the Cholesky factorization with complete pivoting of a complex Hermitian positive semidefinite matrix A. The factorization has the form P**T * A * P = U**H * U, if UPLO = 'U', or P**T * A * P = L * L**H, if UPLO = 'L', where U is an upper triangular matrix and L is lower triangular, and P is stored as vector PIV. This algorithm does not attempt to check that A is positive semidefinite. This version of the algorithm calls level 3 BLAS. More...
 
interface  ptcon
 PTCON: computes the reciprocal of the condition number (in the 1-norm) of a complex Hermitian positive definite tridiagonal matrix using the factorization A = L*D*L**H or A = U**H*D*U computed by CPTTRF. Norm(inv(A)) is computed by a direct method, and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  pteqr
 PTEQR: computes all eigenvalues and, optionally, eigenvectors of a symmetric positive definite tridiagonal matrix by first factoring the matrix using SPTTRF and then calling CBDSQR to compute the singular values of the bidiagonal factor. This routine computes the eigenvalues of the positive definite tridiagonal matrix to high relative accuracy. This means that if the eigenvalues range over many orders of magnitude in size, then the small eigenvalues and corresponding eigenvectors will be computed more accurately than, for example, with the standard QR method. The eigenvectors of a full or band positive definite Hermitian matrix can also be found if CHETRD, CHPTRD, or CHBTRD has been used to reduce this matrix to tridiagonal form. (The reduction to tridiagonal form, however, may preclude the possibility of obtaining high relative accuracy in the small eigenvalues of the original matrix, if these eigenvalues range over many orders of magnitude.) More...
 
interface  ptrfs
 PTRFS: improves the computed solution to a system of linear equations when the coefficient matrix is Hermitian positive definite and tridiagonal, and provides error bounds and backward error estimates for the solution. More...
 
interface  ptsv
 PTSV: computes the solution to a complex system of linear equations A*X = B, where A is an N-by-N Hermitian positive definite tridiagonal matrix, and X and B are N-by-NRHS matrices. A is factored as A = L*D*L**H, and the factored form of A is then used to solve the system of equations. More...
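A sketch for the tridiagonal positive definite driver (assuming the generic ptsv interface mirrors DPTSV's argument list N, NRHS, D, E, B, LDB, INFO):

    program demo_ptsv
       use la_lapack, only: ptsv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 4
       real(dp) :: d(n), e(n-1), b(n,1)
       integer  :: info
       d = 2.0_dp        ! diagonal of A
       e = -1.0_dp       ! sub/superdiagonal of A
       b(:,1) = 1.0_dp
       call ptsv(n, 1, d, e, b, n, info)   ! d and e are overwritten by the factorization
       print *, 'info =', info, '  x =', b(:,1)
    end program demo_ptsv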
 
interface  pttrf
 PTTRF: computes the L*D*L**H factorization of a complex Hermitian positive definite tridiagonal matrix A. The factorization may also be regarded as having the form A = U**H *D*U. More...
 
interface  pttrs
 PTTRS: solves a tridiagonal system of the form A * X = B using the factorization A = U**H*D*U or A = L*D*L**H computed by CPTTRF. D is a diagonal matrix specified in the vector D, U (or L) is a unit bidiagonal matrix whose superdiagonal (subdiagonal) is specified in the vector E, and X and B are N by NRHS matrices. More...
 
interface  rot
 ROT: applies a plane rotation, where the cos (C) is real and the sin (S) is complex, and the vectors CX and CY are complex. More...
 
interface  rscl
 RSCL: multiplies an n-element real vector x by the real scalar 1/a. This is done without overflow or underflow as long as the final result x/a does not overflow or underflow. More...
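A one-liner in practice; a sketch (assuming the generic rscl interface mirrors DRSCL's argument list N, SA, SX, INCX):

    program demo_rscl
       use la_lapack, only: rscl
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: x(3) = [1.0_dp, 2.0_dp, 3.0_dp]
       call rscl(3, 4.0_dp, x, 1)   ! x := x/4, guarded against over/underflow
       print *, x                   ! 0.25 0.50 0.75
    end program demo_rscl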
 
interface  sb2st_kernels
 SB2ST_KERNELS: is an internal routine used by the DSYTRD_SB2ST subroutine. More...
 
interface  sbev
 SBEV: computes all the eigenvalues and, optionally, eigenvectors of a real symmetric band matrix A. More...
 
interface  sbevd
 SBEVD: computes all the eigenvalues and, optionally, eigenvectors of a real symmetric band matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  sbgst
 SBGST: reduces a real symmetric-definite banded generalized eigenproblem A*x = lambda*B*x to standard form C*y = lambda*y, such that C has the same bandwidth as A. B must have been previously factorized as S**T*S by DPBSTF, using a split Cholesky factorization. A is overwritten by C = X**T*A*X, where X = S**(-1)*Q and Q is an orthogonal matrix chosen to preserve the bandwidth of A. More...
 
interface  sbgv
 SBGV: computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be symmetric and banded, and B is also positive definite. More...
 
interface  sbgvd
 SBGVD: computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be symmetric and banded, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  sbtrd
 SBTRD: reduces a real symmetric band matrix A to symmetric tridiagonal form T by an orthogonal similarity transformation: Q**T * A * Q = T. More...
 
interface  sfrk
Level 3 BLAS like routine for C in RFP Format. SFRK: performs one of the symmetric rank-k operations C := alpha*A*A**T + beta*C, or C := alpha*A**T*A + beta*C, where alpha and beta are real scalars, C is an n-by-n symmetric matrix and A is an n-by-k matrix in the first case and a k-by-n matrix in the second case. More...
 
interface  spcon
 SPCON: estimates the reciprocal of the condition number (in the 1-norm) of a complex symmetric packed matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSPTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  spev
 SPEV: computes all the eigenvalues and, optionally, eigenvectors of a real symmetric matrix A in packed storage. More...
 
interface  spevd
 SPEVD: computes all the eigenvalues and, optionally, eigenvectors of a real symmetric matrix A in packed storage. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  spgst
SPGST: reduces a real symmetric-definite generalized eigenproblem to standard form, using packed storage. If ITYPE = 1, the problem is A*x = lambda*B*x, and A is overwritten by inv(U**T)*A*inv(U) or inv(L)*A*inv(L**T). If ITYPE = 2 or 3, the problem is A*B*x = lambda*x or B*A*x = lambda*x, and A is overwritten by U*A*U**T or L**T*A*L. B must have been previously factorized as U**T*U or L*L**T by DPPTRF. More...
 
interface  spgv
SPGV: computes all the eigenvalues and, optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric, stored in packed format, and B is also positive definite. More...
 
interface  spgvd
SPGVD: computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric, stored in packed format, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  spmv
 SPMV: performs the matrix-vector operation y := alpha*A*x + beta*y, where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix, supplied in packed form. More...
 
interface  spr
SPR: performs the symmetric rank 1 operation A := alpha*x*x**T + A, where alpha is a complex scalar, x is an n element vector and A is an n by n symmetric matrix, supplied in packed form. More...
 
interface  sprfs
 SPRFS: improves the computed solution to a system of linear equations when the coefficient matrix is symmetric indefinite and packed, and provides error bounds and backward error estimates for the solution. More...
 
interface  spsv
 SPSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric matrix stored in packed format and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**T, if UPLO = 'U', or A = L * D * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. More...
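A sketch with upper packed storage, where AP(i + (j-1)*j/2) = A(i,j) for i <= j (assuming the generic spsv interface mirrors DSPSV's argument list UPLO, N, NRHS, AP, IPIV, B, LDB, INFO and also covers the real case):

    program demo_spsv
       use la_lapack, only: spsv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: ap(3), b(2,1)
       integer  :: ipiv(2), info
       ap = [2.0_dp, 1.0_dp, -3.0_dp]   ! packed upper triangle of A = [2 1; 1 -3]
       b  = reshape([1.0_dp, 1.0_dp], [2,1])
       call spsv('U', 2, 1, ap, ipiv, b, 2, info)
       print *, 'info =', info, '  x =', b(:,1)   ! expect 4/7, -1/7
    end program demo_spsv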
 
interface  sptrd
 SPTRD: reduces a real symmetric matrix A stored in packed form to symmetric tridiagonal form T by an orthogonal similarity transformation: Q**T * A * Q = T. More...
 
interface  sptrf
 SPTRF: computes the factorization of a complex symmetric matrix A stored in packed format using the Bunch-Kaufman diagonal pivoting method: A = U*D*U**T or A = L*D*L**T where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. More...
 
interface  sptri
 SPTRI: computes the inverse of a complex symmetric indefinite matrix A in packed storage using the factorization A = U*D*U**T or A = L*D*L**T computed by CSPTRF. More...
 
interface  sptrs
 SPTRS: solves a system of linear equations A*X = B with a complex symmetric matrix A stored in packed format using the factorization A = U*D*U**T or A = L*D*L**T computed by CSPTRF. More...
 
interface  stebz
 STEBZ: computes the eigenvalues of a symmetric tridiagonal matrix T. The user may ask for all eigenvalues, all eigenvalues in the half-open interval (VL, VU], or the IL-th through IU-th eigenvalues. To avoid overflow, the matrix must be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966. More...
 
interface  stedc
 STEDC: computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the divide and conquer method. The eigenvectors of a full or band complex Hermitian matrix can also be found if CHETRD or CHPTRD or CHBTRD has been used to reduce this matrix to tridiagonal form. This code makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. See SLAED3 for details. More...
 
interface  stegr
STEGR: computes selected eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix T. Any such unreduced matrix has a well defined set of pairwise different real eigenvalues, the corresponding real eigenvectors are pairwise orthogonal. The spectrum may be computed either completely or partially by specifying either an interval (VL,VU] or a range of indices IL:IU for the desired eigenvalues. STEGR is a compatibility wrapper around the improved CSTEMR routine. See SSTEMR for further details. One important change is that the ABSTOL parameter no longer provides any benefit and hence is no longer used. Note: STEGR and CSTEMR work only on machines which follow the IEEE-754 floating-point standard in their handling of infinities and NaNs. Normal execution may create these exceptional values and hence may abort due to a floating point exception in environments which do not conform to the IEEE-754 standard. More...
 
interface  stein
 STEIN: computes the eigenvectors of a real symmetric tridiagonal matrix T corresponding to specified eigenvalues, using inverse iteration. The maximum number of iterations allowed for each eigenvector is specified by an internal parameter MAXITS (currently set to 5). Although the eigenvectors are real, they are stored in a complex array, which may be passed to CUNMTR or CUPMTR for back transformation to the eigenvectors of a complex Hermitian matrix which was reduced to tridiagonal form. More...
 
interface  stemr
 STEMR: computes selected eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix T. Any such unreduced matrix has a well defined set of pairwise different real eigenvalues, the corresponding real eigenvectors are pairwise orthogonal. The spectrum may be computed either completely or partially by specifying either an interval (VL,VU] or a range of indices IL:IU for the desired eigenvalues. Depending on the number of desired eigenvalues, these are computed either by bisection or the dqds algorithm. Numerically orthogonal eigenvectors are computed by the use of various suitable L D L^T factorizations near clusters of close eigenvalues (referred to as RRRs, Relatively Robust Representations). An informal sketch of the algorithm follows. For each unreduced block (submatrix) of T, (a) Compute T - sigma I = L D L^T, so that L and D define all the wanted eigenvalues to high relative accuracy. This means that small relative changes in the entries of D and L cause only small relative changes in the eigenvalues and eigenvectors. The standard (unfactored) representation of the tridiagonal matrix T does not have this property in general. (b) Compute the eigenvalues to suitable accuracy. If the eigenvectors are desired, the algorithm attains full accuracy of the computed eigenvalues only right before the corresponding vectors have to be computed, see steps c) and d). (c) For each cluster of close eigenvalues, select a new shift close to the cluster, find a new factorization, and refine the shifted eigenvalues to suitable accuracy. (d) For each eigenvalue with a large enough relative separation compute the corresponding eigenvector by forming a rank revealing twisted factorization. Go back to (c) for any clusters that remain. For more details, see: More...
 
interface  steqr
 STEQR: computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the implicit QL or QR method. The eigenvectors of a full or band complex Hermitian matrix can also be found if CHETRD or CHPTRD or CHBTRD has been used to reduce this matrix to tridiagonal form. More...
 
interface  sterf
 STERF: computes all eigenvalues of a symmetric tridiagonal matrix using the Pal-Walker-Kahan variant of the QL or QR algorithm. More...
 
interface  stev
 STEV: computes all eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix A. More...
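 A minimal sketch of a call through this interface, assuming it follows LAPACK's DSTEV argument list (JOBZ, N, D, E, Z, LDZ, WORK, INFO); the tridiagonal data is illustrative. On exit D holds the eigenvalues in ascending order and, with JOBZ = 'V', Z holds the eigenvectors.

    program demo_stev
       use la_lapack, only: stev
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 4
       real(dp) :: d(n), e(n-1), z(n,n), work(2*n-2)
       integer :: info
       d = 2.0_dp    ! diagonal of the tridiagonal matrix
       e = -1.0_dp   ! off-diagonal
       call stev('V', n, d, e, z, n, work, info)   ! d <- ascending eigenvalues, z <- eigenvectors
       if (info /= 0) print *, 'stev: info =', info
    end program demo_stev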
 
interface  stevd
 STEVD: computes all eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  stevr
 STEVR: computes selected eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix T. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. Whenever possible, STEVR calls DSTEMR to compute the eigenspectrum using Relatively Robust Representations. DSTEMR computes eigenvalues by the dqds algorithm, while orthogonal eigenvectors are computed from various "good" L D L^T representations (also known as Relatively Robust Representations). Gram-Schmidt orthogonalization is avoided as far as possible. More specifically, the various steps of the algorithm are as follows. For the i-th unreduced block of T, (a) Compute T - sigma_i = L_i D_i L_i^T, such that L_i D_i L_i^T is a relatively robust representation, (b) Compute the eigenvalues, lambda_j, of L_i D_i L_i^T to high relative accuracy by the dqds algorithm, (c) If there is a cluster of close eigenvalues, "choose" sigma_i close to the cluster, and go to step (a), (d) Given the approximate eigenvalue lambda_j of L_i D_i L_i^T, compute the corresponding eigenvector by forming a rank-revealing twisted factorization. The desired accuracy of the output can be specified by the input parameter ABSTOL. For more details, see "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem", by Inderjit Dhillon, Computer Science Division Technical Report No. UCB//CSD-97-971, UC Berkeley, May 1997. Note 1: STEVR calls DSTEMR when the full spectrum is requested on machines which conform to the IEEE-754 floating point standard. STEVR calls DSTEBZ and DSTEIN on non-IEEE machines and when partial spectrum requests are made. Normal execution of DSTEMR may create NaNs and infinities and hence may abort due to a floating point exception in environments which do not handle NaNs and infinities in the IEEE standard default manner. More...
 
interface  sycon
 SYCON: estimates the reciprocal of the condition number (in the 1-norm) of a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  sycon_rook
 SYCON_ROOK: estimates the reciprocal of the condition number (in the 1-norm) of a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF_ROOK. An estimate is obtained for norm(inv(A)), and the reciprocal of the condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))). More...
 
interface  syconv
 SYCONV: converts A, as given by a TRF factorization routine, into L and D, and vice versa. It extracts the off-diagonal elements of D (returned in the workspace) and applies or reverses the permutation performed by the TRF routine. More...
 
interface  syconvf
 If parameter WAY = 'C': SYCONVF: converts the factorization output format used in CSYTRF provided on entry in parameter A into the factorization output format used in CSYTRF_RK (or CSYTRF_BK) that is stored on exit in parameters A and E. It also converts in place details of the interchanges stored in IPIV from the format used in CSYTRF into the format used in CSYTRF_RK (or CSYTRF_BK). If parameter WAY = 'R': SYCONVF performs the conversion in reverse direction, i.e. converts the factorization output format used in CSYTRF_RK (or CSYTRF_BK) provided on entry in parameters A and E into the factorization output format used in CSYTRF that is stored on exit in parameter A. It also converts in place details of the interchanges stored in IPIV from the format used in CSYTRF_RK (or CSYTRF_BK) into the format used in CSYTRF. SYCONVF can also convert in the Hermitian matrix case, i.e. between the formats used in CHETRF and CHETRF_RK (or CHETRF_BK). More...
 
interface  syconvf_rook
 If parameter WAY = 'C': SYCONVF_ROOK: converts the factorization output format used in CSYTRF_ROOK provided on entry in parameter A into the factorization output format used in CSYTRF_RK (or CSYTRF_BK) that is stored on exit in parameters A and E. IPIV format for CSYTRF_ROOK and CSYTRF_RK (or CSYTRF_BK) is the same and is not converted. If parameter WAY = 'R': SYCONVF_ROOK performs the conversion in reverse direction, i.e. converts the factorization output format used in CSYTRF_RK (or CSYTRF_BK) provided on entry in parameters A and E into the factorization output format used in CSYTRF_ROOK that is stored on exit in parameter A. IPIV format for CSYTRF_ROOK and CSYTRF_RK (or CSYTRF_BK) is the same and is not converted. SYCONVF_ROOK can also convert in the Hermitian matrix case, i.e. between the formats used in CHETRF_ROOK and CHETRF_RK (or CHETRF_BK). More...
 
interface  syequb
 SYEQUB: computes row and column scalings intended to equilibrate a symmetric matrix A (with respect to the Euclidean norm) and reduce its condition number. The scale factors S are computed by the BIN algorithm (see references) so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has a condition number within a factor N of the smallest possible condition number over all possible diagonal scalings. More...
 
interface  syev
 SYEV: computes all eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. More...
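 A minimal sketch assuming the DSYEV-style argument list (JOBZ, UPLO, N, A, LDA, W, WORK, LWORK, INFO), including the conventional LWORK = -1 workspace query; the matrix is illustrative.

    program demo_syev
       use la_lapack, only: syev
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3
       real(dp) :: a(n,n), w(n), wq(1)
       real(dp), allocatable :: work(:)
       integer :: lwork, info
       a = reshape([2.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 2.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 2.0_dp], [n,n])
       call syev('V', 'U', n, a, n, w, wq, -1, info)   ! workspace query
       lwork = int(wq(1))
       allocate (work(lwork))
       call syev('V', 'U', n, a, n, w, work, lwork, info)   ! w <- eigenvalues, a <- eigenvectors
       if (info /= 0) print *, 'syev: info =', info
    end program demo_syev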
 
interface  syevd
 SYEVD: computes all eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. Because it makes heavy use of Level 3 BLAS, SYEVD needs N**2 more workspace than DSYEVX. More...
 
interface  syevr
 SYEVR: computes selected eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. SYEVR first reduces the matrix A to tridiagonal form T with a call to DSYTRD. Then, whenever possible, SYEVR calls DSTEMR to compute the eigenspectrum using Relatively Robust Representations. DSTEMR computes eigenvalues by the dqds algorithm, while orthogonal eigenvectors are computed from various "good" L D L^T representations (also known as Relatively Robust Representations). Gram-Schmidt orthogonalization is avoided as far as possible. More specifically, the various steps of the algorithm are as follows. For each unreduced block (submatrix) of T, (a) Compute T - sigma I = L D L^T, so that L and D define all the wanted eigenvalues to high relative accuracy. This means that small relative changes in the entries of D and L cause only small relative changes in the eigenvalues and eigenvectors. The standard (unfactored) representation of the tridiagonal matrix T does not have this property in general. (b) Compute the eigenvalues to suitable accuracy. If the eigenvectors are desired, the algorithm attains full accuracy of the computed eigenvalues only right before the corresponding vectors have to be computed, see steps c) and d). (c) For each cluster of close eigenvalues, select a new shift close to the cluster, find a new factorization, and refine the shifted eigenvalues to suitable accuracy. (d) For each eigenvalue with a large enough relative separation compute the corresponding eigenvector by forming a rank revealing twisted factorization. Go back to (c) for any clusters that remain. The desired accuracy of the output can be specified by the input parameter ABSTOL. For more details, see DSTEMR's documentation and: More...
 
interface  sygst
 SYGST: reduces a real symmetric-definite generalized eigenproblem to standard form. If ITYPE = 1, the problem is A*x = lambda*B*x, and A is overwritten by inv(U**T)*A*inv(U) or inv(L)*A*inv(L**T). If ITYPE = 2 or 3, the problem is A*B*x = lambda*x or B*A*x = lambda*x, and A is overwritten by U*A*U**T or L**T*A*L. B must have been previously factorized as U**T*U or L*L**T by DPOTRF. More...
 
interface  sygv
 SYGV: computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric and B is also positive definite. More...
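 A minimal sketch assuming the DSYGV-style argument list (ITYPE, JOBZ, UPLO, N, A, LDA, B, LDB, W, WORK, LWORK, INFO); the matrices are illustrative, with B symmetric positive definite as required, and the fixed workspace size is a simplification.

    program demo_sygv
       use la_lapack, only: sygv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 2
       real(dp) :: a(n,n), b(n,n), w(n), work(10*n)
       integer :: info
       a = reshape([2.0_dp, 1.0_dp, 1.0_dp, 3.0_dp], [n,n])   ! symmetric A
       b = reshape([2.0_dp, 0.0_dp, 0.0_dp, 1.0_dp], [n,n])   ! symmetric positive definite B
       ! ITYPE = 1 solves A*x = lambda*B*x; JOBZ = 'V' returns the eigenvectors in A
       call sygv(1, 'V', 'U', n, a, n, b, n, w, work, size(work), info)
       if (info /= 0) print *, 'sygv: info =', info
    end program demo_sygv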
 
interface  sygvd
 SYGVD: computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*B*x=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. More...
 
interface  symv
 SYMV: performs the matrix-vector operation y := alpha*A*x + beta*y, where alpha and beta are scalars, x and y are n element vectors and A is an n by n symmetric matrix. More...
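 A minimal sketch assuming the BLAS-style argument list (UPLO, N, ALPHA, A, LDA, X, INCX, BETA, Y, INCY); only the triangle selected by UPLO needs to be set.

    program demo_symv
       use la_lapack, only: symv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       real(dp) :: a(2,2), x(2), y(2)
       a = reshape([4.0_dp, 0.0_dp, 1.0_dp, 3.0_dp], [2,2])   ! upper triangle holds A = [4 1; 1 3]
       x = [1.0_dp, 2.0_dp]
       y = 0.0_dp
       call symv('U', 2, 1.0_dp, a, 2, x, 1, 0.0_dp, y, 1)    ! y := A*x
       print *, y   ! expected: 6 and 7
    end program demo_symv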
 
interface  syr
 SYR: performs the symmetric rank 1 operation A := alpha*x*x**T + A, where alpha is a complex scalar, x is an n element vector and A is an n by n symmetric matrix. More...
 
interface  syrfs
 SYRFS: improves the computed solution to a system of linear equations when the coefficient matrix is symmetric indefinite, and provides error bounds and backward error estimates for the solution. More...
 
interface  sysv
 SYSV: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**T, if UPLO = 'U', or A = L * D * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. More...
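 A minimal sketch assuming the ?SYSV argument list (UPLO, N, NRHS, A, LDA, IPIV, B, LDB, WORK, LWORK, INFO), with the usual LWORK = -1 workspace query; the data is illustrative.

    program demo_sysv
       use la_lapack, only: sysv
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3, nrhs = 2
       real(dp) :: a(n,n), b(n,nrhs), wq(1)
       real(dp), allocatable :: work(:)
       integer :: ipiv(n), lwork, info
       a = reshape([2.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 2.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 2.0_dp], [n,n])
       b = 1.0_dp
       call sysv('U', n, nrhs, a, n, ipiv, b, n, wq, -1, info)   ! workspace query
       lwork = int(wq(1))
       allocate (work(lwork))
       call sysv('U', n, nrhs, a, n, ipiv, b, n, work, lwork, info)   ! b <- X
       if (info /= 0) print *, 'sysv: info =', info
    end program demo_sysv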
 
interface  sysv_aa
 SYSV_AA: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices. Aasen's algorithm is used to factor A as A = U**T * T * U, if UPLO = 'U', or A = L * T * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and T is symmetric tridiagonal. The factored form of A is then used to solve the system of equations A * X = B. More...
 
interface  sysv_rk
 SYSV_RK: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices. The bounded Bunch-Kaufman (rook) diagonal pivoting method is used to factor A as A = P*U*D*(U**T)*(P**T), if UPLO = 'U', or A = P*L*D*(L**T)*(P**T), if UPLO = 'L', where U (or L) is unit upper (or lower) triangular matrix, U**T (or L**T) is the transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. CSYTRF_RK is called to compute the factorization of a complex symmetric matrix. The factored form of A is then used to solve the system of equations A * X = B by calling BLAS3 routine CSYTRS_3. More...
 
interface  sysv_rook
 SYSV_ROOK: computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**T, if UPLO = 'U', or A = L * D * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. CSYTRF_ROOK is called to compute the factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The factored form of A is then used to solve the system of equations A * X = B by calling CSYTRS_ROOK. More...
 
interface  syswapr
 SYSWAPR: applies an elementary permutation on the rows and the columns of a symmetric matrix. More...
 
interface  sytf2_rk
 SYTF2_RK: computes the factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method: A = P*U*D*(U**T)*(P**T) or A = P*L*D*(L**T)*(P**T), where U (or L) is unit upper (or lower) triangular matrix, U**T (or L**T) is the transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the unblocked version of the algorithm, calling Level 2 BLAS. For more information see Further Details section. More...
 
interface  sytf2_rook
 SYTF2_ROOK: computes the factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method: A = U*D*U**T or A = L*D*L**T where U (or L) is a product of permutation and unit upper (lower) triangular matrices, U**T is the transpose of U, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the unblocked version of the algorithm, calling Level 2 BLAS. More...
 
interface  sytrd
 SYTRD: reduces a real symmetric matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**T * A * Q = T. More...
 
interface  sytrd_sb2st
 SYTRD_SB2ST: reduces a real symmetric band matrix A to real symmetric tridiagonal form T by an orthogonal similarity transformation: Q**T * A * Q = T. More...
 
interface  sytrd_sy2sb
 SYTRD_SY2SB: reduces a real symmetric matrix A to real symmetric band-diagonal form AB by an orthogonal similarity transformation: Q**T * A * Q = AB. More...
 
interface  sytrf
 SYTRF: computes the factorization of a complex symmetric matrix A using the Bunch-Kaufman diagonal pivoting method. The form of the factorization is A = U*D*U**T or A = L*D*L**T where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  sytrf_aa
 SYTRF_AA: computes the factorization of a complex symmetric matrix A using Aasen's algorithm. The form of the factorization is A = U**T*T*U or A = L*T*L**T where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and T is a complex symmetric tridiagonal matrix. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  sytrf_rk
 SYTRF_RK: computes the factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman (rook) diagonal pivoting method: A = P*U*D*(U**T)*(P**T) or A = P*L*D*(L**T)*(P**T), where U (or L) is unit upper (or lower) triangular matrix, U**T (or L**T) is the transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. For more information see Further Details section. More...
 
interface  sytrf_rook
 SYTRF_ROOK: computes the factorization of a complex symmetric matrix A using the bounded Bunch-Kaufman ("rook") diagonal pivoting method. The form of the factorization is A = U*D*U**T or A = L*D*L**T where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This is the blocked version of the algorithm, calling Level 3 BLAS. More...
 
interface  sytri
 SYTRI: computes the inverse of a complex symmetric indefinite matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF. More...
 
interface  sytri_rook
 SYTRI_ROOK: computes the inverse of a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF_ROOK. More...
 
interface  sytrs
 SYTRS: solves a system of linear equations A*X = B with a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF. More...
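 The factor-once, solve-many pattern pairs sytrf with sytrs. A minimal sketch, assuming the standard ?SYTRF/?SYTRS argument lists; the data is illustrative and the fixed workspace size is a simplification (an LWORK = -1 query is the robust choice).

    program demo_sytrf_sytrs
       use la_lapack, only: sytrf, sytrs
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3
       real(dp) :: a(n,n), b(n,1), work(64*n)
       integer :: ipiv(n), info
       a = reshape([2.0_dp, 1.0_dp, 0.0_dp, &
                    1.0_dp, 2.0_dp, 1.0_dp, &
                    0.0_dp, 1.0_dp, 2.0_dp], [n,n])
       b(:,1) = [1.0_dp, 0.0_dp, 1.0_dp]
       call sytrf('U', n, a, n, ipiv, work, size(work), info)   ! A = U*D*U**T
       call sytrs('U', n, 1, a, n, ipiv, b, n, info)            ! reuse the factors: b <- X
       if (info /= 0) print *, 'info =', info
    end program demo_sytrf_sytrs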
 
interface  sytrs2
 SYTRS2: solves a system of linear equations A*X = B with a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF and converted by CSYCONV. More...
 
interface  sytrs_3
 SYTRS_3: solves a system of linear equations A * X = B with a complex symmetric matrix A using the factorization computed by CSYTRF_RK or CSYTRF_BK: A = P*U*D*(U**T)*(P**T) or A = P*L*D*(L**T)*(P**T), where U (or L) is a unit upper (or lower) triangular matrix, U**T (or L**T) is the transpose of U (or L), P is a permutation matrix, P**T is the transpose of P, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. This algorithm uses Level 3 BLAS. More...
 
interface  sytrs_aa
 SYTRS_AA: solves a system of linear equations A*X = B with a complex symmetric matrix A using the factorization A = U**T*T*U or A = L*T*L**T computed by CSYTRF_AA. More...
 
interface  sytrs_rook
 SYTRS_ROOK: solves a system of linear equations A*X = B with a complex symmetric matrix A using the factorization A = U*D*U**T or A = L*D*L**T computed by CSYTRF_ROOK. More...
 
interface  tbcon
 TBCON: estimates the reciprocal of the condition number of a triangular band matrix A, in either the 1-norm or the infinity-norm. The norm of A is computed and an estimate is obtained for norm(inv(A)), then the reciprocal of the condition number is computed as RCOND = 1 / ( norm(A) * norm(inv(A)) ). More...
 
interface  tbrfs
 TBRFS: provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular band coefficient matrix. The solution matrix X must be computed by CTBTRS or some other means before entering this routine. TBRFS does not do iterative refinement because doing so cannot improve the backward error. More...
 
interface  tbtrs
 TBTRS: solves a triangular system of the form A * X = B, A**T * X = B, or A**H * X = B, where A is a triangular band matrix of order N, and B is an N-by-NRHS matrix. A check is made to verify that A is nonsingular. More...
 
interface  tfsm
 Level 3 BLAS-like routine for A in RFP Format. TFSM: solves the matrix equation op( A )*X = alpha*B or X*op( A ) = alpha*B where alpha is a scalar, X and B are m by n matrices, A is a unit, or non-unit, upper or lower triangular matrix and op( A ) is one of op( A ) = A or op( A ) = A**H. A is in Rectangular Full Packed (RFP) Format. The matrix X is overwritten on B. More...
 
interface  tftri
 TFTRI: computes the inverse of a triangular matrix A stored in RFP format. This is a Level 3 BLAS version of the algorithm. More...
 
interface  tfttp
 TFTTP: copies a triangular matrix A from rectangular full packed format (TF) to standard packed format (TP). More...
 
interface  tfttr
 TFTTR: copies a triangular matrix A from rectangular full packed format (TF) to standard full format (TR). More...
 
interface  tgevc
 TGEVC: computes some or all of the right and/or left eigenvectors of a pair of complex matrices (S,P), where S and P are upper triangular. Matrix pairs of this type are produced by the generalized Schur factorization of a complex matrix pair (A,B): A = Q*S*Z**H, B = Q*P*Z**H as computed by CGGHRD + CHGEQZ. The right eigenvector x and the left eigenvector y of (S,P) corresponding to an eigenvalue w are defined by: S*x = w*P*x, (y**H)*S = w*(y**H)*P, where y**H denotes the conjugate transpose of y. The eigenvalues are not input to this routine, but are computed directly from the diagonal elements of S and P. This routine returns the matrices X and/or Y of right and left eigenvectors of (S,P), or the products Z*X and/or Q*Y, where Z and Q are input matrices. If Q and Z are the unitary factors from the generalized Schur factorization of a matrix pair (A,B), then Z*X and Q*Y are the matrices of right and left eigenvectors of (A,B). More...
 
interface  tgexc
 TGEXC: reorders the generalized Schur decomposition of a complex matrix pair (A,B), using a unitary equivalence transformation (A, B) := Q * (A, B) * Z**H, so that the diagonal block of (A, B) with row index IFST is moved to row ILST. (A, B) must be in generalized Schur canonical form, that is, A and B are both upper triangular. Optionally, the matrices Q and Z of generalized Schur vectors are updated: Q(in) * A(in) * Z(in)**H = Q(out) * A(out) * Z(out)**H; Q(in) * B(in) * Z(in)**H = Q(out) * B(out) * Z(out)**H. More...
 
interface  tgsen
 TGSEN: reorders the generalized Schur decomposition of a complex matrix pair (A, B) (in terms of a unitary equivalence transformation Q**H * (A, B) * Z), so that a selected cluster of eigenvalues appears in the leading diagonal blocks of the pair (A,B). The leading columns of Q and Z form unitary bases of the corresponding left and right eigenspaces (deflating subspaces). (A, B) must be in generalized Schur canonical form, that is, A and B are both upper triangular. TGSEN also computes the generalized eigenvalues w(j)= ALPHA(j) / BETA(j) of the reordered matrix pair (A, B). Optionally, the routine computes estimates of reciprocal condition numbers for eigenvalues and eigenspaces. These are Difu[(A11,B11), (A22,B22)] and Difl[(A11,B11), (A22,B22)], i.e. the separation(s) between the matrix pairs (A11, B11) and (A22,B22) that correspond to the selected cluster and the eigenvalues outside the cluster, resp., and norms of "projections" onto left and right eigenspaces w.r.t. the selected cluster in the (1,1)-block. More...
 
interface  tgsja
 TGSJA: computes the generalized singular value decomposition (GSVD) of two complex upper triangular (or trapezoidal) matrices A and B. On entry, it is assumed that matrices A and B have the following forms, which may be obtained by the preprocessing subroutine CGGSVP from a general M-by-N matrix A and P-by-N matrix B:

               N-K-L  K    L
    A =     K ( 0    A12  A13 )  if M-K-L >= 0;
            L ( 0     0   A23 )
        M-K-L ( 0     0    0  )

               N-K-L  K    L
    A =     K ( 0    A12  A13 )  if M-K-L < 0;
          M-K ( 0     0   A23 )

               N-K-L  K    L
    B =     L ( 0     0   B13 )
          P-L ( 0     0    0  )

 where the K-by-K matrix A12 and L-by-L matrix B13 are nonsingular upper triangular; A23 is L-by-L upper triangular if M-K-L >= 0, otherwise A23 is (M-K)-by-L upper trapezoidal. On exit,

      U**H*A*Q = D1*( 0 R ),    V**H*B*Q = D2*( 0 R ),

 where U, V and Q are unitary matrices. R is a nonsingular upper triangular matrix, and D1 and D2 are "diagonal" matrices, which are of the following structures. If M-K-L >= 0,

                   K  L
      D1 =     K ( I  0 )
               L ( 0  C )
           M-K-L ( 0  0 )

                   K  L
      D2 =     L ( 0  S )
             P-L ( 0  0 )

                    N-K-L  K    L
      ( 0 R ) = K (  0    R11  R12 )
                L (  0     0   R22 )

 where C = diag( ALPHA(K+1), ... , ALPHA(K+L) ), S = diag( BETA(K+1), ... , BETA(K+L) ), and C**2 + S**2 = I. R is stored in A(1:K+L,N-K-L+1:N) on exit. If M-K-L < 0,

                    K M-K K+L-M
      D1 =      K ( I  0    0  )
              M-K ( 0  C    0  )

                      K M-K K+L-M
      D2 =      M-K ( 0  S    0  )
              K+L-M ( 0  0    I  )
                P-L ( 0  0    0  )

                        N-K-L  K   M-K  K+L-M
      ( 0 R ) =     K (  0    R11  R12  R13  )
                  M-K (  0     0   R22  R23  )
                K+L-M (  0     0    0   R33  )

 where C = diag( ALPHA(K+1), ... , ALPHA(M) ), S = diag( BETA(K+1), ... , BETA(M) ), and C**2 + S**2 = I.

      R = ( R11 R12 R13 )  is stored in A(1:M, N-K-L+1:N)
          (  0  R22 R23 )

 and R33 is stored in B(M-K+1:L,N+M-K-L+1:N) on exit. The computation of the unitary transformation matrices U, V or Q is optional. These matrices may either be formed explicitly, or they may be postmultiplied into input matrices U1, V1, or Q1. More...
 
interface  tgsna
 TGSNA: estimates reciprocal condition numbers for specified eigenvalues and/or eigenvectors of a matrix pair (A, B). (A, B) must be in generalized Schur canonical form, that is, A and B are both upper triangular. More...
 
interface  tgsyl
 TGSYL: solves the generalized Sylvester equation:

      A * R - L * B = scale * C            (1)
      D * R - L * E = scale * F

 where R and L are unknown m-by-n matrices, (A, D), (B, E) and (C, F) are given matrix pairs of size m-by-m, n-by-n and m-by-n, respectively, with complex entries. A, B, D and E are upper triangular (i.e., (A,D) and (B,E) in generalized Schur form). The solution (R, L) overwrites (C, F). 0 <= SCALE <= 1 is an output scaling factor chosen to avoid overflow. In matrix notation (1) is equivalent to solving Zx = scale*b, where Z is defined as

      Z = [ kron(In, A)  -kron(B**H, Im) ]        (2)
          [ kron(In, D)  -kron(E**H, Im) ]

 Here Ix is the identity matrix of size x and X**H is the conjugate transpose of X. kron(X, Y) is the Kronecker product between the matrices X and Y. If TRANS = 'C', y in the conjugate transposed system Z**H*y = scale*b is solved for, which is equivalent to solving for R and L in

      A**H * R + D**H * L = scale * C           (3)
      R * B**H + L * E**H = scale * -F

 This case (TRANS = 'C') is used to compute a one-norm-based estimate of Dif[(A,D), (B,E)], the separation between the matrix pairs (A,D) and (B,E), using CLACON. If IJOB >= 1, TGSYL computes a Frobenius norm-based estimate of Dif[(A,D),(B,E)]. That is, the reciprocal of a lower bound on the reciprocal of the smallest singular value of Z. This is a level-3 BLAS algorithm. More...
 
interface  tpcon
 TPCON: estimates the reciprocal of the condition number of a packed triangular matrix A, in either the 1-norm or the infinity-norm. The norm of A is computed and an estimate is obtained for norm(inv(A)), then the reciprocal of the condition number is computed as RCOND = 1 / ( norm(A) * norm(inv(A)) ). More...
 
interface  tplqt
 TPLQT: computes a blocked LQ factorization of a complex "triangular-pentagonal" matrix C, which is composed of a triangular block A and pentagonal block B, using the compact WY representation for Q. More...
 
interface  tplqt2
 TPLQT2: computes an LQ factorization of a complex "triangular-pentagonal" matrix C, which is composed of a triangular block A and pentagonal block B, using the compact WY representation for Q. More...
 
interface  tpmlqt
 TPMLQT: applies a complex unitary matrix Q obtained from a "triangular-pentagonal" complex block reflector H to a general complex matrix C, which consists of two blocks A and B. More...
 
interface  tpmqrt
 TPMQRT: applies a complex unitary matrix Q obtained from a "triangular-pentagonal" complex block reflector H to a general complex matrix C, which consists of two blocks A and B. More...
 
interface  tpqrt
 TPQRT: computes a blocked QR factorization of a complex "triangular-pentagonal" matrix C, which is composed of a triangular block A and pentagonal block B, using the compact WY representation for Q. More...
 
interface  tpqrt2
 TPQRT2: computes a QR factorization of a complex "triangular-pentagonal" matrix C, which is composed of a triangular block A and pentagonal block B, using the compact WY representation for Q. More...
 
interface  tprfb
 TPRFB: applies a complex "triangular-pentagonal" block reflector H or its conjugate transpose H**H to a complex matrix C, which is composed of two blocks A and B, either from the left or right. More...
 
interface  tprfs
 TPRFS: provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular packed coefficient matrix. The solution matrix X must be computed by CTPTRS or some other means before entering this routine. TPRFS does not do iterative refinement because doing so cannot improve the backward error. More...
 
interface  tptri
 TPTRI: computes the inverse of a complex upper or lower triangular matrix A stored in packed format. More...
 
interface  tptrs
 TPTRS: solves a triangular system of the form A * X = B, A**T * X = B, or A**H * X = B, where A is a triangular matrix of order N stored in packed format, and B is an N-by-NRHS matrix. A check is made to verify that A is nonsingular. More...
 
interface  tpttf
 TPTTF: copies a triangular matrix A from standard packed format (TP) to rectangular full packed format (TF). More...
 
interface  tpttr
 TPTTR: copies a triangular matrix A from standard packed format (TP) to standard full format (TR). More...
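 A minimal sketch assuming the ?TPTTR argument list (UPLO, N, AP, A, LDA, INFO); the packed data is illustrative.

    program demo_tpttr
       use la_lapack, only: tpttr
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3
       real(dp) :: ap(n*(n+1)/2), a(n,n)
       integer :: info
       ap = [1.0_dp, 2.0_dp, 3.0_dp, 4.0_dp, 5.0_dp, 6.0_dp]   ! packed upper triangle, by columns
       a = 0.0_dp
       call tpttr('U', n, ap, a, n, info)   ! unpack into the upper triangle of a
       if (info /= 0) print *, 'tpttr: info =', info
    end program demo_tpttr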
 
interface  trcon
 TRCON: estimates the reciprocal of the condition number of a triangular matrix A, in either the 1-norm or the infinity-norm. The norm of A is computed and an estimate is obtained for norm(inv(A)), then the reciprocal of the condition number is computed as RCOND = 1 / ( norm(A) * norm(inv(A)) ). More...
 
interface  trevc
 TREVC: computes some or all of the right and/or left eigenvectors of a complex upper triangular matrix T. Matrices of this type are produced by the Schur factorization of a complex general matrix: A = Q*T*Q**H, as computed by CHSEQR. The right eigenvector x and the left eigenvector y of T corresponding to an eigenvalue w are defined by: T*x = w*x, (y**H)*T = w*(y**H) where y**H denotes the conjugate transpose of the vector y. The eigenvalues are not input to this routine, but are read directly from the diagonal of T. This routine returns the matrices X and/or Y of right and left eigenvectors of T, or the products Q*X and/or Q*Y, where Q is an input matrix. If Q is the unitary factor that reduces a matrix A to Schur form T, then Q*X and Q*Y are the matrices of right and left eigenvectors of A. More...
 
interface  trevc3
 TREVC3: computes some or all of the right and/or left eigenvectors of a complex upper triangular matrix T. Matrices of this type are produced by the Schur factorization of a complex general matrix: A = Q*T*Q**H, as computed by CHSEQR. The right eigenvector x and the left eigenvector y of T corresponding to an eigenvalue w are defined by: T*x = w*x, (y**H)*T = w*(y**H) where y**H denotes the conjugate transpose of the vector y. The eigenvalues are not input to this routine, but are read directly from the diagonal of T. This routine returns the matrices X and/or Y of right and left eigenvectors of T, or the products Q*X and/or Q*Y, where Q is an input matrix. If Q is the unitary factor that reduces a matrix A to Schur form T, then Q*X and Q*Y are the matrices of right and left eigenvectors of A. This uses a Level 3 BLAS version of the back transformation. More...
 
interface  trexc
 TREXC: reorders the Schur factorization of a complex matrix A = Q*T*Q**H, so that the diagonal element of T with row index IFST is moved to row ILST. The Schur form T is reordered by a unitary similarity transformation Z**H*T*Z, and optionally the matrix Q of Schur vectors is updated by postmultiplying it with Z. More...
 
interface  trrfs
 TRRFS: provides error bounds and backward error estimates for the solution to a system of linear equations with a triangular coefficient matrix. The solution matrix X must be computed by CTRTRS or some other means before entering this routine. TRRFS does not do iterative refinement because doing so cannot improve the backward error. More...
 
interface  trsen
 TRSEN: reorders the Schur factorization of a complex matrix A = Q*T*Q**H, so that a selected cluster of eigenvalues appears in the leading positions on the diagonal of the upper triangular matrix T, and the leading columns of Q form an orthonormal basis of the corresponding right invariant subspace. Optionally the routine computes the reciprocal condition numbers of the cluster of eigenvalues and/or the invariant subspace. More...
 
interface  trsna
 TRSNA: estimates reciprocal condition numbers for specified eigenvalues and/or right eigenvectors of a complex upper triangular matrix T (or of any matrix Q*T*Q**H with Q unitary). More...
 
interface  trsyl
 TRSYL: solves the complex Sylvester matrix equation: op(A)*X + X*op(B) = scale*C or op(A)*X - X*op(B) = scale*C, where op(A) = A or A**H, and A and B are both upper triangular. A is M-by-M and B is N-by-N; the right hand side C and the solution X are M-by-N; and scale is an output scale factor, set <= 1 to avoid overflow in X. More...
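 A minimal sketch assuming the ?TRSYL argument list (TRANA, TRANB, ISGN, M, N, A, LDA, B, LDB, C, LDC, SCALE, INFO); the triangular inputs are illustrative.

    program demo_trsyl
       use la_lapack, only: trsyl
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 2, n = 2
       real(dp) :: a(m,m), b(n,n), c(m,n), scale
       integer :: info
       a = reshape([1.0_dp, 0.0_dp, 2.0_dp, 3.0_dp], [m,m])   ! upper triangular A
       b = reshape([4.0_dp, 0.0_dp, 1.0_dp, 5.0_dp], [n,n])   ! upper triangular B
       c = 1.0_dp
       ! solve A*X + X*B = scale*C; the solution X overwrites C
       call trsyl('N', 'N', 1, m, n, a, m, b, n, c, m, scale, info)
       if (info /= 0) print *, 'trsyl: info =', info
    end program demo_trsyl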
 
interface  trtri
 TRTRI: computes the inverse of a complex upper or lower triangular matrix A. This is the Level 3 BLAS version of the algorithm. More...
 
interface  trtrs
 TRTRS: solves a triangular system of the form A * X = B, A**T * X = B, or A**H * X = B, where A is a triangular matrix of order N, and B is an N-by-NRHS matrix. A check is made to verify that A is nonsingular. More...
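 A minimal sketch assuming the ?TRTRS argument list (UPLO, TRANS, DIAG, N, NRHS, A, LDA, B, LDB, INFO); the data is illustrative.

    program demo_trtrs
       use la_lapack, only: trtrs
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: n = 3
       real(dp) :: a(n,n), b(n,1)
       integer :: info
       a = reshape([2.0_dp, 0.0_dp, 0.0_dp, &
                    1.0_dp, 3.0_dp, 0.0_dp, &
                    0.0_dp, 1.0_dp, 4.0_dp], [n,n])   ! upper triangular A
       b(:,1) = [3.0_dp, 4.0_dp, 4.0_dp]
       call trtrs('U', 'N', 'N', n, 1, a, n, b, n, info)   ! b <- inv(A)*b; here x = [1,1,1]
       if (info /= 0) print *, 'trtrs: info =', info       ! info > 0 flags an exactly singular A
    end program demo_trtrs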
 
interface  trttf
 TRTTF: copies a triangular matrix A from standard full format (TR) to rectangular full packed format (TF). More...
 
interface  trttp
 TRTTP: copies a triangular matrix A from full format (TR) to standard packed format (TP). More...
 
interface  tzrzf
 TZRZF: reduces the M-by-N ( M<=N ) complex upper trapezoidal matrix A to upper triangular form by means of unitary transformations. The upper trapezoidal matrix A is factored as A = ( R 0 ) * Z, where Z is an N-by-N unitary matrix and R is an M-by-M upper triangular matrix. More...
 
interface  unbdb
 UNBDB: simultaneously bidiagonalizes the blocks of an M-by-M partitioned unitary matrix X:

                                      [ B11 | B12 0  0 ]
       [ X11 | X12 ]   [ P1 |    ]    [  0  |  0 -I  0 ]    [ Q1 |    ]**H
   X = [-----------] = [---------]    [----------------]    [---------]   .
       [ X21 | X22 ]   [    | P2 ]    [ B21 | B22 0  0 ]    [    | Q2 ]
                                      [  0  |  0  0  I ]

 X11 is P-by-Q. Q must be no larger than P, M-P, or M-Q. (If this is not the case, then X must be transposed and/or permuted. This can be done in constant time using the TRANS and SIGNS options. See CUNCSD for details.) The unitary matrices P1, P2, Q1, and Q2 are P-by-P, (M-P)-by-(M-P), Q-by-Q, and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11, B12, B21, and B22 are Q-by-Q bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  unbdb1
 UNBDB1: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns:

                            [ B11 ]
       [ X11 ]   [ P1 | ]   [  0  ]
       [-----] = [------]   [-----]   Q1**T .
       [ X21 ]   [ | P2 ]   [ B21 ]
                            [  0  ]

 X11 is P-by-Q, and X21 is (M-P)-by-Q. Q must be no larger than P, M-P, or M-Q. Routines CUNBDB2, CUNBDB3, and CUNBDB4 handle cases in which Q is not the minimum dimension. The unitary matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are Q-by-Q bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  unbdb2
 UNBDB2: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns:

                            [ B11 ]
       [ X11 ]   [ P1 | ]   [  0  ]
       [-----] = [------]   [-----]   Q1**T .
       [ X21 ]   [ | P2 ]   [ B21 ]
                            [  0  ]

 X11 is P-by-Q, and X21 is (M-P)-by-Q. P must be no larger than M-P, Q, or M-Q. Routines CUNBDB1, CUNBDB3, and CUNBDB4 handle cases in which P is not the minimum dimension. The unitary matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are P-by-P bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  unbdb3
 UNBDB3: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns:

                            [ B11 ]
       [ X11 ]   [ P1 | ]   [  0  ]
       [-----] = [------]   [-----]   Q1**T .
       [ X21 ]   [ | P2 ]   [ B21 ]
                            [  0  ]

 X11 is P-by-Q, and X21 is (M-P)-by-Q. M-P must be no larger than P, Q, or M-Q. Routines CUNBDB1, CUNBDB2, and CUNBDB4 handle cases in which M-P is not the minimum dimension. The unitary matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are (M-P)-by-(M-P) bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  unbdb4
 UNBDB4: simultaneously bidiagonalizes the blocks of a tall and skinny matrix X with orthonormal columns:

                            [ B11 ]
       [ X11 ]   [ P1 | ]   [  0  ]
       [-----] = [------]   [-----]   Q1**T .
       [ X21 ]   [ | P2 ]   [ B21 ]
                            [  0  ]

 X11 is P-by-Q, and X21 is (M-P)-by-Q. M-Q must be no larger than P, M-P, or Q. Routines CUNBDB1, CUNBDB2, and CUNBDB3 handle cases in which M-Q is not the minimum dimension. The unitary matrices P1, P2, and Q1 are P-by-P, (M-P)-by-(M-P), and (M-Q)-by-(M-Q), respectively. They are represented implicitly by Householder vectors. B11 and B21 are (M-Q)-by-(M-Q) bidiagonal matrices represented implicitly by angles THETA, PHI. More...
 
interface  unbdb5
 UNBDB5: orthogonalizes the column vector

      X = [ X1 ]
          [ X2 ]

 with respect to the columns of

      Q = [ Q1 ] .
          [ Q2 ]

 The columns of Q must be orthonormal. If the projection is zero according to Kahan's "twice is enough" criterion, then some other vector from the orthogonal complement is returned. This vector is chosen in an arbitrary but deterministic way. More...
 
interface  unbdb6
 UNBDB6: orthogonalizes the column vector

      X = [ X1 ]
          [ X2 ]

 with respect to the columns of

      Q = [ Q1 ] .
          [ Q2 ]

 The columns of Q must be orthonormal. If the projection is zero according to Kahan's "twice is enough" criterion, then the zero vector is returned. More...
 
interface  uncsd
 UNCSD: computes the CS decomposition of an M-by-M partitioned unitary matrix X:

                                      [  I  0  0 |  0  0  0 ]
                                      [  0  C  0 |  0 -S  0 ]
       [ X11 | X12 ]   [ U1 |    ]    [  0  0  0 |  0  0 -I ]    [ V1 |    ]**H
   X = [-----------] = [---------]    [---------------------]    [---------]   .
       [ X21 | X22 ]   [    | U2 ]    [  0  0  0 |  I  0  0 ]    [    | V2 ]
                                      [  0  S  0 |  0  C  0 ]
                                      [  0  0  I |  0  0  0 ]

 X11 is P-by-Q. The unitary matrices U1, U2, V1, and V2 are P-by-P, (M-P)-by-(M-P), Q-by-Q, and (M-Q)-by-(M-Q), respectively. C and S are R-by-R nonnegative diagonal matrices satisfying C^2 + S^2 = I, in which R = MIN(P,M-P,Q,M-Q). More...
 
interface  uncsd2by1
 UNCSD2BY1: computes the CS decomposition of an M-by-Q matrix X with orthonormal columns that has been partitioned into a 2-by-1 block structure:

                            [ I1 0  0 ]
                            [ 0  C  0 ]
       [ X11 ]   [ U1 | ]   [ 0  0  0 ]
   X = [-----] = [------]   [---------]   V1**T .
       [ X21 ]   [ | U2 ]   [ 0  0  0 ]
                            [ 0  S  0 ]
                            [ 0  0 I2 ]

 X11 is P-by-Q. The unitary matrices U1, U2, and V1 are P-by-P, (M-P)-by-(M-P), and Q-by-Q, respectively. C and S are R-by-R nonnegative diagonal matrices satisfying C^2 + S^2 = I, in which R = MIN(P,M-P,Q,M-Q). I1 is a K1-by-K1 identity matrix and I2 is a K2-by-K2 identity matrix, where K1 = MAX(Q+P-M,0), K2 = MAX(Q-P,0). More...
 
interface  ung2l
 UNG2L: generates an m by n complex matrix Q with orthonormal columns, which is defined as the last n columns of a product of k elementary reflectors of order m Q = H(k) . . . H(2) H(1) as returned by CGEQLF. More...
 
interface  ung2r
 UNG2R: generates an m by n complex matrix Q with orthonormal columns, which is defined as the first n columns of a product of k elementary reflectors of order m Q = H(1) H(2) . . . H(k) as returned by CGEQRF. More...
 
interface  ungbr
 UNGBR: generates one of the complex unitary matrices Q or P**H determined by CGEBRD when reducing a complex matrix A to bidiagonal form: A = Q * B * P**H. Q and P**H are defined as products of elementary reflectors H(i) or G(i) respectively. If VECT = 'Q', A is assumed to have been an M-by-K matrix, and Q is of order M: if m >= k, Q = H(1) H(2) . . . H(k) and UNGBR returns the first n columns of Q, where m >= n >= k; if m < k, Q = H(1) H(2) . . . H(m-1) and UNGBR returns Q as an M-by-M matrix. If VECT = 'P', A is assumed to have been a K-by-N matrix, and P**H is of order N: if k < n, P**H = G(k) . . . G(2) G(1) and UNGBR returns the first m rows of P**H, where n >= m >= k; if k >= n, P**H = G(n-1) . . . G(2) G(1) and UNGBR returns P**H as an N-by-N matrix. More...
 
interface  unghr
 UNGHR: generates a complex unitary matrix Q which is defined as the product of IHI-ILO elementary reflectors of order N, as returned by CGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1). More...
 
interface  unglq
 UNGLQ: generates an M-by-N complex matrix Q with orthonormal rows, which is defined as the first M rows of a product of K elementary reflectors of order N Q = H(k)**H . . . H(2)**H H(1)**H as returned by CGELQF. More...
 
interface  ungql
 UNGQL: generates an M-by-N complex matrix Q with orthonormal columns, which is defined as the last N columns of a product of K elementary reflectors of order M Q = H(k) . . . H(2) H(1) as returned by CGEQLF. More...
 
interface  ungqr
 UNGQR: generates an M-by-N complex matrix Q with orthonormal columns, which is defined as the first N columns of a product of K elementary reflectors of order M Q = H(1) H(2) . . . H(k) as returned by CGEQRF. More...
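 A typical pairing is geqrf followed by ungqr to form the thin Q explicitly. A minimal sketch, assuming both generic interfaces follow the standard CGEQRF/CUNGQR argument lists and that geqrf is also provided by la_lapack; the input data and fixed workspace size are illustrative.

    program demo_ungqr
       use la_lapack, only: geqrf, ungqr
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 4, n = 2
       complex(dp) :: a(m,n), tau(n), work(64*n)
       integer :: info, k
       a = reshape([(cmplx(k, m*n - k, kind=dp), k = 1, m*n)], [m,n])
       call geqrf(m, n, a, m, tau, work, size(work), info)      ! a <- R and the reflectors
       call ungqr(m, n, n, a, m, tau, work, size(work), info)   ! a <- the explicit m-by-n Q
       if (info /= 0) print *, 'info =', info
    end program demo_ungqr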
 
interface  ungrq
 UNGRQ: generates an M-by-N complex matrix Q with orthonormal rows, which is defined as the last M rows of a product of K elementary reflectors of order N Q = H(1)**H H(2)**H . . . H(k)**H as returned by CGERQF. More...
 
interface  ungtr
 UNGTR: generates a complex unitary matrix Q which is defined as the product of n-1 elementary reflectors of order N, as returned by CHETRD: if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), if UPLO = 'L', Q = H(1) H(2) . . . H(n-1). More...
 
interface  ungtsqr
 UNGTSQR: generates an M-by-N complex matrix Q_out with orthonormal columns, which are the first N columns of a product of complex unitary matrices of order M which are returned by CLATSQR: Q_out = first_N_columns_of( Q(1)_in * Q(2)_in * ... * Q(k)_in ). See the documentation for CLATSQR. More...
 
interface  ungtsqr_row
 UNGTSQR_ROW: generates an M-by-N complex matrix Q_out with orthonormal columns from the output of CLATSQR. These N orthonormal columns are the first N columns of a product of complex unitary matrices Q(k)_in of order M, which are returned by CLATSQR in a special format. Q_out = first_N_columns_of( Q(1)_in * Q(2)_in * ... * Q(k)_in ). The input matrices Q(k)_in are stored in row and column blocks in A. See the documentation of CLATSQR for more details on the format of Q(k)_in, where each Q(k)_in is represented by block Householder transformations. This routine calls an auxiliary routine CLARFB_GETT, where the computation is performed on each individual block. The algorithm first sweeps NB-sized column blocks from the right to left starting in the bottom row block and continues to the top row block (hence _ROW in the routine name). This sweep is in reverse order of the order in which CLATSQR generates the output blocks. More...
 
interface  unhr_col
 UNHR_COL: takes an M-by-N complex matrix Q_in with orthonormal columns as input, stored in A, and performs Householder Reconstruction (HR), i.e. reconstructs Householder vectors V(i) implicitly representing another M-by-N matrix Q_out, with the property that Q_in = Q_out*S, where S is an N-by-N diagonal matrix with diagonal entries equal to +1 or -1. The Householder vectors (columns V(i) of V) are stored in A on output, and the diagonal entries of S are stored in D. Block reflectors are also returned in T (same output format as CGEQRT). More...
 
interface  unm2l
 UNM2L: overwrites the general complex m-by-n matrix C with Q * C if SIDE = 'L' and TRANS = 'N', or Q**H * C if SIDE = 'L' and TRANS = 'C', or C * Q if SIDE = 'R' and TRANS = 'N', or C * Q**H if SIDE = 'R' and TRANS = 'C', where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(k) . . . H(2) H(1) as returned by CGEQLF. Q is of order m if SIDE = 'L' and of order n if SIDE = 'R'. More...
 
interface  unm2r
 UNM2R: overwrites the general complex m-by-n matrix C with Q * C if SIDE = 'L' and TRANS = 'N', or Q**H * C if SIDE = 'L' and TRANS = 'C', or C * Q if SIDE = 'R' and TRANS = 'N', or C * Q**H if SIDE = 'R' and TRANS = 'C', where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by CGEQRF. Q is of order m if SIDE = 'L' and of order n if SIDE = 'R'. More...
 
interface  unmbr
 If VECT = 'Q', UNMBR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 If VECT = 'P', UNMBR overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      P * C          C * P
  TRANS = 'C':      P**H * C       C * P**H

 Here Q and P**H are the unitary matrices determined by CGEBRD when reducing a complex matrix A to bidiagonal form: A = Q * B * P**H. Q and P**H are defined as products of elementary reflectors H(i) and G(i) respectively. Let nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Thus nq is the order of the unitary matrix Q or P**H that is applied. If VECT = 'Q', A is assumed to have been an NQ-by-K matrix: if nq >= k, Q = H(1) H(2) . . . H(k); if nq < k, Q = H(1) H(2) . . . H(nq-1). If VECT = 'P', A is assumed to have been a K-by-NQ matrix: if k < nq, P = G(1) G(2) . . . G(k); if k >= nq, P = G(1) G(2) . . . G(nq-1). More...
 
interface  unmhr
 UNMHR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of IHI-ILO elementary reflectors, as returned by CGEHRD: Q = H(ilo) H(ilo+1) . . . H(ihi-1). More...
 
interface  unmlq
 UNMLQ: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(k)**H . . . H(2)**H H(1)**H as returned by CGELQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  unmql
 UNMQL: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(k) . . . H(2) H(1) as returned by CGEQLF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  unmqr
 UNMQR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by CGEQRF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
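 Applying Q**H without forming Q is the usual first step of a QR-based least-squares solve. A minimal sketch, assuming the standard CGEQRF/CUNMQR argument lists (for UNMQR: SIDE, TRANS, M, N, K, A, LDA, TAU, C, LDC, WORK, LWORK, INFO); the data and fixed workspace size are illustrative.

    program demo_unmqr
       use la_lapack, only: geqrf, unmqr
       implicit none
       integer, parameter :: dp = kind(1.0d0)
       integer, parameter :: m = 4, n = 2
       complex(dp) :: a(m,n), tau(n), c(m,1), work(64*m)
       integer :: info, k
       a = reshape([(cmplx(k, 1, kind=dp), k = 1, m*n)], [m,n])
       c = (1.0_dp, 0.0_dp)
       call geqrf(m, n, a, m, tau, work, size(work), info)
       ! c <- Q**H * c without forming Q explicitly
       call unmqr('L', 'C', m, 1, n, a, m, tau, c, m, work, size(work), info)
       if (info /= 0) print *, 'info =', info
    end program demo_unmqr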
 
interface  unmrq
 UNMRQ: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(1)**H H(2)**H . . . H(k)**H as returned by CGERQF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  unmrz
 UNMRZ: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix defined as the product of k elementary reflectors Q = H(1) H(2) . . . H(k) as returned by CTZRZF. Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'. More...
 
interface  unmtr
 UNMTR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by CHETRD: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). More...
 
interface  upgtr
 UPGTR: generates a complex unitary matrix Q which is defined as the product of n-1 elementary reflectors H(i) of order n, as returned by CHPTRD using packed storage: if UPLO = 'U', Q = H(n-1) . . . H(2) H(1), if UPLO = 'L', Q = H(1) H(2) . . . H(n-1). More...
 
interface  upmtr
 UPMTR: overwrites the general complex M-by-N matrix C with

                  SIDE = 'L'     SIDE = 'R'
  TRANS = 'N':      Q * C          C * Q
  TRANS = 'C':      Q**H * C       C * Q**H

 where Q is a complex unitary matrix of order nq, with nq = m if SIDE = 'L' and nq = n if SIDE = 'R'. Q is defined as the product of nq-1 elementary reflectors, as returned by CHPTRD using packed storage: if UPLO = 'U', Q = H(nq-1) . . . H(2) H(1); if UPLO = 'L', Q = H(1) H(2) . . . H(nq-1). More...