Reduced density matrix and partial trace

Posted on Wed 03 May 2017 in Physics

Reduced density matrix

Suppose we have two quantum systems $a, b$, with dimensions $N_a, N_b$ respectively. Then the Hilbert space of $a+b$ has dimension $N=N_aN_b$. Suppose we have a density matrix $$\hat\rho=\sum_{i,j}\rho_{ij}\lvert i\rangle \langle j\rvert=\sum_{i,j,k,l}\rho_{ijkl}\lvert i\rangle_a\lvert j\rangle_b \langle k\rvert_a\langle l\rvert_b$$

Then the reduced density matrix of $a$ is defined as $$\hat\rho_a=\mathrm{tr}_b\hat\rho=\sum_i \langle i\rvert_b\hat\rho\lvert i\rangle_b$$

i.e. computing a reduced density matrix is equivalent to computing a partial trace.
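As a concrete check (a minimal sketch, not from the original post): for the Bell state $\lvert\psi\rangle=(\lvert 00\rangle+\lvert 11\rangle)/\sqrt 2$, the definition $\hat\rho_a=\sum_j \langle j\rvert_b\hat\rho\lvert j\rangle_b$ can be applied directly, and the result is the maximally mixed state:

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2); subsystems a and b are qubits
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full density matrix |psi><psi|

# tr_b rho = sum_j (I x <j|) rho (I x |j>), implemented with np.kron
I = np.eye(2)
rho_a = sum(np.kron(I, b[np.newaxis, :]) @ rho @ np.kron(I, b[:, np.newaxis])
            for b in np.eye(2))
print(rho_a)  # -> 0.5 * identity, the maximally mixed state
```

Tracing out either qubit of a maximally entangled state destroys all information about the phase between $\lvert 00\rangle$ and $\lvert 11\rangle$, which is why the result is proportional to the identity.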

Tensor

In fact, if we regard $\hat\rho$ as a 4-tensor $\rho_{ijkl}$, then the reduced density matrix is $$\rho^{(a)}_{ik}=\delta^{\mu\nu}\rho_{i\mu k\nu}$$ For a pure density matrix $\rho=\lvert \psi\rangle \langle \psi\rvert$, the reduced density matrix is $$\rho^{(a)}_{ik}=\delta^{jl}\rho_{ijkl}=\delta^{jl}\psi_{ij}\psi^+_{lk}=[\psi\psi^+]_{ik}$$ Here we are taking $\psi$ as an $N_a\times N_b$ matrix.

In the general case, if we find a decomposition $$\rho=\sum_c \lambda_c\lvert \psi_c\rangle \langle \psi_c\rvert,\quad \lambda_c\ge 0,\quad \sum_c \lambda_c=1$$ then we have $$\rho^{(a)}_{ik}=\left[\sum_c\lambda_c\psi_c\psi^+_c\right]_{ik}$$
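The matrix identity $\rho^{(a)}=\psi\psi^+$ is easy to verify numerically (a sketch with an arbitrary random state; the dimensions $N_a=2$, $N_b=3$ are chosen here only for illustration):

```python
import numpy as np

Na, Nb = 2, 3
rng = np.random.default_rng(0)

# a random normalized pure state, stored as an Na x Nb matrix psi_{ij}
psi = rng.normal(size=(Na, Nb)) + 1j * rng.normal(size=(Na, Nb))
psi /= np.linalg.norm(psi)

# rho^(a) = psi psi^dagger
rho_a = psi @ psi.conj().T

# compare with the partial trace of the full tensor rho_{ijkl} = psi_{ij} psi*_{kl}
rho = np.einsum('ij,kl->ijkl', psi, psi.conj())
rho_a_ref = np.einsum('ijkj->ik', rho)
print(np.allclose(rho_a, rho_a_ref))  # True
```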

Partial trace

Computation of partial trace by hand

$\rho$ can be regarded as an $N_a\times N_a$ array of block matrices $A_{ij}$, each of size $N_b\times N_b$.

  • Partial trace over $a$ sums the blocks on the diagonal: $\rho_b=\sum_i A_{ii}$
  • Partial trace over $b$ replaces every block by its trace: $[\rho_a]_{ij}=\mathrm{tr}(A_{ij})$
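The block picture translates directly into code (a sketch, using $N_a=N_b=2$ and an arbitrary normalized matrix as $\rho$):

```python
import numpy as np

Na, Nb = 2, 2
rho = np.arange(16, dtype=float).reshape(4, 4)
rho /= np.trace(rho)  # normalize so that tr(rho) = 1

# view rho as an Na x Na grid of Nb x Nb blocks A_ij
blocks = [[rho[i*Nb:(i+1)*Nb, j*Nb:(j+1)*Nb] for j in range(Na)]
          for i in range(Na)]

# tr_a: sum the diagonal blocks A_ii
rho_b = sum(blocks[i][i] for i in range(Na))

# tr_b: replace every block A_ij by its trace
rho_a = np.array([[np.trace(blocks[i][j]) for j in range(Na)]
                  for i in range(Na)])

print(rho_a)
print(rho_b)
```

Both results have unit trace, as any reduced density matrix must.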

Computation of partial trace in numpy

It seems that there is no dedicated partial trace function in numpy, but there is a trace (contraction) function for ndarrays (tensors).

As an example, suppose we have an $N\times N$ density matrix $\rho$:

In [7]:
from numpy import arange, newaxis, trace
n1, n2, n = 2, 2, 4
A = arange(n)[newaxis, :]
rho = 1 + A + A.transpose()  # broadcasting gives rho[i, j] = 1 + i + j
rho = rho / trace(rho)       # normalize so that tr(rho) = 1
print(rho)
[[ 0.0625  0.125   0.1875  0.25  ]
 [ 0.125   0.1875  0.25    0.3125]
 [ 0.1875  0.25    0.3125  0.375 ]
 [ 0.25    0.3125  0.375   0.4375]]

Then, we can recast it to a tensor with more axes by .reshape().

In [3]:
rho_tensor=rho.reshape([n1, n2, n1, n2]);

The last step is contraction:

  • Partial trace over $a$ is contraction over axes $0, 2$, which produces $\rho_b$
  • Partial trace over $b$ is contraction over axes $1, 3$, which produces $\rho_a$
In [4]:
trace(rho_tensor, axis1=1, axis2=3) #rho_a
Out[4]:
array([[ 0.25,  0.5 ],
       [ 0.5 ,  0.75]])
In [5]:
trace(rho_tensor, axis1=0, axis2=2) #rho_b
Out[5]:
array([[ 0.375,  0.5  ],
       [ 0.5  ,  0.625]])
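The same contractions can also be written with np.einsum, which makes the index pattern $\rho_{ijkl}$ explicit (equivalent to the trace calls above; shown here as a self-contained sketch rebuilding the same $\rho$):

```python
import numpy as np

n1, n2 = 2, 2
A = np.arange(4)[np.newaxis, :]
rho = (1 + A + A.transpose()) / 16  # same rho as above, tr(rho) = 1
rho_tensor = rho.reshape([n1, n2, n1, n2])

rho_a = np.einsum('ijkj->ik', rho_tensor)  # contract the b indices (axes 1, 3)
rho_b = np.einsum('ijil->jl', rho_tensor)  # contract the a indices (axes 0, 2)
print(rho_a)  # [[0.25, 0.5], [0.5, 0.75]]
print(rho_b)  # [[0.375, 0.5], [0.5, 0.625]]
```

Repeating a letter in the subscript string tells einsum to sum over that index pair, so the correspondence with $\delta^{jl}\rho_{ijkl}$ is immediate.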