## Network Programming

These are my notes from reading the Network Programming chapter of the CSAPP book.

## Sub-Quadratic Loss Function

$$L_k(x)=\frac{x^2}{1+|x|^{2-k}}, \quad 0\leq k\leq 2$$

$$L_k(x)\sim x^2 \ (x\to0),\qquad L_k(x)\sim |x|^k\ (x\to\infty)$$

For $|x|^{2-k}\ll 1$ the loss behaves like $x^2$; this holds when $|x|<e^{-C/(2-k)}$, where $C\sim3$. In linear regression we often use the loss $L_2(x)=x^2/2$, which leads to ordinary least-squares fitting.

## Gain Function

For $k=0$, we can define Gain function $$G(x)=1-L_0(x)=\frac{1}{1+x^2}$$

Consider scaling factor $l$, $$G_l(x)=\frac{1}{1+(x/l)^2}$$

For some parameter $\lambda$, calculate the gain $\Gamma(\lambda)=\sum_i G_l(x_i)$. Maximizing the gain gives the best estimate of $\lambda$.
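As a worked sketch of this idea, the gain $\Gamma(\lambda)$ can be maximized to estimate a location parameter robustly against outliers. The synthetic data, the scale $l=1$, and the grid-search range below are all assumptions for illustration:

```python
import numpy as np

# Hypothetical data: samples around 5, plus two gross outliers
rng = np.random.default_rng(0)
data = np.concatenate([5 + 0.1 * rng.standard_normal(100), [50.0, -40.0]])

l = 1.0  # scaling factor in G_l

def gain(lam):
    """Total gain Gamma(lambda) = sum_i G_l(x_i) with residuals x_i = d_i - lambda."""
    return np.sum(1.0 / (1.0 + ((data - lam) / l) ** 2))

# Grid search for the lambda that maximizes the gain
grid = np.linspace(-60, 60, 12001)
best = grid[np.argmax([gain(g) for g in grid])]
```

Unlike the quadratic loss, the gain saturates for large residuals, so the two outliers barely pull the estimate away from 5.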

## The Money-Splitting Problem

Let $m_i$ be the amount of money held by person $i$.

• Find the probability distribution of the value of $m_i$
• For a fixed amount $m$, find the number of people holding exactly that amount, $n_m=\sum_i [m_i = m]$
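The problem statement does not fix the dynamics. As one common hypothetical model, let a randomly chosen person hand one coin to another at each step, then count $n_m$ at the end (all parameters below are made up):

```python
import numpy as np

# Hypothetical dynamics: N people each start with 10 coins; at every step a
# random giver i passes one coin to a random receiver j (if i still has money).
rng = np.random.default_rng(0)
N, steps = 1000, 200000
m = np.full(N, 10)

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if m[i] > 0:
        m[i] -= 1
        m[j] += 1

# n_m = number of people holding exactly m coins, for each m
n_m = np.bincount(m)
```

Total money is conserved, and under this exchange rule the distribution of $m_i$ relaxes toward an exponential (Boltzmann-like) shape.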

## Frames Export

Get frame rate information

    videoname="T-L _ 1-50 tip-tip.avi"
    ffmpeg -i "$videoname" 2>&1 | grep -o '[0-9]\+ fps'   # prints: 30 fps
    ffmpeg -i "$videoname" -r 30 output_%04d.png


## Edge Detection

The Canny edge detector is used in this step.

In [1]:
from capillary import edge, fitting, display

In [8]:
from importlib import reload
reload(fitting);
reload(display);
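The author's `capillary.edge` module is not shown here. As a simplified stand-in for the Canny step, the sketch below detects edges by thresholding the Sobel gradient magnitude (the function name and threshold are assumptions, and it omits Canny's smoothing, non-maximum suppression, and hysteresis):

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Boolean edge map from the Sobel gradient magnitude (simplified Canny)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    # Cross-correlation via explicit shifts, so no scipy dependency is needed
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    gx = sum(kx[a, b] * pad[a:a + h, b:b + w]
             for a in range(3) for b in range(3))
    gy = sum(ky[a, b] * pad[a:a + h, b:b + w]
             for a in range(3) for b in range(3))
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()
```

On a sharp vertical step image, this marks the two columns adjacent to the step as edges.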


## Aim

Minimize $$\sum_i \mathrm{distance}^2(\vec r_i, \mathrm{line})=\sum_i (\vec r_i\cdot \hat n-\rho)^2$$ for line $\vec r\cdot \hat n-\rho=0$. It is equivalent to

• The principal axis with the least moment of inertia
• The eigenvector with the largest eigenvalue of the covariance matrix
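Both characterizations can be checked numerically. A minimal sketch, assuming synthetic points along $y=2x+1$ (the slope, intercept, and noise level are made up): the line direction is the top eigenvector of the covariance matrix, and the normal $\hat n$ is the bottom one.

```python
import numpy as np

# Synthetic 2-D points near the line y = 2x + 1
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
points = np.c_[t, 2 * t + 1] + 0.01 * rng.standard_normal((50, 2))

center = points.mean(axis=0)
cov = np.cov((points - center).T)            # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
direction = eigvecs[:, -1]   # largest eigenvalue -> line direction
normal = eigvecs[:, 0]       # smallest eigenvalue -> line normal n
rho = normal @ center        # line equation: r . n = rho
```

The fitted direction should have slope close to 2, and the sum of squared normal distances equals the smallest eigenvalue times the point count (up to the covariance normalization).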

## Convert str(array) back to numpy array

If we print a NumPy array (print() actually uses str()), we will find the result is almost irreversible.

In [5]:
from numpy import arange
l=arange(16).reshape(4,4)
print('l is printed as:\n', l)

l is printed as:
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]
 [12 13 14 15]]


print() falls back to str(), so str() is not the right way to serialize an array. Better options:

• repr()
• .tolist()
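A minimal round-trip sketch of both options (works for small arrays, where repr() is not truncated):

```python
import numpy as np

l = np.arange(16).reshape(4, 4)

# Option 1: repr() produces an evaluable expression like 'array([[ 0, 1, ...]])'
restored = eval(repr(l), {'array': np.array})

# Option 2: .tolist() gives nested lists, which round-trip through str()/eval()
restored2 = np.array(eval(str(l.tolist())))
```

For large arrays repr() elides entries with `...`, so `.tolist()` (or `np.save`) is the safer choice.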

## Data Analysis

In [2]:
import pandas as pd
from functools import reduce

In [3]:
data=[pd.read_table('%d.txt'%i) for i in range(2, 5)]
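The source does not show how the tables are combined, but the `reduce` import suggests folding them pairwise. One hypothetical way, assuming the tables are simply stacked (the in-memory frames below stand in for the `%d.txt` files):

```python
import pandas as pd
from functools import reduce

# Stand-ins for the '%d.txt' tables; the column name 'x' is an assumption
data = [pd.DataFrame({'x': [i, i + 1]}) for i in range(2, 5)]

# Fold all tables into one by concatenation
combined = reduce(lambda a, b: pd.concat([a, b], ignore_index=True), data)
```

If the tables instead share a key column, `reduce(lambda a, b: pd.merge(a, b, on=key), data)` would join them column-wise.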


## Reduced Density Matrix

Suppose we have two quantum systems $a, b$, with dimension $N_a, N_b$ respectively. Then the Hilbert space of $a+b$ is of dimension $N=N_aN_b$. Suppose we have a density matrix $$\hat\rho=\sum_{i,j}\rho_{ij}\lvert i\rangle \langle j\rvert=\sum_{i,j,k,l}\rho_{ijkl}\lvert i\rangle_a\lvert j\rangle_b \langle k\rvert_a\langle l\rvert_b$$

Then the reduced density matrix of $a$ is defined as $$\hat\rho_a=\mathrm{tr}_b\hat\rho=\sum_i \langle i\rvert_b\hat\rho\lvert i\rangle_b$$

i.e. computing a reduced density matrix is equivalent to taking a partial trace.

### Tensor

In fact, if we take $\hat\rho$ as a 4-tensor $\rho_{ijkl}$, then the reduced density matrix is $$\rho^{(a)}_{ik}=\delta^{\mu\nu}\rho_{i\mu k\nu}$$ For a pure density matrix $\rho=\lvert \psi\rangle \langle \psi\rvert$, the reduced matrix is $$\rho^{(a)}_{ik}=\delta^{jl}\rho_{ijkl}=\delta^{jl}\psi_{ij}\psi^+_{lk}=[\psi\psi^+]_{ik}$$ Here we are taking $\psi$ as an $N_a\times N_b$ matrix.

For the general case, if we find a decomposition $$\rho=\sum_c \lambda_c\lvert \psi_c\rangle \langle \psi_c\rvert,\quad \sum_c \lambda_c=1$$ then we have $$\rho^{(a)}_{ik}=\left[\sum_c\lambda_c\psi_c\psi^+_c\right]_{ik}$$
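The tensor formulas above translate directly to `numpy.einsum`. A sketch for a random pure state, where the dimensions $N_a=2$, $N_b=3$ are arbitrary choices:

```python
import numpy as np

Na, Nb = 2, 3
# Random normalized pure state psi, viewed as an Na x Nb matrix psi_{ij}
rng = np.random.default_rng(0)
psi = rng.standard_normal((Na, Nb)) + 1j * rng.standard_normal((Na, Nb))
psi /= np.linalg.norm(psi)

# 4-tensor rho_{ijkl} = psi_{ij} psi*_{kl}
rho = np.einsum('ij,kl->ijkl', psi, psi.conj())

# Partial trace over b: rho^{(a)}_{ik} = sum_mu rho_{i mu k mu}
rho_a = np.einsum('imkm->ik', rho)

# Agrees with the matrix form [psi psi^+]_{ik}
assert np.allclose(rho_a, psi @ psi.conj().T)
```

The reduced matrix is Hermitian with unit trace, as a density matrix must be.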

## Bayesian Analysis

\begin{align} p(x)&=\int p(x|y)\rho(y)dy\\ &=\frac{1}{2}\int [\delta(x-2y)+\delta(x-y)]\rho(y)dy\\ &=\frac{\rho(x)}{2}+\frac{\rho(x/2)}{4} \end{align}
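The likelihood here corresponds to the model $x=2y$ or $x=y$ with probability $1/2$ each. A Monte Carlo check of the marginal, assuming $\rho=\mathrm{Uniform}(0,1)$ (this choice of prior is an assumption for the sake of a concrete check):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100000

# Draw y ~ rho = Uniform(0, 1), then x = 2y or x = y with probability 1/2 each
y = rng.random(N)
double = rng.random(N) < 0.5
x = np.where(double, 2 * y, y)

# Predicted P(x < 1/2) = int_0^{1/2} [rho(x)/2 + rho(x/2)/4] dx = 1/4 + 1/8 = 0.375
estimate = (x < 0.5).mean()
```

The empirical frequency matches the closed-form marginal to Monte Carlo accuracy.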

## Blogging with Jupyter and Pelican

ipynb2pelican is used to provide Jupyter notebook (.ipynb) support in Pelican.

This is my blog's source repo: https://github.com/peijunz/peijunz.github.io/tree/src, together with its .travis.yml.