Network Programming
These are my notes on the network programming chapter of the CSAPP book.
This is roughly equivalent to the Hough transform.
For $|x|^{2-k}\ll 1$, the loss behaves like $x^2$; that is, for $|x|<e^{-C/(2-k)}$ with $C\sim3$. In linear regression we often use the loss function $L_2(x)=x^2/2$, which leads to linear least-squares fitting.
For $k=0$, we can define Gain function $$G(x)=1-L_0(x)=\frac{1}{1+x^2}$$
Consider scaling factor $l$, $$G_l(x)=\frac{1}{1+(x/l)^2}$$
For some parameter $\lambda$, calculate the gain $\Gamma(\lambda)=\sum_i G_l(x_i)$. The value of $\lambda$ that maximizes the gain is the best estimate.
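As a minimal sketch of gain maximization: estimate a location parameter $\lambda$ by scanning a grid and maximizing $\Gamma(\lambda)$ with residuals $x_i = \mathrm{data}_i - \lambda$. The data set and scale $l=1$ here are made up for illustration; note how a gross outlier barely affects the gain-based estimate, while it drags the mean far away.

```python
import numpy as np

def gain(data, lam, l=1.0):
    # Gamma(lambda) = sum_i G_l(x_i) with residuals x_i = data_i - lambda
    x = data - lam
    return np.sum(1.0 / (1.0 + (x / l)**2))

# inliers clustered near 5.0 plus one gross outlier
data = np.array([4.9, 5.0, 5.1, 5.05, 4.95, 100.0])

# scan a grid of candidate lambdas; the gain peaks at the inlier cluster
grid = np.linspace(0, 110, 11001)
best = grid[np.argmax([gain(data, lam) for lam in grid])]
```

Each point can contribute at most $G_l=1$ to the gain, so the outlier cannot dominate; this is what makes the estimate robust.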
Let the total amount of money be $M$, the number of people be $N$, and the money held by person $i$ be $m_i$.
We need to sample uniformly at random the lattice points $(m_1,\ldots, m_N)$ satisfying the constraints: $$\sum_{i=1}^N m_i = M, \quad m_i\ge 0$$
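One way to sample such lattice points uniformly is the stars-and-bars construction (this sampler is my sketch, not necessarily the method used later in the note): placing $N-1$ bars among $M+N-1$ slots gives every valid composition the same probability.

```python
import random

def sample_money(M, N, rng=random):
    """Uniformly sample (m_1, ..., m_N) with sum M and m_i >= 0.

    Stars and bars: choosing N-1 distinct bar positions among the
    M+N-1 slots makes every valid lattice point equally likely.
    """
    bars = sorted(rng.sample(range(M + N - 1), N - 1))
    cuts = [-1] + bars + [M + N - 1]
    # gaps between consecutive bars count the stars (money) in each slot
    return [cuts[k + 1] - cuts[k] - 1 for k in range(N)]

m = sample_money(100, 10)
```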
Get frame rate information
videoname='T-L _ 1-50 tip-tip.avi'
ffmpeg -i "$videoname" 2>&1 | grep -o '[0-9]\+ fps'
The output is 30 fps
ffmpeg -i "$videoname" -r 30 output_%04d.png
The Canny edge detector is used in this step.
from capillary import edge, fitting, display
from importlib import reload
reload(fitting);
reload(display);
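The `capillary` module above is the author's own and is not shown here. As a hypothetical stand-in, scikit-image's `canny` illustrates what the edge-detection step does; the synthetic image and the `sigma` value are made up for this example.

```python
import numpy as np
from skimage.feature import canny

# synthetic stand-in image: a bright disk on a dark background
yy, xx = np.mgrid[:64, :64]
img = ((xx - 32.0)**2 + (yy - 32.0)**2 < 15**2).astype(float)

# Canny marks the ring of edge pixels around the disk as True
edges = canny(img, sigma=2)
```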
If we print a NumPy array, `print()` falls back to `str()`, and the result is almost irreversible — it cannot be parsed back into the array:

from numpy import arange
l = arange(16).reshape(4, 4)
print('l is printed as:\n', l)

Since `print()` falls back to `str()`, `str()` is not the correct way to serialize an array. Use `repr()`, which produces valid Python syntax, or `.tolist()`, which converts the array into a nested Python list.
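A quick check of the reversibility of each form — `str()` drops the commas and the `array(...)` wrapper, while `repr()` and `.tolist()` both round-trip:

```python
import numpy as np

l = np.arange(16).reshape(4, 4)

# str(): no commas, no dtype info -- not valid Python, hard to parse back
s = str(l)

# repr() emits valid Python syntax, so it round-trips through eval()
l2 = eval(repr(l), {'array': np.array})

# .tolist() gives nested Python lists, round-trippable via np.array()
l3 = np.array(l.tolist())
```

Note that `repr()` truncates very large arrays with `...` unless the `threshold` in `np.set_printoptions` is raised.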
import pandas as pd
from functools import reduce
data = [pd.read_table('%d.txt' % i) for i in range(2, 5)]
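A plausible use of the imported `reduce` is folding the list of tables into one wide table by pairwise merges. The file contents and the `id` key column below are assumptions standing in for the real `2.txt`–`4.txt`:

```python
import pandas as pd
from functools import reduce
from io import StringIO

# hypothetical stand-ins for 2.txt..4.txt -- assuming the real files are
# tab-separated and share a common 'id' column
files = [StringIO(f"id\tv{k}\n1\t{10*k}\n2\t{20*k}\n") for k in (2, 3, 4)]
tables = [pd.read_csv(f, sep='\t') for f in files]

# reduce() folds the list into one wide table by pairwise merges on 'id'
merged = reduce(lambda a, b: pd.merge(a, b, on='id'), tables)
```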
Suppose we have two quantum systems $a, b$, with dimension $N_a, N_b$ respectively. Then the Hilbert space of $a+b$ is of dimension $N=N_aN_b$. Suppose we have a density matrix $$\hat\rho=\sum_{i,j}\rho_{ij}\lvert i\rangle \langle j\rvert=\sum_{i,j,k,l}\rho_{ijkl}\lvert i\rangle_a\lvert j\rangle_b \langle k\rvert_a\langle l\rvert_b$$
Then the reduced density matrix of $a$ is defined as $$\hat\rho_a=\mathrm{tr}_b\hat\rho=\sum_i \langle i\rvert_b\hat\rho\lvert i\rangle_b$$
i.e. the reduced-density-matrix problem is equivalent to the partial-trace problem.
In fact, if we take $\hat\rho$ as a 4-tensor $\rho_{ijkl}$, then the reduced density matrix is $$\rho^{(a)}_{ik}=\delta^{\mu\nu}\rho_{i\mu k\nu}$$ For a simple density matrix $\rho=\lvert \psi\rangle \langle \psi\rvert$, the reduced matrix is $$\rho^{(a)}_{ik}=\delta^{jl}\rho_{ijkl}=\delta^{jl}\psi_{ij}\psi^+_{lk}=[\psi\psi^+]_{ik}$$ Here we are taking $\psi$ as an $N_a\times N_b$ matrix.
For general case, if we find decomposition $$\rho=\sum_c \lambda_c\lvert \psi_c\rangle \langle \psi_c\rvert,\quad \sum_c \lambda_c=1$$ then we have $$\rho^{(a)}_{ik}=\left[\sum_c\lambda_c\psi_c\psi^+_c\right]_{ik}$$
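The matrix form $\rho^{(a)}=\psi\psi^+$ is a few lines of NumPy; as a sanity check (my example, not from the note), the Bell state $(\lvert 00\rangle+\lvert 11\rangle)/\sqrt 2$ should reduce to the maximally mixed state $I/2$:

```python
import numpy as np

def reduced_a(psi, Na, Nb):
    # view the pure state as an Na x Nb matrix; then rho_a = psi psi^dagger
    m = np.asarray(psi).reshape(Na, Nb)
    return m @ m.conj().T

# Bell state (|00> + |11>)/sqrt(2): reduced matrix is maximally mixed
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_a = reduced_a(psi, 2, 2)

# general 4-tensor route: rho_{ijkl} = psi_ij psi*_kl, then contract j = l
rho4 = np.einsum('ij,kl->ijkl', psi.reshape(2, 2), psi.conj().reshape(2, 2))
rho_a2 = np.einsum('imkm->ik', rho4)
```

The `einsum` contraction `'imkm->ik'` is exactly $\delta^{\mu\nu}\rho_{i\mu k\nu}$ and agrees with the matrix form.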
Suppose that when the money is handed out, the smaller amount $y$ has prior distribution $\rho(y)$. If someone draws the amount $x$, then $y$ must be either $x$ or $x/2$.
Given the parameter $y$, the distribution of $x$ is $$p(x|y)=\frac{\delta(x-2y)+\delta(x-y)}{2}$$
Therefore the marginal probability of $x$ is:
\begin{align} p(x)&=\int p(x|y)\rho(y)dy\\ &=\frac{1}{2}\int [\delta(x-2y)+\delta(x-y)]\rho(y)dy\\ &=\frac{\rho(x)}{2}+\frac{\rho(x/2)}{4} \end{align}

ipynb2pelican is used to provide Jupyter ipynb support in Pelican.
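Returning to the marginal derived above, $p(x)=\rho(x)/2+\rho(x/2)/4$: it can be checked by Monte Carlo. The exponential prior $\rho(y)=e^{-y}$ below is an assumption chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed prior rho(y) = e^{-y}, used only to make the check concrete
n = 200_000
y = rng.exponential(1.0, n)
x = y * rng.choice([1, 2], size=n)   # drawn amount is y or 2y, each w.p. 1/2

# predicted marginal p(x) = rho(x)/2 + rho(x/2)/4
grid = np.linspace(0.1, 4.0, 40)
pred = np.exp(-grid) / 2 + np.exp(-grid / 2) / 4

# empirical density of the samples, interpolated onto the same grid
hist, edges = np.histogram(x, bins=200, range=(0, 8), density=True)
centers = (edges[:-1] + edges[1:]) / 2
emp = np.interp(grid, centers, hist)
```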
This is my blog's source code repo: https://github.com/peijunz/peijunz.github.io/tree/src, together with its .travis.yml.
ghp-import is needed to publish to GitHub Pages; it can be installed with `pip install ghp-import`.