2D Convolution: Dealing with shapes

- The value of C must be the same for the input and the kernels (i.e., each kernel has the same number of channels as the input)
- Until you are comfortable with the code, make a habit of checking the shapes at every step
- Number of parameters: (w x h x C + 1) x D
- the +1 is the bias
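The parameter-count formula above can be checked with a tiny helper (a sketch; the function name `conv2d_params` is mine, not a TensorFlow API):

```python
def conv2d_params(w, h, C, D):
    """Trainable parameters in a conv layer:
    each of the D kernels has w*h*C weights plus 1 bias."""
    return (w * h * C + 1) * D

# e.g. 3x3 kernels over a 3-channel (RGB) input, 32 output channels
print(conv2d_params(3, 3, 3, 32))  # (3*3*3 + 1) * 32 = 896
```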
Multi-channel 2D Convolution
- A 1D signal is converted into a 1D signal
- A 2D signal into a 2D signal
- Parts that are neighbors in the input signal remain neighbors in the output signal
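A multi-channel 2D convolution can be sketched naively in NumPy (a loop-based sketch for clarity, not how TensorFlow implements it; like most deep learning frameworks, it is really a cross-correlation):

```python
import numpy as np

def conv2d_multichannel(x, kernels):
    """'Valid' multi-channel convolution.
    x: (H, W, C), kernels: (h, w, C, D) -> output: (H-h+1, W-w+1, D)."""
    H, W, C = x.shape
    h, w, C_k, D = kernels.shape
    assert C == C_k  # channel counts of input and kernels must match
    out = np.zeros((H - h + 1, W - w + 1, D))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = x[i:i + h, j:j + w, :]  # (h, w, C) neighborhood
            # contract over (h, w, C), leaving the D output channels
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.random.rand(5, 5, 3)       # 5x5 input with 3 channels
k = np.random.rand(3, 3, 3, 4)    # four 3x3x3 kernels
print(conv2d_multichannel(x, k).shape)  # (3, 3, 4)
```

Note how each output pixel depends only on a local neighborhood of the input, which is why neighboring input parts stay neighbors in the output.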
Structure of Convolution Layer

- Input
- Conv. blocks
  - Convolution + activation (ReLU)
  - Convolution + activation (ReLU)
  - ... (repeat)
  - Max pooling
- Output
  - Fully connected layers
  - Softmax
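The pooling step that closes each conv block can be sketched in NumPy (the `relu` and `maxpool2d` helpers here are illustrative, not TensorFlow APIs):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def maxpool2d(x, p=2):
    """Non-overlapping p x p max pooling over a (H, W, C) feature map."""
    H, W, C = x.shape
    x = x[:H - H % p, :W - W % p, :]  # drop rows/cols that don't fit
    return x.reshape(H // p, p, W // p, p, C).max(axis=(1, 3))

# After the conv + relu steps of a block, pooling halves the spatial size:
feat = np.random.randn(28, 28, 32)   # hypothetical conv-block output
print(maxpool2d(relu(feat)).shape)   # (14, 14, 32)
```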
Summary
- Learn features in the input image through convolution
- Introduce non-linearity through the activation function
- Reduce dimensionality and preserve spatial invariance with pooling
- Convolution and pooling layers output high-level features of the input
- The fully connected layer uses these features to classify the input image
- Express the output as the probability of the image belonging to a particular class

CNN with Tensorflow
- MNIST example

#input layer
input_h = 28 # Input height
input_w = 28 # Input width
input_ch = 1 # Input channel (gray scale)
# (None, 28, 28, 1)
# first convolution layer
k1_h = 3
k1_w = 3
k1_ch = 32
p1_h = 2
p1_w = 2
# (None, 14, 14, 32)
# second convolution layer
k2_h = 3
k2_w = 3
k2_ch = 64
p2_h = 2
p2_w = 2
# (None, 7, 7, 64)
## fully connected
# flatten the features -> (None, 7*7*64)
conv_result_size = int((28 / (2 * 2)) * (28 / (2 * 2)) * k2_ch)  # 7 * 7 * 64 = 3136
n_hidden = 100
n_output = 10
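The commented shapes above can be reproduced with a small shape-tracing helper (a sketch; `after_block` is a hypothetical name, and it assumes 'same'-padded convolutions so only the pooling shrinks the spatial dimensions):

```python
def after_block(h, w, k_ch, p_h, p_w):
    """Shape after one conv block: 'same'-padded convs keep h x w and
    change the channel count; pooling divides h and w."""
    return h // p_h, w // p_w, k_ch

h, w, ch = 28, 28, 1                   # input: (None, 28, 28, 1)
h, w, ch = after_block(h, w, 32, 2, 2)
print((h, w, ch))                      # (14, 14, 32)
h, w, ch = after_block(h, w, 64, 2, 2)
print((h, w, ch))                      # (7, 7, 64)
print(h * w * ch)                      # 3136 features after flatten
```

The flattened 3136-dimensional vector then feeds the fully connected layers (n_hidden = 100, n_output = 10).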