**Running inference with a pre-trained network**
**Transfer learning**
In this post, I will show you how to use a pre-trained network that has been saved as a .mat file.
The .mat file only contains the filter weights of the convolutional and fully connected layers of the trained network as NumPy arrays, so you need to know the structure of the network in advance.
This post builds on my previous post: 2017/09/22 - [Deep Learning] - [tensorflow] how to save the filter weights of the trained network as matfile.
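Since the .mat file stores nothing but the weights, it is worth checking which variable names it actually contains before wiring up the network. Here is a minimal sketch, assuming the file is named vgg_net.mat as in the code below (the path is just a placeholder):

import scipy.io

# load the .mat file and list the saved variable names
# (keys starting with '__' are metadata added by scipy.io.loadmat)
vgg_net = scipy.io.loadmat('path/to/vgg_net.mat')
for key, value in vgg_net.items():
    if not key.startswith('__'):
        print(key, value.shape)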
First, define all the computational operations required for inference (or regression) with the network, as an ordered list of layer names.
layers = ('cons11', 'relu11', 'cons12', 'relu12', 'maxp12',
          'cons21', 'relu21', 'cons22', 'relu22', 'maxp22',
          'cons31', 'relu31', 'cons32', 'relu32', 'conv33', 'relu33', 'maxp33',
          'cons41', 'relu41', 'cons42', 'relu42', 'conv43', 'relu43',
          'cons51', 'relu51', 'cons52', 'relu52', 'conv53', 'relu53',
          'fcl_6', 'relu6',
          'fcl_7', 'relu7',
          'fcl_8', 'relu8',
          'outp')
Next, write a for-loop that 1) loads each filter weight, 2) initializes a tensor with that weight, and 3) defines the corresponding computational operation.
# utils.py -- helper functions referenced below as utils.get_variable / utils.conv2d_valid
import tensorflow as tf

def get_variable(weights, name, isTrain):
    # create a variable initialized with the given pre-trained weights (NumPy array)
    init = tf.constant_initializer(weights, dtype=tf.float32)
    var = tf.get_variable(name=name, initializer=init, shape=weights.shape, trainable=isTrain)
    return var

def conv2d_valid(x, W, bias):
    # 2-D convolution with VALID padding, followed by bias addition
    conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="VALID")
    return tf.nn.bias_add(conv, bias)


# main script
import scipy.io
import tensorflow as tf
import utils

def vgg_net(input_image, path_to_mat):
    # load the filter weights of the saved network from the vgg_net.mat file
    vgg_net = scipy.io.loadmat(path_to_mat + 'vgg_net')
    # dictionary that will contain all the tensors
    load_net = {}
    current = input_image
    # for all the operations in layers
    for i, name in enumerate(layers):
        type = name[:4]
        # convolution with the valid size
        if type == 'conv':
            # load the corresponding filter weights (NumPy arrays)
            kernel = vgg_net[name + 'w:0']
            bias = vgg_net[name + 'b:0']
            # initialize tensors with the filter weights (frozen: trainable=False)
            kernel = utils.get_variable(kernel, name=name + "w_", isTrain=False)
            bias = utils.get_variable(bias.reshape(-1), name=name + "b_", isTrain=False)
            # do the convolution
            current = utils.conv2d_valid(current, kernel, bias)
        # convolution with the same size
        elif type == 'cons':
            ...
        elif type == 'relu':
            current = tf.nn.relu(current)
        elif type == 'maxp':
            ...
        elif type == 'fcl_':
            ...
        elif type == 'outp':
            ...
        load_net[name] = current
    return load_net
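The branches elided with ... above depend on the network you saved. As one possible sketch for the 'cons' (same-size convolution) and 'maxp' (max pooling) cases (the SAME padding and the 2x2 pooling window are my assumptions here, not something the .mat file specifies), the corresponding helpers could look like:

import tensorflow as tf

def conv2d_same(x, W, bias):
    # 2-D convolution that keeps the spatial size (SAME padding), plus bias addition
    conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME")
    return tf.nn.bias_add(conv, bias)

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2, which halves the spatial size
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

Inside the loop, the 'cons' branch would then load the kernel and bias exactly like the 'conv' branch but call conv2d_same, and the 'maxp' branch would simply set current = max_pool_2x2(current).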
Finally, you can use the function 'vgg_net' inside a session to run inference (or regression).
input_image = tf.placeholder(...)
path_to_mat = '....'
regression = vgg_net(input_image, path_to_mat)

with tf.Session() as sess:
    # running the initializers fills the variables with the loaded weights
    sess.run(tf.global_variables_initializer())
    infer_result = sess.run(regression, feed_dict={input_image: nparray_image})
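As a concrete example of the elided placeholder and feed, assuming the network takes 224x224 RGB images (the shape and the random stand-in image below are only illustrative; use the input your network was actually trained on):

import numpy as np
import tensorflow as tf

# assumed input size; replace with the size your network was trained with
input_image = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name="input_image")

# stand-in for a real image: a batch of one 224x224 RGB image as a NumPy array
nparray_image = np.random.rand(1, 224, 224, 3).astype(np.float32)

Note that tf.global_variables_initializer() does not randomize the weights here: every variable was created with a tf.constant_initializer in utils.get_variable, so running the initializer is exactly the step that copies the pre-trained weights from the .mat file into the graph.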