Date: 2021-05-22
Without further ado, let's look at the code directly!
import tensorflow as tf
import numpy as np

# shape = (batch, in_width, in_channels)
input = tf.constant(1, shape=(64, 10, 1), dtype=tf.float32, name='input')
# shape = (filter_width, in_channels, out_channels)
w = tf.constant(3, shape=(3, 1, 32), dtype=tf.float32, name='w')

conv1 = tf.nn.conv1d(input, w, 2, 'VALID')  # stride 2
print(conv1.shape)  # VALID width = (in_width - kernel_size + 1) / stride = (10 - 3 + 1) / 2 = 4, so (64, 4, 32)

conv2 = tf.nn.conv1d(input, w, 2, 'SAME')   # stride 2
print(conv2.shape)  # SAME width = ceil(in_width / stride) = 10 / 2 = 5, so (64, 5, 32)

conv3 = tf.nn.conv1d(input, w, 1, 'SAME')   # stride 1
print(conv3.shape)  # (64, 10, 32)

with tf.Session() as sess:
    print(sess.run(conv1))
    print(sess.run(conv2))
    print(sess.run(conv3))

By the same rules: with input_shape=(1, 10, 1) and w=(3, 1, 1), conv1 has shape (1, 4, 1); with input_shape=(1, 10, 1) and w=(3, 1, 3), conv1 has shape (1, 4, 3).
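The two width rules above generalize to any input width, kernel size, and stride. Here is a minimal plain-Python sketch of the arithmetic (the helper name is my own, for illustration):

import math

def conv1d_out_width(in_width, kernel_size, stride, padding):
    # 'VALID': no padding; 'SAME': zero-pad so the output covers every stride step
    if padding == 'VALID':
        return math.ceil((in_width - kernel_size + 1) / stride)
    if padding == 'SAME':
        return math.ceil(in_width / stride)

print(conv1d_out_width(10, 3, 2, 'VALID'))  # 4, matching conv1
print(conv1d_out_width(10, 3, 2, 'SAME'))   # 5, matching conv2
print(conv1d_out_width(10, 3, 1, 'SAME'))   # 10, matching conv3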
Supplementary note: an example of using TensorFlow's one-dimensional convolution (conv1d) to process language sequences.
tf.nn.conv1d:
Function signature: tf.nn.conv1d(value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)
Example program:
import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()

# --------------- tf.nn.conv1d -------------------
inputs = tf.ones((64, 10, 3))               # [batch, n_sqs, embedsize]
w = tf.constant(1, tf.float32, (5, 3, 32))  # [w_high, embedsize, n_filters]
conv1 = tf.nn.conv1d(inputs, w, stride=2, padding='SAME')  # conv1 = [batch, ceil(n_sqs/stride), n_filters]; stride is the step size

tf.global_variables_initializer().run()
out = sess.run(conv1)
print(out)

Note: with padding='SAME', one-dimensional convolution zero-pads the input so the output width is ceil(n_sqs/stride); the padding is split between the two ends, and when the total is odd the extra zero goes at the end of the input.
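To see exactly where SAME padding puts the zeros, the rule TensorFlow applies can be reproduced by hand. A small sketch of that arithmetic (the function name is illustrative):

import numpy as np

def same_pad_split(in_width, kernel_size, stride):
    out_width = int(np.ceil(in_width / stride))
    pad_total = max((out_width - 1) * stride + kernel_size - in_width, 0)
    pad_left = pad_total // 2
    pad_right = pad_total - pad_left  # the extra zero (if any) lands at the end
    return out_width, pad_left, pad_right

print(same_pad_split(10, 5, 2))  # (5, 1, 2): 1 zero at the front, 2 at the end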
tf.layers.conv1d:
Function signature: tf.layers.conv1d(inputs, filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, activation=None, use_bias=True, ...)
Example program:
import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()

# --------------- tf.layers.conv1d -------------------
inputs = tf.ones((64, 10, 3))  # [batch, n_sqs, embedsize]
num_filters = 32
kernel_size = 5
conv2 = tf.layers.conv1d(inputs, num_filters, kernel_size, strides=2, padding='valid', name='conv2')
# With 'valid' padding: shape = (batch_size, ceil((n_sqs - kernel_size + 1) / strides), num_filters) = (64, 3, 32)

tf.global_variables_initializer().run()
out = sess.run(conv2)
print(out)
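A key practical difference from tf.nn.conv1d: tf.layers.conv1d creates its own trainable kernel and bias instead of taking an explicit filter tensor. A quick sketch to inspect them (the layer name 'conv_demo' is my own):

import tensorflow as tf

inputs = tf.ones((64, 10, 3))
conv = tf.layers.conv1d(inputs, filters=32, kernel_size=5,
                        strides=2, padding='valid', name='conv_demo')
for v in tf.trainable_variables('conv_demo'):
    print(v.name, v.shape)
# conv_demo/kernel:0 (5, 3, 32)  -> [kernel_size, in_channels, filters]
# conv_demo/bias:0   (32,)
print(conv.shape)  # (64, 3, 32)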
Implementing one-dimensional convolution with two-dimensional convolution:

import tensorflow as tf

sess = tf.InteractiveSession()

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_1x2(x):
    # ksize = [1, pool_height, pool_width, 1]
    # strides = [1, pool_height, pool_width, 1]
    # Note: despite the name, this uses average pooling.
    return tf.nn.avg_pool(x, ksize=[1, 1, 2, 1], strides=[1, 1, 2, 1], padding='SAME')

x = tf.Variable([[1, 2, 3, 4]], dtype=tf.float32)
# This reshape is essential; otherwise the dimensions do not match:
# [batch, in_height, in_width, in_channels] = [1, 1, 4, 1]
x = tf.reshape(x, [1, 1, 4, 1])

W_conv1 = tf.Variable([1, 1, 1], dtype=tf.float32)  # kernel weights
# This reshape is equally essential:
# [filter_height, filter_width, in_channels, out_channels]
W_conv1 = tf.reshape(W_conv1, [1, 3, 1, 1])

h_conv1 = conv2d(x, W_conv1)     # result: [3, 6, 9, 7]
h_pool1 = max_pool_1x2(h_conv1)  # result: [4.5, 8]

tf.global_variables_initializer().run()
print(sess.run(h_conv1))
print(sess.run(h_pool1))
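Both results can be checked by hand with NumPy. A minimal sketch reproducing the SAME convolution and the 1x2 average pooling above:

import numpy as np

x = np.array([1, 2, 3, 4], dtype=np.float32)
k = np.array([1, 1, 1], dtype=np.float32)
padded = np.pad(x, (1, 1), mode='constant')  # SAME padding for kernel size 3, stride 1
conv = np.array([np.dot(padded[i:i + 3], k) for i in range(4)])
print(conv)  # [3. 6. 9. 7.]
pooled = conv.reshape(2, 2).mean(axis=1)  # 1x2 average pooling, stride 2
print(pooled)  # [4.5 8. ]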
Two pooling operations:

# 1: strided max pooling
convs = tf.expand_dims(conv, axis=-1)  # shape = [?, 596, 256, 1]
smp = tf.nn.max_pool(value=convs, ksize=[1, 3, self.config.num_filters, 1],
                     strides=[1, 3, 1, 1], padding='SAME')  # shape = [?, 199, 256, 1]
smp = tf.squeeze(smp, -1)  # shape = [?, 199, 256]
smp = tf.reshape(smp, shape=(-1, 199 * self.config.num_filters))

# 2: global max pooling layer
gmp = tf.reduce_max(conv, reduction_indices=[1], name='gmp')
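The difference between the two pooling strategies is easiest to see on a toy tensor. A NumPy sketch with made-up sizes (batch 2, 6 time steps, 4 filters):

import numpy as np

conv = np.arange(2 * 6 * 4, dtype=np.float32).reshape(2, 6, 4)  # [batch, time, filters]

# 1: strided max pooling, window 3 / stride 3 along the time axis
smp = conv.reshape(2, 2, 3, 4).max(axis=2)
print(smp.shape)  # (2, 2, 4): the time axis shrinks to ceil(6 / 3) = 2

# 2: global max pooling over the whole time axis
gmp = conv.max(axis=1)
print(gmp.shape)  # (2, 4): one value per filter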
Convolution with kernels of different sizes:

kernel_sizes = [3, 4, 5]  # convolution kernels with window sizes 3, 4, and 5
with tf.name_scope("mul_cnn"):
    pooled_outputs = []
    for kernel_size in kernel_sizes:
        # CNN layer
        conv = tf.layers.conv1d(embedding_inputs, self.config.num_filters, kernel_size,
                                name='conv-%s' % kernel_size)
        # global max pooling layer
        gmp = tf.reduce_max(conv, reduction_indices=[1], name='gmp')
        pooled_outputs.append(gmp)
    self.h_pool = tf.concat(pooled_outputs, 1)  # concatenate the pooled outputs
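Each kernel size contributes one vector of num_filters values per example, so the concatenation multiplies the feature dimension by len(kernel_sizes). A toy check of the shape, assuming num_filters = 256:

import numpy as np

pooled_outputs = [np.zeros((8, 256)) for _ in [3, 4, 5]]  # one gmp per kernel size
h_pool = np.concatenate(pooled_outputs, axis=1)
print(h_pool.shape)  # (8, 768) = (batch, len(kernel_sizes) * num_filters)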
That's everything in this detailed walkthrough of one-dimensional convolution in TensorFlow. I hope it gives you a useful reference, and thank you for your support.