Date: 2021-05-22
Method 1: loop over all variables and print them

Template:

for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
    print('\n', x, y)
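The template works because tf.global_variables() returns the variable objects while sess.run(tf.global_variables()) fetches their current values in the same order, so zip pairs each variable with its value. A plain-Python stand-in for that pairing (the names and values below are illustrative, not fetched from a real session):

```python
# 'variables' stands in for tf.global_variables() (variable objects),
# 'values' for sess.run(tf.global_variables()) (their current values);
# zip pairs them up element by element, in order.
variables = ['my/BatchNorm/beta:0',
             'my/BatchNorm/moving_mean:0',
             'my/BatchNorm/moving_variance:0']
values = [[0.0], [13.464], [452.622]]
for (x, y) in zip(variables, values):
    print(x, y)
```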
Example:

# coding=utf-8
import tensorflow as tf
import numpy as np

def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn

def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 1: loop over all variables and print them
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print('\n', x, y)

if __name__ == "__main__":
    main()

Output:

2017-09-29 10:10:22.714213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
<tf.Variable 'my/BatchNorm/beta:0' shape=(1,) dtype=float32_ref> [ 0.]
<tf.Variable 'my/BatchNorm/moving_mean:0' shape=(1,) dtype=float32_ref> [ 13.46412563]
<tf.Variable 'my/BatchNorm/moving_variance:0' shape=(1,) dtype=float32_ref> [ 452.62246704]
Process finished with exit code 0
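A note on the printed values: moving_mean starts at 0, and with updates_collections=None the moving statistics are updated in place on each training-mode run. Assuming the usual exponential-moving-average rule moving = decay * moving + (1 - decay) * batch_stat, a single update with decay=0.9 already makes moving_mean nonzero. A plain-Python sketch (the batch mean 134.64 is hypothetical, chosen to match the printed value above):

```python
# Exponential moving average update, as used by batch_norm's moving
# statistics. One update from the initial value 0.0 with decay=0.9
# and a hypothetical batch mean of 134.64.
def ema_update(moving, batch_stat, decay=0.9):
    return decay * moving + (1 - decay) * batch_stat

moving_mean = ema_update(0.0, 134.64)  # 0.9 * 0.0 + 0.1 * 134.64
print(moving_mean)
```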
Method 2: print a variable by name

Template:

print('my/BatchNorm/beta:0', sess.run('my/BatchNorm/beta:0'))
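The string passed to sess.run here is a tensor name of the form "op_name:output_index"; the ":0" suffix selects the first output of the variable's op. A small helper that splits such a name (hypothetical, for illustration only, not part of the TensorFlow API):

```python
# Split a tensor name of the form "<op_name>:<output_index>" into
# its op name and integer output index.
def split_tensor_name(name):
    op_name, _, index = name.rpartition(':')
    return op_name, int(index)

print(split_tensor_name('my/BatchNorm/beta:0'))  # ('my/BatchNorm/beta', 0)
```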
Example:

# coding=utf-8
import tensorflow as tf
import numpy as np

def func(in_put, layer_name, is_training=True):
    with tf.variable_scope(layer_name, reuse=tf.AUTO_REUSE):
        bn = tf.contrib.layers.batch_norm(inputs=in_put,
                                          decay=0.9,
                                          is_training=is_training,
                                          updates_collections=None)
    return bn

def main():
    with tf.Graph().as_default():
        # input_x
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        i_p = np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])
        # outputs
        output = func(input_x, 'my', is_training=True)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            t = sess.run(output, feed_dict={input_x: i_p})
            # Method 2: print a variable by name
            print('my/BatchNorm/beta:0', sess.run('my/BatchNorm/beta:0'))
            print('my/BatchNorm/moving_mean:0', sess.run('my/BatchNorm/moving_mean:0'))
            print('my/BatchNorm/moving_variance:0', sess.run('my/BatchNorm/moving_variance:0'))

if __name__ == "__main__":
    main()

Output:

2017-09-29 10:12:41.374055: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
my/BatchNorm/beta:0 [ 0.]
my/BatchNorm/moving_mean:0 [ 8.08649635]
my/BatchNorm/moving_variance:0 [ 368.03442383]
Process finished with exit code 0

That is all for this piece on printing TensorFlow's in-memory variables; I hope it serves as a useful reference.