TensorFlow Developer Summit

Post date: Aug 21, 2017 2:19:45 PM

Video link: https://www.youtube.com/playlist?list=PLwv-rHS37fS9sj62f4HAbqSrC1EiPsNZx

RNNs for sequences

#################

# batching techniques for saving memory and computation

# tf.train.batch(): static batching, pads all sequences to the maximum sequence length

# tf.train.batch(..., dynamic_pad=True): pads sequences only to the longest length within each batch (see the sketch after this list)

# tf.contrib.training.bucket_by_sequence_length(..., dynamic_pad=True): bucketed padding

# tf.contrib.training.batch_sequences_with_states(): truncated BPTT via the State Saver,

# cuts sequences into segments and carries the RNN state over from earlier segments

#
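
A minimal, self-contained sketch of dynamic padding with the TF 1.x queue-based input pipeline; the random variable-length sequence below is only an illustrative stand-in for real parsed data:

import tensorflow as tf

# each example is a variable-length sequence (stand-in for parsed data)
length = tf.random_uniform([], minval=3, maxval=10, dtype=tf.int32)
sequence = tf.range(length)

# dynamic_pad=True pads each sequence only to the longest length in this
# particular batch, not to a global maximum length
batched_sequences, batched_lengths = tf.train.batch(
    [sequence, length], batch_size=4, dynamic_pad=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run([batched_sequences, batched_lengths]))
    coord.request_stop()
    coord.join(threads)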

######################

# dynamic computation

# Tools (see the sketch after this list):

# tf.while_loop

# dynamic loops + gradients

# tf.TensorArray

# dynamic Tensor slice + gradients

# tf.nn.dynamic_rnn, tf.contrib.seq2seq.dynamic_decode
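
A minimal sketch of these tools together: tf.while_loop drives a data-dependent number of steps, a tf.TensorArray collects one result per step, and gradients flow through both (cumulative sum is just an illustrative computation):

import tensorflow as tf

def cumsum_with_while_loop(x):
    n = tf.shape(x)[0]
    ta = tf.TensorArray(dtype=x.dtype, size=n)

    def body(i, total, ta):
        total = total + x[i]                      # dynamic Tensor slice
        return i + 1, total, ta.write(i, total)   # write one step's result

    _, _, ta = tf.while_loop(
        cond=lambda i, total, ta: i < n,
        body=body,
        loop_vars=(0, tf.zeros_like(x[0]), ta))
    return ta.stack()

x = tf.constant([1.0, 2.0, 3.0, 4.0])
y = cumsum_with_while_loop(x)
grad = tf.gradients(tf.reduce_sum(y), x)[0]       # gradients through the dynamic loop

with tf.Session() as sess:
    print(sess.run([y, grad]))                    # [1, 3, 6, 10] and [4, 3, 2, 1]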

########################

# seq2seq model

# 8-layer LSTM with residual connections

# each layer is placed on a separate GPU

# MultiRNNCell / DeviceWrapper / ResidualWrapper / LSTMCell come from tf.contrib.rnn in TF 1.x
cell = MultiRNNCell([DeviceWrapper(ResidualWrapper(LSTMCell(num_units=512)), device='/gpu:%d' % i) for i in range(8)])

encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(cell, inputs, sequence_length, parallel_iterations=32, swap_memory=True)

# how do the GPUs communicate with each other?

ML toolkits

Knowing the algorithms is the baseline; knowing how to compute them at large scale, in distributed settings with big data, is better.

#########################

# k-means clustering

# Lloyd's algorithm, k-means++ initialization

# full-batch / mini-batch training

# cosine / squared Euclidean distance

# think about mini-batch training for large datasets (see the sketch after this list)

# GMM (Gaussian mixture models)

# matrix factorization: weighted alternating least squares (WALS), with distributed computation for large data

# SVM: linear and nonlinear variants, with L1/L2 regularization
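
A minimal sketch of one Lloyd's-algorithm iteration for k-means, written with core TensorFlow ops rather than the toolkit API itself; the random points are only a stand-in for a mini-batch of real data:

import tensorflow as tf

num_clusters = 3
points = tf.random_normal([100, 2])               # stand-in for one mini-batch of data
centers = tf.Variable(tf.random_normal([num_clusters, 2]))

# assignment step: squared Euclidean distance from every point to every center
sq_dist = tf.reduce_sum(
    tf.square(tf.expand_dims(points, 1) - tf.expand_dims(centers, 0)), axis=2)
assignments = tf.to_int32(tf.argmin(sq_dist, axis=1))

# update step: each center moves to the mean of the points assigned to it
sums = tf.unsorted_segment_sum(points, assignments, num_clusters)
counts = tf.unsorted_segment_sum(tf.ones_like(points), assignments, num_clusters)
update_centers = tf.assign(centers, sums / tf.maximum(counts, 1.0))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):                           # in practice, feed a new mini-batch each step
        sess.run(update_centers)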

Distributed TensorFlow

############################

# model replicas

# split the data across GPUs and gather the results

# three different kinds of data: parameters, inputs, and intermediate values

# pay attention to data transfer between devices, and to how asynchronous vs. synchronous training affects performance

#

# fault tolerance by periodically saving (checkpointing) the parameters (see the sketch below)
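
A minimal sketch of the TF 1.x between-graph replication setup: parameters live on parameter-server (ps) tasks, each worker runs a model replica, and tf.train.replica_device_setter places the variables on the ps tasks; the host names and the tiny model are made-up examples:

import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    # parameters are placed on /job:ps; the other ops run in this worker's replica
    w = tf.get_variable("w", shape=[10, 1])
    x = tf.placeholder(tf.float32, shape=[None, 10])
    y = tf.matmul(x, w)

# fault tolerance: periodically checkpoint the parameters so a restarted
# task can restore from the latest checkpoint
saver = tf.train.Saver()

# a training session would normally be created against server.target here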

Integrating TensorFlow with infrastructure

#############################

# data preprocessing: Spark, Hadoop, Apache Beam

# Input data options, from fastest to slowest:

# tf.Example and tf.SequenceExample protocol buffers in TFRecord files (see the sketch at the end of this section)

# Native TensorFlow ops to read CSV, JSON

# feed data directly from Python: easiest to use, useful in settings like reinforcement learning
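
A minimal sketch of the fastest option above: serialize tf.Example protos into a TFRecord file, then parse them back with native ops (the file name and feature names are made-up examples):

import tensorflow as tf

# write two toy records
writer = tf.python_io.TFRecordWriter("data.tfrecords")
for label, value in [(0, 1.5), (1, 2.5)]:
    example = tf.train.Example(features=tf.train.Features(feature={
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        "value": tf.train.Feature(float_list=tf.train.FloatList(value=[value])),
    }))
    writer.write(example.SerializeToString())
writer.close()

# parse a serialized record back into tensors
serialized = tf.placeholder(tf.string)
parsed = tf.parse_single_example(serialized, features={
    "label": tf.FixedLenFeature([], tf.int64),
    "value": tf.FixedLenFeature([], tf.float32),
})

with tf.Session() as sess:
    for record in tf.python_io.tf_record_iterator("data.tfrecords"):
        print(sess.run(parsed, feed_dict={serialized: record}))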