Specific values
for var in {list}
do
#body
done
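For example, iterating over an explicit list of words (the values here are illustrative):

```shell
# Loop over a fixed list of values
for var in alpha beta gamma
do
    echo "value: $var"
done
```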
Ranges
for var in {start..stop..step}
do
#body
done
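For example, counting from 0 to 10 in steps of 2 (the step form of brace expansion needs bash 4 or later):

```shell
# Count from 0 to 10 in steps of 2
for i in {0..10..2}
do
    echo "$i"
done
```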
Use this to rediscover a machine when most of its original IP address is unchanged. This can happen after a power outage, when the IPs within a subnet get reassigned.
nmap -p <original port> <sub.net.mask>.*
These instructions are reproduced from here.
Create an .ssh directory on the remote machine. This is where the remote machine's authorized_keys file will live.
a@A:~> ssh b@B mkdir -p .ssh
Use the -p option to avoid any errors if the .ssh directory already exists.
Create a public key if required and copy it into the remote .ssh directory.
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
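This step assumes a key pair already exists at .ssh/id_rsa. If it doesn't, one can be generated first; a minimal sketch (the demo_key filename and the empty passphrase are for illustration only):

```shell
# Generate an ed25519 key pair; -N '' sets an empty passphrase
# and -q suppresses the banner output (use a real passphrase in practice)
ssh-keygen -t ed25519 -f demo_key -N '' -q

# The public half (demo_key.pub) is what gets appended to the
# remote machine's authorized_keys file
cat demo_key.pub
```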
Now you can log in without a password.
a@A:~> ssh b@B
TensorFlow's experimental compiler interface only dumps graphs in DOT format after optimization; there doesn't seem to be a way to get a dotfile for the unoptimized graph. This is a workaround.
The XLA compiler ships with a bunch of tools that help with inspection. One of these is the interactive_graphviz utility.
Pre-reqs
bazel
A cloned tensorflow repo
An HLO dump in a .txt file (hlo.txt)
Steps
Navigate to the xla folder
cd path/to/tensorflow/compiler/xla
Use bazel to build the tools you need (interactive_graphviz in this case). A list of available tools is here.
bazel build tools:interactive_graphviz
This step may take a while.
Use the utility (it should be in the bazel-bin directory)
tensorflow/bazel-bin/tensorflow/compiler/xla/tools/interactive_graphviz --hlo_text="hlo.txt"
This launches a command-line utility that lists the available commands. My most used ones are:
show_fusion_subcomputations [on|off]
list computations
<name of computation to render>