Welcome to tensorboardX’s documentation!¶
tensorboardX¶
A module for visualization with tensorboard
class tensorboardX.SummaryWriter(log_dir=None, comment='', **kwargs)[source]¶
Writes Summary directly to event files. The SummaryWriter class provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously. This allows a training program to call methods to add data to the file directly from the training loop, without slowing down training.
__init__(log_dir=None, comment='', **kwargs)[source]¶
Parameters: - log_dir (string) – save location; default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run. Use a hierarchical folder structure to compare runs easily, e.g. 'runs/exp1', 'runs/exp2'
- comment (string) – comment appended to the default log_dir. If log_dir is assigned, this argument has no effect.
- purge_step (int) – When logging crashes at step \(T+X\) and restarts at step \(T\), any events whose global_step is larger than or equal to \(T\) will be purged and hidden from TensorBoard. Note that the resumed experiment and the crashed experiment should have the same log_dir.
- filename_suffix (string) – Every event file's name is suffixed with this suffix, e.g. SummaryWriter(filename_suffix='.123')
- kwargs – extra keyword arguments for FileWriter (e.g. 'flush_secs' controls how often to flush pending events). For more arguments please refer to the docs for 'tf.summary.FileWriter'.
add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None)[source]¶
Add audio data to summary.
Parameters: - tag (string) – Data identifier
- snd_tensor (torch.Tensor) – Sound data
- global_step (int) – Global step value to record
- sample_rate (int) – sample rate in Hz
- walltime (float) – Optional override default walltime (time.time()) of event
- Shape:
- snd_tensor: \((1, L)\). The values should lie between [-1, 1].
add_custom_scalars(layout)[source]¶
Create a special chart by collecting chart tags in 'scalars'. Note that this function can only be called once per SummaryWriter() object. Because it only provides metadata to TensorBoard, the function can be called before or after the training loop. See examples/demo_custom_scalars.py for more.
Parameters: layout (dict) – {categoryName: charts}, where charts is also a dictionary {chartName: ListOfProperties}. The first element in ListOfProperties is the chart's type (one of Multiline or Margin) and the second element should be a list containing the tags you have used in add_scalar, which will be collected into the new chart.
Examples:
layout = {'Taiwan': {'twse': ['Multiline', ['twse/0050', 'twse/2330']]},
          'USA': {'dow': ['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],
                  'nasdaq': ['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}
writer.add_custom_scalars(layout)
add_custom_scalars_marginchart(tags, category='default', title='untitled')[source]¶
Shorthand for creating a marginchart. Similar to add_custom_scalars(), but the only necessary argument is tags, which should have exactly 3 elements.
Parameters: tags (list) – list of tags that have been used in add_scalar()
Examples:
writer.add_custom_scalars_marginchart(['twse/0050', 'twse/2330', 'twse/2006'])
add_custom_scalars_multilinechart(tags, category='default', title='untitled')[source]¶
Shorthand for creating a multilinechart. Similar to add_custom_scalars(), but the only necessary argument is tags.
Parameters: tags (list) – list of tags that have been used in add_scalar()
Examples:
writer.add_custom_scalars_multilinechart(['twse/0050', 'twse/2330'])
add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None)[source]¶
Add embedding projector data to summary.
Parameters: - mat (torch.Tensor or numpy.array) – A matrix in which each row is the feature vector of a data point
- metadata (list) – A list of labels; each element will be converted to a string
- label_img (torch.Tensor) – Images corresponding to each data point
- global_step (int) – Global step value to record
- tag (string) – Name for the embedding
- Shape:
mat: \((N, D)\), where N is number of data and D is feature dimension
label_img: \((N, C, H, W)\)
Examples:
import keyword
import torch

meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
    meta[i] = v + str(i)

label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0

writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta)
add_figure(tag, figure, global_step=None, close=True, walltime=None)[source]¶
Render a matplotlib figure into an image and add it to summary.
Note that this requires the matplotlib package.
Parameters: - tag (string) – Data identifier
- figure (matplotlib.pyplot.figure) – Figure or a list of figures
- global_step (int) – Global step value to record
- close (bool) – Flag to automatically close the figure
- walltime (float) – Optional override default walltime (time.time()) of event
add_graph(model, input_to_model=None, verbose=False, **kwargs)[source]¶
Add graph data to summary.
Parameters: - model (torch.nn.Module) – model to draw.
- input_to_model (torch.Tensor or list of torch.Tensor) – a variable or a tuple of variables to be fed.
add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None)[source]¶
Add histogram to summary.
Parameters: - tag (string) – Data identifier
- values (torch.Tensor, numpy.array, or string/blobname) – Values to build histogram
- global_step (int) – Global step value to record
- bins (string) – one of {‘tensorflow’,’auto’, ‘fd’, …}, this determines how the bins are made. You can find other options in: https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
- walltime (float) – Optional override default walltime (time.time()) of event
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW')[source]¶
Add image data to summary.
Note that this requires the pillow package.
Parameters: - tag (string) – Data identifier
- img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
- Shape:
- img_tensor: Default is \((3, H, W)\). You can use torchvision.utils.make_grid() to convert a batch of tensors into 3xHxW format, or call add_images and let tensorboardX do the job. Tensors of shape \((1, H, W)\), \((H, W)\), or \((H, W, 3)\) are also suitable as long as the corresponding dataformats argument is passed, e.g. CHW, HWC, HW.
add_image_with_boxes(tag, img_tensor, box_tensor, global_step=None, walltime=None, dataformats='CHW', **kwargs)[source]¶
Add image and draw bounding boxes on the image.
Parameters: - tag (string) – Data identifier
- img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
- box_tensor (torch.Tensor, numpy.array, or string/blobname) – Box data (for detected objects)
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
- Shape:
img_tensor: Default is \((3, H, W)\). It can be specified with the dataformats argument, e.g. CHW or HWC.
box_tensor: \((N, 4)\), where N is the number of boxes and the 4 elements in each row represent (xmin, ymin, xmax, ymax).
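As a small sketch of the shapes described above (the image and boxes here are random placeholders, and the writer call is commented out because it needs a SummaryWriter instance):

```python
import numpy as np

# A placeholder CHW image with values in [0, 1] and two boxes,
# one (xmin, ymin, xmax, ymax) row per detected object.
img_tensor = np.random.rand(3, 64, 64)
box_tensor = np.array([[10, 10, 30, 30],
                       [20, 25, 50, 60]])
# With a writer instance you would then call:
# writer.add_image_with_boxes('detections', img_tensor, box_tensor, global_step=0)
```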
add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW')[source]¶
Add image data to summary.
Note that this requires the pillow package.
Parameters: - tag (string) – Data identifier
- img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
- Shape:
- img_tensor: Default is \((N, 3, H, W)\). If dataformats is specified, other shapes will be accepted, e.g. NCHW or NHWC.
add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None)[source]¶
Adds precision-recall curve.
Parameters: - tag (string) – Data identifier
- labels (torch.Tensor, numpy.array, or string/blobname) – Ground truth data. Binary label for each element.
- predictions (torch.Tensor, numpy.array, or string/blobname) – The probability that an element is classified as true. Values should be in [0, 1].
- global_step (int) – Global step value to record
- num_thresholds (int) – Number of thresholds used to draw the curve.
- walltime (float) – Optional override default walltime (time.time()) of event
add_pr_curve_raw(tag, true_positive_counts, false_positive_counts, true_negative_counts, false_negative_counts, precision, recall, global_step=None, num_thresholds=127, weights=None, walltime=None)[source]¶
Adds precision-recall curve with raw data.
Parameters: - tag (string) – Data identifier
- true_positive_counts (torch.Tensor, numpy.array, or string/blobname) – true positive counts
- false_positive_counts (torch.Tensor, numpy.array, or string/blobname) – false positive counts
- true_negative_counts (torch.Tensor, numpy.array, or string/blobname) – true negative counts
- false_negative_counts (torch.Tensor, numpy.array, or string/blobname) – false negative counts
- precision (torch.Tensor, numpy.array, or string/blobname) – precision
- recall (torch.Tensor, numpy.array, or string/blobname) – recall
- global_step (int) – Global step value to record
- num_thresholds (int) – Number of thresholds used to draw the curve.
- walltime (float) – Optional override default walltime (time.time()) of event
- See https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/pr_curve/README.md
add_scalar(tag, scalar_value, global_step=None, walltime=None)[source]¶
Add scalar data to summary.
Parameters: - tag (string) – Data identifier
- scalar_value (float or string/blobname) – Value to save
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None)[source]¶
Adds many scalar data to summary.
Note that this function also keeps logged scalars in memory. In extreme cases this can explode your RAM.
Parameters: - main_tag (string) – The parent name for the tags
- tag_scalar_dict (dict) – Key-value pair storing tags and corresponding values
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
Examples:
writer.add_scalars('run_14h', {'xsinx': i * np.sin(i / r),
                               'xcosx': i * np.cos(i / r),
                               'arctanx': numsteps * np.arctan(i / r)}, i)
# This call adds three values to the same scalar plot with the tag
# 'run_14h' in TensorBoard's scalar section.
add_text(tag, text_string, global_step=None, walltime=None)[source]¶
Add text data to summary.
Parameters: - tag (string) – Data identifier
- text_string (string) – String to save
- global_step (int) – Global step value to record
- walltime (float) – Optional override default walltime (time.time()) of event
Examples:
writer.add_text('lstm', 'This is an lstm', 0)
writer.add_text('rnn', 'This is an rnn', 10)
add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None)[source]¶
Add video data to summary.
Note that this requires the moviepy package.
Parameters: - tag (string) – Data identifier
- vid_tensor (torch.Tensor) – Video data
- global_step (int) – Global step value to record
- fps (float or int) – Frames per second
- walltime (float) – Optional override default walltime (time.time()) of event
- Shape:
- vid_tensor: \((N, T, C, H, W)\).
Helper functions¶
tensorboardX.utils.figure_to_image(figures, close=True)[source]¶
Render matplotlib figure to numpy format.
Note that this requires the matplotlib package.
Parameters: - figure (matplotlib.pyplot.figure) – figure or a list of figures
- close (bool) – Flag to automatically close the figure
Returns: image in [CHW] order
Return type: numpy.array
Tutorials¶
What is tensorboard X?¶
At first, the package was named tensorboard, but soon there were issues about the name conflict. The first alternative name that came to my mind was tensorboard-pytorch, but in order to make it more general, I chose tensorboardX, which stands for tensorboard for X.
Google's TensorFlow's tensorboard is a web server that serves visualizations of the training progress of a neural network; it visualizes scalar values, images, text, etc. This information is saved as events in TensorFlow. It's a pity that other deep learning frameworks lack such a tool, so there are already packages letting users log events without TensorFlow; however, they only provide basic functionality. The purpose of this package is to let researchers use a simple interface to log events within PyTorch (and then show the visualizations in tensorboard). This package currently supports logging scalars, images, audio, histograms, text, embeddings, and the route of back-propagation. The following manual is tested on Ubuntu and Mac, and the environments are anaconda's python2 and python3.
Create a summary writer¶
Before logging anything, we need to create a writer instance. This can be done with:
from tensorboardX import SummaryWriter
#SummaryWriter encapsulates everything
writer = SummaryWriter('runs/exp-1')
#creates writer object. The log will be saved in 'runs/exp-1'
writer2 = SummaryWriter()
#creates writer2 object with auto generated file name, the dir will be something like 'runs/Aug20-17-20-33'
writer3 = SummaryWriter(comment='3x learning rate')
#creates writer3 object with auto generated file name, the comment will be appended to the filename. The dir will be something like 'runs/Aug20-17-20-33-3xlearning rate'
Each subfolder will be treated as a different experiment in tensorboard. Each time you re-run the experiment with different settings, you should change the name of the subfolder, such as runs/exp2, runs/myexp, so that you can easily compare different experiment settings. Type tensorboard --logdir runs to compare different runs in tensorboard.
General API format¶
add_something(tag name, object, iteration number)
Add scalar¶
Scalar values are the simplest data type to deal with. Mostly we save the loss value of each training step, or the accuracy after each epoch. Sometimes I save the corresponding learning rate as well. It's cheap to save scalar values, so just log anything you think is important. To log a scalar value, use writer.add_scalar('myscalar', value, iteration). Note that the program complains if you feed a PyTorch tensor. Remember to extract the scalar value with x.item() if x is a torch scalar tensor.
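A minimal logging loop might look like the sketch below. The loss here is a placeholder computed with math.exp, and the writer call is commented out because it assumes an existing SummaryWriter instance:

```python
import math

losses = []
for iteration in range(5):
    loss = math.exp(-iteration)   # stand-in for a real training loss
    # If loss were a torch scalar tensor, pass loss.item() instead of the tensor:
    # writer.add_scalar('train/loss', loss, iteration)
    losses.append(loss)
```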
Add image¶
An image is represented as a 3-dimensional tensor. The simplest case is saving one image at a time. In this case, the image should be passed as a 3-dimensional tensor of size [3, H, W]. The three dimensions correspond to the R, G, B channels of an image. After your image is computed, use writer.add_image('imresult', x, iteration) to save the image. If you have a batch of images to show, use torchvision's make_grid function to prepare the image array and send the result to add_image(...) (make_grid takes a 4D tensor and returns tiled images in a 3D tensor).
Note
Remember to normalize your image.
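A common pitfall is passing an OpenCV/numpy-style (H, W, C) image where a (3, H, W) tensor is expected. A small sketch of the conversion, using a random placeholder image already normalized to [0, 1]:

```python
import numpy as np

hwc = np.random.rand(100, 200, 3)    # height 100, width 200, RGB last
chw = np.transpose(hwc, (2, 0, 1))   # reorder to (3, 100, 200)
# With a writer instance you would then call:
# writer.add_image('imresult', chw, iteration)
```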
Add histogram¶
Saving histograms is expensive, both in computation time and storage. If training slows down after using this package, check this first. To save a histogram, convert the array into a numpy array and save it with writer.add_histogram('hist', array, iteration).
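As a sketch of the data preparation (the values here are random, and the binning mirrors one of the bins= options of add_histogram; the writer call is commented out):

```python
import numpy as np

array = np.random.randn(1000)   # values to build the histogram from
# writer.add_histogram('hist', array, iteration)
# What tensorboardX does under the hood resembles numpy's binning:
counts, edges = np.histogram(array, bins='auto')
```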
Add figure¶
You can save a matplotlib figure to tensorboard with the add_figure function. The figure input should be matplotlib.pyplot.figure or a list of matplotlib.pyplot.figure.
Check https://tensorboardx.readthedocs.io/en/latest/tensorboard.html#tensorboardX.SummaryWriter.add_figure for detailed usage.
Add graph¶
To visualize a model, you need a model m and an input t. t can be a tensor or a list of tensors, depending on your model. If an error happens, make sure that m(t) runs without problems first. See the graph demo for a complete example.
Add audio¶
Currently the sampling rate of this function is fixed at 44100 Hz, single channel. The input of the add_audio function is a one-dimensional array, with each element representing a consecutive amplitude sample. For 2 seconds of audio, the input x should have 88200 elements. Each element should lie in [−1, 1].
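For instance, a 2-second 440 Hz test tone can be generated as below (the tone is an arbitrary example; the writer call is commented out because it needs a SummaryWriter instance):

```python
import numpy as np

sample_rate = 44100                       # fixed rate expected by add_audio
duration = 2                              # seconds
t = np.linspace(0., duration, duration * sample_rate, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 440 * t)     # amplitudes stay within [-1, 1]
# writer.add_audio('tone', x, iteration, sample_rate)
```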
Add embedding¶
Embeddings, i.e. high-dimensional data, can be visualized and converted into human-perceptible 3D data by tensorboard, which provides PCA and t-SNE to project the data into a low-dimensional space. What you need to do is provide a bunch of points, and tensorboard will do the rest for you. The bunch of points is passed as a tensor of size n x d, where n is the number of points and d is the feature dimension. The feature representation can either be raw data (e.g. the MNIST image) or a representation learned by your network (an extracted feature). This determines how the points are distributed. To make the visualization more informative, you can pass optional metadata or label_imgs for each data point. In this way you can see that neighboring points have similar labels and distant points have very different labels (semantically or visually). Here the metadata is a list of labels, and the length of the list should equal n, the number of points. The label_imgs is a 4D tensor of size NCHW, and N should equal n as well. See the embedding demo for a complete example.
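A sketch of consistent shapes for the three arguments (all values here are random placeholders, and the writer call is commented out):

```python
import numpy as np

n, d = 100, 5
mat = np.random.randn(n, d)                    # one feature vector per row
metadata = ['point-%d' % i for i in range(n)]  # one label per point, length n
label_img = np.random.rand(n, 3, 10, 32)       # NCHW thumbnails, N equal to n
# writer.add_embedding(mat, metadata=metadata, label_img=label_img)
```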
Useful commands¶
Install¶
Simply type pip install tensorboardX in a unix shell to install this package. To use the newest version, you might need to build from source or use pip install tensorboardX --no-cache-dir. To run the tensorboard web server, you need to install it with pip install tensorboard.
After that, type tensorboard --logdir=<your_log_dir> to start the server, where your_log_dir is the parameter of the object constructor. I think this command is tedious, so I added a line alias tb='tensorboard --logdir ' in ~/.bashrc. In this way, the above command is simplified to tb <your_log_dir>. Use your favorite browser to load the tensorboard page; the address will be shown in the terminal after starting the server.
Misc¶
Performance issue¶
Logging is cheap, but display is expensive. In my experience, if there are 3 or more experiments to show at a time and each experiment has, say, 50k points, tensorboard might need a lot of time to present the data.
Grouping plots¶
Usually, there are many numbers to log in one experiment. For example, when training GANs you should log the losses of the generator and the discriminator. If a loss is composed of two other loss functions, say L1 and MSE, you might want to log the values of the other two losses as well. In this case, you can write the tags as Gen/L1, Gen/MSE, Desc/L1, Desc/MSE. Tensorboard will then group the plots into two sections (Gen, Desc). You can also use regular expressions to filter data.
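A sketch of that tag scheme (the loss values are made-up placeholders; the writer call is commented out). The part before the slash becomes the section name:

```python
# Hypothetical loss values using the Gen/Desc tag scheme.
losses = {'Gen/L1': 0.3, 'Gen/MSE': 0.1, 'Desc/L1': 0.4, 'Desc/MSE': 0.2}
# for tag, value in losses.items():
#     writer.add_scalar(tag, value, step)
# Tensorboard groups plots by the prefix before the slash:
sections = sorted({tag.split('/')[0] for tag in losses})
```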
Tutorials_zh¶
Origins¶
Tensorboard, the tool bundled with Google TensorFlow, is a very handy visualization tool. It can record numbers, images, or audio, and is very helpful for observing the training process of a neural network. Unfortunately, other training frameworks (PyTorch, Chainer, numpy) lack such a convenient tool. A quick search shows there are already packages that let different frameworks observe training through a web interface, but what they can record is limited, or they are more complicated to use (tensorboard_logger, visdom). The goal of tensorboardX is to make all of tensorboard's features easily usable by non-TensorFlow frameworks. This package currently supports all tensorboard record types except tensorboard beholder. The standard test environments for this package are Ubuntu and Mac, with occasional manual testing on Windows; the python version used is anaconda's python3.
Install¶
Type pip install tensorboardX on the command line and you are done.
Or install the latest version from source: pip install tensorboardX
Usage¶
Create an event writer instance. Before logging anything, we need to create an event writer instance.
from tensorboardX import SummaryWriter
# SummaryWriter is a class that contains all the features of this package.
writer = SummaryWriter('runs/exp-1')
# Create an instance. Data is stored in 'runs/exp-1'.
# Every subsequent write is a call to writer.add_something().
writer = SummaryWriter()
# Create an instance with the default name. Data is stored in 'runs/<current time>-<machine name>',
# e.g. 'runs/Aug20-obov01'
writer = SummaryWriter(comment='3xLR')
# Append a comment to the default folder name, which becomes 'runs/Aug20-obov01-3xLR'
The code above creates a folder called runs in the current working directory, with the subdirectory exp-1. Each subdirectory is treated as one experiment. Every time you run a new experiment, for example with some parameters changed, rename the folder, e.g. runs/exp2, runs/myexp, to make comparing results easier. Tip: you can name folders after the time, or use the parameters themselves as the folder name.
Once the writer instance is created, you can start logging data.
The general API shape is: add_xxx(tag, object to record, timestamp, other arguments)
Logging scalars¶
Scalars are the easiest things to record. Usually we record the loss of each training step, and the test accuracy is also worth logging. Other numbers, such as the learning rate, are worth recording too.
The way to log a scalar is writer.add_scalar('myscalar', value, iteration). value can be a PyTorch tensor, a numpy array, or a native python number type such as float or int.
Logging images¶
An image is represented as a three-dimensional array whose dimensions correspond to the intensities of the red, green, and blue channels. An image 200 wide and 100 high corresponds to an array of size [3, 100, 200] (CHW). The simplest case is storing a single image. Just make sure it matches the format above and pass it to writer.add_image('imresult', image, iteration).
Training usually processes data in batches, so there is a pile of images to store. In that case, make sure your data has the shape (NCHW), where N is the batch size. add_image will automatically tile them into an appropriately sized grid. Note that if the images to record are in OpenCV/numpy format, they are usually laid out as (HWC); call numpy.transpose to convert them to the correct layout, otherwise an error is raised. Also note that pixel values should lie in [0, 1].
Logging histograms¶
Recording histograms is CPU intensive; don't use it too often. If things feel slow after using this package, check this first. Usage is simple: call writer.add_histogram('hist', array, iteration) to record.
Logging audio¶
writer.add_audio('myaudio', audio, iteration, 44100)
This feature only supports a single channel. add_audio takes a one-dimensional array in which each element is the amplitude at one sample point. With a sampling rate of 44100 Hz, 2 seconds of audio should have 88200 points; note that every element should lie between -1 and 1.
Logging text¶
writer.add_text('mytext', 'this is a pen', iteration)
Besides plain strings, simple markdown tables are also supported.
Logging the network architecture¶
(Experimental; correctness is not guaranteed for complex models.)
This feature has many issues and is more complicated to use. You need to prepare two things: the network model and the tensor you will feed it. Say the model is m and the input is x; the usage is then:
add_graph(m, (x, ))
A tuple is used here because a network may have multiple inputs, which extends to
add_graph(m, (x, y, z)); if there is only a single input, writing add_graph(m, x) is also fine.
Common causes of errors:
- Newer operators that pytorch's JIT does not support
- The input is a cpu tensor while the model is on the GPU (or the other way around)
- The input tensor has the wrong size, so a dimension vanishes in a later layer
- A bug in the model, e.g. mismatched feature dimensions between consecutive layers
Debugging steps:
1. Forward propagate once: m(x), or with multiple inputs: m((x, y, z))
2. Export the model with torch.onnx.export and inspect the error messages.
Visualizing high-dimensional data / dimensionality reduction (embedding)¶
Because humans can only perceive up to three dimensions, data with more than three dimensions cannot be visualized directly. Dimensionality reduction brings the data down to three dimensions or fewer; it is computed by tensorboard in Javascript, with a choice of two algorithms, PCA and t-SNE. All we have to supply is each point's high-dimensional feature, in the form of an n x d matrix, where n is the number of points and d is the number of dimensions. The high-dimensional feature can be raw data, such as images, or a compressed representation learned by the network; this raw data determines how the points are distributed. To see things more clearly, you can also pass the metadata / label_imgs arguments (metadata is a python list of length n; label_imgs is a 4-dimensional array of size nCHW), so that each point has its corresponding text or image next to it. If this is unclear, see the example: https://github.com/lanpa/tensorboardX/blob/master/examples/demo_embedding.py
Logging video¶
Similar to logging images, but the object passed in has dimensions [B, C, T, H, W], where T is the number of frames. So a 30-frame color video has dimensions [B, 3, 30, H, W].
Logging a pr curve¶
Computes and saves the precision-recall results from the predicted probabilities and the corresponding ground truth.
add_pr_curve(tag, labels, predictions, step)
labels is the ground truth; predictions is the program's prediction for each sample.
With ten samples, labels looks like [0, 0, 1, 0, 0, 1, 0, 1, 0, 1] and predictions looks like [0.1, 0.3, 0.8, 0.2, 0.4, 0.5, 0.1, 0.7, 0.9, 0.2].
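To see what tensorboard computes at each point of the curve, here is a sketch of the precision and recall at a single threshold for those same ten samples (tensorboard sweeps many thresholds; 0.5 is just one of them):

```python
labels = [0, 0, 1, 0, 0, 1, 0, 1, 0, 1]
predictions = [0.1, 0.3, 0.8, 0.2, 0.4, 0.5, 0.1, 0.7, 0.9, 0.2]

threshold = 0.5   # one cut point on the curve
tp = sum(1 for l, p in zip(labels, predictions) if l == 1 and p >= threshold)
fp = sum(1 for l, p in zip(labels, predictions) if l == 0 and p >= threshold)
fn = sum(1 for l, p in zip(labels, predictions) if l == 1 and p < threshold)
precision = tp / (tp + fp)   # 3 / 4
recall = tp / (tp + fn)      # 3 / 4
```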
pyplot figures¶
Drew a beautiful chart with matplotlib and want to record it? Use add_figure. The object to pass in is a matplotlib figure.
Viewing the results
Tensorboard is essentially a web server; it reads the event files written by the training program (tensorboardX). Because tensorboard is included with tensorflow, you need to install a copy of tensorflow on the server machine. I suppose most people have installed it already; if not, type pip install tensorboard in a unix shell. If you don't need TensorFlow for training, installing the non-GPU version is recommended, as it starts much faster.
Next, type tensorboard --logdir=<your_log_dir> on the command line (for the earlier example: tensorboard --logdir=runs) and the server starts. This command is tedious to type, so I add a line to ~/.bashrc: alias tb='tensorboard --logdir ', which simplifies the command to tb <your_log_dir>. Then open your browser following the instructions printed in the terminal and you will see the page.