使用PyTorch进行手写数字识别,在20 k参数中获得99.5%的精度。
In this article we’ll build a simple convolutional neural network in PyTorch and train it to recognize handwritten digits using the MNIST dataset.
Figuring out a sequence of handwritten digits is easy for us, even when the resolution is poor and the shapes of the digits are irregular. We have our brains to thank for making the process feel natural, and ourselves as well, for the years spent learning and using numbers in our day-to-day lives.
Step 1 — Know Your Data
Data can tell you a lot if you ask the right questions.
To understand the data, data scientists spend most of their time gathering datasets and preprocessing them. The tasks that follow are comparatively easy.
We will be using the popular MNIST database. It is a collection of 70,000 handwritten digit images, split into a training set of 60,000 and a test set of 10,000. Before starting, we need to make all the necessary imports.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
Step 2 — Preparing the Dataset
Before downloading the data, let us define the hyperparameters and the transformations we will apply to each sample before feeding it into the pipeline. Raw samples do not necessarily arrive with the size and properties a network expects; neural networks require images of the same dimensions and scale, and we enforce that with torchvision.transforms. The number of epochs defines how many times we loop over the complete training dataset, learning_rate and momentum are hyperparameters for the optimizer we will use later on, and batch_size is the number of images we read in one go.
batch_size = 128
momentum_value = 0.9
epochs = 20
learning_rate = 0.01
For repeatable experiments we have to set a random seed for anything that uses random number generation, so that the next time we run the code it produces the same output.
torch.manual_seed(1)
use_cuda = torch.cuda.is_available()  # needed for the DataLoader kwargs below; see also Step 4
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True, **kwargs)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True, **kwargs)
datasets.MNIST downloads the MNIST dataset for training and testing to the path ../data.
transforms.Compose() chains together all the transforms passed to it; they are applied to each input one by one.
transforms.ToTensor() converts the image into numbers the framework can work with. For a color image it would produce three channels (red, green and blue); MNIST images are grayscale, so here we get a single channel. The pixel intensities, originally in the range 0 to 255, are scaled down to the range 0 to 1. The image is now a torch Tensor.
transforms.Normalize() normalizes the tensor with the mean and standard deviation passed as its two parameters.
torch.utils.data.DataLoader makes the data iterable by loading it into a loader.
shuffle=True shuffles the training data so that learning does not depend on the order of the samples. A quick sanity check of the loaders and transforms is shown below.
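Assuming the train_loader defined above has been created, we can pull one batch and confirm the shapes and the effect of ToTensor() plus Normalize(). This probe is illustrative only and not part of the original article's code.

images, labels = next(iter(train_loader))
print(images.shape)                               # torch.Size([128, 1, 28, 28]): batch, channel, height, width
print(labels.shape)                               # torch.Size([128])
print(images.min().item(), images.max().item())   # roughly -0.42 and 2.82 after normalization

A background pixel of 0 maps to (0 - 0.1307) / 0.3081, about -0.42, and a fully-on pixel of 1 maps to about 2.82, which is what the probe should report.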
Step 3 — Building a Neural Network
We will be building the following network. As you can see, it contains seven convolution layers, two max-pooling layers and two transition (1x1 convolution) layers, followed by an average-pooling layer.
In PyTorch a nice way to build a network is by creating a new class. PyTorch’s torch.nn module allows us to build the above network very simply. It is extremely easy to understand as well.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Block 1: 3x3 convolution, then a 1x1 "transition" conv to reduce channels
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=128,
                               kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(num_features=128)
        self.tns1 = nn.Conv2d(in_channels=128, out_channels=4,
                              kernel_size=1, padding=1)
        # Block 2
        self.conv2 = nn.Conv2d(in_channels=4, out_channels=16,
                               kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(num_features=16)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(in_channels=16, out_channels=16,
                               kernel_size=3, padding=1)
        self.bn3 = nn.BatchNorm2d(num_features=16)
        self.conv4 = nn.Conv2d(in_channels=16, out_channels=32,
                               kernel_size=3, padding=1)
        self.bn4 = nn.BatchNorm2d(num_features=32)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.tns2 = nn.Conv2d(in_channels=32, out_channels=16,
                              kernel_size=1, padding=1)
        # Block 3: final convolutions down to 10 channels (one per digit class)
        self.conv5 = nn.Conv2d(in_channels=16, out_channels=16,
                               kernel_size=3, padding=1)
        self.bn5 = nn.BatchNorm2d(num_features=16)
        self.conv6 = nn.Conv2d(in_channels=16, out_channels=32,
                               kernel_size=3, padding=1)
        self.bn6 = nn.BatchNorm2d(num_features=32)
        self.conv7 = nn.Conv2d(in_channels=32, out_channels=10,
                               kernel_size=1, padding=1)
        self.gpool = nn.AvgPool2d(kernel_size=7)
        self.drop = nn.Dropout2d(0.1)

    def forward(self, x):
        x = self.tns1(self.drop(self.bn1(F.relu(self.conv1(x)))))
        x = self.drop(self.bn2(F.relu(self.conv2(x))))
        x = self.pool1(x)
        x = self.drop(self.bn3(F.relu(self.conv3(x))))
        x = self.drop(self.bn4(F.relu(self.conv4(x))))
        x = self.tns2(self.pool2(x))
        x = self.drop(self.bn5(F.relu(self.conv5(x))))
        x = self.drop(self.bn6(F.relu(self.conv6(x))))
        x = self.conv7(x)
        x = self.gpool(x)
        x = x.view(-1, 10)
        return F.log_softmax(x, dim=1)
We define our own class, class Net(nn.Module), inheriting from nn.Module, the base class for all neural network modules. Then we define the initializer __init__, calling super(Net, self).__init__() so that our class inherits all the functionality of nn.Module. After that we start building our model.
We'll use 2-D convolutional layers. As the activation function we choose rectified linear units (ReLUs for short). We use max pooling with a 2x2 kernel to halve the spatial dimensions of the feature maps; the number of channels is unchanged, as the short check below confirms.
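A minimal standalone check of that behaviour, using a throwaway tensor rather than the model above (this sketch reuses the torch and nn imports from Step 1):

pool = nn.MaxPool2d(2, 2)
x = torch.randn(1, 16, 28, 28)   # (batch, channels, height, width)
print(pool(x).shape)             # torch.Size([1, 16, 14, 14]): channels unchanged, height and width halved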
The forward() pass defines the way we compute our output using the given layers and functions.
x.view(-1, 10): the view method returns a tensor with the same data as the original tensor (that is, the same number of elements) but a different shape. The first argument is the batch size; in our case the batch size is 128, and if you do not know it you can pass -1 and view will infer it for you. The second argument is the number of columns you want, here the ten class scores. Look at the short example below.
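Here is a minimal illustration with a dummy tensor shaped like the output of the global average pooling layer; the values are hypothetical, and the snippet reuses the torch import from Step 1.

x = torch.randn(128, 10, 1, 1)   # e.g. the (batch, channels, 1, 1) output of gpool for a batch of 128
flat = x.view(-1, 10)            # -1 lets view infer the batch dimension
print(flat.shape)                # torch.Size([128, 10])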
Step 4 — Check for GPU and Summarize the Model
Summarizing the model gives us better intuition about each layer of the model we build.
from torchsummary import summary: torchsummary provides information complementary to what print(your_model) gives you in PyTorch; you call it as summary(your_model, input_size).
torch.cuda.is_available() checks for a GPU and returns True if one is available, otherwise False.
model = Net().to(device) loads the model onto the available device.
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
Now run the above code to get a detailed view of the model.
        Layer (type)               Output Shape         Param #
================================================================
Conv2d-1 [-1, 128, 28, 28] 1,280
BatchNorm2d-2 [-1, 128, 28, 28] 256
Dropout2d-3 [-1, 128, 28, 28] 0
Conv2d-4 [-1, 8, 30, 30] 1,032
Conv2d-5 [-1, 16, 30, 30] 1,168
BatchNorm2d-6 [-1, 16, 30, 30] 32
Dropout2d-7 [-1, 16, 30, 30] 0
MaxPool2d-8 [-1, 16, 15, 15] 0
Conv2d-9 [-1, 16, 15, 15] 2,320
BatchNorm2d-10 [-1, 16, 15, 15] 32
Dropout2d-11 [-1, 16, 15, 15] 0
Conv2d-12 [-1, 32, 15, 15] 4,640
BatchNorm2d-13 [-1, 32, 15, 15] 64
Dropout2d-14 [-1, 32, 15, 15] 0
MaxPool2d-15 [-1, 32, 7, 7] 0
Conv2d-16 [-1, 16, 9, 9] 528
Conv2d-17 [-1, 16, 9, 9] 2,320
BatchNorm2d-18 [-1, 16, 9, 9] 32
Dropout2d-19 [-1, 16, 9, 9] 0
Conv2d-20 [-1, 32, 9, 9] 4,640
BatchNorm2d-21 [-1, 32, 9, 9] 64
Dropout2d-22 [-1, 32, 9, 9] 0
Conv2d-23 [-1, 10, 11, 11] 330
AvgPool2d-24 [-1, 10, 1, 1] 0
================================================================
Total params: 18,738
Trainable params: 18,738
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 3.08
Params size (MB): 0.07
Estimated Total Size (MB): 3.15
----------------------------------------------------------------
Step 5 — Train and Test Functions
As you can see, our model has fewer than 20k parameters. Now let's define the train and test functions and run them to check whether we can reach 99.5% accuracy within 20 epochs.
from tqdm import tqdm

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    pbar = tqdm(train_loader)
    for batch_idx, (data, target) in enumerate(pbar):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        pbar.set_description(
            desc=f'epoch: {epoch} loss={loss.item()} batch_id={batch_idx}')

def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target,
                                    reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.1f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
tqdm, which can mean "progress", instantly makes your loops show a smart progress meter: just wrap any iterable with tqdm(iterable) and you're done!
model.train(): by default all modules are initialized in training mode (self.training = True). Be aware that some layers behave differently during training and evaluation (for example batch normalization and dropout), so setting the mode matters. That is why the first line of the train function is model.train() and the first line of the test function is model.eval().
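A small standalone demonstration of why the mode matters: dropout is active in training mode and becomes a no-op in evaluation mode. The tensors here are throwaway examples, reusing the torch and nn imports from Step 1.

drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()
print(drop(x))   # roughly half the entries zeroed, the survivors scaled up to 2.0

drop.eval()
print(drop(x))   # all ones: dropout is disabled in eval mode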
optimizer.zero_grad() clears the old gradients from the previous step; otherwise you would keep accumulating the gradients from every loss.backward() call.
loss.backward() computes the derivative of the loss w.r.t. the parameters (or anything requiring gradients) using back propagation.
optimizer.step() causes the optimizer to take a step based on the gradients of the parameters.
To calculate losses in PyTorch, we use F.nll_loss, the negative log-likelihood loss, which is useful for training a classification problem with C classes. Together, LogSoftmax() and NLLLoss() act as the cross-entropy loss.
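To make that relationship concrete, here is a quick check with random logits rather than real model outputs, reusing the torch and F imports from Step 1.

logits = torch.randn(4, 10)              # 4 samples, 10 classes
targets = torch.tensor([3, 7, 0, 9])     # ground-truth class indices

loss_a = F.nll_loss(F.log_softmax(logits, dim=1), targets)
loss_b = F.cross_entropy(logits, targets)
print(torch.allclose(loss_a, loss_b))    # True: log_softmax followed by nll_loss equals cross_entropy

This is also why the network ends with F.log_softmax rather than a plain softmax: its output pairs directly with F.nll_loss during training.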
Step 6 — Training the Model
This is where the actual learning happens. The neural network iterates over the training set and updates the weights. We make use of torch.optim, a module provided by PyTorch, to optimize the model, perform gradient descent and update the weights through back-propagation. Thus in each epoch (one pass over the training set) we should see a gradual decrease in the training loss.
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum_value)

for epoch in range(1, epochs):
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)
Let’s run the above code and check the training logs.
epoch: 1 loss=0.27045467495918274 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.58it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.1221, Accuracy: 9685/10000 (96.8%)
epoch: 2 loss=0.09988906979560852 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 21.15it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0604, Accuracy: 9823/10000 (98.2%)
epoch: 3 loss=0.20125557482242584 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.85it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0480, Accuracy: 9843/10000 (98.4%)
epoch: 4 loss=0.0712851956486702 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 21.22it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0371, Accuracy: 9890/10000 (98.9%)
epoch: 5 loss=0.04961127042770386 batch_id=468: 100%|██████████| 469/469 [00:21<00:00, 21.45it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0321, Accuracy: 9897/10000 (99.0%)
epoch: 6 loss=0.054023560136556625 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 21.16it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0271, Accuracy: 9913/10000 (99.1%)
epoch: 7 loss=0.07397448271512985 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 21.32it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0273, Accuracy: 9909/10000 (99.1%)
epoch: 8 loss=0.05811620131134987 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.65it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0239, Accuracy: 9928/10000 (99.3%)
epoch: 9 loss=0.08609984070062637 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.86it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0222, Accuracy: 9930/10000 (99.3%)
epoch: 10 loss=0.10347550362348557 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 21.04it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0234, Accuracy: 9921/10000 (99.2%)
epoch: 11 loss=0.10419472306966782 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.88it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0196, Accuracy: 9930/10000 (99.3%)
epoch: 12 loss=0.004044002387672663 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.97it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0223, Accuracy: 9930/10000 (99.3%)
epoch: 13 loss=0.05143119767308235 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.56it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0201, Accuracy: 9930/10000 (99.3%)
epoch: 14 loss=0.03383662924170494 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.86it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0187, Accuracy: 9940/10000 (99.4%)
epoch: 15 loss=0.037076253443956375 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.42it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0209, Accuracy: 9935/10000 (99.3%)
epoch: 16 loss=0.009786871261894703 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.50it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0190, Accuracy: 9944/10000 (99.4%)
epoch: 17 loss=0.024468591436743736 batch_id=468: 100%|██████████| 469/469 [00:23<00:00, 20.36it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0177, Accuracy: 9946/10000 (99.5%)
epoch: 18 loss=0.030203601345419884 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.40it/s]
0%| | 0/469 [00:00<?, ?it/s]
Test set: Average loss: 0.0171, Accuracy: 9937/10000 (99.4%)
epoch: 19 loss=0.04640066251158714 batch_id=468: 100%|██████████| 469/469 [00:22<00:00, 20.72it/s]
Test set: Average loss: 0.0179, Accuracy: 9938/10000 (99.4%)
HURRAY! We have reached 99.5% accuracy. We don't need to train the model every time: PyTorch has functionality to save our model so that in the future we can load it and use it directly.
torch.save(model, 'path/to/save/my_mnist_model.pth')

Check the entire notebook here.
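Loading the saved model back is symmetric. The sketch below is a hedged example rather than part of the original article: it assumes the Net class is defined or importable wherever you load, that device and test_loader from the earlier steps are available, and that whole-model pickling like this may require passing weights_only=False to torch.load on recent PyTorch versions.

loaded = torch.load('path/to/save/my_mnist_model.pth', map_location=device)
loaded.eval()                            # switch to inference mode

with torch.no_grad():
    images, labels = next(iter(test_loader))
    preds = loaded(images.to(device)).argmax(dim=1)
    print(preds[:10], labels[:10])       # predicted vs. true digits for a few test images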
Conclusion
In summary, we built a new environment with PyTorch and TorchVision, used it to classify handwritten digits from the MNIST dataset, and hopefully developed a good intuition for PyTorch along the way. For further information, the official PyTorch documentation is really nicely written, and the forums are also quite active!
Source: https://medium.com/@ravivaishnav20/handwritten-digit-recognition-using-pytorch-get-99-5-accuracy-in-20-k-parameters-bcb0a2bdfa09