Implement: DenseNet training using CML
1. Install TensorFlow (version 1.4)
Before this step, you must have installed Anaconda 3 and created an environment:
conda create -n env-name python=3.6 anaconda
The TensorFlow version depends on your environment.
This test environment is Ubuntu 14.04 with cuDNN v6 and CUDA 8.0.
Install command:
pip install tensorflow-gpu==1.4
To monitor the GPUs:
watch -n 1 nvidia-smi
2. Install DenseNet
The original DenseNet repository is
https://github.com/liuzhuang13/DenseNet
The original is not a TensorFlow implementation, so I install a TensorFlow port instead:
https://github.com/ikhlestov/vision_networks
Example run:
python run_dense_net.py --train --test --dataset=C10
3. Test the Deep SORT training method - Cosine Metric Learning (CML)
The Deep SORT training code is at
https://github.com/nwojke/cosine_metric_learning
First I need to install OpenCV for Python.
Install command:
pip install opencv-python
After installing opencv-python, I can test the cosine_metric_learning source code.
Because I use an old TensorFlow version, this error occurs:
Traceback (most recent call last):
File "train_market1501.py", line 127, in <module>
main()
File "train_market1501.py", line 68, in main
**train_kwargs)
File "/home/islab/cosine_metric_learning/train_app.py", line 188, in train_loop
trainable_scopes=trainable_scopes)
File "/home/islab/cosine_metric_learning/train_app.py", line 269, in create_trainer
_create_loss(feature_var, logit_var, label_var, mode=loss_mode)
File "/home/islab/cosine_metric_learning/train_app.py", line 653, in _create_loss
feature_var, logit_var, label_var, monitor_mode=monitor_magnet)
File "/home/islab/cosine_metric_learning/train_app.py", line 625, in _create_magnet_loss
magnet_loss, _, _ = losses.magnet_loss(feature_var, label_var)
File "/home/islab/cosine_metric_learning/losses.py", line 137, in magnet_loss
maxi = tf.reduce_max(linear, reduction_indices=[1], keepdims=True)
TypeError: reduce_max() got an unexpected keyword argument 'keepdims'
keepdims is the new name for that argument (TensorFlow >= 1.5); TensorFlow 1.4 only accepts the old name keep_dims. So I need to modify the code in the magnet_loss function in losses.py:
maxi = tf.reduce_max(linear, reduction_indices=[1], keep_dims=True)
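Instead of editing losses.py by hand for each TensorFlow version, a small compatibility helper can pick the right keyword name at runtime. This is only a sketch (the helper reduce_max_keepdims is mine, not part of cosine_metric_learning), assuming the installed reduction function exposes either keepdims or keep_dims in its Python signature:

```python
import inspect

def reduce_max_keepdims(reduce_max_fn, tensor, axis):
    """Call reduce_max_fn keeping the reduced dimension, regardless of
    whether the installed TensorFlow names the flag 'keepdims' (>= 1.5)
    or 'keep_dims' (<= 1.4)."""
    params = inspect.signature(reduce_max_fn).parameters
    kw = "keepdims" if "keepdims" in params else "keep_dims"
    return reduce_max_fn(tensor, axis=axis, **{kw: True})

# In magnet_loss this would replace the failing call, e.g.:
# maxi = reduce_max_keepdims(tf.reduce_max, linear, axis=[1])
```

The same pattern works for other renamed reduction arguments, since the decision is made once per call from the function's signature rather than hard-coded.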
Because training goes through tensorflow-slim, executing from the terminal prints no progress information.
So I modified tensorflow-slim's learning function. The code is in
/home/islab/anaconda3/envs/danieltf/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/learning.py
At line 502, I add one line to print the loss and global step:
print('global step', np_global_step, ': loss = ', total_loss, '(', time_elapsed, 'sec/step)')
Now I can execute Cosine Metric Learning. The training parameters are in train_app.py.
I can train, validate, and test. The train command is
python train_market1501.py \
--dataset_dir=./Market-1501-v15.09.15/ \
--loss_mode=cosine-softmax \
--log_dir=./output/market1501/ \
--run_id=cosine-softmax
I can restrict the visible GPUs with CUDA_VISIBLE_DEVICES, e.g.
CUDA_VISIBLE_DEVICES=0 python train_market1501.py --mode=train --dataset_dir=../dataset/Market-1501-v15.09.15/ --loss_mode=cosine-softmax --log_dir=./output/market1501/ --run_id=cosine-softmax
For the first training run, I train on the Market-1501 dataset for 100000 and 150000 iterations.
After 151000 iterations, the loss is 3.685445 (0.1253 sec/step).
Next I run the CMC evaluation metric on the validation set; the command is
CUDA_VISIBLE_DEVICES=0 python train_market1501.py --mode=eval --dataset_dir=./Market-1501-v15.09.15/ --loss_mode=cosine-softmax --log_dir=./output/market1501/ --run_id=cosine-softmax --eval_log_dir=./eval_output/market1501
The results are written to TensorBoard. To view them, run
tensorboard --logdir ./eval_output/market1501/cosine-softmax --port 6007
and open http://your-server-ip:6007 in a browser.
Finally I need to test my model, which requires exporting results: in other words, I extract the test features and then use the Market-1501 evaluation code to obtain the test CMC and mAP. The commands are
CUDA_VISIBLE_DEVICES=0 python train_market1501.py --mode=export --dataset_dir=../dataset/Market-1501-v15.09.15/ --loss_mode=cosine-softmax --log_dir=./output/market1501/ --run_id=cosine-softmax --sdk_dir=../Market-1501_baseline-v16.01.14/ --restore_path=./output/market1501/cosine-softmax/model.ckpt-100000
CUDA_VISIBLE_DEVICES=1 python train_market1501_DN.py --mode=export --dataset_dir=../dataset/Market-1501-v15.09.15/ --loss_mode=cosine-softmax --log_dir=./densenet-based/output//market1501/ --run_id=cosine-softmax --sdk_dir=../Market-1501_baseline-v16.01.14/ --restore_path=./output/market1501/cosine-softmax/model.ckpt-150000
These commands take the 100000- and 150000-iteration models to extract the test features. Then I use the Market-1501 evaluation code:
just run CML_evaluation.m
The result (50000) is
single query: mAP = 0.560679, r1 precision = 0.783848, r5 precision = 0.9038
The result of the 150000-iteration model is
single query: mAP = 0.565231, r1 precision = 0.785302, r5 precision = 0.908
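For reference, CMC rank-k precision and mAP as reported above can in principle be computed from a query-by-gallery distance matrix. Below is a minimal NumPy sketch of the two metrics (the function name is mine; it ignores the camera-ID/junk filtering that the official Market-1501 Matlab code applies, so it will not match CML_evaluation.m exactly):

```python
import numpy as np

def cmc_map(dist, query_ids, gallery_ids):
    """CMC curve and mAP from a query-by-gallery distance matrix
    (smaller distance = more similar)."""
    num_q = dist.shape[0]
    cmc = np.zeros(dist.shape[1])
    aps = []
    for i in range(num_q):
        order = np.argsort(dist[i])
        matches = (gallery_ids[order] == query_ids[i]).astype(int)
        if matches.sum() == 0:
            continue                       # query identity absent from gallery
        cmc[np.argmax(matches):] += 1      # rank of the first correct match
        precision = np.cumsum(matches) / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    return cmc / num_q, float(np.mean(aps))
```

Here cmc[0] corresponds to "r1 precision" and cmc[4] to "r5 precision" in the logs below.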
After training, if I want to use the model with the deep_sort tracker, I run the following command:
python train_market1501.py --mode=freeze --restore_path=./output/market1501/cosine-softmax/model.ckpt-100000
This creates a market1501.pb file which can be supplied to Deep SORT.
4. Create the DenseNet version
Now I can execute DenseNet training on CIFAR-10, and I know how to execute CML and trace its code. So I replace the Wide Residual Network with DenseNet. In code, I create train_market1501_DN.py to execute the DenseNet-based CML. The training parameters are in train_app_DN.py. The network definition is in network_definition_DN (v1, v2). The DenseNet definition is in dense_net_definition (v1, v2, v3).
Now I need to tune the network to obtain the best accuracy. The test dataset is Market-1501.
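The "feature dims" values recorded in the tuning log follow directly from DenseNet's channel bookkeeping: every dense layer concatenates k new channels, and each transition layer multiplies the channel count by the reduction factor (rounded down). A small sketch (the helper name is mine) that reproduces the dimensionalities in the log, assuming a DenseNet-BC layout with a 2k-channel stem and a transition after every block except the last; for the CIFAR-style "d" configs this means 3 blocks of (d-4)/6 bottleneck layers each:

```python
def densenet_feature_dims(k, block_config, reduction=0.5):
    """Channels after the final dense block, i.e. the feature dimensionality."""
    channels = 2 * k                      # stem convolution outputs 2k channels
    for i, layers in enumerate(block_config):
        channels += layers * k            # each dense layer concatenates k channels
        if i < len(block_config) - 1:     # compressing transition between blocks
            channels = int(channels * reduction)
    return channels
```

For example, densenet_feature_dims(32, (6, 12, 24, 16)) gives 1024 (DenseNet-121) and densenet_feature_dims(12, (6, 6, 6), reduction=1.0) gives 240, matching the d = 40, k = 12 entry below. The per-block-k variants ("k config") deviate slightly from this formula.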
Version Commit
- train_market1501_DN.py, train_app_DN.py --- use DenseNet
- network_definition_DN.py, dense_net_definition.py --- original DenseNet
- network_definition_DN_v2.py, dense_net_definition_v2.py --- slim DenseNet (slim.conv)
- dense_net_definition_v3.py --- DenseNet-121 (each block can have a different depth, and k may differ per block)
- network_definition_DN_v3.py --- remove the fc1 layer
5. Tune the DenseNet version
12/11(8:47 pm)
- reduction = 1.0, batch size = 64, learning rate = 0.001, k = 12, d = 40, dropout = 0.8, no dropout in blocks, feature dims = 240
global step 11000 : loss = 3.7723737 ( 0.353865385055542 sec/step)
global step 20000 : loss = 3.7060163 ( 0.36368846893310547 sec/step)
global step 30000 : loss = 3.5140417 ( 0.3827059268951416 sec/step)
global step 40000 : loss = 3.3794773 ( 0.38283252716064453 sec/step)
global step 50000 : loss = 3.3752463 ( 0.35541296005249023 sec/step)
global step 60000 : loss = 3.3986282 ( 0.37077856063842773 sec/step)
global step 70000 : loss = 3.320786 ( 0.3777048587799072 sec/step)
- single query: mAP = 0.287008, r1 precision = 0.543646 (about iteration 25000)
12/11(8:47 pm)
- reduction = 0.5, feature dims = 132
- r1 precision = 0.48
- batch size = 32, k = 24, d=40 , feature dims = 264
- single query: mAP = 0.32, r1 precision = 0.57(about iteration 50000)
- single query: mAP = 0.345015, r1 precision = 0.589667(about iteration 23000)
global step 10000 : loss = 4.8479285 ( 0.3144700527191162 sec/step)
global step 20000 : loss = 4.2423344 ( 0.3041706085205078 sec/step)
global step 30000 : loss = 4.0414934 ( 0.3028581142425537 sec/step)
global step 40000 : loss = 3.8363497 ( 0.31331491470336914 sec/step)
global step 50000 : loss = 3.7879019 ( 0.3151693344116211 sec/step)
12/12(2:15 pm)
- batch size = 16, k = 12, d = 82, learning rate = 0.001
- block with dropout keep_prob = 0.8
global step 11000 : loss = 5.297598 ( 0.28325891494750977 sec/step)
global step 20000 : loss = 4.9029636 ( 0.2831423282623291 sec/step)
global step 30000 : loss = 4.9695973 ( 0.27232933044433594 sec/step)
global step 40000 : loss = 4.78743 ( 0.27088093757629395 sec/step)
global step 50000 : loss = 4.782488 ( 0.27114105224609375 sec/step)
global step 60000 : loss = 4.694857 ( 0.2800319194793701 sec/step)
global step 70000 : loss = 4.622987 ( 0.27921414375305176 sec/step)
global step 80000 : loss = 4.107431 ( 0.2805747985839844 sec/step)
global step 90000 : loss = 4.2078185 ( 0.2744913101196289 sec/step)
global step 100000 : loss = 4.3540225 ( 0.2738771438598633 sec/step)
global step 110000 : loss = 4.8001804 ( 0.274733304977417 sec/step)
global step 120000 : loss = 4.178132 ( 0.26960134506225586 sec/step)
global step 130000 : loss = 4.262373 ( 0.2716655731201172 sec/step)
global step 140000 : loss = 4.2028217 ( 0.27532124519348145 sec/step)
global step 150000 : loss = 4.3894825 ( 0.28400254249572754 sec/step)
- single query: mAP = 0.364881, r1 precision = 0.615202(37000 iteration)
- single query: mAP = 0.369322, r1 precision = 0.622328(65000 iteration)
- single query: mAP = 0.369418, r1 precision = 0.625891(100000 iteration)
- single query: mAP = 0.368134, r1 precision = 0.620843(150000 iteration)
12/13(1:45 pm)
- k = 20, d = 82
- ResourceExhaustedError
12/13(2:45 pm)
- k = 15, d = 82
- batch size = 16, learning rate = 0.001, reduction = 0.5
- block with no dropout
- feature dims = 348
global step 10000 : loss = 5.352465 ( 0.36489200592041016 sec/step)
global step 20000 : loss = 4.944912 ( 0.35619115829467773 sec/step)
global step 30000 : loss = 4.9134364 ( 0.3620772361755371 sec/step)
global step 40000 : loss = 5.0862594 ( 0.35976481437683105 sec/step)
global step 50000 : loss = 4.927738 ( 0.34451913833618164 sec/step)
global step 60000 : loss = 4.297506 ( 0.3554205894470215 sec/step)
global step 70000 : loss = 4.5166907 ( 0.3399078845977783 sec/step)
global step 80000 : loss = 4.225523 ( 0.3478429317474365 sec/step)
global step 90000 : loss = 4.289637 ( 0.3468508720397949 sec/step)
global step 100000 : loss = 4.283307 ( 0.3408632278442383 sec/step)
- single query: mAP = 0.373570, r1 precision = 0.627672(30000)
- single query: mAP = 0.390774, r1 precision = 0.646081(44000)
- single query: mAP = 0.392914, r1 precision = 0.654691(60000)
- single query: mAP = 0.393880, r1 precision = 0.652910(74000)
- single query: mAP = 0.399286, r1 precision = 0.662411(88945)
- single query: mAP = 0.393924, r1 precision = 0.655879(100000)
12/17(7:55 pm)
- Densenet-121 version
- learning rate = 0.001
- k=32, block config = (6,12,24,16)
- batch_size = 32
- feature dimensionality: 1024
- single query: mAP = 0.452539, r1 precision = 0.680523(21890)
- single query: mAP = 0.462251, r1 precision = 0.692696(22036)
- single query: mAP = 0.445389, r1 precision = 0.673694(25000)
- single query: mAP = 0.424830, r1 precision = 0.647862(30462)
- single query: mAP = 0.429624, r1 precision = 0.654097(44144)
- single query: mAP = 0.339640, r1 precision = 0.555523(66135)
- single query: mAP = 0.329122, r1 precision = 0.545131(88149)
- single query: mAP = 0.299727, r1 precision = 0.512173(110235)
- single query: mAP = 0.286005, r1 precision = 0.505938(132314)
- single query: mAP = 0.264188, r1 precision = 0.475950(150000)
12/18(9:42 am)
- Densenet-121 version
- learning rate = 0.001
- k=32, block config = (6,12,24,16)
- batch_size = 32
- feature dimensionality: 1024
- Add Transition_to_classes layer (BN+ReLU+Average_pooling(4*4))
- single query: mAP = 0.325248, r1 precision = 0.535926(41077)
- single query: mAP = 0.330922, r1 precision = 0.545724(61787)
- single query: mAP = 0.311302, r1 precision = 0.513955(82557)
- single query: mAP = 0.336014, r1 precision = 0.548694(103058)
- single query: mAP = 0.351965, r1 precision = 0.564430(133862)
- single query: mAP = 0.339467, r1 precision = 0.552850(150000)
12/18(10:30 pm)
- Densenet-169 version
- k=32, block config = (6,12,32,32)
- feature dimensionality: 1664
- Remove Transition_to_classes layer
- single query: mAP = 0.379766, r1 precision = 0.607779(17860)
- single query: mAP = 0.392208, r1 precision = 0.617280(19397)
- single query: mAP = 0.449198, r1 precision = 0.688836(29404)
- single query: mAP = 0.420878, r1 precision = 0.660629(34000)
- single query: mAP = 0.432588, r1 precision = 0.663302(35703)
- single query: mAP = 0.439509, r1 precision = 0.669240(38839)
- single query: mAP = 0.426971, r1 precision = 0.658551(40000)
- single query: mAP = 0.369430, r1 precision = 0.604810(53603)
- single query: mAP = 0.324533, r1 precision = 0.562352(71501)
- single query: mAP = 0.299985, r1 precision = 0.539489(89408)
12/21(3:30 pm)
- Densenet-121 version
- learning rate = 0.001
- k=40, block config = (6,12,24,16)
- feature dimensionality: 1280
- batch_size = 32
- model_0
- single query: mAP = 0.438680, r1 precision = 0.669240(17142)
- single query: mAP = 0.463361, r1 precision = 0.690914(26711)
- single query: mAP = 0.494494, r1 precision = 0.727435(30317)
- single query: mAP = 0.443470, r1 precision = 0.668052(32037)
- single query: mAP = 0.446950, r1 precision = 0.684679(37371)
- single query: mAP = 0.451969, r1 precision = 0.684086(43513)
- single query: mAP = 0.399027, r1 precision = 0.634204(56719)
- single query: mAP = 0.368263, r1 precision = 0.596793(69869)
- single query: mAP = 0.339806, r1 precision = 0.573931(83024)
- single query: mAP = 0.317353, r1 precision = 0.556116(96211)
- single query: mAP = 0.268757, r1 precision = 0.485451(150000)
- model_1
- single query: mAP = 0.460921, r1 precision = 0.695962(30060)
- single query: mAP = 0.422552, r1 precision = 0.659442(40071)
- model_2
- single query: mAP = 0.476526, r1 precision = 0.699822(23578)
- single query: mAP = 0.472285, r1 precision = 0.711105(29481)
- single query: mAP = 0.455397, r1 precision = 0.682007(35385)
- model_3
- single query: mAP = 0.451942, r1 precision = 0.676069(23414)
- single query: mAP = 0.443865, r1 precision = 0.675178(29282)
- single query: mAP = 0.439757, r1 precision = 0.659145(35154)
- model_4
- single query: mAP = 0.474801, r1 precision = 0.694774(20096)
- single query: mAP = 0.486341, r1 precision = 0.708432(25139)
- single query: mAP = 0.472935, r1 precision = 0.694181(30162)
- single query: mAP = 0.447077, r1 precision = 0.678444(35191)
- single query: mAP = 0.451874, r1 precision = 0.681413(40234)
- single query: mAP = 0.417573, r1 precision = 0.649941(50000)
12/24(1:45 am)
- Densenet-121 version
- learning rate = 0.001
- k=48, block config = (6,12,24,16)
- feature dimensionality: 1536
- batch_size = 32
- model_0
- single query: mAP = 0.297090, r1 precision = 0.534145(9832)
- single query: mAP = 0.475057, r1 precision = 0.709026(19530)
- single query: mAP = 0.478607, r1 precision = 0.704276(22425)
- single query: mAP = 0.487998, r1 precision = 0.710214(24865)
- single query: mAP = 0.470967, r1 precision = 0.700416(27304)
- single query: mAP = 0.482781, r1 precision = 0.712886(29229)
- single query: mAP = 0.485464, r1 precision = 0.714964(29710)
- single query: mAP = 0.466976, r1 precision = 0.703088(32057)
- single query: mAP = 0.468231, r1 precision = 0.697743(34464)
- single query: mAP = 0.466774, r1 precision = 0.697743(39270)
- single query: mAP = 0.428698, r1 precision = 0.667162(49315)
- single query: mAP = 0.389317, r1 precision = 0.627078(59360)
- single query: mAP = 0.369795, r1 precision = 0.607185(69405)
- single query: mAP = 0.338015, r1 precision = 0.571259(79449)
- single query: mAP = 0.322515, r1 precision = 0.564133(89494)
- single query: mAP = 0.309946, r1 precision = 0.547803(100000)
- model_1
- single query: mAP = 0.429492, r1 precision = 0.657363(19493)
- single query: mAP = 0.463505, r1 precision = 0.696259(24378)
- single query: mAP = 0.451918, r1 precision = 0.682601(29262)
- single query: mAP = 0.457487, r1 precision = 0.682601(34148)
- single query: mAP = 0.435298, r1 precision = 0.676069(39033)
- single query: mAP = 0.414627, r1 precision = 0.647268(43918)
12/26(4:30 pm)
- Densenet-121 version
- learning rate = 0.001
- k config = (10,20,40,20)
- block config = (6,12,24,16)
- feature dimensionality: 877
- batch_size = 32
- single query: mAP = 0.445078, r1 precision = 0.675475(19138)
- single query: mAP = 0.453613, r1 precision = 0.694181(28721)
- single query: mAP = 0.474093, r1 precision = 0.693587(38302)
- single query: mAP = 0.458947, r1 precision = 0.679632(47886)
- single query: mAP = 0.408442, r1 precision = 0.637470(57464)
- single query: mAP = 0.407209, r1 precision = 0.634204(67047)
- single query: mAP = 0.387638, r1 precision = 0.614608(76632)
- single query: mAP = 0.363153, r1 precision = 0.592637(86214)
- single query: mAP = 0.342352, r1 precision = 0.568884(100000)
12/26(4:55 pm)
- Densenet-121 version
- learning rate = 0.001
- k config = (15,30,60,30)
- block config = (6,12,24,16)
- feature dimensionality: 1316
- batch_size = 32
- single query: mAP = 0.375182, r1 precision = 0.610748(12960)
- single query: mAP = 0.440646, r1 precision = 0.664192(19453)
- single query: mAP = 0.462142, r1 precision = 0.692993(22328)
- single query: mAP = 0.436096, r1 precision = 0.667162(24697)
- single query: mAP = 0.483793, r1 precision = 0.716449(25942)
- single query: mAP = 0.455596, r1 precision = 0.683492(27069)
- single query: mAP = 0.450426, r1 precision = 0.685867(29462)
- single query: mAP = 0.452111, r1 precision = 0.688539(31925)
- single query: mAP = 0.479246, r1 precision = 0.710808(32398)
- single query: mAP = 0.443648, r1 precision = 0.675178(34409)
- single query: mAP = 0.454829, r1 precision = 0.683492(38876)
- single query: mAP = 0.437295, r1 precision = 0.672209(45368)
- single query: mAP = 0.416041, r1 precision = 0.648456(55168)
- single query: mAP = 0.416041, r1 precision = 0.648456(65047)
- single query: mAP = 0.360663, r1 precision = 0.587886(74990)
- single query: mAP = 0.333795, r1 precision = 0.563242(84958)
- single query: mAP = 0.309724, r1 precision = 0.541568(100000)
1/2(3:30 am)
- Densenet-121 version
- learning rate = 0.001
- k=40, block config = (8,12,24,16)
- feature dimensionality: 1290
- batch_size = 32
- single query: mAP = 0.238210, r1 precision = 0.458135(9617)
- single query: mAP = 0.441395, r1 precision = 0.672803(19248)
- single query: mAP = 0.454540, r1 precision = 0.679038(28873)
- single query: mAP = 0.427032, r1 precision = 0.654988(38504)
- single query: mAP = 0.377666, r1 precision = 0.603622(57762)
Add Average pooling version
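The "Add global pooling" runs below replace the flattened final feature map with a global average pool, which collapses each channel's spatial map to a single value, so the feature dimensionality equals the channel count of the last block. A minimal NumPy sketch of the operation (NHWC layout assumed; the actual network uses the TensorFlow equivalent):

```python
import numpy as np

def global_avg_pool(x):
    """Global average pooling over the spatial dimensions.
    x: (batch, height, width, channels) -> (batch, channels)."""
    return x.mean(axis=(1, 2))
```

Because the spatial dimensions are averaged away, the pooled feature size is independent of the input image resolution.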
1/9(1:49 pm)
- Densenet-121 version
- learning rate = 0.001
- k=40, block config = (6,12,24,16)
- feature dimensionality: 1280
- batch_size = 32
- Add global pooling
- single query: mAP = 0.239310, r1 precision = 0.408848(20800)
- single query: mAP = 0.330017, r1 precision = 0.538302(31112)
- single query: mAP = 0.389638, r1 precision = 0.603029(41479)
- single query: mAP = 0.401658, r1 precision = 0.606888(50000)
- single query: mAP = 0.433570, r1 precision = 0.643112(60382)
- single query: mAP = 0.432515, r1 precision = 0.655879(70813)
- single query: mAP = 0.454274, r1 precision = 0.676366(81243)
- single query: mAP = 0.447973, r1 precision = 0.662411(91651)
- single query: mAP = 0.439264, r1 precision = 0.657957(100000)
1/11(1:37 am)
- Modify Densenet-121
- learning rate = 0.001
- k=40
- 3 blocks, block config = (10,20,26)
- feature dimensionality: 1560
- batch_size = 32
- Add global pooling
- single query: mAP = 0.239854, r1 precision = 0.427553(24308)
- single query: mAP = 0.239854, r1 precision = 0.427553(29173)
- single query: mAP = 0.255651, r1 precision = 0.465855(38901)
- single query: mAP = 0.269874, r1 precision = 0.463480(48630)
- single query: mAP = 0.269874, r1 precision = 0.463480(58359)
- single query: mAP = 0.372107, r1 precision = 0.579869(63218)
- single query: mAP = 0.418636, r1 precision = 0.630938(68078)
- single query: mAP = 0.437543, r1 precision = 0.657363(72937)
- single query: mAP = 0.365906, r1 precision = 0.573040(77795)
- single query: mAP = 0.477703, r1 precision = 0.692696(82650)
- single query: mAP = 0.420904, r1 precision = 0.627078(87492)
- single query: mAP = 0.425580, r1 precision = 0.646971(92344)
- single query: mAP = 0.389752, r1 precision = 0.622328(97204)
- single query: mAP = 0.411566, r1 precision = 0.622031(100000)
1/17(10:44 pm)
- Densenet-121 version
- learning rate = 0.001
- k=48, block config = (6,12,24,16)
- feature dimensionality: 1536
- batch_size = 32
- Add global pooling
- single query: mAP = 0.268931, r1 precision = 0.447743(20162)
- single query: mAP = 0.274008, r1 precision = 0.475356(33633)
- single query: mAP = 0.173987, r1 precision = 0.337589(40371)
- single query: mAP = 0.305575, r1 precision = 0.508907(53841)
- single query: mAP = 0.327435, r1 precision = 0.539489(60570)
- single query: mAP = 0.364826, r1 precision = 0.571556(67299)
- single query: mAP = 0.333763, r1 precision = 0.541271(74031)
- single query: mAP = 0.219380, r1 precision = 0.406176(80763)
- single query: mAP = 0.271345, r1 precision = 0.463480(87496)
- single query: mAP = 0.354567, r1 precision = 0.568884(94227)
- single query: mAP = 0.380233, r1 precision = 0.595606(100000)
- single query: mAP = 0.342382, r1 precision = 0.551960(150000)
1/22 (3:40 pm)
- Modify Densenet
- learning rate = 0.001
- k=48
- 3 blocks, block config = (6,12,24)
- feature dimensionality: 1536
- batch_size = 32
- Add global pooling
- single query: mAP = 0.191238, r1 precision = 0.359264, r5 precision =0.5965(21445)
- single query: mAP = 0.310796, r1 precision = 0.510095, r5 precision = 0.735154(32219)
- single query: mAP = 0.268735, r1 precision = 0.474466, r5 precision = 0.705463(42994)
- single query: mAP = 0.294825, r1 precision = 0.491093, r5 precision = 0.725653(48371)
- single query: mAP = 0.408920, r1 precision = 0.629157, r5 precision = 0.817696(53732)
- single query: mAP = 0.302049, r1 precision = 0.507720, r5 precision = 0.733076(59118)
- single query: mAP = 0.282036, r1 precision = 0.484561, r5 precision = 0.709620(69890)
- single query: mAP = 0.319393, r1 precision = 0.543943, r5 precision = 0.754157(75277)
- single query: mAP = 0.414480, r1 precision = 0.637470, r5 precision = 0.821556(80663)
- single query: mAP = 0.300261, r1 precision = 0.519596, r5 precision = 0.739311(86049)
- single query: mAP = 0.286150, r1 precision = 0.491686, r5 precision = 0.706057(91435)
- single query: mAP = 0.289644, r1 precision = 0.508314, r5 precision = 0.724762(96821)
- single query: mAP = 0.362736, r1 precision = 0.586995, r5 precision = 0.770487(100000)
1/23(10:48 pm)
- Wide Densenet
- learning rate = 0.001
- k=60
- 4 blocks, block config = (5,5,5,5)
- feature dimensionality: 577
- batch_size = 32
- Add global pooling
- single query: mAP = 0.265623, r1 precision = 0.447743, r5 precision = 0.693884(23047)
- single query: mAP = 0.271160, r1 precision = 0.466449, r5 precision = 0.708432(30737)
- single query: mAP = 0.286915, r1 precision = 0.484264, r5 precision = 0.711401(38426)
- single query: mAP = 0.344637, r1 precision = 0.543052, r5 precision = 0.763658(53806)
- single query: mAP = 0.305618, r1 precision = 0.506829, r5 precision = 0.719418(61498)
- single query: mAP = 0.274808, r1 precision = 0.472684, r5 precision = 0.696259(69188)
- single query: mAP = 0.347003, r1 precision = 0.546318, r5 precision = 0.766330(76880)
- single query: mAP = 0.364196, r1 precision = 0.578088, r5 precision = 0.785629(84570)
- single query: mAP = 0.360712, r1 precision = 0.571259, r5 precision = 0.781770(92259)
- single query: mAP = 0.324616, r1 precision = 0.519596, r5 precision = 0.738124(100000)
1/24(7:57 pm)
- Wide Densenet
- learning rate = 0.001
- k= 78
- 4 blocks, block config = (5,5,5,5)
- feature dimensionality: 750
- batch_size = 32
- Add global pooling
- single query: mAP = 0.257129, r1 precision = 0.448634, r5 precision = 0.683195(27896)
- single query: mAP = 0.257474, r1 precision = 0.459917, r5 precision = 0.677553(39032)
- single query: mAP = 0.247385, r1 precision = 0.435273, r5 precision = 0.670724(44587)
- single query: mAP = 0.350293, r1 precision = 0.561758, r5 precision = 0.768705(50153)
- single query: mAP = 0.347575, r1 precision = 0.559382, r5 precision = 0.773159(55728)
- single query: mAP = 0.358653, r1 precision = 0.566211, r5 precision = 0.782957(61309)
- single query: mAP = 0.326599, r1 precision = 0.530582, r5 precision = 0.764252(66891)
- single query: mAP = 0.328076, r1 precision = 0.544240, r5 precision = 0.764846(72470)
- single query: mAP = 0.356151, r1 precision = 0.562945, r5 precision = 0.773753(78036)
- single query: mAP = 0.282612, r1 precision = 0.486639, r5 precision = 0.717043(83592)
- single query: mAP = 0.346877, r1 precision = 0.553147, r5 precision = 0.772268(89148)
- single query: mAP = 0.340787, r1 precision = 0.539489, r5 precision = 0.764846(94700)
- single query: mAP = 0.323108, r1 precision = 0.534145, r5 precision = 0.763658(100000)
1/27(10:37 pm)
- Wide Densenet
- learning rate = 0.001
- k= 78, 68
- 4 blocks, block config = (10,10,10,10)
- feature dimensionality:
- batch_size = 32
- Add global pooling
- ResourceExhaustedError
1/29(6:07 pm)
- Wide Densenet
- learning rate = 0.001
- k= 60
- 4 blocks, block config = (10,10,10,10)
- feature dimensionality: 1140
- batch_size = 32
- Add global pooling
- single query: mAP = 0.172043, r1 precision = 0.317696, r5 precision = 0.566508(20313)
- single query: mAP = 0.271185, r1 precision = 0.465855, r5 precision = 0.703385(30471)
- single query: mAP = 0.311479, r1 precision = 0.512470, r5 precision = 0.748812(40639)
- single query: mAP = 0.299454, r1 precision = 0.501485, r5 precision = 0.719121(50812)
- single query: mAP = 0.248079, r1 precision = 0.435273, r5 precision = 0.660629(60968)
- single query: mAP = 0.320569, r1 precision = 0.539489, r5 precision = 0.747922(71155)
- single query: mAP = 0.235963, r1 precision = 0.422506, r5 precision = 0.659145(81342)
- single query: mAP = 0.296609, r1 precision = 0.500000, r5 precision = 0.732185(91530)
- single query: mAP = 0.278707, r1 precision = 0.479513, r5 precision = 0.708729(100000)
2/9(10:50 am)
- Densenet-169
- learning rate = 0.001
- k= 40
- 4 blocks, block config =(6,12,32,32)
- feature dimensionality: 2080
- batch_size = 32
- Add global pooling
- single query: mAP = 0.270346, r1 precision = 0.468230, r5 precision = 0.690024(20768)
2/18
- Densenet-121 version
- learning rate = 0.001
- k=40, block config = (6,12,24,16)
- feature dimensionality: 1280
- batch_size = 32
- Add global pooling
- Remove fc1 layer
global step 10000 : loss = 4.7003503 ( 0.2852945327758789 sec/step)
global step 20000 : loss = 3.9590898 ( 0.2880268096923828 sec/step)
global step 30000 : loss = 3.6093855 ( 0.2936420440673828 sec/step)
global step 40000 : loss = 3.5029492 ( 0.28386735916137695 sec/step)
global step 50000 : loss = 3.4247813 ( 0.28176093101501465 sec/step)
global step 60000 : loss = 3.3117826 ( 0.28188657760620117 sec/step)
global step 70000 : loss = 3.2655158 ( 0.287639856338501 sec/step)
global step 80000 : loss = 3.2495742 ( 0.28240299224853516 sec/step)
global step 90000 : loss = 3.2200181 ( 0.2838728427886963 sec/step)
global step 100000 : loss = 3.1970477 ( 0.2843348979949951 sec/step)
- single query: mAP = 0.391886, r1 precision = 0.609857, r5 precision = 0.809976(23584)
- single query: mAP = 0.424933, r1 precision = 0.650831, r5 precision = 0.826010(29419)
- single query: mAP = 0.418481, r1 precision = 0.648753, r5 precision = 0.829869(35213)
- single query: mAP = 0.422913, r1 precision = 0.646378, r5 precision = 0.826306(41112)
- single query: mAP = 0.428296, r1 precision = 0.644893, r5 precision = 0.826900(46903)
- single query: mAP = 0.400665, r1 precision = 0.614905, r5 precision = 0.807898(52796)
- single query: mAP = 0.400665, r1 precision = 0.614905, r5 precision = 0.807898(58734)
- single query: mAP = 0.399284, r1 precision = 0.621437, r5 precision = 0.801960(64656)
- single query: mAP = 0.381015, r1 precision = 0.606888, r5 precision = 0.791865(70561)
- single query: mAP = 0.368656, r1 precision = 0.585214, r5 precision = 0.782957(76498)
- single query: mAP = 0.348841, r1 precision = 0.565321, r5 precision = 0.766330(88333)
- single query: mAP = 0.331412, r1 precision = 0.555226, r5 precision = 0.749703(100000)
2/18
- Densenet-121 version
- learning rate = 0.001
- k=48, block config = (6,12,24,16)
- feature dimensionality: 1536
- batch_size = 32
- Add global pooling
- Remove fc1 layer
global step 1000 : loss = 6.6118402 ( 0.32401275634765625 sec/step)
global step 10000 : loss = 4.7062154 ( 0.32209110260009766 sec/step)
global step 20000 : loss = 3.9574044 ( 0.3464930057525635 sec/step)
global step 30000 : loss = 3.595779 ( 0.3243534564971924 sec/step)
global step 40000 : loss = 3.383516 ( 0.32286667823791504 sec/step)
global step 50000 : loss = 3.3422012 ( 0.33188796043395996 sec/step)
global step 60000 : loss = 3.272427 ( 0.32190537452697754 sec/step)
global step 70000 : loss = 3.2394505 ( 0.32762598991394043 sec/step)
global step 80000 : loss = 3.3407981 ( 0.32529759407043457 sec/step)
global step 81000 : loss = 3.1795926 ( 0.3274385929107666 sec/step)
global step 90000 : loss = 3.1812096 ( 0.3234114646911621 sec/step)
global step 100000 : loss = 3.1581864 ( 0.3249349594116211 sec/step)
- single query: mAP = 0.377861, r1 precision = 0.600059, r5 precision = 0.803444(20639)
- single query: mAP = 0.419196, r1 precision = 0.631235, r5 precision = 0.823337(30801)
- single query: mAP = 0.424603, r1 precision = 0.650534, r5 precision = 0.827791(35915)
- single query: mAP = 0.422666, r1 precision = 0.651425, r5 precision = 0.828385(41054)
- single query: mAP = 0.415763, r1 precision = 0.641627, r5 precision = 0.817993(46225)
- single query: mAP = 0.401678, r1 precision = 0.625000, r5 precision = 0.810570(51403)
- single query: mAP = 0.386215, r1 precision = 0.614014, r5 precision = 0.804632(61766)
- single query: mAP = 0.375464, r1 precision = 0.603029, r5 precision = 0.782660(72150)
- single query: mAP = 0.346876, r1 precision = 0.573337, r5 precision = 0.764549(82534)
- single query: mAP = 0.328865, r1 precision = 0.552257, r5 precision = 0.753860(92918)
- single query: mAP = 0.307513, r1 precision = 0.530879, r5 precision = 0.731888(100000)
2/22
- Densenet-169 version
- learning rate = 0.001
- k=40, block config = (6,12,32,32)
- feature dimensionality: 2080
- batch_size = 32
- Add global pooling
- Remove fc1 layer
global step 1000 : loss = 6.7302175 ( 0.3356292247772217 sec/step)
global step 10000 : loss = 4.2443438 ( 0.319150447845459 sec/step)
global step 20000 : loss = 3.6636615 ( 0.32329392433166504 sec/step)
global step 30000 : loss = 3.4289465 ( 0.32384347915649414 sec/step)
global step 40000 : loss = 3.365447 ( 0.32622575759887695 sec/step)
global step 50000 : loss = 3.2770655 ( 0.3245551586151123 sec/step)
global step 60000 : loss = 3.228005 ( 0.3228476047515869 sec/step)
global step 70000 : loss = 3.207745 ( 0.32468247413635254 sec/step)
global step 80000 : loss = 3.1790516 ( 0.3275458812713623 sec/step)
global step 90000 : loss = 3.1807368 ( 0.33211326599121094 sec/step)
global step 100000 : loss = 3.13163 ( 0.32872867584228516 sec/step)
- single query: mAP = 0.450818, r1 precision = 0.662411, r5 precision = 0.832542(31198)
- single query: mAP = 0.447083, r1 precision = 0.647268, r5 precision = 0.827197(36397)
- single query: mAP = 0.445919, r1 precision = 0.665083, r5 precision = 0.830463(41599)
- single query: mAP = 0.436751, r1 precision = 0.651128, r5 precision = 0.823337(46802)
- single query: mAP = 0.426049, r1 precision = 0.648456, r5 precision = 0.823337(57207)
- single query: mAP = 0.379272, r1 precision = 0.603029, r5 precision = 0.785036(67614)
- single query: mAP = 0.380142, r1 precision = 0.605107, r5 precision = 0.792755(78024)
- single query: mAP = 0.337717, r1 precision = 0.559976, r5 precision = 0.753860(88433)
- single query: mAP = 0.336004, r1 precision = 0.558492, r5 precision = 0.755344(100000)