Pattern Recognition, Volume 90 (June 2019)
- Timothy Apasiba Abeo, Xiangjun Shen, Bing-Kun Bao, Zheng-Jun Zha, Jianping Fan:
A generalized multi-dictionary least squares framework regularized with multi-graph embeddings. 1-11
- Rizwan Qureshi, Muhammad Uzair, Khurram Khurshid, Hong Yan:
Hyperspectral document image processing: Applications, challenges and future prospects. 12-22
- Vikas Kumar, Arun K. Pujari, Vineet Padmanabhan, Venkateswara Rao Kagita:
Group preserving label embedding for multi-label classification. 23-34
- Tien Thanh Nguyen, Thi Thu Thuy Nguyen, Anh Vu Luong, Quoc Viet Hung Nguyen, Alan Wee-Chung Liew, Bela Stantic:
Multi-label classification via label correlation and first order feature dependance in a data stream. 35-51
- Raja Muhammad Saad Bashir, Muhammad Shahzad, M. M. Fraz:
VR-PROUD: Vehicle Re-identification using PROgressive Unsupervised Deep architecture. 52-65
- Sebastian Zambanini:
Feature-based groupwise registration of historical aerial images to present-day ortho-photo maps. 66-77
- Ruimao Zhang, Wei Yang, Zhanglin Peng, Pengxu Wei, Xiaogang Wang, Liang Lin:
Progressively diffused networks for semantic visual parsing. 78-86
- Xianye Ben, Peng Zhang, Zhihui Lai, Rui Yan, Xinliang Zhai, Weixiao Meng:
A general tensor representation framework for cross-view gait recognition. 87-98
- Yi Li, Lingxiao Song, Xiang Wu, Ran He, Tieniu Tan:
Learning a bi-level adversarial network with global and local perception for makeup-invariant face verification. 99-108
- Canjie Luo, Lianwen Jin, Zenghui Sun:
MORAN: A Multi-Object Rectified Attention Network for scene text recognition. 109-118
- Zifeng Wu, Chunhua Shen, Anton van den Hengel:
Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. 119-133
- Taihao Li, Huai Chen, Min Zhang, Shupeng Liu, Shunren Xia, Xinhua Cao, Geoffrey S. Young, Xiaoyin Xu:
A new design in iterative image deblurring for improved robustness and performance. 134-146
- Xiao-Li Hu, Jiajun Wen, Zhihui Lai, Wai Keung Wong, Linlin Shen:
Binary sparse signal recovery algorithms based on logic observation. 147-160
- Chunlei Peng, Nannan Wang, Jie Li, Xinbo Gao:
DLFace: Deep local descriptor for cross-modality face recognition. 161-171
- Sergio Matiz, Kenneth E. Barner:
Inductive conformal predictor for convolutional neural networks: Applications to active learning for image classification. 172-182
- Yiyong Xiao, Changhao Huang, Jiaoying Huang, Ikou Kaku, Yuchun Xu:
Optimal mathematical programming and variable neighborhood search for k-modes categorical data clustering. 183-195
- Junxuan Zhang, Haifeng Hu:
Domain learning joint with semantic adaptation for human action recognition. 196-209
- Chuan-Xian Ren, Juzheng Li, Pengfei Ge, Xiao-Lin Xu:
Deep metric learning via subtype fuzzy clustering. 210-219
- Renping Yu, Lishan Qiao, Mingming Chen, Seong-Whan Lee, Xuan Fei, Dinggang Shen:
Weighted graph regularized sparse brain network construction for MCI identification. 220-231
- Robert C. O'Brien, Hemant Ishwaran:
A random forests quantile classifier for class imbalanced data. 232-249
- Francesc Serratosa:
Graph edit distance: Restrictions to be a metric. 250-256
- Karl Øyvind Mikalsen, Cristina Soguero-Ruíz, Filippo Maria Bianchi, Robert Jenssen:
Noisy multi-label semi-supervised dimensionality reduction. 257-270
- Thapana Boonchoo, Xiang Ao, Yang Liu, Weizhong Zhao, Fuzhen Zhuang, Qing He:
Grid-based DBSCAN: Indexing and inference. 271-284
- Xinyu Xiao, Lingfeng Wang, Kun Ding, Shiming Xiang, Chunhong Pan:
Dense semantic embedding network for image captioning. 285-296
- Kamel Madi, Eric Paquet, Hamamache Kheddouci:
New graph distance for deformable 3D objects recognition based on triangle-stars decomposition. 297-307
- Ramzi Abiantun, Felix Juefei-Xu, Utsav Prabhu, Marios Savvides:
SSR2: Sparse signal recovery for single-image super-resolution on faces with extreme low resolutions. 308-324
- Zhichao Zheng, Huaijiang Sun:
Jointly discriminative projection and dictionary learning for domain adaptive collaborative representation-based classification. 325-336
- Yuliang Liu, Lianwen Jin, Shuaitao Zhang, Canjie Luo, Sheng Zhang:
Curved scene text detection via transverse and longitudinal sequence connection. 337-345
- MengXin Sun, KunHong Liu, Qingqiang Wu, Qingqi Hong, Beizhan Wang, Haiying Zhang:
A novel ECOC algorithm for multiclass microarray data classification based on data complexity analysis. 346-362
- Zhihong Zhang, Dongdong Chen, Zeli Wang, Heng Li, Lu Bai, Edwin R. Hancock:
Depth-based subgraph convolutional auto-encoder for network representation learning. 363-376
- Du Yong Kim, Ba-Ngu Vo, Ba-Tuong Vo, Moongu Jeon:
A labeled random finite set online multi-object tracker for video data. 377-389
- Hongyun Zhang, Ting Zhang, Witold Pedrycz, Cairong Zhao, Duoqian Miao:
Improved adaptive image retrieval with the use of shadowed sets. 390-403
- Zhiwei Fang, Jing Liu, Yong Li, Yanyuan Qiao, Hanqing Lu:
Improving visual question answering using dropout and enhanced question encoder. 404-414
- Yihong Wu, Haoren Wang, Fulin Tang, Zhiheng Wang:
Efficient conic fitting with an analytical Polar-N-Direction geometric distance. 415-423
- Mohamed Trabelsi, Hichem Frigui:
Robust fuzzy clustering for multiple instance regression. 424-435
- Yabei Li, Zhang Zhang, Yanhua Cheng, Liang Wang, Tieniu Tan:
MAPNet: Multi-modal attentive pooling network for RGB-D indoor scene classification. 436-449
- Yunyun Yang, Wenjing Jia, Yunna Yang:
Multi-atlas segmentation and correction model with level set formulation for 3D brain MR images. 450-463
- He Xiong, Deguang Kong:
Elastic nonnegative matrix factorization. 464-475
- Yongqiang Zhang, Yancheng Bai, Mingli Ding, Yongqiang Li, Bernard Ghanem:
Corrigendum to 'Weakly-supervised Object Detection via Mining Pseudo Ground Truth Bounding-boxes' [Pattern Recognition 84 (2018) 68-81]. 483