Occupancy Estimation from Blurred Video: A Multifaceted Approach with Privacy Consideration
Abstract
1. Introduction
1. Developed a motion-based technique for occupancy counting from blurred video frames that is unaffected by blur and can be applied directly to the blurred frames, thus inherently preserving privacy.
2. Developed motion-independent techniques for occupancy counting, including detection-based and density-estimation-based methods.
3. Proposed two deblurring methods to improve the accuracy of downstream detection and density-estimation models in the motion-independent techniques:
   (a) The first method is based on the Lucy–Richardson algorithm, but unlike the original approach, the blur radius is chosen according to the blur extent measured in the image.
   (b) The second method uses a U-Net architecture in an end-to-end fashion, trained on synthetically blurred crowd images.
4. Conducted an analysis of the privacy implications of the occupancy counting system by comparing the original, blurred, and deblurred frames using metrics such as blur extent, structural similarity, and perceptual difference. The results showed that the deblurred frames used in the motion-independent approaches still retained some visual distortion, providing a degree of privacy protection even though privacy was not their primary design goal.
2. Problem Statement
2.1. FROG Building
2.2. Data Collection and Analysis
3. Methodology
3.1. Motion-Based Approach
3.2. Motion-Independent Approach
4. Motion-Based Detection and Counting
4.1. Detection of Moving Objects by Background Subtraction
Experiment
1. Frame extraction: Frames were extracted from the videos at a rate of six frames per second.
2. Background modeling: Gaussian mixture modeling (GMM) was used to model each pixel, representing the background as a mixture of $K$ Gaussians. The probability of observing the current pixel value $X_t$ was determined as $P(X_t) = \sum_{i=1}^{K} \omega_{i,t}\,\eta(X_t;\mu_{i,t},\Sigma_{i,t})$, where $\omega_{i,t}$, $\mu_{i,t}$, and $\Sigma_{i,t}$ are the weight, mean, and covariance of the $i$-th Gaussian at time $t$, and $\eta$ is the Gaussian density.
3. Filtering: Median filtering was applied to remove speckle noise. The median filter replaces each pixel with the median of the pixel values in a local neighborhood $S_{xy}$: $g(x,y) = \mathrm{median}\{f(s,t) : (s,t)\in S_{xy}\}$.
4. Morphological operations: Morphological operations — dilation ($\oplus$), erosion ($\ominus$), opening ($\circ$), and closing ($\bullet$) — were performed on the binary image using a structuring element $B$.
5. Blob detection: Connected-component labeling was used to identify connected components (blobs) in the binary image.
6. Centroid calculation: The centroid of each blob was calculated from image moments: $\bar{x} = M_{10}/M_{00}$, $\bar{y} = M_{01}/M_{00}$.
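The six steps above can be sketched end-to-end. This is a minimal illustration, not the paper's implementation: the per-pixel Gaussian-mixture background test is replaced by a simple frame-difference threshold, and the threshold, structuring-element size, and `min_area` parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(frame, background, thresh=0.2, min_area=20):
    """Steps 2-6: foreground mask -> median filter -> morphology -> blobs -> centroids."""
    # Step 2 (simplified): the paper models each pixel with a K-component
    # Gaussian mixture; a plain difference threshold stands in for that test.
    fg = np.abs(frame - background) > thresh
    # Step 3: median filtering to remove speckle noise.
    fg = ndimage.median_filter(fg, size=3)
    # Step 4: morphological opening then closing with a 3x3 structuring element.
    se = np.ones((3, 3), dtype=bool)
    fg = ndimage.binary_closing(ndimage.binary_opening(fg, structure=se), structure=se)
    # Step 5: connected-component labeling.
    labels, n = ndimage.label(fg)
    idx = list(range(1, n + 1))
    areas = ndimage.sum(fg, labels, idx)
    # Step 6: centroids via image moments (center of mass of each blob).
    centroids = ndimage.center_of_mass(fg, labels, idx)
    return [c for c, a in zip(centroids, areas) if a >= min_area]

# Synthetic frame: empty background plus one bright 10x10 "occupant".
bg = np.zeros((64, 64))
frame = bg.copy()
frame[20:30, 40:50] = 1.0
print(detect_blobs(frame, bg))  # one centroid near (24.5, 44.5)
```

In practice the background model would be updated online (as in the adaptive GMM the paper cites) rather than held fixed as here.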
4.2. Detection of Moving Objects by Optical Flow
Experiment
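For illustration, the sparse variant reduces to the classic single-window Lucas–Kanade least-squares solve: spatial gradients form the design matrix and negated temporal differences the target. This is a toy sketch on synthetic frames, not the experiment's implementation; the whole-image window and the blob parameters are assumptions.

```python
import numpy as np

def lucas_kanade(frame1, frame2):
    """Single-window Lucas-Kanade: solve A d = b in least squares, where the
    rows of A are spatial gradients and b holds negated temporal differences."""
    Iy, Ix = np.gradient(frame1)          # spatial gradients (row axis, col axis)
    It = frame2 - frame1                  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                              # (dx, dy) displacement estimate

# Gaussian blob shifted by 0.3 px in x between the two frames.
yy, xx = np.mgrid[0:32, 0:32]
g = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 3.0 ** 2))
d = lucas_kanade(g(16, 16), g(16.3, 16))
print(d)  # approx [0.3, 0.0]
```

Dense (Farnebäck-style) flow instead fits a motion vector per pixel over local polynomial expansions, but the small-motion least-squares core is the same.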
4.3. Tracking and Counting
Algorithm 1 Object Counting by Line Intersection
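The core test in Algorithm 1 — did a tracked centroid's displacement segment cross the counting line? — can be sketched with a standard segment-intersection predicate. The entry/exit sign convention below is an assumption; in the paper, the tracker supplies the tracks.

```python
def ccw(a, b, c):
    # Positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Proper intersection test between segment p1-p2 and segment q1-q2.
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def count_crossings(tracks, line_a, line_b):
    """Count crossings of the line a-b: ending on the line's positive side is
    counted as an entry, the reverse as an exit (sign convention assumed)."""
    entries = exits = 0
    for track in tracks:  # each track: list of centroid positions over time
        for prev, curr in zip(track, track[1:]):
            if segments_intersect(prev, curr, line_a, line_b):
                if ccw(line_a, line_b, curr) > 0:
                    entries += 1
                else:
                    exits += 1
    return entries, exits

# Vertical counting line x = 5; one track crossing each way.
a, b = (5, 0), (5, 10)
tracks = [[(2, 4), (4, 4), (7, 5)],    # crosses left -> right
          [(8, 6), (6, 6), (3, 6)]]    # crosses right -> left
print(count_crossings(tracks, a, b))
```

The net occupancy change is then `entries - exits`, accumulated over all tracks in the video.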
5. Motion-Independent Detection and Counting
5.1. Why Do We Need Deblurring?
5.2. Iterative Statistical Deblurring
Experiment
Algorithm 2 Iterative Statistical Deblurring assisted by Blur-extent Calculation
5.3. Learning-Based Deblurring
Experiment
5.4. Counting by Detection
5.4.1. Aggregated Channel Features
Experiment
5.4.2. Region-Based Convolutional Neural Network
Experiment
5.5. Counting by Density Map Estimation
Dilated Convolutional Neural Network
Experiment
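The density-map idea behind CSRNet-style counting: ground truth is built by smoothing a unit impulse at each annotated head position with a Gaussian kernel, so the people count is the integral of the map. A minimal sketch with hypothetical annotations and an assumed kernel width:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Place a unit impulse at each annotated head position and blur it with a
    Gaussian kernel; the map then integrates to the number of people."""
    impulses = np.zeros(shape)
    for y, x in points:
        impulses[y, x] = 1.0
    return gaussian_filter(impulses, sigma=sigma, mode="constant")

heads = [(30, 40), (32, 44), (60, 20)]   # hypothetical annotations
dmap = density_map(heads, shape=(96, 96))
print(round(float(dmap.sum()), 3))       # close to 3.0: count = integral of density
```

At inference time, the network regresses such a map from the image and the predicted count is simply the sum over the output.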
6. Results and Discussion
6.1. Performance Comparison
6.2. Deblurring and Privacy
6.2.1. Blur Extent
6.2.2. Structural Similarity (SSIM)
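SSIM compares luminance, contrast, and structure between two images. A simplified single-window variant (the standard formulation averages this over sliding windows) illustrates the metric; the constants follow the usual defaults for images scaled to [0, 1]:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed over the whole image as one window, rather
    than the usual sliding-window mean; inputs assumed scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0, 1)
print(ssim_global(img, img))    # 1.0 for identical images
print(ssim_global(img, noisy))  # < 1.0; lower means more degradation
```

In the privacy analysis, a lower SSIM between the original and the deblurred frame indicates more residual distortion, i.e., stronger privacy protection.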
6.2.3. Perceptual Difference (HaarPSI)
6.2.4. Analysis
- Synthetically blurred dataset
- Naturally blurred dataset
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Types | Sensing Modality | Existing Approaches | Proposed Approaches
---|---|---|---
Motion-dependent | | • Multiple sensors need to be installed for a multi-entrance room. • Readings from multiple sensors need to be coordinated for a total count in a multi-entrance room. • Difficulty in distinguishing between multiple people who are close together or moving as a group. | • A single camera can cover multiple entrances. • A properly placed camera can identify individual people moving in a group.
Motion-independent | Sound, Temp., Humidity, Pressure, CO2 | • Sensors of diverse parameters must be coupled. • Build-up or dispersion rate might be affected by external factors. | • A single camera is used. • Does not depend on changes in environmental characteristics.
 | Wi-Fi, Bluetooth, RFID | • Occupants must have their devices turned on. | • Does not depend on occupants’ inputs.
 | Thermal camera | • Challenging to distinguish between individuals who are close together due to low spatial resolution. | • A visible-light camera has higher resolution than a thermal imaging camera.
 | Camera | • Deals with small and well-separated occupants. • Privacy issue has not been addressed. • Multiple cameras need to be installed for a multi-entrance room. | • Deals with versatile crowd types (Figure 3). • Privacy issue has been addressed. • A single camera is used.
Parameter | Value |
---|---|
Batch size | 8 |
Learning rate | 0.001 |
Epochs | 300 |
Weight decay | 0.001 |
Optimizer | SGD |
Momentum | 0.90 |
Parameter | Value |
---|---|
Maximum depth of tree | 2 |
Number of stages | 3 |
Number of weak classifiers | [32,64,128] |
Maximum number of negative windows to sample | 5000 |
Algorithms | Dataset | MR |
---|---|---|
ACF | Test | 31 |
U-Net+ACF | Test | 28 |
RL+ACF | Test | 26 |
Parameter | Value |
---|---|
Batch size | 8 |
Learning rate | 0.001 |
Epochs | 500 |
Weight decay | 0.0001 |
Optimizer | SGD |
Momentum | 0.90 |
Algorithms | Dataset | AP
---|---|---
Faster R-CNN | Validation | 0.758
 | Test | 0.743
U-Net+Faster R-CNN | Validation | 0.750
 | Test | 0.732
RL+Faster R-CNN | Validation | 0.753
 | Test | 0.736
Parameter | Value |
---|---|
Batch size | 8 |
Learning rate | 0.001 |
Epochs | 400 |
Weight decay | 0.0005 |
Optimizer | SGD |
Momentum | 0.95 |
Algorithms | Dataset | MAE
---|---|---
CSRNet | Train | 3.682
 | Validation | 3.927
 | Test | 4.536
U-Net+CSRNet | Train | 1.654
 | Validation | 1.867
 | Test | 2.014
RL+CSRNet | Train | 1.691
 | Validation | 1.785
 | Test | 1.813
Approach | Algorithms | Counting Error (%)
---|---|---
Motion-based | Background Subtraction + Kalman Filter Tracking + Detection of Line Crossing | 48.14
 | Optical Flow Estimation (Sparse) + Kalman Filter Tracking + Detection of Line Crossing | 46.24
 | Optical Flow Estimation (Dense) + Kalman Filter Tracking + Detection of Line Crossing | 44.73
Motion-independent | ACF | 28.87
 | RL+ACF | 24.54
 | U-Net+ACF | 26.14
 | Faster R-CNN | 22.72
 | RL+Faster R-CNN | 19.95
 | U-Net+Faster R-CNN | 20.21
 | CSRNet | 31.28
 | RL+CSRNet | 16.29
 | U-Net+CSRNet | 18.24
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sourav, M.S.G.; Yavari, E.; Gao, X.; Maskrey, J.; Zheng, Y.; Lubecke, V.M.; Boric-Lubecke, O. Occupancy Estimation from Blurred Video: A Multifaceted Approach with Privacy Consideration. Sensors 2024, 24, 3739. https://doi.org/10.3390/s24123739