Binocular Rivalry Impact on Macroblock-Loss Error Concealment for Stereoscopic 3D Video Transmission
Abstract
1. Introduction
2. Background
2.1. 3D Visual Perception
2.2. Binocular Rivalry
2.3. Binocular Rivalry in S3D Transmission
- In either view of a 3D video frame, video coding and compression artifacts (such as blocking, staircase, ringing, mosaic, motion, and blurring) may occur and cause distortions, although modern picture processing and encoding methods have lessened these issues.
- In transmission network channels, queuing, congestion, and serialization effects generate variations in packet transit delay known as jitter. Higher degrees of jitter may cause binocular ambiguity between views because the packets of one view arrive too late for their scheduled display time. In general, higher levels of jitter are more likely in slow or heavily congested channels.
- A packet loss in the network may prevent the decoder from reconstructing accurate left and right pictures, leading to data loss and binocular mismatch. This discrepancy may cause spatial aliasing, color bleeding, motion artifacts, intensity discrepancies, and other problems [21].
- The decoder synchronization required to recover losses, and the packet delay introduced by jitter in either of the two views, can produce binocular rivalry artifacts that impair accurate depth perception.
- Applying existing 2D error-concealment approaches to distorted 3D videos also introduces binocular artifacts and can result in binocular rivalry.
- At the acquisition and display stages of a 3D transmission chain, stereoscopic distortions may cause binocular rivalry. Rapid, motion-induced variations in object depth can produce fast vergence shifts that viewers must track. An observer may also perceive normally indistinct objects as sharp or doubled, while abrupt scene changes alter the observer's impression of scene depth and force a visual re-adaptation.
2.4. Macroblock Loss in S3D Transmission
3. Materials and Methods
3.1. Proposed Video Processing Chain
3.2. Video Coding and Transmission
3.3. Frame Predictive Error Concealment
3.3.1. Bilinear Interpolation for I-frame
- Step 1: Choose n adjacent neighbors around the missing macroblocks (e.g., n lines above and below the lost area).
- Step 2: Identify all accumulator cells whose peaks exceed a specified threshold (tp). (A peak in the Hough accumulator array indicates the presence of a straight line in the picture; the Hough transform's ability to link edges depends on the size of the accumulator cell used to identify the largest peaks.)
- Step 3: Merge adjacent cells with similar Hough parameters into a single cell.
- Step 4: Apply the inverse Hough transform to join the fragmented edges.
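The neighbor-based interpolation of Step 1 can be illustrated with a simple 1D sketch: lost macroblock rows are filled by linearly weighting the last correctly received line above and the first one below the loss. This is an illustrative simplification (n = 1 line per side, no Hough edge linking), not the paper's exact implementation; the function name and array layout are assumptions.

```python
import numpy as np

def conceal_lost_rows(frame, top, bottom):
    """Conceal rows top..bottom-1 of a luma plane by linear interpolation
    between the last good line above and the first good line below the loss
    (a 1D analogue of Step 1, with n = 1 neighbouring line on each side)."""
    out = frame.astype(np.float64)
    above = out[top - 1]            # last correctly received line above the loss
    below = out[bottom]             # first correctly received line below the loss
    span = bottom - top + 1
    for i, r in enumerate(range(top, bottom), start=1):
        w = i / span                # weight grows toward the lower neighbour
        out[r] = (1.0 - w) * above + w * below
    return out.astype(frame.dtype)
```

On a smooth vertical gradient this recovers the lost lines exactly; on real content it blurs texture inside the gap, which is why the subsequent Hough steps are needed to restore edge continuity.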
3.3.2. Boundary Matching for P-Frame
3.3.3. Frame Copy for B-Frame
3.4. Subjective Method and Testing Conditions
3.5. Experimental Datasets and Training Sessions
3.6. Objective Video Quality Assessment
4. Results and Discussions
4.1. Mean Opinion Score and Confidence Interval
4.2. Objective Quality Assessment
4.3. Analysis of Student’s T-Test and Statistical Significance
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Encoding and Decoding Process
Encoder configuration:

InputFile               = "input.yuv"    # Input sequence
FramesToBeEncoded       = 250            # Number of frames to be coded
FrameRate               = 25.0           # Frame rate (1–100)
OutputFile              = "output.264"   # Bitstream file
ProfileIDC              = 100            # Profile IDC
IntraPeriod             = 25             # Period of I-pictures
IDRPeriod               = 0              # Period of IDR pictures
QPISlice                = 28             # Quantization parameter for I slices
QPPSlice                = 28             # Quantization parameter for P slices
SearchRange             = 32             # Maximum search range
NumberReferenceFrames   = 1              # Previous frames used for inter-motion search (0–16)
Log2MaxFNumMinus4       = -1             # log2_max_frame_num_minus4
SymbolMode              = 1              # Entropy coding (0: UVLC/CAVLC, 1: CABAC)
OutFileMode             = 1              # Output file mode (1: RTP)
SliceMode               = 1              # Slice mode (1: fixed number of macroblocks)
SliceArgument           = user-defined   # Slice argument (macroblocks per slice for SliceMode = 1)
num_slice_groups_minus1 = 0              # Number of slice groups minus 1
slice_group_map_type    = 0 to 6         # Slice group map type
ConfigFileName          = "sg0conf.cfg"  # Used for slice_group_map_type 0, 2, 6

Decoder configuration:

InputFile   = "test.264"     # H.264/AVC coded bitstream
OutputFile  = "test_dec.yuv" # Decoded output file, YUV/RGB
RefFile     = "test_rec.yuv" # Reference sequence (for SNR computation)
WriteUV     = 1              # Write 4:2:0 chroma components
ConcealMode = 1              # Error concealment (0: off, 1: frame copy, 2: motion copy)
FileFormat  = 1              # NAL mode (0: Annex B, 1: RTP packets)
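The listings above follow the JM reference software's plain-text `Key = Value # comment` layout. As a convenience, such files can be read with a few lines of Python; this parser is a sketch for illustration, not part of the JM distribution:

```python
def parse_jm_config(text):
    """Parse the 'Key = Value  # comment' layout used by the
    JM-style encoder/decoder configuration listings above."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop trailing comment
        if "=" not in line:
            continue                           # skip blank/heading lines
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip().strip('"')
    return params
```

Values are returned as strings; numeric parameters such as `FrameRate` would need an explicit conversion by the caller.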
References
- Van Kasteren, A.; Brunnström, K.; Hedlund, J.; Snijders, C. Quality of experience of 360 video–subjective and eye-tracking assessment of encoding and freezing distortions. Multimed. Tools Appl. 2022, 81, 9771–9802.
- Taha, M.; Ali, A. Smart algorithm in wireless networks for video streaming based on adaptive quantization. Concurr. Comput. Pract. Exp. 2022, 35, e7633.
- Li, C.; Yun, L.; Xu, S. Blind stereoscopic image quality assessment using 3D saliency selected binocular perception and 3D convolutional neural network. Multimed. Tools Appl. 2022, 81, 18437–18455.
- Wan, W.; Huang, D.; Shang, B.; Wei, S.; Wu, H.R.; Wu, J.; Shi, G. Depth Perception Assessment of 3D Videos Based on Stereoscopic and Spatial Orientation Structural Features. IEEE Trans. Circuits Syst. Video Technol. 2022.
- Hewage, C.T.; Martini, M.G. Quality of experience for 3D video streaming. IEEE Commun. Mag. 2013, 51, 101–107.
- Blake, R. The Perceptual Magic of Binocular Rivalry. Curr. Dir. Psychol. Sci. 2022, 31, 139–146.
- Carreira, J.; Pinto, L.; Rodrigues, N.; Faria, S.; Assuncao, P. Subjective assessment of frame loss concealment methods in 3D video. In Proceedings of the 28th Picture Coding Symposium, Nagoya, Japan, 8–10 December 2010; pp. 182–185.
- Barkowsky, M.; Wang, K.; Cousseau, R.; Brunnström, K.; Olsson, R.; Le Callet, P. Subjective quality assessment of error concealment strategies for 3DTV in the presence of asymmetric transmission errors. In Proceedings of the 2010 18th International Packet Video Workshop, Hong Kong, China, 13–14 December 2010; pp. 193–200.
- Hasan, M.M.; Islam, M.A.; Rahman, S.; Frater, M.R.; Arnold, J.F. No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System. Appl. Sci. 2022, 12, 10090.
- Zhao, Y.; Zhang, Y.; Yu, L. Subjective Study of Binocular Rivalry in Stereoscopic Images with Transmission and Compression Artifacts. In Proceedings of the 2013 20th IEEE International Conference on Image Processing (ICIP), Melbourne, VIC, Australia, 15–18 September 2013; pp. 132–135.
- Hu, J.; Wang, X.; Chai, X.; Shao, F.; Jiang, Q. Deep network based stereoscopic image quality assessment via binocular summing and differencing. J. Vis. Commun. Image Represent. 2022, 82, 103420.
- Feng, Y.; Li, S.; Chang, Y. Multi-scale feature-guided stereoscopic video quality assessment based on 3D convolutional neural network. In Proceedings of the ICASSP 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 2095–2099.
- Zheng, K.; Bai, J.; Zhang, Y.; Yu, J. Stereo Visual Masking Based on Unconscious Binocular Rivalry. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 22–24 October 2021; pp. 426–430.
- Taha, M.; Canovas, A.; Lloret, J.; Ali, A. A QoE adaptive management system for high definition video streaming over wireless networks. Telecommun. Syst. 2021, 77, 63–81.
- Bensalma, R.; Larabi, M.C. A perceptual metric for stereoscopic image quality assessment based on the binocular energy. Multidimens. Syst. Signal Process. 2013, 24, 281–316.
- Howard, I.P.; Rogers, B.J. Binocular Vision and Stereopsis; Oxford University Press: Oxford, UK, 1995.
- Ogle, K.N. Some aspects of stereoscopic depth perception. JOSA 1967, 57, 1073–1081.
- Zhang, P.; Jamison, K.; Engel, S.; He, B.; He, S. Binocular rivalry requires visual attention. Neuron 2011, 71, 362–369.
- Lei, L.; Schor, C.M. The spatial properties of binocular suppression zone. Vision Res. 1994, 34, 937–947.
- Wikipedia. 3D Television. 2016. Available online: https://en.wikipedia.org/wiki/3D_television (accessed on 11 February 2021).
- Boev, A.; Hollosi, D.; Gotchev, A. Classification of Stereoscopic Artefacts. MOBILE3DTV Project No. 216503. 2008. Available online: https://www.researchgate.net/profile/Atanas-Gotchev/publication/265205779_Classification_of_stereoscopic_artefacts/links/5473b6470cf2778985abb9cb/Classification-of-stereoscopic-artefacts.pdf (accessed on 5 August 2022).
- Lambooij, M.; Fortuin, M.; Heynderickx, I.; IJsselsteijn, W. Visual discomfort and visual fatigue of stereoscopic displays: A review. J. Imaging Sci. Technol. 2009, 53, 030201-1–030201-14.
- Xiong, Q.; Chen, K.; Liu, H.; Shi, J.; Chen, Z. A Quantitative Measurement of Binocular Fusion Limit in Stereoscopic Display. In Advances in 3D Image and Graphics Representation, Analysis, Computing and Information Technology; Kountchev, R., Patnaik, S., Shi, J., Favorskaya, M.N., Eds.; Springer: Singapore, 2020; pp. 467–475.
- Kanumuri, S. Packet Loss Visibility and Packet Prioritization in Digital Videos; ProQuest: Ann Arbor, MI, USA, 2006.
- IJsselsteijn, W.A.; de Ridder, H.; Vliegen, J. Subjective evaluation of stereoscopic images: Effects of camera parameters and display duration. IEEE Trans. Circuits Syst. Video Technol. 2000, 10, 225–233.
- Stelmach, L.B.; Tam, W.J.; Meegan, D.V.; Vincent, A.; Corriveau, P. Human perception of mismatched stereoscopic 3D inputs. In Proceedings of the International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000; Volume 1, pp. 5–8.
- Clemens, C.; Kunter, M.; Knorr, S.; Sikora, T. A hybrid approach for error concealment in stereoscopic images. In Proceedings of the 5th International Workshop Image Analysis Multimedia Interactive Services (WIAMIS), Lisboa, Portugal, 21–23 April 2004.
- Wang, K.; Barkowsky, M.; Brunnström, K.; Sjöström, M.; Cousseau, R.; Le Callet, P. Perceived 3D TV transmission quality assessment: Multi-laboratory results using absolute category rating on quality of experience scale. IEEE Trans. Broadcast. 2012, 58, 544–557.
- Hasan, M.M.; Arnold, J.F.; Frater, M.R. Subjective evaluation and statistical analysis for improved frame-loss error concealment of 3D videos. In Proceedings of the Visual Communications and Image Processing Conference, Valletta, Malta, 7–10 December 2014; pp. 181–184.
- Rochowicz, J.A., Jr. P-Value Approximations for T-Tests of Hypothesis. Spreadsheets Educ. (eJSiE) 2012, 5, 5.
- Marpe, D.; Wiegand, T.; Sullivan, G.J. The H.264/MPEG4 advanced video coding standard and its applications. IEEE Commun. Mag. 2006, 44, 134–143.
- Cermak, G.; Thorpe, L.; Pinson, M. Test Plan for Evaluation of Video Quality Models for Use with High Definition TV Content. VQEG 3DTV Group. 2009. Available online: https://vqeg.org/VQEGSharedFiles/MeetingFiles/2012_12_AdvancedDigialSciencesCenter_Singapore/VQEG_3DTV_2012_105_3dtv_testplan_NTT_v0.0.pdf (accessed on 7 May 2022).
- Gharavi, H.; Gao, S. Spatial interpolation algorithm for error concealment. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 1153–1156.
- Hasan, M.M.; Ahn, K.; Chae, O. Measuring blockiness of videos using edge enhancement filtering. In Proceedings of the Signal Processing, Image Processing and Pattern Recognition: International Conference, SIP 2011, Held as Part of the Future Generation Information Technology Conference FGIT 2011, in Conjunction with GDC 2011, Jeju Island, Republic of Korea, 8–10 December 2011; pp. 10–19.
- Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122.
- Hadizadeh, H.; Bajić, I.V. NAL-SIM: An interactive simulator for H.264/AVC video coding and transmission. In Proceedings of the 2010 7th IEEE Consumer Communications and Networking Conference, Las Vegas, NV, USA, 9–12 January 2010.
- Su, C.Y.; Huang, C.H. Temporal error concealment algorithm using multi-side boundary matching principle. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Vancouver, BC, Canada, 27–30 August 2006; pp. 448–453.
- ITU-R Rec. BT.500-12; Methodology for the Subjective Assessment of the Quality of Television Pictures. International Telecommunication Union (ITU): Geneva, Switzerland, 2012.
- VQE Group. Final Report from the Video Quality Experts Group on the Validation of Objective Models of Video Quality Assessment, Phase II (FR_TV2). 2003. Available online: https://cir.nii.ac.jp/crid/1574231875001955456 (accessed on 21 February 2021).
- ITU-R Rec. BT.2021; Subjective Methods for the Assessment of Stereoscopic 3DTV Systems. International Telecommunication Union (ITU): Geneva, Switzerland, 2012. Available online: https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2021-1-201502-W!!PDF-E.pdf (accessed on 13 April 2022).
- Cheng, E.; Burton, P.; Burton, J.; Joseski, A.; Burnett, I. RMIT3DV: Pre-announcement of a creative commons uncompressed HD 3D video database. In Proceedings of the International Workshop on Quality of Multimedia Experience, QoMEx, Melbourne, Australia, 5–7 July 2012; pp. 212–217.
- Goldmann, L.; De Simone, F.; Ebrahimi, T. A comprehensive database and subjective evaluation methodology for quality of experience in stereoscopic video. In Proceedings of the IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics, San Jose, CA, USA, 17–21 January 2010; p. 75260S.
- Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
- Wang, Z.; Simoncelli, E.; Bovik, A. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
- Codecian Co., Ltd. CodecVisa Is a Powerful Real-Time Analyzer for H.265/HEVC, H.264/AVC/MVC, Google VP9/VP8, MPEG2 Video and YUV Video Codecs. 2017. Available online: http://www.codecian.com (accessed on 19 March 2022).
- Ali, Z.; Bhaskar, S.B. Basic statistical tools in research and data analysis. Indian J. Anaesth. 2016, 60, 662.
MOS of Video Quality

| Datasets | Short GOP, 1% PL (Both) | Short GOP, 1% PL (Single) | Short GOP, 3% PL (Both) | Short GOP, 3% PL (Single) | Long GOP, 1% PL (Both) | Long GOP, 1% PL (Single) | Long GOP, 3% PL (Both) | Long GOP, 3% PL (Single) |
|---|---|---|---|---|---|---|---|---|
| 3D_01 | 2.87 | 2.54 | 2.77 | 2.39 | 2.18 | 1.63 | 1.95 | 1.50 |
| 3D_02 | 2.99 | 2.58 | 2.69 | 2.32 | 2.05 | 1.55 | 1.92 | 1.52 |
| 3D_26 | 3.02 | 2.64 | 2.83 | 2.39 | 2.72 | 2.39 | 2.42 | 1.95 |
| 3D_38 | 3.23 | 2.75 | 3.09 | 2.53 | 2.76 | 2.34 | 2.54 | 2.14 |
| 3D_40 | 3.05 | 2.58 | 2.88 | 2.42 | 2.50 | 1.99 | 2.37 | 1.89 |
| 3D_45 | 3.47 | 2.89 | 3.05 | 2.57 | 2.81 | 2.31 | 2.71 | 2.18 |
| 3D_47 | 2.65 | 2.25 | 2.39 | 2.07 | 2.47 | 1.88 | 2.34 | 1.87 |
MOS of Visual Comfort

| Datasets | Short GOP, 1% PL (Both) | Short GOP, 1% PL (Single) | Short GOP, 3% PL (Both) | Short GOP, 3% PL (Single) | Long GOP, 1% PL (Both) | Long GOP, 1% PL (Single) | Long GOP, 3% PL (Both) | Long GOP, 3% PL (Single) |
|---|---|---|---|---|---|---|---|---|
| 3D_01 | 2.91 | 2.55 | 2.61 | 2.24 | 2.49 | 2.00 | 1.94 | 1.53 |
| 3D_02 | 3.07 | 2.62 | 2.87 | 2.45 | 2.25 | 1.96 | 1.91 | 1.60 |
| 3D_26 | 2.93 | 2.56 | 2.77 | 2.37 | 2.64 | 2.28 | 2.44 | 1.96 |
| 3D_38 | 3.20 | 2.65 | 3.06 | 2.58 | 2.70 | 2.22 | 2.48 | 2.03 |
| 3D_40 | 2.79 | 2.36 | 2.64 | 2.21 | 2.66 | 2.21 | 2.59 | 2.20 |
| 3D_45 | 3.32 | 2.80 | 2.94 | 2.51 | 2.79 | 2.33 | 2.69 | 2.07 |
| 3D_47 | 2.59 | 2.20 | 2.33 | 1.84 | 2.37 | 1.86 | 2.25 | 1.79 |
| Type | Significance | p-value | Comment |
|---|---|---|---|
| ExSS | Extreme statistical significance | ≤0.001 | Extremely strong evidence against the null hypothesis |
| VSS | Very strong statistical significance | ≤0.01 | Very strong evidence against the null hypothesis |
| SS | Statistical significance | ≤0.05 | Strong evidence against the null hypothesis |
| NqSS | Not quite statistically significant | ≤0.1 | Marginal evidence against the null hypothesis |
| NSS | Not statistically significant | >0.1 | Fail to reject the null hypothesis |
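The thresholds in the table above amount to a simple mapping from a computed p-value to its significance label; a minimal sketch (the function name is illustrative):

```python
def significance_label(p):
    """Map a t-test p-value to the significance categories of the table:
    ExSS <= 0.001 < VSS <= 0.01 < SS <= 0.05 < NqSS <= 0.1 < NSS."""
    if p <= 0.001:
        return "ExSS"   # extreme statistical significance
    if p <= 0.01:
        return "VSS"    # very strong statistical significance
    if p <= 0.05:
        return "SS"     # statistical significance
    if p <= 0.1:
        return "NqSS"   # not quite statistically significant
    return "NSS"        # not statistically significant
```

For example, a p-value of 0.00854 (Indoor Sofa, 1% packet loss, video quality) falls in the VSS band.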
Comparison of Single and Both Views (VQ = video quality; VC = visual comfort)

| PLR (%) | Video Datasets | t-Value (VQ) | p-Value (VQ) | Stat. Sig. (VQ) | t-Value (VC) | p-Value (VC) | Stat. Sig. (VC) |
|---|---|---|---|---|---|---|---|
| 1% PL and EC | Indoor Sofa | 2.5903 | 0.00854 | VSS | 2.1841 | 0.02022 | SS |
| 1% PL and EC | Bicycle Ride | 3.6643 | 0.00072 | ExSS | 3.4265 | 0.00127 | VSS |
| 1% PL and EC | State Library | 2.8603 | 0.00469 | VSS | 2.6045 | 0.00828 | VSS |
| 1% PL and EC | Princess Bridge | 4.4514 | 0.00011 | ExSS | 3.1701 | 0.00231 | VSS |
| 1% PL and EC | La Trobe Corridor | 4.0153 | 0.00031 | ExSS | 2.3050 | 0.01574 | SS |
| 1% PL and EC | La Trobe Reading | 3.0548 | 0.00351 | VSS | 3.4056 | 0.00133 | VSS |
| 1% PL and EC | La Trobe Exterior | 3.3775 | 0.00142 | VSS | 2.4221 | 0.01228 | SS |
| 3% PL and EC | Indoor Sofa | 3.8847 | 0.00043 | ExSS | 3.7059 | 0.00065 | ExSS |
| 3% PL and EC | Bicycle Ride | 2.6516 | 0.00746 | VSS | 2.5249 | 0.00984 | VSS |
| 3% PL and EC | State Library | 2.6291 | 0.00784 | VSS | 2.7456 | 0.00606 | VSS |
| 3% PL and EC | Princess Bridge | 6.1220 | <0.00001 | ExSS | 3.9772 | 0.00034 | ExSS |
| 3% PL and EC | La Trobe Corridor | 2.8565 | 0.00047 | ExSS | 3.7742 | 0.00056 | ExSS |
| 3% PL and EC | La Trobe Reading | 3.7413 | 0.00060 | ExSS | 3.1259 | 0.00255 | VSS |
| 3% PL and EC | La Trobe Exterior | 1.8838 | 0.03676 | SS | 2.1597 | 0.02125 | SS |
Comparison of Single and Both Views (VQ = video quality; VC = visual comfort)

| PLR (%) | Video Datasets | t-Value (VQ) | p-Value (VQ) | Stat. Sig. (VQ) | t-Value (VC) | p-Value (VC) | Stat. Sig. (VC) |
|---|---|---|---|---|---|---|---|
| 1% PL and EC | Indoor Sofa | 2.6845 | 0.00694 | VSS | 2.3228 | 0.01516 | SS |
| 1% PL and EC | Bicycle Ride | 2.5989 | 0.00828 | VSS | 3.0603 | 0.00297 | VSS |
| 1% PL and EC | State Library | 4.5819 | 0.00008 | ExSS | 4.4152 | 0.00012 | ExSS |
| 1% PL and EC | Princess Bridge | 3.4693 | 0.00115 | VSS | 3.1775 | 0.00227 | VSS |
| 1% PL and EC | La Trobe Corridor | 5.0294 | 0.00003 | ExSS | 2.0932 | 0.02433 | SS |
| 1% PL and EC | La Trobe Reading | 3.1865 | 0.00222 | VSS | 5.6085 | <0.00001 | ExSS |
| 1% PL and EC | La Trobe Exterior | 4.3416 | 0.00014 | ExSS | 3.1509 | 0.00241 | VSS |
| 3% PL and EC | Indoor Sofa | 2.5493 | 0.00934 | VSS | 2.8627 | 0.00466 | VSS |
| 3% PL and EC | Bicycle Ride | 3.3732 | 0.00144 | VSS | 2.9815 | 0.00356 | VSS |
| 3% PL and EC | State Library | 4.5425 | 0.00009 | ExSS | 2.6932 | 0.00681 | VSS |
| 3% PL and EC | Princess Bridge | 2.7983 | 0.00054 | ExSS | 3.5398 | 0.00097 | ExSS |
| 3% PL and EC | La Trobe Corridor | 4.1138 | 0.00025 | ExSS | 4.2103 | 0.00019 | ExSS |
| 3% PL and EC | La Trobe Reading | 3.1755 | 0.00228 | VSS | 3.2616 | 0.00187 | VSS |
| 3% PL and EC | La Trobe Exterior | 3.8512 | 0.00046 | ExSS | 3.5696 | 0.00091 | ExSS |
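The t-values in the tables above come from Student's t-test comparisons of single-view versus both-view ratings (Section 4.3). A minimal sketch of the pooled-variance two-sample t statistic, using only the Python standard library; the per-subject rating data are not reproduced here, so the sample lists are placeholders:

```python
from math import sqrt
from statistics import mean, variance

def students_t(a, b):
    """Pooled-variance two-sample Student's t statistic for comparing two
    groups of MOS ratings (assumes independent samples, equal variances)."""
    na, nb = len(a), len(b)
    # pooled sample variance with na + nb - 2 degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
```

The resulting statistic is then converted to a p-value via the t distribution with na + nb - 2 degrees of freedom (e.g., with `scipy.stats.ttest_ind`) and classified with the significance categories of the preceding table.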
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hasan, M.M.; Hossain, M.A.; Alotaibi, N.; Arnold, J.F.; Azad, A. Binocular Rivalry Impact on Macroblock-Loss Error Concealment for Stereoscopic 3D Video Transmission. Sensors 2023, 23, 3604. https://doi.org/10.3390/s23073604